Video quality assessment method motivated by human visual perception
NASA Astrophysics Data System (ADS)
He, Meiling; Jiang, Gangyi; Yu, Mei; Song, Yang; Peng, Zongju; Shao, Feng
2016-11-01
Research on video quality assessment (VQA) plays a crucial role in improving the efficiency of video coding and the performance of video processing. It is well acknowledged that the motion energy model simulates the receptive fields of V1 neurons to generate motion energy responses in the middle temporal area, supporting motion perception in the human visual system. Motivated by this biological evidence for visual motion perception, this paper proposes a VQA method comprising a motion perception quality index and a spatial quality index. Specifically, the motion energy model is applied to evaluate the temporal distortion severity of each frequency component generated by a difference-of-Gaussian filter bank, which yields the motion perception quality index, and a gradient similarity measure is used to evaluate the spatial distortion of the video sequence, which yields the spatial quality index. Experimental results on the LIVE, CSIQ, and IVP video databases demonstrate that a random forests regression model trained on the generated quality indices corresponds closely to human visual perception and offers significant improvements over comparable well-performing methods. The proposed method shows higher consistency with subjective perception and better generalization capability.
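The two ingredients named in this abstract can be sketched numerically. The following is an illustrative reconstruction, not the authors' implementation; the function names, sigma values, and stability constant are our own assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def dog_bank(frame, sigmas=(1.0, 2.0, 4.0)):
    """Decompose a frame into frequency components with a
    difference-of-Gaussian (DoG) filter bank."""
    blurred = [gaussian_filter(frame, s) for s in sigmas]
    bands = [frame - blurred[0]]
    bands += [blurred[i] - blurred[i + 1] for i in range(len(blurred) - 1)]
    return bands

def gradient_similarity(ref, dist, c=1e-3):
    """Spatial quality index: mean gradient-magnitude similarity
    between reference and distorted frames (1.0 = identical)."""
    g_ref = np.hypot(sobel(ref, axis=0), sobel(ref, axis=1))
    g_dst = np.hypot(sobel(dist, axis=0), sobel(dist, axis=1))
    return float(np.mean((2 * g_ref * g_dst + c) / (g_ref**2 + g_dst**2 + c)))
```

In a pipeline of this shape, per-band temporal distortion scores and the spatial index would then be fed as features to a random forests regressor (e.g. scikit-learn's RandomForestRegressor) trained against subjective quality scores.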
Individualistic weight perception from motion on a slope
Zintus-art, K.; Shin, D.; Kambara, H.; Yoshimura, N.; Koike, Y.
2016-01-01
Perception of an object’s weight is linked to its form and motion. Studies have shown the relationship between weight perception and motion in horizontal and vertical environments to be universally identical across subjects during passive observation. Here we report a contradictory finding: not all humans share the same motion-weight pairing. A virtual environment in which participants control the steepness of a slope was used to investigate the relationship between sliding motion and weight perception. Our findings showed that distinct, albeit subjective, motion-weight relationships in perception could be identified for slope environments. These individualistic perceptions emerged when changes in the environmental parameters governing motion were introduced, specifically inclination and surface texture. Differences in environmental parameters, combined with individual factors such as experience, affected participants’ weight perception. This phenomenon may offer evidence of the central nervous system’s ability to choose and combine internal models based on information from the sensory system. The results also point toward the possibility of controlling human perception by presenting strong sensory cues to manipulate the mechanisms managing internal models. PMID:27174036
Norman, Joseph; Hock, Howard; Schöner, Gregor
2014-07-01
It has long been thought (e.g., Cavanagh & Mather, 1989) that first-order motion-energy extraction via space-time comparator-type models (e.g., the elaborated Reichardt detector) is sufficient to account for human performance in the short-range motion paradigm (Braddick, 1974), including the perception of reverse-phi motion when the luminance polarity of the visual elements is inverted during successive frames. Human observers' ability to discriminate motion direction and use coherent motion information to segregate a region of a random cinematogram and determine its shape was tested; they performed better in the same-, as compared with the inverted-, polarity condition. Computational analyses of short-range motion perception based on the elaborated Reichardt motion energy detector (van Santen & Sperling, 1985) predict, incorrectly, that symmetrical results will be obtained for the same- and inverted-polarity conditions. In contrast, the counterchange detector (Hock, Schöner, & Gilroy, 2009) predicts an asymmetry quite similar to that of human observers in both motion direction and shape discrimination. The further advantage of counterchange, as compared with motion energy, detection for the perception of spatial shape- and depth-from-motion is discussed.
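The reverse-phi prediction of correlator-type models can be demonstrated with a toy example. Below is a minimal two-point Reichardt correlator on a 1-D space-time array; it is an illustrative sketch of the principle, not the elaborated model of van Santen and Sperling:

```python
import numpy as np

def reichardt_output(stimulus):
    """Summed opponent output of two-point Reichardt correlators on a
    (time x space) array; positive = net rightward motion signal.
    Each subunit multiplies a one-frame-delayed input from one location
    with the current input from its neighbour."""
    s = np.asarray(stimulus, dtype=float)
    left, right = s[:, :-1], s[:, 1:]
    rightward = left[:-1] * right[1:]   # delayed left x current right
    leftward = right[:-1] * left[1:]    # delayed right x current left
    return float(np.sum(rightward - leftward))
```

A bright dot stepping rightward yields a positive (rightward) response, while the same dot with its luminance polarity inverted on every frame yields a negative response: the model "sees" reverse-phi motion in the opposite direction, which is exactly the symmetry the human data in this study violate.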
Contrast and assimilation in motion perception and smooth pursuit eye movements.
Spering, Miriam; Gegenfurtner, Karl R
2007-09-01
The analysis of visual motion serves many different functions ranging from object motion perception to the control of self-motion. The perception of visual motion and the oculomotor tracking of a moving object are known to be closely related and are assumed to be controlled by shared brain areas. We compared perceived velocity and the velocity of smooth pursuit eye movements in human observers in a paradigm that required the segmentation of target object motion from context motion. In each trial, a pursuit target and a visual context were independently perturbed simultaneously to briefly increase or decrease in speed. Observers had to accurately track the target and estimate target speed during the perturbation interval. Here we show that the same motion signals are processed in fundamentally different ways for perception and steady-state smooth pursuit eye movements. For the computation of perceived velocity, motion of the context was subtracted from target motion (motion contrast), whereas pursuit velocity was determined by the motion average (motion assimilation). We conclude that the human motion system uses these computations to optimally accomplish different functions: image segmentation for object motion perception and velocity estimation for the control of smooth pursuit eye movements.
Model Predictive Control Based Motion Drive Algorithm for a Driving Simulator
NASA Astrophysics Data System (ADS)
Rehmatullah, Faizan
In this research, we develop a model predictive control based motion drive algorithm for the driving simulator at the Toronto Rehabilitation Institute. Motion drive algorithms exploit the limitations of the human vestibular system to create a perception of motion within the constrained workspace of a simulator. In the absence of visual cues, the human perception system cannot distinguish between acceleration and the force of gravity. The motion drive algorithm determines control inputs that displace the simulator platform so that the resulting inertial forces and angular rates create the perception of motion. Model predictive control lets us optimize the use of the simulator workspace for every maneuver while reproducing the perceived motion of the simulated vehicle, and its ability to handle nonlinear constraints lets us incorporate workspace limitations directly.
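The receding-horizon idea behind MPC-based motion cueing can be sketched in one dimension. This is a deliberately crude illustration (grid search over a single acceleration held constant across the horizon, a soft workspace penalty), not the algorithm used in the thesis; all parameter values are assumptions:

```python
import numpy as np

def mpc_washout(ref_accel, horizon=10, dt=0.05, pos_limit=0.5,
                candidates=np.linspace(-5.0, 5.0, 41)):
    """Receding-horizon motion cueing in one dimension: at each step,
    pick the platform acceleration (held constant over the horizon,
    a 'single move' simplification) whose rollout best tracks the
    reference specific force while penalizing workspace violations."""
    pos, vel = 0.0, 0.0
    commanded = []
    for k in range(len(ref_accel)):
        best_a, best_cost = 0.0, np.inf
        for a in candidates:
            p, v, cost = pos, vel, 0.0
            for j in range(k, min(k + horizon, len(ref_accel))):
                cost += (a - ref_accel[j]) ** 2                   # cueing error
                v += a * dt
                p += v * dt
                cost += 1e3 * max(0.0, abs(p) - pos_limit) ** 2   # workspace
            if cost < best_cost:
                best_a, best_cost = a, cost
        vel += best_a * dt
        pos += vel * dt
        commanded.append(best_a)
    return np.array(commanded)
```

A practical implementation would solve a constrained quadratic program over the full input sequence and include the vestibular perception model in the cost; the sketch only shows how the horizon trades cueing fidelity against the platform's displacement limits.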
Behavioural evidence for distinct mechanisms related to global and biological motion perception.
Miller, Louisa; Agnew, Hannah C; Pilz, Karin S
2018-01-01
The perception of human motion is a vital ability in our daily lives. Human movement recognition is often studied using point-light stimuli in which dots represent the joints of a moving person. Depending on task and stimulus, the local motion of the single dots and the global form of the stimulus can be used to discriminate point-light stimuli. Previous studies often measured motion coherence for global motion perception and contrasted it with performance in biological motion perception to assess whether difficulties in biological motion processing are related to more general difficulties with motion processing. However, it remains unknown how performance in global motion tasks relates to the ability to use local motion or global form to discriminate point-light stimuli. Here, we investigated this relationship in more detail. In Experiment 1, we measured participants' ability to discriminate the facing direction of point-light stimuli that contained primarily local motion, global form, or both. In Experiment 2, we embedded point-light stimuli in noise to assess whether previously found relationships in task performance are related to the ability to detect signal in noise. In both experiments, we also assessed motion coherence thresholds from random-dot kinematograms. We found relationships between performances for the different biological motion stimuli, but performance for global and biological motion perception was unrelated. These results are in accordance with previous neuroimaging studies that highlighted distinct areas for global and biological motion perception in the dorsal pathway, and indicate that results regarding the relationship between global motion perception and biological motion perception need to be interpreted with caution.
Do rhesus monkeys (Macaca mulatta) perceive illusory motion?
Agrillo, Christian; Gori, Simone; Beran, Michael J
2015-07-01
During the last decade, visual illusions have been used repeatedly to understand similarities and differences in the visual perception of human and non-human animals. However, nearly all studies have focused on illusions not related to motion perception, and to date, it is unknown whether non-human primates perceive any kind of motion illusion. In the present study, we investigated whether rhesus monkeys (Macaca mulatta) perceive one of the most popular motion illusions in humans, the Rotating Snake illusion (RSI). To this end, we set up four experiments. In Experiment 1, subjects were initially trained to discriminate static versus dynamic arrays. Once they reached the learning criterion, they underwent probe trials in which we presented the RSI and a control stimulus identical in overall configuration except that the order of the luminance sequence was changed such that humans perceive no apparent motion. The overall performance of the monkeys indicated that they spontaneously classified the RSI as a dynamic array. Subsequently, we tested adult humans in the same task with the aim of directly comparing the performance of human and non-human primates (Experiment 2). In Experiment 3, we found that monkeys can be successfully trained to discriminate between the RSI and a control stimulus. Experiment 4 showed that a simple change in luminance sequence in the two arrays could not explain the performance reported in Experiment 3. These results suggest that some rhesus monkeys display a human-like perception of this motion illusion, raising the possibility that the neurocognitive systems underlying motion perception are similar between human and non-human primates.
Visual motion integration for perception and pursuit
NASA Technical Reports Server (NTRS)
Stone, L. S.; Beutter, B. R.; Lorenceau, J.
2000-01-01
To examine the relationship between visual motion processing for perception and pursuit, we measured the pursuit eye-movement and perceptual responses to the same complex-motion stimuli. We show that humans can both perceive and pursue the motion of line-figure objects, even when partial occlusion makes the resulting image motion vastly different from the underlying object motion. Our results show that both perception and pursuit can perform largely accurate motion integration, i.e. the selective combination of local motion signals across the visual field to derive global object motion. Furthermore, because we manipulated perceived motion while keeping image motion identical, the observed parallel changes in perception and pursuit show that the motion signals driving steady-state pursuit and perception are linked. These findings disprove current pursuit models whose control strategy is to minimize retinal image motion, and suggest a new framework for the interplay between visual cortex and cerebellum in visuomotor control.
Neck Proprioception Shapes Body Orientation and Perception of Motion
Pettorossi, Vito Enrico; Schieppati, Marco
2014-01-01
This review article deals with some effects of neck muscle proprioception on human balance, gait trajectory, subjective straight-ahead (SSA), and self-motion perception. These effects are easily observed during neck muscle vibration, a strong stimulus for the spindle primary afferent fibers. We first recall the early findings on human balance, gait trajectory, and SSA induced by limb and neck muscle vibration. Then, more recent findings on self-motion perception of vestibular origin are described. The use of a vestibular asymmetric yaw-rotation stimulus to emphasize the proprioceptive modulation of motion perception from the neck is mentioned. In addition, an attempt has been made to conjointly discuss the effects of unilateral neck proprioception on motion perception, SSA, and walking trajectory. Neck vibration also induces persistent aftereffects on the SSA and on self-motion perception of vestibular origin. These perceptual effects depend on the intensity, duration, and side of the conditioning vibratory stimulation, and on muscle status. They can be maintained for hours when prolonged high-frequency vibration is superimposed on muscle contraction. Overall, this brief outline emphasizes the contribution of neck muscle inflow to the construction and fine-tuning of the perception of body orientation and motion. Furthermore, it indicates that tonic neck-proprioceptive input may induce persistent influences on the subject’s mental representation of space. These plastic changes might adapt motion sensitivity to lasting or permanent changes in head position or motor behavior. PMID:25414660
The role of human ventral visual cortex in motion perception
Saygin, Ayse P.; Lorenzi, Lauren J.; Egan, Ryan; Rees, Geraint; Behrmann, Marlene
2013-01-01
Visual motion perception is fundamental to many aspects of visual perception. Visual motion perception has long been associated with the dorsal (parietal) pathway and the involvement of the ventral ‘form’ (temporal) visual pathway has not been considered critical for normal motion perception. Here, we evaluated this view by examining whether circumscribed damage to ventral visual cortex impaired motion perception. The perception of motion in basic, non-form tasks (motion coherence and motion detection) and complex structure-from-motion, for a wide range of motion speeds, all centrally displayed, was assessed in five patients with a circumscribed lesion to either the right or left ventral visual pathway. Patients with a right, but not with a left, ventral visual lesion displayed widespread impairments in central motion perception even for non-form motion, for both slow and for fast speeds, and this held true independent of the integrity of areas MT/V5, V3A or parietal regions. In contrast with the traditional view in which only the dorsal visual stream is critical for motion perception, these novel findings implicate a more distributed circuit in which the integrity of the right ventral visual pathway is also necessary even for the perception of non-form motion. PMID:23983030
Neural representations of kinematic laws of motion: evidence for action-perception coupling.
Dayan, Eran; Casile, Antonino; Levit-Binnun, Nava; Giese, Martin A; Hendler, Talma; Flash, Tamar
2007-12-18
Behavioral and modeling studies have established that curved drawing movements of the human hand obey the 2/3 power law, which dictates a strong coupling between movement curvature and velocity. Human motion perception seems to reflect this constraint. The functional MRI study reported here demonstrates that the brain's response to this law of motion is much stronger and more widespread than to other types of motion. Compliance with this law is reflected in the activation of a large network of brain areas subserving motor production, visual motion processing, and action observation functions. Hence, these results strongly support the notion of similar neural coding for motion perception and production. These findings suggest that cortical motion representations are optimally tuned to the kinematic and geometrical invariants characterizing biological actions.
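The 2/3 power law referenced here relates tangential speed v to path curvature c as v = K · c^(-1/3) (equivalently, angular speed A = K · C^(2/3), where the 2/3 exponent gives the law its name). A minimal numeric sketch, with a discrete curvature estimator for sampled trajectories (function names are ours):

```python
import numpy as np

def two_thirds_speed(curv, K=1.0):
    """Tangential speed prescribed by the 2/3 power law:
    v = K * c**(-1/3) (equivalently, angular speed A = K * C**(2/3))."""
    return K * np.power(curv, -1.0 / 3.0)

def curvature(x, y, dt=1.0):
    """Unsigned curvature of a planar path sampled at interval dt."""
    dx, dy = np.gradient(x, dt), np.gradient(y, dt)
    ddx, ddy = np.gradient(dx, dt), np.gradient(dy, dt)
    return np.abs(dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
```

For example, a point on a path where the curvature is 8 moves at half the speed of a point where the curvature is 1; flatter stretches of a drawn curve are traversed faster, exactly the covariation the fMRI stimuli complied with or violated.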
How long did it last? You would better ask a human
Lacquaniti, Francesco; Carrozzo, Mauro; d’Avella, Andrea; La Scaleia, Barbara; Moscatelli, Alessandro; Zago, Myrka
2014-01-01
In the future, human-like robots will live among people to provide company and help carry out tasks in cooperation with humans. These interactions require that robots understand not only human actions, but also the way in which we perceive the world. Human perception heavily relies on the time dimension, especially when it comes to processing visual motion. Critically, human time perception for dynamic events is often inaccurate. Robots interacting with humans may want to see the world and tell time the way humans do: if so, they must incorporate human-like fallibility. Observers asked to judge the duration of brief scenes are prone to errors: perceived duration often does not match the physical duration of the event. Several kinds of temporal distortions have been described in the specialized literature. Here we review the topic with a special emphasis on our work dealing with time perception of animate actors versus inanimate actors. This work shows the existence of specialized time bases for different categories of targets. The time base used by the human brain to process visual motion appears to be calibrated against the specific predictions regarding the motion of human figures in case of animate motion, while it can be calibrated against the predictions of motion of passive objects in case of inanimate motion. Human perception of time appears to be strictly linked with the mechanisms used to control movements. Thus, neural time can be entrained by external cues in a similar manner for both perceptual judgments of elapsed time and motor control tasks. One possible strategy could be to implement in humanoids a unique architecture for dealing with time, which would apply the same specialized mechanisms to both perception and action, similarly to humans. This shared implementation might render the humanoids more acceptable to humans, thus facilitating reciprocal interactions. PMID:24478694
Disorders of motion and depth.
Nawrot, Mark
2003-08-01
Damage to the human homologue of area MT produces a motion perception deficit similar to that found in the monkey with MT lesions. Even temporary disruption of MT processing with transcranial magnetic stimulation can produce a temporary akinetopsia [127]. Motion perception deficits, however, also are found with a variety of subcortical lesions and other neurologic disorders that can best be described as causing a disconnection within the motion processing stream. The precise role of these subcortical structures, such as the cerebellum, remains to be determined. Simple motion perception, moreover, is only a part of MT function. It undoubtedly has an important role in the perception of depth from motion and stereopsis [112]. Psychophysical studies using aftereffects in normal observers suggest a link between stereo mechanisms and the perception of depth from motion [9-11]. There is even a simple correlation between stereo acuity and the perception of depth from motion [128]. Future studies of patients with cortical lesions will take a closer look at depth perception in association with motion perception and should provide a better understanding of how motion and depth are processed together.
Use of cues in virtual reality depends on visual feedback.
Fulvio, Jacqueline M; Rokers, Bas
2017-11-22
3D motion perception is of central importance to daily life. However, when tested in laboratory settings, sensitivity to 3D motion signals is found to be poor, leading to the view that heuristics and prior assumptions are critical for 3D motion perception. Here we explore an alternative: sensitivity to 3D motion signals is context-dependent and must be learned based on explicit visual feedback in novel environments. The need for action-contingent visual feedback is well-established in the developmental literature. For example, young kittens that are passively moved through an environment, but unable to move through it themselves, fail to develop accurate depth perception. We find that these principles also obtain in adult human perception. Observers that do not experience visual consequences of their actions fail to develop accurate 3D motion perception in a virtual reality environment, even after prolonged exposure. By contrast, observers that experience the consequences of their actions improve performance based on available sensory cues to 3D motion. Specifically, we find that observers learn to exploit the small motion parallax cues provided by head jitter. Our findings advance understanding of human 3D motion processing and form a foundation for future study of perception in virtual and natural 3D environments.
Perception of biological motion from size-invariant body representations.
Lappe, Markus; Wittinghofer, Karin; de Lussanet, Marc H E
2015-01-01
The visual recognition of action is one of the socially most important and computationally demanding capacities of the human visual system. It combines visual shape recognition with complex non-rigid motion perception. Action presented as a point-light animation is a striking visual experience for anyone who sees it for the first time. Information about the shape and posture of the human body is sparse in point-light animations, but it is essential for action recognition. In the posturo-temporal filter model of biological motion perception, posture information is picked up by visual neurons tuned to the form of the human body before body motion is calculated. We tested whether point-light stimuli are processed through posture recognition of the human body form by using a typical feature of form recognition, namely size invariance. We constructed a point-light stimulus that can only be perceived through a size-invariant mechanism. This stimulus changes rapidly in size from one image to the next. It thus disrupts continuity of early visuo-spatial properties but maintains continuity of the body posture representation. Despite this massive manipulation at the visuo-spatial level, size-changing point-light figures are spontaneously recognized by naive observers, and support discrimination of human body motion.
Human body perception and higher-level person perception are dissociated in early development.
Slaughter, Virginia
2011-01-01
Developmental data support the proposal that human body perceptual processing is distinct from other aspects of person perception. Infants are sensitive to human bodily motion and attribute goals to human arm movements before they demonstrate recognition of human body structure. The developmental data suggest the possibility of bidirectional linkages between EBA- and FBA-mediated representations and these higher-level elements of person perception.
Human Perception of Ambiguous Inertial Motion Cues
NASA Technical Reports Server (NTRS)
Zhang, Guan-Lu
2010-01-01
Human daily activities on Earth involve motions that elicit both tilt and translation components of the head (i.e., gazing and locomotion). With otolith cues alone, tilt and translation can be ambiguous, since both motions can potentially displace the otolithic membrane by the same magnitude and direction. Transitions between gravity environments (i.e., Earth, microgravity, and lunar) have been shown to alter the function of the vestibular system and exacerbate the ambiguity between tilt and translational motion cues. Symptoms of motion sickness and spatial disorientation can impair human performance during critical mission phases. Specifically, Space Shuttle landing records show that particular cases of tilt-translation illusions have impaired the performance of seasoned commanders. This sensorimotor condition is one of many operational risks that may have dire implications for future human space exploration missions. The neural strategy with which the human central nervous system distinguishes ambiguous inertial motion cues remains the subject of intense research. A prevailing theory in neuroscience proposes that the human brain formulates a neural internal model of ambiguous motion cues such that tilt and translation components can be perceptually decomposed in order to elicit the appropriate bodily response. The present work uses this theory, known as the GIF resolution hypothesis, as the framework for the experimental hypothesis. Specifically, two novel motion paradigms are employed to validate the neural capacity for ambiguous inertial motion decomposition in ground-based human subjects. The experimental setup involves the Tilt-Translation Sled at the Neuroscience Laboratory of NASA JSC. This two-degree-of-freedom motion system can tilt subjects in the pitch plane and translate them along the fore-aft axis. Perception data are gathered through subject verbal reports.
Preliminary analysis of the perceptual data does not indicate that the GIF resolution hypothesis is completely valid for non-rotational periodic motions. Additionally, human perception of translation is impaired without visual or spatial reference. The performance of ground-based subjects in estimating tilt after brief training is comparable with that of crewmembers without training.
Impaired Perception of Biological Motion in Parkinson’s Disease
Jaywant, Abhishek; Shiffrar, Maggie; Roy, Serge; Cronin-Golomb, Alice
2016-01-01
Objective: We examined biological motion perception in Parkinson’s disease (PD). Biological motion perception is related to one’s own motor function and depends on the integrity of brain areas affected in PD, including posterior superior temporal sulcus. If deficits in biological motion perception exist, they may be specific to perceiving natural/fast walking patterns that individuals with PD can no longer perform, and may correlate with disease-related motor dysfunction.
Method: 26 non-demented individuals with PD and 24 control participants viewed videos of point-light walkers and scrambled versions that served as foils, and indicated whether each video depicted a human walking. Point-light walkers varied by gait type (natural, parkinsonian) and speed (0.5, 1.0, 1.5 m/s). Participants also completed control tasks (object motion, coherent motion perception), a contrast sensitivity assessment, and a walking assessment.
Results: The PD group demonstrated significantly less sensitivity to biological motion than the control group (p<.001, Cohen’s d=1.22), regardless of stimulus gait type or speed, with a less substantial deficit in object motion perception (p=.02, Cohen’s d=.68). There was no group difference in coherent motion perception. Although individuals with PD had slower walking speed and shorter stride length than control participants, gait parameters did not correlate with biological motion perception. Contrast sensitivity and coherent motion perception also did not correlate with biological motion perception.
Conclusion: PD leads to a deficit in perceiving biological motion, which is independent of gait dysfunction and low-level vision changes, and may therefore arise from difficulty perceptually integrating form and motion cues in posterior superior temporal sulcus. PMID:26949927
Motion coherence affects human perception and pursuit similarly.
Beutter, B R; Stone, L S
2000-01-01
Pursuit and perception both require accurate information about the motion of objects. Recovering the motion of objects by integrating the motion of their components is a difficult visual task. Successful integration produces coherent global object motion, while a failure to integrate leaves the incoherent local motions of the components unlinked. We compared the ability of perception and pursuit to perform motion integration by measuring direction judgments and the concomitant eye-movement responses to line-figure parallelograms moving behind stationary rectangular apertures. The apertures were constructed such that only the line segments corresponding to the parallelogram's sides were visible; thus, recovering global motion required the integration of the local segment motion. We investigated several potential motion-integration rules by using stimuli with different object, vector-average, and line-segment terminator-motion directions. We used an oculometric decision rule to directly compare direction discrimination for pursuit and perception. For visible apertures, the percept was a coherent object, and both the pursuit and perceptual performance were close to the object-motion prediction. For invisible apertures, the percept was incoherently moving segments, and both the pursuit and perceptual performance were close to the terminator-motion prediction. Furthermore, both psychometric and oculometric direction thresholds were much higher for invisible apertures than for visible apertures. We constructed a model in which both perception and pursuit are driven by a shared motion-processing stage, with perception having an additional input from an independent static-processing stage. Model simulations were consistent with our perceptual and oculomotor data. Based on these results, we propose the use of pursuit as an objective and continuous measure of perceptual coherence. 
Our results support the view that pursuit and perception share a common motion-integration stage, perhaps within areas MT or MST.
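The oculometric decision rule mentioned in this abstract treats each trial's eye-movement response as a binary direction decision and analyzes it like a psychophysical judgment. A minimal sketch of that idea, using synthetic trial data, a sign-based decision, and a probit fit, none of which come from the paper:

```python
import numpy as np
from statistics import NormalDist

def oculometric_function(stim_dirs, eye_dirs):
    """Proportion of 'rightward' decisions per stimulus level, with each
    trial's decision read off the sign of the eye-derived direction."""
    levels = np.unique(stim_dirs)
    p_right = np.array([(eye_dirs[stim_dirs == s] > 0).mean() for s in levels])
    return levels, p_right

def probit_threshold(levels, p_right):
    """Direction-discrimination threshold (sigma of a cumulative Gaussian),
    estimated by a linear fit in probit space."""
    p = np.clip(p_right, 0.01, 0.99)        # avoid infinite z-scores
    z = np.array([NormalDist().inv_cdf(q) for q in p])
    slope, _ = np.polyfit(levels, z, 1)
    return 1.0 / slope

# Synthetic demo: direction offsets in degrees, eye responses with 4-deg noise.
rng = np.random.default_rng(0)
stim = np.repeat(np.array([-8.0, -4.0, -2.0, 2.0, 4.0, 8.0]), 100)
eye = stim + rng.normal(0.0, 4.0, size=stim.size)
levels, p_right = oculometric_function(stim, eye)
threshold = probit_threshold(levels, p_right)  # recovers roughly the 4-deg noise SD
```

The same machinery applied to button presses yields the psychometric threshold, which is what makes the two directly comparable.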
NASA Technical Reports Server (NTRS)
1974-01-01
The effect of motion on the ability of men to perform a variety of control actions was investigated. Special attention was given to experimental and analytical studies of the dynamic characteristics of the otoliths and semicircular canals, using a two-axis angular motion simulator and a one-axis linear motion simulator.
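The semicircular-canal dynamics this record refers to are commonly approximated by a torsion-pendulum model whose output high-pass filters head velocity. A first-order sketch with an assumed 6-second time constant, purely illustrative and not the study's apparatus or analysis:

```python
import numpy as np

def canal_response(omega, dt, tau=6.0):
    """First-order torsion-pendulum approximation of the semicircular canal:
    the afferent signal tracks changes in head velocity but decays toward
    zero with time constant tau (the full model is second order)."""
    out = np.zeros_like(omega)
    for i in range(1, len(omega)):
        domega = omega[i] - omega[i - 1]
        out[i] = out[i - 1] + domega - dt / tau * out[i - 1]
    return out

dt = 0.01
t = np.arange(0.0, 30.0, dt)
omega = np.where(t >= 1.0, 60.0, 0.0)    # 60 deg/s velocity step at t = 1 s
r = canal_response(omega, dt)
# The step is signaled at onset, then the response fades during constant
# rotation, which is why sustained rotation eventually feels like stillness.
```
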
NASA Technical Reports Server (NTRS)
Beutter, Brent R.; Stone, Leland S.
1997-01-01
Although numerous studies have examined the relationship between smooth-pursuit eye movements and motion perception, it remains unresolved whether a common motion-processing system subserves both perception and pursuit. To address this question, we simultaneously recorded perceptual direction judgments and the concomitant smooth eye-movement response to a plaid stimulus that we have previously shown generates systematic perceptual errors. We measured the perceptual direction biases psychophysically and the smooth eye-movement direction biases using two methods (standard averaging and oculometric analysis). We found that the perceptual and oculomotor biases were nearly identical, suggesting that pursuit and perception share a critical motion processing stage, perhaps in area MT or MST of extrastriate visual cortex.
NASA Technical Reports Server (NTRS)
Beutter, B. R.; Stone, L. S.
1998-01-01
Although numerous studies have examined the relationship between smooth-pursuit eye movements and motion perception, it remains unresolved whether a common motion-processing system subserves both perception and pursuit. To address this question, we simultaneously recorded perceptual direction judgments and the concomitant smooth eye-movement response to a plaid stimulus that we have previously shown generates systematic perceptual errors. We measured the perceptual direction biases psychophysically and the smooth eye-movement direction biases using two methods (standard averaging and oculometric analysis). We found that the perceptual and oculomotor biases were nearly identical, suggesting that pursuit and perception share a critical motion processing stage, perhaps in area MT or MST of extrastriate visual cortex.
Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction
Watanabe, Eiji; Kitaoka, Akiyoshi; Sakamoto, Kiwako; Yasugi, Masaki; Tanaka, Kenta
2018-01-01
The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported for a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by the predictive coding theory is one of the bases of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research. PMID:29599739
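The prediction-error signal at the heart of PredNet-style predictive coding is a split rectified error: positive and negative differences between the input and the prediction are kept in separate channels. A toy numpy sketch of just that error computation, not the full trained network:

```python
import numpy as np

def prediction_error(actual, predicted):
    """Split rectified error units: positive and negative prediction
    errors kept in separate channels, as in predictive-coding networks."""
    pos = np.maximum(actual - predicted, 0.0)
    neg = np.maximum(predicted - actual, 0.0)
    return np.concatenate([pos, neg], axis=0)

frame = np.array([[0.2, 0.8], [0.5, 0.1]])   # toy 2x2 "sensory input"
guess = np.array([[0.3, 0.6], [0.5, 0.4]])   # toy 2x2 "prediction"
err = prediction_error(frame, guess)
print(err.shape)                    # (4, 2): two stacked 2x2 error maps
print(round(float(err.sum()), 6))   # 0.6: total error that would drive learning
```

In the full architecture these error maps, rather than the raw images, are what propagate up the hierarchy to refine the internal model.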
Impaired visual recognition of biological motion in schizophrenia.
Kim, Jejoong; Doop, Mikisha L; Blake, Randolph; Park, Sohee
2005-09-15
Motion perception deficits have been suggested to be an important feature of schizophrenia but the behavioral consequences of such deficits are unknown. Biological motion refers to the movements generated by living beings. The human visual system rapidly and effortlessly detects and extracts socially relevant information from biological motion. A deficit in biological motion perception may have significant consequences for detecting and interpreting social information. Schizophrenia patients and matched healthy controls were tested on two visual tasks: recognition of human activity portrayed in point-light animations (biological motion task) and a perceptual control task involving detection of a grouped figure against the background noise (global-form task). Both tasks required detection of a global form against background noise but only the biological motion task required the extraction of motion-related information. Schizophrenia patients performed as well as the controls in the global-form task, but were significantly impaired on the biological motion task. In addition, deficits in biological motion perception correlated with impaired social functioning as measured by the Zigler social competence scale [Zigler, E., Levine, J. (1981). Premorbid competence in schizophrenia: what is being measured? Journal of Consulting and Clinical Psychology, 49, 96-105.]. The deficit in biological motion processing, which may be related to the previously documented deficit in global motion processing, could contribute to abnormal social functioning in schizophrenia.
Takeuchi, Tatsuto; Yoshimoto, Sanae; Shimada, Yasuhiro; Kochiyama, Takanori; Kondo, Hirohito M
2017-02-19
Recent studies have shown that interindividual variability can be a rich source of information regarding the mechanism of human visual perception. In this study, we examined the mechanisms underlying interindividual variability in the perception of visual motion, one of the fundamental components of visual scene analysis, by measuring neurotransmitter concentrations using magnetic resonance spectroscopy. First, by psychophysically examining two types of motion phenomena, motion assimilation and motion contrast, we found that, following the presentation of the same stimulus, some participants perceived motion assimilation, while others perceived motion contrast. Furthermore, we found that the concentration of the excitatory neurotransmitter glutamate-glutamine (Glx) in the dorsolateral prefrontal cortex (Brodmann area 46) was positively correlated with the participant's tendency to motion assimilation over motion contrast; however, this effect was not observed in the visual areas. The concentration of the inhibitory neurotransmitter γ-aminobutyric acid had only a weak effect compared with that of Glx. We conclude that an excitatory process in the suprasensory area is important for an individual's tendency to determine antagonistically perceived visual motion phenomena. This article is part of the themed issue 'Auditory and visual scene analysis'.
Adaptation aftereffects in the perception of gender from biological motion.
Troje, Nikolaus F; Sadr, Javid; Geyer, Henning; Nakayama, Ken
2006-07-28
Human visual perception is highly adaptive. While this has been known and studied for a long time in domains such as color vision, motion perception, or the processing of spatial frequency, a number of more recent studies have shown that adaptation and adaptation aftereffects also occur in high-level visual domains like shape perception and face recognition. Here, we present data that demonstrate a pronounced aftereffect in response to adaptation to the perceived gender of biological motion point-light walkers. A walker that is perceived to be ambiguous in gender under neutral adaptation appears to be male after adaptation with an exaggerated female walker and female after adaptation with an exaggerated male walker. We discuss this adaptation aftereffect as a tool to characterize and probe the mechanisms underlying biological motion perception.
Rosenblatt, Steven David; Crane, Benjamin Thomas
2015-01-01
A moving visual field can induce the feeling of self-motion or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear if such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing set visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory motion controls were altered versions of the experimental image, which removed the illusory motion effect. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8s viewing interval with the inertial stimulus occurring over the final 1s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, the effect of each visual stimulus on the self-motion stimulus (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p<0.001), and arrows (p = 0.02). 
For the visual motion stimuli, inertial motion perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion perception driven by a minority of subjects. There was no significant effect of illusory motion on self-motion perception for either translation or rotation (p>0.1 for both). Thus, although a true moving visual field can induce self-motion, results of this study show that illusory motion does not.
“What Women Like”: Influence of Motion and Form on Esthetic Body Perception
Cazzato, Valentina; Siega, Serena; Urgesi, Cosimo
2012-01-01
Several studies have shown the distinct contribution of motion and form to the esthetic evaluation of female bodies. Here, we investigated how variations of implied motion and body size interact in the esthetic evaluation of female and male bodies in a sample of young healthy women. Participants provided attractiveness, beauty, and liking ratings for the shape and posture of virtual renderings of human bodies with variable body size and implied motion. The esthetic judgments for both shape and posture of human models were influenced by body size and implied motion, with a preference for thinner and more dynamic stimuli. Implied motion, however, attenuated the impact of extreme body size on the esthetic evaluation of body postures, while body size variations did not affect the preference for more dynamic stimuli. Results show that body form and action cues interact in esthetic perception, but the final esthetic appreciation of human bodies is predicted by a mixture of perceptual and affective evaluative components. PMID:22866044
3D surface perception from motion involves a temporal–parietal network
Beer, Anton L.; Watanabe, Takeo; Ni, Rui; Sasaki, Yuka; Andersen, George J.
2010-01-01
Previous research has suggested that three-dimensional (3D) structure-from-motion (SFM) perception in humans involves several motion-sensitive occipital and parietal brain areas. By contrast, SFM perception in nonhuman primates seems to involve the temporal lobe including areas MT, MST and FST. The present functional magnetic resonance imaging study compared several motion-sensitive regions of interest including the superior temporal sulcus (STS) while human observers viewed horizontally moving dots that defined either a 3D corrugated surface or a 3D random volume. Low-level stimulus features such as dot density and velocity vectors as well as attention were tightly controlled. Consistent with previous research we found that 3D corrugated surfaces elicited stronger responses than random motion in occipital and parietal brain areas including area V3A, the ventral and dorsal intraparietal sulcus, the lateral occipital sulcus and the fusiform gyrus. Additionally, 3D corrugated surfaces elicited stronger activity in area MT and the STS but not in area MST. Brain activity in the STS but not in area MT correlated with interindividual differences in 3D surface perception. Our findings suggest that area MT is involved in the analysis of optic flow patterns such as speed gradients and that the STS in humans plays a greater role in the analysis of 3D SFM than previously thought. PMID:19674088
Video quality assessment using a statistical model of human visual speed perception.
Wang, Zhou; Li, Qiang
2007-12-01
Motion is one of the most important types of information contained in natural video, but direct use of motion information in the design of video quality assessment algorithms has not been deeply investigated. Here we propose to incorporate a recent model of human visual speed perception [Nat. Neurosci. 9, 578 (2006)] and model visual perception in an information communication framework. This allows us to estimate both the motion information content and the perceptual uncertainty in video signals. Improved video quality assessment algorithms are obtained by incorporating the model as spatiotemporal weighting factors, where the weight increases with the information content and decreases with the perceptual uncertainty. Consistent improvement over existing video quality assessment algorithms is observed in our validation with the video quality experts group Phase I test data set.
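The weighting idea described in this abstract, weights that increase with motion information content and decrease with perceptual uncertainty, can be sketched as a simple pooled score. This is a simplified reading with toy inputs, not the published algorithm:

```python
import numpy as np

def weighted_pool(local_quality, info_content, uncertainty, eps=1e-8):
    """Pool a local quality map with spatiotemporal weights that grow with
    motion information content and shrink with perceptual uncertainty."""
    w = info_content / (uncertainty + eps)
    return float((w * local_quality).sum() / (w.sum() + eps))

rng = np.random.default_rng(1)
q = rng.uniform(0.5, 1.0, size=(16, 16))      # per-block quality scores (toy)
info = rng.uniform(0.0, 2.0, size=(16, 16))   # motion information content (toy)
unc = rng.uniform(0.5, 1.5, size=(16, 16))    # perceptual uncertainty (toy)
score = weighted_pool(q, info, unc)
print(0.5 <= score <= 1.0)  # True: pooled score stays within the local range
```

The effect is that regions carrying reliable motion information dominate the final quality score, while perceptually uncertain regions are discounted.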
Phase-linking and the perceived motion during off-vertical axis rotation.
Holly, Jan E; Wood, Scott J; McCollum, Gin
2010-01-01
Human off-vertical axis rotation (OVAR) in the dark typically produces perceived motion about a cone, the amplitude of which changes as a function of frequency. This perception is commonly attributed to the fact that both the OVAR and the conical motion have a gravity vector that rotates about the subject. Little-known, however, is that this rotating-gravity explanation for perceived conical motion is inconsistent with basic observations about self-motion perception: (a) that the perceived vertical moves toward alignment with the gravito-inertial acceleration (GIA) and (b) that perceived translation arises from perceived linear acceleration, as derived from the portion of the GIA not associated with gravity. Mathematically proved in this article is the fact that during OVAR these properties imply mismatched phase of perceived tilt and translation, in contrast to the common perception of matched phases which correspond to conical motion with pivot at the bottom. This result demonstrates that an additional perceptual rule is required to explain perception in OVAR. This study investigates, both analytically and computationally, the phase relationship between tilt and translation at different stimulus rates, slow (45°/s) and fast (180°/s), and the three-dimensional shape of predicted perceived motion, under different sets of hypotheses about self-motion perception. We propose that for human motion perception, there is a phase-linking of tilt and translation movements to construct a perception of one's overall motion path. Alternative hypotheses to achieve the phase match were tested with three-dimensional computational models, comparing the output with published experimental reports. The best fit with experimental data was the hypothesis that the phase of perceived translation was linked to perceived tilt, while the perceived tilt was determined by the GIA.
This hypothesis successfully predicted the bottom-pivot cone commonly reported and a reduced sense of tilt during fast OVAR. Similar considerations apply to the hilltop illusion often reported during horizontal linear oscillation. Known response properties of central neurons are consistent with this ability to phase-link translation with tilt. In addition, the competing "standard" model was mathematically proved to be unable to predict the bottom-pivot cone regardless of the values used for parameters in the model.
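The decomposition this abstract relies on, the GIA minus the gravity estimate yielding perceived translation, can be sketched for constant-velocity OVAR. The tilt angle, rotation rate, and the crude cycle-mean gravity estimate below are illustrative assumptions, not the article's model:

```python
import numpy as np

G = 9.81  # m/s^2

def ovar_gia(tilt_deg, rate_dps, t):
    """Head-fixed gravito-inertial acceleration during constant-velocity OVAR.
    With no centripetal term modeled, the GIA is gravity rotating in head
    coordinates at the stimulus rate."""
    a = np.radians(tilt_deg)
    w = np.radians(rate_dps)
    return G * np.stack([np.sin(a) * np.cos(w * t),
                         np.sin(a) * np.sin(w * t),
                         np.full_like(t, np.cos(a))])

def residual_acceleration(gia, vertical_estimate):
    """GIA minus the current gravity estimate; in the models discussed,
    this residual is what yields perceived translation."""
    return gia - vertical_estimate

t = np.linspace(0.0, 8.0, 1000)              # one full cycle at 45 deg/s
gia = ovar_gia(20.0, 45.0, t)
resid = residual_acceleration(gia, gia.mean(axis=1, keepdims=True))
# The horizontal residual oscillates with amplitude G*sin(tilt), i.e. the
# translation cue rotates around the head once per stimulus cycle.
```

The phase question the article addresses is precisely how this rotating translation cue is aligned with the concurrently perceived tilt.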
MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data.
Jang, Sujin; Elmqvist, Niklas; Ramani, Karthik
2016-01-01
Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge.
Neural Integration of Information Specifying Human Structure from Form, Motion, and Depth
Jackson, Stuart; Blake, Randolph
2010-01-01
Recent computational models of biological motion perception operate on ambiguous two-dimensional representations of the body (e.g., snapshots, posture templates) and contain no explicit means for disambiguating the three-dimensional orientation of a perceived human figure. Are there neural mechanisms in the visual system that represent a moving human figure’s orientation in three dimensions? To isolate and characterize the neural mechanisms mediating perception of biological motion, we used an adaptation paradigm together with bistable point-light (PL) animations whose perceived direction of heading fluctuates over time. After exposure to a PL walker with a particular stereoscopically defined heading direction, observers experienced a consistent aftereffect: a bistable PL walker, which could be perceived in the adapted orientation or reversed in depth, was perceived predominantly reversed in depth. A phase-scrambled adaptor produced no aftereffect, yet when adapting and test walkers differed in size or appeared on opposite sides of fixation aftereffects did occur. Thus, this heading direction aftereffect cannot be explained by local, disparity-specific motion adaptation, and the properties of scale and position invariance imply higher-level origins of neural adaptation. Nor is disparity essential for producing adaptation: when suspended on top of a stereoscopically defined, rotating globe, a context-disambiguated “globetrotter” was sufficient to bias the bistable walker’s direction, as were full-body adaptors. In sum, these results imply that the neural signals supporting biomotion perception integrate information on the form, motion, and three-dimensional depth orientation of the moving human figure. Models of biomotion perception should incorporate mechanisms to disambiguate depth ambiguities in two-dimensional body representations. PMID:20089892
On the Visual Input Driving Human Smooth-Pursuit Eye Movements
NASA Technical Reports Server (NTRS)
Stone, Leland S.; Beutter, Brent R.; Lorenceau, Jean
1996-01-01
Current computational models of smooth-pursuit eye movements assume that the primary visual input is local retinal-image motion (often referred to as retinal slip). However, we show that humans can pursue object motion with considerable accuracy, even in the presence of conflicting local image motion. This finding indicates that the visual cortical area(s) controlling pursuit must be able to perform a spatio-temporal integration of local image motion into a signal related to object motion. We also provide evidence that the object-motion signal that drives pursuit is related to the signal that supports perception. We conclude that current models of pursuit should be modified to include a visual input that encodes perceived object motion and not merely retinal image motion. Finally, our findings suggest that the measurement of eye movements can be used to monitor visual perception, with particular value in applied settings as this non-intrusive approach would not require interrupting ongoing work or training.
Efficiencies for parts and wholes in biological-motion perception.
Bromfield, W Drew; Gold, Jason M
2017-10-01
People can reliably infer the actions, intentions, and mental states of fellow humans from body movements (Blake & Shiffrar, 2007). Previous research on such biological-motion perception has suggested that the movements of the feet may play a particularly important role in making certain judgments about locomotion (Chang & Troje, 2009; Troje & Westhoff, 2006). One account of this effect is that the human visual system may have evolved specialized processes that are efficient for extracting information carried by the feet (Troje & Westhoff, 2006). Alternatively, the motion of the feet may simply be more discriminable than that of other parts of the body. To dissociate these two possibilities, we measured people's ability to discriminate the walking direction of stimuli in which individual body parts (feet, hands) were removed or shown in isolation. We then compared human performance to that of a statistically optimal observer (Gold, Tadin, Cook, & Blake, 2008), giving us a measure of humans' discriminative ability independent of the information available (a quantity known as efficiency). We found that efficiency was highest when the hands and the feet were shown in isolation. A series of follow-up experiments suggested that observers were relying on a form-based cue with the isolated hands (specifically, the orientation of their path through space) and a motion-based cue with the isolated feet to achieve such high efficiencies. We relate our findings to previous proposals of a distinction between form-based and motion-based mechanisms in biological-motion perception.
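The efficiency measure used in this line of work is the squared ratio of human to ideal-observer sensitivity at the same stimulus level. A sketch with made-up percent-correct values, assuming the standard 2AFC d' convention:

```python
from statistics import NormalDist

def dprime(percent_correct):
    """d' for an unbiased two-alternative task from proportion correct."""
    return 2 ** 0.5 * NormalDist().inv_cdf(percent_correct)

def efficiency(pc_human, pc_ideal):
    """Efficiency: squared ratio of human to ideal-observer sensitivity."""
    return (dprime(pc_human) / dprime(pc_ideal)) ** 2

# Toy numbers: human 69% vs. ideal observer 92% correct on the same stimuli.
print(round(efficiency(0.69, 0.92), 2))  # 0.12
```

An efficiency near 1 means observers use nearly all the available information, so condition-to-condition differences in efficiency (e.g., isolated feet vs. whole body) index processing, not stimulus information.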
ERIC Educational Resources Information Center
Wollner, Clemens; Deconinck, Frederik J. A.; Parkinson, Jim; Hove, Michael J.; Keller, Peter E.
2012-01-01
Aesthetic theories have long suggested perceptual advantages for prototypical exemplars of a given class of objects or events. Empirical evidence confirmed that morphed (quantitatively averaged) human faces, musical interpretations, and human voices are preferred over most individual ones. In this study, biological human motion was morphed and…
Sociability modifies dogs' sensitivity to biological motion of different social relevance.
Ishikawa, Yuko; Mills, Daniel; Willmott, Alexander; Mullineaux, David; Guo, Kun
2018-03-01
Preferential attention to living creatures is believed to be an intrinsic capacity of the visual system of several species; biological motion perception is often studied in this context and, in humans, correlates with social cognitive performance. Although domestic dogs are exceptionally attentive to human social cues, it is unknown whether their sociability is associated with sensitivity to conspecific and heterospecific biological motion cues of different social relevance. We recorded video clips of point-light displays depicting a human or dog walking in either frontal or lateral view. In a preferential looking paradigm, dogs spontaneously viewed 16 paired point-light displays showing combinations of normal/inverted (control condition), human/dog and frontal/lateral views. Overall, dogs looked significantly longer at the frontal human point-light display versus the inverted control, probably due to its clearer social/biological relevance. Dogs' sociability, assessed through owner-completed questionnaires, further revealed that low-sociability dogs preferred the lateral point-light display view, whereas high-sociability dogs preferred the frontal view. Clearly, dogs can recognize biological motion, but their preference is influenced by their sociability and the stimulus salience, implying biological motion perception may reflect aspects of dogs' social cognition.
Discriminating Rigid from Nonrigid Motion
1989-07-31
motion can be given a three-dimensional interpretation using a constraint of rigidity. Kruppa's result and others (Faugeras & Maybank, 1989; Huang...Experimental Psychology: Human Perception and Performance, 10, 1-11. Faugeras, O., & Maybank, S. (1989). Motion from point matches: multiplicity of
Osaka, Naoyuki; Matsuyoshi, Daisuke; Ikeda, Takashi; Osaka, Mariko
2010-03-10
The recent development of cognitive neuroscience has invited inference about the neurosensory events underlying the experience of visual arts involving implied motion. We report a functional magnetic resonance imaging study demonstrating activation of the human extrastriate motion-sensitive cortex by static images showing implied motion because of instability. We used static line-drawing cartoons of humans by Hokusai Katsushika (called 'Hokusai Manga'), an outstanding Japanese cartoonist as well as famous Ukiyoe artist. We found that 'Hokusai Manga' images implying motion by depicting human bodies engaged in challenging tonic postures significantly activated the motion-sensitive visual cortex, including MT+, in the human extrastriate cortex, while an illustration that does not imply motion, for either humans or objects, did not activate these areas under the same tasks. We conclude that the motion-sensitive extrastriate cortex is a critical region for the perception of implied motion from instability.
Posture-based processing in visual short-term memory for actions.
Vicary, Staci A; Stevens, Catherine J
2014-01-01
Visual perception of human action involves both form and motion processing, which may rely on partially dissociable neural networks. If form and motion are dissociable during visual perception, then they may also be dissociable during their retention in visual short-term memory (VSTM). To elicit form-plus-motion and form-only processing of dance-like actions, individual action frames can be presented in the correct or incorrect order. The former appears coherent and should elicit action perception, engaging both form and motion pathways, whereas the latter appears incoherent and should elicit posture perception, engaging form pathways alone. It was hypothesized that, if form and motion are dissociable in VSTM, then recognition of static body posture should be better after viewing incoherent than after viewing coherent actions. However, as VSTM is capacity limited, posture-based encoding of actions may be ineffective with increased number of items or frames. Using a behavioural change detection task, recognition of a single test posture was significantly more likely after studying incoherent than after studying coherent stimuli. However, this effect only occurred for spans of two (but not three) items and for stimuli with five (but not nine) frames. As in perception, posture and motion are dissociable in VSTM.
Default perception of high-speed motion
Wexler, Mark; Glennerster, Andrew; Cavanagh, Patrick; Ito, Hiroyuki; Seno, Takeharu
2013-01-01
When human observers are exposed to even slight motion signals followed by brief visual transients—stimuli containing no detectable coherent motion signals—they perceive large and salient illusory jumps. This visually striking effect, which we call “high phi,” challenges well-entrenched assumptions about the perception of motion, namely the minimal-motion principle and the breakdown of coherent motion perception with steps above an upper limit called dmax. Our experiments with transients, such as texture randomization or contrast reversal, show that the magnitude of the jump depends on spatial frequency and transient duration—but not on the speed of the inducing motion signals—and the direction of the jump depends on the duration of the inducer. Jump magnitude is robust across jump directions and different types of transient. In addition, when a texture is actually displaced by a large step beyond the upper step size limit of dmax, a breakdown of coherent motion perception is expected; however, in the presence of an inducer, observers again perceive coherent displacements at or just above dmax. In summary, across a large variety of stimuli, we find that when incoherent motion noise is preceded by a small bias, instead of perceiving little or no motion—as suggested by the minimal-motion principle—observers perceive jumps whose amplitude closely follows their own dmax limits. PMID:23572578
Phase-linking and the perceived motion during off-vertical axis rotation
Wood, Scott J.; McCollum, Gin
2010-01-01
Human off-vertical axis rotation (OVAR) in the dark typically produces perceived motion about a cone, the amplitude of which changes as a function of frequency. This perception is commonly attributed to the fact that both the OVAR and the conical motion have a gravity vector that rotates about the subject. Little-known, however, is that this rotating-gravity explanation for perceived conical motion is inconsistent with basic observations about self-motion perception: (a) that the perceived vertical moves toward alignment with the gravito-inertial acceleration (GIA) and (b) that perceived translation arises from perceived linear acceleration, as derived from the portion of the GIA not associated with gravity. Mathematically proved in this article is the fact that during OVAR these properties imply mismatched phase of perceived tilt and translation, in contrast to the common perception of matched phases which correspond to conical motion with pivot at the bottom. This result demonstrates that an additional perceptual rule is required to explain perception in OVAR. This study investigates, both analytically and computationally, the phase relationship between tilt and translation at different stimulus rates—slow (45°/s) and fast (180°/s), and the three-dimensional shape of predicted perceived motion, under different sets of hypotheses about self-motion perception. We propose that for human motion perception, there is a phase-linking of tilt and translation movements to construct a perception of one’s overall motion path. Alternative hypotheses to achieve the phase match were tested with three-dimensional computational models, comparing the output with published experimental reports. The best fit with experimental data was the hypothesis that the phase of perceived translation was linked to perceived tilt, while the perceived tilt was determined by the GIA. 
This hypothesis successfully predicted the bottom-pivot cone commonly reported and a reduced sense of tilt during fast OVAR. Similar considerations apply to the hilltop illusion often reported during horizontal linear oscillation. Known response properties of central neurons are consistent with this ability to phase-link translation with tilt. In addition, the competing “standard” model was mathematically proved to be unable to predict the bottom-pivot cone regardless of the values used for parameters in the model. PMID:19937069
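The two self-motion rules invoked above (perceived vertical drawn toward the GIA; perceived translation derived from the portion of the GIA not associated with gravity) amount to a vector decomposition. A minimal sketch, assuming a simple projection model; the function name and example vectors are our illustration, not the paper's computational model:

```python
import numpy as np

def perceived_components(gia, perceived_gravity):
    """Split the gravito-inertial acceleration (GIA) into the portion
    aligned with perceived gravity (interpreted as tilt) and the
    residual (interpreted as linear acceleration, hence translation)."""
    gia = np.asarray(gia, dtype=float)
    g_hat = np.asarray(perceived_gravity, dtype=float)
    # Projection of the GIA onto the perceived gravity direction.
    tilt_part = (gia @ g_hat) / (g_hat @ g_hat) * g_hat
    translation_part = gia - tilt_part  # remainder is attributed to translation
    return tilt_part, translation_part

# Example: 1 m/s^2 of lateral specific force with gravity perceived as vertical.
tilt, trans = perceived_components([1.0, 0.0, 9.81], [0.0, 0.0, 9.81])
```

Phase-linking, in these terms, is the additional constraint the authors propose on how the two components are combined over time.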
The Responsiveness of Biological Motion Processing Areas to Selective Attention Towards Goals
Herrington, John; Nymberg, Charlotte; Faja, Susan; Price, Elinora; Schultz, Robert
2012-01-01
A growing literature indicates that visual cortex areas viewed as primarily responsive to exogenous stimuli are susceptible to top-down modulation by selective attention. The present study examines whether brain areas involved in biological motion perception are among these areas, particularly with respect to selective attention towards human movement goals. Fifteen participants completed a point-light biological motion study following a two-by-two factorial design, with one factor representing an exogenous manipulation of human movement goals (goal-directed versus random movement), and the other an endogenous manipulation (a goal identification task versus an ancillary color-change task). Both manipulations yielded increased activation in the human homologue of motion-sensitive area MT+ (hMT+) as well as the extrastriate body area (EBA). The endogenous manipulation was associated with increased right posterior superior temporal sulcus (STS) activation, whereas the exogenous manipulation was associated with increased activation in left posterior STS. Selective attention towards goals activated a portion of left hMT+/EBA only during the perception of purposeful movement, consistent with emerging theories associating this area with the matching of visual motion input to known goal-directed actions. The overall pattern of results indicates that attention towards the goals of human movement activates biological motion areas. Ultimately, selective attention may explain why some studies examining biological motion show activation in hMT+ and EBA, even when using control stimuli with comparable motion properties. PMID:22796987
Spering, Miriam; Carrasco, Marisa
2012-01-01
Feature-based attention enhances visual processing and improves perception, even for visual features that we are not aware of. Does feature-based attention also modulate motor behavior in response to visual information that does or does not reach awareness? Here we compare the effect of feature-based attention on motion perception and smooth pursuit eye movements in response to moving dichoptic plaids–stimuli composed of two orthogonally-drifting gratings, presented separately to each eye–in human observers. Monocular adaptation to one grating prior to the presentation of both gratings renders the adapted grating perceptually weaker than the unadapted grating and decreases the level of awareness. Feature-based attention was directed to either the adapted or the unadapted grating’s motion direction or to both (neutral condition). We show that observers were better in detecting a speed change in the attended than the unattended motion direction, indicating that they had successfully attended to one grating. Speed change detection was also better when the change occurred in the unadapted than the adapted grating, indicating that the adapted grating was perceptually weaker. In neutral conditions, perception and pursuit in response to plaid motion were dissociated: While perception followed one grating’s motion direction almost exclusively (component motion), the eyes tracked the average of both gratings (pattern motion). In attention conditions, perception and pursuit were shifted towards the attended component. These results suggest that attention affects perception and pursuit similarly even though only the former reflects awareness. The eyes can track an attended feature even if observers do not perceive it. PMID:22649238
Differential responses in dorsal visual cortex to motion and disparity depth cues
Arnoldussen, David M.; Goossens, Jeroen; van den Berg, Albert V.
2013-01-01
We investigated how interactions between monocular motion parallax and binocular cues to depth vary in human motion areas for wide-field visual motion stimuli (110 × 100°). We used fMRI with an extensive 2 × 3 × 2 factorial blocked design in which we combined two types of self-motion (translational motion and translational + rotational motion), with three categories of motion inflicted by the degree of noise (self-motion, distorted self-motion, and multiple object-motion), and two different view modes of the flow patterns (stereo and synoptic viewing). Interactions between disparity and motion category revealed distinct contributions to self- and object-motion processing in 3D. For cortical areas V6 and CSv, but not the anterior part of MT+ with bilateral visual responsiveness (MT+/b), we found a disparity-dependent effect of rotational flow and noise: When self-motion perception was degraded by adding rotational flow and moderate levels of noise, the BOLD responses were reduced compared with translational self-motion alone, but this reduction was cancelled by adding stereo information which also rescued the subject's self-motion percept. At high noise levels, when the self-motion percept gave way to a swarm of moving objects, the BOLD signal strongly increased compared to self-motion in areas MT+/b and V6, but only for stereo in the latter. BOLD response did not increase for either view mode in CSv. These different response patterns indicate different contributions of areas V6, MT+/b, and CSv to the processing of self-motion perception and the processing of multiple independent motions. PMID:24339808
A comparison of form processing involved in the perception of biological and nonbiological movements
Thurman, Steven M.; Lu, Hongjing
2016-01-01
Although there is evidence for specialization in the human brain for processing biological motion per se, few studies have directly examined the specialization of form processing in biological motion perception. The current study was designed to systematically compare form processing in perception of biological (human walkers) to nonbiological (rotating squares) stimuli. Dynamic form-based stimuli were constructed with conflicting form cues (position and orientation), such that the objects were perceived to be moving ambiguously in two directions at once. In Experiment 1, we used the classification image technique to examine how local form cues are integrated across space and time in a bottom-up manner. By comparing with a Bayesian observer model that embodies generic principles of form analysis (e.g., template matching) and integrates form information according to cue reliability, we found that human observers employ domain-general processes to recognize both human actions and nonbiological object movements. Experiments 2 and 3 found differential top-down effects of spatial context on perception of biological and nonbiological forms. When a background does not involve social information, observers are biased to perceive foreground object movements in the direction opposite to surrounding motion. However, when a background involves social cues, such as a crowd of similar objects, perception is biased toward the same direction as the crowd for biological walking stimuli, but not for rotating nonbiological stimuli. The model provided an accurate account of top-down modulations by adjusting the prior probabilities associated with the internal templates, demonstrating the power and flexibility of the Bayesian approach for visual form perception. PMID:26746875
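The Bayesian observer sketched in the abstract above combines cue likelihoods with internal templates and models top-down context effects by adjusting prior probabilities. A toy two-direction version of that idea; all numbers below are invented for illustration and are not the study's fitted model:

```python
import numpy as np

def posterior_over_directions(cue_likelihoods, prior):
    """Combine independent cue likelihoods with a prior over two
    candidate motion directions and normalize to a posterior."""
    post = np.asarray(prior, dtype=float)
    for lik in cue_likelihoods:
        post = post * np.asarray(lik, dtype=float)  # independent cues multiply
    return post / post.sum()

# Two conflicting form cues (say, position vs. orientation) over [left, right].
# With a flat prior, the more reliable cue dominates the percept...
p_flat = posterior_over_directions([[0.7, 0.3], [0.4, 0.6]], [0.5, 0.5])
# ...while a context-biased prior (e.g., a crowd moving right) shifts it,
# mimicking the top-down modulation described for biological stimuli.
p_bias = posterior_over_directions([[0.7, 0.3], [0.4, 0.6]], [0.2, 0.8])
```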
Near-optimal integration of facial form and motion.
Dobs, Katharina; Ma, Wei Ji; Reddy, Leila
2017-09-08
Human perception consists of the continuous integration of sensory cues pertaining to the same object. While it has been fairly well established that humans integrate low-level cues optimally, weighting each in proportion to its relative reliability, the integration processes underlying high-level perception are much less understood. Here we investigate cue integration in a complex high-level perceptual system, the human face processing system. We tested cue integration of facial form and motion in an identity categorization task and found that an optimal model could successfully predict subjects' identity choices. Our results suggest that optimal cue integration may be implemented across different levels of the visual processing hierarchy.
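The "optimal" integration referred to above is standard inverse-variance (reliability-weighted) cue combination, which the study extends to facial form and motion. A minimal sketch; the cue values and variances are made up for illustration:

```python
import numpy as np

def integrate_cues(estimates, variances):
    """Reliability-weighted (inverse-variance) cue combination.

    Each cue is weighted by its reliability 1/sigma^2; the combined
    estimate has lower variance than either cue alone.
    """
    estimates = np.asarray(estimates, dtype=float)
    reliabilities = 1.0 / np.asarray(variances, dtype=float)
    weights = reliabilities / reliabilities.sum()
    combined_estimate = np.dot(weights, estimates)
    combined_variance = 1.0 / reliabilities.sum()
    return combined_estimate, combined_variance

# Example: a "form" cue and a noisier "motion" cue for the same identity signal.
est, var = integrate_cues([0.6, 0.9], [0.04, 0.16])
```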
Spering, Miriam; Montagnini, Anna
2011-04-22
Many neurophysiological studies in monkeys have indicated that visual motion information for the guidance of perception and smooth pursuit eye movements is - at an early stage - processed in the same visual pathway in the brain, crucially involving the middle temporal area (MT). However, these studies left some questions unanswered: Are perception and pursuit driven by the same or independent neuronal signals within this pathway? Are the perceptual interpretation of visual motion information and the motor response to visual signals limited by the same source of neuronal noise? Here, we review psychophysical studies that were motivated by these questions and compared perception and pursuit behaviorally in healthy human observers. We further review studies that focused on the interaction between perception and pursuit. The majority of results point to similarities between perception and pursuit, but dissociations were also reported. We discuss recent developments in this research area and conclude with suggestions for common and separate principles for the guidance of perceptual and motor responses to visual motion information. Copyright © 2010 Elsevier Ltd. All rights reserved.
Global motion perception is associated with motor function in 2-year-old children.
Thompson, Benjamin; McKinlay, Christopher J D; Chakraborty, Arijit; Anstice, Nicola S; Jacobs, Robert J; Paudel, Nabin; Yu, Tzu-Ying; Ansell, Judith M; Wouldes, Trecia A; Harding, Jane E
2017-09-29
The dorsal visual processing stream that includes V1, motion sensitive area V5 and the posterior parietal lobe, supports visually guided motor function. Two recent studies have reported associations between global motion perception, a behavioural measure of processing in V5, and motor function in pre-school and school aged children. This indicates a relationship between visual and motor development and also supports the use of global motion perception to assess overall dorsal stream function in studies of human neurodevelopment. We investigated whether associations between vision and motor function were present at 2 years of age, a substantially earlier stage of development. The Bayley III test of Infant and Toddler Development and measures of vision including visual acuity (Cardiff Acuity Cards), stereopsis (Lang stereotest) and global motion perception were attempted in 404 2-year-old children (±4 weeks). Global motion perception (quantified as a motion coherence threshold) was assessed by observing optokinetic nystagmus in response to random dot kinematograms of varying coherence. Linear regression revealed that global motion perception was modestly, but statistically significantly associated with Bayley III composite motor (r² = 0.06, p < 0.001, n = 375) and gross motor scores (r² = 0.06, p < 0.001, n = 375). The associations remained significant when language score was included in the regression model. In addition, when language score was included in the model, stereopsis was significantly associated with composite motor and fine motor scores, but unaided visual acuity was not statistically significantly associated with any of the motor scores. These results demonstrate that global motion perception and binocular vision are associated with motor function at an early stage of development. Global motion perception can be used as a partial measure of dorsal stream function from early childhood. Copyright © 2017 Elsevier B.V. All rights reserved.
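The modest effect sizes reported above (r² = 0.06) come from ordinary linear regression. As a reminder of what that statistic measures, a minimal sketch of r² as the proportion of variance explained by a linear fit; the data here are arbitrary, not the study's:

```python
import numpy as np

def r_squared(x, y):
    """Proportion of variance in y explained by a straight-line fit on x."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)      # least-squares line
    predicted = slope * x + intercept
    ss_res = np.sum((y - predicted) ** 2)       # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)      # total sum of squares
    return 1.0 - ss_res / ss_tot

# A perfectly linear relationship explains all the variance.
r2_perfect = r_squared([1, 2, 3, 4], [2, 4, 6, 8])
# Slightly noisy data explains most, but not all, of it.
r2_noisy = r_squared([1, 2, 3, 4, 5], [2.1, 3.9, 6.2, 8.0, 9.9])
```

An r² of 0.06, as in the study, means the linear fit accounts for only 6% of the variance in motor scores, which is why the authors call the association modest.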
NASA Technical Reports Server (NTRS)
Beutter, B. R.; Mulligan, J. B.; Stone, L. S.; Hargens, Alan R. (Technical Monitor)
1995-01-01
We have shown that moving a plaid in an asymmetric window biases the perceived direction of motion (Beutter, Mulligan & Stone, ARVO 1994). We now explore whether these biased motion signals might also drive the smooth eye-movement response by comparing the perceived and tracked directions. The human smooth oculomotor response to moving plaids appears to be driven by the perceived rather than the veridical direction of motion. This suggests that human motion perception and smooth eye movements share underlying neural motion-processing substrates as has already been shown to be true for monkeys.
No Evidence for Impaired Perception of Biological Motion in Adults with Autistic Spectrum Disorders
ERIC Educational Resources Information Center
Murphy, Patrick; Brady, Nuala; Fitzgerald, Michael; Troje, Nikolaus F.
2009-01-01
A central feature of autistic spectrum disorders (ASDs) is a difficulty in identifying and reading human expressions, including those present in the moving human form. One previous study, by Blake et al. (2003), reports decreased sensitivity for perceiving biological motion in children with autism, suggesting that perceptual anomalies underlie…
Priming with real motion biases visual cortical response to bistable apparent motion
Zhang, Qing-fang; Wen, Yunqing; Zhang, Deng; She, Liang; Wu, Jian-young; Dan, Yang; Poo, Mu-ming
2012-01-01
Apparent motion quartet is an ambiguous stimulus that elicits bistable perception, with the perceived motion alternating between two orthogonal paths. In human psychophysical experiments, the probability of perceiving motion in each path is greatly enhanced by a brief exposure to real motion along that path. To examine the neural mechanism underlying this priming effect, we used voltage-sensitive dye (VSD) imaging to measure the spatiotemporal activity in the primary visual cortex (V1) of awake mice. We found that a brief real motion stimulus transiently biased the cortical response to subsequent apparent motion toward the spatiotemporal pattern representing the real motion. Furthermore, intracellular recording from V1 neurons in anesthetized mice showed a similar increase in subthreshold depolarization in the neurons representing the path of real motion. Such short-term plasticity in early visual circuits may contribute to the priming effect in bistable visual perception. PMID:23188797
Inferring the direction of implied motion depends on visual awareness
Faivre, Nathan; Koch, Christof
2014-01-01
Visual awareness of an event, object, or scene is, by essence, an integrated experience, whereby different visual features composing an object (e.g., orientation, color, shape) appear as a unified percept and are processed as a whole. Here, we tested in human observers whether perceptual integration of static motion cues depends on awareness by measuring the capacity to infer the direction of motion implied by a static visible or invisible image under continuous flash suppression. Using measures of directional adaptation, we found that visible but not invisible implied motion adaptors biased the perception of real motion probes. In a control experiment, we found that invisible adaptors implying motion primed the perception of subsequent probes when they were identical (i.e., repetition priming), but not when they only shared the same direction (i.e., direction priming). Furthermore, using a model of visual processing, we argue that repetition priming effects are likely to arise as early as in the primary visual cortex. We conclude that although invisible images implying motion undergo some form of nonconscious processing, visual awareness is necessary to make inferences about motion direction. PMID:24706951
Human Systems Integration and Automation Issues in Small Unmanned Aerial Vehicles
2004-10-01
display (HMD) bounce. Motion sickness occurs in these situations due to a combination of actual motion plus "cybersickness" (McCauley and Sharkey...Research Laboratory. McCauley, M.E. and Sharkey, T.J. (Summer 1992). Cybersickness: Perception of Self-Motion in Virtual Environments. Presence
Visual Motion Perception and Visual Attentive Processes.
1988-04-01
88-0551 Visual Motion Perception and Visual Attentive Processes. George Sperling, New York University. Grant AFOSR 85-0364... Sperling. HIPS: A Unix-based image processing system. Computer Vision, Graphics, and Image Processing, 1984, 25, 331-347. (HIPS is the Human Information Processing Laboratory's Image Processing System.) 1985 van Santen, Jan P. H., and George Sperling. Elaborated Reichardt detectors. Journal of the Optical
Methodology for estimating human perception to tremors in high-rise buildings
NASA Astrophysics Data System (ADS)
Du, Wenqi; Goh, Key Seng; Pan, Tso-Chien
2017-07-01
Human perception to tremors during earthquakes in high-rise buildings is usually associated with psychological discomfort such as fear and anxiety. This paper presents a methodology for estimating the level of perception to tremors for occupants living in high-rise buildings subjected to ground motion excitations. Unlike other approaches based on empirical or historical data, the proposed methodology performs a regression analysis using the analytical results of two generic models of 15 and 30 stories. The recorded ground motions in Singapore are collected and modified for structural response analyses. Simple predictive models are then developed to estimate the perception level to tremors based on a proposed ground motion intensity parameter—the average response spectrum intensity in the period range between 0.1 and 2.0 s. These models can be used to predict the percentage of occupants in high-rise buildings who may perceive the tremors at a given ground motion intensity. Furthermore, the models are validated with two recent tremor events reportedly felt in Singapore. It is found that the estimated results match reasonably well with the reports in the local newspapers and from the authorities. The proposed methodology is applicable to urban regions where people living in high-rise buildings might feel tremors during earthquakes.
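The intensity parameter above is described only as the average response spectrum intensity over the 0.1 to 2.0 s period range; one plausible reading is the band integral of the response spectrum divided by the band width. A sketch under that assumption; the function name and example values are ours, not the paper's:

```python
import numpy as np

def average_spectrum_intensity(periods, spectrum, t_min=0.1, t_max=2.0):
    """Average response-spectrum ordinate over the period band
    [t_min, t_max] seconds: band integral divided by band width."""
    periods = np.asarray(periods, dtype=float)
    spectrum = np.asarray(spectrum, dtype=float)
    mask = (periods >= t_min) & (periods <= t_max)
    t, s = periods[mask], spectrum[mask]
    # Trapezoidal integration over the retained band.
    integral = np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(t))
    return integral / (t[-1] - t[0])

# Sanity check: a flat spectrum should average to its own level.
periods = np.linspace(0.05, 3.0, 60)
asi = average_spectrum_intensity(periods, np.full(periods.shape, 0.3))
```

Averaging over a period band rather than using a single spectral ordinate makes the parameter less sensitive to the exact periods of the 15- and 30-story generic models.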
Time perception of visual motion is tuned by the motor representation of human actions
Gavazzi, Gioele; Bisio, Ambra; Pozzo, Thierry
2013-01-01
Several studies have shown that the observation of a rapidly moving stimulus dilates our perception of time. However, this effect appears to be at odds with the fact that our interactions both with environment and with each other are temporally accurate. This work exploits this paradox to investigate whether the temporal accuracy of visual motion uses motor representations of actions. To this aim, the stimulus was a dot moving at different velocities, with kinematics that either did or did not belong to the human motor repertoire. Participants had to replicate its duration with two tasks differing in the underlying motor plan. Results show that independently of the task's motor plan, the temporal accuracy and precision depend on the correspondence between the stimulus' kinematics and the observer's motor competencies. Our data suggest that the temporal mechanism of visual motion exploits a temporal visuomotor representation tuned by the motor knowledge of human actions. PMID:23378903
The neurophysiology of biological motion perception in schizophrenia
Jahshan, Carol; Wynn, Jonathan K; Mathis, Kristopher I; Green, Michael F
2015-01-01
Introduction The ability to recognize human biological motion is a fundamental aspect of social cognition that is impaired in people with schizophrenia. However, little is known about the neural substrates of impaired biological motion perception in schizophrenia. In the current study, we assessed event-related potentials (ERPs) to human and nonhuman movement in schizophrenia. Methods Twenty-four subjects with schizophrenia and 18 healthy controls completed a biological motion task while their electroencephalography (EEG) was simultaneously recorded. Subjects watched clips of point-light animations containing 100%, 85%, or 70% biological motion, and were asked to decide whether the clip resembled human or nonhuman movement. Three ERPs were examined: P1, N1, and the late positive potential (LPP). Results Behaviorally, schizophrenia subjects identified significantly fewer stimuli as human movement compared to healthy controls in the 100% and 85% conditions. At the neural level, P1 was reduced in the schizophrenia group but did not differ among conditions in either group. There were no group differences in N1 but both groups had the largest N1 in the 70% condition. There was a condition × group interaction for the LPP: Healthy controls had a larger LPP to 100% versus 85% and 70% biological motion; there was no difference among conditions in schizophrenia subjects. Conclusions Consistent with previous findings, schizophrenia subjects were impaired in their ability to recognize biological motion. The EEG results showed that biological motion did not influence the earliest stage of visual processing (P1). Although schizophrenia subjects showed the same pattern of N1 results relative to healthy controls, they were impaired at a later stage (LPP), reflecting a dysfunction in the identification of human form in biological versus nonbiological motion stimuli. PMID:25722951
Studies of human dynamic space orientation using techniques of control theory
NASA Technical Reports Server (NTRS)
Young, L. R.
1974-01-01
Studies of human orientation and manual control in high order systems are summarized. Data cover techniques for measuring and altering orientation perception, role of non-visual motion sensors, particularly the vestibular and tactile sensors, use of motion cues in closed loop control of simple stable and unstable systems, and advanced computer controlled display systems.
Exhibition of stochastic resonance in vestibular tilt motion perception.
Galvan-Garza, R C; Clark, T K; Mulavara, A P; Oman, C M
2018-04-03
Stochastic Resonance (SR) is a phenomenon broadly described as "noise benefit". The application of subsensory electrical Stochastic Vestibular Stimulation (SVS) via electrodes behind each ear has been used to improve human balance and gait, but its effect on motion perception thresholds has not been examined. This study investigated the capability of subsensory SVS to reduce vestibular motion perception thresholds in a manner consistent with a characteristic bell-shaped SR curve. We measured upright, head-centered, roll tilt Direction Recognition (DR) thresholds in the dark in 12 human subjects with the application of wideband 0-30 Hz SVS ranging from ±0-700 μA. To conservatively assess if SR was exhibited, we compared the proportions of both subjective and statistical SR exhibition in our experimental data to proportions of SR exhibition in multiple simulation cases with varying underlying SR behavior. Analysis included individual and group statistics. As there is no established mathematical definition, three human raters subjectively judged that SR was exhibited in 78% of subjects. "Statistically significant SR exhibition", which additionally required that a subject's DR threshold with SVS be significantly lower than baseline (no SVS), was present in 50% of subjects. Both percentages were higher than simulations suggested could occur simply by chance. For SR exhibitors, defined by subjective or statistically significant criteria, the mean DR threshold improved by -30% and -39%, respectively. The largest individual improvement was -47%. At least half of the subjects were better able to perceive passive body motion with the application of subsensory SVS. This study presents the first conclusive demonstration of SR in vestibular motion perception. Copyright © 2018 Elsevier Inc. All rights reserved.
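The bell-shaped "noise benefit" at the heart of SR can be demonstrated with a toy hard-threshold detector: a subthreshold signal is undetectable without noise, best detected at a moderate noise level, and swamped at high noise. This toy model is our illustration of the generic SR mechanism, not the study's vestibular paradigm:

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminability(signal_amp, noise_amp, threshold=1.0, trials=20000):
    """Hit rate minus false-alarm rate for a hard-threshold detector,
    comparing signal-present trials against noise-only trials."""
    hits = np.mean(signal_amp + noise_amp * rng.standard_normal(trials) > threshold)
    false_alarms = np.mean(noise_amp * rng.standard_normal(trials) > threshold)
    return hits - false_alarms

# A subthreshold signal (0.8, below the threshold of 1.0):
d_none = discriminability(0.8, 0.0)   # no noise: never crosses threshold
d_mid = discriminability(0.8, 0.5)    # moderate noise: best detection
d_high = discriminability(0.8, 3.0)   # heavy noise: detection swamped
```

Sweeping the noise amplitude traces out the bell-shaped SR curve that the study looked for in roll-tilt thresholds under SVS.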
Shared sensory estimates for human motion perception and pursuit eye movements.
Mukherjee, Trishna; Battifarano, Matthew; Simoncini, Claudio; Osborne, Leslie C
2015-06-03
Are sensory estimates formed centrally in the brain and then shared between perceptual and motor pathways or is centrally represented sensory activity decoded independently to drive awareness and action? Questions about the brain's information flow pose a challenge because systems-level estimates of environmental signals are only accessible indirectly as behavior. Assessing whether sensory estimates are shared between perceptual and motor circuits requires comparing perceptual reports with motor behavior arising from the same sensory activity. Extrastriate visual cortex both mediates the perception of visual motion and provides the visual inputs for behaviors such as smooth pursuit eye movements. Pursuit has been a valuable testing ground for theories of sensory information processing because the neural circuits and physiological response properties of motion-responsive cortical areas are well studied, sensory estimates of visual motion signals are formed quickly, and the initiation of pursuit is closely coupled to sensory estimates of target motion. Here, we analyzed variability in visually driven smooth pursuit and perceptual reports of target direction and speed in human subjects while we manipulated the signal-to-noise level of motion estimates. Comparable levels of variability throughout viewing time and across conditions provide evidence for shared noise sources in the perception and action pathways arising from a common sensory estimate. We found that conditions that create poor, low-gain pursuit create a discrepancy between the precision of perception and that of pursuit. Differences in pursuit gain arising from differences in optic flow strength in the stimulus reconcile much of the controversy on this topic. Copyright © 2015 the authors.
Shared motion signals for human perceptual decisions and oculomotor actions
NASA Technical Reports Server (NTRS)
Stone, Leland S.; Krauzlis, Richard J.
2003-01-01
A fundamental question in primate neurobiology is to understand to what extent motor behaviors are driven by shared neural signals that also support conscious perception or by independent subconscious neural signals dedicated to motor control. Although it has clearly been established that cortical areas involved in processing visual motion support both perception and smooth pursuit eye movements, it remains unknown whether the same or different sets of neurons within these structures perform these two functions. Examination of the trial-by-trial variation in human perceptual and pursuit responses during a simultaneous psychophysical and oculomotor task reveals that the direction signals for pursuit and perception are not only similar on average but also co-vary on a trial-by-trial basis, even when performance is at or near chance and the decisions are determined largely by neural noise. We conclude that the neural signal encoding the direction of target motion that drives steady-state pursuit and supports concurrent perceptual judgments emanates from a shared ensemble of cortical neurons.
Perception of animacy in dogs and humans.
Abdai, Judit; Ferdinandy, Bence; Terencio, Cristina Baño; Pogány, Ákos; Miklósi, Ádám
2017-06-01
Humans have a tendency to perceive inanimate objects as animate based on simple motion cues. Although animacy is considered a complex cognitive property, this recognition seems to be spontaneous. Researchers have found that young human infants discriminate between dependent and independent movement patterns. However, quick visual perception of animate entities may be crucial to non-human species as well. Based on general mammalian homology, dogs may possess similar skills to humans. Here, we investigated whether dogs and humans discriminate similarly between dependent and independent motion patterns performed by geometric shapes. We projected a side-by-side video display of the two patterns and measured looking times towards each side, in two trials. We found that in Trial 1, both dogs and humans were equally interested in the two patterns, but in Trial 2, in both species, looking times towards the dependent pattern decreased, whereas looking times towards the independent pattern increased. We argue that dogs and humans spontaneously recognized the specific pattern and habituated to it rapidly, but continued to show interest in the 'puzzling' pattern. This suggests that both species tend to recognize inanimate agents as animate relying solely on their motions. © 2017 The Author(s).
Motion perception: behavior and neural substrate.
Mather, George
2011-05-01
Visual motion perception is vital for survival. Single-unit recordings in primate primary visual cortex (V1) have revealed the existence of specialized motion sensing neurons; perceptual effects such as the motion after-effect demonstrate their importance for motion perception. Human psychophysical data on motion detection can be explained by a computational model of cortical motion sensors. Both psychophysical and physiological data reveal at least two classes of motion sensor capable of sensing motion in luminance-defined and texture-defined patterns, respectively. Psychophysical experiments also reveal that motion can be seen independently of motion sensor output, based on attentive tracking of visual features. Sensor outputs are inherently ambiguous, due to the problem of univariance in neural responses. In order to compute stimulus direction and speed, the visual system must compare the responses of many different sensors sensitive to different directions and speeds. Physiological data show that this computation occurs in the visual middle temporal (MT) area. Recent psychophysical studies indicate that information about spatial form may also play a role in motion computations. Adaptation studies show that the human visual system is selectively sensitive to large-scale optic flow patterns, and physiological studies indicate that cells in the middle superior temporal (MST) area derive this sensitivity from the combined responses of many MT cells. Extraretinal signals used to control eye movements are an important source of signals to cancel out the retinal motion responses generated by eye movements, though visual information also plays a role. A number of issues remain to be resolved at all levels of the motion-processing hierarchy. 
WIREs Cogn Sci 2011, 2, 305-314. DOI: 10.1002/wcs.110. Additional supporting information may be found at http://www.lifesci.sussex.ac.uk/home/George_Mather/Motion/index.html. Copyright © 2010 John Wiley & Sons, Ltd.
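The "computational model of cortical motion sensors" this review refers to can be illustrated with a minimal correlation-type (Reichardt) detector, sketched here for a 1-D drifting sinusoid; all stimulus and detector parameters below are arbitrary illustrative choices, not values from the article.

```python
import math

def reichardt_response(direction, n_steps=2000, dt=0.01, delay=10):
    """Mean opponent output of a minimal correlation-type (Reichardt)
    motion detector viewing a 1-D sinusoid drifting in `direction`
    (+1 rightward, -1 leftward). The sign of the output reports the
    sensed direction of motion."""
    k, w, dx = 2.0, 2.0 * math.pi, 0.4  # spatial freq, temporal freq, sensor spacing
    s1_hist, s2_hist, out = [], [], 0.0
    for i in range(n_steps):
        t = i * dt
        s1 = math.sin(-direction * w * t)          # input sampled at x = 0
        s2 = math.sin(k * dx - direction * w * t)  # input sampled at x = dx
        s1_hist.append(s1)
        s2_hist.append(s2)
        if i >= delay:
            # rightward subunit: delayed left input times current right input,
            # minus the mirror-symmetric leftward subunit
            out += s1_hist[i - delay] * s2 - s2_hist[i - delay] * s1
    return out / (n_steps - delay)

r_right = reichardt_response(+1)  # positive for rightward drift
r_left = reichardt_response(-1)   # negative, equal magnitude, for leftward
```

The opponent subtraction is what makes the sensor direction-selective; a single delay-and-multiply subunit alone responds to flicker as well as motion.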
Bidet-Ildei, Christel; Kitromilides, Elenitsa; Orliaguet, Jean-Pierre; Pavlova, Marina; Gentaz, Edouard
2014-01-01
In human newborns, spontaneous visual preference for biological motion is reported to occur at birth, but the factors underpinning this preference are still in debate. Using a standard visual preferential looking paradigm, 4 experiments were carried out in 3-day-old human newborns to assess the influence of translational displacement on perception of human locomotion. Experiment 1 shows that human newborns prefer a point-light walker display representing human locomotion as if on a treadmill over random motion. However, no preference for biological movement is observed in Experiment 2 when both biological and random motion displays are presented with translational displacement. Experiments 3 and 4 show that newborns exhibit preference for translated biological motion (Experiment 3) and random motion (Experiment 4) displays over the same configurations moving without translation. These findings reveal that human newborns have a preference for the translational component of movement independently of the presence of biological kinematics. The outcome suggests that translation constitutes the first step in development of visual preference for biological motion. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Cignetti, Fabien; Chabeauti, Pierre-Yves; Menant, Jasmine; Anton, Jean-Luc J. J.; Schmitz, Christina; Vaugoyeau, Marianne; Assaiante, Christine
2017-01-01
The present study investigated the cortical areas engaged in the perception of graviceptive information embedded in biological motion (BM). To this end, functional magnetic resonance imaging was used to assess the cortical areas active during the observation of human movements performed under normogravity and microgravity (parabolic flight). Movements were defined by motion cues alone using point-light displays. We found that gravity modulated the activation of a restricted set of regions of the network subtending BM perception, including form-from-motion areas of the visual system (kinetic occipital region, lingual gyrus, cuneus) and motor-related areas (primary motor and somatosensory cortices). These findings suggest that compliance of observed movements with normal gravity was carried out by mapping them onto the observer’s motor system and by extracting their overall form from local motion of the moving light points. We propose that judgment on graviceptive information embedded in BM can be established based on motor resonance and visual familiarity mechanisms and not necessarily by accessing the internal model of gravitational motion stored in the vestibular cortex. PMID:28861024
Causal capture effects in chimpanzees (Pan troglodytes).
Matsuno, Toyomi; Tomonaga, Masaki
2017-01-01
Extracting a cause-and-effect structure from the physical world is an important demand for animals living in dynamically changing environments. Human perceptual and cognitive mechanisms are known to be sensitive and tuned to detect and interpret such causal structures. In contrast to rigorous investigations of human causal perception, the phylogenetic roots of this perception are not well understood. In the present study, we aimed to investigate the susceptibility of nonhuman animals to mechanical causality by testing whether chimpanzees perceive an illusion called causal capture (Scholl & Nakayama, 2002). Causal capture is a phenomenon in which a type of bistable visual motion of objects is perceived as a causal collision owing to bias from a co-occurring causal event. In our experiments, we assessed the susceptibility of perception of a bistable stream/bounce motion event to a co-occurring causal event in chimpanzees. The results show that, as in humans, causal "bounce" percepts increased significantly in chimpanzees when a task-irrelevant causal bounce event was presented synchronously. These outcomes suggest that the perceptual mechanisms behind the visual interpretation of causal structures in the environment are evolutionarily shared between human and nonhuman animals. Copyright © 2016 Elsevier B.V. All rights reserved.
Dynamic Stimuli And Active Processing In Human Visual Perception
NASA Astrophysics Data System (ADS)
Haber, Ralph N.
1990-03-01
Theories of visual perception traditionally have considered a static retinal image to be the starting point for processing, and have treated processing as both passive and a literal translation of that frozen, two-dimensional, pictorial image. This paper considers five problem areas in the analysis of human visually guided locomotion, in which the traditional approach is contrasted with newer ones that utilize dynamic definitions of stimulation and an active perceiver: (1) differentiation between object motion and self motion, and among the various kinds of self motion (e.g., eyes only, head only, whole body, and their combinations); (2) the sources and contents of visual information that guide movement; (3) the acquisition and performance of perceptual motor skills; (4) the nature of spatial representations, percepts, and the perceived layout of space; and (5) why the retinal image is a poor starting point for perceptual processing. These newer approaches argue that stimuli must be considered dynamic: humans process the systematic changes in patterned light that occur when objects move and when they themselves move. Furthermore, the processing of visual stimuli must be active and interactive, so that perceivers can construct panoramic and stable percepts from an interaction of stimulus information and expectancies about what is contained in the visual environment. These developments all suggest a very different approach to the computational analyses of object location and identification, and of the visual guidance of locomotion.
Psilocybin impairs high-level but not low-level motion perception.
Carter, Olivia L; Pettigrew, John D; Burr, David C; Alais, David; Hasler, Felix; Vollenweider, Franz X
2004-08-26
The hallucinogenic serotonin(1A&2A) agonist psilocybin is known for its ability to induce illusions of motion in otherwise stationary objects or textured surfaces. This study investigated the effect of psilocybin on local and global motion processing in nine human volunteers. Using a forced choice direction of motion discrimination task we show that psilocybin selectively impairs coherence sensitivity for random dot patterns, likely mediated by high-level global motion detectors, but not contrast sensitivity for drifting gratings, believed to be mediated by low-level detectors. These results are in line with those observed within schizophrenic populations and are discussed in respect to the proposition that psilocybin may provide a model to investigate clinical psychosis and the pharmacological underpinnings of visual perception in normal populations.
Translation and articulation in biological motion perception.
Masselink, Jana; Lappe, Markus
2015-08-01
Recent models of biological motion processing focus on the articulational aspect of human walking, investigated with point-light figures walking in place. However, in real human walking, the change in the position of the limbs relative to each other (referred to as articulation) results in a change of body location in space over time (referred to as translation). In order to examine the role of this translational component in the perception of biological motion, we designed three psychophysical experiments on facing (leftward/rightward) and articulation discrimination (forward/backward and leftward/rightward) of a point-light walker viewed from the side, varying translation direction (relative to articulation direction), the amount of local image motion, and trial duration. In a further set of forward/backward and leftward/rightward articulation tasks, we additionally tested the influence of translational speed, including catch trials without articulation. We found a perceptual bias in translation direction in all three discrimination tasks. In the case of facing discrimination the bias was limited to short stimulus presentation. Our results suggest an interaction of articulation analysis with the processing of translational motion, leading to best articulation discrimination when translational direction and speed match articulation. Moreover, we conclude that the global motion of the center-of-mass of the dot pattern is more relevant to the processing of translation than the local motion of the dots. Our findings highlight that translation is a relevant cue that should be integrated into models of human motion detection.
Modeling human behaviors and reactions under dangerous environment.
Kang, J; Wright, D K; Qin, S F; Zhao, Y
2005-01-01
This paper describes the framework of a real-time simulation system to model human behavior and reactions in dangerous environments. The system utilizes the latest 3D computer animation techniques, combined with artificial intelligence, robotics and psychology, to model human behavior, reactions and decision making under expected/unexpected dangers, in real-time, in virtual environments. The development of the system includes: classification of the conscious/subconscious behaviors and reactions of different people; capturing different motion postures with the Eagle Digital System; establishing 3D character animation models; establishing 3D models for the scene; planning the scenario and the contents; and programming within Virtools Dev. Programming within Virtools Dev is subdivided into modeling dangerous events, modeling characters' perceptions, modeling characters' decision making, modeling characters' movements, modeling characters' interaction with the environment, and setting up the virtual cameras. The real-time simulation of human reactions in hazardous environments is invaluable in military defense, fire escape, rescue operation planning, traffic safety studies, and safety planning in chemical factories and in the design of buildings, airplanes, ships and trains. Currently, human motion modeling can be realized through established technology, whereas integrating perception and intelligence into a virtual human's motion is still a huge undertaking. The challenges here are the synchronization of motion and intelligence; the accurate modeling of human vision, smell, touch and hearing; and the diversity and effects of emotion and personality in decision making. There are three types of software platforms that could be employed to realize motion and intelligence within one system, and their advantages and disadvantages are discussed.
[Comparative analysis of light sensitivity, depth and motion perception in animals and humans].
Schaeffel, F
2017-11-01
This study examined how humans perform in light sensitivity, depth perception and motion vision in comparison to various animals, and which parameters limit the performance of the visual system for these different functions. The study was based on literature (searches in PubMed) and our own results. Light sensitivity is limited by the brightness of the retinal image, which in turn is determined by the f-number of the eye. It is further limited by photon noise, thermal decay of rhodopsin, noise in the phototransduction cascade, and neuronal processing. In invertebrates, impressive optical tricks have evolved to increase the number of photons reaching the photoreceptors; furthermore, the spontaneous decay of the photopigment is lower in invertebrates, at the cost of higher energy consumption. For depth perception at close range, stereopsis is the most precise mechanism but is available only to a few vertebrates. In contrast, motion parallax is used by many species, vertebrates as well as invertebrates. In a few cases, accommodation or chromatic aberration is used for depth measurements. In motion vision, the temporal resolution of the eye is most important. The flicker fusion frequency correlates in vertebrates with metabolic turnover and body temperature, but also reaches very high values in insects; apart from that, the flicker fusion frequency generally declines with increasing body weight. Compared to animals, the performance of the human visual system is among the best in light sensitivity, the best in depth resolution, and in the middle range in motion resolution.
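The dependence of retinal image brightness on the eye's f-number mentioned above follows the familiar inverse-square law. A minimal sketch (the focal-length and pupil values are rough illustrative figures, not data from the paper):

```python
def f_number(focal_length_mm, pupil_diameter_mm):
    """f-number N = focal length / aperture (pupil) diameter."""
    return focal_length_mm / pupil_diameter_mm

def relative_brightness(n):
    """Retinal image brightness scales as 1 / N^2: halving the f-number
    quadruples the light flux per unit retinal area."""
    return 1.0 / n ** 2

# Roughly, a dark-adapted human eye (about 17 mm focal length, 8 mm pupil)
# has N ~ 2.1; the same eye with a 4 mm pupil gathers a quarter of the
# light per unit image area.
ratio = relative_brightness(f_number(17, 8)) / relative_brightness(f_number(17, 4))
```

Under this law, the optical tricks of invertebrate eyes noted above (e.g., superposition optics) amount to lowering the effective f-number without enlarging the eye.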
Gravito-Inertial Force Resolution in Perception of Synchronized Tilt and Translation
NASA Technical Reports Server (NTRS)
Wood, Scott J.; Holly, Jan; Zhang, Guen-Lu
2011-01-01
Natural movements in the sagittal plane involve pitch tilt relative to gravity combined with translation motion. The Gravito-Inertial Force (GIF) resolution hypothesis states that the resultant force on the body is perceptually resolved into tilt and translation consistently with the laws of physics. The purpose of this study was to test this hypothesis for human perception during combined tilt and translation motion. EXPERIMENTAL METHODS: Twelve subjects provided verbal reports during 0.3 Hz motion in the dark with 4 types of tilt and/or translation motion: 1) pitch tilt about an interaural axis at +/-10deg or +/-20deg, 2) fore-aft translation with acceleration equivalent to +/-10deg or +/-20deg, 3) combined "in phase" tilt and translation motion resulting in acceleration equivalent to +/-20deg, and 4) "out of phase" tilt and translation motion that maintained the resultant gravito-inertial force aligned with the longitudinal body axis. The amplitude of perceived pitch tilt and translation at the head were obtained during separate trials. MODELING METHODS: Three-dimensional mathematical modeling was performed to test the GIF-resolution hypothesis using a dynamical model. The model encoded GIF-resolution using the standard vector equation, and used an internal model of motion parameters, including gravity. Differential equations conveyed time-varying predictions. The six motion profiles were tested, resulting in predicted perceived amplitude of tilt and translation for each. RESULTS: The modeling results exhibited the same pattern as the experimental results. Most importantly, both modeling and experimental results showed greater perceived tilt during the "in phase" profile than the "out of phase" profile, and greater perceived tilt during combined "in phase" motion than during pure tilt of the same amplitude. However, the model did not predict as much perceived translation as reported by subjects during pure tilt. 
CONCLUSION: Human perception is consistent with the GIF-resolution hypothesis even when the gravito-inertial force vector remains aligned with the body during periodic motion. Perception is also consistent with GIF-resolution in the opposite condition, when the gravito-inertial force vector angle is enhanced by synchronized tilt and translation.
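The "standard vector equation" behind the GIF-resolution hypothesis is f = g - a: the otoliths sense only the resultant specific force, so the brain must split it into tilt (the orientation of g in head coordinates) and translation (a). A minimal 1-D sketch of this ambiguity (an illustration, not the authors' full three-dimensional dynamical model; the 10-degree example is an arbitrary choice):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def shear_force(tilt_deg, accel):
    """Fore-aft shear component of the gravito-inertial force f = g - a
    sensed by the otoliths (small-angle, 1-D sketch): a pitch tilt and a
    fore-aft acceleration contribute to the same signal."""
    return G * math.sin(math.radians(tilt_deg)) - accel

def static_tilt_interpretation(shear):
    """Tilt angle that would produce this shear with zero translation:
    the purely static reading of an ambiguous otolith signal."""
    return math.degrees(math.asin(max(-1.0, min(1.0, shear / G))))

# Tilt-translation ambiguity: a 10 deg static tilt and an upright body
# decelerating at G*sin(10 deg) (~1.7 m/s^2) produce identical shear,
# and a static interpretation reads both as a 10 deg tilt.
f_tilt = shear_force(10.0, 0.0)
f_tran = shear_force(0.0, -G * math.sin(math.radians(10.0)))
```

The study's "out of phase" profile corresponds to driving `shear_force` to zero by matching `accel` to the tilt term, which is why reported tilt in that condition is the critical test of the hypothesis.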
Observation and imitation of actions performed by humans, androids, and robots: an EMG study
Hofree, Galit; Urgen, Burcu A.; Winkielman, Piotr; Saygin, Ayse P.
2015-01-01
Understanding others' actions is essential for functioning in the physical and social world. In the past two decades, research has shown that action perception involves the motor system, supporting theories that we understand others' behavior via embodied motor simulation. Recently, the empirical approach to action perception has been facilitated by well-controlled artificial stimuli, such as robots. One broad question this approach can address is what aspects of similarity between the observer and the observed agent facilitate motor simulation. Since humans have evolved among other humans and animals, artificial stimuli such as robots allow us to probe whether our social perceptual systems are specifically tuned to process other biological entities. In this study, we used humanoid robots with different degrees of human-likeness in appearance and motion, along with electromyography (EMG) to measure muscle activity in participants' arms while they either observed or imitated videos of three agents producing actions with their right arm. The agents were a Human (biological appearance and motion), a Robot (mechanical appearance and motion), and an Android (biological appearance and mechanical motion). Right arm muscle activity increased when participants imitated all agents. Increased muscle activation was also found in the stationary arm, both during imitation and observation. Furthermore, muscle activity was sensitive to motion dynamics: activity was significantly stronger for imitation of the human than of both mechanical agents. There was also a relationship between the dynamics of the muscle activity and the motion dynamics in the stimuli. Overall, our data indicate that motor simulation is not limited to observation and imitation of agents with a biological appearance, but is also found for robots. However, we also found sensitivity to human motion in the EMG responses.
Combining data from multiple methods allows us to obtain a more complete picture of action understanding and the underlying neural computations. PMID:26150782
Premotor cortex is sensitive to auditory-visual congruence for biological motion.
Wuerger, Sophie M; Parkes, Laura; Lewis, Penelope A; Crocker-Buque, Alex; Rutschmann, Roland; Meyer, Georg F
2012-03-01
The auditory and visual perception systems have developed special processing strategies for ecologically valid motion stimuli, utilizing some of the statistical properties of the real world. A well-known example is the perception of biological motion, for example, the perception of a human walker. The aim of the current study was to identify the cortical network involved in the integration of auditory and visual biological motion signals. We first determined the cortical regions of auditory and visual coactivation (Experiment 1); a conjunction analysis based on unimodal brain activations identified four regions: middle temporal area, inferior parietal lobule, ventral premotor cortex, and cerebellum. The brain activations arising from bimodal motion stimuli (Experiment 2) were then analyzed within these regions of coactivation. Auditory footsteps were presented concurrently with either an intact visual point-light walker (biological motion) or a scrambled point-light walker; auditory and visual motion in depth (walking direction) could either be congruent or incongruent. Our main finding is that motion incongruency (across modalities) increases the activity in the ventral premotor cortex, but only if the visual point-light walker is intact. Our results extend our current knowledge by providing new evidence consistent with the idea that the premotor area assimilates information across the auditory and visual modalities by comparing the incoming sensory input with an internal representation.
NASA Technical Reports Server (NTRS)
Kirkpatrick, M.; Brye, R. G.
1974-01-01
A motion cue investigation program is reported that deals with human factors aspects of high-fidelity vehicle simulation. General data on non-visual motion thresholds and specific threshold values are established for use as washout parameters in vehicle simulation. A general-purpose simulator is used to test the contradictory-cue hypothesis that acceleration sensitivity is reduced during a vehicle control task involving visual feedback. The simulator provides varying acceleration levels. The method of forced choice is based on the theory of signal detectability.
Activation of the Human MT Complex by Motion in Depth Induced by a Moving Cast Shadow
Katsuyama, Narumi; Usui, Nobuo; Taira, Masato
2016-01-01
A moving cast shadow is a powerful monocular depth cue for motion perception in depth. For example, when a cast shadow moves away from or toward an object in a two-dimensional plane, the object appears to move toward or away from the observer in depth, respectively, whereas the size and position of the object are constant. Although the cortical mechanisms underlying motion perception in depth by cast shadow are unknown, the human MT complex (hMT+) is likely involved in the process, as it is sensitive to motion in depth represented by binocular depth cues. In the present study, we examined this possibility by using a functional magnetic resonance imaging (fMRI) technique. First, we identified the cortical regions sensitive to the motion of a square in depth represented via binocular disparity. Consistent with previous studies, we observed significant activation in the bilateral hMT+, and defined functional regions of interest (ROIs) there. We then investigated the activity of the ROIs during observation of the following stimuli: 1) a central square that appeared to move back and forth via a moving cast shadow (mCS); 2) a segmented and scrambled cast shadow presented beside the square (sCS); and 3) no cast shadow (nCS). Participants perceived motion of the square in depth in the mCS condition only. The activity of the hMT+ was significantly higher in the mCS compared with the sCS and nCS conditions. Moreover, the hMT+ was activated equally in both hemispheres in the mCS condition, despite presentation of the cast shadow in the bottom-right quadrant of the stimulus. Perception of the square moving in depth across visual hemifields may be reflected in the bilateral activation of the hMT+. We concluded that the hMT+ is involved in motion perception in depth induced by moving cast shadow and by binocular disparity. PMID:27597999
The economics of motion perception and invariants of visual sensitivity.
Gepshtein, Sergei; Tyukin, Ivan; Kubovy, Michael
2007-06-21
Neural systems face the challenge of optimizing their performance with limited resources, just as economic systems do. Here, we use tools of neoclassical economic theory to explore how a frugal visual system should use a limited number of neurons to optimize perception of motion. The theory prescribes that vision should allocate its resources to different conditions of stimulation according to the degree of balance between measurement uncertainties and stimulus uncertainties. We find that human vision approximately follows the optimal prescription. The equilibrium theory explains why human visual sensitivity is distributed the way it is and why qualitatively different regimes of apparent motion are observed at different speeds. The theory offers a new normative framework for understanding the mechanisms of visual sensitivity at the threshold of visibility and above the threshold and predicts large-scale changes in visual sensitivity in response to changes in the statistics of stimulation and system goals.
Auditory perception of a human walker.
Cottrell, David; Campbell, Megan E J
2014-01-01
When one hears footsteps in the hall, one can instantly recognise them as a person walking: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity to three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment, participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.
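Sensitivity in detection tasks like these is conventionally summarized with the signal-detection index d-prime, computed from hit and false-alarm rates (a standard psychophysical measure; the abstract does not specify the authors' exact analysis):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: the difference between the
    z-transformed (inverse normal CDF) hit and false-alarm rates.
    Equal sensitivity to footsteps and to nonbiological impact
    sounds corresponds to equal d' values."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# A listener with an 84% hit rate and a 50% false-alarm rate has d' ~ 1.0
example = d_prime(0.8413, 0.5)
```

Because d' separates sensitivity from response bias, it allows the footstep, ball, and drumbeat conditions to be compared even if listeners adopt different criteria for reporting a sound.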
Peelen, Marius V; Wiggett, Alison J; Downing, Paul E
2006-03-16
Accurate perception of the actions and intentions of other people is essential for successful interactions in a social environment. Several cortical areas that support this process respond selectively in fMRI to static and dynamic displays of human bodies and faces. Here we apply pattern-analysis techniques to arrive at a new understanding of the neural response to biological motion. Functionally defined body-, face-, and motion-selective visual areas all responded significantly to "point-light" human motion. Strikingly, however, only body selectivity was correlated, on a voxel-by-voxel basis, with biological motion selectivity. We conclude that (1) biological motion, through the process of structure-from-motion, engages areas involved in the analysis of the static human form; (2) body-selective regions in posterior fusiform gyrus and posterior inferior temporal sulcus overlap with, but are distinct from, face- and motion-selective regions; (3) the interpretation of region-of-interest findings may be substantially altered when multiple patterns of selectivity are considered.
Acquiring neural signals for developing a perception and cognition model
NASA Astrophysics Data System (ADS)
Li, Wei; Li, Yunyi; Chen, Genshe; Shen, Dan; Blasch, Erik; Pham, Khanh; Lynch, Robert
2012-06-01
The understanding of how humans process information, determine salience, and combine seemingly unrelated information is essential to automated processing of large amounts of information that is partially relevant, or of unknown relevance. Recent neurological science research in human perception, and in information science regarding context-based modeling, provides us with a theoretical basis for using a bottom-up approach for automating the management of large amounts of information in ways directly useful for human operators. However, integration of human intelligence into a game-theoretic framework for dynamic and adaptive decision support needs a perception and cognition model. For the purpose of cognitive modeling, we present a brain-computer-interface (BCI) based humanoid robot system to acquire brainwaves during human mental activities of imagining a humanoid robot-walking behavior. We use the neural signals to investigate relationships between complex humanoid robot behaviors and human mental activities for developing the perception and cognition model. The BCI system consists of a data acquisition unit with an electroencephalograph (EEG), a humanoid robot, and a charge-coupled device (CCD) camera. An EEG electrode cap acquires brainwaves from the scalp surface. The humanoid robot has 20 degrees of freedom (DOFs): 12 DOFs located on the hips, knees, and ankles for walking; 6 DOFs on the shoulders and arms for arm motion; and 2 DOFs for head yaw and pitch motion. The CCD camera takes video clips of the human subject's hand postures to identify mental activities that are correlated to the robot-walking behaviors.
Eye Movements in Darkness Modulate Self-Motion Perception.
Clemens, Ivar Adrianus H; Selen, Luc P J; Pomante, Antonella; MacNeilage, Paul R; Medendorp, W Pieter
2017-01-01
During self-motion, humans typically move the eyes to maintain fixation on the stationary environment around them. These eye movements could in principle be used to estimate self-motion, but their impact on perception is unknown. We had participants judge self-motion during different eye-movement conditions in the absence of full-field optic flow. In a two-alternative forced choice task, participants indicated whether the second of two successive passive lateral whole-body translations was longer or shorter than the first. This task was used in two experiments. In the first (n = 8), eye movements were constrained differently in the two translation intervals by presenting either a world-fixed or body-fixed fixation point or no fixation point at all (allowing free gaze). Results show that perceived translations were shorter with a body-fixed than a world-fixed fixation point. A linear model indicated that eye-movement signals received a weight of ∼25% for the self-motion percept. This model was independently validated in the trials without a fixation point (free gaze). In the second experiment (n = 10), gaze was free during both translation intervals. An oculomotor choice probability analysis showed that the translation with the larger eye-movement excursion was judged to be the larger one more often than chance. We conclude that eye-movement signals influence self-motion perception, even in the absence of visual stimulation.
Leib, Raz; Mawase, Firas; Karniel, Amir; Donchin, Opher; Rothwell, John; Nisky, Ilana; Davare, Marco
2016-10-12
How motion and sensory inputs are combined to assess an object's stiffness is still unknown. Here, we provide evidence for the existence of a stiffness estimator in the human posterior parietal cortex (PPC). We showed previously that delaying force feedback with respect to motion when interacting with an object caused participants to underestimate its stiffness. We found that applying theta-burst transcranial magnetic stimulation (TMS) over the PPC, but not the dorsal premotor cortex, enhances this effect without affecting movement control. We explain this enhancement as an additional lag in force signals. This is the first causal evidence that the PPC is not only involved in motion control, but also has an important role in perception that is disassociated from action. We provide a computational model suggesting that the PPC integrates position and force signals for perception of stiffness and that TMS alters the synchronization between the two signals causing lasting consequences on perceptual behavior. When selecting an object such as a ripe fruit or sofa, we need to assess the object's stiffness. Because we lack dedicated stiffness sensors, we rely on an as yet unknown mechanism that generates stiffness percepts by combining position and force signals. Here, we found that the posterior parietal cortex (PPC) contributes to combining position and force signals for stiffness estimation. This finding challenges the classical view about the role of the PPC in regulating position signals only for motion control because we highlight a key role of the PPC in perception that is disassociated from action. Altogether this sheds light on brain mechanisms underlying the interaction between action and perception and may help in the development of better teleoperation systems and rehabilitation of patients with sensory impairments. Copyright © 2016 Leib et al.
Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches
NASA Technical Reports Server (NTRS)
Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.
2005-01-01
While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm", and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm" is introduced that combines features from both approaches. This algorithm is formulated by optimal control, and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real time requirement without degrading the quality of the motion cues.
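The optimal-control core of such cueing algorithms can be sketched in a few lines. The toy example below (matrices and horizon are illustrative assumptions, not the paper's vestibular or aircraft models) iterates the discrete-time Riccati recursion to a steady-state feedback gain:

```python
import numpy as np

# Toy discrete-time LQR via backward Riccati iteration; the plant is a
# simple double integrator, not the paper's integrated perception model.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # discretized double integrator
B = np.array([[0.005],
              [0.1]])
Q = np.diag([1.0, 0.1])             # state cost
R = np.array([[0.01]])              # control cost

P = Q.copy()
for _ in range(500):                # iterate the Riccati update to a fixed point
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# The converged gain K stabilizes the closed loop x[k+1] = (A - B K) x[k].
print(np.abs(np.linalg.eigvals(A - B @ K)).max() < 1.0)  # True
```

The paper solves a time-varying matrix Riccati equation in real time via a neurocomputing approach; the fixed-point iteration here is only the simplest offline analogue of that computation.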
Normal form from biological motion despite impaired ventral stream function.
Gilaie-Dotan, S; Bentin, S; Harel, M; Rees, G; Saygin, A P
2011-04-01
We explored the extent to which biological motion perception depends on ventral stream integration by studying LG, an unusual case of developmental visual agnosia. LG has significant ventral stream processing deficits but no discernable structural cortical abnormality. LG's intermediate visual areas and object-sensitive regions exhibit abnormal activation during visual object perception, in contrast to area V5/MT+ which responds normally to visual motion (Gilaie-Dotan, Perry, Bonneh, Malach, & Bentin, 2009). Here, in three studies we used point light displays, which require visual integration, in adaptive threshold experiments to examine LG's ability to detect form from biological and non-biological motion cues. LG's ability to detect and discriminate form from biological motion was similar to healthy controls. In contrast, he was significantly deficient in processing form from non-biological motion. Thus, LG can rely on biological motion cues to perceive human forms, but is considerably impaired in extracting form from non-biological motion. Finally, we found that while LG viewed biological motion, activity in a network of brain regions associated with processing biological motion was functionally correlated with his V5/MT+ activity, indicating that normal inputs from V5/MT+ might suffice to activate his action perception system. These results indicate that processing of biologically moving form can dissociate from other form processing in the ventral pathway. Furthermore, the present results indicate that integrative ventral stream processing is necessary for uncompromised processing of non-biological form from motion. Copyright © 2011 Elsevier Ltd. All rights reserved.
Separate Perceptual and Neural Processing of Velocity- and Disparity-Based 3D Motion Signals.
Joo, Sung Jun; Czuba, Thaddeus B; Cormack, Lawrence K; Huk, Alexander C
2016-10-19
Although the visual system uses both velocity- and disparity-based binocular information for computing 3D motion, it is unknown whether (and how) these two signals interact. We found that these two binocular signals are processed distinctly at the levels of both cortical activity in human MT and perception. In human MT, adaptation to both velocity-based and disparity-based 3D motions demonstrated direction-selective neuroimaging responses. However, when adaptation to one cue was probed using the other cue, there was no evidence of interaction between them (i.e., there was no "cross-cue" adaptation). Analogous psychophysical measurements yielded correspondingly weak cross-cue motion aftereffects (MAEs) in the face of very strong within-cue adaptation. In a direct test of perceptual independence, adapting to opposite 3D directions generated by different binocular cues resulted in simultaneous, superimposed, opposite-direction MAEs. These findings suggest that velocity- and disparity-based 3D motion signals may both flow through area MT but constitute distinct signals and pathways. Recent human neuroimaging and monkey electrophysiology have revealed 3D motion selectivity in area MT, which is driven by both velocity-based and disparity-based 3D motion signals. However, to elucidate the neural mechanisms by which the brain extracts 3D motion given these binocular signals, it is essential to understand how, or indeed if, these two binocular cues interact. We show that velocity-based and disparity-based signals are mostly separate at the levels of both fMRI responses in area MT and perception. Our findings suggest that the two binocular cues for 3D motion might be processed by separate specialized mechanisms. Copyright © 2016 the authors.
Modeling Visual, Vestibular and Oculomotor Interactions in Self-Motion Estimation
NASA Technical Reports Server (NTRS)
Perrone, John
1997-01-01
A computational model of human self-motion perception has been developed in collaboration with Dr. Leland S. Stone at NASA Ames Research Center. The research included in the grant proposal sought to extend the utility of this model so that it could be used for explaining and predicting human performance in a greater variety of aerospace applications. This extension has been achieved along with physiological validation of the basic operation of the model.
NASA Technical Reports Server (NTRS)
Young, L. R.; Oman, C. M.; Curry, R. E.
1977-01-01
Vestibular perception and integration of several sensory inputs in simulation were studied. The relationship between tilt sensations induced by moving fields and those produced by actual body tilt is discussed. Linearvection studies were included, and the application of the vestibular model for perception of orientation based on motion cues is presented. Other areas examined include visual cues in the approach to landing and a comparison of linear and nonlinear washout filters using a model of the human vestibular system.
Multisensory effects on somatosensation: a trimodal visuo-vestibular-tactile interaction
Kaliuzhna, Mariia; Ferrè, Elisa Raffaella; Herbelin, Bruno; Blanke, Olaf; Haggard, Patrick
2016-01-01
Vestibular information about self-motion is combined with other sensory signals. Previous research described both visuo-vestibular and vestibular-tactile bilateral interactions, but the simultaneous interaction between all three sensory modalities has not been explored. Here we exploit a previously reported visuo-vestibular integration to investigate multisensory effects on tactile sensitivity in humans. Tactile sensitivity was measured during passive whole body rotations alone or in conjunction with optic flow, creating either purely vestibular or visuo-vestibular sensations of self-motion. Our results demonstrate that tactile sensitivity is modulated by perceived self-motion, as provided by a combined visuo-vestibular percept, and not by the visual and vestibular cues independently. We propose a hierarchical multisensory interaction that underpins somatosensory modulation: visual and vestibular cues are first combined to produce a multisensory self-motion percept. Somatosensory processing is then enhanced according to the degree of perceived self-motion. PMID:27198907
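One standard way to formalize the combination of visual and vestibular cues into a single self-motion percept is reliability-weighted (maximum-likelihood) cue integration. The sketch below uses made-up variances for illustration, not values measured in this study:

```python
# Reliability-weighted cue combination: each cue is weighted by the
# inverse of its noise variance. Inputs below are illustrative values.
def fuse(x1, var1, x2, var2):
    """Combine two noisy estimates into a single percept."""
    w1 = (1 / var1) / (1 / var1 + 1 / var2)
    fused = w1 * x1 + (1 - w1) * x2
    fused_var = 1 / (1 / var1 + 1 / var2)  # fused estimate is less noisy
    return fused, fused_var

# Hypothetical visual (10 deg/s, noisy) and vestibular (14 deg/s, reliable)
# self-motion estimates; the fused percept is pulled toward the reliable cue.
est, var = fuse(10.0, 4.0, 14.0, 1.0)
print(est, var)  # 13.2 0.8
```

Note the fused variance (0.8) is smaller than either input variance, which is the usual signature of statistically optimal integration.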
Sparse Coding of Natural Human Motion Yields Eigenmotions Consistent Across People
NASA Astrophysics Data System (ADS)
Thomik, Andreas; Faisal, A. Aldo
2015-03-01
Providing a precise mathematical description of the structure of natural human movement is a challenging problem. We use a data-driven approach to seek a generative model of movement capturing the underlying simplicity of spatial and temporal structure of behaviour observed in daily life. In perception, the analysis of natural scenes has shown that sparse codes of such scenes are information theoretic efficient descriptors with direct neuronal correlates. Translating from perception to action, we identify a generative model of movement generation by the human motor system. Using wearable full-hand motion capture, we measure the digit movement of the human hand in daily life. We learn a dictionary of "eigenmotions" which we use for sparse encoding of the movement data. We show that the dictionaries are generally well preserved across subjects with small deviations accounting for individuality of the person and variability in tasks. Further, the dictionary elements represent motions which can naturally describe hand movements. Our findings suggest the motor system can compose complex movement behaviours out of the spatially and temporally sparse activation of "eigenmotion" neurons, and is consistent with data on grasp-type specificity of specialised neurons in the premotor cortex. Andreas is supported by the Luxemburg Research Fund (1229297).
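Sparse encoding against a motion dictionary can be sketched with orthogonal matching pursuit on synthetic data. The dictionary, dimensions, and sparsity level below are illustrative assumptions, not the authors' learned "eigenmotions":

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dictionary: 8 unit-norm basis motions over 20 joint angles.
D = rng.normal(size=(20, 8))
D /= np.linalg.norm(D, axis=0)

# One movement frame built from only 2 active primitives (sparse activation).
true_coef = np.zeros(8)
true_coef[[1, 5]] = [1.5, -0.8]
x = D @ true_coef

def omp(D, x, k):
    """Orthogonal matching pursuit: greedily pick k atoms, refit by least squares."""
    residual, support = x.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    out = np.zeros(D.shape[1])
    out[support] = coef
    return out

coef = omp(D, x, k=2)
print(np.flatnonzero(coef))  # expected to recover the two active primitives
```

With an incoherent random dictionary and a 2-sparse signal, the greedy encoder reconstructs the frame essentially exactly, mirroring the idea that complex movements decompose into a few active primitives.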
Model of human visual-motion sensing
NASA Technical Reports Server (NTRS)
Watson, A. B.; Ahumada, A. J., Jr.
1985-01-01
A model of how humans sense the velocity of moving images is proposed. The model exploits constraints provided by human psychophysics, notably that motion-sensing elements appear tuned for two-dimensional spatial frequency, and by the frequency spectrum of a moving image, namely, that its support lies in the plane in which the temporal frequency equals the dot product of the spatial frequency and the image velocity. The first stage of the model is a set of spatial-frequency-tuned, direction-selective linear sensors. The temporal frequency of the response of each sensor is shown to encode the component of the image velocity in the sensor direction. At the second stage, these components are resolved in order to measure the velocity of image motion at each of a number of spatial locations and spatial frequencies. The model has been applied to several illustrative examples, including apparent motion, coherent gratings, and natural image sequences. The model agrees qualitatively with human perception.
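The spectral constraint the model exploits (a translating image's energy lies on the plane where temporal frequency equals the dot product of spatial frequency and image velocity) can be checked numerically. The grating parameters below are arbitrary illustration values:

```python
import numpy as np

# For an image translating at velocity v, a spatial-frequency component f
# drifts at temporal frequency f_t = f * v (in 2-D, the dot product f . v).
v = 2.0        # velocity, pixels per frame
fx = 0.125     # spatial frequency, cycles per pixel
n_frames, n_pix = 64, 64

x = np.arange(n_pix)
t = np.arange(n_frames)
# Moving sinusoidal grating: the phase advances by fx*v cycles per frame.
movie = np.cos(2 * np.pi * fx * (x[None, :] - v * t[:, None]))

# The temporal spectrum at any one pixel peaks at f_t = fx * v cycles/frame.
spectrum = np.abs(np.fft.rfft(movie[:, 0]))
f_t = np.fft.rfftfreq(n_frames)[np.argmax(spectrum)]
print(f_t, fx * v)  # 0.25 0.25
```

This is the sense in which a first-stage sensor tuned to spatial frequency fx encodes, through its temporal response frequency, the component of image velocity along its preferred direction.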
Blindsight modulation of motion perception.
Intriligator, James M; Xie, Ruiman; Barton, Jason J S
2002-11-15
Monkey data suggest that of all perceptual abilities, motion perception is the most likely to survive striate damage. The results of studies on motion blindsight in humans, though, are mixed. We used an indirect strategy to examine how responses to visible stimuli were modulated by blind-field stimuli. In a 26-year-old man with focal striate lesions, discrimination of visible optic flow was enhanced about 7% by blind-field flow, even though discrimination of optic flow in the blind field alone (the direct strategy) was at chance. Pursuit of an imagined target using peripheral cues showed reduced variance but not increased gain with blind-field cues. Preceding blind-field prompts shortened reaction times to visible targets by about 10 msec, but there was no attentional crowding of visible stimuli by blind-field distractors. A similar efficacy of indirect blind-field optic flow modulation was found in a second patient with residual vision after focal striate damage, but not in a third with more extensive medial occipito-temporal damage. We conclude that indirect modulatory strategies are more effective than direct forced-choice methods at revealing residual motion perception after focal striate lesions.
Auditory motion processing after early blindness
Jiang, Fang; Stecker, G. Christopher; Fine, Ione
2014-01-01
Studies showing that occipital cortex responds to auditory and tactile stimuli after early blindness are often interpreted as demonstrating that early blind subjects “see” auditory and tactile stimuli. However, it is not clear whether these occipital responses directly mediate the perception of auditory/tactile stimuli, or simply modulate or augment responses within other sensory areas. We used fMRI pattern classification to categorize the perceived direction of motion for both coherent and ambiguous auditory motion stimuli. In sighted individuals, perceived motion direction was accurately categorized based on neural responses within the planum temporale (PT) and right lateral occipital cortex (LOC). Within early blind individuals, auditory motion decisions for both stimuli were successfully categorized from responses within the human middle temporal complex (hMT+), but not the PT or right LOC. These findings suggest that early blind responses within hMT+ are associated with the perception of auditory motion, and that these responses in hMT+ may usurp some of the functions of nondeprived PT. Thus, our results provide further evidence that blind individuals do indeed “see” auditory motion. PMID:25378368
Slow and fast visual motion channels have independent binocular-rivalry stages.
van de Grind, W. A.; van Hof, P.; van der Smagt, M. J.; Verstraten, F. A.
2001-01-01
We have previously reported a transparent motion after-effect indicating that the human visual system comprises separate slow and fast motion channels. Here, we report that the presentation of a fast motion in one eye and a slow motion in the other eye does not result in binocular rivalry but in a clear percept of transparent motion. We call this new visual phenomenon 'dichoptic motion transparency' (DMT). So far only the DMT phenomenon and the two motion after-effects (the 'classical' motion after-effect, seen after motion adaptation on a static test pattern, and the dynamic motion after-effect, seen on a dynamic-noise test pattern) appear to isolate the channels completely. The speed ranges of the slow and fast channels overlap strongly and are observer dependent. A model is presented that links after-effect durations of an observer to the probability of rivalry or DMT as a function of dichoptic velocity combinations. Model results support the assumption of two highly independent channels showing only within-channel rivalry, and no rivalry or after-effect interactions between the channels. The finding of two independent motion vision channels, each with a separate rivalry stage and a private line to conscious perception, might be helpful in visualizing or analysing pathways to consciousness. PMID:11270442
Relating Attention to Visual Mechanisms
1989-02-28
Directional asymmetries and age effects in human self-motion perception.
Roditi, Rachel E; Crane, Benjamin T
2012-06-01
Directional asymmetries in vestibular reflexes have aided the diagnosis of vestibular lesions; however, potential asymmetries in vestibular perception have not been well defined. This investigation sought to measure potential asymmetries in human vestibular perception. Vestibular perception thresholds were measured in 24 healthy human subjects between the ages of 21 and 68 years. Stimuli consisted of a single cycle of sinusoidal acceleration in a single direction lasting 1 or 2 s (1 or 0.5 Hz), delivered in sway (left-right), surge (forward-backward), heave (up-down), or yaw rotation. Subject-identified self-motion directions were analyzed using a forced-choice technique, which permitted thresholds to be independently determined for each direction. Non-motion stimuli were presented to measure possible response bias. A significant directional asymmetry in the dynamic response occurred in 27% of conditions tested within subjects, and in at least one type of motion in 92% of subjects. Directional asymmetries were usually consistent when retested in the same subject but did not occur consistently in one direction across the population, with the exception of heave at 0.5 Hz. Responses during null stimuli presentation suggested that asymmetries were not due to biased guessing. Multiple models were applied and compared to determine if sensitivities were direction specific. Using the Akaike information criterion, it was found that the model with direction-specific sensitivities better described the data in 86% of runs when compared with a model that used the same sensitivity for both directions. Mean thresholds for yaw were 1.3±0.9°/s at 0.5 Hz and 0.9±0.7°/s at 1 Hz and were independent of age. Thresholds for surge and sway were 1.7±0.8 cm/s at 0.5 Hz and 0.7±0.3 cm/s at 1.0 Hz for subjects <50 years old and were significantly higher in subjects >50 years old. Heave thresholds were higher and were independent of age.
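The model comparison described in this abstract rests on AIC = 2k - 2 ln L, penalizing the extra parameter of a direction-specific model. A toy version on synthetic forced-choice data (all sensitivities and trial counts below are invented for illustration) shows how such a comparison comes out:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

def p_correct(speed, sigma):
    # 2AFC psychometric function: chance (0.5) at zero speed, rising to 1.
    return 0.5 * (1.0 + erf(speed / (sigma * sqrt(2.0))))

# Synthetic data with a genuinely direction-specific sensitivity.
speeds = np.array([0.25, 0.5, 1.0, 2.0, 4.0])   # cm/s, illustrative
n = 100                                          # trials per speed/direction
sigma_true = {"left": 1.5, "right": 0.8}
counts = {d: np.array([rng.binomial(n, p_correct(s, sig)) for s in speeds])
          for d, sig in sigma_true.items()}

def loglik(sigma, k):
    p = np.clip([p_correct(s, sigma) for s in speeds], 1e-9, 1 - 1e-9)
    return float(np.sum(k * np.log(p) + (n - k) * np.log(1 - p)))

grid = np.linspace(0.1, 3.0, 300)   # crude grid search over sigma

# Model 1: one shared sensitivity (k = 1 free parameter).
ll_shared = max(loglik(g, counts["left"]) + loglik(g, counts["right"]) for g in grid)
aic_shared = 2 * 1 - 2 * ll_shared

# Model 2: direction-specific sensitivities (k = 2 free parameters).
ll_split = sum(max(loglik(g, counts[d]) for g in grid) for d in ("left", "right"))
aic_split = 2 * 2 - 2 * ll_split

print(aic_split < aic_shared)  # True: the direction-specific model wins here
```

The study's analysis is richer (per-direction thresholds, bias controls), but the logic is the same: the two-sensitivity model is preferred only when its likelihood gain outweighs the AIC penalty for the added parameter.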
Rocking or Rolling – Perception of Ambiguous Motion after Returning from Space
Clément, Gilles; Wood, Scott J.
2014-01-01
The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive an accurate representation of spatial orientation. Adaptive changes during spaceflight in how the brain integrates vestibular cues with other sensory information can lead to impaired movement coordination, vertigo, spatial disorientation, and perceptual illusions after return to Earth. The purpose of this study was to compare tilt and translation motion perception in astronauts before and after returning from spaceflight. We hypothesized that these stimuli would be the most ambiguous in the low-frequency range (i.e., at about 0.3 Hz) where the linear acceleration can be interpreted either as a translation or as a tilt relative to gravity. Verbal reports were obtained in eleven astronauts tested using a motion-based tilt-translation device and a variable radius centrifuge before and after flying for two weeks on board the Space Shuttle. Consistent with previous studies, roll tilt perception was overestimated shortly after spaceflight and then recovered with 1–2 days. During dynamic linear acceleration (0.15–0.6 Hz, ±1.7 m/s2) perception of translation was also overestimated immediately after flight. Recovery to baseline was observed after 2 days for lateral translation and 8 days for fore–aft translation. These results suggest that there was a shift in the frequency dynamic of tilt-translation motion perception after adaptation to weightlessness. These results have implications for manual control during landing of a space vehicle after exposure to microgravity, as it will be the case for human asteroid and Mars missions. PMID:25354042
Ida, Hirofumi; Fukuhara, Kazunobu; Ishii, Motonobu
2012-01-01
The objective of this study was to assess the cognitive effect of human character models on the observer's ability to extract relevant information from computer graphics animation of tennis serve motions. Three digital human models (polygon, shadow, and stick-figure) were used to display the computationally simulated serve motions, which were perturbed at the racket-arm by modulating the speed (slower or faster) of one of the joint rotations (wrist, elbow, or shoulder). Twenty-one experienced tennis players and 21 novices made discrimination responses about the modulated joint and also specified the perceived swing speeds on a visual analogue scale. The results showed that the discrimination accuracies of the experienced players were both above and below chance level depending on the modulated joint, whereas those of the novices mostly remained at chance or guessing levels. As far as the experienced players were concerned, the polygon model decreased discrimination accuracy as compared with the stick-figure model. This suggests that complicated pictorial information may have a distracting effect on the recognition of the observed action. On the other hand, the perceived swing speed of the perturbed motion relative to the control was lower for the stick-figure model than for the polygon model, regardless of skill level. This result suggests that simplified visual information can bias the perception of motion speed toward slower values. It was also shown that increasing the joint rotation speed increased the perceived swing speed, although the resulting racket velocity had little correlation with this speed sensation. Collectively, an observer's recognition of the motion pattern and perception of the motion speed can be affected by the pictorial information of the human model as well as by the perturbation processing applied to the observed motion.
NASA Technical Reports Server (NTRS)
vonGierke, Henning E.; Parker, Donald E.
1993-01-01
Human graviceptors, localized to the trunk by Mittelstaedt, probably transduce acceleration via motion of the abdominal viscera. As demonstrated previously in biodynamic vibration and impact tolerance research, the thoraco-abdominal viscera exhibit a resonance at 4 to 6 Hz. Behavioral observations and mechanical models of otolith graviceptor response indicate a phase shift that increases with frequency between 0.01 and 0.5 Hz. Consequently, the potential exists for intermodality sensory conflict between vestibular and visceral graviceptor signals, at least at the mechanical receptor level. The frequency range of this potential conflict corresponds to the primary frequency range for motion sickness incidence in transportation, in subjects rotated about Earth-horizontal axes (barbecue-spit stimulation), and in periodic parabolic-flight microgravity research, as well as for erroneous perception of vertical oscillations in helicopters. We discuss the implications of this hypothesis for previous self-motion perception research and offer suggestions for future studies.
Human discrimination of visual direction of motion with and without smooth pursuit eye movements
NASA Technical Reports Server (NTRS)
Krukowski, Anton E.; Pirog, Kathleen A.; Beutter, Brent R.; Brooks, Kevin R.; Stone, Leland S.
2003-01-01
It has long been known that ocular pursuit of a moving target has a major influence on its perceived speed (Aubert, 1886; Fleischl, 1882). However, little is known about the effect of smooth pursuit on the perception of target direction. Here we compare the precision of human visual-direction judgments under two oculomotor conditions (pursuit vs. fixation). We also examine the impact of stimulus duration (200 ms vs. 800 ms) and absolute direction (cardinal vs. oblique). Our main finding is that direction discrimination thresholds in the fixation and pursuit conditions are indistinguishable. Furthermore, the two oculomotor conditions showed oblique effects of similar magnitudes. These data suggest that the neural direction signals supporting perception are the same with or without pursuit, despite remarkably different retinal stimulation. During fixation, the stimulus information is restricted to large, purely peripheral retinal motion, while during steady-state pursuit, the stimulus information consists of small, unreliable foveal retinal motion and a large efference-copy signal. A parsimonious explanation of our findings is that the signal limiting the precision of direction judgments is a neural estimate of target motion in head-centered (or world-centered) coordinates (i.e., a combined retinal and eye motion signal) as found in the medial superior temporal area (MST), and not simply an estimate of retinal motion as found in the middle temporal area (MT).
Mom's shadow: structure-from-motion in newly hatched chicks as revealed by an imprinting procedure.
Mascalzoni, Elena; Regolin, Lucia; Vallortigara, Giorgio
2009-03-01
The ability to recognize three-dimensional objects from two-dimensional (2-D) displays was investigated in domestic chicks, focusing on the role of the object's motion. In Experiment 1, newly hatched chicks, imprinted on a three-dimensional (3-D) object, were allowed to choose between the shadows of the familiar object and of an object never seen before. In Experiments 2 and 3, random-dot displays were used to produce the perception of a solid shape only when set in motion. Overall, the results showed that domestic chicks were able to recognize familiar shapes from 2-D motion stimuli. It is likely that similar general mechanisms underlying the perception of structure-from-motion and the extraction of 3-D information are shared by humans and animals. The present data show that these mechanisms operate similarly in birds and in mammals, two separate vertebrate classes, which possibly indicates a common phylogenetic origin of these processes.
Vibro-Perception of Optical Bio-Inspired Fiber-Skin.
Li, Tao; Zhang, Sheng; Lu, Guo-Wei; Sunami, Yuta
2018-05-12
In this research, based on the principle of optical interferometry, Mach-Zehnder and Optical Phase-locked Loop (OPLL) vibro-perception systems built around a bio-inspired fiber-skin are designed to mimic the tactile perception of human skin. The fiber-skin is made of an optical fiber embedded in a silicone elastomer. The optical fiber is an attractive alternative sensor for tactile perception, offering high sensitivity and reliability as well as low cost and immunity to magnetic interference. The silicone elastomer serves as a substrate with high flexibility and biocompatibility, while the optical fiber core serves as the vibro-perception sensor that detects physical motions such as tapping and sliding. According to the experimental results, the designed optical fiber-skin detects tapping and sliding in both the Mach-Zehnder and OPLL vibro-perception systems. Under the direct-contact condition, the OPLL system shows better performance than the Mach-Zehnder system; in the indirect-contact experiment, however, the Mach-Zehnder system is preferable. In summary, the fiber-skin is validated to detect light touch with excellent repeatability, making it highly suitable for skin-mimicking sensing.
Effect of contrast on the perception of direction of a moving pattern
NASA Technical Reports Server (NTRS)
Stone, L. S.; Watson, A. B.; Mulligan, J. B.
1989-01-01
A series of experiments examining the effect of contrast on the perception of moving plaids was performed to test the hypothesis that the human visual system determines the direction of a moving plaid in a two-stage process: decomposition into component motions followed by application of the intersection-of-constraints rule. Although there is recent evidence that the first tenet of the hypothesis is correct, i.e., that plaid motion is initially decomposed into the motion of the individual grating components, the nature of the second-stage combination rule has not yet been established. It was found that when the gratings within the plaid are of different contrasts, the perceived direction is not predicted by the intersection-of-constraints rule: there is a strong (up to 20 deg) bias toward the direction of the higher-contrast grating. A revised model, which incorporates a contrast-dependent weighting of perceived grating speed as observed for one-dimensional patterns, can quantitatively predict most of the results. The results are then discussed in the context of various models of human visual motion processing and of physiological responses of neurons in the primate visual system.
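For reference, the intersection-of-constraints rule itself is a two-equation linear system: each grating with normal direction θ_i and normal speed s_i constrains the pattern velocity v by v · n_i = s_i, and two non-parallel gratings pin v down to a single point. A minimal sketch (function name and parameterization are mine):

```python
import numpy as np

def ioc_velocity(theta1, s1, theta2, s2):
    """Intersection-of-constraints estimate of 2-D plaid velocity.

    Each grating moving in normal direction theta_i (radians) at normal
    speed s_i constrains the pattern velocity v by v . n_i = s_i; the two
    constraint lines intersect at the unique consistent velocity."""
    n = np.array([[np.cos(theta1), np.sin(theta1)],
                  [np.cos(theta2), np.sin(theta2)]])
    return np.linalg.solve(n, np.array([s1, s2]))

# Symmetric plaid: gratings at +/-45 deg with equal speed move the
# pattern horizontally, at a speed faster than either component.
v = ioc_velocity(np.pi / 4, 1.0, -np.pi / 4, 1.0)
```

The experimental finding above is precisely that when the two gratings differ in contrast, perceived direction deviates from this geometric prediction toward the higher-contrast component.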
Lahnakoski, Juha M; Glerean, Enrico; Salmi, Juha; Jääskeläinen, Iiro P; Sams, Mikko; Hari, Riitta; Nummenmaa, Lauri
2012-01-01
Despite the abundant data on brain networks processing static social signals, such as pictures of faces, the neural systems supporting social perception in naturalistic conditions are still poorly understood. Here we delineated brain networks subserving social perception under naturalistic conditions in 19 healthy humans who watched, during 3-T functional magnetic resonance imaging (fMRI), a set of 137 short (approximately 16 s each, total 27 min) audiovisual movie clips depicting pre-selected social signals. Two independent raters estimated how well each clip represented eight social features (faces, human bodies, biological motion, goal-oriented actions, emotion, social interaction, pain, and speech) and six filler features (places, objects, rigid motion, people not in social interaction, non-goal-oriented action, and non-human sounds) lacking social content. These ratings were used as predictors in the fMRI analysis. The posterior superior temporal sulcus (STS) responded to all social features but not to any non-social features, and the anterior STS responded to all social features except bodies and biological motion. We also found four partially segregated, extended networks for processing of specific social signals: (1) a fronto-temporal network responding to multiple social categories, (2) a fronto-parietal network preferentially activated to bodies, motion, and pain, (3) a temporo-amygdalar network responding to faces, social interaction, and speech, and (4) a fronto-insular network responding to pain, emotions, social interactions, and speech. Our results highlight the role of the pSTS in processing multiple aspects of social information, as well as the feasibility and efficiency of fMRI mapping under conditions that resemble the complexity of real life.
Abstracting Dance: Detaching Ourselves from the Habitual Perception of the Moving Body
Aviv, Vered
2017-01-01
This work explores to what extent the notion of abstraction in dance is valid and what it entails. Unlike abstraction in the fine arts that aims for a certain independence from representation of the external world through the use of non-figurative elements, dance is realized by a highly familiar object – the human body. In fact, we are all experts in recognizing the human body. For instance, we can mentally reconstruct its motion from minimal information (e.g., via a “dot display”), predict body trajectory during movement and identify emotional expressions of the body. Nonetheless, despite the presence of a human dancer on stage and our extreme familiarity with the human body, the process of abstraction is applicable also to dance. Abstract dance removes itself from familiar daily movements, violates the observer’s predictions about future movements and detaches itself from narratives. In so doing, abstract dance exposes the observer to perceptions of unfamiliar situations, thus paving the way to new interpretations of human motion and hence to perceiving ourselves differently in both the physical and emotional domains. PMID:28559871
Social forces for team coordination in ball possession game
NASA Astrophysics Data System (ADS)
Yokoyama, Keiko; Shima, Hiroyuki; Fujii, Keisuke; Tabuchi, Noriyuki; Yamamoto, Yuji
2018-02-01
Team coordination is a basic human behavioral trait observed in many real-life communities. To promote teamwork, it is important to cultivate social skills that elicit team coordination. In the present work, we consider which social skills are indispensable for individuals performing a ball possession game in soccer. We develop a simple social force model that describes the synchronized motion of offensive players. Comparing the simulation results with experimental observations, we found that the cooperative social force, a measure of perception skill, plays the most important role in reproducing the harmonized collective motion of experienced players in the task. We further developed an experimental tool that facilitates real players' perception of interpersonal distance, and showed that the tool improves novice players' motion as if the cooperative social force were imposed.
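The abstract does not give the model's force terms, but the social-force framework it builds on is easy to sketch: each player's acceleration is a sum of pairwise and group forces integrated over time. The toy update below is illustrative only; the terms, names, and constants are mine, not the published model's:

```python
import numpy as np

def social_force_step(pos, vel, k_coop=0.5, k_rep=1.0, d0=3.0, dt=0.1):
    """One Euler step of a toy social-force model for n players in 2-D.

    Each player feels a cooperative force pulling it toward the team
    centroid plus a short-range repulsion from near neighbours (to keep
    interpersonal distance). Illustrative constants, not fitted values."""
    n = pos.shape[0]
    centroid = pos.mean(axis=0)
    force = k_coop * (centroid - pos)            # cooperative attraction
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = pos[i] - pos[j]
            dist = np.linalg.norm(d)
            if dist < d0:                        # repel only when close
                force[i] += k_rep * (d0 - dist) * d / max(dist, 1e-9)
    vel = vel + dt * force
    pos = pos + dt * vel
    return pos, vel
```

Fitting the cooperative coefficient (here `k_coop`) to observed trajectories is the kind of comparison the authors use to argue that this term dominates in experienced players.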
Altmann, Christian F; Ueda, Ryuhei; Bucher, Benoit; Furukawa, Shigeto; Ono, Kentaro; Kashino, Makio; Mima, Tatsuya; Fukuyama, Hidenao
2017-10-01
Interaural time differences (ITD) and interaural level differences (ILD) constitute the two main cues for sound localization in the horizontal plane. Despite extensive research in animal models and humans, the mechanism by which these two cues are integrated into a unified percept is still far from clear. In this study, our aim was to test with human electroencephalography (EEG) whether integration of dynamic ITD and ILD cues is reflected in the so-called motion-onset response (MOR), an evoked potential elicited by moving sound sources. To this end, ITD and ILD trajectories were determined individually by cue-trading psychophysics. We then measured EEG while subjects were presented with either static click-trains or click-trains that contained a dynamic portion at the end. The dynamic part was created by combining ITD with ILD either congruently, to elicit the percept of a rightward/leftward moving sound, or incongruently, to elicit the percept of a static sound. In two experiments that differed in the method used to derive individual dynamic cue-trading stimuli, we observed an MOR with at least a change-N1 (cN1) component for both the congruent and incongruent conditions at about 160-190 ms after motion onset. A significant change-P2 (cP2) component for both the congruent and incongruent ITD/ILD combinations was found only in the second experiment, peaking at about 250 ms after motion onset. In sum, this study shows that a sound which, by a combination of counter-balanced ITD and ILD cues, induces a static percept can still elicit a motion-onset response, indicative of independent ITD and ILD processing at the level of the MOR, a component that has been proposed to be, at least partly, generated in non-primary auditory cortex.
Biological Motion Task Performance Predicts Superior Temporal Sulcus Activity
ERIC Educational Resources Information Center
Herrington, John D.; Nymberg, Charlotte; Schultz, Robert T.
2011-01-01
Numerous studies implicate superior temporal sulcus (STS) in the perception of human movement. More recent theories hold that STS is also involved in the "understanding" of human movement. However, almost no studies to date have associated STS function with observable variability in action understanding. The present study directly associated STS…
Examining the Effect of Age on Visual-Vestibular Self-Motion Perception Using a Driving Paradigm.
Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L
2017-05-01
Previous psychophysical research has examined how younger adults and non-human primates integrate visual and vestibular cues to perceive self-motion. However, there is much to be learned about how multisensory self-motion perception changes with age, and how these changes affect performance on everyday tasks involving self-motion. Evidence suggests that older adults display heightened multisensory integration compared with younger adults; however, few previous studies have examined this for visual-vestibular integration. To explore age differences in the way that visual and vestibular cues contribute to self-motion perception, we had younger and older participants complete a basic driving task containing visual and vestibular cues. We compared their performance against a previously established control group that experienced visual cues alone. Performance measures included speed, speed variability, and lateral position. Vestibular inputs resulted in more precise speed control among older adults, but not younger adults, when traversing curves. Older adults demonstrated more variability in lateral position when vestibular inputs were available versus when they were absent. These observations align with previous evidence of age-related differences in multisensory integration and demonstrate that they may extend to visual-vestibular integration. These findings may have implications for vehicle and simulator design when considering older users.
Object motion perception is shaped by the motor control mechanism of ocular pursuit.
Schweigart, G; Mergner, T; Barnes, G R
2003-02-01
It is still a matter of debate whether the control of smooth pursuit eye movements involves an internal drive signal from object motion perception. We measured human target velocity and target position perceptions and compared them with the presumed pursuit control mechanism (model simulations). We presented normal subjects (Ns) and vestibular-loss patients (Ps) with visual target motion in space. Concurrently, a visual background was presented, which was kept stationary or was moved with or against the target (five combinations). The motion stimuli consisted of smoothed ramp displacements with different dominant frequencies and peak velocities (0.05, 0.2, 0.8 Hz; 0.2-25.6 degrees/s). Subjects always pursued the target with their eyes. In a first experiment they gave verbal magnitude estimates of perceived target velocity in space and of self-motion in space. The target velocity estimates of both Ns and Ps tended to saturate at 0.8 Hz and with peak velocities >3 degrees/s. Below these ranges the velocity estimates showed a pronounced modulation in relation to the relative target-to-background motion (the 'background effect'; for example, background 'with'-motion decreased and 'against'-motion increased perceived target velocity). Pronounced only in Ps and not in Ns, there was an additional modulation in relation to the relative head-to-background motion, which co-varied with an illusion of self-motion in space (circular vection, CV) in Ps. In a second experiment, subjects performed retrospective reproduction of perceived target start and end positions with the same stimuli. Perceived end position was essentially veridical in both Ns and Ps (apart from a small constant offset). Reproduced start position showed an almost negligible background effect in Ns. In contrast, it showed a pronounced modulation in Ps, which again was related to CV. The results were compared with simulations of a model that we have recently presented for velocity control of eye pursuit.
We found that the main features of target velocity perception (in terms of dynamics and modulation by background) closely correspond to those of the internal drive signal for target pursuit, compatible with the notion of a common source of both the perception and the drive signal. In contrast, the eye pursuit movement is almost free of the background effect. As an explanation, we postulate that the target-to-background component in the target pursuit drive signal largely neutralises the background-to-eye retinal slip signal (optokinetic reflex signal) that feeds into the eye premotor mechanism as a competitor of the target retinal slip signal. An extension of the model also allowed us to simulate the findings on target position perception, which is assumed to be represented in a perceptual channel distinct from velocity perception, building on an efference copy of the essentially accurate eye position. We hold that other visuomotor behaviour, such as target reaching with the hand, builds mainly on this target position percept and therefore is not contaminated by the background effect in the velocity percept. Generally, the coincidence of an erroneous velocity percept and an almost perfect eye pursuit movement during background motion is discussed as an instructive example of an action-perception dissociation. This dissociation cannot be taken to indicate that the two functions are internally represented in separate brain control systems, but rather reflects the intimate coupling between both functions.
Feature-Based Attention in Early Vision for the Modulation of Figure–Ground Segregation
Wagatsuma, Nobuhiko; Oki, Megumi; Sakai, Ko
2013-01-01
We investigated psychophysically whether feature-based attention modulates the perception of figure–ground (F–G) segregation and, based on the results, we investigated computationally the neural mechanisms underlying attention modulation. In the psychophysical experiments, the attention of participants was drawn to a specific motion direction, and they were then asked to judge the side of the figure in an ambiguous figure with surfaces consisting of distinct motion directions. These experiments showed that the surface consisting of the attended direction of motion was more frequently observed as figure, with a degree comparable to that of spatial attention (Wagatsuma et al., 2008). They also showed that perception was dependent on the distribution of feature contrast, specifically the motion direction differences. These results led us to hypothesize that feature-based attention functions in a framework similar to that of spatial attention. We proposed a V1–V2 model in which feature-based attention modulates the contrast of low-level features in V1, and this modulation of contrast directly changes the surround modulation of border-ownership-selective cells in V2; thus, the perception of F–G is biased. The model exhibited good agreement with human perception in the magnitude of attention modulation and its invariance among stimuli. These results indicate that early-level features modified by feature-based attention alter subsequent processing along the afferent pathway, and that such modification can even change the perception of an object. PMID:23515841
Two-year-olds with autism orient to non-social contingencies rather than biological motion.
Klin, Ami; Lin, David J; Gorrindo, Phillip; Ramsay, Gordon; Jones, Warren
2009-05-14
Typically developing human infants preferentially attend to biological motion within the first days of life. This ability is highly conserved across species and is believed to be critical for filial attachment and for detection of predators. The neural underpinnings of biological motion perception are overlapping with brain regions involved in perception of basic social signals such as facial expression and gaze direction, and preferential attention to biological motion is seen as a precursor to the capacity for attributing intentions to others. However, in a serendipitous observation, we recently found that an infant with autism failed to recognize point-light displays of biological motion, but was instead highly sensitive to the presence of a non-social, physical contingency that occurred within the stimuli by chance. This observation raised the possibility that perception of biological motion may be altered in children with autism from a very early age, with cascading consequences for both social development and the lifelong impairments in social interaction that are a hallmark of autism spectrum disorders. Here we show that two-year-olds with autism fail to orient towards point-light displays of biological motion, and their viewing behaviour when watching these point-light displays can be explained instead as a response to non-social, physical contingencies--physical contingencies that are disregarded by control children. This observation has far-reaching implications for understanding the altered neurodevelopmental trajectory of brain specialization in autism.
A neural model of motion processing and visual navigation by cortical area MST.
Grossberg, S; Mingolla, E; Pack, C
1999-12-01
Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, and when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates.
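The claim that heading can be read out from the most active MSTd cells without complex heading templates has a simple geometric counterpart: under pure translation the retinal flow radiates from the focus of expansion (the heading point), which a linear least-squares readout can recover. The sketch below illustrates that geometry; it is my background illustration, not the authors' neural model:

```python
import numpy as np

def focus_of_expansion(points, flows):
    """Least-squares estimate of the focus of expansion (heading point)
    from a radial optic-flow field.

    Under pure observer translation, the flow vector at image point p is
    parallel to (p - foe), giving one linear constraint per sample:
    v_y * foe_x - v_x * foe_y = v_y * p_x - v_x * p_y."""
    A = np.stack([flows[:, 1], -flows[:, 0]], axis=1)
    b = flows[:, 1] * points[:, 0] - flows[:, 0] * points[:, 1]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe
```

When the eyes rotate during translation, the flow field is no longer purely radial, which is where the model's subtractive extraretinal eye-movement signals come in.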
Creating stimuli for the study of biological-motion perception.
Dekeyser, Mathias; Verfaillie, Karl; Vanrie, Jan
2002-08-01
In the perception of biological motion, the stimulus information is confined to a small number of lights attached to the major joints of a moving person. Despite this drastic degradation of the stimulus information, the human visual apparatus organizes the swarm of moving dots into a vivid percept of a moving biological creature. Several techniques have been proposed to create point-light stimuli: placing dots at strategic locations on photographs or films, video recording a person with markers attached to the body, computer animation based on artificial synthesis, and computer animation based on motion-capture data. A description is given of the technique we are currently using in our laboratory to produce animated point-light figures. The technique is based on a combination of motion capture and three-dimensional animation software (Character Studio, Autodesk, Inc., 1998). Some of the advantages of our approach are that the same actions can be shown from any viewpoint, that point-light versions, as well as versions with a full-fleshed character, can be created of the same actions, and that point lights can indicate the center of a joint (thereby eliminating several disadvantages associated with other techniques).
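The viewpoint flexibility described above comes from keeping the action as 3-D joint trajectories and projecting to point-lights only at render time. A minimal sketch of that projection step (function name and conventions are mine, not Character Studio's):

```python
import numpy as np

def point_light_frame(joints_3d, azimuth_deg=0.0):
    """Orthographic projection of 3-D joint positions (n x 3 array) into
    a 2-D point-light frame, after rotating the figure about the vertical
    (y) axis so the same action can be rendered from any viewpoint."""
    a = np.radians(azimuth_deg)
    rot_y = np.array([[np.cos(a),  0.0, np.sin(a)],
                      [0.0,        1.0, 0.0      ],
                      [-np.sin(a), 0.0, np.cos(a)]])
    rotated = joints_3d @ rot_y.T
    return rotated[:, :2]    # drop depth: x, y image coordinates

# The same captured action rendered frontally and in profile:
joints = np.array([[1.0, 0.0, 0.0],    # e.g., a hand out to the side
                   [0.0, 2.0, 0.0]])   # e.g., the head
front_view = point_light_frame(joints, 0.0)
side_view = point_light_frame(joints, 90.0)
```

Per-frame application of this projection to motion-capture data yields the point-light versions; rendering a skinned mesh from the same joint data yields the full-fleshed versions of the identical action.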
Wilkins, Luke; Gray, Rob; Gaska, James; Winterbottom, Marc
2013-12-30
A driving simulator was used to examine the relationship between motion perception and driving performance. Although motion perception test scores have been shown to be related to driving safety, it is not clear which combination of tests are the best predictors and whether motion perception training can improve driving performance. In experiment 1, 60 younger drivers (22.4 ± 2.5 years) completed three motion perception tests (2-dimensional [2D] motion-defined letter [MDL] identification, 3D motion in depth sensitivity [MID], and dynamic visual acuity [DVA]) followed by two driving tests (emergency braking [EB] and hazard perception [HP]). In experiment 2, 20 drivers (21.6 ± 2.1 years) completed 6 weeks of motion perception training (using the MDL, MID, and DVA tests), while 20 control drivers (22.0 ± 2.7 years) completed an online driving safety course. The EB performance was measured before and after training. In experiment 1, MDL (r = 0.34) and MID (r = 0.46) significantly correlated with EB score. The change in DVA score as a function of target speed (i.e., "velocity susceptibility") was correlated most strongly with HP score (r = -0.61). In experiment 2, the motion perception training group had a significant decrease in brake reaction time on the EB test from pre- to posttreatment, while there was no significant change for the control group: t(38) = 2.24, P = 0.03. Tests of 3D motion perception are the best predictor of EB, while DVA velocity susceptibility is the best predictor of hazard perception. Motion perception training appears to result in faster braking responses.
Motion illusion – evidence towards human vestibulo-thalamic projections
Shaikh, Aasef G.; Straumann, Dominik; Palla, Antonella
2017-01-01
Introduction: Contemporary studies have speculated that the cerebellar network responsible for motion perception projects to the cerebral cortex via the vestibulo-thalamus. Here we investigated the physiological properties of the vestibulo-thalamic pathway responsible for motion perception. Methods: Healthy subjects and a patient with a focal vestibulo-thalamic lacunar stroke spun a hand-held rheostat to approximate the value of perceived angular velocity during whole-body passive earth-vertical axis rotations in the yaw plane. The vestibulo-ocular reflex was simultaneously measured with high-resolution search coils (paradigm 1). In primates, the vestibulo-thalamic projections remain medial and then dorsomedial to the subthalamus; paradigm 2 therefore assessed the effects of high-frequency subthalamic nucleus electrical stimulation through the medial and caudal deep brain stimulation electrode in five subjects with Parkinson's disease. Results: Paradigm 1 revealed a directional mismatch of perceived rotation in the patient with a vestibulo-thalamic lacune; there was no such mismatch in the vestibulo-ocular reflex, and healthy subjects showed no such directional discrepancy of perceived motion. These results confirm that perceived angular motion is relayed through the thalamus. Stimulation through the medial and caudal-most electrode of the subthalamic deep brain stimulator in paradigm 2 resulted in perception of rotational motion in the horizontal semicircular canal plane. One patient perceived riding a swing, a complex motion, possibly a combination of vertical canal and otolith-derived signals representing pitch and fore-aft motion, respectively. Conclusion: The results characterize the physiological properties of the vestibulo-thalamic pathway, which passes in proximity to the subthalamic nucleus and conducts pure semicircular canal signals as well as convergent signals from the semicircular canals and the otoliths. PMID:28127679
Manipulating the content of dynamic natural scenes to characterize response in human MT/MST.
Durant, Szonya; Wall, Matthew B; Zanker, Johannes M
2011-09-09
Optic flow is one of the most important sources of information for enabling human navigation through the world. A striking finding from single-cell studies in monkeys is the rapid saturation of response of MT/MST areas with the density of optic flow type motion information. These results are reflected psychophysically in human perception in the saturation of motion aftereffects. We began by comparing responses to natural optic flow scenes in human visual brain areas to responses to the same scenes with inverted contrast (photo negative). This changes scene familiarity while preserving local motion signals. This manipulation had no effect; however, the response was only correlated with the density of local motion (calculated by a motion correlation model) in V1, not in MT/MST. To further investigate this, we manipulated the visible proportion of natural dynamic scenes and found that areas MT and MST did not increase in response over a 16-fold increase in the amount of information presented, i.e., response had saturated. This makes sense in light of the sparseness of motion information in natural scenes, suggesting that the human brain is well adapted to exploit a small amount of dynamic signal and extract information important for survival.
Human Sensibility Ergonomics Approach to Vehicle Simulator Based on Dynamics
NASA Astrophysics Data System (ADS)
Son, Kwon; Choi, Kyung-Hyun; Yoon, Ji-Sup
Simulators have been used to evaluate drivers' reactions to various transportation products. Most research, however, has concentrated on their technical performance. This paper considers the driver's motion perception on a vehicle simulator through the analysis of human sensibility ergonomics. A sensibility ergonomic method is proposed in order to improve the reliability of vehicle simulators. A passenger-vehicle simulator consists of three main modules: vehicle dynamics, virtual environment, and motion representation. To evaluate drivers' feedback, human perceptions are categorized into a set of verbal expressions, which were collected and investigated to find the most appropriate ones for the translational and angular accelerations of the simulator. The cut-off frequency of the washout filter in the representation module is selected as one sensibility factor. Sensibility experiments were carried out to find a correlation between the expressions and the cut-off frequency of the filter. This study suggests a methodology for building an ergonomic database that can be applied to the sensibility evaluation of dynamic simulators.
2016-09-28
previous research and modeling results. The OMS and Perception Toolbox were used to perform a case study of an F18 mishap. Model results imply that...
Representational Momentum for the Human Body: Awkwardness Matters, Experience Does Not
ERIC Educational Resources Information Center
Wilson, Margaret; Lancaster, Jessy; Emmorey, Karen
2010-01-01
Perception of the human body appears to involve predictive simulations that project forward to track unfolding body-motion events. Here we use representational momentum (RM) to investigate whether implicit knowledge of a learned arbitrary system of body movement such as sign language influences this prediction process, and how this compares to…
Self-organizing neural integration of pose-motion features for human action recognition
Parisi, German I.; Weber, Cornelius; Wermter, Stefan
2015-01-01
The visual recognition of complex, articulated human movements is fundamental for a wide range of artificial systems oriented toward human-robot communication, action classification, and action-driven perception. These challenging tasks may generally involve the processing of a huge amount of visual information and learning-based mechanisms for generalizing a set of training actions and classifying new samples. To operate in natural environments, a crucial property is the efficient and robust recognition of actions, also under noisy conditions caused by, for instance, systematic sensor errors and temporarily occluded persons. Studies of the mammalian visual system and its outperforming ability to process biological motion information suggest separate neural pathways for the distinct processing of pose and motion features at multiple levels and the subsequent integration of these visual cues for action perception. We present a neurobiologically-motivated approach to achieve noise-tolerant action recognition in real time. Our model consists of self-organizing Growing When Required (GWR) networks that obtain progressively generalized representations of sensory inputs and learn inherent spatio-temporal dependencies. During the training, the GWR networks dynamically change their topological structure to better match the input space. We first extract pose and motion features from video sequences and then cluster actions in terms of prototypical pose-motion trajectories. Multi-cue trajectories from matching action frames are subsequently combined to provide action dynamics in the joint feature space. Reported experiments show that our approach outperforms previous results on a dataset of full-body actions captured with a depth sensor, and ranks among the best results for a public benchmark of domestic daily actions. PMID:26106323
A computational model for reference-frame synthesis with applications to motion perception.
Clarke, Aaron M; Öğmen, Haluk; Herzog, Michael H
2016-09-01
As discovered by the Gestaltists, in particular by Duncker, we often perceive motion to be within a non-retinotopic reference frame. For example, the motion of a reflector on a bicycle appears to be circular, whereas it traces out a cycloidal path with respect to external world coordinates. The reflector motion appears to be circular because the human brain subtracts the horizontal motion of the bicycle from the reflector motion. The bicycle serves as a reference frame for the reflector motion. Here, we present a general mathematical framework, based on vector fields, to explain non-retinotopic motion processing. Using four types of non-retinotopic motion paradigms, we show how the theory works in detail. For example, we show how non-retinotopic motion in the Ternus-Pikler display can be computed. Copyright © 2015 Elsevier Ltd. All rights reserved.
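The reference-frame subtraction described in this abstract can be illustrated numerically: a reflector on a rolling wheel traces a cycloid in world coordinates, and subtracting the hub's horizontal translation recovers a circle around the hub. A minimal sketch (the radius, angular speed, and sample times are illustrative values, not taken from the paper):

```python
import math

def cycloid(r, omega, t):
    """World-coordinate position of a reflector on a wheel of radius r
    rolling at angular speed omega; the path is a cycloid."""
    x = r * omega * t - r * math.sin(omega * t)  # hub translation minus spoke offset
    y = r - r * math.cos(omega * t)
    return x, y

def in_bicycle_frame(r, omega, t):
    """Subtract the hub's motion (the reference frame): the reflector
    now moves on a circle of radius r centered on the hub."""
    x, y = cycloid(r, omega, t)
    return x - r * omega * t, y - r

# In the bicycle's frame, every sample lies exactly on a circle of radius r.
r, omega = 0.3, 2.0
for t in (0.0, 0.4, 0.9, 1.7):
    xr, yr = in_bicycle_frame(r, omega, t)
    assert abs(math.hypot(xr, yr) - r) < 1e-9
```

The subtraction is the simplest instance of the vector-field idea: the reference frame contributes a common motion vector that is removed from each element's retinal motion.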
Chaminade, Thierry; Ishiguro, Hiroshi; Driver, Jon; Frith, Chris
2012-01-01
Using functional magnetic resonance imaging (fMRI) repetition suppression, we explored the selectivity of the human action perception system (APS), which consists of temporal, parietal and frontal areas, for the appearance and/or motion of the perceived agent. Participants watched body movements of a human (biological appearance and movement), a robot (mechanical appearance and movement) or an android (biological appearance, mechanical movement). With the exception of extrastriate body area, which showed more suppression for human-like appearance, the APS was not selective for appearance or motion per se. Instead, distinctive responses were found to the mismatch between appearance and motion: whereas suppression effects for the human and robot were similar to each other, they were stronger for the android, notably in bilateral anterior intraparietal sulcus, a key node in the APS. These results could reflect increased prediction error as the brain negotiates an agent that appears human, but does not move biologically, and help explain the ‘uncanny valley’ phenomenon. PMID:21515639
Receptive fields for smooth pursuit eye movements and motion perception.
Debono, Kurt; Schütz, Alexander C; Spering, Miriam; Gegenfurtner, Karl R
2010-12-01
Humans use smooth pursuit eye movements to track moving objects of interest. In order to track an object accurately, motion signals from the target have to be integrated and segmented from motion signals in the visual context. Most studies on pursuit eye movements used small visual targets against a featureless background, disregarding the requirements of our natural visual environment. Here, we tested the ability of the pursuit and the perceptual system to integrate motion signals across larger areas of the visual field. Stimuli were random-dot kinematograms containing a horizontal motion signal, which was perturbed by a spatially localized, peripheral motion signal. Perturbations appeared in a gaze-contingent coordinate system and had a different direction than the main motion including a vertical component. We measured pursuit and perceptual direction discrimination decisions and found that both steady-state pursuit and perception were influenced most by perturbation angles close to that of the main motion signal and only in regions close to the center of gaze. The narrow direction bandwidth (26 angular degrees full width at half height) and small spatial extent (8 degrees of visual angle standard deviation) correspond closely to tuning parameters of neurons in the middle temporal area (MT). Copyright © 2010 Elsevier Ltd. All rights reserved.
Indovina, Iole; Maffei, Vincenzo; Pauwels, Karl; Macaluso, Emiliano; Orban, Guy A; Lacquaniti, Francesco
2013-05-01
Multiple visual signals are relevant to perception of heading direction. While the role of optic flow and depth cues has been studied extensively, little is known about the visual effects of gravity on heading perception. We used fMRI to investigate the contribution of gravity-related visual cues on the processing of vertical versus horizontal apparent self-motion. Participants experienced virtual roller-coaster rides in different scenarios, at constant speed or 1g-acceleration/deceleration. Imaging results showed that vertical self-motion coherent with gravity engaged the posterior insula and other brain regions that have been previously associated with vertical object motion under gravity. This selective pattern of activation was also found in a second experiment that included rectilinear motion in tunnels, whose direction was cued by the preceding open-air curves only. We argue that the posterior insula might perform high-order computations on visual motion patterns, combining different sensory cues and prior information about the effects of gravity. Medial-temporal regions including para-hippocampus and hippocampus were more activated by horizontal motion, preferably at constant speed, consistent with a role in inertial navigation. Overall, the results suggest partially distinct neural representations of the cardinal axes of self-motion (horizontal and vertical). Copyright © 2013 Elsevier Inc. All rights reserved.
Being Moved by the Self and Others: Influence of Empathy on Self-Motion Perception
Lopez, Christophe; Falconer, Caroline J.; Mast, Fred W.
2013-01-01
Background The observation of conspecifics influences our bodily perceptions and actions: Contagious yawning, contagious itching, or empathy for pain, are all examples of mechanisms based on resonance between our own body and others. While there is evidence for the involvement of the mirror neuron system in the processing of motor, auditory and tactile information, it has not yet been associated with the perception of self-motion. Methodology/Principal Findings We investigated whether viewing our own body, the body of another, and an object in motion influences self-motion perception. We found a visual-vestibular congruency effect for self-motion perception when observing self and object motion, and a reduction in this effect when observing someone else's body motion. The congruency effect was correlated with empathy scores, revealing the importance of empathy in mirroring mechanisms. Conclusions/Significance The data show that vestibular perception is modulated by agent-specific mirroring mechanisms. The observation of conspecifics in motion is an essential component of social life, and self-motion perception is crucial for the distinction between the self and the other. Finally, our results hint at the presence of a “vestibular mirror neuron system”. PMID:23326302
Tracking without perceiving: a dissociation between eye movements and motion perception.
Spering, Miriam; Pomplun, Marc; Carrasco, Marisa
2011-02-01
Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept. PMID:21189353
Anand, Sulekha; Bridgeman, Bruce
2002-02-01
Perception of image displacement is suppressed during saccadic eye movements. We probed the source of saccadic suppression of displacement by testing whether it selectively affects chromatic- or luminance-based motion information. Human subjects viewed a stimulus in which chromatic and luminance cues provided conflicting information about displacement direction. Apparent motion occurred during either fixation or a 19.5 degree saccade. Subjects detected motion and discriminated displacement direction in each trial. They reported motion in over 90% of fixation trials and over 70% of saccade trials. During fixation, the probability of perceiving the direction carried by chromatic cues decreased as luminance contrast increased. During saccades, subjects tended to perceive the direction indicated by luminance cues when luminance contrast was high. However, when luminance contrast was low, subjects showed no preference for the chromatic- or luminance-based direction. Thus magnocellular channels are suppressed, while stimulation of parvocellular channels is below threshold, so that neither channel drives motion perception during saccades. These results confirm that magnocellular inhibition is the source of saccadic suppression.
Network Interactions Explain Sensitivity to Dynamic Faces in the Superior Temporal Sulcus.
Furl, Nicholas; Henson, Richard N; Friston, Karl J; Calder, Andrew J
2015-09-01
The superior temporal sulcus (STS) in the human and monkey is sensitive to the motion of complex forms such as facial and bodily actions. We used functional magnetic resonance imaging (fMRI) to explore network-level explanations for how the form and motion information in dynamic facial expressions might be combined in the human STS. Ventral occipitotemporal areas selective for facial form were localized in occipital and fusiform face areas (OFA and FFA), and motion sensitivity was localized in the more dorsal temporal area V5. We then tested various connectivity models that modeled communication between the ventral form and dorsal motion pathways. We show that facial form information modulated transmission of motion information from V5 to the STS, and that this face-selective modulation likely originated in OFA. This finding shows that form-selective motion sensitivity in the STS can be explained in terms of modulation of gain control on information flow in the motion pathway, and provides a substantial constraint for theories of the perception of faces and biological motion. © The Author 2014. Published by Oxford University Press.
Human activity discrimination for maritime application
NASA Astrophysics Data System (ADS)
Boettcher, Evelyn; Deaver, Dawne M.; Krapels, Keith
2008-04-01
The US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) is investigating how motion affects the target acquisition model (NVThermIP) sensor performance estimates. This paper looks specifically at estimating sensor performance for the task of discriminating human activities on watercraft, and was sponsored by the Office of Naval Research (ONR). Traditionally, sensor models were calibrated using still images. While that approach is sufficient for static targets, video allows one to use motion cues to aid in discerning the type of human activity more quickly and accurately. This, in turn, will affect estimated sensor performance, and these effects are measured in order to calibrate current target acquisition models for this task. The study employed an eleven alternative forced choice (11AFC) human perception experiment to measure the task difficulty of discriminating unique human activities on watercraft. A mid-wave infrared camera was used to collect video at night. A description of the construction of this experiment is given, including: the data collection, image processing, perception testing and how contrast was defined for video. These results are applicable to evaluate sensor field performance for Anti-Terrorism and Force Protection (AT/FP) tasks for the U.S. Navy.
Spatiotemporal Filter for Visual Motion Integration from Pursuit Eye Movements in Humans and Monkeys
Liu, Bing
2017-01-01
Despite the enduring interest in motion integration, a direct measure of the space–time filter that the brain imposes on a visual scene has been elusive. This is perhaps because of the challenge of estimating a 3D function from perceptual reports in psychophysical tasks. We take a different approach. We exploit the close connection between visual motion estimates and smooth pursuit eye movements to measure stimulus–response correlations across space and time, computing the linear space–time filter for global motion direction in humans and monkeys. Although derived from eye movements, we find that the filter predicts perceptual motion estimates quite well. To distinguish visual from motor contributions to the temporal duration of the pursuit motion filter, we recorded single-unit responses in the monkey middle temporal cortical area (MT). We find that pursuit response delays are consistent with the distribution of cortical neuron latencies and that temporal motion integration for pursuit is consistent with a short integration MT subpopulation. Remarkably, the visual system appears to preferentially weight motion signals across a narrow range of foveal eccentricities rather than uniformly over the whole visual field, with a transiently enhanced contribution from locations along the direction of motion. We find that the visual system is most sensitive to motion falling at approximately one-third the radius of the stimulus aperture. Hypothesizing that the visual drive for pursuit is related to the filtered motion energy in a motion stimulus, we compare measured and predicted eye acceleration across several other target forms. SIGNIFICANCE STATEMENT A compact model of the spatial and temporal processing underlying global motion perception has been elusive. We used visually driven smooth eye movements to find the 3D space–time function that best predicts both eye movements and perception of translating dot patterns. 
We found that the visual system does not appear to use all available motion signals uniformly, but rather weights motion preferentially in a narrow band at approximately one-third the radius of the stimulus. Although not universal, the filter predicts responses to other types of stimuli, demonstrating a remarkable degree of generalization that may lead to a deeper understanding of visual motion processing. PMID:28003348
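The notion of a linear space–time filter for motion direction can be sketched with a toy, Adelson–Bergen-style motion-energy computation: a quadrature pair of space–time filters tuned to one drift direction responds more strongly to that direction than to its mirror image. The grating parameters and filter forms below are illustrative assumptions, not the filters estimated in the paper:

```python
import math

def drifting_grating(nx, nt, k, w, direction):
    """1D luminance grating with spatial frequency k and temporal
    frequency w, drifting right (direction=+1) or left (-1)."""
    return [[math.cos(k * x - direction * w * t) for x in range(nx)]
            for t in range(nt)]

def motion_energy(stim, k, w):
    """Opponent energy of two quadrature space-time filter pairs
    (minimal motion-energy model): positive output signals rightward
    motion, negative output leftward motion."""
    nt, nx = len(stim), len(stim[0])

    def energy(sign):
        # sign=-1 gives the rightward-tuned pair cos/sin(kx - wt),
        # sign=+1 the leftward-tuned pair cos/sin(kx + wt).
        even = odd = 0.0
        for t in range(nt):
            for x in range(nx):
                phase = k * x + sign * w * t
                even += stim[t][x] * math.cos(phase)
                odd += stim[t][x] * math.sin(phase)
        return even ** 2 + odd ** 2

    return energy(-1) - energy(+1)  # rightward minus leftward energy

k, w = 2 * math.pi / 16, 2 * math.pi / 10
assert motion_energy(drifting_grating(64, 40, k, w, +1), k, w) > 0
assert motion_energy(drifting_grating(64, 40, k, w, -1), k, w) < 0
```

The paper's contribution is estimating the actual 3D space–time weighting (including the foveal-eccentricity band) from pursuit responses; the sketch only shows the filtering operation such a weighting would feed.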
Attentional shifts between surfaces: effects on detection and early brain potentials.
Pinilla, T; Cobo, A; Torres, K; Valdes-Sosa, M
2001-06-01
Two consecutive events transforming the same illusory surface in transparent motion (brief changes in direction) can be discriminated with ease, but a prolonged interference (approximately 500 ms) on the discrimination of the second event arises when different surfaces are concerned [Valdes-Sosa, M., Cobo, A., & Pinilla, T. (2000). Attention to object files defined by transparent motion. Journal of Experimental Psychology: Human Perception and Performance, 26(2), 488-505]. Here we further characterise this phenomenon and compare it to the attentional blink (AB) [Shapiro, K.L., Raymond, J.E., & Arnell, K.M. (1994). Attention to visual pattern information produces the attentional blink in RSVP. Journal of Experimental Psychology: Human Perception and Performance, 20, 357-371]. Similar to the AB, reduced sensitivity (d') was found in the two-surface condition. However, the two-surface cost was associated with a reduced N1 brain response, in contrast to reports for the AB [Vogel, E.K., Luck, S.J., & Shapiro, K. (1998). Electrophysiological evidence for a postperceptual locus of suppression during the attentional blink. Journal of Experimental Psychology: Human Perception and Performance, 24(6), 1656-1674]. The results from this study indicate that the two-surface cost corresponds to competitive effects in early vision. Reasons for the discrepancy with the AB study are considered.
A research on motion design for APP's loading pages based on time perception
NASA Astrophysics Data System (ADS)
Cao, Huai; Hu, Xiaoyun
2018-04-01
Owing to constraints such as network bandwidth and hardware performance, waiting remains an unavoidable part of using mobile products. Research shows that users' feelings in a waiting scenario affect their evaluation of the whole product and the services it provides. With the development of user experience and interface design as disciplines, the role of motion effects in interface design has attracted growing scholarly attention, yet the theory of motion design for waiting scenarios remains incomplete. This article uses the basic theory and experimental methods of cognitive psychology to explore how motion design affects users' time perception while they wait for an app's pages to load. It first analyzes the factors that affect the waiting experience of loading pages based on the theory of time perception, and then discusses the impact of motion design on perceived duration and the corresponding design strategy. Furthermore, by analyzing existing loading animations, the article classifies the motion types and designs an experiment to verify the impact of different types on users' time perception. The results show that perceived waiting time in mobile apps depends on the type of loading motion, and that combined loading motions can effectively shorten perceived waiting time, scoring a higher mean value on the time-perception measure.
Whole-Motion Model of Perception during Forward- and Backward-Facing Centrifuge Runs
Holly, Jan E.; Vrublevskis, Arturs; Carlson, Lindsay E.
2009-01-01
Illusory perceptions of motion and orientation arise during human centrifuge runs without vision. Asymmetries have been found between acceleration and deceleration, and between forward-facing and backward-facing runs. Perceived roll tilt has been studied extensively during upright fixed-carriage centrifuge runs, and other components have been studied to a lesser extent. Certain, but not all, perceptual asymmetries in acceleration-vs-deceleration and forward-vs-backward motion can be explained by existing analyses. The immediate acceleration-deceleration roll-tilt asymmetry can be explained by the three-dimensional physics of the external stimulus; in addition, longer-term data has been modeled in a standard way using physiological time constants. However, the standard modeling approach is shown in the present research to predict forward-vs-backward-facing symmetry in perceived roll tilt, contradicting experimental data, and to predict perceived sideways motion, rather than forward or backward motion, around a curve. The present work develops a different whole-motion-based model taking into account the three-dimensional form of perceived motion and orientation. This model predicts perceived forward or backward motion around a curve, and predicts additional asymmetries such as the forward-backward difference in roll tilt. This model is based upon many of the same principles as the standard model, but includes an additional concept of familiarity of motions as a whole. PMID:19208962
The perception of surface layout during low level flight
NASA Technical Reports Server (NTRS)
Perrone, John A.
1991-01-01
Although it is fairly well established that information about surface layout can be gained from motion cues, it is not so clear as to what information humans can use and what specific information they should be provided. Theoretical analyses tell us that the information is in the stimulus. It will take more experiments to verify that this information can be used by humans to extract surface layout from the 2D velocity flow field. The visual motion factors that can affect the pilot's ability to control an aircraft and to infer the layout of the terrain ahead are discussed.
Foley, Elaine; Rippon, Gina; Thai, Ngoc Jade; Longe, Olivia; Senior, Carl
2012-02-01
Very little is known about the neural structures involved in the perception of realistic dynamic facial expressions. In the present study, a unique set of naturalistic dynamic facial emotional expressions was created. Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend Haxby et al.'s [Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. The distributed human neural system for face perception. Trends in Cognitive Science, 4, 223-233, 2000] distributed neural system for face perception. This network includes early visual regions, such as the inferior occipital gyrus, which is identified as insensitive to motion or affect but sensitive to the visual stimulus, the STS, identified as specifically sensitive to motion, and the amygdala, recruited to process affect. Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as the inferior occipital gyrus and the STS, along with coupling between the STS and the amygdala, as well as the inferior frontal gyrus. These findings support the presence of a distributed network of cortical regions that mediate the perception of different dynamic facial expressions.
The Default Mode Network Differentiates Biological From Non-Biological Motion
Dayan, Eran; Sella, Irit; Mukovskiy, Albert; Douek, Yehonatan; Giese, Martin A.; Malach, Rafael; Flash, Tamar
2016-01-01
The default mode network (DMN) has been implicated in an array of social-cognitive functions, including self-referential processing, theory of mind, and mentalizing. Yet, the properties of the external stimuli that elicit DMN activity in relation to these domains remain unknown. Previous studies suggested that motion kinematics is utilized by the brain for social-cognitive processing. Here, we used functional MRI to examine whether the DMN is sensitive to parametric manipulations of observed motion kinematics. Preferential responses within core DMN structures differentiating non-biological from biological kinematics were observed for the motion of a realistically looking, human-like avatar, but not for an abstract object devoid of human form. Differences in connectivity patterns during the observation of biological versus non-biological kinematics were additionally observed. Finally, the results suggest that the DMN is coupled more strongly with key nodes in the action observation network, namely the STS and the SMA, when the observed motion depicts human rather than abstract form. These findings are the first to implicate the DMN in the perception of biological motion. They may reflect the type of information used by the DMN in social-cognitive processing. PMID:25217472
NASA Astrophysics Data System (ADS)
Telban, Robert J.
While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. To address this, new human-centered motion cueing algorithms were developed. A revised "optimal algorithm" uses time-invariant filters developed by optimal control, incorporating human vestibular system models. The "nonlinear algorithm" is a novel approach that is also formulated by optimal control, but can also be updated in real time. It incorporates a new integrated visual-vestibular perception model that includes both visual and vestibular sensation and the interaction between the stimuli. A time-varying control law requires the matrix Riccati equation to be solved in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. As a result of unsatisfactory sensation, an augmented turbulence cue was added to the vertical mode for both the optimal and nonlinear algorithms. The relative effectiveness of the algorithms, in simulating aircraft maneuvers, was assessed with an eleven-subject piloted performance test conducted on the NASA Langley Visual Motion Simulator (VMS). Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. 
Control input analysis shows pilot-induced oscillations on a straight-in approach are less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.
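The optimal control formulation above reduces, in the time-invariant case, to solving a matrix Riccati equation for the filter gains. As a rough illustration (not Telban's actual vehicle or vestibular models; the plant and weighting matrices below are made-up placeholders), the steady-state equation can be solved numerically:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative 2-state plant; values are invented for the sketch.
A = np.array([[0.0, 1.0], [0.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.diag([1.0, 0.1])   # state weighting (hypothetical)
R = np.array([[1.0]])     # control weighting (hypothetical)

# Steady-state solution of the matrix Riccati equation:
#   A'P + PA - P B R^{-1} B' P + Q = 0
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)  # optimal feedback gain

# The residual of the Riccati equation should be ~0 at the solution.
residual = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T @ P) + Q
```

In the nonlinear algorithm the weights change during the simulation, so the time-varying Riccati equation must be integrated online rather than solved once; that real-time requirement is what motivates the neurocomputing approach mentioned in the abstract.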
A Bayesian model of stereopsis depth and motion direction discrimination.
Read, J C A
2002-02-01
The extraction of stereoscopic depth from retinal disparity, and motion direction from two-frame kinematograms, requires the solution of a correspondence problem. In previous psychophysical work [Read and Eagle (2000) Vision Res 40: 3345-3358], we compared the performance of the human stereopsis and motion systems with correlated and anti-correlated stimuli. We found that, although the two systems performed similarly for narrow-band stimuli, broadband anti-correlated kinematograms produced a strong perception of reversed motion, whereas the stereograms appeared merely rivalrous. I now model these psychophysical data with a computational model of the correspondence problem based on the known properties of visual cortical cells. Noisy retinal images are filtered through a set of Fourier channels tuned to different spatial frequencies and orientations. Within each channel, a Bayesian analysis incorporating a prior preference for small disparities is used to assess the probability of each possible match. Finally, information from the different channels is combined to arrive at a judgement of stimulus disparity. Each model system--stereopsis and motion--has two free parameters: the amount of noise they are subject to, and the strength of their preference for small disparities. By adjusting these parameters independently for each system, qualitative matches are produced to psychophysical data, for both correlated and anti-correlated stimuli, across a range of spatial frequency and orientation bandwidths. The motion model is found to require much higher noise levels and a weaker preference for small disparities. This makes the motion model more tolerant of poor-quality reverse-direction false matches encountered with anti-correlated stimuli, matching the strong perception of reversed motion that humans experience with these stimuli. 
In contrast, the lower noise level and tighter prior preference used with the stereopsis model mean that it performs close to chance with anti-correlated stimuli, in accordance with human psychophysics. Thus, the key features of the experimental data can be reproduced assuming that the motion system experiences more effective noise than the stereopsis system and imposes a less stringent preference for small disparities.
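The core computation the abstract describes, a Bayesian comparison of match likelihood against a prior favouring small disparities, can be sketched in miniature. This is a one-channel, one-dimensional toy, not Read's full multi-channel model: the signals, noise level, and prior width below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def map_disparity(left, right, disparities, noise_sd, prior_sd):
    """MAP disparity: Gaussian match likelihood x zero-centred Gaussian prior."""
    log_post = []
    for d in disparities:
        shifted = np.roll(right, d)                  # candidate match at disparity d
        sse = np.sum((left - shifted) ** 2)          # mismatch energy
        log_lik = -sse / (2 * noise_sd ** 2)
        log_prior = -d ** 2 / (2 * prior_sd ** 2)    # preference for small disparities
        log_post.append(log_lik + log_prior)
    return disparities[int(np.argmax(log_post))]

# Correlated stimulus with a true disparity of +3 samples.
left = rng.normal(size=64)
right = np.roll(left, -3)            # so np.roll(right, 3) realigns with left
disps = np.arange(-8, 9)
est = map_disparity(left, right, disps, noise_sd=0.5, prior_sd=5.0)
```

Raising `noise_sd` and `prior_sd` (the motion system's parameter regime in the model) flattens the likelihood and loosens the prior, which is what lets poor-quality reverse-direction matches win for anti-correlated stimuli.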
Efference Copy Failure during Smooth Pursuit Eye Movements in Schizophrenia
Spering, Miriam; Dias, Elisa C.; Sanchez, Jamie L.; Schütz, Alexander C.; Javitt, Daniel C.
2013-01-01
Abnormal smooth pursuit eye movements in patients with schizophrenia are often considered a consequence of impaired motion perception. Here we used a novel motion prediction task to assess the effects of abnormal pursuit on perception in human patients. Schizophrenia patients (n = 15) and healthy controls (n = 16) judged whether a briefly presented moving target (“ball”) would hit/miss a stationary vertical line segment (“goal”). To relate prediction performance and pursuit directly, we manipulated eye movements: in half of the trials, observers smoothly tracked the ball; in the other half, they fixated on the goal. Strict quality criteria ensured that pursuit was initiated and that fixation was maintained. Controls were significantly better in trajectory prediction during pursuit than during fixation, their performance increased with presentation duration, and their pursuit gain and perceptual judgments were correlated. Such perceptual benefits during pursuit may be due to the use of extraretinal motion information estimated from an efference copy signal. With an overall lower performance in pursuit and perception, patients showed no such pursuit advantage and no correlation between pursuit gain and perception. Although patients' pursuit showed normal improvement with longer duration, their prediction performance failed to benefit from duration increases. This dissociation indicates relatively intact early visual motion processing, but a failure to use efference copy information. Impaired efference function in the sensory system may represent a general deficit in schizophrenia and thus contribute to symptoms and functional outcome impairments associated with the disorder. PMID:23864667
The Perception of Auditory Motion
Leung, Johahn
2016-01-01
The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029
Visual motion disambiguation by a subliminal sound.
Dufour, Andre; Touzalin, Pascale; Moessinger, Michèle; Brochard, Renaud; Després, Olivier
2008-09-01
There is growing interest in the effect of sound on visual motion perception. One such paradigm involves the illusion created when two identical objects moving towards each other on a two-dimensional visual display can be seen either to bounce off or to stream through each other. Previous studies show that the large bias normally seen toward the streaming percept can be modulated by the presentation of an auditory event at the moment of coincidence. However, no reports to date provide sufficient evidence to indicate whether the bounce-inducing effect of sound is due to a perceptual binding process or merely to an explicit inference resulting from the transient auditory stimulus resembling a physical collision of two objects. In the present study, we used a novel experimental design in which a subliminal sound was presented either 150 ms before, at, or 150 ms after the moment of coincidence of two disks moving towards each other. The results showed that there was an increased perception of bouncing (rather than streaming) when the subliminal sound was presented at or 150 ms after the moment of coincidence compared to when no sound was presented. These findings provide the first empirical demonstration that activation of the human auditory system without reaching consciousness affects the perception of an ambiguous visual motion display.
A Unified Model of Heading and Path Perception in Primate MSTd
Layton, Oliver W.; Browning, N. Andrew
2014-01-01
Self-motion, steering, and obstacle avoidance during navigation in the real world require humans to travel along curved paths. Many perceptual models have been proposed that focus on heading, which specifies the direction of travel along straight paths, but not on path curvature, which humans accurately perceive and which is critical to everyday locomotion. In primates, including humans, the dorsal medial superior temporal area (MSTd) has been implicated in heading perception. However, the majority of MSTd neurons respond optimally to spiral patterns, rather than to the radial expansion patterns associated with heading. No existing theory of curved path perception explains the neural mechanisms by which humans accurately assess path, and no functional role for spiral-tuned cells has yet been proposed. Here we present a computational model that demonstrates how the continuum of observed cells (radial to circular) in MSTd can simultaneously code curvature and heading across the neural population. Curvature is encoded through the spirality of the most active cell, and heading is encoded through the visuotopic location of the center of the most active cell's receptive field. Model curvature and heading errors fit those made by humans. Our model challenges the view that the function of MSTd is heading estimation; based on our analysis, we claim that it is primarily concerned with trajectory estimation and the simultaneous representation of both curvature and heading. In our model, temporal dynamics afford time-history in the neural representation of optic flow, which may modulate its structure. This has far-reaching implications for the interpretation of studies that assume that optic flow is, and should be, represented as an instantaneous vector field.
Our results suggest that spiral motion patterns that emerge in spatio-temporal optic flow are essential for guiding self-motion along complex trajectories, and that cells in MSTd are specifically tuned to extract complex trajectory estimation from flow. PMID:24586130
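The population code the model proposes, a continuum of templates from radial (expansion) to circular (rotation), with curvature read out from the spirality of the most active cell, can be illustrated with a toy template-matching decoder. Everything below (grid size, number of cells, stimulus) is a made-up sketch of the idea, not the published model:

```python
import numpy as np

# Grid of visual-field locations for the flow field.
xs, ys = np.meshgrid(np.linspace(-1, 1, 21), np.linspace(-1, 1, 21))

def spiral_flow(theta):
    """Unit flow field on the radial(0)-to-circular(pi/2) continuum."""
    radial = np.stack([xs, ys])        # pure expansion
    circular = np.stack([-ys, xs])     # pure rotation
    f = np.cos(theta) * radial + np.sin(theta) * circular
    return f / (np.linalg.norm(f, axis=0) + 1e-9)  # normalise to unit vectors

# A population of spiral-tuned "cells", each preferring one spirality.
thetas = np.linspace(0, np.pi / 2, 19)
templates = [spiral_flow(t) for t in thetas]

def decode_spirality(flow):
    """Most active cell = template with the highest inner product with the flow."""
    acts = [np.sum(tmpl * flow) for tmpl in templates]
    return thetas[int(np.argmax(acts))]

stimulus = spiral_flow(np.pi / 4)      # an intermediate spiral stimulus
est = decode_spirality(stimulus)       # recovers the stimulus spirality
```

In the full model the preferred spirality of the winning cell would map onto path curvature, and the receptive-field centre of that cell onto heading; this sketch only shows the spirality readout.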
Role of orientation reference selection in motion sickness
NASA Technical Reports Server (NTRS)
Peterka, Robert J.; Black, F. Owen
1992-01-01
The overall objective of this proposal is to understand the relationship between human orientation control and motion sickness susceptibility. Three areas related to orientation control will be investigated: (1) reflexes associated with the control of eye movements and posture, (2) the perception of body rotation and position with respect to gravity, and (3) the strategies used to resolve sensory conflict situations which arise when different sensory systems provide orientation cues that are not consistent with one another or with previous experience. Of particular interest is the possibility that a subject may be able to ignore an inaccurate sensory modality in favor of one or more other sensory modalities which do provide accurate orientation reference information. We refer to this process as sensory selection. This proposal will attempt to quantify subjects' sensory selection abilities and determine if this ability confers some immunity to the development of motion sickness symptoms. Measurements of reflexes, motion perception, sensory selection abilities, and motion sickness susceptibility will concentrate on pitch and roll motions, since these seem most relevant to the space motion sickness problem. Vestibulo-ocular (VOR) and oculomotor reflexes will be measured using a unique two-axis rotation device developed in our laboratory over the last seven years. Posture control reflexes will be measured using a movable posture platform capable of independently altering proprioceptive and visual orientation cues. Motion perception will be quantified using a closed-loop feedback technique developed by Zacharias and Young (Exp Brain Res, 1981). This technique requires a subject to null out motions induced by the experimenter while being exposed to various confounding sensory orientation cues. A subject's sensory selection abilities will be measured by the magnitude and timing of his reactions to changes in sensory environments.
Motion sickness susceptibility will be measured by the time required to induce characteristic changes in the pattern of electrogastrogram recordings while exposed to various sensory environments during posture and motion perception tests. The results of this work are relevant to NASA's interest in understanding the etiology of space motion sickness. If any of the reflex, perceptual, or sensory selection abilities of subjects are found to correlate with motion sickness susceptibility, this work may be an important step in suggesting a method of predicting motion sickness susceptibility. If sensory selection can provide a means to avoid sensory conflict, then further work may lead to training programs which could enhance a subject's sensory selection ability and therefore minimize motion sickness susceptibility.
Ventral aspect of the visual form pathway is not critical for the perception of biological motion
Gilaie-Dotan, Sharon; Saygin, Ayse Pinar; Lorenzi, Lauren J.; Rees, Geraint; Behrmann, Marlene
2015-01-01
Identifying the movements of those around us is fundamental for many daily activities, such as recognizing actions, detecting predators, and interacting with others socially. A key question concerns the neurobiological substrates underlying biological motion perception. Although the ventral “form” visual cortex is standardly activated by biologically moving stimuli, whether these activations are functionally critical for biological motion perception or are epiphenomenal remains unknown. To address this question, we examined whether focal damage to regions of the ventral visual cortex, resulting in significant deficits in form perception, adversely affects biological motion perception. Six patients with damage to the ventral cortex were tested with sensitive point-light display paradigms. All patients were able to recognize unmasked point-light displays and their perceptual thresholds were not significantly different from those of three different control groups, one of which comprised brain-damaged patients with spared ventral cortex (n > 50). Importantly, these six patients performed significantly better than patients with damage to regions critical for biological motion perception. To assess the necessary contribution of different regions in the ventral pathway to biological motion perception, we complement the behavioral findings with a fine-grained comparison between the lesion location and extent, and the cortical regions standardly implicated in biological motion processing. This analysis revealed that the ventral aspects of the form pathway (e.g., fusiform regions, ventral extrastriate body area) are not critical for biological motion perception. We hypothesize that the role of these ventral regions is to provide enhanced multiview/posture representations of the moving person rather than to represent biological motion perception per se. PMID:25583504
Brain Response to a Humanoid Robot in Areas Implicated in the Perception of Human Emotional Gestures
Chaminade, Thierry; Zecca, Massimiliano; Blakemore, Sarah-Jayne; Takanishi, Atsuo; Frith, Chris D.; Micera, Silvestro; Dario, Paolo; Rizzolatti, Giacomo; Gallese, Vittorio; Umiltà, Maria Alessandra
2010-01-01
Background: The humanoid robot WE4-RII was designed to express human emotions in order to improve human-robot interaction. We can read the emotions depicted in its gestures, yet we might utilize different neural processes than those used for reading the emotions in human agents. Methodology: Here, fMRI was used to assess how brain areas activated by the perception of human basic emotions (facial expressions of Anger, Joy, and Disgust) and silent speech respond to a humanoid robot impersonating the same emotions, while participants were instructed to attend either to the emotion or to the motion depicted. Principal Findings: Increased responses to robot compared to human stimuli in the occipital and posterior temporal cortices suggest additional visual processing when perceiving a mechanical anthropomorphic agent. In contrast, activity in cortical areas endowed with mirror properties, like left Broca's area for the perception of speech, and in areas involved in the processing of emotions, like the left anterior insula for the perception of disgust and the orbitofrontal cortex for the perception of anger, is reduced for robot stimuli, suggesting lesser resonance with the mechanical agent. Finally, instructions to explicitly attend to the emotion significantly increased the response to robot, but not human, facial expressions in the anterior part of the left inferior frontal gyrus, a neural marker of motor resonance. Conclusions: Motor resonance towards a humanoid robot, but not a human, display of facial emotion is increased when attention is directed towards judging emotions. Significance: Artificial agents can be used to assess how factors like anthropomorphism affect neural response to the perception of human actions. PMID:20657777
Perception of combined translation and rotation in the horizontal plane in humans
2016-01-01
Thresholds and biases of human motion perception were determined for yaw rotation and for sway (left-right) and surge (fore-aft) translation, independently and in combination. Stimuli were 1-Hz sinusoids in acceleration with a peak velocity of 14°/s or 14 cm/s. Test stimuli were adjusted based on prior responses, whereas the distracting stimulus was constant. Seventeen human subjects between the ages of 20 and 83 completed the experiments and were divided into two groups: younger and older than 50. Both sway and surge translation thresholds significantly increased when combined with yaw rotation. Rotation thresholds were not significantly increased by the presence of translation. The presence of a yaw distractor significantly biased perception of sway translation, such that during 14°/s leftward rotation, the point of subjective equality (PSE) occurred with sway of 3.2 ± 0.7 (mean ± SE) cm/s to the right. Likewise, during 14°/s rightward rotation, the PSE occurred with sway of 2.9 ± 0.7 cm/s to the left. A sway distractor did not bias rotation perception. When subjects were asked to report the direction of translation while varying the axis of yaw rotation, the PSE at which translation was equally likely to be perceived in either direction was 29 ± 11 cm anterior to the midline. These results demonstrate that rotation biases translation perception, and that the bias is minimized when rotating about an axis anterior to the head. Since the combination of translation and rotation during ambulation is consistent with an axis anterior to the head, this may reflect a mechanism by which movements outside the pattern that occurs during ambulation are perceived. PMID:27334952
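A point of subjective equality such as the 3.2 cm/s sway bias reported above is typically obtained by fitting a psychometric function to direction judgments and reading off its 50% point. The sketch below fits a cumulative Gaussian to synthetic data with a built-in bias; the velocities, trial counts, and slope are hypothetical, and the fitting procedure is a generic one rather than the authors' exact method.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical sway velocities (cm/s, + = rightward) and proportion of
# "rightward" responses, synthesised with a PSE shifted to +3.2 cm/s.
velocities = np.linspace(-8, 8, 9)
true_pse, true_slope = 3.2, 2.0
rng = np.random.default_rng(1)
n_trials = 200
p_true = norm.cdf((velocities - true_pse) / true_slope)
p_obs = rng.binomial(n_trials, p_true) / n_trials  # simulated response rates

def psychometric(v, pse, slope):
    """Cumulative-Gaussian psychometric function; PSE is its 50% point."""
    return norm.cdf((v - pse) / slope)

(pse_hat, slope_hat), _ = curve_fit(psychometric, velocities, p_obs, p0=(0.0, 1.0))
```

With enough trials per level, `pse_hat` recovers the built-in bias; in the study above, a PSE displaced from zero during rotation is what quantifies the rotation-induced bias on translation perception.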
EEG theta and Mu oscillations during perception of human and robot actions
Urgen, Burcu A.; Plank, Markus; Ishiguro, Hiroshi; Poizner, Howard; Saygin, Ayse P.
2013-01-01
The perception of others’ actions supports important skills such as communication, intention understanding, and empathy. Are mechanisms of action processing in the human brain specifically tuned to process biological agents? Humanoid robots can perform recognizable actions, but can look and move differently from humans, and as such, can be used in experiments to address such questions. Here, we recorded EEG as participants viewed actions performed by three agents. In the Human condition, the agent had biological appearance and motion. The other two conditions featured a state-of-the-art robot in two different appearances: Android, which had biological appearance but mechanical motion, and Robot, which had mechanical appearance and motion. We explored whether sensorimotor mu (8–13 Hz) and frontal theta (4–8 Hz) activity exhibited selectivity for biological entities, in particular for whether the visual appearance and/or the motion of the observed agent was biological. Sensorimotor mu suppression has been linked to the motor simulation aspect of action processing (and the human mirror neuron system, MNS), and frontal theta to semantic and memory-related aspects. For all three agents, action observation induced significant attenuation in the power of mu oscillations, with no difference between agents. Thus, mu suppression, considered an index of MNS activity, does not appear to be selective for biological agents. Observation of the Robot resulted in greater frontal theta activity compared to the Android and the Human, whereas the latter two did not differ from each other. Frontal theta thus appears to be sensitive to visual appearance, suggesting agents that are not sufficiently biological in appearance may result in greater memory processing demands for the observer. Studies combining robotics and neuroscience such as this one can allow us to explore neural basis of action processing on the one hand, and inform the design of social robots on the other. PMID:24348375
Audiovisual associations alter the perception of low-level visual motion
Kafaligonul, Hulusi; Oluk, Can
2015-01-01
Motion perception is a pervasive aspect of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions, isolating low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed, and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level attention-based motion system, and that early-level visual motion processing plays some role. PMID:25873869
Gravity matters: Motion perceptions modified by direction and body position.
Claassen, Jens; Bardins, Stanislavs; Spiegel, Rainer; Strupp, Michael; Kalla, Roger
2016-07-01
Motion coherence thresholds (MCTs) are consistently higher at lower velocities. In this study we analysed the influence of the position and direction of moving objects on their perception, and thereby the influence of gravity. The paradigm allows a differentiation to be made between coherent and randomly moving objects in an upright and a reclining position, with a horizontal or vertical axis of motion. Eighteen young healthy participants were examined in this coherence-threshold paradigm. MCTs were significantly lower when body position and motion direction were congruent with gravity, independent of motion velocity (p=0.024). In the other conditions, higher MCTs were found at lower velocities and vice versa (p<0.001). This result confirms previous studies reporting higher MCTs at lower velocities, but contrasts with studies concerning the perception of virtual turns and optokinetic nystagmus, in which differences in perception were due to different directions irrespective of body position, i.e. perception took place in an egocentric reference frame. Since the observed differences occurred in an upright position only, perception of coherent motion in this study is defined by an earth-centered reference frame rather than by an egocentric frame.
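Motion coherence thresholds like these are commonly estimated with an adaptive staircase (the abstract does not specify the authors' exact procedure). A generic 1-up/2-down staircase, which converges on the ~70.7%-correct point of the psychometric function, can be sketched against a simulated observer; the observer model and step size here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulated_observer(coherence, threshold=0.25):
    """P(correct) rises from chance (0.5) toward 1 with coherence."""
    p = 0.5 + 0.5 * (1 - np.exp(-coherence / threshold))
    return rng.random() < p

def staircase(n_trials=400, step=0.02):
    """1-up/2-down staircase: two correct -> harder, one error -> easier."""
    coherence, correct_streak, track = 0.5, 0, []
    for _ in range(n_trials):
        if simulated_observer(coherence):
            correct_streak += 1
            if correct_streak == 2:
                coherence = max(coherence - step, 0.0)   # make task harder
                correct_streak = 0
        else:
            coherence = min(coherence + step, 1.0)       # make task easier
            correct_streak = 0
        track.append(coherence)
    return float(np.mean(track[-100:]))  # threshold = mean of late trials

est = staircase()
```

Running the staircase separately per condition (body position x motion direction x velocity) would yield the per-condition MCTs that the study compares.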
Perception of linear acceleration in weightlessness
NASA Technical Reports Server (NTRS)
Arrott, A. P.; Young, L. R.
1987-01-01
Eye movements and subjective detection of acceleration were measured on human experimental subjects during vestibular sled acceleration during the D1 Spacelab Mission. Methods and results are reported on the time to detection of small acceleration steps, the threshold for detection of linear acceleration, perceived motion path, and CLOAT. A consistently shorter time to detection of small acceleration steps is found. Subjective reports of perceived motion during sinusoidal oscillation in weightlessness were qualitatively similar to reports on earth.
Neural correlates of coherent and biological motion perception in autism.
Koldewyn, Kami; Whitney, David; Rivera, Susan M
2011-09-01
Recent evidence suggests those with autism may be generally impaired in visual motion perception. To examine this, we investigated both coherent and biological motion processing in adolescents with autism employing both psychophysical and fMRI methods. Those with autism performed as well as matched controls during coherent motion perception but had significantly higher thresholds for biological motion perception. The autism group showed reduced posterior Superior Temporal Sulcus (pSTS), parietal and frontal activity during a biological motion task while showing similar levels of activity in MT+/V5 during both coherent and biological motion trials. Activity in MT+/V5 was predictive of individual coherent motion thresholds in both groups. Activity in dorsolateral prefrontal cortex (DLPFC) and pSTS was predictive of biological motion thresholds in control participants but not in those with autism. Notably, however, activity in DLPFC was negatively related to autism symptom severity. These results suggest that impairments in higher-order social or attentional networks may underlie visual motion deficits observed in autism.
Pavan, Andrea; Ghin, Filippo; Donato, Rita; Campana, Gianluca; Mather, George
2017-08-15
A long-held view of the visual system is that form and motion are analysed independently. However, there is physiological and psychophysical evidence of early interaction in the processing of form and motion. In this study, we used a combination of Glass patterns (GPs) and repetitive transcranial magnetic stimulation (rTMS) to investigate, in human observers, the neural mechanisms underlying form-motion integration. GPs consist of randomly distributed dot pairs (dipoles) that induce the percept of an oriented stimulus. GPs can be either static or dynamic. Dynamic GPs have both a form component (i.e., orientation) and a non-directional motion component along the orientation axis. GPs were presented in two temporal intervals, and observers were asked to discriminate the temporal interval containing the most coherent GP. rTMS was delivered over the early visual areas (V1/V2) and over area V5/MT shortly after the presentation of the GP in each interval. The results showed that rTMS applied over the early visual areas affected the perception of static GPs, whereas stimulation of area V5/MT did not affect observers' performance. On the other hand, rTMS delivered over either V1/V2 or V5/MT strongly impaired the perception of dynamic GPs. These results suggest that the early visual areas seem to be involved in processing the spatial structure of GPs, and that interfering with the extraction of the global spatial structure also affects the extraction of the motion component, possibly disrupting early form-motion integration. Visual area V5/MT, however, is likely to be involved only in processing the motion component of dynamic GPs. These results suggest that motion and form cues may interact as early as V1/V2.
The processing of social stimuli in early infancy: from faces to biological motion perception.
Simion, Francesca; Di Giorgio, Elisa; Leo, Irene; Bardi, Lara
2011-01-01
There are several lines of evidence which suggest that, from birth, the human system detects social agents on the basis of at least two properties: the presence of a face and the way they move. This chapter reviews the infant research on the origin of brain specialization for social stimuli and on the role of innate mechanisms and perceptual experience in shaping the development of the social brain. Two lines of convergent evidence, on face detection and on biological motion detection, will be presented to demonstrate the innate predispositions of the human system to detect social stimuli at birth. As for face detection, experiments will be presented to demonstrate that, by virtue of nonspecific attentional biases, a very coarse template of faces becomes active at birth. As for biological motion detection, studies will be presented to demonstrate that, from birth, the human system is able to detect social stimuli on the basis of properties such as the presence of semi-rigid motion, known as biological motion. Overall, the empirical evidence converges in supporting the notion that the human system begins life broadly tuned to detect social stimuli and that progressive specialization narrows the system for social stimuli as a function of experience.
Dynamical evolution of motion perception.
Kanai, Ryota; Sheth, Bhavin R; Shimojo, Shinsuke
2007-03-01
Motion is defined as a sequence of positional changes over time. However, in perception, spatial position and motion dynamically interact with each other. This reciprocal interaction suggests that the percept of a moving object may itself dynamically evolve following the onset of motion. Here, we show evidence that the percept of a moving object systematically changes over time. In our experiments, we introduced a transient gap in the motion sequence or a brief change in some feature (e.g., color or shape) of an otherwise smoothly moving target stimulus. Observers were highly sensitive to the gap or transient change if it occurred soon after motion onset (≤200 ms), but significantly less so if it occurred later (≥300 ms). Our findings suggest that the moving stimulus is initially perceived as a time series of discrete, potentially isolatable frames; the later failure to perceive change suggests that, over time, the stimulus begins to be perceived as a single, indivisible gestalt integrated over space as well as time, which could well be the signature of an emergent stable motion percept.
Chakraborty, Arijit; Anstice, Nicola S.; Jacobs, Robert J.; Paudel, Nabin; LaGasse, Linda L.; Lester, Barry M.; McKinlay, Christopher J. D.; Harding, Jane E.; Wouldes, Trecia A.; Thompson, Benjamin
2017-01-01
Global motion perception is often used as an index of dorsal visual stream function in neurodevelopmental studies. However, the relationship between global motion perception and visuomotor control, a primary function of the dorsal stream, is unclear. We measured global motion perception (motion coherence threshold; MCT) and performance on standardized measures of motor function in 606 4.5-year-old children born at risk of abnormal neurodevelopment. Visual acuity, stereoacuity and verbal IQ were also assessed. After adjustment for verbal IQ or both visual acuity and stereoacuity, MCT was modestly, but significantly, associated with all components of motor function with the exception of gross motor scores. In a separate analysis, stereoacuity, but not visual acuity, was significantly associated with both gross and fine motor scores. These results indicate that the development of motion perception and stereoacuity are associated with motor function in pre-school children. PMID:28435122
Smelling directions: Olfaction modulates ambiguous visual motion perception
Kuang, Shenbing; Zhang, Tao
2014-01-01
Smells are often accompanied by simultaneous visual sensations. Previous studies have documented enhanced olfactory performance in the concurrent presence of congruent color- or shape-related visual cues, and facilitated visual object perception when congruent smells are simultaneously present. These visual object-olfaction interactions suggest the existence of couplings between the olfactory pathway and the visual ventral processing stream. However, it is not known whether olfaction can modulate visual motion perception, a function that is related to the visual dorsal stream. We tested this possibility by examining the influence of olfactory cues on the perception of ambiguous visual motion signals. We showed that, after introducing an association between motion directions and olfactory cues, olfaction could indeed bias the perception of ambiguous visual motion. Our result that olfaction modulates visual motion processing adds to the current knowledge of cross-modal interactions and implies a possible functional linkage between the olfactory system and the visual dorsal pathway. PMID:25052162
The Vestibular System and Human Dynamic Space Orientation
NASA Technical Reports Server (NTRS)
Meiry, J. L.
1966-01-01
The motion sensors of the vestibular system are studied to determine their role in human dynamic space orientation and manual vehicle control. The investigation yielded control models for the sensors, descriptions of the subsystems for eye stabilization, and demonstrations of the effects of motion cues on closed-loop manual control. Experiments on the abilities of subjects to perceive a variety of linear motions provided data on the dynamic characteristics of the otoliths, the linear motion sensors. Angular acceleration threshold measurements supplemented knowledge of the semicircular canals, the angular motion sensors. Mathematical models are presented to describe the known control characteristics of the vestibular sensors, relating subjective perception of motion to objective motion of a vehicle. The vestibular system, the neck rotation proprioceptors, and the visual system form part of the control system which maintains the eye stationary relative to a target or a reference. The contribution of each of these systems was identified through experiments involving head and body rotations about a vertical axis. Compensatory eye movements in response to neck rotation were demonstrated and their dynamic characteristics described by a lag-lead model. The eye motions attributable to neck rotations and vestibular stimulation obey superposition when both systems are active. Human operator compensatory tracking is investigated in a simple vehicle orientation control system with stable and unstable controlled elements. Control of vehicle orientation to a reference is simulated in three modes: visual, motion, and combined. Motion cues sensed by the vestibular system and through tactile sensation enable the operator to generate more lead compensation than in fixed-base simulation with only visual input. The tracking performance of the human operator in an unstable control system near the limits of controllability is shown to depend heavily upon the rate information provided by the vestibular sensors.
Corina, David P; Knapp, Heather Patterson
2008-12-01
In the quest to further understand the neural underpinning of human communication, researchers have turned to studies of naturally occurring signed languages used in Deaf communities. The comparison of the commonalities and differences between spoken and signed languages provides an opportunity to determine core neural systems responsible for linguistic communication independent of the modality in which a language is expressed. The present article examines such studies, and in addition asks what we can learn about human languages by contrasting formal visual-gestural linguistic systems (signed languages) with more general human action perception. To understand visual language perception, it is important to distinguish the demands of general human motion processing from the highly task-dependent demands associated with extracting linguistic meaning from arbitrary, conventionalized gestures. This endeavor is particularly important because theorists have suggested close homologies between perception and production of actions and functions of human language and social communication. We review recent behavioral, functional imaging, and neuropsychological studies that explore dissociations between the processing of human actions and signed languages. These data suggest incomplete overlap between the mirror-neuron systems proposed to mediate human action and language.
The effect of age upon the perception of 3-D shape from motion.
Norman, J Farley; Cheeseman, Jacob R; Pyles, Jessica; Baxter, Michael W; Thomason, Kelsey E; Calloway, Autum B
2013-12-18
Two experiments evaluated the ability of 50 older, middle-aged, and younger adults to discriminate the three-dimensional (3-D) shape of curved surfaces defined by optical motion. In Experiment 1, temporal correspondence was disrupted by limiting the lifetimes of the moving surface points. In order to discriminate 3-D surface shape reliably, the younger and middle-aged adults needed a surface point lifetime of approximately 4 views (in the apparent motion sequences). In contrast, the older adults needed a much longer surface point lifetime of approximately 9 views to reliably perform the same task. In Experiment 2, the negative effect of age upon 3-D shape discrimination from motion was replicated. In this experiment, however, the participants' abilities to discriminate grating orientation and speed were also assessed. Edden et al. (2009) have demonstrated that behavioral grating orientation discrimination correlates with GABA (gamma-aminobutyric acid) concentration in human visual cortex. Our results demonstrate that the negative effect of age upon 3-D shape perception from motion is not caused by impairments in the ability to perceive motion per se, but does correlate significantly with grating orientation discrimination. This result suggests that the age-related decline in 3-D shape discrimination from motion is related to a decline in GABA concentration in visual cortex.
The Default Mode Network Differentiates Biological From Non-Biological Motion.
Dayan, Eran; Sella, Irit; Mukovskiy, Albert; Douek, Yehonatan; Giese, Martin A; Malach, Rafael; Flash, Tamar
2016-01-01
The default mode network (DMN) has been implicated in an array of social-cognitive functions, including self-referential processing, theory of mind, and mentalizing. Yet the properties of the external stimuli that elicit DMN activity in relation to these domains remain unknown. Previous studies suggested that motion kinematics is utilized by the brain for social-cognitive processing. Here, we used functional MRI to examine whether the DMN is sensitive to parametric manipulations of observed motion kinematics. Preferential responses within core DMN structures differentiating non-biological from biological kinematics were observed for the motion of a realistic-looking, human-like avatar, but not for an abstract object devoid of human form. Differences in connectivity patterns during the observation of biological versus non-biological kinematics were also observed. Finally, the results suggest that the DMN is coupled more strongly with key nodes in the action observation network, namely the STS and the SMA, when the observed motion depicts human rather than abstract form. These findings are the first to implicate the DMN in the perception of biological motion. They may reflect the type of information used by the DMN in social-cognitive processing.
Gertz, Hanna; Hilger, Maximilian; Hegele, Mathias; Fiehler, Katja
2016-09-01
Previous studies have shown that beliefs about the human origin of a stimulus are capable of modulating the coupling of perception and action. Such beliefs can be based on top-down recognition of the identity of an actor or bottom-up observation of the behavior of the stimulus. Instructed human agency has been shown to lead to superior tracking performance of a moving dot as compared to instructed computer agency, especially when the dot followed a biological velocity profile and thus matched the predicted movement, whereas a violation of instructed human agency by a nonbiological dot motion impaired oculomotor tracking (Zwickel et al., 2012). This suggests that instructed agency biases the selection of predictive models of the movement trajectory of the dot. The aim of the present fMRI study was to examine the neural correlates of top-down and bottom-up modulations of perception-action couplings by manipulating the instructed agency (human action vs. computer-generated action) and the observable behavior of the stimulus (biological vs. nonbiological velocity profile). To this end, participants performed an oculomotor tracking task in an MRI environment. Oculomotor tracking activated areas of the eye movement network. A right-hemisphere occipito-temporal cluster comprising the motion-sensitive area V5 showed a preference for the biological as compared to the nonbiological velocity profile. Importantly, a mismatch between instructed human agency and a nonbiological velocity profile primarily activated medial-frontal areas comprising the frontal pole, the paracingulate gyrus, and the anterior cingulate gyrus, as well as the cerebellum and the supplementary eye field as part of the eye movement network. This mismatch effect was specific to the instructed human agency and did not occur in conditions with a mismatch between instructed computer agency and a biological velocity profile. Our results support the hypothesis that humans activate a specific predictive model for biological movements based on their own motor expertise. A violation of this predictive model incurs costs, as the movement needs to be corrected in accordance with incoming (nonbiological) sensory information.
Self-motion perception: assessment by real-time computer-generated animations
NASA Technical Reports Server (NTRS)
Parker, D. E.; Phillips, J. O.
2001-01-01
We report a new procedure for assessing complex self-motion perception. In three experiments, subjects manipulated a 6-degree-of-freedom magnetic-field tracker which controlled the motion of a virtual avatar so that its motion corresponded to the subjects' perceived self-motion. The real-time animation created by this procedure was stored using a virtual video recorder for subsequent analysis. Combined real and illusory self-motion and vestibulo-ocular reflex eye movements were evoked by cross-coupled angular accelerations produced by roll and pitch head movements during passive yaw rotation in a chair. Contrary to previous reports, illusory self-motion did not correspond to expectations based on semicircular canal stimulation. Illusory pitch head-motion directions were as predicted for only 37% of trials, whereas slow-phase eye movements were in the predicted direction for 98% of trials. The real-time computer-generated animation procedure permits the use of naive, untrained subjects who lack a vocabulary for reporting motion perception, and is applicable to basic self-motion perception studies, evaluation of motion simulators, assessment of balance disorders, and so on.
Visual event-related potentials to biological motion stimuli in autism spectrum disorders
Bletsch, Anke; Krick, Christoph; Siniatchkin, Michael; Jarczok, Tomasz A.; Freitag, Christine M.; Bender, Stephan
2014-01-01
Atypical visual processing of biological motion contributes to social impairments in autism spectrum disorders (ASD). However, the exact temporal sequence of deficits of cortical biological motion processing in ASD has not been studied to date. We used 64-channel electroencephalography to study event-related potentials associated with human motion perception in 17 children and adolescents with ASD and 21 typical controls. A spatio-temporal source analysis was performed to assess the brain structures involved in these processes. We expected altered activity already during early stimulus processing and reduced activity during subsequent biological motion-specific processes in ASD. In response to both random and biological motion, the P100 amplitude was decreased, suggesting unspecific deficits in visual processing, and the occipito-temporal N200 showed atypical lateralization in ASD, suggesting altered hemispheric specialization. A slow positive deflection after 400 ms, reflecting top-down processes, and human motion-specific dipole activation differed slightly between groups, with reduced and more diffuse activation in the ASD group. The latter could be an indicator of a disrupted neuronal network for biological motion processing in ASD. Furthermore, early visual processing (P100) seems to be correlated with biological motion-specific activation. This emphasizes the relevance of early sensory processing for higher-order processing deficits in ASD. PMID:23887808
Yu, Tzu-Ying; Jacobs, Robert J.; Anstice, Nicola S.; Paudel, Nabin; Harding, Jane E.; Thompson, Benjamin
2013-01-01
Purpose. We developed and validated a technique for measuring global motion perception in 2-year-old children, and assessed the relationship between global motion perception and other measures of visual function. Methods. Random dot kinematogram (RDK) stimuli were used to measure motion coherence thresholds in 366 children at risk of neurodevelopmental problems at 24 ± 1 months of age. RDKs of variable coherence were presented and eye movements were analyzed offline to grade the direction of the optokinetic reflex (OKR) for each trial. Motion coherence thresholds were calculated by fitting psychometric functions to the resulting datasets. Test–retest reliability was assessed in 15 children, and motion coherence thresholds were measured in a group of 10 adults using OKR and behavioral responses. Standard age-appropriate optometric tests also were performed. Results. Motion coherence thresholds were measured successfully in 336 (91.8%) children using the OKR technique, but only 31 (8.5%) using behavioral responses. The mean threshold was 41.7 ± 13.5% for 2-year-old children and 3.3 ± 1.2% for adults. Within-assessor reliability and test–retest reliability were high in children. Children's motion coherence thresholds were significantly correlated with stereoacuity (LANG I & II test, ρ = 0.29, P < 0.001; Frisby, ρ = 0.17, P = 0.022), but not with binocular visual acuity (ρ = 0.11, P = 0.07). In adults OKR and behavioral motion coherence thresholds were highly correlated (intraclass correlation = 0.81, P = 0.001). Conclusions. Global motion perception can be measured in 2-year-old children using the OKR. This technique is reliable and data from adults suggest that motion coherence thresholds based on the OKR are related to motion perception. Global motion perception was related to stereoacuity in children. PMID:24282224
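To make the threshold-fitting step concrete, here is a minimal sketch, not the study's actual analysis code, of fitting a cumulative-Gaussian psychometric function to per-coherence proportions of correctly graded OKR directions and reading off a motion coherence threshold. The coherence levels, the noiseless simulated observer, and the 75%-correct threshold criterion are all assumptions of the example:

```python
import math

def psychometric(c, mu, sigma):
    """Proportion of trials judged in the correct direction at coherence c:
    a cumulative Gaussian with a 50% guess rate (chance for a 2-way judgment),
    so mu is the coherence yielding 75% correct."""
    phi = 0.5 * (1.0 + math.erf((c - mu) / (sigma * math.sqrt(2.0))))
    return 0.5 + 0.5 * phi

def fit_threshold(coherences, proportions):
    """Dependency-free grid-search fit of (mu, sigma) by least squares."""
    best = (float("inf"), None, None)
    for mu10 in range(50, 800):          # mu from 5.0 to 79.9 in 0.1 steps
        mu = mu10 / 10.0
        for sig10 in range(10, 400, 5):  # sigma from 1.0 to 39.5 in 0.5 steps
            sigma = sig10 / 10.0
            sse = sum((psychometric(c, mu, sigma) - p) ** 2
                      for c, p in zip(coherences, proportions))
            if sse < best[0]:
                best = (sse, mu, sigma)
    return best[1], best[2]

# Hypothetical session: OKR direction gradings at several coherence levels,
# generated from a simulated observer with a 40% coherence threshold.
levels = [5, 10, 20, 30, 40, 50, 60, 80]
props = [psychometric(c, 40.0, 12.0) for c in levels]
threshold, slope = fit_threshold(levels, props)
print(round(threshold, 1))  # 40.0
```

With real data one would weight each level by its number of trials and use a proper optimizer; the grid search simply keeps the sketch self-contained.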
Modulation frequency as a cue for auditory speed perception.
Senna, Irene; Parise, Cesare V; Ernst, Marc O
2017-07-12
Unlike those in vision, the mechanisms underlying auditory motion perception are poorly understood. Here we describe an auditory motion illusion revealing a novel cue to auditory speed perception: the temporal frequency of amplitude modulation (AM-frequency), typical of rattling sounds. Naturally, corrugated objects sliding across each other generate rattling sounds whose AM-frequency tends to correlate directly with speed. We found that AM-frequency modulates auditory speed perception in a highly systematic fashion: moving sounds with higher AM-frequency are perceived as moving faster than sounds with lower AM-frequency. Even more interestingly, sounds with higher AM-frequency also induce stronger motion aftereffects. This reveals the existence of specialized neural mechanisms for auditory motion perception that are sensitive to AM-frequency. Thus, in spatial hearing, the brain successfully capitalizes on the AM-frequency of rattling sounds to estimate the speed of moving objects. This tightly parallels previous findings in motion vision, where the spatio-temporal frequency of moving displays systematically affects both speed perception and the magnitude of motion aftereffects. Such an analogy with vision suggests that motion detection may rely on canonical computations, with similar neural mechanisms shared across the different modalities.
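As an illustration of the cue itself (with hypothetical parameters, not taken from the study), the sketch below synthesizes a rattling-like sound as a noise carrier under a raised-cosine amplitude envelope, where the AM frequency stands in for sliding speed:

```python
import math
import random

def rattle(duration_s, am_freq_hz, sr=8000, seed=0):
    """Noise carrier under a raised-cosine amplitude envelope at am_freq_hz,
    mimicking a rattling sound whose AM rate covaries with sliding speed."""
    rng = random.Random(seed)
    n = int(duration_s * sr)
    return [(0.5 - 0.5 * math.cos(2 * math.pi * am_freq_hz * i / sr))
            * rng.uniform(-1.0, 1.0) for i in range(n)]

# Per the illusion, the 40 Hz AM sound should be judged as moving faster
# than the 10 Hz AM sound, even at identical physical speeds.
slow = rattle(1.0, am_freq_hz=10)
fast = rattle(1.0, am_freq_hz=40)
```

The sample rate and AM frequencies here are illustrative choices only.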
NASA Astrophysics Data System (ADS)
Lu, Zhong-Lin; Sperling, George
2002-10-01
Two theories are considered to account for the perception of motion of depth-defined objects in random-dot stereograms (stereomotion). In the Lu-Sperling three-motion-systems theory [J. Opt. Soc. Am. A 18, 2331 (2001)], stereomotion is perceived by the third-order motion system, which detects the motion of areas defined as figure (versus ground) in a salience map. Alternatively, in his comment [J. Opt. Soc. Am. A 19, 2142 (2002)], Patterson proposes a low-level motion-energy system dedicated to stereo depth. The critical difference between these theories is the preprocessing (figure-ground assignment based on depth and other cues versus simply stereo depth) rather than the motion-detection algorithm itself (because the motion-extraction algorithm for third-order motion is undetermined). Furthermore, the ability of observers to perceive motion in alternating-feature displays, in which stereo depth alternates with other features such as texture orientation, indicates that the third-order motion system can perceive stereomotion. This reduces the stereomotion question to: is it third-order alone, or third-order plus dedicated depth-motion processing? Two new experiments intended to support the dedicated depth-motion processing theory are shown here to be perfectly accounted for by third-order motion, as are many older experiments that have previously been shown to be consistent with third-order motion. Cyclopean and rivalry images are shown to be a likely confound in stereomotion studies, rivalry motion being as strong as stereomotion. The phase dependence of superimposed same-direction stereomotion stimuli, rivalry stimuli, and isoluminant color stimuli indicates that these stimuli are processed in the same (third-order) motion system. The phase-dependence paradigm [Lu and Sperling, Vision Res. 35, 2697 (1995)] ultimately can resolve the question of which types of signals share a single motion detector. All the evidence accumulated so far is consistent with the three-motion-systems theory.
Binocular eye movement control and motion perception: what is being tracked?
van der Steen, Johannes; Dits, Joyce
2012-10-19
We investigated under what conditions humans can make independent slow-phase eye movements. The ability to make independent movements of the two eyes generally is attributed to a few specialized lateral-eyed animal species, for example chameleons. In our study, we showed that humans also can move the eyes in different directions: to maintain binocular retinal correspondence, independent slow-phase movements of each eye are produced. We used the scleral search coil method to measure binocular eye movements in response to dichoptically viewed visual stimuli oscillating in orthogonal directions. Correlated stimuli led to orthogonal slow eye movements, while the binocularly perceived motion was the vector sum of the motion presented to each eye. The importance of binocular fusion for the independence of the movements of the two eyes was investigated with anti-correlated stimuli. The global motion pattern of anti-correlated dichoptic stimuli was perceived as an oblique oscillatory motion and also resulted in a conjugate oblique motion of the eyes. We propose that the ability to make independent slow-phase eye movements in humans is used to maintain binocular retinal correspondence. Eye-of-origin and binocular information are used during the processing of binocular visual information, and it is decided at an early stage whether binocular or monocular motion information and independent slow-phase movements of each eye are produced during binocular tracking. PMID:22997286
The upper spatial limit for perception of displacement is affected by preceding motion.
Stefanova, Miroslava; Mateeff, Stefan; Hohnsbein, Joachim
2009-03-01
The upper spatial limit D(max) for perception of apparent motion of a random-dot pattern may be strongly affected by another, collinear, motion that precedes it [Mateeff, S., Stefanova, M., & Hohnsbein, J. (2007). Perceived global direction of a compound of real and apparent motion. Vision Research, 47, 1455-1463]. In the present study this phenomenon was studied with two-dimensional motion stimuli. A random-dot pattern moved alternately in the vertical and oblique direction (zig-zag motion). The vertical motion was 1.04 degrees in length; it was produced by three discrete spatial steps of the dots. Thereafter the dots were displaced by a single spatial step in the oblique direction. Each motion lasted for 57 ms. The upper spatial limit for perception of the oblique motion was measured under two conditions: the vertical component of the oblique motion and the vertical motion were either in the same or in opposite directions. It was found that the perception of the oblique motion was strongly influenced by the relative direction of the vertical motion that preceded it: in the "same" condition the upper spatial limit was much smaller than in the "opposite" condition. Decreasing the speed of the vertical motion reversed this effect. Interpretations based on networks of motion detectors and on Gestalt theory are discussed.
Influence of visual path information on human heading perception during rotation.
Li, Li; Chen, Jing; Peng, Xiaozhe
2009-03-31
How does visual path information influence people's perception of their instantaneous direction of self-motion (heading)? We have previously shown that humans can perceive heading without direct access to visual path information. Here we vary two key parameters for estimating heading from optic flow, the field of view (FOV) and the depth range of environmental points, to investigate the conditions under which visual path information influences human heading perception. The display simulated an observer traveling on a circular path. Observers used a joystick to rotate their line of sight until deemed aligned with true heading. Four FOV sizes (110 x 94 degrees, 48 x 41 degrees, 16 x 14 degrees, 8 x 7 degrees) and depth ranges (6-50 m, 6-25 m, 6-12.5 m, 6-9 m) were tested. Consistent with our computational modeling results, heading bias increased with the reduction of FOV or depth range when the display provided a sequence of velocity fields but no direct path information. When the display provided path information, heading bias was not influenced as much by the reduction of FOV or depth range. We conclude that human heading and path perception involve separate visual processes. Path helps heading perception when the display does not contain enough optic-flow information for heading estimation during rotation.
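The heading-from-optic-flow computation this study builds on can be illustrated with a minimal sketch for the pure-translation case: every flow vector points radially away from the focus of expansion (FOE, the heading point), so the FOE can be recovered by least squares. The rotational flow component present during travel on a circular path, and all numeric values below, are assumptions of this toy example rather than the authors' model:

```python
import random

def estimate_foe(points, flows):
    """Least-squares focus of expansion: each flow vector (vx, vy) at image
    point (x, y) must lie on the line through the FOE (x0, y0), i.e.
    vy*x0 - vx*y0 = vy*x - vx*y. Solve the stacked system via 2x2
    normal equations."""
    a11 = a12 = a22 = r1 = r2 = 0.0
    for (x, y), (vx, vy) in zip(points, flows):
        a, b, c = vy, -vx, vy * x - vx * y
        a11 += a * a; a12 += a * b; a22 += b * b
        r1 += a * c; r2 += b * c
    det = a11 * a22 - a12 * a12
    x0 = (a22 * r1 - a12 * r2) / det
    y0 = (a11 * r2 - a12 * r1) / det
    return x0, y0

# Synthetic forward travel: radial flow about a heading point at (3, 2),
# scaled by inverse depth (nearer points move faster).
random.seed(1)
foe = (3.0, 2.0)
pts, vecs = [], []
for _ in range(200):
    x, y = random.uniform(-20, 20), random.uniform(-15, 15)
    inv_depth = 1.0 / random.uniform(6.0, 50.0)  # depth range echoing the study
    pts.append((x, y))
    vecs.append(((x - foe[0]) * inv_depth, (y - foe[1]) * inv_depth))
print(tuple(round(v, 1) for v in estimate_foe(pts, vecs)))  # (3.0, 2.0)
```

Because the synthetic flow is noiseless and purely translational, the estimate is exact; adding the rotational field of a curved path is what makes the problem studied here hard.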
IQ Predicts Biological Motion Perception in Autism Spectrum Disorders
ERIC Educational Resources Information Center
Rutherford, M. D.; Troje, Nikolaus F.
2012-01-01
Biological motion is easily perceived by neurotypical observers when encoded in point-light displays. Some but not all relevant research shows significant deficits in biological motion perception among those with ASD, especially with respect to emotional displays. We tested adults with and without ASD on the perception of masked biological motion…
Neural mechanisms underlying sound-induced visual motion perception: An fMRI study.
Hidaka, Souta; Higuchi, Satomi; Teramoto, Wataru; Sugita, Yoichi
2017-07-01
Studies of crossmodal interactions in motion perception have reported activation in several brain areas, including those related to motion processing and/or sensory association, in response to multimodal (e.g., visual and auditory) stimuli that were both in motion. Recent studies have demonstrated that sounds can trigger illusory visual apparent motion in static visual stimuli (sound-induced visual motion: SIVM): a visual stimulus blinking at a fixed location is perceived to be moving laterally when an alternating left-right sound is also present. Here, we investigated brain activity related to the perception of SIVM using a 7T functional magnetic resonance imaging technique. Specifically, we focused on the patterns of neural activity in SIVM and visually induced visual apparent motion (VIVM). We observed shared activations in the middle occipital area (V5/hMT), which is thought to be involved in visual motion processing, for SIVM and VIVM. Moreover, as compared to VIVM, SIVM resulted in greater activation in the superior temporal area and dominant functional connectivity between the V5/hMT area and areas related to auditory and crossmodal motion processing. These findings indicate that similar but partially different neural mechanisms could be involved in auditorily induced and visually induced motion perception, and that neural signals in auditory, visual, and crossmodal motion processing areas closely and directly interact in the perception of SIVM.
NASA Technical Reports Server (NTRS)
Beaton, K. H.; Holly, J. E.; Clement, G. R.; Wood, S. J.
2011-01-01
The neural mechanisms to resolve ambiguous tilt-translation motion have been hypothesized to be different for motion perception and eye movements. Previous studies have demonstrated differences in ocular and perceptual responses using a variety of motion paradigms, including Off-Vertical Axis Rotation (OVAR), Variable Radius Centrifugation (VRC), translation along a linear track, and tilt about an Earth-horizontal axis. While the linear acceleration across these motion paradigms is presumably equivalent, there are important differences in semicircular canal cues. The purpose of this study was to compare translation motion perception and horizontal slow phase velocity to quantify consistencies, or lack thereof, across four different motion paradigms. Twelve healthy subjects were exposed to sinusoidal interaural linear acceleration between 0.01 and 0.6 Hz at 1.7 m/s/s (equivalent to 10 tilt) using OVAR, VRC, roll tilt, and lateral translation. During each trial, subjects verbally reported the amount of perceived peak-to-peak lateral translation and indicated the direction of motion with a joystick. Binocular eye movements were recorded using video-oculography. In general, the gain of translation perception (ratio of reported linear displacement to equivalent linear stimulus displacement) increased with stimulus frequency, while the phase did not significantly vary. However, translation perception was more pronounced during both VRC and lateral translation involving actual translation, whereas perceptions were less consistent and more variable during OVAR and roll tilt which did not involve actual translation. For each motion paradigm, horizontal eye movements were negligible at low frequencies and showed phase lead relative to the linear stimulus. At higher frequencies, the gain of the eye movements increased and became more inphase with the acceleration stimulus. 
While these results are consistent with the hypothesis that the neural computational strategies for motion perception and eye movements differ, they also indicate that the specific motion platform employed can have a significant effect on both the amplitude and phase of each.
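The translation-perception gain defined in this record (reported peak-to-peak displacement divided by the displacement equivalent to the sinusoidal acceleration stimulus) can be sketched as follows; the function names and example numbers are illustrative, not taken from the study.

```python
import math

def equivalent_pp_displacement(peak_acc_ms2, freq_hz):
    """Peak-to-peak displacement (m) of a sinusoidal translation with peak
    acceleration peak_acc_ms2 (m/s^2) at freq_hz: amplitude = a / (2*pi*f)**2."""
    omega = 2.0 * math.pi * freq_hz
    return 2.0 * peak_acc_ms2 / omega ** 2

def perception_gain(reported_pp_m, peak_acc_ms2, freq_hz):
    """Gain = reported linear displacement / equivalent stimulus displacement."""
    return reported_pp_m / equivalent_pp_displacement(peak_acc_ms2, freq_hz)
```

Because the equivalent displacement shrinks with the square of frequency at fixed peak acceleration (1.7 m/s² here), a given reported displacement corresponds to a larger gain at higher frequencies, consistent with the frequency trend the record reports.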
Autogenic-feedback training - A treatment for motion and space sickness
NASA Technical Reports Server (NTRS)
Cowings, Patricia S.
1990-01-01
A training method for preventing the occurrence of motion sickness in humans, called autogenic-feedback training (AFT), is described. AFT is based on a combination of biofeedback and autogenic therapy which involves training physiological self-regulation as an alternative to pharmacological management. AFT was used to reliably increase tolerance to motion-sickness-inducing tests in both men and women ranging in age from 18 to 54 years. The effectiveness of AFT is found to be significantly higher than that of protective adaptation training. Data obtained show that there is no apparent effect from AFT on measures of vestibular perception and no side effects.
Coherent modulation of stimulus colour can affect visually induced self-motion perception.
Nakamura, Shinji; Seno, Takeharu; Ito, Hiroyuki; Sunaga, Shoji
2010-01-01
The effects of dynamic colour modulation on vection were investigated to examine whether perceived variation of illumination affects self-motion perception. Participants observed expanding optic flow which simulated their forward self-motion. Onset latency, accumulated duration, and estimated magnitude of the self-motion were measured as indices of vection strength. Colour of the dots in the visual stimulus was modulated between white and red (experiment 1), white and grey (experiment 2), and grey and red (experiment 3). The results indicated that coherent colour oscillation in the visual stimulus significantly suppressed the strength of vection, whereas incoherent or static colour modulation did not affect vection. There was no effect of the type of colour modulation; both achromatic and chromatic modulations turned out to be effective in inhibiting self-motion perception. Moreover, in a situation where the simulated direction of a spotlight was manipulated dynamically, vection strength was also suppressed (experiment 4). These results suggest that the observer's perception of illumination is critical for self-motion perception, and that rapid variation of perceived illumination impairs the reliability of visual information in determining self-motion.
Contextual effects on motion perception and smooth pursuit eye movements.
Spering, Miriam; Gegenfurtner, Karl R
2008-08-15
Smooth pursuit eye movements are continuous, slow rotations of the eyes that allow us to follow the motion of a visual object of interest. These movements are closely related to sensory inputs from the visual motion processing system. To track a moving object in the natural environment, its motion first has to be segregated from the motion signals provided by surrounding stimuli. Here, we review experiments on the effect of the visual context on motion processing with a focus on the relationship between motion perception and smooth pursuit eye movements. While perception and pursuit are closely linked, we show that they can behave quite distinctly when required by the visual context.
Transformation-aware perceptual image metric
NASA Astrophysics Data System (ADS)
Kellnhofer, Petr; Ritschel, Tobias; Myszkowski, Karol; Seidel, Hans-Peter
2016-09-01
Predicting human visual perception has several applications such as compression, rendering, editing, and retargeting. Current approaches, however, ignore the fact that the human visual system compensates for geometric transformations, e.g., we see that an image and a rotated copy are identical. Instead, they will report a large, false-positive difference. At the same time, if the transformations become too strong or too spatially incoherent, comparing two images gets increasingly difficult. Between these two extrema, we propose a system to quantify the effect of transformations, not only on the perception of image differences but also on saliency and motion parallax. To this end, we first fit local homographies to a given optical flow field, and then convert this field into a field of elementary transformations, such as translation, rotation, scaling, and perspective. We conduct a perceptual experiment quantifying the increase of difficulty when compensating for elementary transformations. Transformation entropy is proposed as a measure of complexity in a flow field. This representation is then used for applications, such as comparison of nonaligned images, where transformations cause threshold elevation, detection of salient transformations, and a model of perceived motion parallax. Applications of our approach are a perceptual level-of-detail for real-time rendering and viewpoint selection based on perceived motion parallax.
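The "field of elementary transformations" step described above can be sketched in simplified form. The paper fits full homographies to the optical flow; the sketch below assumes the local fit has been restricted to a 2-D similarity transform [[a, -b, tx], [b, a, ty]], and all names are mine, not the authors'.

```python
import math

def decompose_similarity(a, b, tx, ty):
    """Split the 2-D similarity transform [[a, -b, tx], [b, a, ty]] into
    elementary parts: translation, rotation angle (radians), uniform scale."""
    return {
        "translation": (tx, ty),
        "rotation": math.atan2(b, a),  # angle of the rotation component
        "scale": math.hypot(a, b),     # uniform scale factor
    }
```

A full implementation would additionally extract shear and perspective terms from the fitted homography before measuring the entropy of the resulting transformation field.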
Color and luminance in the perception of 1- and 2-dimensional motion.
Farell, B
1999-08-01
An isoluminant color grating usually appears to move more slowly than a luminance grating that has the same physical speed. Yet a grating defined by both color and luminance is seen as perceptually unified and moving at a single intermediate speed. In experiments measuring perceived speed and direction, it was found that color- and luminance-based motion signals are combined differently in the perception of 1-D motion than they are in the perception of 2-D motion. Adding color to a moving 1-D luminance pattern, a grating, slows its perceived speed. Adding color to a moving 2-D luminance pattern, a plaid made of orthogonal gratings, leaves its perceived speed unchanged. Analogous results occur for the perception of the direction of 2-D motion. The visual system appears to discount color when analyzing the motion of luminance-bearing 2-D patterns. This strategy has adaptive advantages, making the sensing of object motion more veridical without sacrificing the ability to see motion at isoluminance.
Chakraborty, Arijit; Anstice, Nicola S; Jacobs, Robert J; Paudel, Nabin; LaGasse, Linda L; Lester, Barry M; McKinlay, Christopher J D; Harding, Jane E; Wouldes, Trecia A; Thompson, Benjamin
2017-06-01
Global motion perception is often used as an index of dorsal visual stream function in neurodevelopmental studies. However, the relationship between global motion perception and visuomotor control, a primary function of the dorsal stream, is unclear. We measured global motion perception (motion coherence threshold; MCT) and performance on standardized measures of motor function in 606 4.5-year-old children born at risk of abnormal neurodevelopment. Visual acuity, stereoacuity and verbal IQ were also assessed. After adjustment for verbal IQ or both visual acuity and stereoacuity, MCT was modestly, but significantly, associated with all components of motor function with the exception of fine motor scores. In a separate analysis, stereoacuity, but not visual acuity, was significantly associated with both gross and fine motor scores. These results indicate that the development of motion perception and stereoacuity are associated with motor function in pre-school children. Copyright © 2017 Elsevier Ltd. All rights reserved.
Deficient motion-defined and texture-defined figure-ground segregation in amblyopic children.
Wang, Jane; Ho, Cindy S; Giaschi, Deborah E
2007-01-01
Motion-defined form deficits in the fellow eye and the amblyopic eye of children with amblyopia implicate possible direction-selective motion processing or static figure-ground segregation deficits. Deficient motion-defined form perception in the fellow eye of amblyopic children may not be fully accounted for by a general motion processing deficit. This study investigates the contribution of figure-ground segregation deficits to the motion-defined form perception deficits in amblyopia. Performances of 6 amblyopic children (5 anisometropic, 1 anisostrabismic) and 32 control children with normal vision were assessed on motion-defined form, texture-defined form, and global motion tasks. Performance on motion-defined and texture-defined form tasks was significantly worse in amblyopic children than in control children. Performance on global motion tasks was not significantly different between the 2 groups. Faulty figure-ground segregation mechanisms are likely responsible for the observed motion-defined form perception deficits in amblyopia.
Neural network architecture for form and motion perception (Abstract Only)
NASA Astrophysics Data System (ADS)
Grossberg, Stephen
1991-08-01
Evidence is given for a new neural network theory of biological motion perception, a motion boundary contour system. This theory clarifies why parallel streams V1 → V2 and V1 → MT exist for static form and motion form processing among the areas V1, V2, and MT of visual cortex. The motion boundary contour system consists of several parallel copies, such that each copy is activated by a different range of receptive field sizes. Each copy is further subdivided into two hierarchically organized subsystems: a motion oriented contrast (MOC) filter, for preprocessing moving images; and a cooperative-competitive feedback (CC) loop, for generating emergent boundary segmentations of the filtered signals. The present work uses the MOC filter to explain a variety of classical and recent data about short-range and long-range apparent motion percepts that have not yet been explained by alternative models. These data include split motion; reverse-contrast gamma motion; delta motion; visual inertia; group motion in response to a reverse-contrast Ternus display at short interstimulus intervals; speed-up of motion velocity as interflash distance increases or flash duration decreases; dependence of the transition from element motion to group motion on stimulus duration and size; various classical dependencies between flash duration, spatial separation, interstimulus interval, and motion threshold known as Korte's Laws; and dependence of motion strength on stimulus orientation and spatial frequency.
These results supplement earlier explanations by the model of apparent motion data that other models have not explained; a recent proposed solution of the global aperture problem including explanations of motion capture and induced motion; an explanation of how parallel cortical systems for static form perception and motion form perception may develop, including a demonstration that these parallel systems are variations on a common cortical design; an explanation of why the geometries of static form and motion form differ, in particular why opposite orientations differ by 90°, whereas opposite directions differ by 180°, and why a cortical stream V1 → V2 → MT is needed; and a summary of how the main properties of other motion perception models can be assimilated into different parts of the motion boundary contour system design.
Embodied learning of a generative neural model for biological motion perception and inference
Schrodt, Fabian; Layher, Georg; Neumann, Heiko; Butz, Martin V.
2015-01-01
Although an action observation network and mirror neurons for understanding the actions and intentions of others have been under deep, interdisciplinary consideration over recent years, it remains largely unknown how the brain manages to map visually perceived biological motion of others onto its own motor system. This paper shows how such a mapping may be established, even if the biological motion is visually perceived from a new vantage point. We introduce a learning artificial neural network model and evaluate it on full body motion tracking recordings. The model implements an embodied, predictive inference approach. It first learns to correlate and segment multimodal sensory streams of own bodily motion. In doing so, it becomes able to anticipate motion progression, to complete missing modal information, and to self-generate learned motion sequences. When biological motion of another person is observed, this self-knowledge is utilized to recognize similar motion patterns and predict their progress. Due to the relative encodings, the model shows strong robustness in recognition despite observing rather large varieties of body morphology and posture dynamics. By additionally equipping the model with the capability to rotate its visual frame of reference, it is able to deduce the visual perspective onto the observed person, establishing full consistency to the embodied self-motion encodings by means of active inference. In further support of its neuro-cognitive plausibility, we also model typical bistable perceptions when crucial depth information is missing. In sum, the introduced neural model proposes a solution to the problem of how the human brain may establish correspondence between observed bodily motion and its own motor system, thus offering a mechanism that supports the development of mirror neurons. PMID:26217215
Pettorossi, V E; Panichi, R; Botti, F M; Kyriakareli, A; Ferraresi, A; Faralli, M; Schieppati, M; Bronstein, A M
2013-04-01
Self-motion perception and the vestibulo-ocular reflex (VOR) were investigated in healthy subjects during asymmetric whole body yaw plane oscillations while standing on a platform in the dark. Platform oscillation consisted of two half-sinusoidal cycles of the same amplitude (40°) but different duration, featuring a fast (FHC) and a slow half-cycle (SHC). Rotation consisted of four or 20 consecutive cycles to probe adaptation further with the longer duration protocol. Self-motion perception was estimated by subjects tracking with a pointer the remembered position of an earth-fixed visual target. VOR was measured by electro-oculography. The asymmetric stimulation pattern consistently induced a progressive increase of asymmetry in motion perception, whereby the gain of the tracking response gradually increased during FHCs and decreased during SHCs. The effect was observed already during the first few cycles and further increased during 20 cycles, leading to a totally distorted location of the initial straight-ahead. In contrast, after some initial interindividual variability, the gain of the slow phase VOR became symmetric, decreasing for FHCs and increasing for SHCs. These oppositely directed adaptive effects in motion perception and VOR persisted for nearly an hour. Control conditions using prolonged but symmetrical stimuli produced no adaptive effects on either motion perception or VOR. These findings show that prolonged asymmetric activation of the vestibular system leads to opposite patterns of adaptation of self-motion perception and VOR. The results provide strong evidence that semicircular canal inputs are processed centrally by independent mechanisms for perception of body motion and eye movement control. These divergent adaptation mechanisms enhance awareness of movement toward the faster body rotation, while improving the eye stabilizing properties of the VOR.
Destephe, Matthieu; Brandao, Martim; Kishi, Tatsuhiro; Zecca, Massimiliano; Hashimoto, Kenji; Takanishi, Atsuo
2015-01-01
The Uncanny valley hypothesis, which tells us that almost-human characteristics in a robot or a device could cause uneasiness in human observers, is an important research theme in the Human Robot Interaction (HRI) field. Yet, that phenomenon is still not well understood. Many have investigated the external design of humanoid robot faces and bodies, but only a few studies have focused on the influence of robot movements on our perception and feelings of the Uncanny valley. Moreover, no research has investigated the possible relation between our feeling of uneasiness and whether or not we would accept robots having a job in an office, a hospital or elsewhere. To better understand the Uncanny valley, we explore several factors which might have an influence on our perception of robots, be it related to the subjects, such as culture or attitude toward robots, or related to the robot, such as emotions and emotional intensity displayed in its motion. We asked 69 subjects (N = 69) to rate the motions of a humanoid robot (Perceived Humanity, Eeriness, and Attractiveness) and state where they would rather see the robot performing a task. Our results suggest that, among the factors we chose to test, the attitude toward robots is the main influence on the perception of the robot related to the Uncanny valley. Robot occupation acceptability was affected only by Attractiveness, mitigating any Uncanny valley effect. We discuss the implications of these findings for the Uncanny valley and the acceptability of a robotic worker in our society.
Criterion-free measurement of motion transparency perception at different speeds
Rocchi, Francesca; Ledgeway, Timothy; Webb, Ben S.
2018-01-01
Transparency perception often occurs when objects within the visual scene partially occlude each other or move at the same time, at different velocities across the same spatial region. Although transparent motion perception has been extensively studied, we still do not understand how the distribution of velocities within a visual scene contributes to transparent perception. Here we use a novel psychophysical procedure to characterize the distribution of velocities in a scene that give rise to transparent motion perception. To prevent participants from adopting a subjective decision criterion when discriminating transparent motion, we used an “odd-one-out,” three-alternative forced-choice procedure. Two intervals contained the standard—a random-dot-kinematogram with dot speeds or directions sampled from a uniform distribution. The other interval contained the comparison—speeds or directions sampled from a distribution with the same range as the standard, but with a notch of different widths removed. Our results suggest that transparent motion perception is driven primarily by relatively slow speeds, and does not emerge when only very fast speeds are present within a visual scene. Transparent perception of moving surfaces is modulated by stimulus-based characteristics, such as the separation between the means of the overlapping distributions or the range of speeds presented within an image. Our work illustrates the utility of using objective, forced-choice methods to reveal the mechanisms underlying motion transparency perception. PMID:29614154
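The comparison stimulus described above, a uniform speed distribution with a notch removed, can be sketched with rejection sampling; the parameter values and names below are illustrative, not those used in the study.

```python
import random

def sample_speeds(n, lo, hi, notch=None, rng=None):
    """Draw n dot speeds uniformly from [lo, hi]; if notch=(a, b) is given,
    reject and resample any draw that falls inside the removed interval."""
    rng = rng or random.Random(0)
    speeds = []
    while len(speeds) < n:
        s = rng.uniform(lo, hi)
        if notch is not None and notch[0] < s < notch[1]:
            continue  # speed falls inside the notch: resample
        speeds.append(s)
    return speeds

# Standard interval: full uniform range; comparison: same range, notch removed.
standard = sample_speeds(100, 1.0, 8.0)
comparison = sample_speeds(100, 1.0, 8.0, notch=(3.5, 5.5))
```

Widening the notch increases the separation between the two speed clusters in the comparison interval, which is the stimulus dimension the study manipulates.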
Buchanan, John J
2016-01-01
The primary goal of this chapter is to merge together the visual perception perspective of observational learning and the coordination dynamics theory of pattern formation in perception and action. Emphasis is placed on identifying movement features that constrain and inform action-perception and action-production processes. Two sources of visual information are examined, relative motion direction and relative phase. The visual perception perspective states that the topological features of relative motion between limbs and joints remain invariant across an actor's motion and therefore are available for pickup by an observer. Relative phase has been put forth as an informational variable that links perception to action within the coordination dynamics theory. A primary assumption of the coordination dynamics approach is that environmental information is meaningful only in terms of the behavior it modifies. Across a series of single limb tasks and bimanual tasks it is shown that the relative motion and relative phase between limbs and joints is picked up through visual processes and supports observational learning of motor skills. Moreover, internal estimations of motor skill proficiency and competency are linked to the informational content found in relative motion and relative phase. Thus, the chapter links action to perception and vice versa and also links cognitive evaluations to the coordination dynamics that support action-perception and action-production processes.
Fetsch, Christopher R; Deangelis, Gregory C; Angelaki, Dora E
2010-05-01
The perception of self-motion is crucial for navigation, spatial orientation and motor control. In particular, estimation of one's direction of translation, or heading, relies heavily on multisensory integration in most natural situations. Visual and nonvisual (e.g., vestibular) information can be used to judge heading, but each modality alone is often insufficient for accurate performance. It is not surprising, then, that visual and vestibular signals converge frequently in the nervous system, and that these signals interact in powerful ways at the level of behavior and perception. Early behavioral studies of visual-vestibular interactions consisted mainly of descriptive accounts of perceptual illusions and qualitative estimation tasks, often with conflicting results. In contrast, cue integration research in other modalities has benefited from the application of rigorous psychophysical techniques, guided by normative models that rest on the foundation of ideal-observer analysis and Bayesian decision theory. Here we review recent experiments that have attempted to harness these so-called optimal cue integration models for the study of self-motion perception. Some of these studies used nonhuman primate subjects, enabling direct comparisons between behavioral performance and simultaneously recorded neuronal activity. The results indicate that humans and monkeys can integrate visual and vestibular heading cues in a manner consistent with optimal integration theory, and that single neurons in the dorsal medial superior temporal area show striking correlates of the behavioral effects. This line of research and other applications of normative cue combination models should continue to shed light on mechanisms of self-motion perception and the neuronal basis of multisensory integration.
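The "optimal cue integration" models this review discusses are the standard reliability-weighted (maximum-likelihood) combination of independent Gaussian cues; a minimal sketch, with illustrative variances rather than measured ones:

```python
def integrate_cues(mu_vis, var_vis, mu_vest, var_vest):
    """Reliability-weighted combination of visual and vestibular heading cues.
    Each cue is weighted by its inverse variance; the combined estimate is
    at least as reliable as the better single cue."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_vest)
    mu = w_vis * mu_vis + (1.0 - w_vis) * mu_vest
    var = 1.0 / (1.0 / var_vis + 1.0 / var_vest)
    return mu, var
```

With equally reliable cues the combined heading estimate is their average and its variance is halved; when one cue is more reliable, the estimate is pulled toward it, which is the behavioral signature the reviewed monkey and human experiments test for.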
Anisotropic responses to motion toward and away from the eye
NASA Technical Reports Server (NTRS)
Perrone, John A.
1986-01-01
When a rigid object moves toward the eye, it is usually perceived as being rigid. However, in the case of motion away from the eye, the motion and structure of the object are perceived nonveridically, with the percept tending to reflect the nonrigid transformations that are present in the retinal image. This difference in response to motion to and from the observer was quantified in an experiment using wire-frame computer-generated boxes which moved toward and away from the eye. Two theoretical systems are developed by which uniform three-dimensional velocity can be recovered from an expansion pattern of nonuniform velocity vectors. It is proposed that the human visual system uses two similar systems for processing motion in depth. The mechanism used for motion away from the eye produces perceptual errors because it is not suited to objects with a depth component.
Nakamura, S; Shimojo, S
2000-01-01
We investigated interactions between foreground and background stimuli during visually induced perception of self-motion (vection) by using a stimulus composed of orthogonally moving random-dot patterns. The results indicated that, when the foreground moves with a slower speed, a self-motion sensation with a component in the same direction as the foreground is induced. We named this novel component of self-motion perception 'inverted vection'. The robustness of inverted vection was confirmed using various measures of self-motion sensation and under different stimulus conditions. The mechanism underlying inverted vection is discussed with regard to potentially relevant factors, such as relative motion between the foreground and background, and the interaction between the mis-registration of eye-movement information and self-motion perception.
Perception of Visual Speed While Moving
ERIC Educational Resources Information Center
Durgin, Frank H.; Gigone, Krista; Scott, Rebecca
2005-01-01
During self-motion, the world normally appears stationary. In part, this may be due to reductions in visual motion signals during self-motion. In 8 experiments, the authors used magnitude estimation to characterize changes in visual speed perception as a result of biomechanical self-motion alone (treadmill walking), physical translation alone…
Velocity storage contribution to vestibular self-motion perception in healthy human subjects.
Bertolini, G; Ramat, S; Laurens, J; Bockisch, C J; Marti, S; Straumann, D; Palla, A
2011-01-01
Self-motion perception after a sudden stop from a sustained rotation in darkness lasts approximately as long as reflexive eye movements. We hypothesized that, after an angular velocity step, self-motion perception and reflexive eye movements are driven by the same vestibular pathways. In 16 healthy subjects (25-71 years of age), perceived rotational velocity (PRV) and the vestibulo-ocular reflex (rVOR) after sudden decelerations (90°/s(2)) from constant-velocity (90°/s) earth-vertical axis rotations were simultaneously measured (PRV reported by hand-lever turning; rVOR recorded by search coils). Subjects were upright (yaw) or 90° left-ear-down (pitch). After both yaw and pitch decelerations, PRV rose rapidly and showed a plateau before decaying. In contrast, slow-phase eye velocity (SPV) decayed immediately after the initial increase. SPV and PRV were fitted with the sum of two exponentials: one time constant accounting for the semicircular canal (SCC) dynamics and one time constant accounting for a central process, known as velocity storage mechanism (VSM). Parameters were constrained by requiring equal SCC time constant and VSM time constant for SPV and PRV. The gains weighting the two exponential functions were free to change. Both SPV (variance accounted for: 0.85 ± 0.10) and PRV (variance accounted for: 0.86 ± 0.07) were accurately fitted, showing that the differences between the SPV and PRV curves can be explained by a greater relative weight of VSM in PRV compared with SPV (twofold for yaw, threefold for pitch). These results support our hypothesis that self-motion perception after angular velocity steps is driven by the same central vestibular processes as reflexive eye movements and that no additional mechanisms are required to explain the perceptual dynamics.
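The sum-of-two-exponentials model used to fit SPV and PRV can be sketched as follows; the gains and time constants here are illustrative placeholders, not the fitted values from the study.

```python
import math

def decay_response(t, g_scc, g_vsm, tau_scc=6.0, tau_vsm=15.0):
    """Post-step response as the sum of a fast semicircular-canal (SCC) term
    and a slower velocity-storage (VSM) term; the two time constants are
    shared across SPV and PRV, only the gains differ."""
    return g_scc * math.exp(-t / tau_scc) + g_vsm * math.exp(-t / tau_vsm)

# A larger relative VSM weight (as fitted for perception, PRV) keeps the
# normalised response elevated longer than an SPV-like weighting does.
spv = lambda t: decay_response(t, g_scc=0.7, g_vsm=0.3)
prv = lambda t: decay_response(t, g_scc=0.4, g_vsm=0.6)
```

Plotting the two curves reproduces the qualitative pattern in the abstract: the perception-like curve plateaus before decaying, while the eye-velocity-like curve begins decaying immediately.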
Schindler, Andreas; Bartels, Andreas
2018-05-15
Our phenomenological experience of a stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head motion. Here we circumvented these limitations and let participants move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire neural signals related to head motion after the observer's head was stabilized by inflatable aircushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head motion, which was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head rotation. By comparing congruent with incongruent conditions we found evidence consistent with the multi-modal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human regions MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv) and a region in precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent versus incongruent visual and non-visual cues during physical head movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation. Copyright © 2018 Elsevier Inc. All rights reserved.
Martin, Alex
2016-08-01
In this article, I discuss some of the latest functional neuroimaging findings on the organization of object concepts in the human brain. I argue that these data provide strong support for viewing concepts as the products of highly interactive neural circuits grounded in the action, perception, and emotion systems. The nodes of these circuits are defined by regions representing specific object properties (e.g., form, color, and motion) and thus are property-specific, rather than strictly modality-specific. How these circuits are modified by external and internal environmental demands, the distinction between representational content and format, and the grounding of abstract social concepts are also discussed.
Modeling heading and path perception from optic flow in the case of independently moving objects
Raudies, Florian; Neumann, Heiko
2013-01-01
Humans are usually accurate when estimating heading or path from optic flow, even in the presence of independently moving objects (IMOs) in an otherwise rigid scene. To invoke significant biases in perceived heading, IMOs have to be large and obscure the focus of expansion (FOE) in the image plane, which is the point of approach. For the estimation of path during curvilinear self-motion no significant biases were found in the presence of IMOs. What makes humans robust in their estimation of heading or path using optic flow? We derive analytical models of optic flow for linear and curvilinear self-motion using geometric scene models. Heading biases of a linear least squares method, which builds upon these analytical models, are large, larger than those reported for humans. This motivated us to study segmentation cues that are available from optic flow. We derive models of accretion/deletion, expansion/contraction, acceleration/deceleration, local spatial curvature, and local temporal curvature, to be used as cues to segment an IMO from the background. Integrating these segmentation cues into our method of estimating heading or path now explains human psychophysical data and extends, as well as unifies, previous investigations. Our analysis suggests that various cues available from optic flow help to segment IMOs and, thus, make humans' heading and path perception robust in the presence of such IMOs. PMID:23554589
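The linear least-squares heading estimate mentioned above can be sketched in minimal form. For pure observer translation, each flow vector points radially away from the focus of expansion (FOE), so cross(flow_i, p_i − FOE) = 0 yields one linear equation per vector. This is a simplified sketch, assuming no rotation, no depth variation, and no IMO segmentation; all names and values are illustrative.

```python
import numpy as np

def estimate_foe(points, flow):
    # For pure translation, each flow vector is radial about the FOE:
    # cross(flow_i, p_i - foe) = 0, which rearranges to a linear system
    # in the FOE coordinates, solvable by least squares.
    a = np.column_stack([flow[:, 1], -flow[:, 0]])
    b = flow[:, 1] * points[:, 0] - flow[:, 0] * points[:, 1]
    foe, *_ = np.linalg.lstsq(a, b, rcond=None)
    return foe

rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, (50, 2))   # image positions
true_foe = np.array([0.2, -0.1])
flow = 0.5 * (pts - true_foe)           # noiseless expansion about the FOE
foe_hat = estimate_foe(pts, flow)
```

With noiseless radial flow the system is exactly consistent and the FOE is recovered; with an unsegmented IMO added to `flow`, the same estimator is biased, which is the failure mode the article's segmentation cues address.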
Saetta, Gianluca; Grond, Ilva; Brugger, Peter; Lenggenhager, Bigna; Tsay, Anthony J; Giummarra, Melita J
2018-03-21
Phantom limbs are the phenomenal persistence of postural and sensorimotor features of an amputated limb. Although immaterial, their characteristics can be modulated by the presence of physical matter. For instance, the phantom may disappear when its phenomenal space is invaded by objects ("obstacle shunning"). Alternatively, "obstacle tolerance" occurs when the phantom is not limited by the law of impenetrability and co-exists with physical objects. Here we examined the link between this under-investigated aspect of phantom limbs and apparent motion perception. The illusion of apparent motion of human limbs involves the perception that a limb moves through or around an object, depending on the stimulus onset asynchrony (SOA) of the two images. Participants included 12 unilateral lower-limb amputees, matched for obstacle shunning (n = 6) and obstacle tolerance (n = 6) experiences, and 14 non-amputees. Using multilevel linear models, we replicated robust biases toward short perceived trajectories (moving through the object) at short SOAs and long trajectories (circumventing the object) at long SOAs in both groups. Importantly, however, amputees with obstacle shunning perceived leg stimuli to move predominantly through the object, whereas amputees with obstacle tolerance perceived leg stimuli to move predominantly around the object. That is, in people who experience obstacle shunning, apparent motion perception of the lower limbs was not constrained by the law of impenetrability (as the phantom disappears when invaded by objects), and legs could therefore move through physical objects. Amputees who experience obstacle tolerance, however, had stronger solidity constraints on lower-limb apparent motion, perhaps because they must avoid co-location of the phantom with physical objects. Phantom limb experience does, therefore, appear to be modulated by intuitive physics, but not in the same way for everyone. This may have important implications for limb experience post-amputation (e.g., improving prosthesis embodiment when limb representation is constrained by the same limits as an intact limb). Copyright © 2018 Elsevier Ltd. All rights reserved.
Contribution of self-motion perception to acoustic target localization.
Pettorossi, V E; Brosch, M; Panichi, R; Botti, F; Grassi, S; Troiani, D
2005-05-01
The findings of this study suggest that acoustic spatial perception during head movement is achieved by the vestibular system, which is responsible for the correct dynamics of acoustic target pursuit. The ability to localize sounds in space during whole-body rotation relies on the auditory localization system, which recognizes the position of a sound in a head-related frame, and on the sensory systems that perceive head and body movement, chiefly the vestibular system. The aim of this study was to analyse the contribution of head motion cues to the spatial representation of acoustic targets in humans. Healthy subjects standing on a rotating platform in the dark were asked to pursue with a laser pointer an acoustic target that was either rotated horizontally while the body was kept stationary, or kept stationary while the whole body was rotated. The contribution of head motion to the spatial acoustic representation could be inferred by comparing the gains and phases of the pursuit in the two experimental conditions as the frequency was varied. During acoustic target rotation there was a reduction in gain and an increase in phase lag, whereas during whole-body rotations the gain tended to increase and the phase remained constant. The different contributions of the vestibular and acoustic systems were confirmed by analysing acoustic pursuit during asymmetric body rotation. In this particular condition, in which self-motion perception gradually diminished, an increasing delay in target pursuit was observed.
Speed Biases With Real-Life Video Clips
Rossi, Federica; Montanaro, Elisa; de’Sperati, Claudio
2018-01-01
We live almost literally immersed in an artificial visual world, especially motion pictures. In this exploratory study, we asked whether the best speed for reproducing a video is its original, shooting speed. By using adjustment and double staircase methods, we examined speed biases in viewing real-life video clips in three experiments, and assessed their robustness by manipulating visual and auditory factors. With the tested stimuli (short clips of human motion, mixed human-physical motion, physical motion and ego-motion), speed underestimation was the rule rather than the exception, although it depended largely on clip content, ranging on average from 2% (ego-motion) to 32% (physical motion). Manipulating display size or adding arbitrary soundtracks did not modify these speed biases. Estimated speed was not correlated with estimated duration of these same video clips. These results indicate that the sense of speed for real-life video clips can be systematically biased, independently of the impression of elapsed time. Measuring subjective visual tempo may integrate traditional methods that assess time perception: speed biases may be exploited to develop a simple, objective test of reality flow, to be used for example in clinical and developmental contexts. From the perspective of video media, measuring speed biases may help to optimize video reproduction speed and validate “natural” video compression techniques based on sub-threshold temporal squeezing. PMID:29615875
Coherence Motion Perception in Developmental Dyslexia: A Meta-Analysis of Behavioral Studies
ERIC Educational Resources Information Center
Benassi, Mariagrazia; Simonelli, Letizia; Giovagnoli, Sara; Bolzani, Roberto
2010-01-01
The magnitude of the association between developmental dyslexia (DD) and motion sensitivity is evaluated in 35 studies, which investigated coherence motion perception in DD. A first analysis is conducted on the differences between DD groups and age-matched control (C) groups. In a second analysis, the relationship between motion coherence…
Self and world: large scale installations at science museums.
Shimojo, Shinsuke
2008-01-01
This paper describes three examples of illusion installation in a science museum environment from the author's collaboration with the artist and architect. The installations amplify the illusory effects, such as vection (visually-induced sensation of self motion) and motion-induced blindness, to emphasize that perception is not just to obtain structure and features of objects, but rather to grasp the dynamic relationship between the self and the world. Scaling up the size and utilizing the live human body turned out to be keys for installations with higher emotional impact.
Phantom motion after effects--evidence of detectors for the analysis of optic flow.
Snowden, R J; Milne, A B
1997-10-01
Electrophysiological recording from the extrastriate cortex of non-human primates has revealed neurons that have large receptive fields and are sensitive to various components of object or self movement, such as translations, rotations and expansion/contractions. If these mechanisms exist in human vision, they might be susceptible to adaptation that generates motion aftereffects (MAEs). Indeed, it might be possible to adapt the mechanism in one part of the visual field and reveal what we term a 'phantom MAE' in another part. The existence of phantom MAEs was probed by adapting to a pattern that contained motion in only two non-adjacent 'quarter' segments and then testing using patterns that had elements in only the other two segments. We also tested for the more conventional 'concrete' MAE by testing in the same two segments that had adapted. The strength of each MAE was quantified by measuring the percentage of dots that had to be moved in the opposite direction to the MAE in order to nullify it. Four experiments tested rotational motion, expansion/contraction motion, translational motion and a 'rotation' that consisted simply of the two segments that contained only translational motions of opposing direction. Compared to a baseline measurement where no adaptation took place, all subjects in all experiments exhibited both concrete and phantom MAEs, with the size of the latter approximately half that of the former. Adaptation to two segments that contained upward and downward motion induced the perception of leftward and rightward motion in another part of the visual field. This strongly suggests there are mechanisms in human vision that are sensitive to complex motions such as rotations.
Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu
2015-01-01
Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828
Cross-Category Adaptation: Objects Produce Gender Adaptation in the Perception of Faces
Javadi, Amir Homayoun; Wee, Natalie
2012-01-01
Adaptation aftereffects have been found for low-level visual features such as colour, motion and shape perception, as well as higher-level features such as gender, race and identity in domains such as faces and biological motion. It is not yet clear if adaptation effects in humans extend beyond this set of higher order features. The aim of this study was to investigate whether objects highly associated with one gender, e.g. high heels for females or electric shavers for males can modulate gender perception of a face. In two separate experiments, we adapted subjects to a series of objects highly associated with one gender and subsequently asked participants to judge the gender of an ambiguous face. Results showed that participants are more likely to perceive an ambiguous face as male after being exposed to objects highly associated to females and vice versa. A gender adaptation aftereffect was obtained despite the adaptor and test stimuli being from different global categories (objects and faces respectively). These findings show that our perception of gender from faces is highly affected by our environment and recent experience. This suggests two possible mechanisms: (a) that perception of the gender associated with an object shares at least some brain areas with those responsible for gender perception of faces and (b) adaptation to gender, which is a high-level concept, can modulate brain areas that are involved in facial gender perception through top-down processes. PMID:23049942
Matsumoto, Yukiko; Takahashi, Hideyuki; Murai, Toshiya; Takahashi, Hidehiko
2015-01-01
Schizophrenia patients have impairments at several levels of cognition, including visual attention (eye movements), perception, and social cognition. However, it remains unclear how lower-level cognitive deficits influence higher-level cognition. To elucidate the hierarchical path linking deficient cognitions, we focused on biological motion perception, which involves both the early stage of visual perception (attention) and higher social cognition, and is impaired in schizophrenia. Seventeen schizophrenia patients and 18 healthy controls participated in the study. Using point-light walker stimuli, we examined eye movements during biological motion perception in schizophrenia. We assessed relationships among eye movements, biological motion perception, and empathy. In the biological motion detection task, schizophrenia patients showed lower accuracy and fixated longer than healthy controls. In contrast to controls, patients exhibiting longer fixation durations and fewer fixations demonstrated higher accuracy. Additionally, in the patient group, the correlations between accuracy and the affective empathy index, and between the eye movement index and the affective empathy index, were significant. The altered gaze patterns in patients indicate that top-down attention compensates for impaired bottom-up attention. Furthermore, aberrant eye movements might lead to deficits in biological motion perception and ultimately link to social cognitive impairments. The current findings merit further investigation for understanding the mechanism of social cognitive training and its development. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
Accuracy and Tuning of Flow Parsing for Visual Perception of Object Motion During Self-Motion
Niehorster, Diederick C.
2017-01-01
How do we perceive object motion during self-motion using visual information alone? Previous studies have reported that the visual system can use optic flow to identify and globally subtract the retinal motion component resulting from self-motion to recover scene-relative object motion, a process called flow parsing. In this article, we developed a retinal motion nulling method to directly measure and quantify the magnitude of flow parsing (i.e., flow parsing gain) in various scenarios to examine the accuracy and tuning of flow parsing for the visual perception of object motion during self-motion. We found that flow parsing gains were below unity for all displays in all experiments, and that increasing self-motion or object motion speed did not alter flow parsing gain. We conclude that visual information alone is not sufficient for the accurate perception of scene-relative motion during self-motion. Although flow parsing performs global subtraction, its accuracy also depends on local motion information in the retinal vicinity of the moving object. Furthermore, the flow parsing gain was constant across common self-motion or object motion speeds. These results can be used to inform and validate computational models of flow parsing. PMID:28567272
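The global-subtraction idea, and why a sub-unity gain produces a perceptual bias, can be illustrated with a toy computation: perceived object motion is the retinal motion minus a gain-scaled copy of the self-motion flow component. The vectors and the subtractive form below are illustrative assumptions, not the article's model code.

```python
import numpy as np

def parse_flow(retinal, self_flow, gain):
    # Flow parsing as global subtraction: remove a gain-scaled copy of
    # the self-motion component from the object's retinal motion.
    return retinal - gain * self_flow

self_flow = np.array([3.0, 0.0])      # retinal motion due to self-motion (deg/s)
object_world = np.array([0.0, 2.0])   # true scene-relative object motion
retinal = self_flow + object_world    # motion actually present on the retina

complete = parse_flow(retinal, self_flow, gain=1.0)   # perfect subtraction
partial = parse_flow(retinal, self_flow, gain=0.8)    # sub-unity gain
```

With gain 1.0 the scene-relative motion is recovered exactly; with gain 0.8 a residual component in the self-motion direction remains, which is the kind of bias the nulling method quantifies.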
NASA Technical Reports Server (NTRS)
Young, L. R.
1975-01-01
Preliminary tests and evaluation are presented of pilot performance during landing (flight paths) using computer-generated images (video tapes). Psychophysiological factors affecting pilot visual perception were measured. A turning flight maneuver (pitch and roll) was specifically studied using a training device, and the scaling laws involved were determined. Also presented are medical studies (abstracts) on human response to gravity variations without visual cues, the effects of acceleration stimuli on the semicircular canals, neurons affecting eye movements, and vestibular tests.
Scocchia, Lisa; Bolognini, Nadia; Convento, Silvia; Stucchi, Natale
2015-11-16
Human movements conform to specific kinematic laws of motion. One such law, the "two-thirds power law", describes the systematic co-variation between curvature and velocity of body movements. Noticeably, the same law also influences the perception of moving stimuli: the velocity of a dot moving along a curvilinear trajectory is perceived as uniform when the dot's kinematics complies with the two-thirds power law. Instead, if the dot moves at constant speed, its velocity is perceived as highly non-uniform. This dynamic visual illusion points to a strong coupling between action and perception; however, how this coupling is implemented in the brain remains elusive. In this study, we tested whether the premotor cortex (PM) and the primary visual cortex (V1) play a role in the illusion by means of transcranial Direct Current Stimulation (tDCS). All participants underwent three tDCS sessions during which they received active or sham cathodal tDCS (1.5 mA) over PM or V1 of the left hemisphere. During tDCS, participants were required to adjust the velocity of a dot moving along an elliptical trajectory until it looked uniform across the whole trajectory. Results show that occipital tDCS decreases the illusion's variability both within and across participants, as compared to sham tDCS. This means that V1 stimulation increases individual sensitivity to the illusory motion and also increases coherence across different observers. Conversely, the illusion seems resistant to tDCS in terms of its magnitude, with cathodal stimulation of V1 or PM not affecting the amount of the illusory effect. Our results provide evidence for strong visuo-motor coupling in visual perception: the velocity of a dot moving along an elliptical trajectory is perceived as uniform only when its kinematics closely complies with the same law of motion that constrains human movement production. Occipital stimulation by cathodal tDCS can stabilize such an illusory percept. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
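In its tangential-velocity form, the two-thirds power law states v(t) = K·κ(t)^(−1/3), where κ is path curvature (equivalently, angular velocity A = K·C^(2/3), the form that gives the law its name). A small Python sketch generates the law-compliant velocity profile for an elliptical trajectory like the one used in the study; the semi-axes and gain constant are arbitrary illustrative values.

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
a, b = 2.0, 1.0   # ellipse semi-axes (arbitrary illustrative values)
# Curvature of the ellipse (a*cos(theta), b*sin(theta)).
kappa = (a * b) / (a**2 * np.sin(theta)**2 + b**2 * np.cos(theta)**2) ** 1.5
K = 1.0           # velocity gain constant (arbitrary)
v = K * kappa ** (-1.0 / 3.0)   # two-thirds power law velocity profile
# The law predicts slowing where the path curves most: the ends of the
# major axis (theta = 0, pi) have the highest curvature and lowest speed.
```

A dot animated with this profile looks uniform to observers, whereas a constant-speed dot on the same ellipse looks non-uniform, which is the illusion the study probes with tDCS.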
Visual-vestibular integration as a function of adaptation to space flight and return to Earth
NASA Technical Reports Server (NTRS)
Reschke, Millard R.; Bloomberg, Jacob J.; Harm, Deborah L.; Huebner, William P.; Krnavek, Jody M.; Paloski, William H.; Berthoz, Alan
1999-01-01
Research on perception and control of self-orientation and self-motion addresses interactions between action and perception. Self-orientation and self-motion, and the perception of that orientation and motion, are required for and modified by goal-directed action. Detailed Supplementary Objective (DSO) 604 Operational Investigation-3 (OI-3) was designed to investigate the integrated coordination of head and eye movements within a structured environment where perception could modify responses and where responses could be compensatory for perception. A full understanding of this coordination required definition of spatial orientation models for the microgravity environment encountered during spaceflight.
Integrative cortical dysfunction and pervasive motion perception deficit in fragile X syndrome.
Kogan, C S; Bertone, A; Cornish, K; Boutet, I; Der Kaloustian, V M; Andermann, E; Faubert, J; Chaudhuri, A
2004-11-09
Fragile X syndrome (FXS) is associated with neurologic deficits recently attributed to the magnocellular pathway of the lateral geniculate nucleus. We tested the hypotheses that FXS individuals (1) have a pervasive visual motion perception impairment affecting neocortical circuits in the parietal lobe and (2) have deficits in integrative neocortical mechanisms necessary for perception of complex stimuli. Psychophysical tests of visual motion and form perception defined by either first-order (luminance) or second-order (texture) attributes were used to probe early and later occipito-temporal and occipito-parietal functioning. When compared with developmental- and age-matched controls, FXS individuals displayed severe impairments in first- and second-order motion perception. This deficit was accompanied by near-normal perception of first-order form stimuli but not second-order form stimuli. Impaired visual motion processing for first- and second-order stimuli suggests that both early- and later-level neurologic function of the parietal lobe is affected in FXS. Furthermore, this deficit likely stems from abnormal input from the magnocellular compartment of the lateral geniculate nucleus. Impaired visual form and motion processing for complex visual stimuli, with normal processing for simple (i.e., first-order) form stimuli, suggests that FXS individuals have normal early form processing accompanied by a generalized impairment in the neurologic mechanisms necessary for integrating all early visual input.
Motion perception without nystagmus--a novel manifestation of cerebellar stroke.
Shaikh, Aasef G
2014-01-01
Motion perception and the vestibulo-ocular reflex (VOR) serve distinct functions. The VOR keeps gaze steady on the target of interest, whereas vestibular perception serves a number of tasks, including awareness of self-motion and orientation in space. The VOR and motion perception may abide by the same neurophysiological principles, but distinct anatomical correlates have been proposed for them. In patients with cerebellar stroke in the distribution of the medial branch of the posterior inferior cerebellar artery, we asked whether a specific location of the focal lesion in the vestibulocerebellum could cause impaired perception of motion but normal eye movements. Thirteen patients were studied; 5 consistently perceived spinning of the surrounding environment (vertigo), but their eye movements were normal. This group was called the "disease model". The remaining 8 patients were also symptomatic for vertigo, but they had spontaneous nystagmus. The latter group was called the "disease control". Magnetic resonance imaging in both groups consistently revealed a focal cerebellar infarct affecting the posterior cerebellar vermis (lobule IX). In the "disease model" group, only part of lobule IX was affected; in the "disease control" group, the whole of lobule IX was involved. This study discovered a novel presentation of cerebellar stroke in which only motion perception was affected, in the absence of objective neurologic signs. Copyright © 2014 National Stroke Association. Published by Elsevier Inc. All rights reserved.
Rutqvist, Jonny; Cappa, Frédéric; Rinaldi, Antonio P.; ...
2014-05-01
In this paper, we present model simulations of ground motions caused by CO2-injection-induced fault reactivation and analyze the results in terms of the potential for damage to ground-surface structures and nuisance to the local human population. It is an integrated analysis from cause to consequence, covering the whole chain of processes from earthquake inception in the subsurface, through wave propagation toward the ground surface, to assessment of the consequences of ground vibration. For a small-magnitude (Mw = 3) event at a hypocenter depth of about 1000 m, we first used the simulated ground-motion wave train in an inverse analysis to estimate source parameters (moment magnitude, rupture dimensions, and stress drop), achieving good agreement and thereby verifying the modeling of the chain of processes from earthquake inception to ground vibration. We then analyzed the ground-vibration results in terms of peak ground acceleration (PGA), peak ground velocity (PGV), and frequency content, with comparison to the U.S. Geological Survey's instrumental intensity scales for earthquakes and the U.S. Bureau of Mines' vibration criteria for cosmetic damage to buildings, as well as human-perception vibration limits. Our results confirm the appropriateness of using PGV (rather than PGA) and frequency for the evaluation of potential ground-vibration effects on structures and humans from shallow injection-induced seismic events. For the considered synthetic Mw = 3 event, our analysis showed that the short-duration, high-frequency ground motion may not cause any significant damage to surface structures, but would certainly be felt by the local population.
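The ground-vibration metrics discussed above can be sketched from a wave train: PGA is the peak of the acceleration record, PGV follows from trapezoidal integration of acceleration, and the dominant frequency comes from the amplitude spectrum. The damped sinusoid below is a synthetic placeholder; its amplitude, frequency, and decay are arbitrary illustrative values, not the simulated Mw = 3 event.

```python
import numpy as np

dt = 0.005                              # 200 Hz sampling (assumed)
t = np.arange(0.0, 4.0, dt)
f0 = 10.0                               # dominant frequency in Hz (synthetic)
acc = 0.98 * np.sin(2.0 * np.pi * f0 * t) * np.exp(-t)   # acceleration, m/s^2

# Velocity by trapezoidal integration of the acceleration record.
vel = np.concatenate([[0.0], np.cumsum(0.5 * (acc[1:] + acc[:-1])) * dt])

pga = np.max(np.abs(acc))               # peak ground acceleration, m/s^2
pgv = np.max(np.abs(vel))               # peak ground velocity, m/s

# Dominant frequency from the amplitude spectrum.
spectrum = np.abs(np.fft.rfft(acc))
f_dom = np.fft.rfftfreq(t.size, dt)[np.argmax(spectrum)]
```

For a narrowband signal like this, PGV scales roughly as PGA/(2πf0), which is why high-frequency shallow events can have a large PGA yet modest PGV, the quantity the paper argues is the better damage indicator.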
Video-Based Method of Quantifying Performance and Instrument Motion During Simulated Phonosurgery
Conroy, Ellen; Surender, Ketan; Geng, Zhixian; Chen, Ting; Dailey, Seth; Jiang, Jack
2015-01-01
Objectives/Hypothesis: To investigate the use of the Video-Based Phonomicrosurgery Instrument Tracking System to collect instrument position data during simulated phonomicrosurgery and calculate motion metrics using these data. We used this system to determine if novice subject motion metrics improved over 1 week of training. Study Design: Prospective cohort study. Methods: Ten subjects performed simulated surgical tasks once per day for 5 days. Instrument position data were collected and used to compute motion metrics (path length, depth perception, and motion smoothness). Data were analyzed to determine if motion metrics improved with practice time. Task outcome was also determined each day, and relationships between task outcome and motion metrics were used to evaluate the validity of motion metrics as indicators of surgical performance. Results: Significant decreases over time were observed for path length (P < .001), depth perception (P < .001), and task outcome (P < .001). No significant change was observed for motion smoothness. Significant relationships were observed between task outcome and path length (P < .001), depth perception (P < .001), and motion smoothness (P < .001). Conclusions: Our system can estimate instrument trajectory and provide quantitative descriptions of surgical performance. It may be useful for evaluating phonomicrosurgery performance. Path length and depth perception may be particularly useful indicators. PMID:24737286
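Motion metrics of this kind can be sketched from sampled instrument-tip positions. The definitions below (total path length, summed depth-axis excursion, and an integrated-jerk smoothness score where a lower value means smoother motion) are plausible assumptions for illustration, not the system's published formulas.

```python
import numpy as np

def motion_metrics(pos, dt):
    # pos: (n, 3) sampled instrument-tip positions; dt: sample interval (s).
    seg = np.diff(pos, axis=0)
    path_length = np.linalg.norm(seg, axis=1).sum()  # total distance travelled
    depth = np.abs(seg[:, 2]).sum()                  # excursion along depth axis
    jerk = np.diff(pos, n=3, axis=0) / dt**3         # third-derivative estimate
    smoothness = np.sqrt((jerk ** 2).sum())          # lower value = smoother
    return path_length, depth, smoothness

# A straight, uniform insertion along the depth axis: path length and depth
# excursion both equal the travelled distance, and jerk is zero.
pos = np.column_stack([np.zeros(11), np.zeros(11), np.linspace(0.0, 1.0, 11)])
path_length, depth, smoothness = motion_metrics(pos, dt=0.1)
```

A trainee's improvement would then show up as decreasing path length and depth excursion across sessions, matching the trends reported in the abstract.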
A Pursuit Theory Account for the Perception of Common Motion in Motion Parallax.
Ratzlaff, Michael; Nawrot, Mark
2016-09-01
The visual system uses an extraretinal pursuit eye movement signal to disambiguate the perception of depth from motion parallax. Visual motion in the same direction as the pursuit is perceived nearer in depth while visual motion in the opposite direction as pursuit is perceived farther in depth. This explanation of depth sign applies to either an allocentric frame of reference centered on the fixation point or an egocentric frame of reference centered on the observer. A related problem is that of depth order when two stimuli have a common direction of motion. The first psychophysical study determined whether perception of egocentric depth order is adequately explained by a model employing an allocentric framework, especially when the motion parallax stimuli have common rather than divergent motion. A second study determined whether a reversal in perceived depth order, produced by a reduction in pursuit velocity, is also explained by this model employing this allocentric framework. The results show that an allocentric model can explain both the egocentric perception of depth order with common motion and the perceptual depth order reversal created by a reduction in pursuit velocity. We conclude that an egocentric model is not the only explanation for perceived depth order in these common motion conditions. © The Author(s) 2016.
The shaping of social perception by stimulus and knowledge cues to human animacy
Ramsey, Richard; Liepelt, Roman; Prinz, Wolfgang; Hamilton, Antonia F. de C.
2016-01-01
Although robots are becoming an ever-growing presence in society, we do not hold the same expectations for robots as we do for humans, nor do we treat them the same. As such, the ability to recognize cues to human animacy is fundamental for guiding social interactions. We review literature that demonstrates cortical networks associated with person perception, action observation and mentalizing are sensitive to human animacy information. In addition, we show that most prior research has explored stimulus properties of artificial agents (humanness of appearance or motion), with less investigation into knowledge cues (whether an agent is believed to have human or artificial origins). Therefore, currently little is known about the relationship between stimulus and knowledge cues to human animacy in terms of cognitive and brain mechanisms. Using fMRI, an elaborate belief manipulation, and human and robot avatars, we found that knowledge cues to human animacy modulate engagement of person perception and mentalizing networks, while stimulus cues to human animacy had less impact on social brain networks. These findings demonstrate that self–other similarities are not only grounded in physical features but are also shaped by prior knowledge. More broadly, as artificial agents fulfil increasingly social roles, a challenge for roboticists will be to manage the impact of pre-conceived beliefs while optimizing human-like design. PMID:26644594
Technology evaluation of man-rated acceleration test equipment for vestibular research
NASA Technical Reports Server (NTRS)
Taback, I.; Kenimer, R. L.; Butterfield, A. J.
1983-01-01
The considerations for eliminating acceleration noise cues in horizontal, linear, cyclic-motion sleds intended for both ground and shuttle-flight applications are addressed. The principal concerns are the acceleration transients associated with changes in direction of motion of the carriage. The study presents a design limit for acceleration cues or transients based upon published measurements of thresholds of human perception of linear cyclic motion. The sources and levels of motion transients are presented based upon measurements obtained from existing sled systems. The recommended approach to a noise-free system uses air bearings for the carriage support and moving-coil linear induction motors operating at low frequency as the drive system. Metal belts running on air-bearing pulleys provide an alternative approach to the drive system. The appendix presents a discussion of alternative testing techniques intended to provide preliminary data by means of pendulums, linear motion devices, and commercial air-bearing tables.
An Adaptive Neural Mechanism for Acoustic Motion Perception with Varying Sparsity
Shaikh, Danish; Manoonpong, Poramate
2017-01-01
Biological motion-sensitive neural circuits are quite adept at perceiving the relative motion of a relevant stimulus. Motion perception is a fundamental ability in neural sensory processing and crucial in target tracking tasks. Tracking a stimulus entails the ability to perceive its motion, i.e., extracting information about its direction and velocity. Here we focus on auditory motion perception of sound stimuli, which is poorly understood as compared to its visual counterpart. In earlier work we have developed a bio-inspired neural learning mechanism for acoustic motion perception. The mechanism extracts directional information via a model of the peripheral auditory system of lizards. The mechanism uses only this directional information obtained via specific motor behaviour to learn the angular velocity of unoccluded sound stimuli in motion. In nature, however, the stimulus being tracked may be occluded by artefacts in the environment, such as an escaping prey momentarily disappearing behind a cover of trees. This article extends the earlier work by presenting a comparative investigation of auditory motion perception for unoccluded and occluded tonal sound stimuli with a frequency of 2.2 kHz in both simulation and practice. Three instances of each stimulus are employed, differing in their movement velocities: 0.5°, 1.0° and 1.5° per time step. To validate the approach in practice, we implement the proposed neural mechanism on a wheeled mobile robot and evaluate its performance in auditory tracking. PMID:28337137
Spatial Disorientation in Gondola Centrifuges Predicted by the Form of Motion as a Whole in 3-D
Holly, Jan E.; Harmon, Katharine J.
2009-01-01
INTRODUCTION During a coordinated turn, subjects can misperceive tilts. Subjects accelerating in tilting-gondola centrifuges without external visual reference underestimate the roll angle, and underestimate more when backward-facing than when forward-facing. In addition, during centrifuge deceleration, the perception of pitch can include tumble while paradoxically maintaining a fixed perceived pitch angle. The goal of the present research was to test two competing hypotheses: (1) that components of motion are perceived relatively independently and then combined to form a three-dimensional perception, and (2) that perception is governed by familiarity of motions as a whole in three dimensions, with components depending more strongly on the overall shape of the motion. METHODS Published experimental data were used from existing tilting-gondola centrifuge studies. The two hypotheses were implemented formally in computer models, and centrifuge acceleration and deceleration were simulated. RESULTS The second, whole-motion oriented, hypothesis better predicted subjects' perceptions, including the forward-backward asymmetry and the paradoxical tumble upon deceleration. The predominant stimulus at the beginning of the motion, as well as the familiarity of centripetal acceleration, proved important. CONCLUSION Three-dimensional perception is better predicted by taking into account familiarity with the form of three-dimensional motion. PMID:19198199
Relation of motion sickness susceptibility to vestibular and behavioral measures of orientation
NASA Technical Reports Server (NTRS)
Peterka, Robert J.
1995-01-01
The objective is to determine the relationship of motion sickness susceptibility to vestibulo-ocular reflexes (VOR), motion perception, and behavioral utilization of sensory orientation cues for the control of postural equilibrium. The work is focused on reflexes and motion perception associated with pitch and roll movements that stimulate the vertical semicircular canals and otolith organs of the inner ear. This work is relevant to the space motion sickness problem since 0 g related sensory conflicts between vertical canal and otolith motion cues are a likely cause of space motion sickness.
Passive motion reduces vestibular balance and perceptual responses
Fitzpatrick, Richard C; Watson, Shaun R D
2015-01-01
With the hypothesis that vestibular sensitivity is regulated to deal with a range of environmental motion conditions, we explored the effects of passive whole-body motion on vestibular perceptual and balance responses. In 10 subjects, vestibular responses were measured before and after a period of imposed passive motion. Vestibulospinal balance reflexes during standing evoked by galvanic vestibular stimulation (GVS) were measured as shear reaction forces. Perceptual tests measured thresholds for detecting angular motion, perceptions of suprathreshold rotation and perceptions of GVS-evoked illusory rotation. The imposed conditioning motion was 10 min of stochastic yaw rotation (0.5–2.5 Hz, ≤ 300 deg s⁻²) with subjects seated. This conditioning markedly reduced reflexive and perceptual responses. The medium latency galvanic reflex (300–350 ms) was halved in amplitude (48%; P = 0.011) but the short latency response was unaffected. Thresholds for detecting imposed rotation more than doubled (248%; P < 0.001) and remained elevated after 30 min. Over-estimation of whole-body rotation (30–180 deg every 5 s) before conditioning was significantly reduced (41.1 to 21.5%; P = 0.033). Conditioning reduced illusory vestibular sensations of rotation evoked by GVS (mean 113 deg for 10 s at 1 mA) by 44% (P < 0.01) and the effect persisted for at least 1 h (24% reduction; P < 0.05). We conclude that a system of vestibular sensory autoregulation exists and that this probably involves central and peripheral mechanisms, possibly through vestibular efferent regulation. We propose that failure of these regulatory mechanisms at different levels could lead to disorders of movement perception and balance control during standing. Key points Human activity exposes the vestibular organs to a wide dynamic range of motion. We aimed to discover whether the CNS regulates sensitivity to vestibular afference during exposure to ambient motion.
Balance and perceptual responses to vestibular stimulation were measured before and after a 10 min period of imposed, moderate intensity, stochastic whole-body rotation. After this conditioning, vestibular balance reflexes evoked by galvanic vestibular stimulation were halved in amplitude. Conditioning doubled the thresholds for perceiving small rotations, and reduced perceptions of the amplitude of real rotations, and illusory rotation evoked by galvanic stimulation. We conclude that the CNS auto-regulates sensitivity to vestibular sensory afference and that this probably involves central and peripheral mechanisms, as might arise from vestibular efferent regulation. Failure of these regulatory mechanisms at different levels could lead to disorders of movement perception and balance control during standing. PMID:25809702
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1990-01-01
Various papers on human and machine strategies in sensor fusion are presented. The general topics addressed include: active vision, measurement and analysis of visual motion, decision models for sensor fusion, implementation of sensor fusion algorithms, applying sensor fusion to image analysis, perceptual modules and their fusion, perceptual organization and object recognition, planning and the integration of high-level knowledge with perception, using prior knowledge and context in sensor fusion.
The 14th Annual Conference on Manual Control. [digital simulation of human operator dynamics
NASA Technical Reports Server (NTRS)
1978-01-01
Human operator dynamics during actual manual control, or while monitoring the automatic control systems involved in air-to-air tracking, automobile driving, the operation of undersea vehicles, and remote handling, are examined. Optimal control models and the use of mathematical theory in representing human operator behavior in complex man-machine system tasks are discussed, with emphasis on eye/head tracking and scanning; perception and attention allocation; decision making; and motion simulation and effects.
NASA Technical Reports Server (NTRS)
Reschke, M. F.; Parker, D. E.; Arrott, A. P.
1986-01-01
Report discusses physiological and physical concepts of proposed training system to precondition astronauts to weightless environment. System prevents motion sickness, often experienced during early part of orbital flight. Also helps prevent seasickness and other forms of terrestrial motion sickness. Training affects subject's perception of inner-ear signals, visual signals, and kinesthetic motion perception. Changed perception resembles that of astronauts who spent many days in space and adapted to weightlessness.
Orientation of selective effects of body tilt on visually induced perception of self-motion.
Nakamura, S; Shimojo, S
1998-10-01
We examined the effect of body posture upon visually induced perception of self-motion (vection) at various angles of observer tilt. The experiment indicated that a tilted body posture enhanced the perceived strength of vertical vection, while body tilt had no effect on horizontal vection. This result suggests an interaction between the effects of visual and vestibular information on the perception of self-motion.
Using virtual reality to augment perception, enhance sensorimotor adaptation, and change our minds.
Wright, W Geoffrey
2014-01-01
Technological advances that involve human sensorimotor processes can have both intended and unintended effects on the central nervous system (CNS). This mini review focuses on the use of virtual environments (VE) to augment brain functions by enhancing perception, eliciting automatic motor behavior, and inducing sensorimotor adaptation. VE technology is becoming increasingly prevalent in medical rehabilitation, training simulators, gaming, and entertainment. Although these VE applications have often been shown to optimize outcomes, whether it be to speed recovery, reduce training time, or enhance immersion and enjoyment, there are inherent drawbacks to environments that can potentially change sensorimotor calibration. Across numerous VE studies over the years, we have investigated the effects of combining visual and physical motion on perception, motor control, and adaptation. Recent results from our research involving exposure to dynamic passive motion within a visually-depicted VE reveal that short-term exposure to augmented sensorimotor discordance can result in systematic aftereffects that last beyond the exposure period. Whether these adaptations are advantageous or not, remains to be seen. Benefits as well as risks of using VE-driven sensorimotor stimulation to enhance brain processes will be discussed.
Human Factors in Virtual Reality Development
NASA Technical Reports Server (NTRS)
Kaiser, Mary K.; Proffitt, Dennis R.; Null, Cynthia H. (Technical Monitor)
1995-01-01
This half-day tutorial will provide an overview of basic perceptual functioning as it relates to the design of virtual environment systems. The tutorial consists of three parts. First, basic issues in visual perception will be presented, including discussions of the visual sensations of brightness and color, and the visual perception of depth relationships in three-dimensional space (with a special emphasis on motion -specified depth). The second section will discuss the importance of conducting human-factors user studies and evaluations. Examples and suggestions on how best to get help with user studies will be provided. Finally, we will discuss how, by drawing on their complementary competencies, perceptual psychologists and computer engineers can work as a team to develop optimal VR systems, technologies, and techniques.
Martin, Alex
2016-01-01
In this article, I discuss some of the latest functional neuroimaging findings on the organization of object concepts in the human brain. I argue that these data provide strong support for viewing concepts as the products of highly interactive neural circuits grounded in the action, perception, and emotion systems. The nodes of these circuits are defined by regions representing specific object properties (e.g., form, color, and motion) and thus are property-specific, rather than strictly modality-specific. How these circuits are modified by external and internal environmental demands, the distinction between representational content and format, and the grounding of abstract social concepts are also discussed. PMID:25968087
Human Guidance Behavior Decomposition and Modeling
NASA Astrophysics Data System (ADS)
Feit, Andrew James
Trained humans are capable of high performance, adaptable, and robust first-person dynamic motion guidance behavior. This behavior is exhibited in a wide variety of activities such as driving, piloting aircraft, skiing, biking, and many others. Human performance in such activities far exceeds the current capability of autonomous systems in terms of adaptability to new tasks, real-time motion planning, robustness, and trading safety for performance. The present work investigates the structure of human dynamic motion guidance that enables these performance qualities. This work uses a first-person experimental framework that presents a driving task to the subject, measuring control inputs, vehicle motion, and operator visual gaze movement. The resulting data is decomposed into subspace segment clusters that form primitive elements of action-perception interactive behavior. Subspace clusters are defined by both agent-environment system dynamic constraints and operator control strategies. A key contribution of this work is to define transitions between subspace cluster segments, or subgoals, as points where the set of active constraints, either system or operator defined, changes. This definition provides necessary conditions to determine transition points for a given task-environment scenario that allow a solution trajectory to be planned from known behavior elements. In addition, human gaze behavior during this task contains predictive behavior elements, indicating that the identified control modes are internally modeled. Based on these ideas, a generative, autonomous guidance framework is introduced that efficiently generates optimal dynamic motion behavior in new tasks. The new subgoal planning algorithm is shown to generate solutions to certain tasks more quickly than existing approaches currently used in robotics.
Rutqvist, Jonny; Cappa, Frederic; Rinaldi, Antonio P.; ...
2014-12-31
We summarize recent modeling studies of injection-induced fault reactivation, seismicity, and its potential impact on surface structures and nuisance to the local human population. We used coupled multiphase fluid flow and geomechanical numerical modeling, dynamic wave propagation modeling, seismology theories, and empirical vibration criteria from the mining and construction industries. We first simulated injection-induced fault reactivation, including dynamic fault slip, seismic source, wave propagation, and ground vibrations. From co-seismic average shear displacement and rupture area, we determined the moment magnitude to be about Mw = 3 for an injection-induced fault reactivation at a depth of about 1000 m. We then analyzed the ground vibration results in terms of peak ground acceleration (PGA), peak ground velocity (PGV), and frequency content, with comparison to the U.S. Bureau of Mines' vibration criteria for cosmetic damage to buildings, as well as human-perception vibration limits. For the considered synthetic Mw = 3 event, our analysis showed that the short-duration, high-frequency ground motion may not cause any significant damage to surface structures, and would not cause, in this particular case, upward CO2 leakage, but would certainly be felt by the local population.
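The moment magnitude quoted above is obtained from co-seismic slip and rupture area via the seismic moment; a sketch using the standard Hanks-Kanamori relation (the shear modulus default and the example slip/area values are illustrative assumptions, not taken from the study):

```python
import math

def moment_magnitude(slip_m, area_m2, shear_modulus_pa=3.0e10):
    """Hanks-Kanamori moment magnitude from average co-seismic slip and rupture area.

    Seismic moment M0 = mu * A * d (N*m); Mw = (2/3) * (log10(M0) - 9.1).
    """
    m0 = shear_modulus_pa * area_m2 * slip_m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Example: ~1.3 cm of slip over a 316 m x 316 m rupture patch gives roughly Mw 3
mw = moment_magnitude(slip_m=0.013, area_m2=1.0e5)
```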
A Nonlinear, Human-Centered Approach to Motion Cueing with a Neurocomputing Solver
NASA Technical Reports Server (NTRS)
Telban, Robert J.; Cardullo, Frank M.; Houck, Jacob A.
2002-01-01
This paper discusses the continuation of research into the development of new motion cueing algorithms first reported in 1999. In this earlier work, two viable approaches to motion cueing were identified: the coordinated adaptive washout algorithm or 'adaptive algorithm', and the 'optimal algorithm'. In this study, a novel approach to motion cueing is discussed that would combine features of both algorithms. The new algorithm is formulated as a linear optimal control problem, incorporating improved vestibular models and an integrated visual-vestibular motion perception model previously reported. A control law is generated from the motion platform states, resulting in a set of nonlinear cueing filters. The time-varying control law requires the matrix Riccati equation to be solved in real time. Therefore, in order to meet the real time requirement, a neurocomputing approach is used to solve this computationally challenging problem. Single degree-of-freedom responses for the nonlinear algorithm were generated and compared to the adaptive and optimal algorithms. Results for the heave mode show the nonlinear algorithm producing a motion cue with a time-varying washout, sustaining small cues for a longer duration and washing out larger cues more quickly. The addition of the optokinetic influence from the integrated perception model was shown to improve the response to a surge input, producing a specific force response with no steady-state washout. Improved cues are also observed for responses to a sway input. Yaw mode responses reveal that the nonlinear algorithm improves the motion cues by reducing the magnitude of negative cues. The effectiveness of the nonlinear algorithm as compared to the adaptive and linear optimal algorithms will be evaluated on a motion platform, the NASA Langley Research Center Visual Motion Simulator (VMS), and ultimately the Cockpit Motion Facility (CMF) with a series of pilot controlled maneuvers. 
A proposed experimental procedure is discussed. The results of this evaluation will be used to assess motion cueing performance.
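The paper's neurocomputing solver for the time-varying Riccati equation is not reproduced here, but the underlying linear optimal control step (solving a Riccati equation for a feedback gain) can be sketched for the steady-state case with SciPy; the 2-state platform model below is an illustrative placeholder, not the paper's model:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy 2-state motion-platform model (placeholder values, not from the paper)
A = np.array([[0.0, 1.0],
              [0.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)           # state weighting
R = np.array([[1.0]])   # control weighting

# Steady-state solution of the algebraic Riccati equation
P = solve_continuous_are(A, B, Q, R)
# Optimal feedback gain for u = -K x
K = np.linalg.solve(R, B.T @ P)
```

The nonlinear algorithm in the abstract generalizes this by making the control law time-varying, which is why the Riccati equation must be re-solved in real time rather than once offline.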
Curvilinear approach to an intersection and visual detection of a collision.
Berthelon, C; Mestre, D
1993-09-01
Visual motion perception plays a fundamental role in vehicle control. Recent studies have shown that the pattern of optical flow resulting from the observer's self-motion through a stable environment is used by the observer to accurately control his or her movements. However, little is known about the perception of another vehicle during self-motion; for instance, when a car driver approaches an intersection with traffic. In a series of experiments using visual simulations of car driving, we show that observers are able to detect the presence of a moving object during self-motion. However, the perception of the other car's trajectory appears to be strongly dependent on environmental factors, such as the presence of a road sign near the intersection or the shape of the road. These results suggest that local and global visual factors determine the perception of a car's trajectory during self-motion.
Perception of Social Interactions for Spatially Scrambled Biological Motion
Thurman, Steven M.; Lu, Hongjing
2014-01-01
It is vitally important for humans to detect living creatures in the environment and to analyze their behavior to facilitate action understanding and high-level social inference. The current study employed naturalistic point-light animations to examine the ability of human observers to spontaneously identify and discriminate socially interactive behaviors between two human agents. Specifically, we investigated the importance of global body form, intrinsic joint movements, extrinsic whole-body movements, and critically, the congruency between intrinsic and extrinsic motions. Motion congruency is hypothesized to be particularly important because of the constraint it imposes on naturalistic action due to the inherent causal relationship between limb movements and whole body motion. Using a free response paradigm in Experiment 1, we discovered that many naïve observers (55%) spontaneously attributed animate and/or social traits to spatially-scrambled displays of interpersonal interaction. Total stimulus motion energy was strongly correlated with the likelihood that an observer would attribute animate/social traits, as opposed to physical/mechanical traits, to the scrambled dot stimuli. In Experiment 2, we found that participants could identify interactions between spatially-scrambled displays of human dance as long as congruency was maintained between intrinsic/extrinsic movements. Violating the motion congruency constraint resulted in chance discrimination performance for the spatially-scrambled displays. Finally, Experiment 3 showed that scrambled point-light dancing animations violating this constraint were also rated as significantly less interactive than animations with congruent intrinsic/extrinsic motion. 
These results demonstrate the importance of intrinsic/extrinsic motion congruency for biological motion analysis, and support a theoretical framework in which early visual filters help to detect animate agents in the environment based on several fundamental constraints. Only after satisfying these basic constraints could stimuli be evaluated for high-level social content. In this way, we posit that perceptual animacy may serve as a gateway to higher-level processes that support action understanding and social inference. PMID:25406075
Effects of spatial cues on color-change detection in humans
Herman, James P.; Bogadhi, Amarender R.; Krauzlis, Richard J.
2015-01-01
Studies of covert spatial attention have largely used motion, orientation, and contrast stimuli as these features are fundamental components of vision. The feature dimension of color is also fundamental to visual perception, particularly for catarrhine primates, and yet very little is known about the effects of spatial attention on color perception. Here we present results using novel dynamic color stimuli in both discrimination and color-change detection tasks. We find that our stimuli yield comparable discrimination thresholds to those obtained with static stimuli. Further, we find that an informative spatial cue improves performance and speeds response time in a color-change detection task compared with an uncued condition, similar to what has been demonstrated for motion, orientation, and contrast stimuli. Our results demonstrate the use of dynamic color stimuli for an established psychophysical task and show that color stimuli are well suited to the study of spatial attention. PMID:26047359
Perceptual learning modifies untrained pursuit eye movements.
Szpiro, Sarit F A; Spering, Miriam; Carrasco, Marisa
2014-07-07
Perceptual learning improves detection and discrimination of relevant visual information in mature humans, revealing sensory plasticity. Whether visual perceptual learning affects motor responses is unknown. Here we implemented a protocol that enabled us to address this question. We tested a perceptual response (motion direction estimation, in which observers overestimate motion direction away from a reference) and a motor response (voluntary smooth pursuit eye movements). Perceptual training led to greater overestimation and, remarkably, it modified untrained smooth pursuit. In contrast, pursuit training did not affect overestimation in either pursuit or perception, even though observers in both training groups were exposed to the same stimuli for the same time period. A second experiment revealed that estimation training also improved discrimination, indicating that overestimation may optimize perceptual sensitivity. Hence, active perceptual training is necessary to alter perceptual responses, and an acquired change in perception suffices to modify pursuit, a motor response. © 2014 ARVO.
Perceptual Training Strongly Improves Visual Motion Perception in Schizophrenia
ERIC Educational Resources Information Center
Norton, Daniel J.; McBain, Ryan K.; Ongur, Dost; Chen, Yue
2011-01-01
Schizophrenia patients exhibit perceptual and cognitive deficits, including in visual motion processing. Given that cognitive systems depend upon perceptual inputs, improving patients' perceptual abilities may be an effective means of cognitive intervention. In healthy people, motion perception can be enhanced through perceptual learning, but it…
NASA Technical Reports Server (NTRS)
Hosman, R. J. A. W.; Vandervaart, J. C.
1984-01-01
An experiment to investigate visual roll attitude and roll rate perception is described. The experiment was also designed to assess the improvements in perception due to cockpit motion. After the onset of the motion, subjects were to make accurate and quick estimates of the final magnitude of the roll angle step response by pressing the appropriate button on a keyboard device. The differing time histories of roll angle, roll rate and roll acceleration caused by a step response stimulate the different perception processes related to the central visual field, peripheral visual field and vestibular organs in different, yet exactly known, ways. Experiments with either of the visual displays or cockpit motion, and some combinations of these, were run to assess the roles of the different perception processes. Results show that the differences in response time are much more pronounced than the differences in perception accuracy.
Visual motion perception predicts driving hazard perception ability.
Lacherez, Philippe; Au, Sandra; Wood, Joanne M
2014-02-01
To examine the basis of previous findings of an association between indices of driving safety and visual motion sensitivity and to examine whether this association could be explained by low-level changes in visual function. A total of 36 visually normal participants (aged 19-80 years) completed a battery of standard vision tests including visual acuity, contrast sensitivity and automated visual fields, and two tests of motion perception: sensitivity for movement of a drifting Gabor stimulus and sensitivity for displacement in a random dot kinematogram (Dmin). Participants also completed a hazard perception test (HPT), which measured participants' response times to hazards embedded in video recordings of real-world driving, which has been shown to be linked to crash risk. Dmin for the random dot stimulus ranged from -0.88 to -0.12 log minutes of arc, and the minimum drift rate for the Gabor stimulus ranged from 0.01 to 0.35 cycles per second. Both measures of motion sensitivity significantly predicted response times on the HPT. In addition, while the relationship involving the HPT and motion sensitivity for the random dot kinematogram was partially explained by the other visual function measures, the relationship with sensitivity for detection of the drifting Gabor stimulus remained significant even after controlling for these variables. These findings suggest that motion perception plays an important role in the visual perception of driving-relevant hazards independent of other areas of visual function and should be further explored as a predictive test of driving safety. Future research should explore the causes of reduced motion perception to develop better interventions to improve road safety. © 2012 The Authors. Acta Ophthalmologica © 2012 Acta Ophthalmologica Scandinavica Foundation.
Ma, Yingliang; Paterson, Helena M; Pollick, Frank E
2006-02-01
We present the methods that were used in capturing a library of human movements for use in computer-animated displays of human movement. The library is an attempt to systematically tap into and represent the wide range of personal properties, such as identity, gender, and emotion, that are available in a person's movements. The movements from a total of 30 nonprofessional actors (15 of them female) were captured while they performed walking, knocking, lifting, and throwing actions, as well as their combination in angry, happy, neutral, and sad affective styles. From the raw motion capture data, a library of 4,080 movements was obtained, using techniques based on Character Studio (plug-ins for 3D Studio MAX, AutoDesk, Inc.), MATLAB (The MathWorks, Inc.), or a combination of these two. For the knocking, lifting, and throwing actions, 10 repetitions of the simple action unit were obtained for each affect, and for the other actions, two longer movement recordings were obtained for each affect. We discuss the potential use of the library for computational and behavioral analyses of movement variability, of human character animation, and of how gender, emotion, and identity are encoded and decoded from human movement.
Perception of Biological Motion in Autism Spectrum Disorders
ERIC Educational Resources Information Center
Freitag, Christine M.; Konrad, Carsten; Haberlen, Melanie; Kleser, Christina; von Gontard, Alexander; Reith, Wolfgang; Troje, Nikolaus F.; Krick, Christoph
2008-01-01
In individuals with autism or autism-spectrum-disorder (ASD), conflicting results have been reported regarding the processing of biological motion tasks. As biological motion perception and recognition might be related to impaired imitation, gross motor skills and autism specific psychopathology in individuals with ASD, we performed a functional…
NASA Technical Reports Server (NTRS)
Berthoz, A.; Pavard, B.; Young, L. R.
1975-01-01
The basic characteristics of the sensation of linear horizontal motion have been studied. Objective linear motion was induced by means of a moving cart. Visually induced linear motion perception (linearvection) was obtained by projection of moving images at the periphery of the visual field. Image velocity and luminance thresholds for the appearance of linearvection have been measured and are in the range of those for image motion detection (without sensation of self motion) by the visual system. Latencies of onset are around 1 sec and short term adaptation has been shown. The dynamic range of the visual analyzer as judged by frequency analysis is lower than that of the vestibular analyzer. Conflicting situations in which visual cues contradict vestibular and other proprioceptive cues show, in the case of linearvection, a dominance of vision which supports the idea of an essential although not independent role of vision in self motion perception.
Neural Correlates of Human Action Observation in Hearing and Deaf Subjects
Corina, David; Chiu, Yi-Shiuan; Knapp, Heather; Greenwald, Ralf; Jose-Robertson, Lucia San; Braun, Allen
2007-01-01
Accumulating evidence has suggested the existence of a human action recognition system involving inferior frontal, parietal, and superior temporal regions that may participate in both the perception and execution of actions. However, little is known about the specificity of this system in response to different forms of human action. Here we present data from PET neuroimaging studies from passive viewing of three distinct action types: intransitive self-oriented actions (e.g., stretching, rubbing one’s eyes, etc.), transitive object-oriented actions (e.g., opening a door, lifting a cup to the lips to drink), and the abstract, symbolic actions–signs used in American Sign Language. Our results show that these different classes of human actions engage a frontal/parietal/STS human action recognition system in a highly similar fashion. However, the results indicate that this neural consistency across motion classes is true primarily for hearing subjects. Data from deaf signers show a non-uniform response to different classes of human actions. As expected, deaf signers engaged left-hemisphere perisylvian language areas during the perception of signed language signs. Surprisingly, these subjects did not engage the expected frontal/parietal/STS circuitry during passive viewing of non-linguistic actions, but rather reliably activated middle-occipital temporal-ventral regions which are known to participate in the detection of human bodies, faces, and movements. Comparisons with data from hearing subjects establish statistically significant contributions of middle-occipital temporal-ventral regions during the processing of non-linguistic actions in deaf signers. These results suggest that during human motion processing, deaf individuals may engage specialized neural systems that allow for rapid, online differentiation of meaningful linguistic actions from non-linguistic human movements. PMID:17459349
The perception of object versus objectless motion.
Hock, Howard S; Nichols, David F
2013-05-01
Wertheimer's (Zeitschrift für Psychologie und Physiologie der Sinnesorgane, 61:161-265, 1912) classical distinction between beta (object) and phi (objectless) motion is elaborated here in a series of experiments concerning competition between two qualitatively different motion percepts, induced by sequential changes in luminance for two-dimensional geometric objects composed of rectangular surfaces. One of these percepts is of spreading-luminance motion that continuously sweeps across the entire object; it exhibits shape invariance and is perceived most strongly for fast speeds. Significantly for the characterization of phi as objectless motion, the spreading luminance does not involve surface boundaries or any other feature; the percept is driven solely by spatiotemporal changes in luminance. Alternatively, and for relatively slow speeds, a discrete series of edge motions can be perceived in the direction opposite to spreading-luminance motion. Akin to beta motion, the edges appear to move through intermediate positions within the object's changing surfaces. Significantly for the characterization of beta as object motion, edge motion exhibits shape dependence and is based on the detection of oppositely signed changes in contrast (i.e., counterchange) for features essential to the determination of an object's shape, the boundaries separating its surfaces. These results are consistent with area MT neurons that differ with respect to speed preference (Newsome et al., Journal of Neurophysiology, 55:1340-1351, 1986) and shape dependence (Zeki, Journal of Physiology, 236:549-573, 1974).
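The counterchange principle invoked in this abstract can be illustrated with a minimal sketch: a detector that signals motion from one location to another only when contrast decreases at the first location and increases at the second. This is an illustrative toy, not the authors' actual model, and all contrast values are hypothetical.

```python
# Minimal counterchange detector sketch (illustrative only).
# Motion from location A to location B is signaled when contrast
# decreases at A and simultaneously increases at B.

def counterchange(contrast_a, contrast_b):
    """contrast_a, contrast_b: (before, after) contrast at two locations.
    Returns a positive response only for oppositely signed changes
    (A decreasing, B increasing); zero otherwise."""
    delta_a = contrast_a[1] - contrast_a[0]
    delta_b = contrast_b[1] - contrast_b[0]
    if delta_a < 0 < delta_b:            # counterchange condition met
        return min(-delta_a, delta_b)    # response limited by the weaker change
    return 0.0

print(round(counterchange((0.8, 0.2), (0.2, 0.8)), 2))  # 0.6 -> motion A to B signaled
print(counterchange((0.2, 0.8), (0.2, 0.8)))            # 0.0 -> same-signed changes, no motion
```

Note that uniform spreading luminance produces same-signed changes everywhere, so a detector of this kind stays silent for it, consistent with the abstract's distinction between edge (beta) motion and objectless (phi) motion.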
Unconscious Local Motion Alters Global Image Speed
Khuu, Sieu K.; Chung, Charles Y. L.; Lord, Stephanie; Pearson, Joel
2014-01-01
Accurate motion perception of self and object speed is crucial for successful interaction in the world. The context in which we make such speed judgments has a profound effect on their accuracy. Misperceptions of motion speed caused by the context can have drastic consequences in real world situations, but they also reveal much about the underlying mechanisms of motion perception. Here we show that motion signals suppressed from awareness can warp simultaneous conscious speed perception. In Experiment 1, we measured global speed discrimination thresholds using an annulus of 8 local Gabor elements. We show that physically removing local elements from the array attenuated global speed discrimination. However, removing awareness of the local elements only had a small effect on speed discrimination. That is, unconscious local motion elements contributed to global conscious speed perception. In Experiment 2, we measured the global speed of the moving Gabor patterns when half the elements moved at different speeds. We show that global speed averaging occurred regardless of whether local elements were removed from awareness, such that the speed of invisible elements continued to be averaged together with the visible elements to determine the global speed. Taken together, these data suggest that contextual motion signals outside of awareness can shape our experience of motion speed, and that such pooling of motion signals occurs before the conscious extraction of the surround motion speed. PMID:25503603
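The global speed averaging described in this abstract can be sketched as a toy computation. The sketch below is an assumption-laden illustration, not the study's analysis: it models the global percept as the arithmetic mean of local element speeds, with hypothetical speed values.

```python
# Toy illustration of global speed averaging across local motion elements.
# All speed values are hypothetical; the study's stimuli used an annulus
# of 8 Gabor elements.

def global_speed(local_speeds):
    """Model the global speed percept as the mean of local element speeds."""
    return sum(local_speeds) / len(local_speeds)

visible = [2.0, 2.0, 2.0, 2.0]     # deg/s, consciously seen elements
suppressed = [4.0, 4.0, 4.0, 4.0]  # deg/s, elements rendered invisible

# If suppressed elements were physically removed, only visible ones average:
print(global_speed(visible))               # 2.0
# The reported finding: elements outside awareness still enter the average:
print(global_speed(visible + suppressed))  # 3.0
```

On this account, removing an element from awareness (but not from the display) leaves the averaged percept closer to 3.0 than to 2.0, which is the pattern the abstract reports.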
The effect of occlusion therapy on motion perception deficits in amblyopia.
Giaschi, Deborah; Chapman, Christine; Meier, Kimberly; Narasimhan, Sathyasri; Regan, David
2015-09-01
There is growing evidence for deficits in motion perception in amblyopia, but these are rarely assessed clinically. In this prospective study we examined the effect of occlusion therapy on motion-defined form perception and multiple-object tracking. Participants included children (3-10 years old) with unilateral anisometropic and/or strabismic amblyopia who were currently undergoing occlusion therapy and age-matched control children with normal vision. At the start of the study, deficits in motion-defined form perception were present in at least one eye in 69% of the children with amblyopia. These deficits were still present at the end of the study in 55% of the amblyopia group. For multiple-object tracking, deficits were present initially in 64% and finally in 55% of the children with amblyopia, even after completion of occlusion therapy. Many of these deficits persisted in spite of an improvement in amblyopic eye visual acuity in response to occlusion therapy. The prevalence of motion perception deficits in amblyopia, as well as their resistance to occlusion therapy, supports the need for new approaches to amblyopia treatment. Copyright © 2015 Elsevier Ltd. All rights reserved.
Motion perception tasks as potential correlates to driving difficulty in the elderly
NASA Astrophysics Data System (ADS)
Raghuram, A.; Lakshminarayanan, V.
2006-09-01
Demographic changes indicate that the population older than 65 is on the rise because of the aging of the ‘baby boom’ generation. This aging trend and driving-related accident statistics reveal the need for procedures and tests that would assess the driving ability of older adults and predict whether they would be safe or unsafe drivers. The literature shows that an attention-based test called the useful field of view (UFOV) was a better predictor of accident rates than any other visual function test. The present study evaluates a qualitative trend on using motion perception tasks as potential visual perceptual correlates in screening elderly drivers who might have difficulty in driving. Data were collected from 15 older subjects with a mean age of 71. Motion perception tasks included speed discrimination with radial and lamellar motion, time to collision using prediction motion, and estimating direction of heading. A motion index score was calculated which was indicative of performance on all of the above-mentioned motion tasks. Visual attention was assessed using UFOV. A driving habit questionnaire was also administered for a self report on driving difficulties and accident rates. A qualitative trend based on frequency distributions showed that thresholds on the motion perception tasks were successful in identifying subjects who reported difficulty in certain aspects of driving and had accidents. The correlation between UFOV and motion index scores was not significant, suggesting that the two paradigms probably tap different aspects of visual information processing that are crucial to driving behaviour. UFOV and motion perception tasks together can be a better predictor for identifying at-risk or safe drivers than either one alone.
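The abstract does not specify how the composite motion index was computed. One common way to combine thresholds from heterogeneous tasks is to average per-task z-scores; the sketch below is a hedged illustration of that approach, with entirely hypothetical threshold values for five subjects.

```python
# Hedged sketch of one way a composite "motion index" could be formed from
# several task thresholds. The abstract gives no formula; the z-score
# averaging and all numbers below are assumptions for illustration only.
from statistics import mean, stdev

def z_scores(values):
    """Standardize a list of task thresholds to zero mean, unit variance."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Hypothetical thresholds for 5 subjects on three motion tasks
# (lower threshold = better performance).
speed_disc = [1.2, 1.5, 2.0, 1.1, 2.4]         # speed discrimination
time_to_collision = [0.3, 0.5, 0.8, 0.2, 0.9]  # prediction-motion error
heading = [4.0, 5.5, 7.0, 3.8, 8.1]            # heading-direction error (deg)

# Motion index: mean z-score across tasks for each subject.
per_task = [z_scores(t) for t in (speed_disc, time_to_collision, heading)]
motion_index = [mean(zs) for zs in zip(*per_task)]
print([round(m, 2) for m in motion_index])
```

Under this scheme a more negative index means uniformly better motion performance, which could then be compared against self-reported driving difficulty or UFOV scores.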
Contrast effects on speed perception for linear and radial motion.
Champion, Rebecca A; Warren, Paul A
2017-11-01
Speed perception is vital for safe activity in the environment. However, considerable evidence suggests that perceived speed changes as a function of stimulus contrast, with some investigators suggesting that this might have meaningful real-world consequences (e.g. driving in fog). In the present study we investigate whether the neural effects of contrast on speed perception occur at the level of local or global motion processing. To do this we examine both speed discrimination thresholds and contrast-dependent speed perception for two global motion configurations that have matched local spatio-temporal structure. Specifically we compare linear and radial configurations, the latter of which arises very commonly due to self-movement. In experiment 1 the stimuli comprised circular grating patches. In experiment 2, to match stimuli even more closely, motion was presented in multiple local Gabor patches equidistant from central fixation. Each patch contained identical linear motion but the global configuration was either consistent with linear or radial motion. In both experiments 1 and 2, discrimination thresholds and contrast-induced speed biases were similar in linear and radial conditions. These results suggest that contrast-based speed effects occur only at the level of local motion processing, irrespective of global structure. This result is interpreted in the context of previous models of speed perception and evidence suggesting differences in perceived speed of locally matched linear and radial stimuli. Copyright © 2017 Elsevier Ltd. All rights reserved.
Brief report: altered horizontal binding of single dots to coherent motion in autism.
David, Nicole; Rose, Michael; Schneider, Till R; Vogeley, Kai; Engel, Andreas K
2010-12-01
Individuals with autism often show a fragmented way of perceiving their environment, suggesting a disorder of information integration, possibly due to disrupted communication between brain areas. We investigated thirteen individuals with high-functioning autism (HFA) and thirteen healthy controls using the metastable motion quartet, a stimulus consisting of two dots alternately presented at four locations of a hypothetical square, thereby inducing an apparent motion percept. This percept is vertical or horizontal, the latter requiring binding of motion signals across cerebral hemispheres. Decreasing the horizontal distance between dots could facilitate horizontal percepts. We found evidence for altered horizontal binding in HFA: Individuals with HFA needed stronger facilitation to experience horizontal motion. These data are interpreted in light of reduced cross-hemispheric communication.
Acoustic facilitation of object movement detection during self-motion
Calabro, F. J.; Soto-Faraco, S.; Vaina, L. M.
2011-01-01
In humans, as well as most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations. PMID:21307050
Thresholds for the perception of whole-body linear sinusoidal motion in the horizontal plane
NASA Technical Reports Server (NTRS)
Mah, Robert W.; Young, Laurence R.; Steele, Charles R.; Schubert, Earl D.
1989-01-01
An improved linear sled has been developed to provide precise motion stimuli without generating perceptible extraneous motion cues (a noiseless environment). A modified adaptive forced-choice method was employed to determine perceptual thresholds to whole-body linear sinusoidal motion in 25 subjects. Thresholds for the detection of movement in the horizontal plane were found to be lower than those reported previously. At frequencies of 0.2 to 0.5 Hz, thresholds were shown to be independent of frequency, while at frequencies of 1.0 to 3.0 Hz, thresholds showed a decreasing sensitivity with increasing frequency, indicating that the perceptual process is not sensitive to the rate of change of acceleration of the motion stimulus. The results suggest that the perception of motion behaves as an integrating accelerometer with a bandwidth of at least 3 Hz.
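The integrating-accelerometer interpretation can be made concrete with a short sketch: if detection depends on the integral of acceleration (i.e., peak velocity) exceeding a fixed criterion, then for sinusoidal stimuli the acceleration threshold must grow in proportion to frequency. The velocity-threshold constant below is a hypothetical placeholder, not a value from the study.

```python
import math

# Sketch of the integrating-accelerometer interpretation (illustrative only;
# the velocity threshold below is a hypothetical placeholder value).
V_TH = 0.01  # assumed constant peak-velocity detection criterion, m/s

def accel_threshold(freq_hz, v_th=V_TH):
    """For sinusoidal acceleration a(t) = A*sin(2*pi*f*t), integration gives
    a peak velocity of A / (2*pi*f). Detection at peak velocity v_th then
    implies an acceleration threshold A = 2*pi*f*v_th, i.e. sensitivity to
    acceleration decreases as frequency increases."""
    return 2 * math.pi * freq_hz * v_th

for f in (1.0, 2.0, 3.0):
    print(f, round(accel_threshold(f), 4))
# Doubling the frequency doubles the acceleration needed for detection.
```

This frequency-proportional rise in acceleration threshold is one way to read the "decreasing sensitivity with increasing frequency" reported for the 1.0-3.0 Hz range.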
Sharpening vision by adapting to flicker.
Arnold, Derek H; Williams, Jeremy D; Phipps, Natasha E; Goodale, Melvyn A
2016-11-01
Human vision is surprisingly malleable. A static stimulus can seem to move after prolonged exposure to movement (the motion aftereffect), and exposure to tilted lines can make vertical lines seem oppositely tilted (the tilt aftereffect). The paradigm used to induce such distortions (adaptation) can provide powerful insights into the computations underlying human visual experience. Previously spatial form and stimulus dynamics were thought to be encoded independently, but here we show that adaptation to stimulus dynamics can sharpen form perception. We find that fast flicker adaptation (FFAd) shifts the tuning of face perception to higher spatial frequencies, enhances the acuity of spatial vision-allowing people to localize inputs with greater precision and to read finer scaled text, and it selectively reduces sensitivity to coarse-scale form signals. These findings are consistent with two interrelated influences: FFAd reduces the responsiveness of magnocellular neurons (which are important for encoding dynamics, but can have poor spatial resolution), and magnocellular responses contribute coarse spatial scale information when the visual system synthesizes form signals. Consequently, when magnocellular responses are mitigated via FFAd, human form perception is transiently sharpened because "blur" signals are mitigated.
Panichi, Roberto; Botti, Fabio Massimo; Ferraresi, Aldo; Faralli, Mario; Kyriakareli, Artemis; Schieppati, Marco; Pettorossi, Vito Enrico
2011-04-01
Self-motion perception and vestibulo-ocular reflex (VOR) were studied during whole body yaw rotation in the dark at different static head positions. Rotations consisted of four cycles of symmetric sinusoidal and asymmetric oscillations. Self-motion perception was evaluated by measuring the ability of subjects to manually track a static remembered target. VOR was recorded separately and the slow phase eye position (SPEP) was computed. Three different head static yaw deviations (active and passive) relative to the trunk (0°, 45° to right and 45° to left) were examined. Active head deviations had a significant effect during asymmetric oscillation: the movement perception was enhanced when the head was kept turned toward the side of body rotation and decreased in the opposite direction. Conversely, passive head deviations had no effect on movement perception. Further, vibration (100 Hz) of the neck muscles splenius capitis and sternocleidomastoideus remarkably influenced perceived rotation during asymmetric oscillation. On the other hand, SPEP of VOR was modulated by active head deviation, but was not influenced by neck muscle vibration. Through its effects on motion perception and reflex gain, head position improved gaze stability and enhanced self-motion perception in the direction of the head deviation. Copyright © 2010 Elsevier B.V. All rights reserved.
Stereomotion speed perception is contrast dependent
NASA Technical Reports Server (NTRS)
Brooks, K.
2001-01-01
The effect of contrast on the perception of stimulus speed for stereomotion and monocular lateral motion was investigated for successive matches in random-dot stimuli. The familiar 'Thompson effect'--that a reduction in contrast leads to a reduction in perceived speed--was found in similar proportions for both binocular images moving in depth, and for monocular images translating laterally. This result is consistent with the idea that the monocular motion system has a significant input to the stereomotion system, and dominates the speed percept for approaching motion.
Accounting for direction and speed of eye motion in planning visually guided manual tracking.
Leclercq, Guillaume; Blohm, Gunnar; Lefèvre, Philippe
2013-10-01
Accurate motor planning in a dynamic environment is a critical skill for humans because we are often required to react quickly and adequately to the visual motion of objects. Moreover, we are often in motion ourselves, and this complicates motor planning. Indeed, the retinal and spatial motions of an object are different because of the retinal motion component induced by self-motion. Many studies have investigated motion perception during smooth pursuit and concluded that eye velocity is partially taken into account by the brain. Here we investigate whether the eye velocity during ongoing smooth pursuit is taken into account in the planning of visually guided manual tracking. We had 10 human participants manually track a target while in steady-state smooth pursuit toward another target, such that the difference between the retinal and spatial target motion directions could be large, depending on both the direction and the speed of the eye. We used a measure of initial arm movement direction to quantify whether motor planning occurred in retinal coordinates (not accounting for eye motion) or was spatially correct (incorporating eye velocity). Results showed that the eye velocity was nearly fully taken into account by the neuronal areas involved in the visuomotor velocity transformation (between 75% and 102%). In particular, these neuronal pathways accounted for the nonlinear effects due to the relative velocity between the target and the eye. In conclusion, the brain network transforming visual motion into a motor plan for manual tracking adequately uses extraretinal signals about eye velocity.
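The compensation logic discussed in this abstract (spatial motion = retinal motion plus eye velocity, weighted by a compensation gain) can be sketched as a small vector computation. The velocity vectors below are hypothetical; only the 0.75-1.02 gain range comes from the abstract.

```python
import math

# Sketch of reconstructing spatial target motion from retinal motion plus a
# gain-weighted eye-velocity signal (values hypothetical; the study reported
# compensation gains between 0.75 and 1.02).

def planned_direction(retinal_vel, eye_vel, gain=1.0):
    """Estimated spatial target velocity = retinal velocity + gain * eye
    velocity; returns the resulting movement direction in degrees."""
    vx = retinal_vel[0] + gain * eye_vel[0]
    vy = retinal_vel[1] + gain * eye_vel[1]
    return math.degrees(math.atan2(vy, vx))

retinal = (0.0, 5.0)  # deg/s, target motion on the retina (upward)
eye = (5.0, 0.0)      # deg/s, smooth pursuit to the right

print(round(planned_direction(retinal, eye, gain=0.0), 1))  # 90.0 -> purely retinal plan
print(round(planned_direction(retinal, eye, gain=1.0), 1))  # 45.0 -> fully spatial plan
```

A measured initial arm direction between these two extremes would correspond to a partial compensation gain, which is how the 75-102% figures can be interpreted.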
Amano, Kaoru; Kimura, Toshitaka; Nishida, Shin'ya; Takeda, Tsunehiro; Gomi, Hiroaki
2009-02-01
The human brain uses visual motion inputs not only for generating the subjective sensation of motion but also for directly guiding involuntary actions. For instance, during arm reaching, a large-field visual motion is quickly and involuntarily transformed into a manual response in the direction of visual motion (manual following response, MFR). Previous attempts to correlate motion-evoked cortical activities, revealed by brain imaging techniques, with conscious motion perception have resulted only in partial success. In contrast, here we show a surprising degree of similarity between the MFR and the population neural activity measured by magnetoencephalography (MEG). We measured the MFR and MEG induced by the same motion onset of a large-field sinusoidal drifting grating while varying the spatiotemporal frequency of the grating. The initial transient phase of these two responses had very similar spatiotemporal tunings. Specifically, both the MEG and MFR amplitudes increased as the spatial frequency was decreased to, at most, 0.05 c/deg, or as the temporal frequency was increased to, at least, 10 Hz. We also found in peak latency a quantitative agreement (approximately 100-150 ms) and correlated changes against spatiotemporal frequency changes between MEG and MFR. In comparison with these two responses, conscious visual motion detection is known to be most sensitive (i.e., have the lowest detection threshold) at higher spatial frequencies and have longer and more variable response latencies. Our results suggest a close relationship between the properties of involuntary motor responses and motion-evoked cortical activity as reflected by the MEG.
Neural dynamics of motion processing and speed discrimination.
Chey, J; Grossberg, S; Mingolla, E
1998-09-01
A neural network model of visual motion perception and speed discrimination is presented. The model shows how a distributed population code of speed tuning that realizes a size-speed correlation can be derived from the simplest mechanisms whereby activations of multiple spatially short-range filters of different size are transformed into speed-tuned cell responses. These mechanisms use transient cell responses to moving stimuli, output thresholds that covary with filter size, and competition. These mechanisms are proposed to occur in the V1-->MT cortical processing stream. The model reproduces empirically derived speed discrimination curves and simulates data showing how visual speed perception and discrimination can be affected by stimulus contrast, duration, dot density and spatial frequency. Model motion mechanisms are analogous to mechanisms that have been used to model 3-D form and figure-ground perception. The model forms the front end of a larger motion processing system that has been used to simulate how global motion capture occurs, and how spatial attention is drawn to moving forms. It provides a computational foundation for an emerging neural theory of 3-D form and motion perception.
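The size-speed correlation idea (output thresholds that covary with filter size, so faster stimuli recruit larger filters) can be caricatured in a few lines. The sketch below is a toy stand-in, not the model itself: filter sizes, the linear drive, and the readout rule are all assumptions made for illustration.

```python
# Toy sketch of a size-speed population code (illustrative only; parameters
# and the readout rule are assumptions, not those of the published model).
# Transient drive is taken as proportional to stimulus speed, and each
# filter's output threshold covaries with its size, so faster stimuli
# recruit progressively larger filters.

FILTER_SIZES = [1.0, 2.0, 4.0, 8.0]  # arbitrary units

def population_response(speed):
    """Each filter fires when the transient drive (speed) exceeds a
    size-covarying threshold; response is drive minus threshold."""
    return [max(0.0, speed - size) for size in FILTER_SIZES]

def speed_estimate(speed):
    """Read out speed as the response-weighted average of filter sizes:
    faster stimuli shift the population code toward larger filters."""
    resp = population_response(speed)
    total = sum(resp)
    if total == 0:
        return 0.0
    return sum(r * s for r, s in zip(resp, FILTER_SIZES)) / total

print(speed_estimate(3.0) < speed_estimate(6.0))  # True: estimate grows with speed
```

Even this caricature shows the key property: the population centroid moves monotonically with stimulus speed, which is what allows a distributed code to support speed discrimination.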
Residual perception of biological motion in cortical blindness.
Ruffieux, Nicolas; Ramon, Meike; Lao, Junpeng; Colombo, Françoise; Stacchi, Lisa; Borruat, François-Xavier; Accolla, Ettore; Annoni, Jean-Marie; Caldara, Roberto
2016-12-01
From birth, the human visual system shows a remarkable sensitivity for perceiving biological motion. This visual ability relies on a distributed network of brain regions and can be preserved even after damage of high-level ventral visual areas. However, it remains unknown whether this critical biological skill can withstand the loss of vision following bilateral striate damage. To address this question, we tested the categorization of human and animal biological motion in BC, a rare case of cortical blindness after anoxia-induced bilateral striate damage. The severity of his impairment, encompassing various aspects of vision (i.e., color, shape, face, and object recognition) and causing blind-like behavior, contrasts with a residual ability to process motion. We presented BC with static or dynamic point-light displays (PLDs) of human or animal walkers. These stimuli were presented either individually, or in pairs in two alternative forced choice (2AFC) tasks. When confronted with individual PLDs, the patient was unable to categorize the stimuli, irrespective of whether they were static or dynamic. In the 2AFC task, BC exhibited appropriate eye movements towards diagnostic information, but performed at chance level with static PLDs, in stark contrast to his ability to efficiently categorize dynamic biological agents. This striking ability to categorize biological motion when top-down information is provided is important for at least two reasons. First, it emphasizes the importance of assessing patients' (visual) abilities across a range of task constraints, which can reveal potential residual abilities that may in turn represent a key feature for patient rehabilitation. Second, our findings reinforce the view that the neural network processing biological motion can efficiently operate despite severely impaired low-level vision, positing our natural predisposition for processing dynamicity in biological agents as a robust feature of human vision.
Copyright © 2016 Elsevier Ltd. All rights reserved.
A Role for MST Neurons in Heading Estimation
NASA Technical Reports Server (NTRS)
Stone, L. S.; Perrone, J. A.
1994-01-01
A template model of human visual self-motion perception, which uses neurophysiologically realistic "heading detectors", is consistent with numerous human psychophysical results including the failure of humans to estimate their heading (direction of forward translation) accurately under certain visual conditions. We tested the model detectors with stimuli used by others in single-unit studies. The detectors showed emergent properties similar to those of MST neurons: (1) sensitivity to non-preferred flow: each detector is tuned to a specific combination of flow components and its response is systematically reduced by the addition of non-preferred flow; and (2) position invariance: the detectors maintain their apparent preference for particular flow components over large regions of their receptive fields. It has been argued that this latter property is incompatible with MST playing a role in heading perception. The model however demonstrates how neurons with the above response properties could still support accurate heading estimation within extrastriate cortical maps.
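The template idea behind such heading detectors can be sketched very simply: each candidate heading predicts a radial flow field expanding from its focus of expansion, and the detector whose template best matches the observed flow wins. The sketch below is a minimal illustration under stated assumptions (pure observer translation, a handful of sample points, and a simple negative-squared-error similarity in place of the detectors' actual tuning).

```python
# Sketch of template matching for heading estimation (illustrative only;
# assumes pure translation and uses a simple similarity metric, not the
# published detectors' tuning functions).

POINTS = [(-1.0, -1.0), (1.0, -1.0), (-1.0, 1.0), (1.0, 1.0), (0.5, -0.5)]

def flow(foe, points):
    """Radial flow vectors expanding away from the focus of expansion (FOE)."""
    return [(x - foe[0], y - foe[1]) for (x, y) in points]

def match(observed, template):
    """Similarity between two flow fields: negative sum of squared vector
    differences (a stand-in for a detector's template-match response)."""
    return -sum((ox - tx) ** 2 + (oy - ty) ** 2
                for (ox, oy), (tx, ty) in zip(observed, template))

candidates = [(-0.5, 0.0), (0.0, 0.0), (0.5, 0.0)]      # candidate headings
observed = flow((0.5, 0.0), POINTS)                      # true heading (0.5, 0.0)
best = max(candidates, key=lambda c: match(observed, flow(c, POINTS)))
print(best)  # (0.5, 0.0)
```

A population of such detectors, one per candidate heading, forms a map in which the peak response indicates the estimated direction of forward translation.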
Role of orientation reference selection in motion sickness, supplement 2S
NASA Technical Reports Server (NTRS)
Peterka, Robert J.; Black, F. Owen
1987-01-01
Previous experiments with moving platform posturography have shown that different people have varying abilities to resolve conflicts among vestibular, visual, and proprioceptive sensory signals. The conceptual basis of the present proposal hinges on the similarities between the space motion sickness problem and the sensory orientation reference selection problems associated with benign paroxysmal positional vertigo (BPPV) syndrome. These similarities include both an etiology related to abnormal vertical canal-otolith function and motion sickness initiating events provoked by pitch and roll head movements. The objectives are to explore and quantify the orientation reference selection abilities of subjects and the relation of this selection to motion sickness in humans. The overall objectives are to determine: whether motion sickness susceptibility is related to the sensory orientation reference selection abilities of subjects; whether abnormal vertical canal-otolith function is the source of abnormal posture control strategies, and whether it can be quantified by vestibular and oculomotor reflex measurements; and whether quantifiable measures of perception of vestibular and visual motion cues can be related to motion sickness susceptibility and to orientation reference selection ability.
Slow motion increases perceived intent
Caruso, Eugene M.; Burns, Zachary C.; Converse, Benjamin A.
2016-01-01
To determine the appropriate punishment for a harmful action, people must often make inferences about the transgressor’s intent. In courtrooms and popular media, such inferences increasingly rely on video evidence, which is often played in “slow motion.” Four experiments (n = 1,610) involving real surveillance footage from a murder or broadcast replays of violent contact in professional football demonstrate that viewing an action in slow motion, compared with regular speed, can cause viewers to perceive an action as more intentional. This slow motion intentionality bias occurred, in part, because slow motion video caused participants to feel like the actor had more time to act, even when they knew how much clock time had actually elapsed. Four additional experiments (n = 2,737) reveal that allowing viewers to see both regular speed and slow motion replay mitigates the bias, but does not eliminate it. We conclude that an empirical understanding of the effect of slow motion on mental state attribution should inform the life-or-death decisions that are currently based on tacit assumptions about the objectivity of human perception. PMID:27482091
Tilt and Translation Motion Perception during Off Vertical Axis Rotation
NASA Technical Reports Server (NTRS)
Wood, Scott J.; Reschke, Millard F.; Clement, Gilles
2006-01-01
The effect of stimulus frequency on tilt and translation motion perception was studied during constant velocity off-vertical axis rotation (OVAR), and compared to the effect of stimulus frequency on eye movements. Fourteen healthy subjects were rotated in darkness about their longitudinal axis 10 deg and 20 deg off-vertical at 0.125 Hz, and 20 deg off-vertical at 0.5 Hz. Oculomotor responses were recorded using videography, and perceived motion was evaluated using verbal reports and a joystick with four degrees of freedom (pitch and roll tilt, medial-lateral and anterior-posterior translation). During the lower frequency OVAR, subjects reported the perception of progressing along the edge of a cone. During higher frequency OVAR, subjects reported the perception of progressing along the edge of an upright cylinder. The modulation of both tilt recorded from the joystick and ocular torsion significantly increased as the tilt angle increased from 10 deg to 20 deg at 0.125 Hz, and then decreased at 0.5 Hz. Both tilt perception and torsion slightly lagged head orientation at 0.125 Hz. The phase lag of torsion increased at 0.5 Hz, while the phase of tilt perception did not change as a function of frequency. The amplitude of both translation perception recorded from the joystick and horizontal eye movements was negligible at 0.125 Hz and increased as a function of stimulus frequency. While the phase lead of horizontal eye movements decreased at 0.5 Hz, the phase of translation perception did not vary with stimulus frequency and was similar to the phase of tilt perception during all conditions. During dynamic linear acceleration in the absence of other sensory input (canal, vision), a change in stimulus frequency alone elicits similar changes in the amplitude of both self-motion perception and eye movements. However, in contrast to the eye movements, the phase of both perceived tilt and translation motion is not altered by stimulus frequency. We conclude that the neural processing to distinguish tilt and translation linear acceleration stimuli differs between eye movements and motion perception.
Discrimination of curvature from motion during smooth pursuit eye movements and fixation.
Ross, Nicholas M; Goettker, Alexander; Schütz, Alexander C; Braun, Doris I; Gegenfurtner, Karl R
2017-09-01
Smooth pursuit and motion perception have mainly been investigated with stimuli moving along linear trajectories. Here we studied the quality of pursuit movements to curved motion trajectories in human observers and examined whether the pursuit responses would be sensitive enough to discriminate various degrees of curvature. In a two-interval forced-choice task subjects pursued a Gaussian blob moving along a curved trajectory and then indicated in which interval the curve was flatter. We also measured discrimination thresholds for the same curvatures during fixation. Motion curvature had some specific effects on smooth pursuit properties: trajectories with larger amounts of curvature elicited lower open-loop acceleration, lower pursuit gain, and larger catch-up saccades compared with less curved trajectories. Initially, target motion curvatures were underestimated; however, ∼300 ms after pursuit onset pursuit responses closely matched the actual curved trajectory. We calculated perceptual thresholds for curvature discrimination, which were on the order of 1.5 degrees of visual angle (°) for a 7.9° curvature standard. Oculometric sensitivity to curvature discrimination based on the whole pursuit trajectory was quite similar to perceptual performance. Oculometric thresholds based on smaller time windows were higher. Thus smooth pursuit can quite accurately follow moving targets with curved trajectories, but temporal integration over longer periods is necessary to reach perceptual thresholds for curvature discrimination. NEW & NOTEWORTHY Even though motion trajectories in the real world are frequently curved, most studies of smooth pursuit and motion perception have investigated linear motion. We show that pursuit initially underestimates the curvature of target motion and is able to reproduce the target curvature ∼300 ms after pursuit onset. Temporal integration of target motion over longer periods is necessary for pursuit to reach the level of precision found in perceptual discrimination of curvature. Copyright © 2017 the American Physiological Society.
Thurman, Steven M; Lu, Hongjing
2014-01-01
Visual form analysis is fundamental to shape perception and likely plays a central role in perception of more complex dynamic shapes, such as moving objects or biological motion. Two primary form-based cues serve to represent the overall shape of an object: the spatial position and the orientation of locations along the boundary of the object. However, it is unclear how the visual system integrates these two sources of information in dynamic form analysis, and in particular how the brain resolves ambiguities due to sensory uncertainty and/or cue conflict. In the current study, we created animations of sparsely-sampled dynamic objects (human walkers or rotating squares) comprised of oriented Gabor patches in which orientation could either coincide or conflict with information provided by position cues. When the cues were incongruent, we found a characteristic trade-off between position and orientation information whereby position cues increasingly dominated perception as the relative uncertainty of orientation increased and vice versa. Furthermore, we found no evidence for differences in the visual processing of biological and non-biological objects, casting doubt on the claim that biological motion may be specialized in the human brain, at least in specific terms of form analysis. To explain these behavioral results quantitatively, we adopt a probabilistic template-matching model that uses Bayesian inference within local modules to estimate object shape separately from either spatial position or orientation signals. The outputs of the two modules are integrated with weights that reflect individual estimates of subjective cue reliability, and integrated over time to produce a decision about the perceived dynamics of the input data. Results of this model provided a close fit to the behavioral data, suggesting a mechanism in the human visual system that approximates rational Bayesian inference to integrate position and orientation signals in dynamic form analysis.
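The reliability-weighted integration described above follows the standard Bayesian cue-combination rule. A minimal numeric sketch (the means and uncertainties are arbitrary illustrative values, not data from the study):

```python
import math

def integrate_cues(mu_pos, sigma_pos, mu_ori, sigma_ori):
    """Inverse-variance (reliability-weighted) fusion of a position-based and
    an orientation-based shape estimate; the weight shifts toward whichever
    cue is less uncertain, as in standard Bayesian cue integration."""
    w_pos = 1.0 / sigma_pos ** 2
    w_ori = 1.0 / sigma_ori ** 2
    mu = (w_pos * mu_pos + w_ori * mu_ori) / (w_pos + w_ori)
    sigma = math.sqrt(1.0 / (w_pos + w_ori))  # fused estimate beats either cue alone
    return mu, sigma

mu_eq, _ = integrate_cues(0.0, 1.0, 10.0, 1.0)  # equal reliability: midpoint
mu_sk, _ = integrate_cues(0.0, 1.0, 10.0, 3.0)  # noisy orientation: pulled toward position
```

This reproduces the trade-off reported behaviorally: as orientation uncertainty grows, the fused percept is increasingly dominated by the position cue.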
When eyes drive hand: Influence of non-biological motion on visuo-motor coupling.
Thoret, Etienne; Aramaki, Mitsuko; Bringoux, Lionel; Ystad, Sølvi; Kronland-Martinet, Richard
2016-01-26
Many studies have stressed that not only human movement execution but also the perception of motion is constrained by specific kinematics. For instance, it has been shown that visuo-manual tracking of a spotlight is optimal when the spotlight motion complies with biological rules such as the so-called 1/3 power law, which establishes the covariation between the velocity and the trajectory curvature of the movement. The visual or kinesthetic perception of a geometry induced by motion has also been shown to be constrained by such biological rules. In the present study, we investigated whether the geometry induced by the visuo-motor coupling of biological movements is also constrained by the 1/3 power law under visual open-loop control, i.e., without visual feedback of arm displacement. We showed that when someone was asked to synchronize a drawing movement with a visual spotlight following a circular shape, the geometry of the reproduced shape was distorted by visual kinematics that did not respect the 1/3 power law. In particular, elliptical shapes were reproduced when the circle was traced with kinematics corresponding to an ellipse. Moreover, the distortions observed here were larger than in perceptual tasks, stressing the role of motor attractors in such visuo-motor coupling. Finally, by investigating the direct influence of visual kinematics on motor reproduction, our results reconcile previous knowledge on the sensorimotor coupling of biological motions with external stimuli and provide evidence for the amodal encoding of biological motion. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
The Perception of Biological and Mechanical Motion in Female Fragile X Premutation Carriers
ERIC Educational Resources Information Center
Keri, Szabolcs; Benedek, Gyorgy
2010-01-01
Previous studies reported impaired visual information processing in patients with fragile X syndrome and in premutation carriers. In this study, we assessed the perception of biological motion (a walking point-light character) and mechanical motion (a rotating shape) in 25 female fragile X premutation carriers and in 20 healthy non-carrier…
Comparison of Flight Simulators Based on Human Motion Perception Metrics
NASA Technical Reports Server (NTRS)
Valente Pais, Ana R.; Correia Gracio, Bruno J.; Kelly, Lon C.; Houck, Jacob A.
2015-01-01
In flight simulation, motion filters are used to transform aircraft motion into simulator motion. When looking for the best match between visual and inertial amplitude in a simulator, researchers have found that there is a range of inertial amplitudes, rather than a single inertial value, that is perceived by subjects as optimal. This zone, hereafter referred to as the optimal zone, seems to correlate to the perceptual coherence zones measured in flight simulators. However, no studies were found in which these two zones were compared. This study investigates the relation between the optimal and the coherence zone measurements within and between different simulators. Results show that for the sway axis, the optimal zone lies within the lower part of the coherence zone. In addition, it was found that, whereas the width of the coherence zone depends on the visual amplitude and frequency, the width of the optimal zone remains constant.
A Role for Mouse Primary Visual Cortex in Motion Perception.
Marques, Tiago; Summers, Mathew T; Fioreze, Gabriela; Fridman, Marina; Dias, Rodrigo F; Feller, Marla B; Petreanu, Leopoldo
2018-06-04
Visual motion is an ethologically important stimulus throughout the animal kingdom. In primates, motion perception relies on specific higher-order cortical regions. Although mouse primary visual cortex (V1) and higher-order visual areas show direction-selective (DS) responses, their role in motion perception remains unknown. Here, we tested whether V1 is involved in motion perception in mice. We developed a head-fixed discrimination task in which mice must report their perceived direction of motion from random dot kinematograms (RDKs). After training, mice made around 90% correct choices for stimuli with high coherence and performed significantly above chance for 16% coherent RDKs. Accuracy increased with both stimulus duration and visual field coverage of the stimulus, suggesting that mice in this task integrate motion information in time and space. Retinal recordings showed that thalamically projecting On-Off DS ganglion cells display DS responses when stimulated with RDKs. Two-photon calcium imaging revealed that neurons in layer (L) 2/3 of V1 display strong DS tuning in response to this stimulus. Thus, RDKs engage motion-sensitive retinal circuits as well as downstream visual cortical areas. Contralateral V1 activity played a key role in this motion direction discrimination task because its reversible inactivation with muscimol led to a significant reduction in performance. Neurometric-psychometric comparisons showed that an ideal observer could solve the task with the information encoded in DS L2/3 neurons. Motion discrimination of RDKs presents a powerful behavioral tool for dissecting the role of retino-forebrain circuits in motion processing. Copyright © 2018 Elsevier Ltd. All rights reserved.
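The random dot kinematogram (RDK) stimulus used in this task is simple to parameterize, and an ideal-observer read-out on it is easy to sketch. The dot count, seed, and vector-average read-out below are assumptions for illustration, not the study's analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

def rdk_directions(n_dots, coherence, signal_dir=0.0):
    """Per-dot motion directions for an RDK frame: a `coherence` fraction of
    dots move in `signal_dir` (radians); the rest move in random directions."""
    n_signal = int(round(coherence * n_dots))
    noise = rng.uniform(0.0, 2.0 * np.pi, n_dots - n_signal)
    return np.concatenate([np.full(n_signal, signal_dir), noise])

def vector_average_direction(dirs):
    """Ideal-observer style read-out: direction of the mean motion vector."""
    return float(np.arctan2(np.sin(dirs).mean(), np.cos(dirs).mean()))

dirs = rdk_directions(1000, 0.16)          # 16% coherence, as in the mouse task
estimate = vector_average_direction(dirs)  # close to the 0-rad signal direction
```

Even at 16% coherence the mean motion vector points near the signal direction, which is why accuracy in the task can remain above chance at low coherence given enough spatial and temporal integration.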
Selectivity to Translational Egomotion in Human Brain Motion Areas
Pitzalis, Sabrina; Sdoia, Stefano; Bultrini, Alessandro; Committeri, Giorgia; Di Russo, Francesco; Fattori, Patrizia; Galletti, Claudio; Galati, Gaspare
2013-01-01
The optic flow generated when a person moves through the environment can be locally decomposed into several basic components, including radial, circular, translational and spiral motion. Since their analysis plays an important part in the visual perception and control of locomotion and posture, it is likely that some brain regions in the primate dorsal visual pathway are specialized to distinguish among them. The aim of this study is to explore the sensitivity to different types of egomotion-compatible visual stimulation in the human motion-sensitive regions of the brain. Event-related fMRI experiments, 3D motion and wide-field stimulation, functional localizers and brain mapping methods were used to study the sensitivity of six distinct motion areas (V6, MT, MST+, V3A, CSv and an Intra-Parietal Sulcus motion [IPSmot] region) to different types of optic flow stimuli. Results show that only areas V6, MST+ and IPSmot are specialized in distinguishing among the various types of flow patterns, with a high response for the translational flow, which was maximal in V6 and IPSmot and less marked in MST+. Given that during egomotion the translational optic flow conveys differential information about the near and far external objects, areas V6 and IPSmot likely process visual egomotion signals to extract information about the relative distance of objects with respect to the observer. Since area V6 is also involved in distinguishing object-motion from self-motion, it could provide information about the location in space of moving and static objects during self-motion, particularly in a dynamically unstable environment. PMID:23577096
Stimulus factors in motion perception and spatial orientation
NASA Technical Reports Server (NTRS)
Post, R. B.; Johnson, C. A.
1984-01-01
The Malcolm horizon utilizes a large projected light stimulus, the Peripheral Vision Horizon Device (PVHD), as an attitude indicator in order to achieve a more compelling sense of roll than is obtained with smaller devices. The basic principle is that the larger stimulus is more similar to visibility of a real horizon during roll, and does not require fixation and attention to the degree that smaller displays do. Successful implementation of such a device requires adjustment of the parameters of the visual stimulus so that its effects on motion perception and spatial orientation are optimized. With this purpose in mind, the effects of relevant image variables on the perception of object motion, self-motion and spatial orientation are reviewed.
NASA Technical Reports Server (NTRS)
Parker, D. E.; Reschke, M. F.; Von Gierke, H. E.; Lessard, C. S.
1987-01-01
The preflight adaptation trainer (PAT) was designed to produce rearranged relationships between visual and otolith signals analogous to those experienced in space. Investigations have been undertaken with three prototype trainers. The results indicated that exposure to the PAT sensory rearrangement altered self-motion perception, induced motion sickness, and changed the amplitude and phase of the horizontal eye movements evoked by roll stimulation. However, the changes were inconsistent.
Using virtual reality to augment perception, enhance sensorimotor adaptation, and change our minds
Wright, W. Geoffrey
2014-01-01
Technological advances that involve human sensorimotor processes can have both intended and unintended effects on the central nervous system (CNS). This mini review focuses on the use of virtual environments (VE) to augment brain functions by enhancing perception, eliciting automatic motor behavior, and inducing sensorimotor adaptation. VE technology is becoming increasingly prevalent in medical rehabilitation, training simulators, gaming, and entertainment. Although these VE applications have often been shown to optimize outcomes, whether it be to speed recovery, reduce training time, or enhance immersion and enjoyment, there are inherent drawbacks to environments that can potentially change sensorimotor calibration. Across numerous VE studies over the years, we have investigated the effects of combining visual and physical motion on perception, motor control, and adaptation. Recent results from our research involving exposure to dynamic passive motion within a visually-depicted VE reveal that short-term exposure to augmented sensorimotor discordance can result in systematic aftereffects that last beyond the exposure period. Whether these adaptations are advantageous or not remains to be seen. Benefits as well as risks of using VE-driven sensorimotor stimulation to enhance brain processes will be discussed. PMID:24782724
Anthropomorphism influences perception of computer-animated characters’ actions
Hodgins, Jessica; Kawato, Mitsuo
2007-01-01
Computer-animated characters are common in popular culture and have begun to be used as experimental tools in social cognitive neurosciences. Here we investigated how the appearance of these characters influences perception of their actions. Subjects were presented with different characters animated either with motion data captured from human actors or by interpolating between poses (keyframes) designed by an animator, and were asked to categorize the motion as biological or artificial. The response bias towards 'biological', derived from Signal Detection Theory, decreases with the characters' anthropomorphism, while sensitivity is only affected by the simplest rendering style, point-light displays. fMRI showed that the response bias correlates positively with activity in the mentalizing network, including the left temporoparietal junction and anterior cingulate cortex, and negatively with regions sustaining motor resonance. The absence of a significant effect of the characters on brain activity suggests individual differences in the neural responses to unfamiliar artificial agents. While computer-animated characters are invaluable tools to investigate the neural bases of social cognition, further research is required to better understand how factors such as anthropomorphism affect their perception, in order to optimize their appearance for entertainment, research or therapeutic purposes. PMID:18985142
Modeling a space-variant cortical representation for apparent motion.
Wurbs, Jeremy; Mingolla, Ennio; Yazdanbakhsh, Arash
2013-08-06
Receptive field sizes of neurons in early primate visual areas increase with eccentricity, as does temporal processing speed. The fovea is evidently specialized for slow, fine movements while the periphery is suited for fast, coarse movements. In either the fovea or periphery discrete flashes can produce motion percepts. Grossberg and Rudd (1989) used traveling Gaussian activity profiles to model long-range apparent motion percepts. We propose a neural model constrained by physiological data to explain how signals from retinal ganglion cells to V1 affect the perception of motion as a function of eccentricity. Our model incorporates cortical magnification, receptive field overlap and scatter, and spatial and temporal response characteristics of retinal ganglion cells for cortical processing of motion. Consistent with the finding of Baker and Braddick (1985), in our model the maximum flash distance that is perceived as an apparent motion (Dmax) increases linearly as a function of eccentricity. Baker and Braddick (1985) made qualitative predictions about the functional significance of both stimulus and visual system parameters that constrain motion perception, such as an increase in the range of detectable motions as a function of eccentricity and the likely role of higher visual processes in determining Dmax. We generate corresponding quantitative predictions for those functional dependencies for individual aspects of motion processing. Simulation results indicate that the early visual pathway can explain the qualitative linear increase of Dmax data without reliance on extrastriate areas, but that those higher visual areas may serve as a modulatory influence on the exact Dmax increase.
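The linear dependence of Dmax on eccentricity reported above can be written as a one-line model. The intercept and slope here are illustrative constants, not values fitted by the model or by Baker and Braddick (1985):

```python
def dmax(eccentricity_deg, d0=0.25, slope=0.12):
    """Dmax (largest flash displacement still perceived as apparent motion, in
    degrees) modeled as a linear function of eccentricity, consistent with
    receptive-field sizes that grow with eccentricity under cortical
    magnification. d0 and slope are illustrative, not fitted values."""
    return d0 + slope * eccentricity_deg

foveal = dmax(0.0)        # fine, slow motion resolved near fixation
peripheral = dmax(20.0)   # coarser, faster motion tolerated in the periphery
```

The point of the model is that this linear growth falls out of early-pathway properties alone, with extrastriate areas at most modulating the slope.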
Hu, Bin; Yue, Shigang; Zhang, Zhuhong
All complex motion patterns can be decomposed into several elements, including translation, expansion/contraction, and rotational motion. In biological vision systems, scientists have found that specific types of visual neurons have specific preferences for each of the three motion elements. There are computational models of translation and expansion/contraction perception; however, little has been done in the past to create computational models for rotational motion perception. To fill this gap, we propose a neural network that utilizes a specific spatiotemporal arrangement of asymmetric laterally inhibited direction-selective neural networks (DSNNs) for rotational motion perception. The proposed neural network consists of two parts: a presynaptic part and a postsynaptic part. In the presynaptic part, a number of laterally inhibited DSNNs extract directional visual cues. In the postsynaptic part, similar to the arrangement of the directional columns in the cerebral cortex, these direction-selective neurons are arranged in cyclic order to perceive rotational motion cues. In the postsynaptic network, the delayed excitation from each direction-selective neuron is multiplied by the gathered excitation from this neuron and its unilateral counterparts, depending on which rotation, clockwise (cw) or counter-cw (ccw), is to be perceived. Systematic experiments under various conditions and settings have been carried out and validated the robustness and reliability of the proposed neural network in detecting cw or ccw rotational motion. This research is a critical step further toward dynamic visual information processing.
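The delay-and-multiply read-out over a cyclic arrangement of units can be sketched with a correlation-type (Reichardt-like) toy model. This is an illustration of the general principle, not the paper's DSNN architecture; the unit count, stimulus, and read-out are assumptions:

```python
import numpy as np

def rotation_direction(frames):
    """Toy cyclic read-out: N units sit evenly around a circle; the delayed
    excitation of each unit is multiplied by the current excitation of its
    clockwise (or counter-clockwise) neighbor, and the products are summed.
    `frames` is a (T, N) array of unit activations over time."""
    prev, curr = frames[:-1], frames[1:]
    cw = np.sum(prev * np.roll(curr, -1, axis=1))   # activity stepping to the next unit cw
    ccw = np.sum(prev * np.roll(curr, 1, axis=1))   # activity stepping the other way
    return "cw" if cw > ccw else "ccw"

# simulate a bump of activity stepping around 8 units in the cw direction
n, t = 8, 16
frames = np.zeros((t, n))
for step in range(t):
    frames[step, step % n] = 1.0

direction = rotation_direction(frames)
```

Because the multiplication only yields a large product when delayed and current excitation line up along one cyclic direction, the same frames played in reverse are classified as the opposite rotation.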
Effect of contrast on human speed perception
NASA Technical Reports Server (NTRS)
Stone, Leland S.; Thompson, Peter
1992-01-01
This study is part of an ongoing collaborative research effort between the Life Science and Human Factors Divisions at NASA ARC to measure the accuracy of human motion perception in order to predict potential errors in human perception/performance and to facilitate the design of display systems that minimize the effects of such deficits. The study describes how contrast manipulations can produce significant errors in human speed perception. Specifically, when two simultaneously presented parallel gratings are moving at the same speed within stationary windows, the lower-contrast grating appears to move more slowly. This contrast-induced misperception of relative speed is evident across a wide range of contrasts (2.5-50 percent) and does not appear to saturate (e.g., a 50 percent contrast grating appears slower than a 70 percent contrast grating moving at the same speed). The misperception is large: a 70 percent contrast grating must, on average, be slowed by 35 percent to match a 10 percent contrast grating moving at 2 deg/sec (N = 6). Furthermore, it is largely independent of the absolute contrast level and is a quasilinear function of log contrast ratio. A preliminary parametric study shows that, although spatial frequency has little effect, the relative orientation of the two gratings is important. Finally, the effect depends on the temporal presentation of the stimuli: the effects of contrast on perceived speed appear lessened when the stimuli to be matched are presented sequentially. These data constrain both physiological models of visual cortex and models of human performance. We conclude that viewing conditions that affect contrast, such as fog, may cause significant errors in speed judgments.
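The "quasilinear function of log contrast ratio" can be captured by a one-parameter descriptive model. The gain `k` below is an illustrative constant chosen for the sketch, not a value fitted to the study's data:

```python
import math

def perceived_speed_ratio(c_test, c_ref, k=0.25):
    """Relative perceived speed of a test grating vs a reference at the same
    physical speed, modeled as quasilinear in log contrast ratio: lower
    contrast looks slower. k is an illustrative gain, not a fitted value."""
    return 1.0 + k * math.log10(c_test / c_ref)

# a 10% contrast grating against a 70% reference at the same physical speed
ratio = perceived_speed_ratio(0.10, 0.70)  # < 1: the low-contrast grating appears slower
```

Because the model depends only on the contrast ratio, it is consistent with the reported independence from absolute contrast level.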
Type of featural attention differentially modulates hMT+ responses to illusory motion aftereffects.
Castelo-Branco, Miguel; Kozak, Lajos R; Formisano, Elia; Teixeira, João; Xavier, João; Goebel, Rainer
2009-11-01
Activity in the human motion complex (hMT(+)/V5) is related to the perception of motion, be it real surface motion or an illusion of motion such as apparent motion (AM) or motion aftereffect (MAE). It has been a long-standing debate whether illusory motion-related activations in hMT(+) represent the motion itself or attention to it. We have asked whether hMT(+) responses to MAEs are present when shifts in arousal are suppressed and attention is focused on concurrent motion versus nonmotion features. Significant enhancement of hMT(+) activity was observed during MAEs when attention was focused either on concurrent spatial angle or color features. This observation was confirmed by direct comparison of adapting (MAE inducing) versus nonadapting conditions. In contrast, this effect was diminished when subjects had to report on concomitant speed changes of superimposed AM. The same finding was observed for concomitant orthogonal real motion (RM), suggesting that selective attention to concurrent illusory or real motion was interfering with the saliency of MAE signals in hMT(+). We conclude that MAE-related changes in the global activity of hMT(+) are present provided selective attention is not focused on an interfering feature such as concurrent motion. Accordingly, there is a genuine MAE-related motion signal in hMT(+) that is explained neither by shifts in arousal nor by selective attention.
NASA Technical Reports Server (NTRS)
Beaton, K. H.; Holly, J. E.; Clement, G. R.; Wood, Scott J.
2009-01-01
Previous studies have demonstrated an effect of frequency on the gain of tilt and translation perception. Results from different motion paradigms are often combined to extend the stimulus frequency range. For example, Off-Vertical Axis Rotation (OVAR) and Variable Radius Centrifugation (VRC) are useful to test low frequencies of linear acceleration at amplitudes that would require impractical sled lengths. The purpose of this study was to compare roll-tilt and lateral translation motion perception in 12 healthy subjects across four paradigms: OVAR, VRC, sled translation and rotation about an earth-horizontal axis. Subjects were oscillated in darkness at six frequencies from 0.01875 to 0.6 Hz (peak acceleration equivalent to 10 deg, less for sled motion below 0.15 Hz). Subjects verbally described the amplitude of perceived tilt and translation, and used a joystick to indicate the direction of motion. Consistent with previous reports, tilt perception gain decreased as a function of stimulus frequency in the motion paradigms without concordant canal tilt cues (OVAR, VRC and Sled). Translation perception gain was negligible at low stimulus frequencies and increased at higher frequencies. There were no significant differences between the phase of tilt and translation, nor did the phase significantly vary across stimulus frequency. There were differences in perception gain across the different paradigms. Paradigms that included actual tilt stimuli had the larger tilt gains, and paradigms that included actual translation stimuli had larger translation gains. In addition, the frequency at which there was a crossover of tilt and translation gains appeared to vary across motion paradigm between 0.15 and 0.3 Hz. Since the linear acceleration in the head lateral plane was equivalent across paradigms, differences in gain may be attributable to the presence of linear accelerations in orthogonal directions and/or cognitive aspects based on the expected motion paths.
Tanaka, Yoshiyuki; Mizoe, Genki; Kawaguchi, Tomohiro
2015-01-01
This paper proposes a simple diagnostic methodology for assessing proprioceptive/kinesthetic sensation using a robotic device. The ability to perceive virtual frictional forces is examined during operation of the robotic device by the hand at a uniform slow velocity along a virtual straight/circular path. Experimental results from healthy subjects demonstrate that the percentage of correct answers in the designed perceptual tests changes with the motion direction as well as with the arm configuration and the HFM (human force manipulability) measure. These results suggest that the proposed methodology could be applied to the early detection of neuromuscular/neurological disorders.
Spatiotemporal Processing in Crossmodal Interactions for Perception of the External World: A Review
Hidaka, Souta; Teramoto, Wataru; Sugita, Yoichi
2015-01-01
Research regarding crossmodal interactions has garnered much interest in the last few decades. A variety of studies have demonstrated that multisensory information (vision, audition, tactile sensation, and so on) can perceptually interact with each other in the spatial and temporal domains. Findings regarding crossmodal interactions in the spatiotemporal domain (i.e., motion processing) have also been reported, with updates in the last few years. In this review, we summarize past and recent findings on spatiotemporal processing in crossmodal interactions regarding perception of the external world. A traditional view regarding crossmodal interactions holds that vision is superior to audition in spatial processing, but audition is dominant over vision in temporal processing. Similarly, vision is considered to have dominant effects over the other sensory modalities (i.e., visual capture) in spatiotemporal processing. However, recent findings demonstrate that sound could have a driving effect on visual motion perception. Moreover, studies regarding perceptual associative learning reported that, after association is established between a sound sequence without spatial information and visual motion information, the sound sequence could trigger visual motion perception. Other sensory information, such as motor action or smell, has also exhibited similar driving effects on visual motion perception. Additionally, recent brain imaging studies demonstrate that similar activation patterns could be observed in several brain areas, including the motion processing areas, between spatiotemporal information from different sensory modalities. Based on these findings, we suggest that multimodal information could mutually interact in spatiotemporal processing in the percept of the external world and that common perceptual and neural underlying mechanisms would exist for spatiotemporal processing. PMID:26733827
ERIC Educational Resources Information Center
Lindemann, Oliver; Bekkering, Harold
2009-01-01
In 3 experiments, the authors investigated the bidirectional coupling of perception and action in the context of object manipulations and motion perception. Participants prepared to grasp an X-shaped object along one of its 2 diagonals and to rotate it in a clockwise or a counterclockwise direction. Action execution had to be delayed until the…
Altered perception of apparent motion in schizophrenia spectrum disorder.
Tschacher, Wolfgang; Dubouloz, Priscilla; Meier, Rahel; Junghan, Uli
2008-06-30
Apparent motion (AM), the Gestalt perception of motion in the absence of physical motion, was used to study perceptual organization and neurocognitive binding in schizophrenia. Associations between AM perception and psychopathology, as well as meaningful subgroups, were sought. Circular and stroboscopic AM stimuli were presented to 68 schizophrenia spectrum patients and healthy participants. Psychopathology was measured using the Positive and Negative Syndrome Scale (PANSS). Psychopathology was related to AM perception differentially: positive and disorganization symptoms were linked to reduced Gestalt stability, whereas negative symptoms, excitement, and depression had opposite regression weights. Dimensions of psychopathology thus have opposing effects on Gestalt perception, and AM perception was generally found to be closely associated with psychopathology. No difference existed between patients and controls, but two latent classes were found. Class A members, who had low levels of AM stability, made up the majority of inpatients and control subjects; such participants were generally young and male, with short reaction times. Class B typically contained outpatients and some control subjects; participants in class B were older and showed longer reaction times. Hence, AM perceptual dysfunctions are not specific to schizophrenia, yet AM may be a promising stage marker.
NASA Technical Reports Server (NTRS)
Bishu, Ram R.; Bronkema, Lisa
1993-01-01
Human capabilities such as dexterity, manipulability, and tactile perception are unique and render the hands a very versatile, effective, and multipurpose tool. This is especially true in environments such as the EVA environment. However, with the use of protective EVA gloves, there is much evidence to suggest that human performance decreases. In order to determine the nature and cause of this performance decrement, several performance tests were run to study the effects of gloves on strength, tactile feedback, and range of motion. Tactile sensitivity was measured as a function of grip strength, and the results are discussed. Equipment developed to measure finger range of motion along with corresponding finger strength values is also discussed. The results of these studies have useful implications for improved glove design.
Vestibular signals in primate cortex for self-motion perception.
Gu, Yong
2018-04-21
The vestibular peripheral organs in our inner ears detect transient motion of the head in everyday life. This information is sent to the central nervous system for automatic processes such as vestibulo-ocular reflexes, balance and postural control, and higher cognitive functions including perception of self-motion and spatial orientation. Recent neurophysiological studies have discovered a prominent vestibular network in the primate cerebral cortex. Many of the areas involved are multisensory: their neurons are modulated by both vestibular signals and visual optic flow, potentially facilitating more robust heading estimation through cue integration. Combining psychophysics, computation, physiological recording and causal manipulation techniques, recent work has addressed both the encoding and decoding of vestibular signals for self-motion perception. Copyright © 2018. Published by Elsevier Ltd.
Human dynamic orientation model applied to motion simulation. M.S. Thesis
NASA Technical Reports Server (NTRS)
Borah, J. D.
1976-01-01
The Ormsby model of dynamic orientation, in the form of a discrete time computer program was used to predict non-visually induced sensations during an idealized coordinated aircraft turn. To predict simulation fidelity, the Ormsby model was used to assign penalties for incorrect attitude and angular rate perceptions. It was determined that a three rotational degree of freedom simulation should remain faithful to attitude perception even at the expense of incorrect angular rate sensations. Implementing this strategy, a simulation profile for the idealized turn was designed for a Link GAT-1 trainer. A simple optokinetic display was added to improve the fidelity of roll rate sensations.
At the Limit: Introducing Energy with Human Senses
NASA Astrophysics Data System (ADS)
Stinken, Lisa; Heusler, Stefan; Carmesin, Hans-Otto
2016-12-01
Energy belongs to the core ideas of the physics curriculum. But at the same time, energy is one of the most complex topics in science education since it occurs in multiple ways, such as motion, sound, light, and thermal energy. It can neither be destroyed nor created, but only converted. Due to the variety of relevant scales and abstractness of the term energy, the question arises how to introduce energy at the introductory physics level. The aim of this article is to demonstrate how the concept of energy can become meaningful in the context of the human senses. Three simple experiments to investigate the minimal amount of energy that is required to generate a sensory perception are presented. In this way students can learn that even different sensory perceptions can be compared by using energy as the unifying concept.
NASA Technical Reports Server (NTRS)
Hu, Senqi; Grant, Wanda F.; Stern, Robert M.; Koch, Kenneth L.
1991-01-01
Fifty-two subjects were exposed to a rotating optokinetic drum. Ten of these subjects who became motion sick during the first session completed two additional sessions. Subjects' symptoms of motion sickness, perception of self-motion, electrogastrograms (EGGs), heart rate, mean successive differences of R-R intervals (RRI), and skin conductance were recorded for each session. The results from the first session indicated that the development of motion sickness was accompanied by increased EGG 4-9 cpm activity (gastric tachyarrhythmia), decreased mean successive differences of RRI, increased skin conductance levels, and increased self-motion perception. The results from the subjects who had three repeated sessions showed that 4-9 cpm EGG activity, skin conductance levels, perception of self-motion, and symptoms of motion sickness all increased significantly during the drum rotation period of the first session, but increased significantly less during the following sessions. Mean successive differences of RRI decreased significantly during the drum rotation period of the first session, but decreased significantly less during the following sessions. These results show that the development of motion sickness is accompanied by an increase in gastric tachyarrhythmia, increased sympathetic activity, and decreased parasympathetic activity, and that adaptation to motion sickness is accompanied by recovery of autonomic nervous system balance.
Tuning self-motion perception in virtual reality with visual illusions.
Bruder, Gerd; Steinicke, Frank; Wieland, Phil; Lappe, Markus
2012-07-01
Motion perception in immersive virtual environments significantly differs from the real world. For example, previous work has shown that users tend to underestimate travel distances in virtual environments (VEs). As a solution to this problem, researchers proposed to scale the mapped virtual camera motion relative to the tracked real-world movement of a user until real and virtual motion are perceived as equal, i.e., real-world movements could be mapped with a larger gain to the VE in order to compensate for the underestimation. However, introducing discrepancies between real and virtual motion can become a problem, in particular, due to misalignments of both worlds and distorted space cognition. In this paper, we describe a different approach that introduces apparent self-motion illusions by manipulating optic flow fields during movements in VEs. These manipulations can affect self-motion perception in VEs, but omit a quantitative discrepancy between real and virtual motions. In particular, we consider to which regions of the virtual view these apparent self-motion illusions can be applied, i.e., the ground plane or peripheral vision. Therefore, we introduce four illusions and show in experiments that optic flow manipulation can significantly affect users' self-motion judgments. Furthermore, we show that with such manipulations of optic flow fields the underestimation of travel distances can be compensated.
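The gain-based remapping that this abstract describes, scaling virtual camera motion relative to tracked real-world movement, can be sketched in a few lines. The function name and the gain value below are illustrative, not taken from the paper.

```python
import numpy as np

def apply_translation_gain(tracked_positions, gain):
    """Map tracked real-world positions into the virtual environment.

    A gain > 1 amplifies virtual travel relative to real walking, the
    compensation strategy the abstract describes; gain == 1 is a
    one-to-one mapping.
    """
    tracked_positions = np.asarray(tracked_positions, dtype=float)
    # Scale displacements from the starting point, not absolute
    # coordinates, so the user's initial position is preserved.
    origin = tracked_positions[0]
    return origin + gain * (tracked_positions - origin)

# A user walks 4 m in the real world; with a gain of 1.25 the virtual
# camera travels 5 m along the same path.
real_path = [[0.0, 0.0], [2.0, 0.0], [4.0, 0.0]]
virtual_path = apply_translation_gain(real_path, gain=1.25)
```

The optic-flow illusions proposed in the paper avoid exactly this kind of quantitative discrepancy between real and virtual displacement, which is what distinguishes them from gain manipulation.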
Shapiro, Arthur; Lu, Zhong-Lin; Huang, Chang-Bing; Knight, Emily; Ennis, Robert
2010-10-13
The human visual system does not treat all parts of an image equally: the central segments of an image, which fall on the fovea, are processed with a higher resolution than the segments that fall in the visual periphery. Even though the differences between foveal and peripheral resolution are large, these differences do not usually disrupt our perception of seamless visual space. Here we examine a motion stimulus in which the shift from foveal to peripheral viewing creates a dramatic spatial/temporal discontinuity. The stimulus consists of a descending disk (global motion) with an internal moving grating (local motion). When observers view the disk centrally, they perceive both global and local motion (i.e., observers see the disk's vertical descent and the internal spinning). When observers view the disk peripherally, the internal portion appears stationary, and the disk appears to descend at an angle. The angle of perceived descent increases as the observer views the stimulus from further in the periphery. We examine the first- and second-order information content in the display with the use of a three-dimensional Fourier analysis and show how our results can be used to describe perceived spatial/temporal discontinuities in real-world situations. The perceived shift of the disk's direction in the periphery is consistent with a model in which foveal processing separates first- and second-order motion information while peripheral processing integrates first- and second-order motion information. We argue that the perceived distortion may influence real-world visual observations. To this end, we present a hypothesis and analysis of the perception of the curveball and rising fastball in the sport of baseball. The curveball is a physically measurable phenomenon: the imbalance of forces created by the ball's spin causes the ball to deviate from a straight line and to follow a smooth parabolic path. 
However, the curveball is also a perceptual puzzle because batters often report that the flight of the ball undergoes a dramatic and nearly discontinuous shift in position as the ball nears home plate. We suggest that the perception of a discontinuous shift in position results from differences between foveal and peripheral processing.
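A reduced, two-dimensional version of the Fourier analysis mentioned above can be sketched: in a space-time movie of a drifting grating, the (first-order) motion energy concentrates at a single spatiotemporal frequency pair, and the ratio of temporal to spatial frequency gives the drift speed. The same logic extends to the full (x, y, t) spectrum used in the paper. All stimulus parameters below are illustrative.

```python
import numpy as np

# Space-time movie of a drifting sinusoidal grating:
# luminance(t, x) = sin(2*pi*(fx*x - ft*t)). In the (t, x) Fourier
# domain its energy sits at the conjugate pair (+fx, -ft)/(-fx, +ft).
n = 64                      # samples along each axis
fx, ft = 4, 8               # spatial and temporal frequency (cycles/axis)
x = np.arange(n) / n
t = np.arange(n) / n
movie = np.sin(2 * np.pi * (fx * x[None, :] - ft * t[:, None]))

spectrum = np.abs(np.fft.fft2(movie))
# Locate the dominant spatiotemporal frequency component.
peak_t, peak_x = np.unravel_index(np.argmax(spectrum), spectrum.shape)
```

Because the grating completes an integer number of cycles along each axis, the spectrum is exactly two spikes; a second-order (contrast-modulated) stimulus would instead place its motion energy away from these first-order locations, which is the distinction the authors exploit.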
Relation of motion sickness susceptibility to vestibular and behavioral measures of orientation
NASA Technical Reports Server (NTRS)
Peterka, Robert J.
1994-01-01
The objective of this proposal is to determine the relationship of motion sickness susceptibility to vestibulo-ocular reflexes (VOR), motion perception, and behavioral utilization of sensory orientation cues for the control of postural equilibrium. The work is focused on reflexes and motion perception associated with pitch and roll movements that stimulate the vertical semicircular canals and otolith organs of the inner ear. This work is relevant to the space motion sickness problem since 0 g related sensory conflicts between vertical canal and otolith motion cues are a likely cause of space motion sickness. Results of experimentation are summarized and modifications to a two-axis rotation device are described. Abstracts of a number of papers generated during the reporting period are appended.
Sex differences in the development of brain mechanisms for processing biological motion.
Anderson, L C; Bolling, D Z; Schelinski, S; Coffman, M C; Pelphrey, K A; Kaiser, M D
2013-12-01
Disorders related to social functioning including autism and schizophrenia differ drastically in incidence and severity between males and females. Little is known about the neural systems underlying these sex-linked differences in risk and resiliency. Using functional magnetic resonance imaging and a task involving the visual perception of point-light displays of coherent and scrambled biological motion, we discovered sex differences in the development of neural systems for basic social perception. In adults, we identified enhanced activity during coherent biological motion perception in females relative to males in a network of brain regions previously implicated in social perception including amygdala, medial temporal gyrus, and temporal pole. These sex differences were less pronounced in our sample of school-age youth. We hypothesize that the robust neural circuitry supporting social perception in females, which diverges from males beginning in childhood, may underlie sex differences in disorders related to social processing. © 2013 Elsevier Inc. All rights reserved.
Perception of Elasticity in the Kinetic Illusory Object with Phase Differences in Inducer Motion
Masuda, Tomohiro; Sato, Kazuki; Murakoshi, Takuma; Utsumi, Ken; Kimura, Atsushi; Shirai, Nobu; Kanazawa, So; Yamaguchi, Masami K.; Wada, Yuji
2013-01-01
Background It is known that subjective contours are perceived even when a figure involves motion. However, whether this includes the perception of rigidity or deformation of an illusory surface remains unknown. In particular, since most visual stimuli used in previous studies were generated in order to induce illusory rigid objects, the potential perception of material properties such as rigidity or elasticity in these illusory surfaces has not been examined. Here, we elucidate whether the magnitude of phase difference in oscillation influences the visual impressions of an object's elasticity (Experiment 1) and identify whether such elasticity perceptions are accompanied by the shape of the subjective contours, which can be assumed to be strongly correlated with the perception of rigidity (Experiment 2). Methodology/Principal Findings In Experiment 1, the phase differences in the oscillating motion of inducers were controlled to investigate whether they influenced the visual impression of an illusory object's elasticity. The results demonstrated that the impression of the elasticity of an illusory surface with subjective contours was systematically flipped with the degree of phase difference. In Experiment 2, we examined whether the subjective contours of a perceived object appeared linear or curved using multi-dimensional scaling analysis. The results indicated that the contours of a moving illusory object were perceived as more curved than linear in all phase-difference conditions. Conclusions/Significance These findings suggest that the phase difference in an object's motion is a significant factor in the material perception of motion-related elasticity. PMID:24205281
Allenmark, Fredrik; Read, Jenny C A
2012-10-10
Neurons in cortical area MT respond well to transparent streaming motion in distinct depth planes, such as caused by observer self-motion, but do not contain subregions excited by opposite directions of motion. We therefore predicted that spatial resolution for transparent motion/disparity conjunctions would be limited by the size of MT receptive fields, just as spatial resolution for disparity is limited by the much smaller receptive fields found in primary visual cortex, V1. We measured this using a novel "joint motion/disparity grating," on which human observers detected motion/disparity conjunctions in transparent random-dot patterns containing dots streaming in opposite directions on two depth planes. Surprisingly, observers showed the same spatial resolution for these as for pure disparity gratings. We estimate the limiting receptive field diameter at 11 arcmin, similar to V1 and much smaller than MT. Higher internal noise for detecting joint motion/disparity produces a slightly lower high-frequency cutoff of 2.5 cycles per degree (cpd) versus 3.3 cpd for disparity. This suggests that information on motion/disparity conjunctions is available in the population activity of V1 and that this information can be decoded for perception even when it is invisible to neurons in MT.
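As a rough scale check on these numbers (not the authors' fitting procedure, which estimated the 11-arcmin receptive field diameter from a model), a grating at f cycles per degree has a period of 60/f arcmin, so one half-period, a single bar, spans 30/f arcmin:

```python
# Convert the reported cutoff spatial frequencies to half-periods in
# arcmin; a half-period is only a crude yardstick for the finest
# structure a detector of a given size could resolve. The paper's
# 11-arcmin receptive field estimate comes from a model fit, not from
# this identity alone.
def half_period_arcmin(cutoff_cpd):
    return 30.0 / cutoff_cpd

joint_grating = half_period_arcmin(2.5)      # 12.0 arcmin at 2.5 cpd
disparity_grating = half_period_arcmin(3.3)  # about 9.1 arcmin at 3.3 cpd
```

Both values fall near the paper's 11-arcmin estimate, consistent with a V1-scale rather than MT-scale limit.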
Temporal dynamics of 2D motion integration for ocular following in macaque monkeys.
Barthélemy, Fréderic V; Fleuriet, Jérome; Masson, Guillaume S
2010-03-01
Several recent studies have shown that extracting pattern motion direction is a dynamical process in which edge motion is extracted first and pattern-related information is encoded with a small time lag by MT neurons. Similar dynamics were found for human reflexive or voluntary tracking. Here, we bring an essential, but still missing, piece of information by documenting macaque ocular following responses to gratings, unikinetic plaids, and barber-poles. We found that ocular tracking was always initiated first in the grating motion direction with ultra-short latencies (approximately 55 ms). A second component was driven only 10-15 ms later, rotating tracking toward the pattern motion direction. At the end of the open-loop period, tracking direction was aligned with the pattern motion direction (plaids) or the average of the line-ending motion directions (barber-poles). We characterized the dependency on contrast of each component. Both the timing and direction of ocular following were quantitatively very consistent with the dynamics of neuronal responses reported by others. Overall, we found a remarkable consistency between neuronal dynamics and monkey behavior, advocating for a direct link between the neuronal solution of the aperture problem and primate perception and action.
NASA Technical Reports Server (NTRS)
Clement, Gilles; Wood, Scott J.
2010-01-01
This joint ESA-NASA study is examining changes in motion perception following Space Shuttle flights and the operational implications of post-flight tilt-translation ambiguity for manual control performance. Vibrotactile feedback of tilt orientation is also being evaluated as a countermeasure to improve performance during a closed-loop nulling task. METHODS. Data has been collected on 5 astronaut subjects during 3 preflight sessions and during the first 8 days after Shuttle landings. Variable radius centrifugation (216 deg/s) combined with body translation (12-22 cm, peak-to-peak) is utilized to elicit roll-tilt perception (equivalent to 20 deg, peak-to-peak). A forward-backward moving sled (24-390 cm, peak-to-peak) with or without chair tilting in pitch is utilized to elicit pitch tilt perception (equivalent to 20 deg, peak-to-peak). These combinations are elicited at 0.15, 0.3, and 0.6 Hz for evaluating the effect of motion frequency on tilt-translation ambiguity. In both devices, a closed-loop nulling task is also performed during pseudorandom motion with and without vibrotactile feedback of tilt. All tests are performed in complete darkness. PRELIMINARY RESULTS. Data collection is currently ongoing. Results to date suggest there is a trend for translation motion perception to be increased at the low and medium frequencies on landing day compared to pre-flight. Manual control performance is improved with vibrotactile feedback. DISCUSSION. The results of this study indicate that post-flight recovery of motion perception and manual control performance is complete within 8 days following short-duration space missions. Vibrotactile feedback of tilt improves manual control performance both before and after flight.
Advanced Prosthetic Gait Training Tool
2015-12-01
motion capture sequences was provided by MPL to CCAD and OGAL. CCAD's work focused on imposing these sequences on the Santos™ digital human avatar ... manipulating the avatar image. These manipulations are accomplished in the context of reinforcing what is the more ideal position and relating ... focus on the visual environment by asking users to manipulate a static image of the Santos avatar to represent their perception of what they observe
Perception of Motion in Statistically-Defined Displays.
1988-02-15
motion encoding (Reichardt, 1961; Barlow and Levick, 1963; van Doorn and Koenderink, 1982a, b; van de Grind, Koenderink, and van Doorn, 1983). A bilocal...
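The Reichardt-type bilocal correlator cited in this fragment can be sketched as follows; the array shapes and test stimulus are illustrative, not from the report.

```python
import numpy as np

def reichardt_response(stimulus, dx=1, dt=1):
    """Opponent Reichardt (bilocal) correlator over a [t, x] luminance array.

    Each subunit multiplies the signal at one location, delayed by dt
    frames, with the current signal dx samples away; subtracting the
    mirror-image subunit yields a signed direction estimate that is
    positive for rightward motion and negative for leftward motion.
    """
    s = np.asarray(stimulus, dtype=float)
    rightward = s[:-dt, :-dx] * s[dt:, dx:]   # x at time t-dt pairs with x+dx at time t
    leftward = s[:-dt, dx:] * s[dt:, :-dx]    # mirror-image pairing
    return float(np.sum(rightward - leftward))

# A single bright dot stepping one sample rightward per frame.
frames, width = 8, 8
dot = np.zeros((frames, width))
dot[np.arange(frames), np.arange(frames)] = 1.0
```

For this stimulus the detector output is positive; flipping the stimulus left-right (`dot[:, ::-1]`) makes it negative, which is the opponent direction selectivity the model is known for.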
Mental Rotation Meets the Motion Aftereffect: The Role of hV5/MT+ in Visual Mental Imagery
ERIC Educational Resources Information Center
Seurinck, Ruth; de Lange, Floris P.; Achten, Erik; Vingerhoets, Guy
2011-01-01
A growing number of studies show that visual mental imagery recruits the same brain areas as visual perception. Although the necessity of hV5/MT+ for motion perception has been revealed by means of TMS, its relevance for motion imagery remains unclear. We induced a direction-selective adaptation in hV5/MT+ by means of an MAE while subjects…
Perception of Motion in Statistically-Defined Displays
1989-04-15
psychophysical study before. He was paid $7.50/hour for his participation. Also, to ensure high motivation, he received an additional one cent for every ... correct response. This was the same motivational device used in the earlier work on motion discrimination (Ball and Sekuler, 1982). The observer ... scientists, physiologists, and people interested in computer vision. Finally, one of the main motives for studying motion perception is a desire to
Perceptual Bias and Loudness Change: An Investigation of Memory, Masking, and Psychophysiology
NASA Astrophysics Data System (ADS)
Olsen, Kirk N.
Loudness is a fundamental aspect of human auditory perception that is closely associated with a sound's physical acoustic intensity. The dynamic quality of intensity change is an inherent acoustic feature in real-world listening domains such as speech and music. However, perception of loudness change in response to continuous intensity increases (up-ramps) and decreases (down-ramps) has received relatively little empirical investigation. Overestimation of loudness change in response to up-ramps is said to be linked to an adaptive survival response associated with looming (or approaching) motion in the environment. The hypothesised 'perceptual bias' to looming auditory motion suggests why perceptual overestimation of up-ramps may occur; however, it does not offer a causal explanation. It is concluded that post-stimulus judgements of perceived loudness change are significantly affected by a cognitive recency response bias that, until now, has been an artefact of experimental procedure. Perceptual end-level differences caused by duration-specific sensory adaptation at peripheral and/or central stages of auditory processing may explain differences in post-stimulus judgements of loudness change. Experiments that investigate human responses to acoustic intensity dynamics, encompassing topics from basic auditory psychophysics (e.g., sensory adaptation) to cognitive-emotional appraisal of increasingly complex stimulus events such as music and auditory warnings, are proposed for future research.
A closed-loop neurobotic system for fine touch sensing
NASA Astrophysics Data System (ADS)
Bologna, L. L.; Pinoteau, J.; Passot, J.-B.; Garrido, J. A.; Vogel, J.; Ros Vidal, E.; Arleo, A.
2013-08-01
Objective. Fine touch sensing relies on peripheral-to-central neurotransmission of somesthetic percepts, as well as on active motion policies shaping tactile exploration. This paper presents a novel neuroengineering framework for robotic applications based on the multistage processing of fine tactile information in the closed action-perception loop. Approach. The integrated system modules focus on (i) neural coding principles of spatiotemporal spiking patterns at the periphery of the somatosensory pathway, (ii) probabilistic decoding mechanisms mediating cortical-like tactile recognition and (iii) decision-making and low-level motor adaptation underlying active touch sensing. We probed the resulting neural architecture through a Braille reading task. Main results. Our results on the peripheral encoding of primary contact features are consistent with experimental data on human slow-adapting type I mechanoreceptors. They also suggest second-order processing by cuneate neurons may resolve perceptual ambiguities, contributing to a fast and highly performing online discrimination of Braille inputs by a downstream probabilistic decoder. The implemented multilevel adaptive control provides robustness to motion inaccuracy, while making the number of finger accelerations covariate with Braille character complexity. The resulting modulation of fingertip kinematics is coherent with that observed in human Braille readers. Significance. This work provides a basis for the design and implementation of modular neuromimetic systems for fine touch discrimination in robotics.
A neural model of visual figure-ground segregation from kinetic occlusion.
Barnes, Timothy; Mingolla, Ennio
2013-01-01
Freezing is an effective defense strategy for some prey, because their predators rely on visual motion to distinguish objects from their surroundings. An object moving over a background progressively covers (deletes) and uncovers (accretes) background texture while simultaneously producing discontinuities in the optic flow field. These events unambiguously specify kinetic occlusion and can produce a crisp edge, depth perception, and figure-ground segmentation between identically textured surfaces--percepts which all disappear without motion. Given two abutting regions of uniform random texture with different motion velocities, one region appears to be situated farther away and behind the other (i.e., the ground) if its texture is accreted or deleted at the boundary between the regions, irrespective of region and boundary velocities. Consequently, a region with moving texture appears farther away than a stationary region if the boundary is stationary, but it appears closer (i.e., the figure) if the boundary is moving coherently with the moving texture. A computational model of visual areas V1 and V2 shows how interactions between orientation- and direction-selective cells first create a motion-defined boundary and then signal kinetic occlusion at that boundary. Activation of model occlusion detectors tuned to a particular velocity results in the model assigning the adjacent surface with a matching velocity to the far depth. A weak speed-depth bias brings faster-moving texture regions forward in depth in the absence of occlusion (shearing motion). These processes together reproduce human psychophysical reports of depth ordering for key cases of kinetic occlusion displays. Copyright © 2012 Elsevier Ltd. All rights reserved.
Atypical activation of the mirror neuron system during perception of hand motion in autism.
Martineau, Joëlle; Andersson, Frédéric; Barthélémy, Catherine; Cottier, Jean-Philippe; Destrieux, Christophe
2010-03-12
Disorders in the autism spectrum are characterized by deficits in social and communication skills such as imitation, pragmatic language, theory of mind, and empathy. The discovery of the "mirror neuron system" (MNS) in macaque monkeys may provide a basis from which to explain some of the behavioral dysfunctions seen in individuals with autism spectrum disorders (ASD). We studied seven right-handed, high-functioning male autistic subjects and eight normal subjects (TD group) using functional magnetic resonance imaging during observation and execution of hand movements compared to a control condition (rest). The between-group comparison of the contrast [observation versus rest] provided evidence of greater bilateral activation of the inferior frontal gyrus during observation of human motion in the ASD group than in the TD group. This hyperactivation of the pars opercularis (belonging to the MNS) during observation of human motion in autistic subjects provides strong support for the hypothesis of atypical activity of the MNS that may be at the core of the social deficits in autism. Copyright 2010 Elsevier B.V. All rights reserved.
Agyei, Seth B.; van der Weel, F. R. (Ruud); van der Meer, Audrey L. H.
2016-01-01
During infancy, smart perceptual mechanisms develop allowing infants to judge time-space motion dynamics more efficiently with age and locomotor experience. This emerging capacity may be vital to enable preparedness for upcoming events and to be able to navigate in a changing environment. Little is known about brain changes that support the development of prospective control and about processes, such as preterm birth, that may compromise it. As a function of perception of visual motion, this paper will describe behavioral and brain studies with young infants investigating the development of visual perception for prospective control. By means of the three visual motion paradigms of occlusion, looming, and optic flow, our research shows the importance of including behavioral data when studying the neural correlates of prospective control. PMID:26903908
Panichi, R; Faralli, M; Bruni, R; Kiriakarely, A; Occhigrossi, C; Ferraresi, A; Bronstein, A M; Pettorossi, V E
2017-11-01
Self-motion perception was studied in patients with unilateral vestibular lesions (UVL) due to acute vestibular neuritis at 1 wk and 4, 8, and 12 mo after the acute episode. We assessed vestibularly mediated self-motion perception by measuring the error in reproducing the position of a remembered visual target at the end of four cycles of asymmetric whole-body rotation. The oscillatory stimulus consists of a slow (0.09 Hz) and a fast (0.38 Hz) half cycle. A large error was present in UVL patients when the slow half cycle was delivered toward the lesion side, but minimal toward the healthy side. This asymmetry diminished over time, but it remained abnormally large at 12 mo. In contrast, vestibulo-ocular reflex responses showed a large direction-dependent error only initially, then they normalized. Normalization also occurred for conventional reflex vestibular measures (caloric tests, subjective visual vertical, and head shaking nystagmus) and for perceptual function during symmetric rotation. Vestibular-related handicap, measured with the Dizziness Handicap Inventory (DHI) at 12 mo, correlated with self-motion perception asymmetry but not with abnormalities in vestibulo-ocular function. We conclude that 1) a persistent self-motion perceptual bias is revealed by asymmetric rotation in UVLs despite vestibulo-ocular function becoming symmetric over time, 2) this dissociation is caused by differential perceptual-reflex adaptation to high- and low-frequency rotations when these are combined as with our asymmetric stimulus, 3) the findings imply differential central compensation for vestibuloperceptual and vestibulo-ocular reflex functions, and 4) self-motion perception disruption may mediate long-term vestibular-related handicap in UVL patients. NEW & NOTEWORTHY A novel vestibular stimulus, combining asymmetric slow and fast sinusoidal half cycles, revealed persistent vestibuloperceptual dysfunction in unilateral vestibular lesion (UVL) patients.
The compensation of motion perception after UVL was slower than that of the vestibulo-ocular reflex. Perceptual but not vestibulo-ocular reflex deficits correlated with dizziness-related handicap. Copyright © 2017 the American Physiological Society.
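The asymmetric stimulus described in this abstract lends itself to a simple numerical sketch: a slow (0.09 Hz) half-cycle of sinusoidal velocity in one direction, followed by a fast (0.38 Hz) half-cycle back, with peak velocities scaled so both halves cover the same angular distance. The peak velocity below is an illustrative value, not taken from the paper:

```python
import numpy as np

F_SLOW, F_FAST = 0.09, 0.38   # Hz, from the abstract
A_SLOW = 40.0                 # deg/s peak for the slow half; illustrative only
DT = 0.001                    # s, integration step

def asymmetric_cycle():
    """One cycle of the asymmetric rotation: a slow half-sinusoid of
    velocity one way, then a fast half-sinusoid back. Peak speeds are
    scaled so each half covers the same angular displacement."""
    a_fast = A_SLOW * F_FAST / F_SLOW          # equal-displacement condition
    t_slow = np.arange(0.0, 1.0 / (2 * F_SLOW), DT)
    t_fast = np.arange(0.0, 1.0 / (2 * F_FAST), DT)
    return np.concatenate([A_SLOW * np.sin(2 * np.pi * F_SLOW * t_slow),
                           -a_fast * np.sin(2 * np.pi * F_FAST * t_fast)])

velocity = asymmetric_cycle()
# net angular displacement over the full cycle is (numerically) zero, so
# the chair returns to its start despite the velocity asymmetry
```

Because displacement is matched while frequency is not, the two half-cycles stimulate the vestibular system at very different frequencies, which is what lets the perceptual asymmetry be probed.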
Global motion perception deficits in autism are reflected as early as primary visual cortex
Thomas, Cibu; Kravitz, Dwight J.; Wallace, Gregory L.; Baron-Cohen, Simon; Martin, Alex; Baker, Chris I.
2014-01-01
Individuals with autism are often characterized as ‘seeing the trees, but not the forest’—attuned to individual details in the visual world at the expense of the global percept they compose. Here, we tested the extent to which global processing deficits in autism reflect impairments in (i) primary visual processing; or (ii) decision-formation, using an archetypal example of global perception, coherent motion perception. In an event-related functional MRI experiment, 43 intelligence quotient and age-matched male participants (21 with autism, age range 15–27 years) performed a series of coherent motion perception judgements in which the amount of local motion signals available to be integrated into a global percept was varied by controlling stimulus viewing duration (0.2 or 0.6 s) and the proportion of dots moving in the correct direction (coherence: 4%, 15%, 30%, 50%, or 75%). Both typical participants and those with autism evidenced the same basic pattern of accuracy in judging the direction of motion, with performance decreasing with reduced coherence and shorter viewing durations. Critically, these effects were exaggerated in autism: despite equal performance at the long duration, performance was more strongly reduced by shortening viewing duration in autism (P < 0.015) and decreasing stimulus coherence (P < 0.008). To assess the neural correlates of these effects we focused on the responses of primary visual cortex and the middle temporal area, critical in the early visual processing of motion signals, as well as a region in the intraparietal sulcus thought to be involved in perceptual decision-making. The behavioural results were mirrored in both primary visual cortex and the middle temporal area, with a greater reduction in response at short, compared with long, viewing durations in autism compared with controls (both P < 0.018). In contrast, there was no difference between the groups in the intraparietal sulcus (P > 0.574). 
These findings suggest that reduced global motion perception in autism is driven by an atypical response early in visual processing and may reflect a fundamental perturbation in neural circuitry. PMID:25060095
Neural dynamics of motion perception: direction fields, apertures, and resonant grouping.
Grossberg, S; Mingolla, E
1993-03-01
A neural network model of global motion segmentation by visual cortex is described. Called the motion boundary contour system (BCS), the model clarifies how ambiguous local movements on a complex moving shape are actively reorganized into a coherent global motion signal. Unlike many previous researchers, we analyze how a coherent motion signal is imparted to all regions of a moving figure, not only to regions at which unambiguous motion signals exist. The model hereby suggests a solution to the global aperture problem. The motion BCS describes how preprocessing of motion signals by a motion oriented contrast (MOC) filter is joined to long-range cooperative grouping mechanisms in a motion cooperative-competitive (MOCC) loop to control phenomena such as motion capture. The motion BCS is computed in parallel with the static BCS of Grossberg and Mingolla (1985a, 1985b, 1987). Homologous properties of the motion BCS and the static BCS, specialized to process motion directions and static orientations, respectively, support a unified explanation of many data about static form perception and motion form perception that have heretofore been unexplained or treated separately. Predictions about microscopic computational differences of the parallel cortical streams V1-->MT and V1-->V2-->MT are made--notably, the magnocellular thick stripe and parvocellular interstripe streams. It is shown how the motion BCS can compute motion directions that may be synthesized from multiple orientations with opposite directions of contrast. Interactions of model simple cells, complex cells, hyper-complex cells, and bipole cells are described, with special emphasis given to new functional roles in direction disambiguation for endstopping at multiple processing stages and to the dynamic interplay of spatially short-range and long-range interactions.
Coordinates of Human Visual and Inertial Heading Perception
Crane, Benjamin Thomas
2015-01-01
Heading estimation involves both inertial and visual cues. Inertial motion is sensed by the labyrinth, somatic sensation by the body, and optic flow by the retina. Because the eye and head are mobile, these stimuli are sensed relative to different reference frames, and it remains unclear whether a perception occurs in a common reference frame. Recent neurophysiologic evidence has suggested the reference frames remain separate even at higher levels of processing but has not addressed the resulting perception. Seven human subjects experienced a 2-s, 16 cm/s translation and/or a visual stimulus corresponding with this translation. For each condition 72 stimuli (360° in 5° increments) were delivered in random order. After each stimulus the subject identified the perceived heading using a mechanical dial. Some trial blocks included interleaved conditions in which the influence of ±28° of gaze and/or head position was examined. The observations were fit using a two degree-of-freedom population vector decoder (PVD) model which considered the relative sensitivity to lateral motion and coordinate system offset. For visual stimuli, gaze shifts caused shifts in perceived heading estimates in the direction opposite the gaze shift in all subjects. These perceptual shifts averaged 13 ± 2° for eye-only gaze shifts and 17 ± 2° for eye-head gaze shifts. This finding indicates visual headings are biased toward retinal coordinates. Similar gaze and head direction shifts prior to inertial headings had no significant influence on heading direction. Thus inertial headings are perceived in body-centered coordinates. Combined visual and inertial stimuli yielded intermediate results. PMID:26267865
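The two degree-of-freedom decoder described above can be caricatured as follows: re-encode the stimulus heading with a coordinate offset, weight the lateral (sine) component by a relative-sensitivity gain, and read out the angle of the resulting vector. This is an inference from the abstract's description, not the paper's exact formulation, and the parameter values are illustrative:

```python
import numpy as np

def pvd_heading(theta_deg, lateral_gain=1.3, offset_deg=0.0):
    """Two-parameter population-vector-style decoder sketch: the stimulus
    heading is shifted by a coordinate-system offset, the lateral component
    is weighted by a relative-sensitivity gain, and the perceived heading
    is read out as the angle of the resulting vector."""
    th = np.deg2rad(theta_deg - offset_deg)
    return np.rad2deg(np.arctan2(lateral_gain * np.sin(th), np.cos(th)))
```

With a gain above 1, oblique headings are pulled toward the lateral axis; a nonzero offset shifts the whole perceived-heading map, mimicking the gaze-dependent biases reported for visual stimuli.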
Sliding Mode Control of Real-Time PNU Vehicle Driving Simulator and Its Performance Evaluation
NASA Astrophysics Data System (ADS)
Lee, Min Cheol; Park, Min Kyu; Yoo, Wan Suk; Son, Kwon; Han, Myung Chul
This paper introduces an economical and effective full-scale driving simulator for the study of human sensibility and for the development and control of new vehicle parts. Real-time robust control that accurately reproduces various vehicle motions is a difficult task because the motion platform is a complex nonlinear system. This study proposes a sliding mode controller with a perturbation compensator using an observer-based fuzzy adaptive network (FAN). The control algorithm is designed to solve the chattering problem of sliding mode control and to select adequate fuzzy parameters for the perturbation compensator. To evaluate the trajectory control performance of the proposed approach, a tracking control experiment was carried out on the developed simulator, named PNUVDS. The driving performance of the simulator was then evaluated using the perception and sensibility of several drivers under various driving conditions.
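As a rough illustration of the chattering problem and its standard remedy, the sketch below implements a sliding mode controller for a unit mass with a boundary-layer (saturated) switching term in place of a discontinuous sign function. It is a generic textbook construction, not the paper's FAN-based compensator, and all gains are arbitrary:

```python
import numpy as np

LAM, K_SW, PHI = 5.0, 20.0, 0.05   # surface slope, switching gain, boundary layer

def smc_step(x, v, x_ref, v_ref, a_ref, dt=0.001, disturbance=0.0):
    """One Euler step of sliding mode control for a unit mass (x'' = u + d).
    The switching term uses a saturated sign function (boundary layer),
    a standard remedy for chattering."""
    e, e_dot = x - x_ref, v - v_ref
    s = e_dot + LAM * e                                 # sliding surface
    u = a_ref - LAM * e_dot - K_SW * np.clip(s / PHI, -1.0, 1.0)
    a = u + disturbance
    return x + v * dt, v + a * dt

# track x_ref = sin(t) under a constant unknown disturbance
x, v = 0.5, 0.0
for i in range(5000):
    t = i * 0.001
    x, v = smc_step(x, v, np.sin(t), np.cos(t), -np.sin(t), disturbance=2.0)
# the tracking error decays to a small residual despite the disturbance
```

Inside the boundary layer the control acts like a high-gain linear law, trading a small steady-state error for smooth actuation; shrinking `PHI` toward zero recovers the discontinuous (chattering) controller.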
Wright, Kristyn; Kelley, Elizabeth; Poulin-Dubois, Diane
2014-01-01
Research investigating biological motion perception in children with ASD has revealed conflicting findings concerning whether impairments in biological motion perception exist. The current study investigated how children with high-functioning ASD (HF-ASD) performed on two tasks of biological motion identification: a novel schematic motion identification task and a point-light biological motion identification task. Twenty-two HF-ASD children were matched with 21 typically developing (TD) children on gender, non-verbal mental age, and chronological age (M = 6.72 years). On both tasks, HF-ASD children performed with accuracy similar to that of TD children. Across groups, children performed better on animate than on inanimate trials of both tasks. These findings suggest that HF-ASD children's identification of both realistic and schematic biological motion is unimpaired. PMID:25395988
Motion direction discrimination training reduces perceived motion repulsion.
Jia, Ke; Li, Sheng
2017-04-01
Participants often exaggerate the perceived angular separation between two simultaneously presented motion stimuli, which is referred to as motion repulsion. The overestimation helps participants differentiate between the two superimposed motion directions, yet it causes the impairment of direction perception. Since direction perception can be refined through perceptual training, we here attempted to investigate whether the training of a direction discrimination task changes the amount of motion repulsion. Our results showed a direction-specific learning effect, which was accompanied by a reduced amount of motion repulsion both for the trained and the untrained directions. The reduction of the motion repulsion disappeared when the participants were trained on a luminance discrimination task (control experiment 1) or a speed discrimination task (control experiment 2), ruling out any possible interpretation in terms of adaptation or training-induced attentional bias. Furthermore, training with a direction discrimination task along a direction 150° away from both directions in the transparent stimulus (control experiment 3) also had little effect on the amount of motion repulsion, ruling out the contribution of task learning. The changed motion repulsion observed in the main experiment was consistent with the prediction of the recurrent model of perceptual learning. Therefore, our findings demonstrate that training in direction discrimination can benefit the precise direction perception of the transparent stimulus and provide new evidence for the recurrent model of perceptual learning.
Ott, Florian; Pohl, Ladina; Halfmann, Marc; Hardiess, Gregor; Mallot, Hanspeter A
2016-07-01
When estimating ego-motion in environments (e.g., tunnels, streets) with varying depth, human subjects confuse ego-acceleration with environment narrowing and ego-deceleration with environment widening. Festl, Recktenwald, Yuan, and Mallot (2012) demonstrated that in nonstereoscopic viewing conditions, this happens despite the fact that retinal measurements of acceleration rate, a variable related to tau-dot, should allow veridical perception. Here we address the question of whether additional depth cues (specifically binocular stereo, object occlusion, or constant average object size) help break the confusion between narrowing and acceleration. Using a forced-choice paradigm, the confusion is shown to persist even if unambiguous stereo information is provided. The confusion can also be demonstrated in an adjustment task in which subjects were asked to keep a constant speed in a tunnel with varying diameter: Subjects increased speed in widening sections and decreased speed in narrowing sections even though stereoscopic depth information was provided. If object-based depth information (stereo, occlusion, constant average object size) is added, the confusion between narrowing and acceleration still remains but may be slightly reduced. All experiments are consistent with a simple matched filter algorithm for ego-motion detection, neglecting both parallactic and stereoscopic depth information, but leave open the possibility of cue combination at a later stage.
Schütz, Alexander C.; Braun, Doris I.; Movshon, J. Anthony; Gegenfurtner, Karl R.
2011-01-01
We investigated how the human visual system and the pursuit system react to visual motion noise. We presented three different types of random-dot kinematograms at five different coherence levels. For transparent motion, the signal and noise labels on each dot were preserved throughout each trial, and noise dots moved with the same speed as the signal dots but in fixed random directions. For white noise motion, every 20 ms the signal and noise labels were randomly assigned to each dot and noise dots appeared at random positions. For Brownian motion, signal and noise labels were also randomly assigned, but the noise dots moved at the signal speed in a direction that varied randomly from moment to moment. Neither pursuit latency nor early eye acceleration differed among the different types of kinematograms. Late acceleration, pursuit gain, and perceived speed all depended on kinematogram type, with good agreement between pursuit gain and perceived speed. For transparent motion, pursuit gain and perceived speed were independent of coherence level. For white and Brownian motions, pursuit gain and perceived speed increased with coherence but were higher for white than for Brownian motion. This suggests that under our conditions, the pursuit system integrates across all directions of motion but not across all speeds. PMID:21149307
A Study on Analysis of EEG Caused by Grating Stimulation Imaging
NASA Astrophysics Data System (ADS)
Urakawa, Hiroshi; Nishimura, Toshihiro; Tsubai, Masayoshi; Itoh, Kenji
Recently, many researchers have studied visual perception, with particular focus on visual perception phenomena elicited by grating stimulation images. Previous research has suggested that a subset of retinal ganglion cells responds to motion in the receptive field center, but only if the wider surround moves with a different trajectory. We discuss the function of the human retina, and measure and analyze the EEG (electroencephalography) of a normal subject viewing grating stimulation images. We confirmed the subject's visual perception by EEG signal analysis. We also found that when a sinusoidal grating stimulus was presented, asymmetry was observed in the α-wave component of the EEG between symmetric sites of the left and right hemispheres of the brain. It is therefore presumed that the images projected onto the retinas of the right and left eyes are identical when a still picture is viewed, but differ for a dynamic scene. This was evaluated by taking the envelope of the detected α wave and using its mean and standard deviation.
A GPU-accelerated cortical neural network model for visually guided robot navigation.
Beyeler, Michael; Oros, Nicolas; Dutt, Nikil; Krichmar, Jeffrey L
2015-12-01
Humans and other terrestrial animals use vision to traverse novel cluttered environments with apparent ease. On the one hand, although much is known about the behavioral dynamics of steering in humans, it remains unclear how relevant perceptual variables might be represented in the brain. On the other hand, although a wealth of data exists about the neural circuitry that is concerned with the perception of self-motion variables such as the current direction of travel, little research has been devoted to investigating how this neural circuitry may relate to active steering control. Here we present a cortical neural network model for visually guided navigation that has been embodied on a physical robot exploring a real-world environment. The model includes a rate-based motion energy model for area V1, and a spiking neural network model for cortical area MT. The model generates a cortical representation of optic flow, determines the position of objects based on motion discontinuities, and combines these signals with the representation of a goal location to produce motor commands that successfully steer the robot around obstacles toward the goal. The model produces robot trajectories that closely match human behavioral data. This study demonstrates how neural signals in a model of cortical area MT might provide sufficient motion information to steer a physical robot on human-like paths around obstacles in a real-world environment, and exemplifies the importance of embodiment, as behavior is deeply coupled not only with the underlying model of brain function, but also with the anatomical constraints of the physical body it controls. Copyright © 2015 Elsevier Ltd. All rights reserved.
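The V1 stage mentioned above is based on motion energy; the classic quadrature-pair construction can be sketched in a few lines. This is a toy 1D-space-by-time version with arbitrary filter parameters, not the model's actual implementation:

```python
import numpy as np

def motion_energy(stimulus, f_s=0.1, f_t=0.2, sigma=8.0):
    """Minimal motion-energy sketch over a (space x time) stimulus array:
    a quadrature pair of space-time-oriented Gabor filters is applied,
    and the squared outputs are summed, giving a phase-invariant response
    to motion in the filter's preferred direction."""
    nx, nt = stimulus.shape
    x = np.arange(nx)[:, None] - nx // 2
    t = np.arange(nt)[None, :] - nt // 2
    envelope = np.exp(-(x**2 + t**2) / (2 * sigma**2))
    phase = 2 * np.pi * (f_s * x + f_t * t)   # orientation in space-time
    even = (stimulus * envelope * np.cos(phase)).sum()
    odd = (stimulus * envelope * np.sin(phase)).sum()
    return even**2 + odd**2
```

A grating drifting in the filter's preferred space-time direction produces a large response, while the same grating drifting the opposite way produces almost none; that direction selectivity is the raw signal the optic-flow representation builds on.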
Dynamic Spatial Hearing by Human and Robot Listeners
NASA Astrophysics Data System (ADS)
Zhong, Xuan
This study consisted of several related projects on dynamic spatial hearing by both human and robot listeners. The first experiment investigated the maximum number of sound sources that human listeners could localize at the same time. Speech stimuli were presented simultaneously from different loudspeakers at multiple time intervals. The maximum number of perceived sound sources was close to four. The second experiment asked whether the amplitude modulation of multiple static sound sources could lead to the perception of auditory motion. On the horizontal and vertical planes, four independent noise sound sources with 60° spacing were amplitude modulated with consecutively larger phase delays. At lower modulation rates, motion could be perceived by human listeners in both cases. The third experiment asked whether several sources at static positions could serve as "acoustic landmarks" to improve the localization of other sources. Four continuous speech sound sources were placed on the horizontal plane with 90° spacing and served as the landmarks. The task was to localize a noise that was played for only three seconds while the listener was passively rotated in a chair in the middle of the loudspeaker array. The human listeners were better able to localize the sound sources with landmarks than without. The remaining experiments used an acoustic manikin in an attempt to fuse binaural recordings and motion data to localize sound sources. A dummy head with recording devices was mounted on top of a rotating chair and motion data were collected. The fourth experiment showed that an Extended Kalman Filter could be used to localize sound sources in a recursive manner. The fifth experiment demonstrated the use of a fitting method for separating multiple sound sources.
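The recursive localization idea in the fourth experiment can be illustrated with a toy version. With a static source and a known head angle the measurement model is linear, so the Extended Kalman Filter mentioned in the abstract reduces to an ordinary Kalman filter; all numbers below are made up for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_AZIMUTH = 40.0   # deg, world-frame source direction (hypothetical)
MEAS_STD = 5.0        # deg, noise on each binaural bearing estimate (assumed)

# The head rotates while listening; each binaural measurement gives the
# source bearing relative to the current (known) head direction.
head_angles = np.linspace(0.0, 90.0, 30)
measurements = TRUE_AZIMUTH - head_angles + rng.normal(0.0, MEAS_STD, 30)

# Recursive (Kalman) update of the world-frame azimuth estimate.
x_est, P = 0.0, 1000.0        # initial guess and its variance
R = MEAS_STD ** 2
for h, z in zip(head_angles, measurements):
    innovation = z - (x_est - h)   # measured minus predicted bearing
    K = P / (P + R)                # Kalman gain
    x_est += K * innovation
    P *= (1.0 - K)
# x_est converges toward TRUE_AZIMUTH as measurements accumulate
```

Each head rotation changes the geometry of the measurement, which is exactly why fusing motion data with the binaural recordings makes the estimate sharpen recursively rather than requiring a batch solution.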
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1991-01-01
The volume on data fusion from multiple sources discusses fusing multiple views, temporal analysis and 3D motion interpretation, sensor fusion and eye-to-hand coordination, and integration in human shape perception. Attention is given to surface reconstruction, statistical methods in sensor fusion, fusing sensor data with environmental knowledge, computational models for sensor fusion, and evaluation and selection of sensor fusion techniques. Topics addressed include the structure of a scene from two and three projections, optical flow techniques for moving target detection, tactical sensor-based exploration in a robotic environment, and the fusion of human and machine skills for remote robotic operations. Also discussed are K-nearest-neighbor concepts for sensor fusion, surface reconstruction with discontinuities, a sensor-knowledge-command fusion paradigm for man-machine systems, coordinating sensing and local navigation, and terrain map matching using multisensing techniques for applications to autonomous vehicle navigation.
Visually guided control of movement in the context of multimodal stimulation
NASA Technical Reports Server (NTRS)
Riccio, Gary E.
1991-01-01
Flight simulation has been almost exclusively concerned with simulating the motions of the aircraft. Physically distinct subsystems are often combined to simulate the varieties of aircraft motion. Visual display systems simulate the motion of the aircraft relative to remote objects and surfaces (e.g., other aircraft and the terrain). 'Motion platform' simulators recreate aircraft motion relative to the gravitoinertial vector (i.e., correlated rotation and tilt as opposed to the 'coordinated turn' in flight). 'Control loaders' attempt to simulate the resistance of the aerodynamic medium to aircraft motion. However, there are few operational systems that attempt to simulate the motion of the pilot relative to the aircraft and the gravitoinertial vector. The design and use of all simulators is limited by poor understanding of postural control in the aircraft and its effect on the perception and control of flight. Analysis of the perception and control of flight (real or simulated) must consider that: (1) the pilot is not rigidly attached to the aircraft; and (2) the pilot actively monitors and adjusts body orientation and configuration in the aircraft. It is argued that this more complete approach to flight simulation requires that multimodal perception be considered as the rule rather than the exception. Moreover, the necessity of multimodal perception is revealed by emphasizing the complementarity rather than the redundancy among perceptual systems. Finally, an outline is presented for an experiment to be conducted at NASA ARC. The experiment explicitly considers possible consequences of coordination between postural and vehicular control.
Deciding what to see: the role of intention and attention in the perception of apparent motion.
Kohler, Axel; Haddad, Leila; Singer, Wolf; Muckli, Lars
2008-03-01
Apparent motion is an illusory perception of movement that can be induced by alternating presentations of static objects. Already in Wertheimer's early investigation of the phenomenon [Wertheimer, M. (1912). Experimentelle Studien über das Sehen von Bewegung. Zeitschrift für Psychologie, 61, 161-265], he mentions that voluntary attention can influence the way in which an ambiguous apparent motion display is perceived. But until now, few studies have investigated how strong the modulation of apparent motion through attention can be under different stimulus and task conditions. We used bistable motion quartets of two different sizes, where the perception of vertical and horizontal motion is equally likely. Eleven observers participated in two experiments. In Experiment 1, participants were instructed to either (a) hold the current movement direction as long as possible, (b) passively view the stimulus, or (c) switch the movement directions as quickly as possible. With the respective instructions, observers could almost double phase durations in (a) and more than halve durations in (c) relative to the passive condition. This modulation effect was stronger for the large quartets. In Experiment 2, observers' attention was diverted from the stimulus by a detection task at fixation while they still had to report their conscious perception. This manipulation prolonged dominance durations by up to 100%. The experiments reveal a high susceptibility of ambiguous apparent motion to attentional modulation. We discuss how feature- and space-based attention mechanisms might contribute to those effects.
Minimization of Retinal Slip Cannot Explain Human Smooth-Pursuit Eye Movements
NASA Technical Reports Server (NTRS)
Stone, Leland S.; Beutter, Brent R.; Null, Cynthia H. (Technical Monitor)
1998-01-01
Existing models assume that pursuit attempts a direct minimization of retinal image motion or "slip" (e.g. Robinson et al., 1986; Krauzlis & Weisberger, 1989). Using occluded line-figure stimuli, we have previously shown that humans can accurately pursue stimuli for which perfect tracking does not zero retinal slip (Neurologic ARCO). These findings are inconsistent with the standard control strategy of matching eye motion to a target-motion signal reconstructed by adding retinal slip and eye motion, but consistent with a visual front-end which estimates target motion via a global spatio-temporal integration for pursuit and perception. Another possible explanation is that pursuit simply attempts to minimize slip perpendicular to the segments (and neglects parallel "sliding" motion). To resolve this, 4 observers (3 naive) were asked to pursue the center of 2 types of stimuli with identical velocity-space descriptions and matched motion energy. The line-figure "diamond" stimulus was viewed through 2 invisible 3 deg-wide vertical apertures (38 cd/m2, equal to background) such that only the sinusoidal motion of 4 oblique line segments (44 cd/m2) was visible. The "cross" was identical except that the segments exchanged positions. Two trajectories (8's and infinity's) with 4 possible initial directions were randomly interleaved (1.25 cycles, 2.5-s period, Ax = Ay = 1.4 deg). In 91% of trials, the diamond appeared rigid. Correspondingly, pursuit was vigorous (mean gain: 0.74) with a V/H aspect ratio approx. 1 (mean: 0.9). Despite a valid rigid solution, however, the cross appeared rigid in only 8% of trials. Correspondingly, pursuit was weaker (mean H gain: 0.38) with an incorrect aspect ratio (mean: 1.5). If pursuit were just minimizing perpendicular slip, performance would be the same in both conditions.
Altered perceptual sensitivity to kinematic invariants in Parkinson's disease.
Dayan, Eran; Inzelberg, Rivka; Flash, Tamar
2012-01-01
Ample evidence exists for coupling between action and perception in neurologically healthy individuals, yet the precise nature of the internal representations shared between these domains remains unclear. One experimentally derived view is that the invariant properties and constraints characterizing movement generation are also manifested during motion perception. One prominent motor invariant is the "two-thirds power law," describing the strong relation between the kinematics of motion and the geometrical features of the path followed by the hand during planar drawing movements. The two-thirds power law not only characterizes various movement generation tasks but also seems to constrain visual perception of motion. The present study aimed to assess whether motor invariants, such as the two-thirds power law, also constrain motion perception in patients with Parkinson's disease (PD). Patients with PD and age-matched controls were asked to observe the movement of a light spot rotating on an elliptical path and to modify its velocity until it appeared to move most uniformly. As in previous reports, controls tended to choose those movements close to obeying the two-thirds power law as most uniform. Patients with PD displayed a more variable behavior, choosing, on average, movements closer but not equal to a constant velocity. Our results thus demonstrate impairments in how the two-thirds power law constrains motion perception in patients with PD, where this relationship between velocity and curvature appears to be preserved but scaled down. Recent hypotheses on the role of the basal ganglia in motor timing may explain these irregularities. Alternatively, these impairments in perception of movement may reflect similar deficits in motor production.
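The two-thirds power law relates speed to path curvature: v(t) = K * kappa(t)^(-1/3), or equivalently angular velocity A = K * C^(2/3) for curvature C. A short sketch computes the law-compliant speed profile around an ellipse, the path geometry used in the study (the gain K and the ellipse axes are illustrative):

```python
import numpy as np

def two_thirds_speed(kappa, K=1.0):
    """Speed prescribed by the two-thirds power law: v = K * kappa**(-1/3)."""
    return K * np.power(kappa, -1.0 / 3.0)

def ellipse_curvature(t, a=2.0, b=1.0):
    """Curvature of the ellipse (a*cos t, b*sin t) at parameter t."""
    num = a * b
    den = (a**2 * np.sin(t)**2 + b**2 * np.cos(t)**2) ** 1.5
    return num / den

t = np.linspace(0.0, 2.0 * np.pi, 1000)
v = two_thirds_speed(ellipse_curvature(t))
# the law predicts slower movement where the path curves more sharply:
# curvature peaks at the ends of the major axis (t = 0, pi), so speed
# is lowest there and highest at the flatter sides
```

A spot moving at constant speed around the ellipse violates this profile, which is why observers in such experiments judge law-compliant (non-constant) motion as the most uniform.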
Competitive Dynamics in MSTd: A Mechanism for Robust Heading Perception Based on Optic Flow
Layton, Oliver W.; Fajen, Brett R.
2016-01-01
Human heading perception based on optic flow is not only accurate, it is also remarkably robust and stable. These qualities are especially apparent when observers move through environments containing other moving objects, which introduce optic flow that is inconsistent with observer self-motion and therefore uninformative about heading direction. Moving objects may also occupy large portions of the visual field and occlude regions of the background optic flow that are most informative about heading perception. The fact that heading perception is biased by no more than a few degrees under such conditions attests to the robustness of the visual system and warrants further investigation. The aim of the present study was to investigate whether recurrent, competitive dynamics among MSTd neurons that serve to reduce uncertainty about heading over time offer a plausible mechanism for capturing the robustness of human heading perception. Simulations of existing heading models that do not contain competitive dynamics yield heading estimates that are far more erratic and unstable than human judgments. We present a dynamical model of primate visual areas V1, MT, and MSTd based on that of Layton, Mingolla, and Browning that is similar to the other models, except that the model includes recurrent interactions among model MSTd neurons. Competitive dynamics stabilize the model’s heading estimate over time, even when a moving object crosses the future path. Soft winner-take-all dynamics enhance units that code a heading direction consistent with the time history and suppress responses to transient changes to the optic flow field. Our findings support recurrent competitive temporal dynamics as a crucial mechanism underlying the robustness and stability of perception of heading. PMID:27341686
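Soft winner-take-all dynamics of the kind invoked above can be caricatured with a ring of heading-tuned units receiving local recurrent excitation and global subtractive inhibition. This is a generic toy with arbitrary parameters, not the Layton and Fajen model itself:

```python
import numpy as np

N = 36                               # heading-tuned units, 10 deg apart
ANG = np.arange(N) * 10.0

def circ_dist(a, b):
    d = np.abs(a - b) % 360.0
    return np.minimum(d, 360.0 - d)

# local excitatory kernel on the ring of preferred headings
W = np.exp(-circ_dist(ANG[:, None], ANG[None, :]) ** 2 / (2 * 20.0 ** 2))

def soft_wta_step(r, inp, dt=0.1, w_inh=0.05):
    """One Euler step: feedforward drive plus local recurrent excitation,
    minus global inhibition, passed through rectification."""
    drive = inp + (W @ r) / N - w_inh * r.sum()
    return r + dt * (-r + np.maximum(drive, 0.0))

# noisy feedforward input: a bump of activity centered on a 90 deg heading
rng = np.random.default_rng(1)
inp = (np.exp(-circ_dist(ANG, 90.0) ** 2 / (2 * 20.0 ** 2))
       + 0.05 * rng.normal(size=N))
r = np.zeros(N)
for _ in range(300):
    r = soft_wta_step(r, inp)
# the network settles on a bump whose peak sits at (or next to) the true
# heading, while units tuned far from it are suppressed to zero
```

The global inhibition silences weakly driven units while the local excitation sustains the winning bump, which is the sense in which such dynamics stabilize a heading estimate against transient perturbations of the optic flow.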
Contrast, contours and the confusion effect in dazzle camouflage.
Hogan, Benedict G; Scott-Samuel, Nicholas E; Cuthill, Innes C
2016-07-01
'Motion dazzle camouflage' is the name for the putative effects of highly conspicuous, often repetitive or complex, patterns on parameters important in prey capture, such as the perception of speed, direction and identity. Research into motion dazzle camouflage is increasing our understanding of the interactions between visual tracking, the confusion effect and defensive coloration. However, there is a paucity of research into the effects of contrast on motion dazzle camouflage: is maximal contrast a prerequisite for effectiveness? If not, this has important implications for our recognition of the phenotype and understanding of the function and mechanisms of potential motion dazzle camouflage patterns. Here we tested human participants' ability to track one moving target among many identical distractors with surface patterns designed to test the influence of these factors. In line with previous evidence, we found that targets with stripes parallel to the object direction of motion were hardest to track. However, reduction in contrast did not significantly influence this result. This finding may bring into question the utility of current definitions of motion dazzle camouflage, and means that some animal patterns, such as aposematic or mimetic stripes, may have previously unrecognized multiple functions.
Modification of Motion Perception and Manual Control Following Short-Duration Spaceflight
NASA Technical Reports Server (NTRS)
Wood, S. J.; Vanya, R. D.; Esteves, J. T.; Rupert, A. H.; Clement, G.
2011-01-01
Adaptive changes during space flight in how the brain integrates vestibular cues with other sensory information can lead to impaired movement coordination and spatial disorientation following G-transitions. This ESA-NASA study was designed to examine both the physiological basis and operational implications for disorientation and tilt-translation disturbances following short-duration spaceflights. The goals of this study were to (1) examine the effects of stimulus frequency on adaptive changes in motion perception during passive tilt and translation motion, (2) quantify decrements in manual control of tilt motion, and (3) evaluate vibrotactile feedback as a sensorimotor countermeasure.
Stereoscopic advantages for vection induced by radial, circular, and spiral optic flows.
Palmisano, Stephen; Summersby, Stephanie; Davies, Rodney G; Kim, Juno
2016-11-01
Although observer motions project different patterns of optic flow to our left and right eyes, there has been surprisingly little research into potential stereoscopic contributions to self-motion perception. This study investigated whether visually induced illusory self-motion (i.e., vection) is influenced by the addition of consistent stereoscopic information to radial, circular, and spiral (i.e., combined radial + circular) patterns of optic flow. Stereoscopic vection advantages were found for radial and spiral (but not circular) flows when monocular motion signals were strong. Under these conditions, stereoscopic benefits were greater for spiral flow than for radial flow. These effects can be explained by differences in the motion aftereffects generated by these displays, which suggest that the circular motion component in spiral flow selectively reduced adaptation to stereoscopic motion-in-depth. Stereoscopic vection advantages were not observed for circular flow when monocular motion signals were strong, but emerged when monocular motion signals were weakened. These findings show that stereoscopic information can contribute to visual self-motion perception in multiple ways.
Self-motion Perception Training: Thresholds Improve in the Light but not in the Dark
Hartmann, Matthias; Furrer, Sarah; Herzog, Michael H.; Merfeld, Daniel M.; Mast, Fred W.
2014-01-01
We investigated perceptual learning in self-motion perception. Blindfolded participants were displaced leftward or rightward by means of a motion platform and asked to indicate the direction of motion. A total of eleven participants underwent 3360 practice trials, distributed over twelve days (Experiment 1) or six days (Experiment 2). We found no improvement in motion discrimination in either experiment. These results are surprising, since perceptual learning has been demonstrated for visual, auditory, and somatosensory discrimination. Improvements in the same task were found when visual input was provided (Experiment 3). The multisensory nature of vestibular information is discussed as a possible explanation for the absence of perceptual learning in darkness. PMID:23392475
Measurement of angular velocity in the perception of rotation.
Barraza, José F; Grzywacz, Norberto M
2002-09-01
Humans are sensitive to the parameters of translational motion, namely, direction and speed. At the same time, people have special mechanisms to deal with more complex motions, such as rotations and expansions. One wonders whether people may also be sensitive to the parameters of these complex motions. Here, we report on a series of experiments that explore whether human subjects can use angular velocity to evaluate how fast a rotational motion is. In four experiments, subjects were required to perform a speed-of-rotation discrimination task by comparing two annuli of different radii in a temporal 2AFC paradigm. Results showed that humans could rely on a sensitive measurement of angular velocity to perform this discrimination task. This was especially true when the quality of the rotational signal was high (given by the number of dots composing the annulus). When the signal quality decreased, a bias towards linear velocity of 5-80% appeared, suggesting the existence of separate mechanisms for angular and linear velocity. This bias was independent of the reference radius. Finally, we asked whether the measurement of angular velocity required a rigid rotation, that is, whether the visual system makes only one global estimate of angular velocity. For this purpose, a random-dot disk was built such that all the dots were rotating with the same tangential speed, irrespective of radius. Results showed that subjects do not estimate a unique global angular velocity, but that they perceive a non-rigid disk, with angular velocity falling in inverse proportion to radius.
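The final experiment's display, in which every dot shares one tangential speed, implies a physical angular velocity that falls with radius (ω = v/r). A minimal sketch with illustrative numbers:

```python
import numpy as np

# Dots at different radii, all moving with the same tangential speed,
# as in the final random-dot disk (values are illustrative).
tangential_speed = 4.0             # deg of visual angle per second
radii = np.array([1.0, 2.0, 4.0])  # deg of visual angle

# A rigid rotation would share one angular velocity across radii; here
# the physical angular velocity omega = v / r instead falls with radius,
# matching the non-rigid disk the subjects reported perceiving.
omega = tangential_speed / radii
```

Doubling the radius halves omega, which is exactly the inverse-proportional falloff the observers perceived.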
Applications of computer-graphics animation for motion-perception research
NASA Technical Reports Server (NTRS)
Proffitt, D. R.; Kaiser, M. K.
1986-01-01
The advantages and limitations of using computer-animated stimuli to study motion perception are presented and discussed. Most current programs of motion perception research could not be pursued without computer graphics animation. Computer-generated displays afford latitudes of freedom and control that are almost impossible to attain through conventional methods. There are, however, limitations to this presentational medium. At present, computer-generated displays present simplified approximations of the dynamics in natural events. Very little is known about how the differences between natural events and computer simulations influence perceptual processing. In practice, it is assumed that the differences are irrelevant to the questions under study and that findings with computer-generated stimuli will generalize to natural events.
Heenan, Adam; Troje, Nikolaus F.
2014-01-01
Biological motion stimuli, such as orthographically projected stick figure walkers, are ambiguous about their orientation in depth. The projection of a stick figure walker oriented towards the viewer, therefore, is the same as its projection when oriented away. Even though such figures are depth-ambiguous, however, observers tend to interpret them as facing towards them more often than facing away. Some have speculated that this facing-the-viewer bias may exist for sociobiological reasons: Mistaking another human as retreating when they are actually approaching could have more severe consequences than the opposite error. Implied in this hypothesis is that the facing-towards percept of biological motion stimuli is potentially more threatening. Measures of anxiety and the facing-the-viewer bias should therefore be related, as researchers have consistently found that anxious individuals display an attentional bias towards more threatening stimuli. The goal of this study was to assess whether physical exercise (Experiment 1) or an anxiety induction/reduction task (Experiment 2) would significantly affect facing-the-viewer biases. We hypothesized that both physical exercise and progressive muscle relaxation would decrease facing-the-viewer biases for full stick figure walkers, but not for bottom- or top-half-only human stimuli, as these carry less sociobiological relevance. On the other hand, we expected that the anxiety induction task (Experiment 2) would increase facing-the-viewer biases for full stick figure walkers only. In both experiments, participants completed anxiety questionnaires, exercised on a treadmill (Experiment 1) or performed an anxiety induction/reduction task (Experiment 2), and then immediately completed a perceptual task that allowed us to assess their facing-the-viewer bias. As hypothesized, we found that physical exercise and progressive muscle relaxation reduced facing-the-viewer biases for full stick figure walkers only. 
Our results provide further support that the facing-the-viewer bias for biological motion stimuli is related to the sociobiological relevance of such stimuli. PMID:24987956
Multiple-stage ambiguity in motion perception reveals global computation of local motion directions.
Rider, Andrew T; Nishida, Shin'ya; Johnston, Alan
2016-12-01
The motion of a 1D image feature, such as a line, seen through a small aperture, or the small receptive field of a neural motion sensor, is underconstrained, and it is not possible to derive the true motion direction from a single local measurement. This is referred to as the aperture problem. How the visual system solves the aperture problem is a fundamental question in visual motion research. In the estimation of motion vectors through integration of ambiguous local motion measurements at different positions, conventional theories assume that the object motion is a rigid translation, with motion signals sharing a common motion vector within the spatial region over which the aperture problem is solved. However, this strategy fails for global rotation. Here we show that the human visual system can estimate global rotation directly through spatial pooling of locally ambiguous measurements, without an intervening step that computes local motion vectors. We designed a novel ambiguous global flow stimulus, which is globally as well as locally ambiguous. The global ambiguity implies that the stimulus is simultaneously consistent with both a global rigid translation and an infinite number of global rigid rotations. By the standard view, the motion should always be seen as a global translation, but it appears to shift from translation to rotation as observers shift fixation. This finding indicates that the visual system can estimate local vectors using a global rotation constraint, and suggests that local motion ambiguity may not be resolved until consistencies with multiple global motion patterns are assessed.
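Each locally ambiguous 1D measurement constrains only the normal component of the velocity, n·v = s. Under the conventional rigid-translation assumption, pooling many such constraints reduces to a least-squares problem; the sketch below, with invented values, shows this intersection-of-constraints idea (the abstract's point is that an analogous pooling can also operate under a global-rotation constraint).

```python
import numpy as np

rng = np.random.default_rng(0)
true_v = np.array([1.0, 0.5])   # hypothetical global translation

# Each 1D feature (orientation theta) reveals only the normal component
# of motion: n . v = s, where n is the unit normal -- the aperture problem.
thetas = rng.uniform(0, np.pi, 50)
normals = np.column_stack([np.cos(thetas), np.sin(thetas)])
speeds = normals @ true_v

# Pooling the ambiguous constraints and solving in the least-squares
# sense recovers the global translation despite no single measurement
# determining it.
v_hat, *_ = np.linalg.lstsq(normals, speeds, rcond=None)
```

With noiseless constraints at two or more distinct orientations the system has rank 2, so the recovered vector matches the true translation exactly.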
Neurophysiological and Behavioural Correlates of Coherent Motion Perception in Dyslexia
ERIC Educational Resources Information Center
Taroyan, Naira A.; Nicolson, Roderick I.; Buckley, David
2011-01-01
Coherent motion perception was tested in nine adolescents with dyslexia and 10 control participants matched for age and IQ using low contrast stimuli with three levels of coherence (10%, 25% and 40%). Event-related potentials (ERPs) and behavioural performance data were obtained. No significant between-group differences were found in performance…
Effects of changes in size, speed and distance on the perception of curved 3D trajectories
Zhang, Junjun; Braunstein, Myron L.; Andersen, George J.
2012-01-01
Previous research on the perception of 3D object motion has considered time to collision, time to passage, collision detection and judgments of speed and direction of motion, but has not directly studied the perception of the overall shape of the motion path. We examined the perception of the magnitude of curvature and sign of curvature of the motion path for objects moving at eye level in a horizontal plane parallel to the line of sight. We considered two sources of information for the perception of motion trajectories: changes in angular size and changes in angular speed. Three experiments examined judgments of relative curvature for objects moving at different distances. At the closest distance studied, accuracy was high with size information alone but near chance with speed information alone. At the greatest distance, accuracy with size information alone decreased sharply but accuracy for displays with both size and speed information remained high. We found similar results in two experiments with judgments of sign of curvature. Accuracy was higher for displays with both size and speed information than with size information alone, even when the speed information was based on parallel projections and was not informative about sign of curvature. For both magnitude of curvature and sign of curvature judgments, information indicating that the trajectory was curved increased accuracy, even when this information was not directly relevant to the required judgment. PMID:23007204
Visual Motion Processing Subserves Faster Visuomotor Reaction in Badminton Players.
Hülsdünker, Thorben; Strüder, Heiko K; Mierau, Andreas
2017-06-01
Athletes participating in ball or racquet sports have to respond to visual stimuli under critical time pressure. Previous studies used visual contrast stimuli to determine visual perception and visuomotor reaction in athletes and nonathletes; however, ball and racquet sports are characterized by motion rather than contrast visual cues. Because visual contrast and motion signals are processed in different cortical regions, this study aimed to determine differences in perception and processing of visual motion between athletes and nonathletes. Twenty-five skilled badminton players and 28 age-matched nonathletic controls participated in this study. Using a 64-channel EEG system, we investigated visual motion perception/processing in the motion-sensitive middle temporal (MT) cortical area in response to radial motion of different velocities. In a simple visuomotor reaction task, visuomotor transformation in Brodmann area 6 (BA6) and BA4 as well as muscular activation (EMG onset) and visuomotor reaction time (VMRT) were investigated. Stimulus- and response-locked potentials were determined to differentiate between perceptual and motor-related processes. As compared with nonathletes, athletes showed earlier EMG onset times (217 vs 178 ms, P < 0.001), accompanied by a faster VMRT (274 vs 243 ms, P < 0.001). Furthermore, athletes showed an earlier stimulus-locked peak activation of MT (200 vs 182 ms, P = 0.002) and BA6 (161 vs 137 ms, P = 0.009). Response-locked peak activation in MT was later in athletes (-7 vs 26 ms, P < 0.001), whereas no group differences were observed in BA6 and BA4. Multiple regression analyses with stimulus- and response-locked cortical potentials predicted EMG onset (r = 0.83) and VMRT (r = 0.77). The athletes' superior visuomotor performance in response to visual motion is primarily related to visual perception and, to a minor degree, to motor-related processes.
2013-12-01
brake reaction time on the EB test from pre- to post-test, while there was no significant change for the control group: t(38)=2.24, p=0.03. Tests of 3D motion...0.61). In experiment 2, the motion perception training group had a significant decrease in brake reaction time on the EB test from pre- to...the following. The experiment was divided into 8 phases: a pretest, six training blocks (once per week), and a posttest. Participants were allocated
Metacognitive Confidence Increases with, but Does Not Determine, Visual Perceptual Learning.
Zizlsperger, Leopold; Kümmel, Florian; Haarmeier, Thomas
2016-01-01
While perceptual learning increases objective sensitivity, its effects on the constant interaction between the process of perception and its metacognitive evaluation have rarely been investigated. Visual perception has been described as a process of probabilistic inference featuring metacognitive evaluations of choice certainty. For visual motion perception in healthy, naive human subjects, here we show that perceptual sensitivity, and confidence in it, increased with training. Metacognitive sensitivity, estimated from certainty ratings by a bias-free signal detection theoretic approach, in contrast did not. Concomitant 3 Hz transcranial alternating current stimulation (tACS) was applied in compliance with previous findings on effective high-low cross-frequency coupling subserving signal detection. While perceptual accuracy and confidence in it improved with training, there were no statistically significant tACS effects. Neither metacognitive sensitivity in distinguishing between their own correct and incorrect stimulus classifications nor decision confidence itself determined the subjects' visual perceptual learning. Improvements in objective performance, and in the metacognitive confidence in it, were instead determined by perceptual sensitivity at the outset of the experiment. Post-decision certainty in visual perceptual learning was neither independent of objective performance nor requisite for changes in sensitivity, but rather covaried with objective performance. The exact functional role of metacognitive confidence in human visual perception has yet to be determined.
Motion illusions in optical art presented for long durations are temporally distorted.
Nather, Francisco Carlos; Mecca, Fernando Figueiredo; Bueno, José Lino Oliveira
2013-01-01
Static figurative images implying human body movement, observed for shorter or longer durations, affect the perception of time. This study examined whether images of static geometric shapes would likewise affect the perception of time. Undergraduate participants observed two Optical Art paintings by Bridget Riley for 9 or 36 s (groups G9 and G36, respectively). Paintings implying different intensities of movement (2.0- and 6.0-point stimuli) were randomly presented. The prospective paradigm with the reproduction method was used to record time estimations. Data analysis showed no time distortions in the G9 group. In the G36 group the paintings were perceived differently: the duration of the 2.0-point painting was estimated as shorter than that of the 6.0-point painting. Also in G36, the 2.0-point painting's duration was underestimated relative to the actual exposure time. Motion illusions in static images affected time estimation according to the attention given by the observer to the complexity of movement, probably leading to changes in the storage velocity of internal clock pulses.
What a Difference a Parameter Makes: a Psychophysical Comparison of Random Dot Motion Algorithms
Pilly, Praveen K.; Seitz, Aaron R.
2009-01-01
Random dot motion (RDM) displays have emerged as one of the standard stimulus types employed in psychophysical and physiological studies of motion processing. RDMs are convenient because it is straightforward to manipulate the relative motion energy for a given motion direction in addition to stimulus parameters such as the speed, contrast, duration, density, aperture, etc. However, as widely as RDMs are employed so do they vary in their details of implementation. As a result, it is often difficult to make direct comparisons across studies employing different RDM algorithms and parameters. Here, we systematically measure the ability of human subjects to estimate motion direction for four commonly used RDM algorithms under a range of parameters in order to understand how these different algorithms compare in their perceptibility. We find that parametric and algorithmic differences can produce dramatically different performances. These effects, while surprising, can be understood in relationship to pertinent neurophysiological data regarding spatiotemporal displacement tuning properties of cells in area MT and how the tuning function changes with stimulus contrast and retinal eccentricity. These data help give a baseline by which different RDM algorithms can be compared, demonstrate a need for clearly reporting RDM details in the methods of papers, and also pose new constraints and challenges to models of motion direction processing. PMID:19336240
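As an example of why implementation details matter, here is one common "white noise" style RDM update, in which a coherence-sized subset of dots steps in the signal direction and the remainder are replotted at random. This is a hypothetical minimal variant; published algorithms differ in dot lifetime, frame interleaving, displacement size, and how noise dots move.

```python
import numpy as np

def rdm_step(dots, coherence, step, direction, rng, size=1.0):
    """One frame update for a simple 'white noise' RDM algorithm.

    A random subset of dots (proportion = coherence) moves coherently in
    `direction` (radians); the rest are replotted at random positions
    inside a size x size field with wrap-around.
    """
    n = len(dots)
    signal = rng.random(n) < coherence
    d = step * np.array([np.cos(direction), np.sin(direction)])
    dots = dots.copy()
    dots[signal] = (dots[signal] + d) % size       # coherent displacement, wrapped
    dots[~signal] = rng.random((np.count_nonzero(~signal), 2)) * size
    return dots

rng = np.random.default_rng(1)
dots = rng.random((200, 2))
frame2 = rdm_step(dots, coherence=0.5, step=0.01, direction=0.0, rng=rng)
```

A "random walk" variant would instead step noise dots in random directions, changing the display's motion-energy profile even at identical nominal coherence, which is one source of the cross-algorithm performance differences the authors measure.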
Global motion perception deficits in autism are reflected as early as primary visual cortex.
Robertson, Caroline E; Thomas, Cibu; Kravitz, Dwight J; Wallace, Gregory L; Baron-Cohen, Simon; Martin, Alex; Baker, Chris I
2014-09-01
Individuals with autism are often characterized as 'seeing the trees, but not the forest'-attuned to individual details in the visual world at the expense of the global percept they compose. Here, we tested the extent to which global processing deficits in autism reflect impairments in (i) primary visual processing; or (ii) decision-formation, using an archetypal example of global perception, coherent motion perception. In an event-related functional MRI experiment, 43 intelligence quotient and age-matched male participants (21 with autism, age range 15-27 years) performed a series of coherent motion perception judgements in which the amount of local motion signals available to be integrated into a global percept was varied by controlling stimulus viewing duration (0.2 or 0.6 s) and the proportion of dots moving in the correct direction (coherence: 4%, 15%, 30%, 50%, or 75%). Both typical participants and those with autism evidenced the same basic pattern of accuracy in judging the direction of motion, with performance decreasing with reduced coherence and shorter viewing durations. Critically, these effects were exaggerated in autism: despite equal performance at the long duration, performance was more strongly reduced by shortening viewing duration in autism (P < 0.015) and decreasing stimulus coherence (P < 0.008). To assess the neural correlates of these effects we focused on the responses of primary visual cortex and the middle temporal area, critical in the early visual processing of motion signals, as well as a region in the intraparietal sulcus thought to be involved in perceptual decision-making. The behavioural results were mirrored in both primary visual cortex and the middle temporal area, with a greater reduction in response at short, compared with long, viewing durations in autism compared with controls (both P < 0.018). In contrast, there was no difference between the groups in the intraparietal sulcus (P > 0.574). 
These findings suggest that reduced global motion perception in autism is driven by an atypical response early in visual processing and may reflect a fundamental perturbation in neural circuitry.
Henry, Molly J.; McAuley, J. Devin
2013-01-01
A number of accounts of human auditory perception assume that listeners use prior stimulus context to generate predictions about future stimulation. Here, we tested an auditory pitch-motion hypothesis that was developed from this perspective. Listeners judged either the time change (i.e., duration) or pitch change of a comparison frequency glide relative to a standard (referent) glide. Under a constant-velocity assumption, listeners were hypothesized to use the pitch velocity (Δf/Δt) of the standard glide to generate predictions about the pitch velocity of the comparison glide, leading to perceptual distortions along the to-be-judged dimension when the velocities of the two glides differed. These predictions were borne out in the pattern of relative points of subjective equality by a significant three-way interaction between the velocities of the two glides and task. In general, listeners’ judgments along the task-relevant dimension (pitch or time) were affected by expectations generated by the constant-velocity standard, but in an opposite manner for the two stimulus dimensions. When the comparison glide velocity was faster than the standard, listeners overestimated time change, but underestimated pitch change, whereas when the comparison glide velocity was slower than the standard, listeners underestimated time change, but overestimated pitch change. Perceptual distortions were least evident when the velocities of the standard and comparison glides were matched. Fits of an imputed velocity model further revealed increasingly larger distortions at faster velocities. The present findings provide support for the auditory pitch-motion hypothesis and add to a larger body of work revealing a role for active prediction in human auditory perception. PMID:23936462
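The constant-velocity prediction at the heart of the imputed-velocity account is simple arithmetic: the standard glide's pitch velocity Δf/Δt is carried forward to the comparison glide. The numbers below are invented for illustration.

```python
# Standard glide: 200 Hz of pitch change over 0.5 s (illustrative values)
standard_df, standard_dt = 200.0, 0.5
velocity = standard_df / standard_dt        # imputed pitch velocity, Hz/s

# Comparison glide spanning 300 Hz: the constant-velocity assumption
# predicts its duration; deviations of the actual glide's velocity from
# this prediction produce the perceptual distortions the abstract reports.
comparison_df = 300.0
expected_dt = comparison_df / velocity      # predicted duration in s
```

When the comparison's actual velocity matches the standard's, actual and predicted values coincide and, consistent with the abstract, distortions are least evident.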
Moving from spatially segregated to transparent motion: a modelling approach
Durant, Szonya; Donoso-Barrera, Alejandra; Tan, Sovira; Johnston, Alan
2005-01-01
Motion transparency, in which patterns of moving elements group together to give the impression of lacy overlapping surfaces, provides an important challenge to models of motion perception. It has been suggested that we perceive transparent motion when the shape of the velocity histogram of the stimulus is bimodal. To investigate this further, random-dot kinematogram motion sequences were created to simulate segregated (perceptually spatially separated) and transparent (perceptually overlapping) motion. The motion sequences were analysed using the multi-channel gradient model (McGM) to obtain the speed and direction at every pixel of each frame of the motion sequences. The velocity histograms obtained were found to be quantitatively similar and all were bimodal. However, the spatial and temporal properties of the velocity field differed between segregated and transparent stimuli. Transparent stimuli produced patches of rightward and leftward motion that varied in location over time. This demonstrates that we can successfully differentiate between these two types of motion on the basis of the time varying local velocity field. However, the percept of motion transparency cannot be based simply on the presence of a bimodal velocity histogram. PMID:17148338
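The bimodal velocity histogram at issue can be computed directly from local velocity estimates. The sketch below fabricates local estimates for two opposed dot populations; as the abstract notes, such a histogram is bimodal for both segregated and transparent displays, which is why its shape alone cannot predict the percept.

```python
import numpy as np

# Fabricated local velocity estimates (e.g., from a gradient model such
# as the McGM) for leftward- and rightward-moving dot populations.
rng = np.random.default_rng(2)
v_left = rng.normal(-2.0, 0.3, 500)    # px/frame, leftward population
v_right = rng.normal(2.0, 0.3, 500)    # px/frame, rightward population
vx = np.concatenate([v_left, v_right])

counts, edges = np.histogram(vx, bins=np.arange(-4, 4.25, 0.25))
centers = (edges[:-1] + edges[1:]) / 2

# Bimodality check: mass at the two population velocities, a trough near
# zero -- true whether the stimulus is segregated or transparent.
trough = counts[np.abs(centers) < 0.5].sum()
peaks = (counts[np.abs(centers + 2.0) < 0.5].sum()
         + counts[np.abs(centers - 2.0) < 0.5].sum())
```

Distinguishing the two stimulus types therefore requires the additional spatial and temporal structure of the velocity field that the authors analyse, not just this histogram.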
Distance and Size Perception in Astronauts during Long-Duration Spaceflight
Clément, Gilles; Skinner, Anna; Lathan, Corinna
2013-01-01
Exposure to microgravity during spaceflight is known to elicit orientation illusions, errors in sensory localization, postural imbalance, changes in vestibulo-spinal and vestibulo-ocular reflexes, and space motion sickness. The objective of this experiment was to investigate whether an alteration in cognitive visual-spatial processing, such as the perception of distance and size of objects, is also taking place during prolonged exposure to microgravity. Our results show that astronauts on board the International Space Station exhibit biases in the perception of their environment. Objects’ heights and depths were perceived as taller and shallower, respectively, and distances were generally underestimated in orbit compared to Earth. These changes may occur because the perspective cues for depth are less salient in microgravity or the eye-height scaling of size is different when an observer is not standing on the ground. This finding has operational implications for human space exploration missions. PMID:25369884
Perception of social interaction compresses subjective duration in an oxytocin-dependent manner.
Liu, Rui; Yuan, Xiangyong; Chen, Kepu; Jiang, Yi; Zhou, Wen
2018-05-22
Communication through body gestures permeates our daily life. Efficient perception of the message therein reflects one's social cognitive competency. Here we report that such competency is manifested temporally as shortened subjective duration of social interactions: motion sequences showing agents acting communicatively are perceived to be significantly shorter in duration as compared with those acting noncommunicatively. The strength of this effect is negatively correlated with one's autistic-like tendency. Critically, intranasal oxytocin administration restores the temporal compression effect in socially less proficient individuals, whereas the administration of atosiban, a competitive antagonist of oxytocin, diminishes the effect in socially proficient individuals. These findings indicate that perceived time, rather than being a faithful representation of physical time, is highly idiosyncratic and ingrained with one's personality trait. Moreover, they suggest that oxytocin is involved in mediating time perception of social interaction, further supporting the role of oxytocin in human social cognition. PMID:29784084
Pavan, Andrea; Boyce, Matthew; Ghin, Filippo
2016-10-01
Playing action video games enhances visual motion perception. However, there is psychophysical evidence that action video games do not improve motion sensitivity for translational global motion patterns presented in the fovea. This study investigates global motion perception in action video game players and compares their performance to that of non-action video game players and non-video game players. Stimuli were random dot kinematograms presented in the parafovea. Observers discriminated the motion direction of a target random dot kinematogram presented in one of the four visual quadrants. Action video game players showed lower motion coherence thresholds than the other groups. However, when the task was performed at threshold, we did not find differences between groups in the distributions of reaction times. These results suggest that action video games improve visual motion sensitivity in the near periphery of the visual field rather than the speed of responses. © The Author(s) 2016.
Neural theory for the perception of causal actions.
Fleischer, Falk; Christensen, Andrea; Caggiano, Vittorio; Thier, Peter; Giese, Martin A
2012-07-01
The efficient prediction of the behavior of others requires the recognition of their actions and an understanding of their action goals. In humans, this process is fast and extremely robust, as demonstrated by classical experiments showing that human observers reliably judge causal relationships and attribute interactive social behavior to strongly simplified stimuli consisting of simple moving geometrical shapes. While psychophysical experiments have identified critical visual features that determine the perception of causality and agency from such stimuli, the underlying detailed neural mechanisms remain largely unclear, and it is an open question why humans developed this advanced visual capability at all. We created pairs of naturalistic and abstract stimuli of hand actions that were exactly matched in terms of their motion parameters. We show that varying critical stimulus parameters for both stimulus types leads to very similar modulations of the perception of causality. However, the additional form information about the hand shape and its relationship with the object supports more fine-grained distinctions for the naturalistic stimuli. Moreover, we show that a physiologically plausible model for the recognition of goal-directed hand actions reproduces the observed dependencies of causality perception on critical stimulus parameters. These results support the hypothesis that selectivity for abstract action stimuli might emerge from the same neural mechanisms that underlie the visual processing of natural goal-directed action stimuli. Furthermore, the model proposes specific detailed neural circuits underlying this visual function, which can be evaluated in future experiments.
Seeing the world topsy-turvy: The primary role of kinematics in biological motion inversion effects.
Fitzgerald, Sue-Anne; Brooks, Anna; van der Zwan, Rick; Blair, Duncan
2014-01-01
Physical inversion of whole or partial human body representations typically has catastrophic consequences on the observer's ability to perform visual processing tasks. Explanations usually focus on the effects of inversion on the visual system's ability to exploit configural or structural relationships, but more recently have also implicated motion or kinematic cue processing. Here, we systematically tested the role of both on perceptions of sex from upright and inverted point-light walkers. Our data suggest that inversion results in systematic degradations of the processing of kinematic cues. Specifically and intriguingly, they reveal sex-based kinematic differences: Kinematics characteristic of females generally are resistant to inversion effects, while those of males drive systematic sex misperceptions. Implications of the findings are discussed.
Recovery of biological motion perception and network plasticity after cerebellar tumor removal.
Sokolov, Arseny A; Erb, Michael; Grodd, Wolfgang; Tatagiba, Marcos S; Frackowiak, Richard S J; Pavlova, Marina A
2014-10-01
Visual perception of body motion is vital for everyday activities such as social interaction, motor learning or car driving. Tumors of the left lateral cerebellum impair visual perception of body motion. However, the compensatory potential after cerebellar damage and the underlying neural mechanisms remain unknown. In the present study, visual sensitivity to point-light body motion was psychophysically assessed in patient SL with a dysplastic gangliocytoma (Lhermitte-Duclos disease) of the left cerebellum before and after neurosurgery, and in a group of healthy matched controls. Brain activity during processing of body motion was assessed by functional magnetic resonance imaging (fMRI). Alterations in the underlying cerebro-cerebellar circuitry were studied by psychophysiological interaction (PPI) analysis. Visual sensitivity to body motion in patient SL before neurosurgery was substantially lower than in controls, with significant improvement after neurosurgery. Functional MRI in patient SL revealed a pattern of cerebellar activation during biological motion processing similar to that in healthy participants, but located more medially, in the left cerebellar lobules III and IX. As in healthy participants, PPI analysis showed cerebellar communication with a region in the superior temporal sulcus, but located more anteriorly. The findings demonstrate a potential for recovery of visual body motion processing after cerebellar damage, likely mediated by topographic shifts within the corresponding cerebro-cerebellar circuitry induced by cerebellar reorganization. The outcome is of importance for further understanding of cerebellar plasticity and the neural circuits underpinning visual social cognition.
ERIC Educational Resources Information Center
Hirai, Masahiro; Hiraki, Kazuo
2006-01-01
We investigated how the spatiotemporal structure of animations of biological motion (BM) affects brain activity. We measured event-related potentials (ERPs) during the perception of BM under four conditions: normal spatial and temporal structure; scrambled spatial and normal temporal structure; normal spatial and scrambled temporal structure; and…
Visual and Non-Visual Contributions to the Perception of Object Motion during Self-Motion
Fajen, Brett R.; Matthis, Jonathan S.
2013-01-01
Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component – that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (Experiment 1) and direction (Experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects. PMID:23408983
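The flow-field decomposition described in this abstract can be sketched numerically. This is an illustrative toy only (the function name and the velocity values are assumptions, not from the paper): the flow at an image point is treated as the sum of a self-motion component and an object-motion component, so recovering object motion relative to the stationary environment amounts to subtracting an estimate of the self-motion flow.

```python
# Toy sketch of optic-flow decomposition (illustrative names/values, not
# the authors' model): total flow = self-motion flow + object-motion flow,
# so the object-motion component is recovered by subtraction.

def object_flow(total_flow, self_motion_flow):
    """Subtract the estimated self-motion component from the total flow.

    Flows are (vx, vy) image velocities; units are arbitrary here.
    """
    return (total_flow[0] - self_motion_flow[0],
            total_flow[1] - self_motion_flow[1])

# A stationary object seen by a moving observer: its total retinal flow
# equals the self-motion component, so the recovered object flow is zero.
print(object_flow((2.0, -1.0), (2.0, -1.0)))  # (0.0, 0.0)
```

In this framing, errors in the self-motion estimate (visual or non-visual) propagate directly into the perceived object motion, which is the effect the two experiments probe.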
Shapiro, Arthur; Lu, Zhong-Lin; Huang, Chang-Bing; Knight, Emily; Ennis, Robert
2010-01-01
Background The human visual system does not treat all parts of an image equally: the central segments of an image, which fall on the fovea, are processed with a higher resolution than the segments that fall in the visual periphery. Even though the differences between foveal and peripheral resolution are large, these differences do not usually disrupt our perception of seamless visual space. Here we examine a motion stimulus in which the shift from foveal to peripheral viewing creates a dramatic spatial/temporal discontinuity. Methodology/Principal Findings The stimulus consists of a descending disk (global motion) with an internal moving grating (local motion). When observers view the disk centrally, they perceive both global and local motion (i.e., observers see the disk's vertical descent and the internal spinning). When observers view the disk peripherally, the internal portion appears stationary, and the disk appears to descend at an angle. The angle of perceived descent increases as the observer views the stimulus from further in the periphery. We examine the first- and second-order information content in the display with the use of a three-dimensional Fourier analysis and show how our results can be used to describe perceived spatial/temporal discontinuities in real-world situations. Conclusions/Significance The perceived shift of the disk's direction in the periphery is consistent with a model in which foveal processing separates first- and second-order motion information while peripheral processing integrates first- and second-order motion information. We argue that the perceived distortion may influence real-world visual observations. To this end, we present a hypothesis and analysis of the perception of the curveball and rising fastball in the sport of baseball. The curveball is a physically measurable phenomenon: the imbalance of forces created by the ball's spin causes the ball to deviate from a straight line and to follow a smooth parabolic path. 
However, the curveball is also a perceptual puzzle because batters often report that the flight of the ball undergoes a dramatic and nearly discontinuous shift in position as the ball nears home plate. We suggest that the perception of a discontinuous shift in position results from differences between foveal and peripheral processing. PMID:20967247
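The smooth parabolic path the authors describe can be illustrated with a toy calculation (the acceleration and flight-time values below are assumptions for illustration, not measurements from the paper): under a roughly constant lateral Magnus acceleration, deviation grows quadratically with time, so most of the break accumulates late in flight even though the trajectory contains no physical discontinuity.

```python
# Illustrative sketch (values are assumptions, not from the paper): with an
# approximately constant lateral (Magnus) acceleration a, the ball's
# deviation from a straight-line path is d(t) = 0.5 * a * t**2, a smooth
# parabola with no physical jump.

def lateral_deviation(a, t):
    """Deviation (m) after t seconds under constant lateral acceleration a (m/s^2)."""
    return 0.5 * a * t ** 2

# Over the last tenth of an (assumed) 0.4 s flight, the ball sweeps through
# a large fraction of its total break, even though the path is smooth.
total = lateral_deviation(9.0, 0.4)
late = total - lateral_deviation(9.0, 0.36)
print(round(total, 3), round(late, 3))  # prints: 0.72 0.137
```

The perceptual "break" the batter reports would then arise not from the physics but from the foveal-to-peripheral processing shift the authors propose.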
Can walking motions improve visually induced rotational self-motion illusions in virtual reality?
Riecke, Bernhard E; Freiberg, Jacob B; Grechkin, Timofey Y
2015-02-04
Illusions of self-motion (vection) can provide compelling sensations of moving through virtual environments without the need for complex motion simulators or large tracked physical walking spaces. Here we explore the interaction between biomechanical cues (stepping along a rotating circular treadmill) and visual cues (viewing simulated self-rotation) for providing stationary users a compelling sensation of rotational self-motion (circular vection). When tested individually, biomechanical and visual cues were similarly effective in eliciting self-motion illusions. However, in combination they yielded significantly more intense self-motion illusions. These findings provide the first compelling evidence that walking motions can be used to significantly enhance visually induced rotational self-motion perception in virtual environments (and vice versa) without the need for physical self-motion or motion platforms. This is noteworthy, as linear treadmills have been found to actually impair visually induced translational self-motion perception (Ash, Palmisano, Apthorp, & Allison, 2013). Given the predominant focus on linear walking interfaces for virtual-reality locomotion, our findings suggest that investigating circular and curvilinear walking interfaces offers a promising direction for future research and development and can help to enhance self-motion illusions, presence, and immersion in virtual-reality systems. © 2015 ARVO.
Sandlund, Marlene; Dock, Katarina; Häger, Charlotte K; Waterworth, Eva Lindh
2012-01-01
To explore parents' perceptions of using low-cost motion interactive video games as home training for their children with mild/moderate cerebral palsy. Semi-structured interviews were carried out with parents from 15 families after participation in an intervention in which motion interactive games were used daily in home training for their child. A qualitative content analysis approach was applied. The parents' perception of the training was very positive. They expressed the view that motion interactive video games may promote positive experiences of physical training in rehabilitation, where the social aspects of gaming were especially valued. Further, the parents experienced less need to coach, as the games stimulated independent training. However, there was a desire for more controlled and individualized games to better address the specific rehabilitative needs of each child. Low-cost motion interactive games may add motivation and social interaction to home training and promote independent training with reduced coaching effort for the parents. In future designs of interactive games for rehabilitation purposes, it is important to preserve the motivational and social features of games while optimizing the individualized physical exercise.
Central Inhibition Ability Modulates Attention-Induced Motion Blindness
ERIC Educational Resources Information Center
Milders, Maarten; Hay, Julia; Sahraie, Arash; Niedeggen, Michael
2004-01-01
Impaired motion perception can be induced in normal observers in a rapid serial visual presentation task. Essential for this effect is the presence of motion distractors prior to the motion target, and we proposed that this attention-induced motion blindness results from high-level inhibition produced by the distractors. To investigate this, we…
Self Motion Perception and Motion Sickness
NASA Technical Reports Server (NTRS)
Fox, Robert A. (Principal Investigator)
1991-01-01
The studies conducted in this research project examined several aspects of motion sickness in animal models. A principal objective was to investigate the neuroanatomy important in motion sickness, both to examine the utility of putative animal models and to define the underlying neural mechanisms.
Age-related changes in perception of movement in driving scenes.
Lacherez, Philippe; Turner, Laura; Lester, Robert; Burns, Zoe; Wood, Joanne M
2014-07-01
Age-related changes in motion sensitivity have been found to relate to reductions in various indices of driving performance and safety. The aim of this study was to investigate the basis of this relationship in terms of determining which aspects of motion perception are most relevant to driving. Participants included 61 regular drivers (age range 22-87 years). Visual performance was measured binocularly. Measures included visual acuity, contrast sensitivity and motion sensitivity assessed using four different approaches: (1) threshold minimum drift rate for a drifting Gabor patch, (2) Dmin from a random dot display, (3) threshold coherence from a random dot display, and (4) threshold drift rate for a second-order (contrast modulated) sinusoidal grating. Participants then completed the Hazard Perception Test (HPT) in which they were required to identify moving hazards in videos of real driving scenes, and also a Direction of Heading task (DOH) in which they identified deviations from normal lane keeping in brief videos of driving filmed from the interior of a vehicle. In bivariate correlation analyses, all motion sensitivity measures significantly declined with age. Motion coherence thresholds, and minimum drift rate threshold for the first-order stimulus (Gabor patch) both significantly predicted HPT performance even after controlling for age, visual acuity and contrast sensitivity. Bootstrap mediation analysis showed that individual differences in DOH accuracy partly explained these relationships, where those individuals with poorer motion sensitivity on the coherence and Gabor tests showed decreased ability to perceive deviations in motion in the driving videos, which related in turn to their ability to detect the moving hazards. 
The ability to detect subtle movements in the driving environment (as determined by the DOH task) may be an important contributor to effective hazard perception, and is associated with age and with an individual's performance on tests of motion sensitivity. The locus of the processing deficits appears to lie in first-order, rather than second-order, motion pathways. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.
Visual Depth from Motion Parallax and Eye Pursuit
Stroyan, Keith; Nawrot, Mark
2012-01-01
A translating observer viewing a rigid environment experiences “motion parallax,” the relative movement upon the observer’s retina of variously positioned objects in the scene. This retinal movement of images provides a cue to the relative depth of objects in the environment; however, retinal motion alone cannot mathematically determine the relative depth of the objects. Visual perception of depth from lateral observer translation uses both retinal image motion and eye movement. In (Nawrot & Stroyan, 2009, Vision Res. 49, p.1969) we showed that the ratio of the rate of retinal motion to the rate of smooth eye pursuit mathematically determines depth relative to the fixation point in central vision. We also reported on psychophysical experiments indicating that this ratio is the important quantity for perception. Here we analyze the motion/pursuit cue for the more general, and more complicated, case in which objects are distributed across the horizontal viewing plane beyond central vision. We show how the mathematical motion/pursuit cue varies with different points across the plane and with time as an observer translates. If the time-varying retinal motion and smooth eye pursuit are the only signals used for this visual process, it is important to know what can mathematically be derived about depth and structure. Our analysis shows that the motion/pursuit ratio yields an excellent description of depth and structure in these broader stimulus conditions, provides a detailed quantitative hypothesis of these visual processes for the perception of depth and structure from motion parallax, and provides a computational foundation for analyzing the dynamic geometry of future experiments. PMID:21695531
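The central-vision case summarized in this abstract can be sketched numerically. This is an approximation for illustration only, not the authors' full derivation; the function name and the numbers are assumptions: depth relative to the fixation point scales with the ratio of retinal image motion to smooth pursuit rate.

```python
# Minimal sketch of the central-vision motion/pursuit cue (illustrative
# approximation, not the authors' full derivation): depth relative to
# fixation is proportional to retinal motion rate / pursuit rate.

def motion_pursuit_depth(retinal_motion, pursuit_rate, fixation_distance):
    """Approximate depth relative to fixation, in units of fixation_distance.

    retinal_motion and pursuit_rate are angular rates (e.g., deg/s); a
    point with zero retinal slip (ratio 0) lies at the fixation distance.
    """
    return fixation_distance * (retinal_motion / pursuit_rate)

# A point whose image slips at one tenth of the pursuit rate lies at about
# one tenth of the fixation distance away from the fixation plane.
print(motion_pursuit_depth(0.5, 5.0, 2.0))  # 0.2
```

The paper's contribution is extending this ratio beyond central vision, where the cue varies across the viewing plane and over time as the observer translates.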
Chang, Michael; Halaki, Mark; Adams, Roger; Cobley, Stephen; Lee, Kwee-Yum; O'Dwyer, Nicholas
2016-01-01
In dance, the goals of actions are not always clearly defined. Investigations into the perceived quality of dance actions and their relation to biomechanical motion should give insight into the performance of dance actions and their goals. The purpose of this review was to explore and document current literature concerning dance perception and its relation to the biomechanics of motion. Seven studies were included in the review. The study results showed systematic differences between expert, non-expert, and novice dancers in biomechanical and perceptual measures, both of which also varied according to the actions expressed in dance. Biomechanical and perceptual variables were found to be correlated in all the studies in the review. Significant relations were observed between kinematic variables such as amplitude, speed, and variability of movement, and perceptual measures of beauty and performance quality. However, in general, there were no clear trends in these relations. Instead, the evidence suggests that perceptual ratings of dance may be specific to both the task (the skill of the particular action) and the context (the music and staging). The results also suggest that the human perceptual system is sensitive to skillful movements and neuromuscular coordination. Since the value perceived by audiences appears to be related to dance action goals and the coordination of dance elements, practitioners could place a priority on development and execution of those factors.
Gao, Tao; Scholl, Brian J.; McCarthy, Gregory
2012-01-01
Certain motion patterns can cause even simple geometric shapes to be perceived as animate. Viewing such displays evokes strong activation in temporoparietal cortex, including areas in and near the (predominantly right) posterior superior temporal sulcus (pSTS). These brain regions are sensitive to socially relevant information, but the nature of the social information represented in pSTS is unclear. For example, previous studies have been unable to explore the perception of shifting intentions, beyond animacy. This is due in part to the ubiquitous use of complex displays that combine several types of social information, with little ability to control lower-level visual cues. Here we address this challenge by manipulating intentionality with parametric precision while holding cues to animacy constant. Human subjects were exposed to a “wavering wolf” display, in which one item (the ‘wolf’) chased continuously, but its goal (i.e. the sheep) frequently switched among other shapes. By contrasting this with three other control displays, we find that the wolf’s changing intentions gave rise to strong selective activation in the right pSTS, compared with (1) a wolf that chases with a single unchanging intention; (2) very similar patterns of motion (and motion change) that are not perceived as goal-directed; and (3) abrupt onsets and offsets of moving objects. These results demonstrate in an especially well controlled manner that right pSTS is involved in social perception, beyond physical properties such as motion energy and salience. More importantly, these results demonstrate for the first time that this region represents perceived intentions, beyond animacy. PMID:23055497
Alpha oscillations correlate with the successful inhibition of unattended stimuli.
Händel, Barbara F; Haarmeier, Thomas; Jensen, Ole
2011-09-01
Because the human visual system is continually being bombarded with inputs, it is necessary to have effective mechanisms for filtering out irrelevant information. This is partly achieved by the allocation of attention, allowing the visual system to process relevant input while blocking out irrelevant input. What is the physiological substrate of attentional allocation? It has been proposed that alpha activity reflects functional inhibition. Here we asked if inhibition by alpha oscillations has behavioral consequences for suppressing the perception of unattended input. To this end, we investigated the influence of alpha activity on motion processing in two attentional conditions using magneto-encephalography. The visual stimuli used consisted of two random-dot kinematograms presented simultaneously to the left and right visual hemifields. Subjects were cued to covertly attend the left or right kinematogram. After 1.5 sec, a second cue tested whether subjects could report the direction of coherent motion in the attended (80%) or unattended hemifield (20%). Occipital alpha power was higher contralateral to the unattended side than to the attended side, thus suggesting inhibition of the unattended hemifield. Our key finding is that this alpha lateralization in the 20% invalidly cued trials did correlate with the perception of motion direction: Subjects with pronounced alpha lateralization were worse at detecting motion direction in the unattended hemifield. In contrast, lateralization did not correlate with visual discrimination in the attended visual hemifield. Our findings emphasize the suppressive nature of alpha oscillations and suggest that processing of inputs outside the field of attention is weakened by means of increased alpha activity.
Motor mapping of implied actions during perception of emotional body language.
Borgomaneri, Sara; Gazzola, Valeria; Avenanti, Alessio
2012-04-01
Perceiving and understanding emotional cues is critical for survival. Using the International Affective Picture System (IAPS), previous TMS studies have found that watching humans in emotional pictures increases motor excitability relative to seeing landscapes or household objects, suggesting that emotional cues may prime the body for action. Here we tested whether motor facilitation to emotional pictures may reflect the simulation of the human motor behavior implied in the pictures, occurring independently of its emotional valence. Motor-evoked potentials (MEPs) to single-pulse TMS of the left motor cortex were recorded from hand muscles during observation and categorization of emotional and neutral pictures. In experiment 1 participants watched neutral, positive and negative IAPS stimuli, while in experiment 2 they watched pictures depicting emotional (joyful, fearful) human body movements, neutral body movements, and neutral static postures. Experiment 1 confirms the increase in excitability for emotional IAPS stimuli found in previous research, but also shows that more implied motion is perceived in emotional relative to neutral scenes. Experiment 2 shows that motor excitability and implied motion scores for emotional and neutral body actions were comparable and greater than for static body postures. In keeping with embodied simulation theories, the motor response to emotional pictures may reflect simulation of the action implied in the emotional scenes. Action simulation may occur independently of whether the observed implied action carries emotional or neutral meanings. Our study suggests the need to control for implied motion when exploring motor responses to emotional pictures of humans. Copyright © 2012 Elsevier Inc. All rights reserved.
Self-motion perception compresses time experienced in return travel.
Seno, Takeharu; Ito, Hiroyuki; Sunaga, Shoji
2011-01-01
It is often anecdotally reported that time experienced in return travel (back to the start point) seems shorter than time spent in outward travel (travel to a new destination). Here, we report the first experimental results showing that return travel time is experienced as shorter than the actual time. This discrepancy is induced by the existence of self-motion perception.
ERIC Educational Resources Information Center
Herring, Phillip Allen
2009-01-01
The purpose of the study was to analyze the science outreach program, Science In Motion (SIM), located in Mobile, Alabama. This research investigated what impact the SIM program has on student cognitive functioning and teacher efficacy and also investigated teacher perceptions and attitudes regarding the program. To investigate student…
ERIC Educational Resources Information Center
Johnson, Kerri L.; McKay, Lawrie S.; Pollick, Frank E.
2011-01-01
Gender stereotypes have been implicated in sex-typed perceptions of facial emotion. Such interpretations were recently called into question because facial cues of emotion are confounded with sexually dimorphic facial cues. Here we examine the role of visual cues and gender stereotypes in perceptions of biological motion displays, thus overcoming…
Kim, Jejoong; Park, Sohee; Blake, Randolph
2011-01-01
Background Anomalous visual perception is a common feature of schizophrenia plausibly associated with impaired social cognition that, in turn, could affect social behavior. Past research suggests impairment in biological motion perception in schizophrenia. Behavioral and functional magnetic resonance imaging (fMRI) experiments were conducted to verify the existence of this impairment, to clarify its perceptual basis, and to identify accompanying neural concomitants of those deficits. Methodology/Findings In Experiment 1, we measured ability to detect biological motion portrayed by point-light animations embedded within masking noise. Experiment 2 measured discrimination accuracy for pairs of point-light biological motion sequences differing in the degree of perturbation of the kinematics portrayed in those sequences. Experiment 3 measured BOLD signals using event-related fMRI during a biological motion categorization task. Compared to healthy individuals, schizophrenia patients performed significantly worse on both the detection (Experiment 1) and discrimination (Experiment 2) tasks. Consistent with the behavioral results, the fMRI study revealed that healthy individuals exhibited strong activation to biological motion, but not to scrambled motion in the posterior portion of the superior temporal sulcus (STSp). Interestingly, strong STSp activation was also observed for scrambled or partially scrambled motion when the healthy participants perceived it as normal biological motion. On the other hand, STSp activation in schizophrenia patients was not selective to biological or scrambled motion. Conclusion Schizophrenia is accompanied by difficulties discriminating biological from non-biological motion, and associated with those difficulties are altered patterns of neural responses within brain area STSp. 
The perceptual deficits exhibited by schizophrenia patients may be an exaggerated manifestation of neural events within STSp associated with perceptual errors made by healthy observers on these same tasks. The present findings fit within the context of theories of delusion involving perceptual and cognitive processes. PMID:21625492
Synaptic Correlates of Low-Level Perception in V1.
Gerard-Mercier, Florian; Carelli, Pedro V; Pananceau, Marc; Troncoso, Xoana G; Frégnac, Yves
2016-04-06
The computational role of primary visual cortex (V1) in low-level perception remains largely debated. A dominant view assumes the prevalence of higher cortical areas and top-down processes in binding information across the visual field. Here, we investigated the role of long-distance intracortical connections in form and motion processing by measuring, with intracellular recordings, their synaptic impact on neurons in area 17 (V1) of the anesthetized cat. By systematically mapping synaptic responses to stimuli presented in the nonspiking surround of V1 receptive fields, we provide the first quantitative characterization of the lateral functional connectivity kernel of V1 neurons. Our results revealed at the population level two structural-functional biases in the synaptic integration and dynamic association properties of V1 neurons. First, subthreshold responses to oriented stimuli flashed in isolation in the nonspiking surround exhibited a geometric organization around the preferred orientation axis mirroring the psychophysical "association field" for collinear contour perception. Second, apparent motion stimuli, for which horizontal and feedforward synaptic inputs summed in-phase, evoked dominantly facilitatory nonlinear interactions, specifically during centripetal collinear activation along the preferred orientation axis, at saccadic-like speeds. This spatiotemporal integration property, which could constitute the neural correlate of a human perceptual bias in speed detection, suggests that local (orientation) and global (motion) information is already linked within V1. We propose the existence of a "dynamic association field" in V1 neurons, whose spatial extent and anisotropy are transiently updated and reshaped as a function of changes in the retinal flow statistics imposed during natural oculomotor exploration. The computational role of primary visual cortex in low-level perception remains debated. 
The expression of this "pop-out" perception is often assumed to require attention-related processes, such as top-down feedback from higher cortical areas. Using intracellular techniques in the anesthetized cat and novel analysis methods, we reveal unexpected structural-functional biases in the synaptic integration and dynamic association properties of V1 neurons. These structural-functional biases provide a substrate, within V1, for contour detection and, more unexpectedly, global motion flow sensitivity at saccadic speed, even in the absence of attentional processes. We argue for the concept of a "dynamic association field" in V1 neurons, whose spatial extent and anisotropy change with retinal flow statistics, and more generally for a renewed focus on intracortical computation. Copyright © 2016 the authors.
Visual processing of rotary motion.
Werkhoven, P; Koenderink, J J
1991-01-01
Local descriptions of velocity fields (e.g., rotation, divergence, and deformation) contain a wealth of information for form perception and ego motion. In spite of this, human psychophysical performance in estimating these entities has not yet been thoroughly examined. In this paper, we report on the visual discrimination of rotary motion. A sequence of image frames is used to elicit an apparent rotation of an annulus, composed of dots in the frontoparallel plane, around a fixation spot at the center of the annulus. Differential angular velocity thresholds are measured as a function of the angular velocity, the diameter of the annulus, the number of dots, the display time per frame, and the number of frames. The results show a U-shaped dependence of angular velocity discrimination on spatial scale, with minimal Weber fractions of 7%. Experiments with a scatter in the distance of the individual dots to the center of rotation demonstrate that angular velocity cannot be assessed directly; perceived angular velocity depends strongly on the distance of the dots relative to the center of rotation. We suggest that the estimation of rotary motion is mediated by local estimations of linear velocity.
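The suggestion that rotary motion is estimated via local linear velocities can be illustrated with a toy calculation (the readout rule below is a hypothetical simplification, not the authors' model): under rigid rotation, a dot at radius r moves at linear speed v = ωr, so recovering ω by dividing speed by an assumed radius misestimates the rotation whenever dots sit at other radii.

```python
# Illustrative sketch only (not the paper's model): if rotation is read out
# from local linear dot speeds v = omega * r, two annuli rotating at the same
# angular velocity but with dots at different radii yield different percepts.
def linear_speed(omega, r):
    """Linear speed of a dot at radius r under rigid rotation."""
    return omega * r

def perceived_omega(omega, dot_radius, assumed_radius):
    """Angular velocity recovered by dividing linear speed by an assumed radius."""
    return linear_speed(omega, dot_radius) / assumed_radius

true_omega = 2.0  # rad/s
# Observer assumed to use the nominal annulus radius (1.0) for all dots:
print(perceived_omega(true_omega, 1.0, 1.0))  # dot at nominal radius -> 2.0 (veridical)
print(perceived_omega(true_omega, 1.5, 1.0))  # outer dot -> 3.0 (overestimated)
print(perceived_omega(true_omega, 0.5, 1.0))  # inner dot -> 1.0 (underestimated)
```

This reproduces the qualitative finding that perceived angular velocity depends on the distance of the dots from the center of rotation.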
Path integration in tactile perception of shapes.
Moscatelli, Alessandro; Naceri, Abdeldjallil; Ernst, Marc O
2014-11-01
Whenever we move the hand across a surface, tactile signals provide information about the relative velocity between the skin and the surface. If the system were able to integrate the tactile velocity information over time, cutaneous touch could provide an estimate of the relative displacement between the hand and the surface. Here, we asked whether humans are able to form a reliable representation of the motion path from tactile cues only, integrating motion information over time. In order to address this issue, we conducted three experiments using tactile motion and asked participants (1) to estimate the length of a simulated triangle, (2) to reproduce the shape of a simulated triangular path, and (3) to estimate the angle between two line segments. Participants were able to accurately indicate the length of the path, whereas the perceived direction was affected by a direction bias (inward bias). The response pattern was thus qualitatively similar to those reported in classical path integration studies involving locomotion. However, we explain the directional biases as the result of a tactile motion aftereffect. Copyright © 2014 Elsevier B.V. All rights reserved.
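The integration step described above can be sketched numerically (sampling rate and speeds are hypothetical values, not the study's stimuli): summing the sampled skin-surface speeds over time yields the traced path length.

```python
import numpy as np

# Sketch of tactile path integration: path length as the time integral of the
# relative skin-surface speed. All numbers below are illustrative only.
def path_length(velocities, dt):
    """Total path length from sampled 2-D velocity vectors (m), given dt (s)."""
    speeds = np.linalg.norm(np.asarray(velocities), axis=1)
    return float(np.sum(speeds) * dt)

dt = 0.01  # s between tactile velocity samples
# Two legs of a simulated triangle, each traced for 1 s at 0.1 m/s:
leg1 = [(0.1, 0.0)] * 100
leg2 = [(0.0, 0.1)] * 100
print(round(path_length(leg1 + leg2, dt), 3))  # 0.2 m traced in total
```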
Carriot, Jérome; Jamali, Mohsen; Chacron, Maurice J; Cullen, Kathleen E
2017-04-15
In order to understand how the brain's coding strategies are adapted to the statistics of the sensory stimuli experienced during everyday life, the use of animal models is essential. Mice and non-human primates have become common models for furthering our knowledge of the neuronal coding of natural stimuli, but differences in their natural environments and behavioural repertoire may impact optimal coding strategies. Here we investigated the structure and statistics of the vestibular input experienced by mice versus non-human primates during natural behaviours, and found important differences. Our data establish that the structure and statistics of natural signals in non-human primates more closely resemble those observed previously in humans, suggesting similar coding strategies for incoming vestibular input. These results help us understand how the effects of active sensing and biomechanics will differentially shape the statistics of vestibular stimuli across species, and have important implications for sensory coding in other systems. It is widely believed that sensory systems are adapted to the statistical structure of natural stimuli, thereby optimizing coding. Recent evidence suggests that this is also the case for the vestibular system, which senses self-motion and in turn contributes to essential brain functions ranging from the most automatic reflexes to spatial perception and motor coordination. However, little is known about the statistics of self-motion stimuli actually experienced by freely moving animals in their natural environments. Accordingly, here we examined the natural self-motion signals experienced by mice and monkeys: two species commonly used to study vestibular neural coding. First, we found that probability distributions for all six dimensions of motion (three rotations, three translations) in both species deviated from normality due to long tails. 
Interestingly, the power spectra of natural rotational stimuli displayed similar structure for both species and were not well fitted by power laws. This result contrasts with reports that the natural spectra of other sensory modalities (i.e. vision, auditory and tactile) instead show a power-law relationship with frequency, which indicates scale invariance. Analysis of natural translational stimuli revealed important species differences as power spectra deviated from scale invariance for monkeys but not for mice. By comparing our results to previously published data for humans, we found the statistical structure of natural self-motion stimuli in monkeys and humans more closely resemble one another. Our results thus predict that, overall, neural coding strategies used by vestibular pathways to encode natural self-motion stimuli are fundamentally different in rodents and primates. © 2017 The Authors. The Journal of Physiology © 2017 The Physiological Society.
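The scale-invariance test mentioned here can be sketched as a straight-line fit in log-log coordinates (synthetic spectra, not the study's recordings): a power law S(f) ∝ f^(-α) is linear in log-log space, so large fit residuals signal deviation from scale invariance.

```python
import numpy as np

# Sketch: assess whether a power spectrum follows a power law by fitting a
# line to log10(power) vs log10(frequency) and inspecting the residuals.
def fit_power_law(freqs, power):
    logf, logp = np.log10(freqs), np.log10(power)
    slope, intercept = np.polyfit(logf, logp, 1)
    resid = logp - (slope * logf + intercept)
    rmse = float(np.sqrt(np.mean(resid ** 2)))
    return float(slope), rmse

f = np.linspace(0.1, 20, 200)           # Hz (synthetic)
scale_free = f ** -2.0                   # exact power law, alpha = 2
lowpass = 1.0 / (1.0 + (f / 2.0) ** 4)   # knee-shaped spectrum, not scale free

print(fit_power_law(f, scale_free))  # slope ~ -2.0, rmse ~ 0 (good power-law fit)
print(fit_power_law(f, lowpass))     # clearly nonzero rmse (deviates from power law)
```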
Music and mirror neurons: from motion to 'e'motion.
Molnar-Szakacs, Istvan; Overy, Katie
2006-12-01
The ability to create and enjoy music is a universal human trait and plays an important role in the daily life of most cultures. Music has a unique ability to trigger memories, awaken emotions and to intensify our social experiences. We do not need to be trained in music performance or appreciation to be able to reap its benefits-already as infants, we relate to it spontaneously and effortlessly. There has been a recent surge in neuroimaging investigations of the neural basis of musical experience, but the way in which the abstract shapes and patterns of musical sound can have such profound meaning to us remains elusive. Here we review recent neuroimaging evidence and suggest that music, like language, involves an intimate coupling between the perception and production of hierarchically organized sequential information, the structure of which has the ability to communicate meaning and emotion. We propose that these aspects of musical experience may be mediated by the human mirror neuron system.
The development of a test methodology for the evaluation of EVA gloves
NASA Technical Reports Server (NTRS)
O'Hara, John M.; Cleland, John; Winfield, Dan
1988-01-01
This paper describes the development of a standardized set of tests designed to assess EVA-gloved hand capabilities in six measurement domains: range of motion, strength, tactile perception, dexterity, fatigue, and comfort. Based upon an assessment of general human-hand functioning and EVA task requirements, several tests within each measurement domain were developed to provide a comprehensive evaluation. All tests were designed to be conducted in a glove box with the bare hand as a baseline and the EVA glove at operating pressure.
Kinesthetic information disambiguates visual motion signals.
Hu, Bo; Knill, David C
2010-05-25
Numerous studies have shown that extra-retinal signals can disambiguate motion information created by movements of the eye or head. We report a new form of cross-modal sensory integration in which the kinesthetic information generated by active hand movements essentially captures ambiguous visual motion information. Several previous studies have shown that active movement can bias observers' percepts of bi-stable stimuli; however, these effects seem to be best explained by attentional mechanisms. We show that kinesthetic information can change an otherwise stable perception of motion, providing evidence of genuine fusion between visual and kinesthetic information. The experiments take advantage of the aperture problem, in which the motion of a one-dimensional grating pattern behind an aperture, while geometrically ambiguous, appears to move stably in the grating normal direction. When actively moving the pattern, however, the observer sees the motion to be in the hand movement direction. Copyright 2010 Elsevier Ltd. All rights reserved.
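The geometric ambiguity behind the aperture problem reduces to a dot product: a 1-D grating seen through an aperture constrains only the velocity component along the grating normal, so physically different velocities, e.g. one imposed by the moving hand, can produce an identical retinal stimulus. A minimal sketch:

```python
import numpy as np

# Sketch of the aperture problem's ambiguity: the grating stimulus determines
# only the projection of the pattern velocity onto the grating normal.
def normal_component(v, n):
    """Component of velocity v along the (unnormalized) grating normal n."""
    n = np.asarray(n, dtype=float)
    n /= np.linalg.norm(n)
    return float(np.dot(v, n))

n = (1.0, 0.0)         # grating normal (stripes are vertical)
v_normal = (2.0, 0.0)  # motion straight along the normal
v_oblique = (2.0, 3.0) # oblique motion, e.g. the hand's movement direction

# Both velocities produce the same grating stimulus through the aperture:
print(normal_component(v_normal, n))   # 2.0
print(normal_component(v_oblique, n))  # 2.0
```

Kinesthetic input can resolve this ambiguity because the hand specifies which of the infinitely many velocities sharing that normal component is the true one.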
The Verriest Lecture: Color lessons from space, time, and motion
Shevell, Steven K.
2012-01-01
The appearance of a chromatic stimulus depends on more than the wavelengths composing it. The scientific literature has countless examples showing that spatial and temporal features of light influence the colors we see. Studying chromatic stimuli that vary over space, time or direction of motion has a further benefit beyond predicting color appearance: the unveiling of otherwise concealed neural processes of color vision. Spatial or temporal stimulus variation uncovers multiple mechanisms of brightness and color perception at distinct levels of the visual pathway. Spatial variation in chromaticity and luminance can change perceived three-dimensional shape, an example of chromatic signals that affect a percept other than color. Chromatic objects in motion expose the surprisingly weak link between the chromaticity of objects and their physical direction of motion, and the role of color in inducing an illusory motion direction. Space, time and motion – color’s colleagues – reveal the richness of chromatic neural processing. PMID:22330398
Vaina, Lucia M.; Buonanno, Ferdinando; Rushton, Simon K.
2014-01-01
Background All contemporary models of perception of locomotor heading from optic flow (the characteristic patterns of retinal motion that result from self-movement) begin with relative motion. Therefore it would be expected that an impairment of relative motion perception should impair the ability to judge heading and performance on other 3D motion tasks. Material/Methods We report two patients with occipital lobe lesions whom we tested on a battery of motion tasks. Patients were impaired on all tests that involved relative motion in the plane (motion discontinuity, form from differences in motion direction or speed). Despite this, they retained the ability to judge their direction of heading relative to a target. A potential confound is that observers can derive information about heading from scale changes, bypassing the need to use optic flow. Therefore we ran further experiments in which we isolated optic flow and scale change. Results Patients' performance was in the normal range on both tests. The finding that the ability to perceive heading can be retained despite an impaired ability to judge relative motion questions the assumption that heading perception proceeds from initial processing of relative motion. Furthermore, on a collision detection task, SS and SR's performance was significantly better for simulated forward movement of the observer in the 3D scene than for the static observer. This suggests that in spite of severe deficits in relative motion perception in the frontoparallel (xy) plane, information from self-motion helped identify objects moving along an interception trajectory in 3D. Conclusions This result suggests the potential use of a flow-parsing strategy to detect the trajectory of moving objects in a 3D world when the observer is moving forward. These results have implications for developing rehabilitation strategies for deficits in visually guided navigation. PMID:25183375
Motion and Actions in Language: Semantic Representations in Occipito-Temporal Cortex
ERIC Educational Resources Information Center
Humphreys, Gina F.; Newling, Katherine; Jennings, Caroline; Gennari, Silvia P.
2013-01-01
Understanding verbs typically activates posterior temporal regions and, in some circumstances, motion perception area V5. However, the nature and role of this activation remains unclear: does language alone indeed activate V5? And are posterior temporal representations modality-specific motion representations, or supra-modal motion-independent…
Cullen, Kathleen E.
2014-01-01
The vestibular system is vital for maintaining an accurate representation of self-motion. As one moves (or is moved) toward a new place in the environment, signals from the vestibular sensors are relayed to higher-order centers. It is generally assumed the vestibular system provides a veridical representation of head motion to these centers for the perception of self-motion and spatial memory. In support of this idea, evidence from lesion studies suggests that vestibular inputs are required for the directional tuning of head direction cells in the limbic system as well as neurons in areas of multimodal association cortex. However, recent investigations in monkeys and mice challenge the notion that early vestibular pathways encode an absolute representation of head motion. Instead, processing at the first central stage is inherently multimodal. This minireview highlights recent progress that has been made towards understanding how the brain processes and interprets self-motion signals encoded by the vestibular otoliths and semicircular canals during everyday life. The following interrelated questions are considered. What information is available to the higher-order centers that contribute to self-motion perception? How do we distinguish between our own self-generated movements and those of the external world? And lastly, what are the implications of differences in the processing of these active vs. passive movements for spatial memory? PMID:24454282
NASA Technical Reports Server (NTRS)
Riccio, Gary E.; McDonald, P. Vernon
1998-01-01
The purpose of this report is to identify the essential characteristics of goal-directed whole-body motion. The report is organized into three major sections (Sections 2, 3, and 4). Section 2 reviews general themes from ecological psychology and control-systems engineering that are relevant to the perception and control of whole-body motion. These themes provide an organizational framework for analyzing the complex and interrelated phenomena that are the defining characteristics of whole-body motion. Section 3 of this report applies the organizational framework from the first section to the problem of perception and control of aircraft motion. This is a familiar problem in control-systems engineering and ecological psychology. Section 4 examines an essential but generally neglected aspect of vehicular control: coordination of postural control and vehicular control. To facilitate presentation of this new idea, postural control and its coordination with vehicular control are analyzed in terms of conceptual categories that are familiar in the analysis of vehicular control.
Motion facilitates face perception across changes in viewpoint and expression in older adults.
Maguinness, Corrina; Newell, Fiona N
2014-12-01
Faces are inherently dynamic stimuli. However, face perception in younger adults appears to be mediated by the ability to extract structural cues from static images, and the benefit of motion is inconsistent. In contrast, static face processing is poorer and more image-dependent in older adults. We therefore compared the role of facial motion in younger and older adults to assess whether motion can enhance perception when static cues are insufficient. In our studies, older and younger adults learned faces presented in motion or in a sequence of static images, containing rigid (viewpoint) or nonrigid (expression) changes. Immediately following learning, participants matched a static test image to the learned face, which varied by viewpoint (Experiment 1) or expression (Experiment 2) and was either learned or novel. First, we found an age effect with better face matching performance in younger than in older adults. However, we observed that face matching performance improved in the older adult group, across changes in viewpoint and expression, when faces were learned in motion relative to static presentation. There was no benefit for facial (nonrigid) motion when the task involved matching inverted faces (Experiment 3), suggesting that the ability to use dynamic face information for the purpose of recognition reflects motion encoding which is specific to upright faces. Our results suggest that ageing may offer a unique insight into how dynamic cues support face processing, which may not be readily observed in younger adults' performance. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
Engineering data compendium. Human perception and performance, volume 3
NASA Technical Reports Server (NTRS)
Boff, Kenneth R. (Editor); Lincoln, Janet E. (Editor)
1988-01-01
The concept underlying the Engineering Data Compendium was the product of a research and development program (Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability by system designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is Volume 3, containing sections on Human Language Processing, Operator Motion Control, Effects of Environmental Stressors, Display Interfaces, and Control Interfaces (Real/Virtual).
Motion coherence and direction discrimination in healthy aging.
Pilz, Karin S; Miller, Louisa; Agnew, Hannah C
2017-01-01
Perceptual functions change with age, particularly motion perception. With regard to healthy aging, previous studies mostly measured motion coherence thresholds for coarse motion direction discrimination along cardinal axes of motion. Here, we investigated age-related changes in the ability to discriminate between small angular differences in motion directions, which allows for a more specific assessment of age-related decline and its underlying mechanisms. We first assessed older (>60 years) and younger (<30 years) participants' ability to discriminate coarse horizontal (left/right) and vertical (up/down) motion at 100% coherence and a stimulus duration of 400 ms. In a second step, we determined participants' motion coherence thresholds for vertical and horizontal coarse motion direction discrimination. In a third step, we used the individually determined motion coherence thresholds and tested fine motion direction discrimination for motion clockwise away from horizontal and vertical motion. Older adults performed as well as younger adults for discriminating motion away from vertical. Surprisingly, performance for discriminating motion away from horizontal was strongly decreased. Further analyses, however, showed a relationship between motion coherence thresholds for horizontal coarse motion direction discrimination and fine motion direction discrimination performance in older adults. In a control experiment, using motion coherence above threshold for all conditions, the difference in performance for horizontal and vertical fine motion direction discrimination for older adults disappeared. These results clearly contradict the notion of an overall age-related decline in motion perception, and, most importantly, highlight the importance of taking into account individual differences when assessing age-related changes in perceptual functions.
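The abstract does not state how the motion coherence thresholds were estimated; a common choice in such psychophysics is an adaptive staircase. A minimal 2-down/1-up sketch with a simulated observer (all parameters hypothetical) converges near the ~70.7%-correct coherence level:

```python
import random

# Hypothetical sketch of threshold estimation (not the study's procedure):
# a 2-down/1-up staircase adjusts motion coherence trial by trial and
# converges to the coherence supporting ~70.7% correct responses.
def two_down_one_up(p_correct, start=0.8, step=0.05, trials=400, seed=1):
    rng = random.Random(seed)
    coherence, streak, track = start, 0, []
    for _ in range(trials):
        correct = rng.random() < p_correct(coherence)
        if correct:
            streak += 1
            if streak == 2:                       # two correct -> make it harder
                coherence = max(0.0, coherence - step)
                streak = 0
        else:                                     # one error -> make it easier
            coherence = min(1.0, coherence + step)
            streak = 0
        track.append(coherence)
    return sum(track[-100:]) / 100                # threshold: mean of late trials

# Simulated observer whose accuracy rises smoothly with coherence:
p = lambda c: 0.5 + 0.5 * min(1.0, c / 0.4)
print(round(two_down_one_up(p), 3))  # settles near the observer's threshold
```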
A selective impairment of perception of sound motion direction in peripheral space: A case study.
Thaler, Lore; Paciocco, Joseph; Daley, Mark; Lesniak, Gabriella D; Purcell, David W; Fraser, J Alexander; Dutton, Gordon N; Rossit, Stephanie; Goodale, Melvyn A; Culham, Jody C
2016-01-08
It is still an open question if the auditory system, similar to the visual system, processes auditory motion independently from other aspects of spatial hearing, such as static location. Here, we report psychophysical data from a patient (female, 42 and 44 years old at the time of two testing sessions), who suffered a bilateral occipital infarction over 12 years earlier, and who has extensive damage in the occipital lobe bilaterally, extending into inferior posterior temporal cortex bilaterally and into right parietal cortex. We measured the patient's spatial hearing ability to discriminate static location, detect motion and perceive motion direction in both central (straight ahead), and right and left peripheral auditory space (50° to the left and right of straight ahead). Compared to control subjects, the patient was impaired in her perception of direction of auditory motion in peripheral auditory space, and the deficit was more pronounced on the right side. However, there was no impairment in her perception of the direction of auditory motion in central space. Furthermore, detection of motion and discrimination of static location were normal in both central and peripheral space. The patient also performed normally in a wide battery of non-spatial audiological tests. Our data are consistent with previous neuropsychological and neuroimaging results that link posterior temporal cortex and parietal cortex with the processing of auditory motion. Most importantly, however, our data break new ground by suggesting a division of auditory motion processing in terms of speed and direction and in terms of central and peripheral space. Copyright © 2015 Elsevier Ltd. All rights reserved.
Does language guide event perception? Evidence from eye movements
Papafragou, Anna; Hulbert, Justin; Trueswell, John
2008-01-01
Languages differ in how they encode motion. When describing bounded motion, English speakers typically use verbs that convey information about manner (e.g., slide, skip, walk) rather than path (e.g., approach, ascend), whereas Greek speakers do the opposite. We investigated whether this strong cross-language difference influences how people allocate attention during motion perception. We compared eye movements from Greek and English speakers as they viewed motion events while (a) preparing verbal descriptions, or (b) memorizing the events. During the verbal description task, speakers’ eyes rapidly focused on the event components typically encoded in their native language, generating significant cross-language differences even during the first second of motion onset. However, when freely inspecting ongoing events, as in the memorization task, people allocated attention similarly regardless of the language they speak. Differences between language groups arose only after the motion stopped, such that participants spontaneously studied those aspects of the scene that their language does not routinely encode in verbs. These findings offer a novel perspective on the relation between language and perceptual/cognitive processes. They indicate that attention allocation during event perception is not affected by the perceiver’s native language; effects of language arise only when linguistic forms are recruited to achieve the task, such as when committing facts to memory. PMID:18395705
Real-time multiple human perception with color-depth cameras on a mobile robot.
Zhang, Hao; Reardon, Christopher; Parker, Lynne E
2013-10-01
The ability to perceive humans is an essential requirement for safe and efficient human-robot interaction. In real-world applications, the need for a robot to interact in real time with multiple humans in a dynamic, 3-D environment presents a significant challenge. The recent availability of commercial color-depth cameras allows for the creation of a system that makes use of the depth dimension, thus enabling a robot to observe its environment and perceive in the 3-D space. Here we present a system for 3-D multiple human perception in real time from a moving robot equipped with a color-depth camera and a consumer-grade computer. Our approach reduces computation time to achieve real-time performance through a unique combination of new ideas and established techniques. We remove the ground and ceiling planes from the 3-D point cloud input to separate candidate point clusters. We introduce the novel information concept, depth of interest, which we use to identify candidates for detection, and that avoids the computationally expensive scanning-window methods of other approaches. We utilize a cascade of detectors to distinguish humans from objects, in which we make intelligent reuse of intermediary features in successive detectors to improve computation. Because of the high computational cost of some methods, we represent our candidate tracking algorithm with a decision directed acyclic graph, which allows us to use the most computationally intense techniques only where necessary. We detail the successful implementation of our novel approach on a mobile robot and examine its performance in scenarios with real-world challenges, including occlusion, robot motion, nonupright humans, humans leaving and reentering the field of view (i.e., the reidentification challenge), and human-object and human-human interaction.
We conclude with the observation that, by incorporating depth information and using modern techniques in new ways, we are able to create an accurate system for real-time 3-D perception of humans by a mobile robot.
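The ground/ceiling-plane removal step can be sketched with RANSAC plane fitting (an assumption for illustration; the paper's exact routine is not given here): fit the dominant plane in the cloud, then drop points lying close to it.

```python
import numpy as np

# Illustrative sketch only: remove a dominant plane (e.g. the ground) from a
# point cloud via RANSAC, keeping non-plane points as candidate clusters.
def ransac_plane(points, iters=200, thresh=0.05, seed=0):
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate sample of collinear points
        inliers = np.abs((points - p0) @ (normal / norm)) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return best  # boolean mask of plane inliers

rng = np.random.default_rng(1)
# Synthetic scene: a flat ground plane (z ~ 0) plus an upright person-like cluster.
ground = np.column_stack([rng.uniform(0, 4, 500), rng.uniform(0, 4, 500),
                          rng.normal(0, 0.01, 500)])
person = np.column_stack([rng.normal(2, 0.2, 200), rng.normal(2, 0.2, 200),
                          rng.uniform(0.2, 1.9, 200)])
cloud = np.vstack([ground, person])
remaining = cloud[~ransac_plane(cloud)]
print(len(remaining))  # roughly the 200 person points survive plane removal
```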
Wöllner, Clemens; Hammerschmidt, David; Albrecht, Henning
2018-01-01
Slow motion scenes are ubiquitous in screen-based audiovisual media and are typically accompanied by emotional music. The strong effects of slow motion on observers are hypothetically related to heightened emotional states in which time seems to pass more slowly. These states are simulated in films and video clips, and seem to resemble such experiences in daily life. The current study investigated time perception and emotional response to media clips containing decelerated human motion, with or without music, using psychometric and psychophysiological methods. Participants were presented with slow-motion scenes taken from commercial films, ballet and sports footage, as well as the same scenes converted to real-time. Results reveal that slow-motion scenes, compared to adapted real-time scenes, led to systematic underestimations of duration, lower perceived arousal but higher valence, lower respiration rates and smaller pupillary diameters. The presence of music compared to visual-only presentations strongly affected results in terms of higher accuracy in duration estimates, higher perceived arousal and valence, higher physiological activation and larger pupillary diameters, indicating higher arousal. Video genre additionally affected responses. These findings suggest that perceiving slow motion is not related to states of high arousal, but rather affects cognitive dimensions of perceived time and valence. Music influences these experiences profoundly, thus strengthening the impact of stretched time in audiovisual media.
Algorithm-Based Motion Magnification for Video Processing in Urological Laparoscopy.
Adams, Fabian; Schoelly, Reto; Schlager, Daniel; Schoenthaler, Martin; Schoeb, Dominik S; Wilhelm, Konrad; Hein, Simon; Wetterauer, Ulrich; Miernik, Arkadiusz
2017-06-01
Minimally invasive surgery is in constant further development and has replaced many conventional operative procedures. If vascular structure movement could be detected during these procedures, it could reduce the risk of vascular injury and conversion to open surgery. The recently proposed motion-amplifying algorithm, Eulerian Video Magnification (EVM), has been shown to substantially enhance minimal object changes in digitally recorded video that is barely perceptible to the human eye. We adapted and examined this technology for use in urological laparoscopy. Video sequences of routine urological laparoscopic interventions were recorded and further processed using spatial decomposition and filtering algorithms. The freely available EVM algorithm was investigated for its usability in real-time processing. In addition, a new image processing technology, the CRS iimotion Motion Magnification (CRSMM) algorithm, was specifically adjusted for endoscopic requirements, applied, and validated by our working group. Using EVM, no significant motion enhancement could be detected without severe impairment of the image resolution, motion, and color presentation. The CRSMM algorithm significantly improved image quality in terms of motion enhancement. In particular, the pulsation of vascular structures could be displayed more accurately than in EVM. Motion magnification image processing technology has the potential for clinical importance as a video optimizing modality in endoscopic and laparoscopic surgery. Barely detectable (micro)movements can be visualized using this noninvasive marker-free method. Despite these optimistic results, the technology requires considerable further technical development and clinical tests.
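The Eulerian idea behind EVM can be sketched per pixel (a minimal illustration, not the CRSMM algorithm, whose internals are not described here): band-pass each pixel's intensity time series around the expected pulsation frequency, amplify that band, and add it back to the original signal.

```python
import numpy as np

# Minimal Eulerian magnification sketch for a single pixel's time series:
# isolate a temporal frequency band via FFT, scale it by alpha, add it back.
def magnify(signal, fps, f_lo, f_hi, alpha):
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= f_lo) & (freqs <= f_hi)      # pass band around pulsation
    passed = np.where(band, spectrum, 0)
    return signal + alpha * np.fft.irfft(passed, n=len(signal))

fps = 30.0
t = np.arange(300) / fps
# Pixel intensity: static background + barely visible 1.2 Hz vascular pulsation.
pixel = 100.0 + 0.2 * np.sin(2 * np.pi * 1.2 * t)
out = magnify(pixel, fps, f_lo=0.8, f_hi=2.0, alpha=20.0)
print(round(float(np.ptp(pixel)), 1))  # 0.4  (original swing, hard to see)
print(round(float(np.ptp(out)), 1))    # 8.4  (amplified swing, clearly visible)
```

In a real pipeline this temporal filtering is applied to a spatial (e.g. Laplacian pyramid) decomposition of each frame rather than to raw pixels.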
Can biological motion research provide insight on how to reduce friendly fire incidents?
Steel, Kylie A; Baxter, David; Dogramaci, Sera; Cobley, Stephen; Ellem, Eathan
2016-10-01
The ability to accurately detect, perceive, and recognize biological motion can be associated with a fundamental drive for survival, and it is of significant interest to perception researchers. This field examines various perceptual features of motion and has been assessed and applied in several real-world contexts (e.g., biometrics, sport). Unexplored applications still exist, however, including the military issue of friendly fire. There are many causes and processes leading to friendly fire, as well as specific challenges associated with visual information extraction during engagement, such as brief glimpses, low acuity, camouflage, and uniform deception. Furthermore, visual information must often be processed under highly stressful (potentially threatening), time-constrained conditions that present a significant problem for soldiers. Biological motion research and anecdotal evidence from experienced combatants suggest that intentions, emotions, and identities can be identified and discriminated from human motion, even when the visual display is degraded or limited. Furthermore, research suggests that the perceptual capability to discriminate movement under visually constrained conditions is trainable. Therefore, given the limited military research linked to biological motion and friendly fire, an opportunity for cross-disciplinary investigation exists. The focus of this paper is twofold: first, to provide evidence for a possible link between biological motion factors and friendly fire, and second, to propose conceptual and methodological considerations and recommendations for perceptual-cognitive training within current military programs.
The MPI Emotional Body Expressions Database for Narrative Scenarios
Volkova, Ekaterina; de la Rosa, Stephan; Bülthoff, Heinrich H.; Mohler, Betty
2014-01-01
Emotion expression in human-human interaction takes place via various types of information, including body motion. Research on the perceptual-cognitive mechanisms underlying the processing of natural emotional body language can benefit greatly from datasets of natural emotional body expressions that facilitate stimulus manipulation and analysis. Existing databases have so far focused on a few emotion categories that display predominantly prototypical, exaggerated emotion expressions. Moreover, many of these databases consist of video recordings, which limit the ability to manipulate and analyse the physical properties of these stimuli. We present a new database consisting of a large set (over 1400) of natural emotional body expressions typical of monologues. To achieve close-to-natural emotional body expressions, amateur actors narrated coherent stories while their body movements were recorded with motion capture technology. The resulting 3-dimensional motion data, recorded at a high frame rate (120 frames per second), provide fine-grained information about body movements and allow the manipulation of movement on a per-joint basis. For each expression, the database gives the positions and orientations in space of 23 body joints for every frame. We report the results of an analysis of physical motion properties and of an emotion categorisation study. The reactions of observers from the emotion categorisation study are included in the database. Moreover, we recorded the intended emotion expression for each motion sequence from the actor to allow for investigations of the link between intended and perceived emotions. The motion sequences, along with the accompanying information, are made available in the searchable MPI Emotional Body Expression Database. We hope that this database will enable researchers to study the expression and perception of naturally occurring emotional body expressions in greater depth. PMID:25461382
A human mirror neuron system for language: Perspectives from signed languages of the deaf.
Knapp, Heather Patterson; Corina, David P
2010-01-01
Language is proposed to have developed atop the human analog of the macaque mirror neuron system for action perception and production [Arbib M.A. 2005. From monkey-like action recognition to human language: An evolutionary framework for neurolinguistics (with commentaries and author's response). Behavioral and Brain Sciences, 28, 105-167; Arbib M.A. (2008). From grasp to language: Embodied concepts and the challenge of abstraction. Journal de Physiologie Paris 102, 4-20]. Signed languages of the deaf are fully-expressive, natural human languages that are perceived visually and produced manually. We suggest that if a unitary mirror neuron system mediates the observation and production of both language and non-linguistic action, three predictions can be made: (1) damage to the human mirror neuron system should non-selectively disrupt both sign language and non-linguistic action processing; (2) within the domain of sign language, a given mirror neuron locus should mediate both perception and production; and (3) the action-based tuning curves of individual mirror neurons should support the highly circumscribed set of motions that form the "vocabulary of action" for signed languages. In this review we evaluate data from the sign language and mirror neuron literatures and find that these predictions are only partially upheld.
Role of orientation reference selection in motion sickness
NASA Technical Reports Server (NTRS)
Peterka, Robert J.; Black, F. Owen
1990-01-01
Three areas related to human orientation control are investigated: (1) reflexes associated with the control of eye movements and posture; (2) the perception of body rotation and position with respect to gravity; and (3) the strategies used to resolve sensory conflict situations which arise when different sensory systems provide orientation cues which are not consistent with one another or with previous experience. Of particular interest is the possibility that a subject may be able to ignore an inaccurate sensory modality in favor of one or more other sensory modalities which do provide accurate orientation reference information. This process is referred to as sensory selection. This proposal will attempt to quantify subjects' sensory selection abilities and determine whether this ability confers some immunity to the development of motion sickness symptoms.
Seeing the world topsy-turvy: The primary role of kinematics in biological motion inversion effects
Fitzgerald, Sue-Anne; Brooks, Anna; van der Zwan, Rick; Blair, Duncan
2014-01-01
Physical inversion of whole or partial human body representations typically has catastrophic consequences on the observer's ability to perform visual processing tasks. Explanations usually focus on the effects of inversion on the visual system's ability to exploit configural or structural relationships, but more recently have also implicated motion or kinematic cue processing. Here, we systematically tested the role of both on perceptions of sex from upright and inverted point-light walkers. Our data suggest that inversion results in systematic degradations of the processing of kinematic cues. Specifically and intriguingly, they reveal sex-based kinematic differences: Kinematics characteristic of females generally are resistant to inversion effects, while those of males drive systematic sex misperceptions. Implications of the findings are discussed. PMID:25469217
Froese, Tom; Leavens, David A.
2014-01-01
We argue that imitation is a learning response to unintelligible actions, especially to social conventions. Various strands of evidence are converging on this conclusion, but further progress has been hampered by an outdated theory of perceptual experience. Comparative psychology continues to be premised on the doctrine that humans and non-human primates only perceive others’ physical “surface behavior,” while mental states are perceptually inaccessible. However, a growing consensus in social cognition research accepts the direct perception hypothesis: primarily we see what others aim to do; we do not infer it from their motions. Indeed, physical details are overlooked – unless the action is unintelligible. On this basis we hypothesize that apes’ propensity to copy the goal of an action, rather than its precise means, is largely dependent on its perceived intelligibility. Conversely, children copy means more often than adults and apes because, uniquely, much adult human behavior is completely unintelligible to unenculturated observers due to the pervasiveness of arbitrary social conventions, as exemplified by customs, rituals, and languages. We expect the propensity to imitate to be inversely correlated with the familiarity of cultural practices, as indexed by age and/or socio-cultural competence. The direct perception hypothesis thereby helps to parsimoniously explain the most important findings of imitation research, including children’s over-imitation and other species-typical and age-related variations. PMID:24600413
Computational validation of the motor contribution to speech perception.
Badino, Leonardo; D'Ausilio, Alessandro; Fadiga, Luciano; Metta, Giorgio
2014-07-01
Action perception and recognition are core abilities fundamental for human social interaction. A parieto-frontal network (the mirror neuron system) matches visually presented biological motion information onto observers' motor representations. This process of matching the actions of others onto our own sensorimotor repertoire is thought to be important for action recognition, providing a non-mediated "motor perception" based on a bidirectional flow of information along the mirror parieto-frontal circuits. State-of-the-art machine learning strategies for hand action identification have shown better performance when sensorimotor data, as opposed to visual information only, are available during learning. As speech is a particular type of action (with acoustic targets), it is expected to activate a mirror neuron mechanism. Indeed, in speech perception, motor centers have been shown to be causally involved in the discrimination of speech sounds. In this paper, we review recent neurophysiological and machine learning-based studies showing (a) the specific contribution of the motor system to speech perception and (b) that automatic phone recognition is significantly improved when motor data are used during training of classifiers (as opposed to learning from purely auditory data).
NASA Technical Reports Server (NTRS)
Reschke, Millard F.; Parker, Donald E.
1987-01-01
Seven astronauts reported translational self-motion during roll simulation 1-3 h after landing following 5-7 d of orbital flight. Two reported strong translational self-motion perception when they performed pitch head motions during entry and while the orbiter was stationary on the runway. One of two astronauts from whom adequate data were collected exhibited a 132-deg shift in the phase angle between roll stimulation and horizontal eye position 2 h after landing. Neither of two from whom adequate data were collected exhibited increased horizontal eye movement amplitude or disturbance of voluntary pitch or roll body motion immediately postflight. These results are generally consistent with an otolith tilt-translation reinterpretation model and are being applied to the development of apparatus and procedures intended to preadapt astronauts to the sensory rearrangement of weightlessness.
Auditorily-induced illusory self-motion: a review.
Väljamäe, Aleksander
2009-10-01
The aim of this paper is to provide a first review of studies related to auditorily-induced self-motion (vection). These studies have been scarce and scattered over the years and over several research communities, including clinical audiology, multisensory perception of self-motion and its neural correlates, ergonomics, and virtual reality. The reviewed studies provide evidence that auditorily-induced vection has behavioral, physiological and neural correlates. Although the sound contribution to self-motion perception appears to be weaker than that of the visual modality, specific acoustic cues appear to be instrumental for a number of domains including posture prosthesis, navigation in unusual gravitoinertial environments (in the air, in space, or underwater), non-visual navigation, and multisensory integration during self-motion. A number of open research questions are highlighted, opening avenues for more active and systematic studies in this area.
Future of Mechatronics and Human
NASA Astrophysics Data System (ADS)
Harashima, Fumio; Suzuki, Satoshi
This paper discusses the circumstances of the mechatronics that sustain human society and introduces the HAM (Human Adaptive Mechatronics) project as one of the research projects aimed at creating new human-machine systems. The key concept of HAM is skill; the main research concerns are the analysis of skill and the establishment of assistance methods that enhance the total performance of the human-machine system. Because the study of skill is, in effect, an elucidation of humans themselves, analyses of higher human functions are significant. In this paper, after surveying research on human brain functions, an experimental analysis of human characteristics in machine operation is presented as one example of our research activities. We used a hovercraft simulator as a verification system involving observation, voluntary motion control, and machine operation, all of which are required for general machine operation. The process of, and factors in, becoming skilled were investigated by identifying human control characteristics while measuring the operator's line of sight. It was confirmed that early switching of sub-controllers and reference signals in the human, together with enhanced space perception, is significant.
Suppressive mechanisms in visual motion processing: from perception to intelligence
Tadin, Duje
2015-01-01
Perception operates on an immense amount of incoming information that greatly exceeds the brain's processing capacity. Because of this fundamental limitation, the ability to suppress irrelevant information is a key determinant of perceptual efficiency. Here, I will review a series of studies investigating suppressive mechanisms in visual motion processing, namely perceptual suppression of large, background-like motions. These spatial suppression mechanisms are adaptive, operating only when sensory inputs are sufficiently robust to guarantee visibility. Converging correlational and causal evidence links these behavioral results with inhibitory center-surround mechanisms, namely those in cortical area MT. Spatial suppression is abnormally weak in several special populations, including the elderly and those with schizophrenia—a deficit that is evidenced by better-than-normal direction discriminations of large moving stimuli. Theoretical work shows that this abnormal weakening of spatial suppression should result in motion segregation deficits, but direct behavioral support of this hypothesis is lacking. Finally, I will argue that the ability to suppress information is a fundamental neural process that applies not only to perception but also to cognition in general. Supporting this argument, I will discuss recent research that shows individual differences in spatial suppression of motion signals strongly predict individual variations in IQ scores. PMID:26299386
Self-Motion Perception: Assessment by Real-Time Computer Generated Animations
NASA Technical Reports Server (NTRS)
Parker, Donald E.
1999-01-01
Our overall goal is to develop materials and procedures for assessing vestibular contributions to spatial cognition. The specific objective of the research described in this paper is to evaluate computer-generated animations as potential tools for studying self-orientation and self-motion perception. Specific questions addressed in this study included the following. First, does a non-verbal perceptual reporting procedure using real-time animations improve assessment of spatial orientation? Are reports reliable? Second, do reports confirm expectations based on stimuli to the vestibular apparatus? Third, can reliable reports be obtained when self-motion description vocabulary training is omitted?
Modeling depth from motion parallax with the motion/pursuit ratio
Nawrot, Mark; Ratzlaff, Michael; Leonard, Zachary; Stroyan, Keith
2014-01-01
The perception of unambiguous scaled depth from motion parallax relies on both retinal image motion and an extra-retinal pursuit eye movement signal. The motion/pursuit ratio represents a dynamic geometric model linking these two proximal cues to the ratio of depth to viewing distance. An important step in understanding the visual mechanisms serving the perception of depth from motion parallax is to determine the relationship between these stimulus parameters and empirically determined perceived depth magnitude. Observers compared perceived depth magnitude of dynamic motion parallax stimuli to static binocular disparity comparison stimuli at three different viewing distances, in both head-moving and head-stationary conditions. A stereo-viewing system provided ocular separation for stereo stimuli and monocular viewing of parallax stimuli. For each motion parallax stimulus, a point of subjective equality (PSE) was estimated for the amount of binocular disparity that generates the equivalent magnitude of perceived depth from motion parallax. Similar to previous results, perceived depth from motion parallax had significant foreshortening. Head-moving conditions produced even greater foreshortening due to the differences in the compensatory eye movement signal. An empirical version of the motion/pursuit law, termed the empirical motion/pursuit ratio, which models perceived depth magnitude from these stimulus parameters, is proposed. PMID:25339926
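The motion/pursuit ratio described in this abstract links two proximal cues, retinal image motion and the pursuit eye movement signal, to the ratio of depth to viewing distance. A minimal sketch of the small-angle form of that relationship, d/f ≈ dθ/dα, is given below; the function name and numeric values are illustrative, not taken from the paper.

```python
def depth_from_motion_pursuit(retinal_motion, pursuit, viewing_distance):
    """Estimate depth relative to fixation from the motion/pursuit ratio.

    Small-angle form of the motion/pursuit law: d/f ~ dtheta/dalpha,
    where dtheta is the retinal image motion of a point and dalpha is
    the pursuit eye movement rate (both in deg/s); f is the viewing
    distance, so the result is in the same units as f.
    """
    return viewing_distance * (retinal_motion / pursuit)

# A point whose image slips at 0.5 deg/s during a 5 deg/s pursuit lies
# at about 10% of the viewing distance from the fixation plane.
d = depth_from_motion_pursuit(0.5, 5.0, viewing_distance=1.0)
```

The empirical version proposed in the paper additionally models the foreshortening of perceived depth, which this geometric sketch does not capture.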
Congruity Effects in Time and Space: Behavioral and ERP Measures
ERIC Educational Resources Information Center
Teuscher, Ursina; McQuire, Marguerite; Collins, Jennifer; Coulson, Seana
2008-01-01
Two experiments investigated whether motion metaphors for time affected the perception of spatial motion. Participants read sentences either about literal motion through space or metaphorical motion through time written from either the ego-moving or object-moving perspective. Each sentence was followed by a cartoon clip. Smiley-moving clips showed…
Alterations to global but not local motion processing in long-term ecstasy (MDMA) users.
White, Claire; Brown, John; Edwards, Mark
2014-07-01
Growing evidence indicates that the main psychoactive ingredient in the illegal drug "ecstasy" (methylendioxymethamphetamine) causes reduced activity in the serotonin and gamma-aminobutyric acid (GABA) systems in humans. On the basis of substantial serotonin input to the occipital lobe, recent research investigated visual processing in long-term users and found a larger magnitude of the tilt aftereffect, interpreted to reflect broadened orientation tuning bandwidths. Further research found higher orientation discrimination thresholds and reduced long-range interactions in the primary visual area of ecstasy users. The aim of the present research was to investigate whether serotonin-mediated V1 visual processing deficits in ecstasy users extend to motion processing mechanisms. Forty-five participants (21 controls, 24 drug users) completed two psychophysical studies: A direction discrimination study directly measured local motion processing in V1, while a motion coherence task tested global motion processing in area V5/MT. "Primary" ecstasy users (n = 18), those without substantial polydrug use, had significantly lower global motion thresholds than controls [p = 0.027, Cohen's d = 0.78 (large)], indicating increased sensitivity to global motion stimuli, but no difference in local motion processing (p = 0.365). These results extend previous research investigating the long-term effects of illicit drugs on visual processing. Two possible explanations are explored: diffuse attentional processes may be facilitating spatial pooling of motion signals in users. Alternatively, it may be that a GABA-mediated disruption to V5/MT processing is reducing spatial suppression and therefore improving global motion perception in ecstasy users.
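The motion coherence task mentioned above is conventionally built on random-dot kinematograms: on each frame a given fraction of "signal" dots step in a common direction while the remainder step randomly, and the coherence threshold is the smallest signal fraction supporting reliable direction judgments. A minimal, hypothetical stimulus sketch (not the authors' code) for assigning per-dot directions on one frame:

```python
import math
import random

def rdk_directions(n_dots, coherence, signal_dir=0.0, rng=None):
    """Assign per-dot motion directions (radians) for one frame of a
    random-dot kinematogram: a `coherence` fraction of dots move in
    `signal_dir`, the rest in uniformly random directions."""
    rng = rng or random.Random(0)
    n_signal = round(n_dots * coherence)
    dirs = [signal_dir] * n_signal
    dirs += [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_dots - n_signal)]
    rng.shuffle(dirs)  # signal dots should not be identifiable by index
    return dirs

# 30% coherence: 30 of 100 dots share the rightward signal direction
dirs = rdk_directions(100, 0.3)
```

Lower coherence thresholds, as reported for the primary ecstasy users, mean observers could extract the global direction from a smaller signal fraction.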
Birznieks, I.; Vickery, R. M.; Holcombe, A. O.; Seizova-Cajic, T.
2016-01-01
Neurophysiological studies in primates have found that direction-sensitive neurons in the primary somatosensory cortex (SI) generally increase their response rate with increasing speed of object motion across the skin and show little evidence of speed tuning. We employed psychophysics to determine whether human perception of motion direction could be explained by features of such neurons and whether evidence can be found for a speed-tuned process. After adaptation to motion across the skin, a subsequently presented dynamic test stimulus yields an impression of motion in the opposite direction. We measured the strength of this tactile motion aftereffect (tMAE) induced with different combinations of adapting and test speeds. Distal-to-proximal or proximal-to-distal adapting motion was applied to participants' index fingers using a tactile array, after which participants reported the perceived direction of a bidirectional test stimulus. An intensive code for speed, like that observed in SI neurons, predicts greater adaptation (and a stronger tMAE) the faster the adapting speed, regardless of the test speed. In contrast, speed tuning of direction-sensitive neurons predicts the greatest tMAE when the adapting and test stimuli have matching speeds. We found that the strength of the tMAE increased monotonically with adapting speed, regardless of the test speed, showing no evidence of speed tuning. Our data are consistent with neurophysiological findings that suggest an intensive code for speed along the motion processing pathways comprising neurons sensitive both to speed and direction of motion. PMID:26823511
Back from the future: Volitional postdiction of perceived apparent motion direction.
Sun, Liwei; Frank, Sebastian M; Hartstein, Kevin C; Hassan, Wassim; Tse, Peter U
2017-11-01
Among physical events, it is impossible that an event could alter its own past, for the simple reason that past events precede future events, and not vice versa. Moreover, to do so would invoke impossible self-causation. However, mental events are constructed by physical neuronal processes that take a finite duration to execute. Given this fact, it is conceivable that later brain events could alter the ongoing interpretation of previous brain events if they arrive within this finite duration of interpretive processing, before a commitment is made to what happened. In the current study, we show that humans can volitionally influence how they perceive an ambiguous apparent motion sequence, as long as the top-down command occurs up to 300 ms after the occurrence of the actual motion event in the world. This finding supports the view that there is a temporal integration period over which perception is constructed on the basis of both bottom-up and top-down inputs.
NASA Astrophysics Data System (ADS)
Myszkowski, Karol; Tawara, Takehiro; Seidel, Hans-Peter
2002-06-01
In this paper, we consider applications of perception-based video quality metrics to improve the performance of global lighting computations for dynamic environments. For this purpose we extend the Visible Difference Predictor (VDP) developed by Daly to handle computer animations. We incorporate into the VDP the spatio-velocity CSF model developed by Kelly. The CSF model requires data on the velocity of moving patterns across the image plane. We use the 3D image warping technique to compensate for the camera motion, and we conservatively assume that the motion of animated objects (usually strong attractors of the visual attention) is fully compensated by the smooth pursuit eye motion. Our global illumination solution is based on stochastic photon tracing and takes advantage of temporal coherence of lighting distribution, by processing photons both in the spatial and temporal domains. The VDP is used to keep noise inherent in stochastic methods below the sensitivity level of the human observer. As a result a perceptually-consistent quality across all animation frames is obtained.
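The spatio-velocity CSF incorporated into the VDP weights visible differences by how sensitive the eye is to a given spatial frequency at a given retinal velocity. A sketch of the Kelly-style model as fitted by Daly follows; the parameter values are the approximate ones from that fit, reproduced from memory, and should be treated as assumptions rather than authoritative.

```python
import math

def csf_sv(rho, v):
    """Kelly-style spatio-velocity contrast sensitivity (Daly's fit):
    sensitivity at spatial frequency `rho` (cycles/deg) for a pattern
    moving at retinal velocity `v` (deg/s, v > 0). Parameter values
    are approximate and assumed here."""
    s1, s2, p1 = 6.1, 7.3, 45.9
    c0, c1, c2 = 1.14, 0.67, 1.7
    k = s1 + s2 * abs(math.log10(c2 * v / 3.0)) ** 3
    rho_max = p1 / (c2 * v + 2.0)  # peak frequency shifts down with speed
    return (k * c0 * c2 * v * (c1 * 2.0 * math.pi * rho) ** 2
            * math.exp(-c1 * 4.0 * math.pi * rho / rho_max))

# sensitivity peaks at mid spatial frequencies and collapses at high
# frequencies for a given retinal velocity
s_mid = csf_sv(4.0, 2.0)
s_high = csf_sv(30.0, 2.0)
```

In the animation setting described above, `v` is the residual retinal velocity after compensating for camera motion and assumed smooth pursuit, which is why fast uncompensated motion raises the tolerable noise level.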
The specificity of cortical region KO to depth structure.
Tyler, Christopher W; Likova, Lora T; Kontsevich, Leonid L; Wade, Alex R
2006-03-01
Functional MRI studies have identified a cortical region designated as KO between retinotopic areas V3A/B and motion area V5 in human cortex as particularly responsive to motion-defined or kinetic borders. To determine the response of the KO region to more general aspects of structure, we used stereoscopic depth borders and disparate planes with no borders, together with three stimulus types that evoked no depth percept: luminance borders, line contours and illusory phase borders. Responses to these stimuli in the KO region were compared with the responses in retinotopically defined areas that have been variously associated with disparity processing in neurophysiological and fMRI studies. The strongest responses in the KO region were to stimuli evoking perceived depth structure from either disparity or motion cues, but it showed negligible responses either to luminance-based contour stimuli or to edgeless disparity stimuli. We conclude that the region designated as KO is best regarded as a primary center for the generic representation of depth structure rather than any kind of contour specificity.
Multiresolution motion planning for autonomous agents via wavelet-based cell decompositions.
Cowlagi, Raghvendra V; Tsiotras, Panagiotis
2012-10-01
We present a path- and motion-planning scheme that is "multiresolution" both in the sense of representing the environment with high accuracy only locally and in the sense of addressing the vehicle kinematic and dynamic constraints only locally. The proposed scheme uses rectangular multiresolution cell decompositions, efficiently generated using the wavelet transform. The wavelet transform is widely used in signal and image processing, with emerging applications in autonomous sensing and perception systems. The proposed motion planner enables the simultaneous use of the wavelet transform in both the perception and in the motion-planning layers of vehicle autonomy, thus potentially reducing online computations. We rigorously prove the completeness of the proposed path-planning scheme, and we provide numerical simulation results to illustrate its efficacy.
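The wavelet transform supports the multiresolution cell decomposition described above because its detail coefficients localize where the environment map changes: regions with near-zero detail can be merged into large coarse cells, while high-detail regions are kept fine. A minimal, illustrative sketch (not the authors' planner) of one level of the 2D Haar transform on an occupancy grid:

```python
def haar2_level(grid):
    """One level of the 2D Haar transform on a 2^n x 2^n occupancy grid.
    Returns (approx, detail): the half-resolution average image and, per
    coarse cell, the energy of the three detail coefficients. Cells with
    zero detail energy are uniform and can be represented coarsely;
    high-energy cells (obstacle boundaries) need finer resolution."""
    n = len(grid) // 2
    approx = [[0.0] * n for _ in range(n)]
    detail = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            a = grid[2 * i][2 * j]; b = grid[2 * i][2 * j + 1]
            c = grid[2 * i + 1][2 * j]; d = grid[2 * i + 1][2 * j + 1]
            approx[i][j] = (a + b + c + d) / 4.0
            h = (a - b + c - d) / 4.0    # horizontal detail
            v = (a + b - c - d) / 4.0    # vertical detail
            dd = (a - b - c + d) / 4.0   # diagonal detail
            detail[i][j] = h * h + v * v + dd * dd
    return approx, detail

# uniform free space and uniform obstacle interiors collapse to single
# coarse cells; the diagonal obstacle edge leaves detail energy
grid = [[0, 0, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 1],
        [0, 1, 1, 1]]
approx, detail = haar2_level(grid)
```

Applying the same step recursively to `approx` yields the full multiresolution hierarchy; the planner can then refine only the cells flagged by detail energy, keeping the graph small away from obstacles and the vehicle's vicinity.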
The role of temporo-parietal junction (TPJ) in global Gestalt perception.
Huberle, Elisabeth; Karnath, Hans-Otto
2012-07-01
Grouping processes enable the coherent perception of our environment. A number of brain areas have been suggested to be involved in the integration of elements into objects, including early and higher visual areas along the ventral visual pathway as well as motion-processing areas of the dorsal visual pathway. However, integration not only is required for the cortical representation of individual objects, but is also essential for the perception of more complex visual scenes consisting of several different objects and/or shapes. The present fMRI experiments aimed to address such integration processes. We investigated the neural correlates underlying the global Gestalt perception of hierarchically organized stimuli that allowed parametric degrading of the object at the global level. The comparison of intact versus disturbed perception of the global Gestalt revealed a network of cortical areas including the temporo-parietal junction (TPJ), anterior cingulate cortex and the precuneus. The TPJ location corresponds well with the areas known to be typically lesioned in stroke patients with simultanagnosia following bilateral brain damage. These patients typically show a deficit in identifying the global Gestalt of a visual scene. Further, we found the closest relation between behavioral performance and fMRI activation for the TPJ. Our data thus argue for a significant role of the TPJ in human global Gestalt perception.
Audio aided electro-tactile perception training for finger posture biofeedback.
Vargas, Jose Gonzalez; Yu, Wenwei
2008-01-01
Visual information is one of the prerequisites for most biofeedback studies. The aim of this study is to explore how audio-aided training helps in the learning of dynamic electro-tactile perception without any visual feedback. In this research, the electrical stimulation patterns associated with the experimenter's finger postures and motions were presented to the subjects. Along with the electrical stimulation patterns, two different types of information on finger postures and motions, verbal and audio, were presented to the verbal training subject group (group 1) and the audio training subject group (group 2), respectively. The results showed an improvement in the ability to distinguish and memorize electrical stimulation patterns corresponding to finger postures and motions without visual feedback; with the aid of audio tones, learning was faster and perception became more precise after training. Thus, this study clarified that, as a substitute for visual presentation, auditory information can effectively support the formation of electro-tactile perception. Further research is needed to clarify the difference between visually guided and audio-aided training in terms of information compilation, post-training effect and robustness of the perception.
Male dance moves that catch a woman's eye
Neave, Nick; McCarty, Kristofor; Freynik, Jeanette; Caplan, Nicholas; Hönekopp, Johannes; Fink, Bernhard
2011-01-01
Male movements serve as courtship signals in many animal species, and may honestly reflect the genotypic and/or phenotypic quality of the individual. Attractive human dance moves, particularly those of males, have been reported to show associations with measures of physical strength, prenatal androgenization and symmetry. Here we use advanced three-dimensional motion-capture technology to identify possible biomechanical differences between women's perceptions of ‘good’ and ‘bad’ male dancers. Nineteen males were recorded using the ‘Vicon’ motion-capture system while dancing to a basic rhythm; controlled stimuli in the form of avatars were then created as 15 s video clips and rated by 39 females for dance quality. Initial analyses showed that 11 movement variables were significantly positively correlated with perceived dance quality. Linear regression subsequently revealed that three movement measures were key predictors of dance quality; these were variability and amplitude of movements of the neck and trunk, and speed of movements of the right knee. In summary, we have identified specific movements within men's dance that influence women's perceptions of dancing ability. We suggest that such movements may form honest signals of male quality in terms of health, vigour or strength, though this remains to be confirmed. PMID:20826469
Local and global aspects of biological motion perception in children born at very low birth weight
Williamson, K. E.; Jakobson, L. S.; Saunders, D. R.; Troje, N. F.
2015-01-01
Biological motion perception can be assessed using a variety of tasks. In the present study, 8- to 11-year-old children born prematurely at very low birth weight (<1500 g) and matched, full-term controls completed tasks that required the extraction of local motion cues, the ability to perceptually group these cues to extract information about body structure, and the ability to carry out higher order processes required for action recognition and person identification. Preterm children exhibited difficulties in all 4 aspects of biological motion perception. However, intercorrelations between test scores were weak in both full-term and preterm children—a finding that supports the view that these processes are relatively independent. Preterm children also displayed more autistic-like traits than full-term peers. In preterm (but not full-term) children, these traits were negatively correlated with performance in the task requiring structure-from-motion processing (r(30) = −.36, p < .05), but positively correlated with the ability to extract identity (r(30) = .45, p < .05). These findings extend previous reports of vulnerability in systems involved in processing dynamic cues in preterm children and suggest that a core deficit in social perception/cognition may contribute to the development of social and behavioral difficulties even in members of this population who are functioning within the normal range intellectually. The results could inform the development of screening, diagnostic, and intervention tools. PMID:25103588
Hip proprioceptive feedback influences the control of mediolateral stability during human walking
Roden-Reynolds, Devin C.; Walker, Megan H.; Wasserman, Camille R.
2015-01-01
Active control of the mediolateral location of the feet is an important component of a stable bipedal walking pattern, although the roles of sensory feedback in this process are unclear. In the present experiments, we tested whether hip abductor proprioception influenced the control of mediolateral gait motion. Participants performed a series of quiet standing and treadmill walking trials. In some trials, 80-Hz vibration was applied intermittently over the right gluteus medius (GM) to evoke artificial proprioceptive feedback. During walking, the GM was vibrated during either right leg stance (to elicit a perception that the pelvis was closer mediolaterally to the stance foot) or swing (to elicit a perception that the swing leg was more adducted). Vibration during quiet standing evoked leftward sway in most participants (13 of 16), as expected from its predicted perceptual effects. Across the 13 participants sensitive to vibration, stance phase vibration caused the contralateral leg to be placed significantly closer to the midline (by ∼2 mm) at the end of the ongoing step. In contrast, swing phase vibration caused the vibrated leg to be placed significantly farther mediolaterally from the midline (by ∼2 mm), whereas the pelvis was held closer to the stance foot (by ∼1 mm). The estimated mediolateral margin of stability was thus decreased by stance phase vibration but increased by swing phase vibration. Although the observed effects of vibration were small, they were consistent with humans monitoring hip proprioceptive feedback while walking to maintain stable mediolateral gait motion. PMID:26289467
Haptic exploration of fingertip-sized geometric features using a multimodal tactile sensor
NASA Astrophysics Data System (ADS)
Ponce Wong, Ruben D.; Hellman, Randall B.; Santos, Veronica J.
2014-06-01
Haptic perception remains a grand challenge for artificial hands. Dexterous manipulators could be enhanced by "haptic intelligence" that enables identification of objects and their features via touch alone. Haptic perception of local shape would be useful when vision is obstructed or when proprioceptive feedback is inadequate. In this work, a robot hand outfitted with a deformable, bladder-type, multimodal tactile sensor was used to replay four human-inspired haptic "exploratory procedures" on fingertip-sized geometric features. The geometric features varied by type (bump, pit), curvature (planar, conical, spherical), and footprint dimension (1.25-20 mm). Tactile signals generated by active fingertip motions were used to extract key parameters for use as inputs to supervised learning models. A support vector classifier estimated order of curvature while support vector regression models estimated footprint dimension once curvature had been estimated. A distal-proximal stroke (along the long axis of the finger) enabled estimation of order of curvature with an accuracy of 97%. Best-performing, curvature-specific, support vector regression models yielded R2 values of at least 0.95. While a radial-ulnar stroke (along the short axis of the finger) was most helpful for estimating feature type and size for planar features, a rolling motion was most helpful for conical and spherical features. The ability to haptically perceive local shape could be used to advance robot autonomy and provide haptic feedback to human teleoperators of devices ranging from bomb defusal robots to neuroprostheses.
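The two-stage scheme described above (classify order of curvature first, then apply a curvature-specific regressor for footprint size) can be sketched as follows. This is an illustrative reconstruction on synthetic features, assuming scikit-learn is available; the feature construction, hyperparameters, and data are stand-ins, not the authors' tactile signals or trained models.

```python
import numpy as np
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)
curvature = np.repeat([0, 1, 2], 100)          # planar, conical, spherical
size = rng.uniform(1.25, 20.0, size=300)       # footprint dimension (mm)

# Synthetic 4-D "tactile" features carrying class and size information
X = rng.normal(scale=0.2, size=(300, 4))
X[:, 0] += curvature                           # separates curvature classes
X[:, 1] += 0.1 * size                          # encodes footprint size

clf = SVC(kernel="rbf").fit(X, curvature)      # stage 1: order of curvature
regs = {c: SVR(kernel="rbf", C=10.0).fit(X[curvature == c], size[curvature == c])
        for c in (0, 1, 2)}                    # stage 2: per-curvature size models

c_hat = clf.predict(X)                         # predicted curvature class
d_hat = np.array([regs[int(c)].predict(x[None, :])[0]
                  for c, x in zip(c_hat, X)])  # size via the predicted class
```

The key design point mirrored here is that the regressors are curvature-specific: size is only estimated after the classifier has routed the sample to the matching model.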
On the road to somewhere: Brain potentials reflect language effects on motion event perception.
Flecken, Monique; Athanasopoulos, Panos; Kuipers, Jan Rouke; Thierry, Guillaume
2015-08-01
Recent studies have identified neural correlates of language effects on perception in static domains of experience such as colour and objects. The generalization of such effects to dynamic domains like motion events remains elusive. Here, we focus on grammatical differences between languages relevant for the description of motion events and their impact on visual scene perception. Two groups of native speakers of German or English were presented with animated videos featuring a dot travelling along a trajectory towards a geometrical shape (endpoint). English is a language with grammatical aspect in which attention is drawn to trajectory and endpoint of motion events equally. German, in contrast, is a non-aspect language which highlights endpoints. We tested the comparative perceptual saliency of trajectory and endpoint of motion events by presenting motion event animations (primes) followed by a picture symbolising the event (target): In 75% of trials, the animation was followed by a mismatching picture (both trajectory and endpoint were different); in 10% of trials, only the trajectory depicted in the picture matched the prime; in 10% of trials, only the endpoint matched the prime; and in 5% of trials both trajectory and endpoint were matching, which was the condition requiring a response from the participant. In Experiment 1 we recorded event-related brain potentials elicited by the picture in native speakers of German and native speakers of English. German participants exhibited a larger P3 wave in the endpoint match than the trajectory match condition, whereas English speakers showed no P3 amplitude difference between conditions. In Experiment 2 participants performed a behavioural motion matching task using the same stimuli as those used in Experiment 1. German and English participants did not differ in response times showing that motion event verbalisation cannot readily account for the difference in P3 amplitude found in the first experiment. 
We argue that, even in a non-verbal context, the grammatical properties of the native language and associated sentence-level patterns of event encoding influence motion event perception, such that attention is automatically drawn towards aspects highlighted by the grammar. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
Heading Tuning in Macaque Area V6.
Fan, Reuben H; Liu, Sheng; DeAngelis, Gregory C; Angelaki, Dora E
2015-12-16
Cortical areas, such as the dorsal subdivision of the medial superior temporal area (MSTd) and the ventral intraparietal area (VIP), have been shown to integrate visual and vestibular self-motion signals. Area V6 is interconnected with areas MSTd and VIP, allowing for the possibility that V6 also integrates visual and vestibular self-motion cues. An alternative hypothesis in the literature is that V6 does not use these sensory signals to compute heading but instead discounts self-motion signals to represent object motion. However, the responses of V6 neurons to visual and vestibular self-motion cues have never been studied, thus leaving the functional roles of V6 unclear. We used a virtual reality system to examine the 3D heading tuning of macaque V6 neurons in response to optic flow and inertial motion stimuli. We found that the majority of V6 neurons are selective for heading defined by optic flow. However, unlike areas MSTd and VIP, V6 neurons are almost universally unresponsive to inertial motion in the absence of optic flow. We also explored the spatial reference frames of heading signals in V6 by measuring heading tuning for different eye positions, and we found that the visual heading tuning of most V6 cells was eye-centered. Similar to areas MSTd and VIP, the population of V6 neurons was best able to discriminate small variations in heading around forward and backward headings. Our findings support the idea that V6 is involved primarily in processing visual motion signals and does not appear to play a role in visual-vestibular integration for self-motion perception. To understand how we successfully navigate our world, it is important to understand which parts of the brain process cues used to perceive our direction of self-motion (i.e., heading). Cortical area V6 has been implicated in heading computations based on human neuroimaging data, but direct measurements of heading selectivity in individual V6 neurons have been lacking. 
We provide the first demonstration that V6 neurons carry 3D visual heading signals, which are represented in an eye-centered reference frame. In contrast, we found almost no evidence for vestibular heading signals in V6, indicating that V6 is unlikely to contribute to multisensory integration of heading signals, unlike other cortical areas. These findings provide important constraints on the roles of V6 in self-motion perception. Copyright © 2015 the authors 0270-6474/15/3516303-12$15.00/0.
The development of global motion discrimination in school aged children
Bogfjellmo, Lotte-Guri; Bex, Peter J.; Falkenberg, Helle K.
2014-01-01
Global motion perception matures during childhood and involves the detection of local directional signals that are integrated across space. We examine the maturation of local directional selectivity and global motion integration with an equivalent noise paradigm applied to direction discrimination. One hundred and three observers (6–17 years) identified the global direction of motion in a 2AFC task. The 8° central stimuli consisted of 100 dots of 10% Michelson contrast moving at 2.8°/s or 9.8°/s. Local directional selectivity and global sampling efficiency were estimated from direction discrimination thresholds as a function of external directional noise, speed, and age. Direction discrimination thresholds improved gradually until the age of 14 years (linear regression, p < 0.05) for both speeds. This improvement was associated with a gradual increase in sampling efficiency (linear regression, p < 0.05), with no significant change in internal noise. Direction sensitivity was lower for dots moving at 2.8°/s than at 9.8°/s for all ages (paired t test, p < 0.05), mainly because of lower sampling efficiency. Global motion perception improves gradually during development and matures by age 14. There was no change in internal noise after the age of 6, suggesting that local direction selectivity is mature by that age. The improvement in global motion perception is underpinned by a steady increase in the efficiency with which direction signals are pooled, suggesting that global motion pooling processes mature later and over a longer period than local motion processing. PMID:24569985
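The equivalent noise logic above can be stated compactly: the observed direction threshold grows with external directional noise as sigma_obs = sqrt((sigma_int^2 + sigma_ext^2) / n_eff), so internal noise sets the flat low-noise branch and sampling efficiency n_eff scales the whole curve. A minimal sketch of fitting these two parameters on simulated thresholds (the noise levels and parameter values below are illustrative, not the study's data):

```python
import numpy as np

# Equivalent noise model: squared observed threshold rises with external
# directional noise, scaled by sampling efficiency n_eff.
def threshold(sigma_ext, sigma_int, n_eff):
    return np.sqrt((sigma_int**2 + sigma_ext**2) / n_eff)

sigma_ext = np.array([0.0, 2.0, 4.0, 8.0, 16.0])      # external noise SD (deg)
# Simulated observer: internal noise 2 deg, ~10 signals effectively pooled
data = threshold(sigma_ext, sigma_int=2.0, n_eff=10.0)

# Crude grid-search fit of (sigma_int, n_eff) to the thresholds
fits = [(si, n, np.sum((threshold(sigma_ext, si, n) - data) ** 2))
        for si in np.linspace(0.5, 5.0, 50)
        for n in np.linspace(1.0, 30.0, 100)]
sigma_int_hat, n_eff_hat, _ = min(fits, key=lambda f: f[2])
```

The two parameters are separable because high external noise pins n_eff while the zero-noise threshold pins sigma_int, which is why the paradigm can attribute development to efficiency rather than internal noise.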
Meier, Kimberly; Sum, Brian; Giaschi, Deborah
2016-10-01
Global motion sensitivity in typically developing children depends on the spatial (Δx) and temporal (Δt) displacement parameters of the motion stimulus. Specifically, sensitivity for small Δx values matures at a later age, suggesting it may be the most vulnerable to damage by amblyopia. To explore this possibility, we compared motion coherence thresholds of children with amblyopia (7-14 years old) to age-matched controls. Three Δx values were used with two Δt values, yielding six conditions covering a range of speeds (0.3-30 deg/s). We predicted children with amblyopia would show normal coherence thresholds for the same parameters on which 5-year-olds previously demonstrated mature performance, and elevated coherence thresholds for parameters on which 5-year-olds demonstrated immaturities. Consistent with this, we found that children with amblyopia showed deficits with amblyopic eye viewing compared to controls for small and medium Δx values, regardless of Δt value. The fellow eye showed similar results at the smaller Δt. These results confirm that global motion perception in children with amblyopia is particularly deficient at the finer spatial scales that typically mature later in development. An additional implication is that carefully designed stimuli that are adequately sensitive must be used to assess global motion function in developmental disorders. Stimulus parameters for which performance matures early in life may not reveal global motion perception deficits. Copyright © 2016 Elsevier Ltd. All rights reserved.
The role of eye movements in depth from motion parallax during infancy
Nawrot, Elizabeth; Nawrot, Mark
2013-01-01
Motion parallax is a motion-based, monocular depth cue that uses an object's relative motion and velocity as a cue to relative depth. In adults, and in monkeys, a smooth pursuit eye movement signal is used to disambiguate the depth-sign provided by these relative motion cues. The current study investigates infants' perception of depth from motion parallax and the development of two oculomotor functions, smooth pursuit and the ocular following response (OFR) eye movements. Infants 8 to 20 weeks of age were presented with three tasks in a single session: depth from motion parallax, smooth pursuit tracking, and OFR to translation. The development of smooth pursuit was significantly related to age, as was sensitivity to motion parallax. OFR eye movements also corresponded to both age and smooth pursuit gain, with groups of infants demonstrating asymmetric function in both types of eye movements. These results suggest that the development of the eye movement system may play a crucial role in the sensitivity to depth from motion parallax in infancy. Moreover, describing the development of these oculomotor functions in relation to depth perception may aid in the understanding of certain visual dysfunctions. PMID:24353309
Effect of eye position during human visual-vestibular integration of heading perception.
Crane, Benjamin T
2017-09-01
Visual and inertial stimuli provide heading discrimination cues. Integration of these multisensory stimuli has been demonstrated to depend on their relative reliability. However, the reference frame of visual stimuli is eye centered while inertia is head centered, and it remains unclear how these are reconciled with combined stimuli. Seven human subjects completed a heading discrimination task consisting of a 2-s translation with a peak velocity of 16 cm/s. Eye position was varied between 0° and ±25° left/right. Experiments were done with inertial motion, visual motion, or a combined visual-inertial motion. Visual motion coherence varied between 35% and 100%. Subjects reported whether their perceived heading was left or right of the midline in a forced-choice task. With the inertial stimulus the eye position had an effect such that the point of subjective equality (PSE) shifted 4.6 ± 2.4° in the gaze direction. With the visual stimulus the PSE shift was 10.2 ± 2.2° opposite the gaze direction, consistent with retinotopic coordinates. Thus with eccentric eye positions the perceived inertial and visual headings were offset ~15°. During the visual-inertial conditions the PSE varied consistently with the relative reliability of these stimuli such that at low visual coherence the PSE was similar to that of the inertial stimulus and at high coherence it was closer to the visual stimulus. On average, the inertial stimulus was weighted near Bayesian ideal predictions, but there was significant deviation from ideal in individual subjects. These findings support visual and inertial cue integration occurring in independent coordinate systems. NEW & NOTEWORTHY In multiple cortical areas visual heading is represented in retinotopic coordinates while inertial heading is in body coordinates. It remains unclear whether multisensory integration occurs in a common coordinate system. 
The experiments address this using a multisensory integration task with eccentric gaze positions making the effect of coordinate systems clear. The results indicate that the coordinate systems remain separate to the perceptual level and that during the multisensory task the perception depends on relative stimulus reliability. Copyright © 2017 the American Physiological Society.
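The Bayesian ideal benchmark mentioned above combines cues in proportion to their reliabilities (inverse variances), so the combined PSE migrates from the inertial estimate toward the visual one as visual coherence rises. A minimal sketch reusing the abstract's PSE shifts (+4.6° inertial, −10.2° visual); the noise values assigned to each cue are illustrative, not measured:

```python
import numpy as np

def combine(mu_vis, sigma_vis, mu_inert, sigma_inert):
    """Reliability-weighted (inverse-variance) cue combination."""
    w_vis, w_inert = 1 / sigma_vis**2, 1 / sigma_inert**2
    mu = (w_vis * mu_vis + w_inert * mu_inert) / (w_vis + w_inert)
    sigma = np.sqrt(1 / (w_vis + w_inert))
    return mu, sigma

# Low visual coherence: noisy visual cue, combined PSE stays near inertial
pse_low, _ = combine(mu_vis=-10.2, sigma_vis=8.0, mu_inert=4.6, sigma_inert=2.0)
# High visual coherence: reliable visual cue pulls the PSE toward visual
pse_high, _ = combine(mu_vis=-10.2, sigma_vis=1.0, mu_inert=4.6, sigma_inert=2.0)
```

Note that the combined estimate always lies between the two single-cue PSEs; individual deviations from this prediction are what the abstract flags as departures from the Bayesian ideal.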
Sparing of Sensitivity to Biological Motion but Not of Global Motion after Early Visual Deprivation
ERIC Educational Resources Information Center
Hadad, Bat-Sheva; Maurer, Daphne; Lewis, Terri L.
2012-01-01
Patients deprived of visual experience during infancy by dense bilateral congenital cataracts later show marked deficits in the perception of global motion (dorsal visual stream) and global form (ventral visual stream). We expected that they would also show marked deficits in sensitivity to biological motion, which is normally processed in the…
Pictorial communication in virtual and real environments
NASA Technical Reports Server (NTRS)
Ellis, Stephen R. (Editor)
1991-01-01
Papers about the communication between human users and machines in real and synthetic environments are presented. Individual topics addressed include: pictorial communication, distortions in memory for visual displays, cartography and map displays, efficiency of graphical perception, volumetric visualization of 3D data, spatial displays to increase pilot situational awareness, teleoperation of land vehicles, computer graphics system for visualizing spacecraft in orbit, visual display aid for orbital maneuvering, multiaxis control in telemanipulation and vehicle guidance, visual enhancements in pick-and-place tasks, target axis effects under transformed visual-motor mappings, adapting to variable prismatic displacement. Also discussed are: spatial vision within egocentric and exocentric frames of reference, sensory conflict in motion sickness, interactions of form and orientation, perception of geometrical structure from congruence, prediction of three-dimensionality across continuous surfaces, effects of viewpoint in the virtual space of pictures, visual slant underestimation, spatial constraints of stereopsis in video displays, stereoscopic stance perception, paradoxical monocular stereopsis and perspective vergence. (No individual items are abstracted in this volume)
Aural-Visual-Kinesthetic Imagery in Motion Media.
ERIC Educational Resources Information Center
Allan, David W.
Motion media refers to film, television, and other forms of kinesthetic media including computerized multimedia technologies and virtual reality. Imagery reproduced by motion media carries a multisensory amalgamation of mental experiences. The blending of these experiences phenomenologically intersects with the reality and perception of words,…
Visible propagation from invisible exogenous cueing.
Lin, Zhicheng; Murray, Scott O
2013-09-20
Perception and performance are affected not just by what we see but also by what we do not see: inputs that escape our awareness. While conscious processing and unconscious processing have been assumed to be separate and independent, here we report the propagation of unconscious exogenous cueing as determined by conscious motion perception. In a paradigm combining masked exogenous cueing and apparent motion, we show that, when an onset cue was rendered invisible, the unconscious exogenous cueing effect traveled, manifesting at uncued locations (4° apart) in accordance with conscious perception of visual motion; the effect diminished when the cue-to-target distance was 8°. In contrast, conscious exogenous cueing manifested at both distances. Further evidence reveals that the unconscious and conscious nonretinotopic effects could not be explained by an attentional gradient, nor by bottom-up, energy-based motion mechanisms; rather, they were subserved by top-down, tracking-based motion mechanisms. We thus term these effects mobile cueing. Taken together, unconscious mobile cueing effects (a) demonstrate a previously unknown degree of flexibility of unconscious exogenous attention; (b) embody a simultaneous dissociation and association of attention and consciousness, in which exogenous attention can occur without cue awareness ("dissociation"), yet at the same time its effect is contingent on conscious motion tracking ("association"); and (c) underscore the interaction of conscious and unconscious processing, providing evidence for an unconscious effect that is not automatic but controlled.
The application of biological motion research: biometrics, sport, and the military.
Steel, Kylie; Ellem, Eathan; Baxter, David
2015-02-01
The body of research that examines the perception of biological motion is extensive and explores the factors that are perceived from biological motion and how this information is processed. This research demonstrates that individuals are able to use relative (temporal and spatial) information from a person's movement to recognize factors, including gender, age, deception, emotion, intention, and action. The research also demonstrates that movement presents idiosyncratic properties that allow individual discrimination, thus providing the basis for significant exploration in the domain of biometrics and social signal processing. Applications of biological motion perception also have a history of research in domains such as medical forensics, safety garments, and victim selection; however, a number of additional domains present opportunities for application that have not been explored in depth. Therefore, the purpose of this paper is to present an overview of the current applications of biological motion-based research and to propose a number of areas where biological motion research, specific to recognition, could be applied in the future.
Meyer, Georg F.; Wong, Li Ting; Timson, Emma; Perfect, Philip; White, Mark D.
2012-01-01
We argue that objective fidelity evaluation of virtual environments, such as flight simulation, should be human-performance-centred and task-specific rather than measure the match between simulation and physical reality. We show how principled experimental paradigms and behavioural models to quantify human performance in simulated environments that have emerged from research in multisensory perception provide a framework for the objective evaluation of the contribution of individual cues to human performance measures of fidelity. We present three examples in a flight simulation environment as a case study: Experiment 1: Detection and categorisation of auditory and kinematic motion cues; Experiment 2: Performance evaluation in a target-tracking task; Experiment 3: Transferrable learning of auditory motion cues. We show how the contribution of individual cues to human performance can be robustly evaluated for each task and that the contribution is highly task dependent. The same auditory cues that can be discriminated and are optimally integrated in Experiment 1 do not contribute to target-tracking performance in an in-flight refuelling simulation without training (Experiment 2). In Experiment 3, however, we demonstrate that the auditory cue leads to significant, transferrable, performance improvements with training. We conclude that objective fidelity evaluation requires a task-specific analysis of the contribution of individual cues. PMID:22957068
Shaking Takete and Flowing Maluma. Non-Sense Words Are Associated with Motion Patterns
Koppensteiner, Markus; Stephan, Pia; Jäschke, Johannes Paul Michael
2016-01-01
People assign the artificial words takete and kiki to spiky, angular figures and the artificial words maluma and bouba to rounded figures. We examined whether such a cross-modal correspondence could also be found for human body motion. We transferred the body movements of speakers onto two-dimensional coordinates and created animated stick-figures based on this data. Then we invited people to judge these stimuli using the words takete-maluma, bouba-kiki, and several verbal descriptors that served as measures of angularity/smoothness. In addition to this we extracted the quantity of motion, the velocity of motion and the average angle between motion vectors from the coordinate data. Judgments of takete (and kiki) were related to verbal descriptors of angularity, a high quantity of motion, high velocity and sharper angles. Judgments of maluma (or bouba) were related to smooth movements, a low velocity, a lower quantity of motion and blunter angles. A forced-choice experiment during which we presented subsets with low and high rankers on our motion measures revealed that people preferably assigned stimuli displaying fast movements with sharp angles in motion vectors to takete and stimuli displaying slow movements with blunter angles in motion vectors to maluma. Results indicated that body movements share features with information inherent in words such as takete and maluma and that people perceive the body movements of speakers on the level of changes in motion direction (e.g., body moves to the left and then back to the right). Follow-up studies are needed to clarify whether impressions of angularity and smoothness have similar communicative values across different modalities and how this affects social judgments and person perception. PMID:26939013
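One of the motion measures described above, the average angle between motion vectors, can be computed directly from tracked 2-D coordinates: sharper direction changes yield larger mean angles ("takete"-like), smoother drift yields smaller ones ("maluma"-like). A minimal sketch with illustrative trajectories (not the study's stick-figure data):

```python
import numpy as np

def mean_turn_angle(xy):
    """Mean angle (deg) between consecutive motion vectors of a 2-D track."""
    d = np.diff(xy, axis=0)                              # motion vectors
    u = d / np.linalg.norm(d, axis=1, keepdims=True)     # unit vectors
    cos = np.clip(np.sum(u[:-1] * u[1:], axis=1), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)).mean())

# Illustrative tracks: an angular zigzag vs. a smoothly drifting point
zigzag = np.array([[0, 0], [1, 1], [2, 0], [3, 1], [4, 0]], dtype=float)
smooth = np.array([[0, 0], [1, 0.10], [2, 0.15], [3, 0.18], [4, 0.20]])
```

The clipping of the cosine guards against floating-point values fractionally outside [−1, 1] before arccos.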
Perceived spatial displacement of motion-defined contours in peripheral vision.
Fan, Zhao; Harris, John
2008-12-01
The perceived displacement of motion-defined contours in peripheral vision was examined in four experiments. In Experiment 1, in line with Ramachandran and Anstis' finding [Ramachandran, V. S., & Anstis, S. M. (1990). Illusory displacement of equiluminous kinetic edges. Perception, 19, 611-616], the border between a field of drifting dots and a static dot pattern was apparently displaced in the same direction as the movement of the dots. When a uniform dark area was substituted for the static dots, a similar displacement was found, but this was smaller and statistically insignificant. In Experiment 2, the border between two fields of dots moving in opposite directions was displaced in the direction of motion of the dots in the more eccentric field, so that the location of a boundary defined by a diverging pattern is perceived as more eccentric, and that defined by a converging pattern as less eccentric. Two explanations for this effect (that the displacement reflects a greater weight given to the more eccentric motion, or that the region containing stronger centripetal motion components expands perceptually into that containing centrifugal motion) were tested in Experiment 3, by varying the velocity of the more eccentric region. The results favoured the explanation based on the expansion of an area in centripetal motion. Experiment 4 showed that the difference in perceived location was unlikely to be due to differences in the discriminability of contours in diverging and converging patterns, and confirmed that this effect is due to a difference between centripetal and centrifugal motion rather than motion components in other directions. Our result provides new evidence for a bias towards centripetal motion in human vision, and suggests that the direction of motion-induced displacement of edges is not always in the direction of an adjacent moving pattern.
Holistic processing of static and moving faces.
Zhao, Mintao; Bülthoff, Isabelle
2017-07-01
Humans' face ability develops and matures with extensive experience in perceiving, recognizing, and interacting with faces that move most of the time. However, how facial movements affect 1 core aspect of face ability-holistic face processing-remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based face processing by manipulating the presence of facial motion during study and at test in a composite face task. The results showed that rigidly moving faces were processed as holistically as static faces (Experiment 1). Holistic processing of moving faces persisted whether facial motion was presented during study, at test, or both (Experiment 2). Moreover, when faces were inverted to eliminate the contributions of both an upright face template and observers' expertise with upright faces, rigid facial motion facilitated holistic face processing (Experiment 3). Thus, holistic processing represents a general principle of face perception that applies to both static and dynamic faces, rather than being limited to static faces. These results support an emerging view that both perceiver-based and face-based factors contribute to holistic face processing, and they offer new insights on what underlies holistic face processing, how the sources of information supporting holistic face processing interact with each other, and why facial motion may affect face recognition and holistic face processing differently. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Verstraten, Frans A J; Niehorster, Diederick C; van de Grind, Wim A; Wade, Nicholas J
2015-10-01
In his original contribution, Exner's principal concern was a comparison between the properties of different aftereffects, and particularly to determine whether aftereffects of motion were similar to those of color and whether they could be encompassed within a unified physiological framework. Despite the fact that he was unable to answer his main question, there are some excellent-so far unknown-contributions in Exner's paper. For example, he describes observations that can be related to binocular interaction, not only in motion aftereffects but also in rivalry. To the best of our knowledge, Exner provides the first description of binocular rivalry induced by differently moving patterns in each eye, for motion as well as for their aftereffects. Moreover, apart from several known, but beautifully addressed, phenomena he makes a clear distinction between motion in depth based on stimulus properties and motion in depth based on the interpretation of motion. That is, the experience of movement, as distinct from the perception of movement. The experience, unlike the perception, did not result in a motion aftereffect in depth.
How imagery changes self-motion perception
Nigmatullina, Y.; Arshad, Q.; Wu, K.; Seemungal, B.M.; Bronstein, A.M.; Soto, D.
2015-01-01
Imagery and perception are thought to be tightly linked, however, little is known about the interaction between imagery and the vestibular sense, in particular, self-motion perception. In this study, the observers were seated in the dark on a motorized chair that could rotate either to the right or to the left. Prior to the physical rotation, observers were asked to imagine themselves rotating leftward or rightward. We found that if the direction of imagined rotation was different to the physical rotation of the chair (incongruent trials), the velocity of the chair needed to be higher for observers to experience themselves rotating relative to when the imagined and the physical rotation matched (on congruent trials). Accordingly, the vividness of imagined rotations was reduced on incongruent relative to congruent trials. Notably, we found that similar effects of imagery were found at the earliest stages of vestibular processing, namely, the onset of the vestibulo-ocular reflex was modulated by the congruency between physical and imagined rotations. Together, the results demonstrate that mental imagery influences self-motion perception by exerting top-down influences over the earliest vestibular response and subsequent perceptual decision-making. PMID:25637805
Hsu, Patty; Taylor, J Eric T; Pratt, Jay
2015-01-01
The Ternus effect is a robust illusion of motion that produces element motion at short interstimulus intervals (ISIs; < 50 ms) and group motion at longer ISIs (> 50 ms). Previous research has shown that the nature of the stimuli (e.g., similarity, grouping), not just ISI, can influence the likelihood of perceiving element or group motion. We examined whether semantic knowledge can also influence what type of illusory motion is perceived. In Experiment 1, we used a modified Ternus display with pictures of frogs in a jump-ready pose facing either in the direction of illusory motion or opposite to it. Participants perceived more element motion with the forward-facing frogs and more group motion with the backward-facing frogs. Experiment 2 tested whether this effect would still occur with line drawings of frogs, or if a more life-like image was necessary. Experiment 3 tested whether the effect was due to visual asymmetries inherent in the jumping pose. Experiment 4 tested whether frogs in a non-jumping, sedentary pose would replicate the original effect. These experiments elucidate the role of semantic knowledge in the Ternus effect: prior knowledge of the movement of certain animate objects, in this case frogs, can bias the perception of element or group motion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rutqvist, Jonny; Cappa, Frédéric; Rinaldi, Antonio P.
In this paper, we present model simulations of ground motions caused by CO2-injection-induced fault reactivation and analyze the results in terms of the potential for damage to ground surface structures and nuisance to the local human population. It is an integrated analysis from cause to consequence, covering the whole chain of processes from earthquake inception in the subsurface, through wave propagation toward the ground surface, to assessment of the consequences of ground vibration. For a small-magnitude (Mw = 3) event at a hypocenter depth of about 1000 m, we first used the simulated ground-motion wave train in an inverse analysis to estimate source parameters (moment magnitude, rupture dimensions, and stress drop), achieving good agreement and thereby verifying the modeling of the chain of processes from earthquake inception to ground vibration. We then analyzed the ground vibration results in terms of peak ground acceleration (PGA), peak ground velocity (PGV), and frequency content, with comparison to the U.S. Geological Survey's instrumental intensity scales for earthquakes and the U.S. Bureau of Mines' vibration criteria for cosmetic damage to buildings, as well as human-perception vibration limits. Our results confirm the appropriateness of using PGV (rather than PGA) and frequency for the evaluation of potential ground-vibration effects on structures and humans from shallow injection-induced seismic events. For the considered synthetic Mw = 3 event, our analysis showed that the short-duration, high-frequency ground motion may not cause any significant damage to surface structures, but would certainly be felt by the local population.
A Role for MST Neurons in Heading Estimation
NASA Technical Reports Server (NTRS)
Stone, Leland Scott; Perrone, J. A.; Wade, Charles E. (Technical Monitor)
1994-01-01
A template model of human visual self-motion perception (Perrone, JOSA, 1992; Perrone & Stone, Vis. Res., in press), which uses neurophysiologically realistic "heading detectors", is consistent with numerous human psychophysical results (Warren & Hannon, Nature, 1988; Stone & Perrone, Neuro. Abstr., 1991), including the failure of humans to estimate their heading (direction of forward translation) accurately under certain visual conditions (Royden et al., Nature, 1992). We tested the model detectors with stimuli used by others in single-unit studies. The detectors showed emergent properties similar to those of MST neurons: 1) Sensitivity to non-preferred flow. Each detector is tuned to a specific combination of flow components, and its response is systematically reduced by the addition of non-preferred flow (Orban et al., PNAS, 1992). 2) Position invariance. The detectors maintain their apparent preference for particular flow components over large regions of their receptive fields (e.g., Duffy & Wurtz, J. Neurophys., 1991; Graziano et al., J. Neurosci., 1994). It has been argued that this latter property is incompatible with MST playing a role in heading perception. The model, however, demonstrates how neurons with the above response properties could still support accurate heading estimation within extrastriate cortical maps.
Life sciences experiments on Spacelab 1
NASA Technical Reports Server (NTRS)
Buderer, M. C.; Salinas, G. A.
1980-01-01
The objectives and procedures regarding various biological experiments to be conducted on Spacelab 1 are reviewed. These include mapping the HZE cosmic ray particle flux within the Spacelab module, investigating the effects of null gravity on circadian cycles in the fungus Neurospora crassa, and measuring nutations of the dwarf sunflower, Helianthus annuus. Emphasis is placed on research regarding possible changes in vestibulo-ocular reflexes, vestibulospinal pathways, and cortical functions involving perception of motion and spatial susceptibility. Also discussed are experiments regarding erythrokinetics in man and the effects of prolonged weightlessness on the humoral immune response in humans.
Enhancing Motion-In-Depth Perception of Random-Dot Stereograms.
Zhang, Di; Nourrit, Vincent; De Bougrenet de la Tocnaye, Jean-Louis
2018-07-01
Random-dot stereograms have been widely used to explore the neural mechanisms underlying binocular vision. Although they are a powerful tool to stimulate motion-in-depth (MID) perception, published results report some difficulties in the capacity to perceive MID generated by random-dot stereograms. The purpose of this study was to investigate whether the performance of MID perception could be improved using an appropriate stimulus design. Sixteen inexperienced observers participated in the experiment. A training session was carried out to improve the accuracy of MID detection before the experiment. Four aspects of stimulus design were investigated: presence of a static reference, background texture, relative disparity, and stimulus contrast. Participants' performance in MID direction discrimination was recorded and compared to evaluate whether varying these factors helped MID perception. Results showed that only the presence of background texture had a significant effect on MID direction perception. This study provides suggestions for the design of 3D stimuli in order to facilitate MID perception.
Suppressive mechanisms in visual motion processing: From perception to intelligence.
Tadin, Duje
2015-10-01
Perception operates on an immense amount of incoming information that greatly exceeds the brain's processing capacity. Because of this fundamental limitation, the ability to suppress irrelevant information is a key determinant of perceptual efficiency. Here, I will review a series of studies investigating suppressive mechanisms in visual motion processing, namely perceptual suppression of large, background-like motions. These spatial suppression mechanisms are adaptive, operating only when sensory inputs are sufficiently robust to guarantee visibility. Converging correlational and causal evidence links these behavioral results with inhibitory center-surround mechanisms, particularly those in cortical area MT. Spatial suppression is abnormally weak in several special populations, including the elderly and individuals with schizophrenia, a deficit that is evidenced by better-than-normal direction discrimination of large moving stimuli. Theoretical work shows that this abnormal weakening of spatial suppression should result in motion segregation deficits, but direct behavioral support of this hypothesis is lacking. Finally, I will argue that the ability to suppress information is a fundamental neural process that applies not only to perception but also to cognition in general. Supporting this argument, I will discuss recent research showing that individual differences in spatial suppression of motion signals strongly predict individual variations in IQ scores.
Pitch body orientation influences the perception of self-motion direction induced by optic flow.
Bourrelly, A; Vercher, J-L; Bringoux, L
2010-10-04
We studied the effect of static pitch body tilts on the perception of self-motion direction induced by a visual stimulus. Subjects were seated in front of a screen onto which was projected a 3D cluster of moving dots visually simulating a forward motion of the observer, with upward or downward directional biases (relative to the true earth-horizontal direction). The subjects were tilted at various angles relative to gravity and were asked to estimate the direction of the perceived motion (nose-up, as during take-off, or nose-down, as during landing). The data showed that body orientation proportionally affected the amount of error in the reported perceived direction (by 40% of body tilt magnitude in a range of +/-20 degrees), and these errors were systematically in the direction of body tilt. As a consequence, the same visual stimulus was interpreted differently depending on body orientation. While the subjects were required to perform the task in a geocentric reference frame (i.e., relative to a gravity-related direction), they were clearly influenced by egocentric references. These results suggest that the perception of self-motion is not elaborated within an exclusive reference frame (either egocentric or geocentric) but rather results from the combined influence of both.
Accumulation of Inertial Sensory Information in the Perception of Whole Body Yaw Rotation.
Nesti, Alessandro; de Winkel, Ksander; Bülthoff, Heinrich H
2017-01-01
While moving through the environment, our central nervous system accumulates sensory information over time to provide an estimate of our self-motion, allowing us to complete crucial tasks such as maintaining balance. However, little is known about how the duration of motion stimuli influences performance in a self-motion discrimination task. Here we study the human ability to discriminate intensities of sinusoidal (0.5 Hz) self-rotations around the vertical axis (yaw) for four different stimulus durations (1, 2, 3, and 5 s) in darkness. In a typical trial, participants experienced two consecutive rotations of equal duration and different peak amplitude, and reported the one perceived as stronger. For each stimulus duration, we determined the smallest detectable change in stimulus intensity (differential threshold) for a reference velocity of 15 deg/s. Results indicate that differential thresholds decrease with stimulus duration and asymptotically converge to a constant, positive value. This suggests that the central nervous system accumulates sensory information on self-motion over time, resulting in improved discrimination performance. The observed trends in differential thresholds are consistent with predictions based on a drift diffusion model with leaky integration of sensory evidence.
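The leaky-integration account of these results can be illustrated with a small simulation. All parameters below (leak rate, noise level, time step, trial counts) are illustrative assumptions, not values from the study: a leaky integrator accumulates noisy velocity evidence, so the predicted discrimination threshold falls with stimulus duration and then levels off as the leak discounts older evidence.

```python
import math
import random

def leaky_integrate(duration_s, velocity=15.0, leak=1.0, noise_sd=5.0, dt=0.01, seed=0):
    """Accumulate noisy velocity samples with an exponential leak and
    return the integrator state at stimulus offset."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(int(duration_s / dt)):
        sample = velocity + rng.gauss(0.0, noise_sd)  # noisy sensory evidence
        x += dt * (sample - leak * x)                 # leaky integration
    return x

def predicted_threshold(duration_s, n_trials=500):
    """Differential-threshold proxy: variability of the final state divided
    by its sensitivity (gain) with respect to the underlying velocity."""
    finals = [leaky_integrate(duration_s, seed=s) for s in range(n_trials)]
    mean = sum(finals) / n_trials
    sd = math.sqrt(sum((v - mean) ** 2 for v in finals) / n_trials)
    sensitivity = 1.0 - math.exp(-duration_s)  # asymptotic gain of the leak
    return sd / sensitivity

# Thresholds shrink with duration and flatten out, mirroring the reported trend.
thresholds = {d: predicted_threshold(d) for d in (1, 2, 3, 5)}
```

Because both the estimate's variability and its gain saturate at long durations, the predicted threshold converges to a constant positive value rather than vanishing, which is the signature behavior the abstract describes.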
Motion transparency: making models of motion perception transparent.
Snowden; Verstraten
1999-10-01
In daily life our visual system is bombarded with motion information. We see cars driving by, flocks of birds flying in the sky, clouds passing behind trees that are dancing in the wind. Vision science has a good understanding of the first stage of visual motion processing, that is, the mechanism underlying the detection of local motions. Currently, research is focused on the processes that occur beyond this first stage. At this level, local motions have to be integrated to form objects, define the boundaries between them, construct surfaces, and so on. An interesting, if complicated, case is known as motion transparency: the situation in which two overlapping surfaces move transparently over each other. In that case two motions have to be assigned to the same retinal location. Several researchers have tried to solve this problem from a computational point of view, using physiological and psychophysical results as a guideline. We will discuss two models: one uses the traditional idea known as 'filter selection', and the other a relatively new approach based on Bayesian inference. Predictions from these models are compared with our own visual behaviour and that of the neural substrates that are presumed to underlie these perceptions.
I Dream of J.J., or Affordances and Motion Pictures.
ERIC Educational Resources Information Center
Anderson, Joseph D.
1995-01-01
Categorizes attempts to account for how viewers garner meanings from motion pictures as either semiotic, realist, or conventionalist. Proposes an alternative explanation based on J. J. Gibson's ecological theory of perception. Offers his concept of "affordances" as the key to an explanation of how meanings in motion pictures are…
Espí-López, Gemma V; Gómez-Conesa, Antonia
2014-03-01
The purpose of this study was to evaluate the efficacy of manipulative and manual therapy treatments with regard to pain perception and neck mobility in patients with tension-type headache. A randomized clinical trial was conducted on 84 adults diagnosed with tension-type headache: 68 women and 16 men. Mean age was 39.76 years, ranging from 18 to 65 years. A total of 57.1% were diagnosed with chronic tension-type headache and 42.9% with episodic tension-type headache. Participants were divided into 3 treatment groups (manual therapy, manipulative therapy, and a combination of manual and manipulative therapy) and a control group. Four treatment sessions were administered over 4 weeks, with posttreatment assessment and follow-up at 1 month. Cervical ranges of motion, pain perception, and frequency and intensity of headaches were assessed. All 3 treatment groups showed significant improvements in the different dimensions of pain perception. Manual therapy and manipulative treatment improved some cervical ranges of motion. Headache frequency was reduced with manipulative treatment (P < .008). The combined treatment showed improvement after the treatment (P < .001) and at follow-up (P < .002). Pain intensity improved after the treatment and at follow-up with manipulative therapy (P < .01) and combined treatment (P < .01). Both treatments, administered separately and combined, showed efficacy for patients with tension-type headache with regard to pain perception. As for cervical ranges of motion, the treatments produced a greater effect when administered separately.
Fast transfer of crossmodal time interval training.
Chen, Lihan; Zhou, Xiaolin
2014-06-01
Sub-second time perception is essential for many important sensory and perceptual tasks including speech perception, motion perception, motor coordination, and crossmodal interaction. This study investigates to what extent the ability to discriminate sub-second time intervals acquired in one sensory modality can be transferred to another modality. To this end, we used perceptual classification of visual Ternus display (Ternus in Psychol Forsch 7:81-136, 1926) to implicitly measure participants' interval perception in pre- and posttests and implemented an intra- or crossmodal sub-second interval discrimination training protocol in between the tests. The Ternus display elicited either an "element motion" or a "group motion" percept, depending on the inter-stimulus interval between the two visual frames. The training protocol required participants to explicitly compare the interval length between a pair of visual, auditory, or tactile stimuli with a standard interval or to implicitly perceive the length of visual, auditory, or tactile intervals by completing a non-temporal task (discrimination of auditory pitch or tactile intensity). Results showed that after fast explicit training of interval discrimination (about 15 min), participants improved their ability to categorize the visual apparent motion in Ternus displays, although the training benefits were mild for visual timing training. However, the benefits were absent for implicit interval training protocols. This finding suggests that the timing ability in one modality can be rapidly acquired and used to improve timing-related performance in another modality and that there may exist a central clock for sub-second temporal processing, although modality-specific perceptual properties may constrain the functioning of this clock.
Tilt and Translation Motion Perception during Pitch Tilt with Visual Surround Translation
NASA Technical Reports Server (NTRS)
O'Sullivan, Brita M.; Harm, Deborah L.; Reschke, Millard F.; Wood, Scott J.
2006-01-01
The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive an accurate representation of spatial orientation. Previous studies suggest that multisensory integration is critical for discriminating linear accelerations arising from tilt and translation head motion. Visual input is especially important at low frequencies, where canal input is declining. The NASA Tilt Translation Device (TTD) was designed to recreate postflight orientation disturbances by exposing subjects to matching tilt self-motion with conflicting visual surround translation. Previous studies have demonstrated that brief exposures to pitch tilt with fore-aft visual surround translation produced changes in compensatory vertical eye movement responses, postural equilibrium, and motion sickness symptoms. Adaptation appeared greatest with visual scene motion leading (versus lagging) the tilt motion, and the adaptation time constant appeared to be approximately 30 min. The purpose of this study was to compare motion perception when the visual surround translation was in-phase versus out-of-phase with pitch tilt. The in-phase stimulus presented the visual surround motion one would experience if the linear acceleration were due to fore-aft self-translation within a stationary surround, while the out-of-phase stimulus had the visual scene motion leading the tilt by 90 deg, as previously used. The tilt stimuli in these conditions were asymmetrical, ranging from an upright orientation to 10 deg pitch back. Another objective of the study was to compare motion perception with the in-phase stimulus when the tilts were asymmetrical relative to upright (0 to 10 deg back) versus symmetrical (10 deg forward to 10 deg back). Twelve subjects (6M, 6F, 22-55 yrs) were tested during 3 sessions separated by at least one week.
During each of the three sessions (out-of-phase asymmetrical, in-phase asymmetrical, in-phase symmetrical), subjects were exposed to visual surround translation synchronized with pitch tilt at 0.1 Hz for a total of 30 min. Tilt and translation motion perception was obtained from verbal reports and a joystick mounted on a linear stage. Horizontal vergence and vertical eye movements were obtained with a binocular video system. Responses were also obtained in darkness before and following 15 min and 30 min of visual surround translation. Each of the three stimulus conditions involving visual surround translation elicited a significantly reduced sense of perceived tilt and strong linear vection (perceived translation) compared to pre-exposure tilt stimuli in darkness. This increase in perceived translation with reduction in tilt perception was also present in darkness following the 15 and 30 min exposures, provided the tilt stimuli were not interrupted. Although not significant, there was a trend for the in-phase asymmetrical stimulus to elicit a stronger sense of both translation and tilt than the out-of-phase asymmetrical stimulus. Surprisingly, the in-phase asymmetrical stimulus also tended to elicit a stronger sense of peak-to-peak translation than the in-phase symmetrical stimulus, even though the range of linear acceleration during the symmetrical stimulus was twice that of the asymmetrical stimulus. These results are consistent with the hypothesis that the central nervous system resolves the ambiguity of inertial motion sensory cues by integrating inputs from the visual, vestibular, and somatosensory systems.
Stanley, James; Gowen, Emma; Miall, R. Christopher
2010-01-01
Behavioural studies suggest that the processing of movement stimuli is influenced by beliefs about the agency behind these actions. The current study examined how activity in social and action related brain areas differs when participants were instructed that identical movement stimuli were either human or computer generated. Participants viewed a series of point-light animation figures derived from motion-capture recordings of a moving actor, while functional magnetic resonance imaging (fMRI) was used to monitor patterns of neural activity. The stimuli were scrambled to produce a range of stimulus realism categories; furthermore, before each trial participants were told that they were about to view either a recording of human movement or a computer-simulated pattern of movement. Behavioural results suggested that agency instructions influenced participants' perceptions of the stimuli. The fMRI analysis indicated different functions within the paracingulate cortex: ventral paracingulate cortex was more active for human compared to computer agency instructed trials across all stimulus types, whereas dorsal paracingulate cortex was activated more highly in conflicting conditions (human instruction, low realism or vice versa). These findings support the hypothesis that ventral paracingulate encodes stimuli deemed to be of human origin, whereas dorsal paracingulate cortex is involved more in the ascertainment of human or intentional agency during the observation of ambiguous stimuli. Our results highlight the importance of prior instructions or beliefs on movement processing and the role of the paracingulate cortex in integrating prior knowledge with bottom-up stimuli. PMID:20398769
Video quality assessment using motion-compensated temporal filtering and manifold feature similarity
Yu, Mei; Jiang, Gangyi; Shao, Feng; Peng, Zongju
2017-01-01
A well-performing video quality assessment (VQA) method should be consistent with the human visual system for better prediction accuracy. In this paper, we propose a VQA method using motion-compensated temporal filtering (MCTF) and manifold feature similarity. To be more specific, a group of frames (GoF) is first decomposed into a temporal high-pass component (HPC) and a temporal low-pass component (LPC) by MCTF. Following this, manifold feature learning (MFL) and phase congruency (PC) are used to predict the quality of the temporal LPC and the temporal HPC, respectively. The quality measures of the LPC and the HPC are then combined as the GoF quality. A temporal pooling strategy is subsequently used to integrate GoF qualities into an overall video quality. The proposed VQA method appropriately processes temporal information in video by MCTF and the temporal pooling strategy, and simulates human visual perception by MFL. Experiments on publicly available video quality databases showed that, in comparison with several state-of-the-art VQA methods, the proposed method achieves better consistency with subjective video quality and can predict video quality more accurately. PMID:28445489
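The combine-then-pool pipeline described in this abstract can be sketched as follows. The linear weight and the worst-scores pooling rule are placeholder assumptions for illustration only; the paper's actual combination rule and pooling parameters are not reproduced here.

```python
def gof_quality(q_lpc, q_hpc, w=0.7):
    """Combine the low-pass (MFL-based) and high-pass (PC-based) component
    qualities into one group-of-frames (GoF) score; w is a placeholder weight."""
    return w * q_lpc + (1.0 - w) * q_hpc

def temporal_pooling(gof_scores, worst_fraction=1 / 3):
    """Percentile-style pooling that emphasizes the worst GoF scores, a common
    VQA pooling heuristic (assumed here, not taken from the paper)."""
    ranked = sorted(gof_scores)
    k = max(1, round(len(ranked) * worst_fraction))
    return sum(ranked[:k]) / k

# Per-GoF (LPC, HPC) quality pairs for a hypothetical video:
pairs = [(0.9, 0.8), (0.6, 0.5), (0.8, 0.7)]
video_quality = temporal_pooling([gof_quality(q_l, q_h) for q_l, q_h in pairs])
```

Worst-scores pooling reflects the observation that viewers judge overall quality largely by the worst-looking stretches of a video, which is why simple averaging is often replaced by percentile pooling in VQA work.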
Dokka, Kalpana; DeAngelis, Gregory C.
2015-01-01
Humans and animals are fairly accurate in judging their direction of self-motion (i.e., heading) from optic flow when moving through a stationary environment. However, an object moving independently in the world alters the optic flow field and may bias heading perception if the visual system cannot dissociate object motion from self-motion. We investigated whether adding vestibular self-motion signals to optic flow enhances the accuracy of heading judgments in the presence of a moving object. Macaque monkeys were trained to report their heading (leftward or rightward relative to straight forward) when self-motion was specified by vestibular, visual, or combined visual-vestibular signals, while viewing a display in which an object moved independently in the (virtual) world. The moving object induced significant biases in perceived heading when self-motion was signaled by either visual or vestibular cues alone. However, this bias was greatly reduced when visual and vestibular cues together signaled self-motion. In addition, multisensory heading discrimination thresholds measured in the presence of a moving object were largely consistent with the predictions of an optimal cue integration strategy. These findings demonstrate that multisensory cues facilitate the perceptual dissociation of self-motion and object motion, consistent with computational work suggesting that an appropriate decoding of multisensory visual-vestibular neurons can estimate heading while discounting the effects of object motion. SIGNIFICANCE STATEMENT Objects that move independently in the world alter the optic flow field and can induce errors in perceiving the direction of self-motion (heading). We show that adding vestibular (inertial) self-motion signals to optic flow almost completely eliminates the errors in perceived heading induced by an independently moving object. Furthermore, this increased accuracy occurs without a substantial loss in precision. Our results thus demonstrate that vestibular signals play a critical role in dissociating self-motion from object motion. PMID:26446214
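The "optimal cue integration strategy" referenced above is conventionally the maximum-likelihood rule for independent Gaussian cues. A minimal sketch of that prediction; the single-cue thresholds below are hypothetical numbers chosen only to show the expected improvement:

```python
import math

def combined_threshold(sigma_vis, sigma_vest):
    """Optimal (maximum-likelihood) prediction for the combined-cue threshold:
    1/sigma_comb**2 = 1/sigma_vis**2 + 1/sigma_vest**2."""
    return math.sqrt((sigma_vis**2 * sigma_vest**2) / (sigma_vis**2 + sigma_vest**2))

def cue_weights(sigma_vis, sigma_vest):
    """Reliability-proportional weights given to each cue."""
    r_vis, r_vest = 1 / sigma_vis**2, 1 / sigma_vest**2
    return r_vis / (r_vis + r_vest), r_vest / (r_vis + r_vest)

# Hypothetical single-cue heading thresholds, in degrees:
sigma_comb = combined_threshold(2.0, 3.0)
w_vis, w_vest = cue_weights(2.0, 3.0)
```

The predicted combined threshold always lies below the better single-cue threshold, which is why combined-cue thresholds matching this rule are taken as evidence of near-optimal integration.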
Spatial perception predicts laparoscopic skills on virtual reality laparoscopy simulator.
Hassan, I; Gerdes, B; Koller, M; Dick, B; Hellwig, D; Rothmund, M; Zielke, A
2007-06-01
This study evaluates the influence of visual-spatial perception on the laparoscopic performance of novices with a virtual reality simulator (LapSim®). Twenty-four novices completed standardized tests of visual-spatial perception (Lameris Toegepaste Natuurwetenschappelijk Onderzoek [TNO] Test® and Stumpf-Fay Cube Perspectives Test®), and laparoscopic skills were assessed objectively while performing 1-h practice sessions on the LapSim®, comprising coordination, cutting, and clip application tasks. Outcome variables included time to complete the tasks, economy of motion, and total error scores. The degree of visual-spatial perception correlated significantly with laparoscopic performance scores on the LapSim®. Participants with a high degree of spatial perception (Group A) performed the tasks faster than those with a low degree of spatial perception (Group B) (p = 0.001). Individuals with a high degree of spatial perception also scored better on economy of motion (p = 0.021), tissue damage (p = 0.009), and total error (p = 0.007). Among novices, visual-spatial perception is associated with manual skills performed on a virtual reality simulator. This result may be important for educators developing adequate training programs that can be individually adapted.
Cognitive Rehabilitation in Bilateral Vestibular Patients: A Computational Perspective.
Ellis, Andrew W; Schöne, Corina G; Vibert, Dominique; Caversaccio, Marco D; Mast, Fred W
2018-01-01
There is evidence that vestibular sensory processing affects, and is affected by, higher cognitive processes. This is highly relevant from a clinical perspective, where there is evidence for cognitive impairments in patients with peripheral vestibular deficits. The vestibular system performs complex probabilistic computations, and we claim that understanding these is important for investigating interactions between vestibular processing and cognition. Furthermore, this will aid our understanding of patients' self-motion perception and will provide useful information for clinical interventions. We propose that cognitive training is a promising way to alleviate the debilitating symptoms of patients with complete bilateral vestibular loss (BVP), who often fail to show improvement when relying solely on conventional treatment methods. We present a probabilistic model capable of processing vestibular sensory data during both passive and active self-motion. Crucially, in our model, knowledge from multiple sources, including higher-level cognition, can be used to predict head motion. This is the entry point for cognitive interventions. Despite the loss of sensory input, the processing circuitry in BVP patients is still intact, and they can still perceive self-motion when the movement is self-generated. We provide computer simulations illustrating self-motion perception of BVP patients. Cognitive training may lead to more accurate and confident predictions, which result in decreased weighting of sensory input, and thus improved self-motion perception. Using our model, we show the possible impact of cognitive interventions to help vestibular rehabilitation in patients with BVP.
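The claimed mechanism above, that sharper cognitive predictions reduce the weight given to noisy or absent vestibular input, follows directly from precision-weighted fusion. A minimal sketch, with all variances chosen hypothetically rather than taken from the authors' model:

```python
def fuse(prediction, pred_var, sensory, sens_var):
    """Precision-weighted fusion of a predicted head motion with a sensory
    measurement; k is the weight given to the sensory input."""
    k = pred_var / (pred_var + sens_var)
    estimate = prediction + k * (sensory - prediction)
    return estimate, k

# Healthy observer: reliable vestibular input gets most of the weight.
_, k_healthy = fuse(prediction=10.0, pred_var=4.0, sensory=12.0, sens_var=1.0)
# BVP patient: vestibular noise is huge, so the prediction dominates.
_, k_bvp = fuse(10.0, 4.0, 12.0, 100.0)
# After cognitive training sharpens the prediction (smaller pred_var),
# the sensory weight drops further and the self-motion estimate stabilizes.
_, k_trained = fuse(10.0, 1.0, 12.0, 100.0)
```

In this framing, cognitive training is the clinical entry point precisely because it changes `pred_var`, the only term a patient with no residual vestibular input can still improve.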
Wittfoth, Matthias; Buck, Daniela; Fahle, Manfred; Herrmann, Manfred
2006-08-15
The present study aimed at characterizing the neural correlates of conflict resolution in two variations of the Simon effect. We introduced two different Simon tasks in which subjects had to identify shapes on the basis of form-from-motion perception (FFMo) within a randomly moving dot field, while (1) motion direction (motion-based Simon task) or (2) stimulus location (location-based Simon task) had to be ignored. Behavioral data revealed that both types of Simon task induced highly significant interference effects. Using event-related fMRI, we could demonstrate that both tasks share a common cluster of activated brain regions during conflict resolution (pre-supplementary motor area (pre-SMA), superior parietal lobule (SPL), and cuneus) but also show task-specific activation patterns (left superior temporal cortex in the motion-based, and left fusiform gyrus in the location-based Simon task). Although motion-based and location-based Simon tasks are conceptually very similar (Type 3 stimulus-response ensembles according to the taxonomy of [Kornblum, S., Stevens, G. (2002). Sequential effects of dimensional overlap: findings and issues. In: Prinz, W., Hommel, B. (Eds.), Common mechanisms in perception and action. Oxford University Press, Oxford, pp. 9-54]), conflict resolution in the two tasks results in the activation of different task-specific regions, probably related to the different sources of task-irrelevant information. Furthermore, the present data give evidence that these task-specific regions most likely detect the relationship between task-relevant and task-irrelevant information.
The 50s cliff: a decline in perceptuo-motor learning, not a deficit in visual motion perception.
Ren, Jie; Huang, Shaochen; Zhang, Jiancheng; Zhu, Qin; Wilson, Andrew D; Snapp-Childs, Winona; Bingham, Geoffrey P
2015-01-01
Previously, we measured perceptuo-motor learning rates across the lifespan and found a sudden drop in learning rates between ages 50 and 60, called the "50s cliff." The task was a unimanual visual rhythmic coordination task in which participants used a joystick to oscillate one dot in a display in coordination with another dot oscillated by a computer. Participants learned to produce a coordination with a 90° relative phase relation between the dots. Learning rates for participants over 60 were half those of younger participants. Given existing evidence for visual motion perception deficits in people over 60 and the role of visual motion perception in the coordination task, it remained unclear whether the 50s cliff reflected onset of this deficit or a genuine decline in perceptuo-motor learning. The current work addressed this question. Two groups of 12 participants in each of four age ranges (20s, 50s, 60s, 70s) learned to perform a bimanual coordination of 90° relative phase. One group trained with only haptic information and the other group with both haptic and visual information about relative phase. Both groups were tested in both information conditions at baseline and post-test. If the 50s cliff was caused by an age dependent deficit in visual motion perception, then older participants in the visual group should have exhibited less learning than those in the haptic group, which should not exhibit the 50s cliff, and older participants in both groups should have performed less well when tested with visual information. Neither of these expectations was confirmed by the results, so we concluded that the 50s cliff reflects a genuine decline in perceptuo-motor learning with aging, not the onset of a deficit in visual motion perception.
Visual Cues of Motion That Trigger Animacy Perception at Birth: The Case of Self-Propulsion
ERIC Educational Resources Information Center
Di Giorgio, Elisa; Lunghi, Marco; Simion, Francesca; Vallortigara, Giorgio
2017-01-01
Self-propelled motion is a powerful cue that conveys information that an object is animate. In this case, animate refers to an entity's capacity to initiate motion without an applied external force. Sensitivity to this motion cue is present in infants that are a few months old, but whether this sensitivity is experience-dependent or is already…
1993-04-01
suggesting it occurs in later visual motion processing (long-range or second-order system). … [Figure 2 caption: gamma motion. (a) A light of fixed spatial extent is illuminated then extinguished. (b) The percept is of a light expanding and then …] … while smaller, type-B cells provide input to its parvocellular subdivision. From here the magnocellular pathway progresses up through visual cortex area V…
Visually Guided Control of Movement
NASA Technical Reports Server (NTRS)
Johnson, Walter W. (Editor); Kaiser, Mary K. (Editor)
1991-01-01
The papers given at an intensive, three-week workshop on visually guided control of movement are presented. The participants were researchers from academia, industry, and government, with backgrounds in visual perception, control theory, and rotorcraft operations. The papers included invited lectures and preliminary reports of research initiated during the workshop. Three major topics are addressed: extraction of environmental structure from motion; perception and control of self motion; and spatial orientation. Each topic is considered from both theoretical and applied perspectives. Implications for control and display are suggested.
Numerical simulation of human orientation perception during lunar landing
NASA Astrophysics Data System (ADS)
Clark, Torin K.; Young, Laurence R.; Stimpson, Alexander J.; Duda, Kevin R.; Oman, Charles M.
2011-09-01
In lunar landing, it is necessary to select a suitable landing point and then control a stable descent to the surface. In manned landings, astronauts will play a critical role in monitoring systems and adjusting the descent trajectory through either supervisory control and landing point designations, or by direct manual control. For the astronauts to ensure vehicle performance and safety, they will have to accurately perceive vehicle orientation. A numerical model for human spatial orientation perception was simulated using input motions from lunar landing trajectories to predict the potential for misperceptions. Three representative trajectories were studied: an automated trajectory, a landing point designation trajectory, and a challenging manual control trajectory. These trajectories were studied under three cases with different cues activated in the model to study the importance of vestibular cues, visual cues, and the effect of the descent engine thruster creating dust blowback. The model predicts that spatial misperceptions are likely to occur as a result of the lunar landing motions, particularly with limited or incomplete visual cues. The powered descent acceleration profile creates a somatogravic illusion, causing the astronauts to falsely perceive themselves and the vehicle as upright even when the vehicle has a large pitch or roll angle. When visual pathways were activated within the model, these illusions were mostly suppressed. Dust blowback, obscuring the visual scene out the window, was also found to create disorientation. These orientation illusions are likely to interfere with the astronauts' ability to effectively control the vehicle, potentially degrading performance and safety. Therefore, suitable countermeasures, including disorientation training and advanced displays, are recommended.
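The somatogravic illusion described here has a simple quantitative core: without visual cues, the vestibular system cannot separate sustained linear acceleration from tilt, so the perceived vertical aligns with the gravito-inertial force vector. A minimal sketch of that geometry (our own illustration with hypothetical values, not the paper's full orientation-perception model):

```python
import math

def perceived_pitch_deg(forward_accel, g=1.62):
    """Perceived pitch (degrees) if the gravito-inertial force
    vector is misinterpreted as gravity.

    forward_accel: sustained linear acceleration (m/s^2)
    g: local gravity; 1.62 m/s^2 is lunar gravity
    """
    return math.degrees(math.atan2(forward_accel, g))

# A braking deceleration equal in magnitude to lunar gravity tilts
# the perceived vertical by 45 degrees, even with the vehicle upright.
print(round(perceived_pitch_deg(1.62), 1))
```

Because lunar gravity is only about 1.62 m/s^2, even modest descent-engine accelerations can tilt the gravito-inertial vector, and hence the illusory perceived vertical, by tens of degrees.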
Exploiting core knowledge for visual object recognition.
Schurgin, Mark W; Flombaum, Jonathan I
2017-03-01
Humans recognize thousands of objects, and with relative tolerance to variable retinal inputs. The acquisition of this ability is not fully understood, and it remains an area in which artificial systems have yet to surpass people. We sought to investigate the memory process that supports object recognition. Specifically, we investigated the association of inputs that co-occur over short periods of time. We tested the hypothesis that human perception exploits expectations about object kinematics to limit the scope of association to inputs that are likely to have the same token as their source. In several experiments we exposed participants to images of objects, and we then tested recognition sensitivity. Using motion, we manipulated whether successive encounters with an image took place through kinematics that implied the same or a different token as the source of those encounters. Images were injected with noise, or shown at varying orientations, and we included 2 manipulations of motion kinematics. Across all experiments, memory performance was better for images that had been previously encountered with kinematics that implied a single token. A model-based analysis similarly showed greater memory strength when images were shown via kinematics that implied a single token. These results suggest that constraints from physics are built into the mechanisms that support memory about objects. Such constraints, often characterized as 'Core Knowledge', are known to support perception and cognition broadly, even in young infants. But they have never been considered as a mechanism for memory with respect to recognition. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Incorporating Animation Concepts and Principles in STEM Education
ERIC Educational Resources Information Center
Harrison, Henry L., III; Hummell, Laura J.
2010-01-01
Animation is the rapid display of a sequence of static images that creates the illusion of movement. This optical illusion is often called perception of motion, persistence of vision, illusion of motion, or short-range apparent motion. The phenomenon occurs when the eye is exposed to rapidly changing still images, with each image being changed…
Visual motion detection and habitat preference in Anolis lizards.
Steinberg, David S; Leal, Manuel
2016-11-01
The perception of visual stimuli has been a major area of inquiry in sensory ecology, and much of this work has focused on coloration. However, for visually oriented organisms, the process of visual motion detection is often equally crucial to survival and reproduction. Despite the importance of motion detection to many organisms' daily activities, the degree of interspecific variation in the perception of visual motion remains largely unexplored. Furthermore, the factors driving this potential variation (e.g., ecology or evolutionary history) along with the effects of such variation on behavior are unknown. We used a behavioral assay under laboratory conditions to quantify the visual motion detection systems of three species of Puerto Rican Anolis lizard that prefer distinct structural habitat types. We then compared our results to data previously collected for anoles from Cuba, Puerto Rico, and Central America. Our findings indicate that general visual motion detection parameters are similar across species, regardless of habitat preference or evolutionary history. We argue that these conserved sensory properties may drive the evolution of visual communication behavior in this clade.
Congiu, Sara; Schlottmann, Anne; Ray, Elizabeth
2010-01-01
We investigated perception of social and physical causality and animacy in simple motion events, for high-functioning children with autism (CA = 13, VMA = 9.6). Children matched 14 different animations to pictures showing physical, social or non-causality. In contrast to previous work, children with autism performed at a high level similar to VMA-matched controls, recognizing physical causality in launch and social causality in reaction events. The launch deficit previously found in younger children with autism, possibly related to attentional/verbal difficulties, is apparently overcome with age. Some events involved squares moving non-rigidly, like animals. Children with autism had difficulties recognizing this, extending the biological motion literature. However, animacy prompts amplified their attributions of social causality. Thus children with autism may overcome their animacy perception deficit strategically.
Wada, Atsushi; Sakano, Yuichi; Ando, Hiroshi
2016-01-01
Vision is important for estimating self-motion, which is thought to involve optic-flow processing. Here, we investigated the fMRI response profiles in visual area V6, the precuneus motion area (PcM), and the cingulate sulcus visual area (CSv)—three medial brain regions recently shown to be sensitive to optic-flow. We used wide-view stereoscopic stimulation to induce robust self-motion processing. Stimuli included static, randomly moving, and coherently moving dots (simulating forward self-motion). We varied the stimulus size and the presence of stereoscopic information. A combination of univariate and multi-voxel pattern analyses (MVPA) revealed that fMRI responses in the three regions differed from each other. The univariate analysis identified optic-flow selectivity and an effect of stimulus size in V6, PcM, and CSv, among which only CSv showed a significantly lower response to random motion stimuli compared with static conditions. Furthermore, MVPA revealed an optic-flow specific multi-voxel pattern in the PcM and CSv, where the discrimination of coherent motion from both random motion and static conditions showed above-chance prediction accuracy, but that of random motion from static conditions did not. Additionally, while area V6 successfully classified different stimulus sizes regardless of motion pattern, this classification was only partial in PcM and was absent in CSv. This may reflect the known retinotopic representation in V6 and the absence of such clear visuospatial representation in CSv. We also found significant correlations between the strength of subjective self-motion and univariate activation in all examined regions except for primary visual cortex (V1). This neuro-perceptual correlation was significantly higher for V6, PcM, and CSv when compared with V1, and higher for CSv when compared with the visual motion area hMT+. 
Our convergent results suggest the significant involvement of CSv in self-motion processing, which may give rise to its percept. PMID:26973588
Self-motion perception: assessment by computer-generated animations
NASA Technical Reports Server (NTRS)
Parker, D. E.; Harm, D. L.; Sandoz, G. R.; Skinner, N. C.
1998-01-01
The goal of this research is a more precise description of adaptation to sensory rearrangements, including microgravity, through the development of improved procedures for assessing spatial orientation perception. Thirty-six subjects reported perceived self-motion following exposure to complex inertial-visual motion. Twelve subjects were assigned to each of 3 perceptual reporting procedures: (a) animation movie selection, (b) written report selection, and (c) verbal report generation. The question addressed was: do reports produced by these procedures differ with respect to complexity and reliability? Following repeated (within-day and across-day) exposures to 4 different "motion profiles," subjects either (a) selected movies presented on a laptop computer, (b) selected written descriptions from a booklet, or (c) generated verbal self-motion descriptions that corresponded most closely with their motion experience. One "complexity" and 2 reliability "scores" were calculated. Contrary to expectations, reliability and complexity scores were essentially equivalent for the animation movie selection and written report selection procedures. Verbal report generation subjects exhibited less complexity than did subjects in the other conditions, and their reports were often ambiguous. The results suggest that, when selecting from carefully written descriptions and following appropriate training, people may be better able to describe their self-motion experience with words than is usually believed.
Terminator Disparity Contributes to Stereo Matching for Eye Movements and Perception
Quaia, Christian; Optican, Lance M.; Cumming, Bruce G.
2013-01-01
In the context of motion detection, the endings (or terminators) of 1-D features can be detected as 2-D features, affecting the perceived direction of motion of the 1-D features (the barber-pole illusion) and the direction of tracking eye movements. In the realm of binocular disparity processing, an equivalent role for the disparity of terminators has not been established. Here we explore the stereo analogy of the barber-pole stimulus, applying disparity to a 1-D noise stimulus seen through an elongated, zero-disparity, aperture. We found that, in human subjects, these stimuli induce robust short-latency reflexive vergence eye movements, initially in the direction orthogonal to the 1-D features, but shortly thereafter in the direction predicted by the disparity of the terminators. In addition, these same stimuli induce vivid depth percepts, which can only be attributed to the disparity of line terminators. When the 1-D noise patterns are given opposite contrast in the two eyes (anticorrelation), both components of the vergence response reverse sign. Finally, terminators drive vergence even when the aperture is defined by a texture (as opposed to a contrast) boundary. These findings prove that terminators contribute to stereo matching, and constrain the type of neuronal mechanisms that might be responsible for the detection of terminator disparity. PMID:24285893
NASA Astrophysics Data System (ADS)
Mirkia, Hasti; Sangari, Arash; Nelson, Mark; Assadi, Amir H.
2013-03-01
Architecture brings together diverse elements to enhance the observer's sense of esthetics and the convenience of functionality. Architects often conceptualize a synthesis of design elements to invoke the observer's sense of harmony and positive affect. How does an observer's brain respond to harmony of design in interior spaces? One implicit consideration by architects is the role of guided visual attention by observers while navigating indoors. Prior visual experience of natural scenes provides the perceptual basis for the Gestalt of design elements. In contrast, the Gestalt of organization in design varies according to the architect's decisions. We outline a quantitative theory to measure success in utilizing the observer's psychological factors to achieve the desired positive affect, together with a unified framework for the perception of geometry and motion in interior spaces, which integrates affective and cognitive aspects of human vision in the context of anthropocentric interior design. The affective criteria are derived from contemporary theories of interior design. Our contribution is to demonstrate that the neural computations underlying an observer's eye movements could be used to elucidate harmony in the perception of form, space, and motion, and thus provide a measure of the goodness of interior design. Through mathematical modeling, we argue for the plausibility of the relevant hypotheses.
Path perception during rotation: influence of instructions, depth range, and dot density
NASA Technical Reports Server (NTRS)
Li, Li; Warren, William H Jr
2004-01-01
How do observers perceive their direction of self-motion when traveling on a straight path while their eyes are rotating? Our previous findings suggest that information from retinal flow and extra-retinal information about eye movements are each sufficient to solve this problem for both perception and active control of self-motion [Vision Res. 40 (2000) 3873; Psych. Sci. 13 (2002) 485]. In this paper, using displays depicting translation with simulated eye rotation, we investigated how task variables such as instructions, depth range, and dot density influenced the visual system's reliance on retinal vs. extra-retinal information for path perception during rotation. We found that path errors were small when observers expected to travel on a straight path or with neutral instructions, but errors increased markedly when observers expected to travel on a curved path. Increasing depth range or dot density did not improve path judgments. We conclude that the expectation of the shape of an upcoming path can influence the interpretation of the ambiguous retinal flow. A large depth range and dense motion parallax are not essential for accurate path perception during rotation, but reference objects and a large field of view appear to improve path judgments.
Lee, Hannah; Kim, Jejoong
2017-06-01
It has been reported that visual perception can be influenced not only by the physical features of a stimulus but also by the emotional valence of the stimulus, even without explicit emotion recognition. Some previous studies reported an anger superiority effect while others found a happiness superiority effect during visual perception. It thus remains unclear as to which emotion is more influential. In the present study, we conducted two experiments using biological motion (BM) stimuli to examine whether emotional valence of the stimuli would affect BM perception; and if so, whether a specific type of emotion is associated with a superiority effect. Point-light walkers with three emotion types (anger, happiness, and neutral) were used, and the threshold to detect BM within noise was measured in Experiment 1. Participants showed higher performance in detecting happy walkers compared with the angry and neutral walkers. Follow-up motion velocity analysis revealed that physical difference among the stimuli was not the main factor causing the effect. The results of the emotion recognition task in Experiment 2 also showed a happiness superiority effect, as in Experiment 1. These results show that emotional valence (happiness) of the stimuli can facilitate the processing of BM.
Wang, Qingcui; Guo, Lu; Bao, Ming; Chen, Lihan
2015-01-01
Auditory and visual events often happen concurrently, and how they group together can have a strong effect on what is perceived. We investigated whether and how intra- or cross-modal temporal grouping influenced the perceptual decision of otherwise ambiguous visual apparent motion. To achieve this, we juxtaposed the auditory gap transfer illusion with the visual Ternus display. The Ternus display involves a multi-element stimulus that can induce either of two different percepts of apparent motion: 'element motion' (EM) or 'group motion' (GM). In "EM," the endmost disk is seen as moving back and forth while the middle disk at the central position remains stationary; in "GM," both disks appear to move laterally as a whole. The gap transfer illusion refers to the illusory subjective transfer of a short gap (around 100 ms) from a long glide to a short continuous glide when the two glides intersect at their temporal midpoint. In our experiments, observers were required to make a perceptual discrimination of Ternus motion in the presence of concurrent auditory glides (with or without a gap inside). Results showed that a gap within a short glide had a marked effect on separating visual events, and led to a dominant perception of GM as well. The auditory configuration with the gap transfer illusion triggered the same auditory capture effect. Further investigation showed that a visual interval coinciding with the gap interval (50-230 ms) in the long glide was perceived to be shorter than the same physical interval within both the short-glide and the 'gap-transfer' auditory configurations. The results indicate that auditory temporal perceptual grouping takes priority over cross-modal interaction in determining the final readout of the visual percept, and that selective attention to auditory events also plays a role.
Lobjois, Régis; Dagonneau, Virginie; Isableu, Brice
2016-11-01
Compared with driving or flight simulation, little is known about self-motion perception in riding simulation. The goal of this study was to examine whether or not continuous roll motion supports the sensation of leaning into bends in dynamic motorcycle simulation. To this end, riders were able to freely tune the visual scene and/or motorcycle simulator roll angle to find a pattern that matched their prior knowledge. Our results revealed idiosyncrasy in the combination of visual and proprioceptive information. Some subjects relied more on the visual dimension, but reported increased sickness symptoms with the visual roll angle. Others relied more on proprioceptive information, tuning the direction of the visual scenery to match three possible patterns. Our findings also showed that these two subgroups tuned the motorcycle simulator roll angle in a similar way. This suggests that sustained inertially specified roll motion has contributed to the sensation of leaning in spite of the occurrence of unexpected gravito-inertial stimulation during the tilt. Several hypotheses are discussed. Practitioner Summary: Self-motion perception in motorcycle simulation is a relatively new research area. We examined how participants combined visual and proprioceptive information. Findings revealed individual differences in the visual dimension. However, participants tuned the simulator roll angle similarly, supporting the hypothesis that sustained inertially specified roll motion contributes to a leaning sensation.
Motion parallax in immersive cylindrical display systems
NASA Astrophysics Data System (ADS)
Filliard, N.; Reymond, G.; Kemeny, A.; Berthoz, A.
2012-03-01
Motion parallax is a crucial visual cue, produced by translations of the observer, for the perception of depth and self-motion. Therefore, tracking the observer's viewpoint has become indispensable in immersive virtual reality (VR) systems (cylindrical screens, CAVE, head-mounted displays) used, e.g., in the automotive industry (style reviews, architecture design, ergonomics studies) or in scientific studies of visual perception. The perception of a stable and rigid world requires that this visual cue be coherent with other extra-retinal (e.g. vestibular, kinesthetic) cues signaling ego-motion. Although world stability is never questioned in the real world, rendering a head-coupled viewpoint in VR can lead to the illusory perception of unstable environments, unless a non-unity scale factor is applied to recorded head movements. Besides, cylindrical screens are usually used with static observers due to image distortions when rendering images for viewpoints different from a sweet spot. We developed a technique to compensate for these non-linear visual distortions in real time, in an industrial VR setup based on a cylindrical screen projection system. Additionally, to evaluate the amount of discrepancy between visual and extra-retinal cues tolerated without perceptual distortions, a "motion parallax gain" between the velocity of the observer's head and that of the virtual camera was introduced in this system. The influence of this artificial gain was measured on the gait stability of free-standing participants. Results indicate that gains below unity significantly alter postural control. Conversely, the influence of higher gains remains limited, suggesting a certain tolerance of observers to these conditions. Parallax gain amplification is therefore proposed as a possible solution to provide a wider exploration of space to users of immersive virtual reality systems.
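The "motion parallax gain" manipulation amounts to scaling the tracked head displacement before it drives the virtual camera. A minimal sketch of that idea (an illustration only; the function and variable names are ours, not the authors' implementation):

```python
def camera_position(head_pos, reference_pos, gain):
    """Scale the head's displacement from a reference point by a
    parallax gain before applying it to the virtual camera.

    gain == 1.0 reproduces natural motion parallax;
    gain < 1.0 attenuates it; gain > 1.0 amplifies it.
    Positions are (x, y, z) lists in meters.
    """
    return [r + gain * (h - r) for h, r in zip(head_pos, reference_pos)]

# With a gain of 2.0, a 0.1 m head translation moves the camera 0.2 m.
print(camera_position([0.1, 0.0, 0.0], [0.0, 0.0, 0.0], 2.0))
```

The study's finding, in these terms, is that gains below 1.0 disturbed postural control, while gains above 1.0 were tolerated, which is why amplification is suggested as a way to enlarge the explorable space.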
Lencer, Rebekka; Keedy, Sarah K.; Reilly, James L.; McDonough, Bruce E.; Harris, Margret S. H.; Sprenger, Andreas; Sweeney, John A.
2011-01-01
Visual motion processing and its use for pursuit eye movement control represent a valuable model for studying the use of sensory input for action planning. In psychotic disorders, alterations of visual motion perception have been suggested to cause pursuit eye tracking deficits. We evaluated this system in functional neuroimaging studies of untreated first-episode schizophrenia patients (N=24), psychotic bipolar disorder patients (N=13), and healthy controls (N=20). During a passive visual motion processing task, both patient groups showed reduced activation in the posterior parietal projection fields of motion-sensitive extrastriate area V5, but not in V5 itself. This suggests reduced bottom-up transfer of visual motion information from extrastriate cortex to perceptual systems in parietal association cortex. During active pursuit, activation was enhanced in anterior intraparietal sulcus and insula in both patient groups, and in dorsolateral prefrontal cortex and dorsomedial thalamus in schizophrenia patients. This may result from increased demands on sensorimotor systems for pursuit control due to the limited availability of perceptual motion information about target speed and tracking error. Deficits in the transfer of visual motion information to higher-level association cortex may contribute to well-established pursuit tracking abnormalities, and perhaps to a wider array of alterations in perception and action planning in psychotic disorders. PMID:21873035
A neural basis for the spatial suppression of visual motion perception
Liu, Liu D; Haefner, Ralf M; Pack, Christopher C
2016-01-01
In theory, sensory perception should be more accurate when more neurons contribute to the representation of a stimulus. However, psychophysical experiments that use larger stimuli to activate larger pools of neurons sometimes report impoverished perceptual performance. To determine the neural mechanisms underlying these paradoxical findings, we trained monkeys to discriminate the direction of motion of visual stimuli that varied in size across trials, while simultaneously recording from populations of motion-sensitive neurons in cortical area MT. We used the resulting data to constrain a computational model that explained the behavioral data as an interaction of three main mechanisms: noise correlations, which prevented stimulus information from growing with stimulus size; neural surround suppression, which decreased sensitivity for large stimuli; and a read-out strategy that emphasized neurons with receptive fields near the stimulus center. These results suggest that paradoxical percepts reflect tradeoffs between sensitivity and noise in neuronal populations. DOI: http://dx.doi.org/10.7554/eLife.16167.001 PMID:27228283
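The center-weighted read-out strategy this model describes can be illustrated with a toy population-vector decoder that down-weights neurons whose receptive fields lie far from the stimulus center (a hypothetical sketch with made-up parameters, not the paper's fitted model):

```python
import math

def weighted_readout(preferred_dirs, responses, rf_distances, sigma=1.0):
    """Decode motion direction from a population of MT-like neurons.

    Each neuron's response is weighted by a Gaussian of its receptive
    field's distance from the stimulus center, so peripheral neurons
    (including surround-suppressed ones) contribute less.
    preferred_dirs: preferred directions in radians
    responses: firing rates (arbitrary units)
    rf_distances: receptive-field distance from stimulus center
    """
    x = y = 0.0
    for d, r, dist in zip(preferred_dirs, responses, rf_distances):
        w = r * math.exp(-dist ** 2 / (2 * sigma ** 2))
        x += w * math.cos(d)
        y += w * math.sin(d)
    return math.atan2(y, x)  # decoded direction in radians

# A centrally located rightward-preferring neuron dominates a distant
# leftward-preferring one with the same firing rate.
print(weighted_readout([0.0, math.pi], [1.0, 1.0], [0.0, 3.0]))
```

This is only the read-out component; the paper's full account also includes noise correlations that cap the information gained from larger stimuli and surround suppression that reduces responses to them.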
Algorithms and architectures for robot vision
NASA Technical Reports Server (NTRS)
Schenker, Paul S.
1990-01-01
The scope of the current work is to develop practical sensing implementations for robots operating in complex, partially unstructured environments. A focus of this work is to develop object models and estimation techniques which are specific to the requirements of robot locomotion, approach and avoidance, and grasp and manipulation. Such problems have to date received limited attention in either computer or human vision - in essence, asking not only how perception is in general modeled, but also what the functional purpose of its underlying representations is. As in the past, researchers are drawing on ideas from both the psychological and machine vision literature. Of particular interest is the development of 3-D shape and motion estimates for complex objects when given only partial and uncertain information and when such information is incrementally accrued over time. Current studies consider the use of surface motion, contour, and texture information, with the longer-range goal of developing a fused sensing strategy based on these sources and others.