Sample records for visual field motion

  1. Influence of Visual Motion, Suggestion, and Illusory Motion on Self-Motion Perception in the Horizontal Plane.

    PubMed

    Rosenblatt, Steven David; Crane, Benjamin Thomas

    2015-01-01

    A moving visual field can induce the feeling of self-motion or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear if such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing a set of visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory motion controls were altered versions of the experimental image, which removed the illusory motion effect. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8s viewing interval with the inertial stimulus occurring over the final 1s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, the effect of each visual stimulus on the self-motion stimulus (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p<0.001), and arrows (p = 0.02). 
For the visual motion stimuli, inertial motion perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion perception driven by a minority of subjects. There was no significant effect of illusory motion on self-motion perception for either translation or rotation (p>0.1 for both). Thus, although a true moving visual field can induce self-motion, results of this study show that illusory motion does not.
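
    The quantity this record measures, the inertial velocity at which subjects report either direction equally often (a point of subjective equality), is conventionally estimated by fitting a psychometric function to the direction reports. A minimal sketch of that fit, using a cumulative-Gaussian model and entirely hypothetical response proportions (the velocities and percentages below are illustrative, not from the study):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    # Hypothetical inertial stimulus velocities (cm/s); negative = leftward.
    velocities = np.array([-4.0, -2.0, -1.0, 0.0, 1.0, 2.0, 4.0])

    # Hypothetical proportion of "rightward" reports at each velocity; the
    # slight bias at 0 cm/s mimics a shift induced by a visual stimulus.
    p_right = np.array([0.02, 0.10, 0.25, 0.55, 0.80, 0.95, 0.99])

    def psychometric(v, pse, sigma):
        """Cumulative Gaussian: probability of reporting rightward motion."""
        return norm.cdf(v, loc=pse, scale=sigma)

    # The fitted 'pse' is the velocity of subjective equality; its shift
    # across visual conditions is the effect the study tests for.
    (pse, sigma), _ = curve_fit(psychometric, velocities, p_right, p0=(0.0, 1.0))
    print(f"PSE: {pse:.2f} cm/s, slope sigma: {sigma:.2f} cm/s")
    ```

    Comparing the fitted PSE between a visual condition and its control is then a direct test for the perceptual shift the abstract reports.
    
    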

  2. Visual motion transforms visual space representations similarly throughout the human visual hierarchy.

    PubMed

    Harvey, Ben M; Dumoulin, Serge O

    2016-02-15

    Several studies demonstrate that visual stimulus motion affects neural receptive fields and fMRI response amplitudes. Here we unite results of these two approaches and extend them by examining the effects of visual motion on neural position preferences throughout the hierarchy of human visual field maps. We measured population receptive field (pRF) properties using high-field fMRI (7T), characterizing position preferences simultaneously over large regions of the visual cortex. We measured pRF properties using sine wave gratings in stationary apertures, moving at various speeds in either the direction of pRF measurement or the orthogonal direction. We find direction- and speed-dependent changes in pRF preferred position and size in all visual field maps examined, including V1, V3A, and the MT+ map TO1. These effects on pRF properties increase up the hierarchy of visual field maps. However, both within and between visual field maps the extent of pRF changes was approximately proportional to pRF size. This suggests that visual motion transforms the representation of visual space similarly throughout the visual hierarchy. Visual motion can also produce an illusory displacement of perceived stimulus position. We demonstrate perceptual displacements using the same stimulus configuration. In contrast to effects on pRF properties, perceptual displacements show only weak effects of motion speed, with far larger speed-independent effects. We describe a model where low-level mechanisms could underlie the observed effects on neural position preferences. We conclude that visual motion induces similar transformations of visuo-spatial representations throughout the visual hierarchy, which may arise through low-level mechanisms. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Sunglasses with thick temples and frame constrict temporal visual field.

    PubMed

    Denion, Eric; Dugué, Audrey Emmanuelle; Augy, Sylvain; Coffin-Pichonnet, Sophie; Mouriaux, Frédéric

    2013-12-01

    Our aim was to compare the impact of two types of sunglasses on visual field and glare: one ("thick sunglasses") with a thick plastic frame and wide temples and one ("thin sunglasses") with a thin metal frame and thin temples. Using the Goldmann perimeter, visual field surface areas (cm²) were calculated as projections on a 30-cm virtual cupola. A V4 test object was used, from seen to unseen, in 15 healthy volunteers in the primary position of gaze ("base visual field"), then allowing eye motion ("eye motion visual field") without glasses, then with "thin sunglasses," followed by "thick sunglasses." Visual field surface area differences greater than the 14% reproducibility error of the method and having a p < 0.05 were considered significant. A glare test was done using a surgical lighting system pointed at the eye(s) at different incidence angles. No significant "base visual field" or "eye motion visual field" surface area variations were noted when comparing tests done without glasses and with the "thin sunglasses." In contrast, a 22% "eye motion visual field" surface area decrease (p < 0.001) was noted when comparing tests done without glasses and with "thick sunglasses." This decrease was most severe in the temporal quadrant (-33%; p < 0.001). All subjects reported less lateral glare with the "thick sunglasses" than with the "thin sunglasses" (p < 0.001). The better protection from lateral glare offered by "thick sunglasses" is offset by the much poorer ability to use lateral space exploration; this results in a loss of most, if not all, of the additional visual field gained through eye motion.

  4. Illusory motion reversal is caused by rivalry, not by perceptual snapshots of the visual field.

    PubMed

    Kline, Keith; Holcombe, Alex O; Eagleman, David M

    2004-10-01

    In stroboscopic conditions--such as motion pictures--rotating objects may appear to rotate in the reverse direction due to under-sampling (aliasing). A seemingly similar phenomenon occurs in constant sunlight, which has been taken as evidence that the visual system processes discrete "snapshots" of the outside world. But if snapshots are indeed taken of the visual field, then when a rotating drum appears to transiently reverse direction, its mirror image should always appear to reverse direction simultaneously. Contrary to this hypothesis, we found that when observers watched a rotating drum and its mirror image, almost all illusory motion reversals occurred for only one image at a time. This result indicates that the motion reversal illusion cannot be explained by snapshots of the visual field. The same result is found when the two images are presented within one visual hemifield, further ruling out the possibility that discrete sampling of the visual field occurs separately in each hemisphere. The frequency distribution of illusory reversal durations approximates a gamma distribution, suggesting perceptual rivalry as a better explanation for illusory motion reversal. After adaptation of motion detectors coding for the correct direction, the activity of motion-sensitive neurons coding for motion in the reverse direction may intermittently become dominant and drive the perception of motion.
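
    The gamma-shaped distribution of reversal durations mentioned above is the usual statistical signature tested for when arguing for perceptual rivalry. A minimal sketch of such a fit on simulated durations (all values hypothetical, standing in for the observers' reported reversal episodes):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Hypothetical illusory-reversal durations (s), drawn gamma-distributed
    # to mimic rivalry-like alternation dynamics.
    durations = rng.gamma(shape=3.0, scale=0.8, size=500)

    # Fit a gamma distribution with the location fixed at zero; a fitted
    # shape parameter clearly above 1 distinguishes the gamma-like form
    # from the exponential (shape = 1) expected of a memoryless process.
    shape, loc, scale = stats.gamma.fit(durations, floc=0.0)
    print(f"fitted shape k = {shape:.2f}, mean duration = {shape * scale:.2f} s")
    ```

    In rivalry studies the fitted shape and scale are typically compared across observers and conditions rather than interpreted in isolation.
    
    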

  5. Localized direction selective responses in the dendrites of visual interneurons of the fly

    PubMed Central

    2010-01-01

    Background The various tasks of visual systems, including course control, collision avoidance and the detection of small objects, require at the neuronal level the dendritic integration and subsequent processing of many spatially distributed visual motion inputs. While much is known about the pooled output in these systems, as in the medial superior temporal cortex of monkeys or in the lobula plate of the insect visual system, the motion tuning of the elements that provide the input has so far received little attention. In order to visualize the motion tuning of these inputs we examined the dendritic activation patterns of neurons that are selective for the characteristic patterns of wide-field motion, the lobula-plate tangential cells (LPTCs) of the blowfly. These neurons are known to sample direction-selective motion information from large parts of the visual field and combine these signals into axonal and dendro-dendritic outputs. Results Fluorescence imaging of intracellular calcium concentration allowed us to take a direct look at the local dendritic activity and the resulting local preferred directions in LPTC dendrites during activation by wide-field motion in different directions. These 'calcium response fields' resembled a retinotopic dendritic map of local preferred directions in the receptive field, the layout of which is a distinguishing feature of different LPTCs. Conclusions Our study reveals how neurons acquire selectivity for distinct visual motion patterns by dendritic integration of the local inputs with different preferred directions. With their spatial layout of directional responses, the dendrites of the LPTCs we investigated thus served as matched filters for wide-field motion patterns. PMID:20384983

  6. Non-Verbal IQ Is Correlated with Visual Field Advantages for Short Duration Coherent Motion Detection in Deaf Signers with Varied ASL Exposure and Etiologies of Deafness

    ERIC Educational Resources Information Center

    Samar, Vincent J.; Parasnis, Ila

    2007-01-01

    Studies have reported a right visual field (RVF) advantage for coherent motion detection by deaf and hearing signers but not non-signers. Yet two studies [Bosworth R. G., & Dobkins, K. R. (2002). Visual field asymmetries for motion processing in deaf and hearing signers. "Brain and Cognition," 49, 170-181; Samar, V. J., & Parasnis, I. (2005).…

  7. Anisotropies in the perceived spatial displacement of motion-defined contours: opposite biases in the upper-left and lower-right visual quadrants.

    PubMed

    Fan, Zhao; Harris, John

    2010-10-12

    In a recent study (Fan, Z., & Harris, J. (2008). Perceived spatial displacement of motion-defined contours in peripheral vision. Vision Research, 48(28), 2793-2804), we demonstrated that virtual contours defined by two regions of dots moving in opposite directions were displaced perceptually in the direction of motion of the dots in the more eccentric region when the contours were viewed in the right visual field. Here, we show that the magnitude and/or direction of these displacements varies in different quadrants of the visual field. When contours were presented in the lower visual field, the direction of perceived contour displacement was consistent with that when both contours were presented in the right visual field. However, this illusory motion-induced spatial displacement disappeared when both contours were presented in the upper visual field. Also, perceived contour displacement in the direction of the more eccentric dots was larger in the right than in the left visual field, perhaps because of a hemispheric asymmetry in attentional allocation. Quadrant-based analyses suggest that the pattern of results arises from opposite directions of perceived contour displacement in the upper-left and lower-right visual quadrants, which depend on the relative strengths of two effects: a greater sensitivity to centripetal motion, and an asymmetry in the allocation of spatial attention. Copyright © 2010 Elsevier Ltd. All rights reserved.

  8. Shift in speed selectivity of visual cortical neurons: A neural basis of perceived motion contrast

    PubMed Central

    Li, Chao-Yi; Lei, Jing-Jiang; Yao, Hai-Shan

    1999-01-01

    The perceived speed of motion in one part of the visual field is influenced by the speed of motion in its surrounding fields. Little is known about the cellular mechanisms causing this phenomenon. Recordings from mammalian visual cortex revealed that speed preference of the cortical cells could be changed by displaying a contrast speed in the field surrounding the cell’s classical receptive field. The neuron’s selectivity shifted to prefer faster speed if the contextual surround motion was set at a relatively lower speed, and vice versa. These specific center–surround interactions may underlie the perceptual enhancement of speed contrast between adjacent fields. PMID:10097161

  9. Change of temporal-order judgment of sounds during long-lasting exposure to large-field visual motion.

    PubMed

    Teramoto, Wataru; Watanabe, Hiroshi; Umemura, Hiroyuki

    2008-01-01

    The perceived temporal order of external successive events does not always follow their physical temporal order. We examined the contribution of self-motion mechanisms in the perception of temporal order in the auditory modality. We measured perceptual biases in the judgment of the temporal order of two short sounds presented successively, while participants experienced visually induced self-motion (yaw-axis circular vection) elicited by viewing long-lasting large-field visual motion. In experiment 1, a pair of white-noise patterns was presented to participants at various stimulus-onset asynchronies through headphones, while they experienced visually induced self-motion. Perceived temporal order of auditory events was modulated by the direction of the visual motion (or self-motion). Specifically, the sound presented to the ear in the direction opposite to the visual motion (ie heading direction) was perceived prior to the sound presented to the ear in the same direction. Experiments 2A and 2B were designed to reduce the contributions of decisional and/or response processes. In experiment 2A, the directional cueing of the background (left or right) and the response dimension (high pitch or low pitch) were not spatially associated. In experiment 2B, participants were additionally asked to report which of the two sounds was perceived 'second'. Almost the same results as in experiment 1 were observed, suggesting that the change in temporal order of auditory events during large-field visual motion reflects a change in perceptual processing. Experiment 3 showed that the biases in the temporal-order judgments of auditory events were caused by concurrent actual self-motion with a rotatory chair. In experiment 4, using a small display, we showed that 'pure' long exposure to visual motion without the sensation of self-motion was not responsible for this phenomenon. 
These results are consistent with previous studies reporting a change in the perceived temporal order of visual or tactile events depending on the direction of self-motion. Hence, large-field induced (ie optic flow) self-motion can affect the temporal order of successive external events across various modalities.

  10. Effects of attention and laterality on motion and orientation discrimination in deaf signers.

    PubMed

    Bosworth, Rain G; Petrich, Jennifer A F; Dobkins, Karen R

    2013-06-01

    Previous studies have asked whether visual sensitivity and attentional processing in deaf signers are enhanced or altered as a result of their different sensory experiences during development, i.e., auditory deprivation and exposure to a visual language. In particular, deaf and hearing signers have been shown to exhibit a right visual field/left hemisphere advantage for motion processing, while hearing nonsigners do not. To examine whether this finding extends to other aspects of visual processing, we compared deaf signers and hearing nonsigners on motion, form, and brightness discrimination tasks. Secondly, to examine whether hemispheric lateralities are affected by attention, we employed a dual-task paradigm to measure form and motion thresholds under "full" vs. "poor" attention conditions. Deaf signers, but not hearing nonsigners, exhibited a right visual field advantage for motion processing. This effect was also seen for form processing and not for the brightness task. Moreover, no group differences were observed in attentional effects, and the motion and form visual field asymmetries were not modulated by attention, suggesting they occur at early levels of sensory processing. In sum, the results show that processing of motion and form, believed to be mediated by dorsal and ventral visual pathways, respectively, are left-hemisphere dominant in deaf signers. Published by Elsevier Inc.

  11. Characterizing head motion in three planes during combined visual and base of support disturbances in healthy and visually sensitive subjects.

    PubMed

    Keshner, E A; Dhaher, Y

    2008-07-01

    Multiplanar environmental motion could generate head instability, particularly if the visual surround moves in planes orthogonal to a physical disturbance. We combined sagittal plane surface translations with visual field disturbances in 12 healthy (29-31 years) and 3 visually sensitive (27-57 years) adults. Center of pressure (COP), peak head angles, and RMS values of head motion were calculated and a three-dimensional model of joint motion was developed to examine gross head motion in three planes. We found that subjects standing quietly in front of a visual scene translating in the sagittal plane produced significantly greater (p<0.003) head motion in yaw than when on a translating platform. However, when the platform was translated in the dark or with a visual scene rotating in roll, head motion orthogonal to the plane of platform motion significantly increased (p<0.02). Visually sensitive subjects having no history of vestibular disorder produced large, delayed compensatory head motion. Orthogonal head motions were significantly greater in visually sensitive than in healthy subjects in the dark (p<0.05) and with a stationary scene (p<0.01). We concluded that motion of the visual field could modify compensatory response kinematics of a freely moving head in planes orthogonal to the direction of a physical perturbation. These results suggest that the mechanisms controlling head orientation in space are distinct from those that control trunk orientation in space. These behaviors would have been missed if only COP data were considered. Data suggest that rehabilitation training can be enhanced by combining visual and mechanical perturbation paradigms.

  12. Application of virtual reality graphics in assessment of concussion.

    PubMed

    Slobounov, Semyon; Slobounov, Elena; Newell, Karl

    2006-04-01

    Abnormal balance in individuals suffering from traumatic brain injury (TBI) has been documented in numerous recent studies. However, specific mechanisms causing balance deficits have not been systematically examined. This paper demonstrated the destabilizing effect of visual field motion, induced by virtual reality graphics, in concussed individuals but not in normal controls. Fifty-five student-athletes at risk for concussion participated in this study prior to injury, and 10 of these subjects who suffered MTBI were tested again on day 3, day 10, and day 30 after the incident. Postural responses to visual field motion were recorded using a virtual reality (VR) environment in conjunction with balance (AMTI force plate) and motion tracking (Flock of Birds) technologies. Two experimental conditions were introduced where subjects passively viewed VR scenes or actively manipulated the visual field motion. Long-lasting destabilizing effects of visual field motion were revealed, although subjects were asymptomatic when standard balance tests were introduced. The findings demonstrate that advanced VR technology may detect residual symptoms of concussion at least 30 days post-injury.

  13. Visual motion combined with base of support width reveals variable field dependency in healthy young adults.

    PubMed

    Streepey, Jefferson W; Kenyon, Robert V; Keshner, Emily A

    2007-01-01

    We previously reported responses to induced postural instability in young healthy individuals viewing visual motion with a narrow (25 degrees in both directions) and wide (90 degrees and 55 degrees in the horizontal and vertical directions) field of view (FOV) as they stood on different sized blocks. Visual motion was achieved using an immersive virtual environment that moved realistically with head motion (natural motion) and translated sinusoidally at 0.1 Hz in the fore-aft direction (augmented motion). We observed that a subset of the subjects (steppers) could not maintain continuous stance on the smallest block when the virtual environment was in motion. We completed a posteriori analyses on the postural responses of the steppers and non-steppers that may inform us about the mechanisms underlying these differences in stability. We found that when viewing augmented motion with a wide FOV, there was a greater effect on the head and whole body center of mass and ankle angle root mean square (RMS) values of the steppers than of the non-steppers. FFT analyses revealed greater power at the frequency of the visual stimulus in the steppers compared to the non-steppers. Whole body COM time lags relative to the augmented visual scene revealed that the time-delay between the scene and the COM was significantly increased in the steppers. The increased responsiveness to visual information suggests a greater visual field-dependency of the steppers and suggests that the thresholds for shifting from a reliance on visual information to somatosensory information can differ even within a healthy population.
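
    The FFT analysis this record describes, reading out spectral power at the 0.1 Hz frequency of the oscillating visual scene, can be sketched as follows. The sway trace, sampling rate, and amplitudes below are synthetic stand-ins for the recorded center-of-mass data, not values from the study:

    ```python
    import numpy as np

    fs = 50.0                      # assumed sampling rate (Hz)
    t = np.arange(0, 120, 1 / fs)  # one 2-minute trial
    stim_freq = 0.1                # fore-aft scene oscillation (Hz)

    # Synthetic sway trace: a response component at the stimulus frequency
    # plus broadband noise, mimicking a visually driven "stepper".
    rng = np.random.default_rng(1)
    sway = 0.5 * np.sin(2 * np.pi * stim_freq * t) \
        + 0.1 * rng.standard_normal(t.size)

    # Power spectrum of the trace; read out the bin nearest 0.1 Hz.
    spectrum = np.abs(np.fft.rfft(sway)) ** 2 / t.size
    freqs = np.fft.rfftfreq(t.size, d=1 / fs)
    power_at_stim = spectrum[np.argmin(np.abs(freqs - stim_freq))]
    print(f"power at {stim_freq} Hz: {power_at_stim:.1f}")
    ```

    Comparing this stimulus-frequency power between subject groups is what distinguishes the more visually driven steppers from the non-steppers in the abstract.
    
    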

  14. Visual and Non-Visual Contributions to the Perception of Object Motion during Self-Motion

    PubMed Central

    Fajen, Brett R.; Matthis, Jonathan S.

    2013-01-01

    Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component – that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (Experiment 1) and direction (Experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects. PMID:23408983

  16. Neural Circuit to Integrate Opposing Motions in the Visual Field.

    PubMed

    Mauss, Alex S; Pankova, Katarina; Arenz, Alexander; Nern, Aljoscha; Rubin, Gerald M; Borst, Alexander

    2015-07-16

    When navigating in their environment, animals use visual motion cues as feedback signals that are elicited by their own motion. Such signals are provided by wide-field neurons sampling motion directions at multiple image points as the animal maneuvers. Each one of these neurons responds selectively to a specific optic flow-field representing the spatial distribution of motion vectors on the retina. Here, we describe the discovery of a group of local, inhibitory interneurons in the fruit fly Drosophila key for filtering these cues. Using anatomy, molecular characterization, activity manipulation, and physiological recordings, we demonstrate that these interneurons convey direction-selective inhibition to wide-field neurons with opposite preferred direction and provide evidence for how their connectivity enables the computation required for integrating opposing motions. Our results indicate that, rather than sharpening directional selectivity per se, these circuit elements reduce noise by eliminating non-specific responses to complex visual information. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Haltere mechanosensory influence on tethered flight behavior in Drosophila.

    PubMed

    Mureli, Shwetha; Fox, Jessica L

    2015-08-01

    In flies, mechanosensory information from modified hindwings known as halteres is combined with visual information for wing-steering behavior. Haltere input is necessary for free flight, making it difficult to study the effects of haltere ablation under natural flight conditions. We thus used tethered Drosophila melanogaster flies to examine the relationship between halteres and the visual system, using wide-field motion or moving figures as visual stimuli. Haltere input was altered by surgically decreasing its mass, or by removing it entirely. Haltere removal does not affect the flies' ability to flap or steer their wings, but it does increase the temporal frequency at which they modify their wingbeat amplitude. Reducing the haltere mass decreases the optomotor reflex response to wide-field motion, and removing the haltere entirely does not further decrease the response. Decreasing the mass does not attenuate the response to figure motion, but removing the entire haltere does attenuate the response. When flies are allowed to control a visual stimulus in closed-loop conditions, haltereless flies fixate figures with the same acuity as intact flies, but cannot stabilize a wide-field stimulus as accurately as intact flies can. These manipulations suggest that the haltere mass is influential in wide-field stabilization, but less so in figure tracking. In both figure and wide-field experiments, we observe responses to visual motion with and without halteres, indicating that during tethered flight, intact halteres are not strictly necessary for visually guided wing-steering responses. However, the haltere feedback loop may operate in a context-dependent way to modulate responses to visual motion. © 2015. Published by The Company of Biologists Ltd.

  18. Correction of respiratory motion for IMRT using aperture adaptive technique and visual guidance: A feasibility study

    NASA Astrophysics Data System (ADS)

    Chen, Ho-Hsing; Wu, Jay; Chuang, Keh-Shih; Kuo, Hsiang-Chi

    2007-07-01

    Intensity-modulated radiation therapy (IMRT) utilizes nonuniform beam profiles to deliver precise radiation doses to a tumor while minimizing radiation exposure to surrounding normal tissues. However, the problem of intrafraction organ motion distorts the dose distribution and leads to significant dosimetric errors. In this research, we applied an aperture adaptive technique with a visual guiding system to tackle the problem of respiratory motion. A homemade computer program showing a cyclic moving pattern was projected onto the ceiling to visually help patients adjust their respiratory patterns. Once the respiratory motion becomes regular, the leaf sequence can be synchronized with the target motion. An oscillator was employed to simulate the patient's breathing pattern. Two simple fields and one IMRT field were measured to verify the accuracy. Preliminary results showed that after appropriate training, the amplitude and duration of the volunteers' breathing can be well controlled by the visual guiding system. The sharp dose gradient at the edge of the radiation fields was successfully restored. The maximum dosimetric error in the IMRT field was significantly decreased from 63% to 3%. We conclude that the aperture adaptive technique with the visual guiding system can be an inexpensive and feasible alternative without compromising delivery efficiency in clinical practice.

  19. Effects of Spatial Attention on Motion Discrimination are Greater in the Left than Right Visual Field

    PubMed Central

    Bosworth, Rain G.; Petrich, Jennifer A.; Dobkins, Karen R.

    2012-01-01

    In order to investigate differences in the effects of spatial attention between the left visual field (LVF) and the right visual field (RVF), we employed a full/poor attention paradigm using stimuli presented in the LVF vs. RVF. In addition, to investigate differences in the effects of spatial attention between the Dorsal and Ventral processing streams, we obtained motion thresholds (motion coherence thresholds and fine direction discrimination thresholds) and orientation thresholds, respectively. The results of this study showed negligible effects of attention on the orientation task, in either the LVF or RVF. In contrast, for both motion tasks, there was a significant effect of attention in the LVF, but not in the RVF. These data provide psychophysical evidence for greater effects of spatial attention in the LVF/right hemisphere, specifically, for motion processing in the Dorsal stream. PMID:22051893

  20. Accuracy of System Step Response Roll Magnitude Estimation from Central and Peripheral Visual Displays and Simulator Cockpit Motion

    NASA Technical Reports Server (NTRS)

    Hosman, R. J. A. W.; Vandervaart, J. C.

    1984-01-01

    An experiment to investigate visual roll attitude and roll rate perception is described. The experiment was also designed to assess the improvements in perception due to cockpit motion. After the onset of the motion, subjects were to make accurate and quick estimates of the final magnitude of the roll angle step response by pressing the appropriate button of a keyboard device. The differing time-histories of roll angle, roll rate and roll acceleration caused by a step response stimulate the different perception processes related to the central visual field, peripheral visual field and vestibular organs in different, yet exactly known ways. Experiments with either of the visual displays or cockpit motion and some combinations of these were run to assess the roles of the different perception processes. Results show that the differences in response time are much more pronounced than the differences in perception accuracy.

  1. Visual motion integration for perception and pursuit

    NASA Technical Reports Server (NTRS)

    Stone, L. S.; Beutter, B. R.; Lorenceau, J.

    2000-01-01

    To examine the relationship between visual motion processing for perception and pursuit, we measured the pursuit eye-movement and perceptual responses to the same complex-motion stimuli. We show that humans can both perceive and pursue the motion of line-figure objects, even when partial occlusion makes the resulting image motion vastly different from the underlying object motion. Our results show that both perception and pursuit can perform largely accurate motion integration, i.e. the selective combination of local motion signals across the visual field to derive global object motion. Furthermore, because we manipulated perceived motion while keeping image motion identical, the observed parallel changes in perception and pursuit show that the motion signals driving steady-state pursuit and perception are linked. These findings disprove current pursuit models whose control strategy is to minimize retinal image motion, and suggest a new framework for the interplay between visual cortex and cerebellum in visuomotor control.

  2. Perception of linear horizontal self-motion induced by peripheral vision (linearvection) - Basic characteristics and visual-vestibular interactions

    NASA Technical Reports Server (NTRS)

    Berthoz, A.; Pavard, B.; Young, L. R.

    1975-01-01

    The basic characteristics of the sensation of linear horizontal motion have been studied. Objective linear motion was induced by means of a moving cart. Visually induced linear motion perception (linearvection) was obtained by projection of moving images at the periphery of the visual field. Image velocity and luminance thresholds for the appearance of linearvection have been measured and are in the range of those for image motion detection (without sensation of self motion) by the visual system. Latencies of onset are around 1 sec, and short-term adaptation has been shown. The dynamic range of the visual analyzer as judged by frequency analysis is lower than that of the vestibular analyzer. Conflicting situations in which visual cues contradict vestibular and other proprioceptive cues show, in the case of linearvection, a dominance of vision, which supports the idea of an essential although not independent role of vision in self-motion perception.

  3. Receptive fields for smooth pursuit eye movements and motion perception.

    PubMed

    Debono, Kurt; Schütz, Alexander C; Spering, Miriam; Gegenfurtner, Karl R

    2010-12-01

    Humans use smooth pursuit eye movements to track moving objects of interest. In order to track an object accurately, motion signals from the target have to be integrated and segmented from motion signals in the visual context. Most studies on pursuit eye movements used small visual targets against a featureless background, disregarding the requirements of our natural visual environment. Here, we tested the ability of the pursuit and the perceptual system to integrate motion signals across larger areas of the visual field. Stimuli were random-dot kinematograms containing a horizontal motion signal, which was perturbed by a spatially localized, peripheral motion signal. Perturbations appeared in a gaze-contingent coordinate system and had a direction different from that of the main motion, including a vertical component. We measured pursuit and perceptual direction discrimination decisions and found that both steady-state pursuit and perception were influenced most by perturbation angles close to that of the main motion signal and only in regions close to the center of gaze. The narrow direction bandwidth (26 angular degrees full width at half height) and small spatial extent (8 degrees of visual angle standard deviation) correspond closely to tuning parameters of neurons in the middle temporal area (MT). Copyright © 2010 Elsevier Ltd. All rights reserved.
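    The tuning parameters reported above (a 26-degree full-width-at-half-height direction bandwidth) imply a Gaussian direction-weighting profile. A minimal sketch of such a profile, assuming a simple Gaussian form (the function name and parameterization are ours, purely illustrative, not the authors' model):

```python
import math

def direction_weight(delta_deg, fwhm_deg=26.0):
    """Gaussian weighting of a perturbation direction as a function of
    its angular distance from the main motion direction, parameterized
    by full width at half height (FWHM), as in the reported bandwidth."""
    sigma = fwhm_deg / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return math.exp(-0.5 * (delta_deg / sigma) ** 2)

# By construction, the weight drops to one half at +/- FWHM/2:
print(direction_weight(13.0))  # ≈ 0.5 (half height at 13 degrees)
```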

  4. Multisensory Integration of Visual and Vestibular Signals Improves Heading Discrimination in the Presence of a Moving Object

    PubMed Central

    Dokka, Kalpana; DeAngelis, Gregory C.

    2015-01-01

    Humans and animals are fairly accurate in judging their direction of self-motion (i.e., heading) from optic flow when moving through a stationary environment. However, an object moving independently in the world alters the optic flow field and may bias heading perception if the visual system cannot dissociate object motion from self-motion. We investigated whether adding vestibular self-motion signals to optic flow enhances the accuracy of heading judgments in the presence of a moving object. Macaque monkeys were trained to report their heading (leftward or rightward relative to straight ahead) when self-motion was specified by vestibular, visual, or combined visual-vestibular signals, while viewing a display in which an object moved independently in the (virtual) world. The moving object induced significant biases in perceived heading when self-motion was signaled by either visual or vestibular cues alone. However, this bias was greatly reduced when visual and vestibular cues together signaled self-motion. In addition, multisensory heading discrimination thresholds measured in the presence of a moving object were largely consistent with the predictions of an optimal cue integration strategy. These findings demonstrate that multisensory cues facilitate the perceptual dissociation of self-motion and object motion, consistent with computational work that suggests that an appropriate decoding of multisensory visual-vestibular neurons can estimate heading while discounting the effects of object motion. SIGNIFICANCE STATEMENT Objects that move independently in the world alter the optic flow field and can induce errors in perceiving the direction of self-motion (heading). We show that adding vestibular (inertial) self-motion signals to optic flow almost completely eliminates the errors in perceived heading induced by an independently moving object. Furthermore, this increased accuracy occurs without a substantial loss in precision. 
Our results thus demonstrate that vestibular signals play a critical role in dissociating self-motion from object motion. PMID:26446214
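    The "optimal cue integration strategy" against which the thresholds were compared is the standard maximum-likelihood rule: single-cue variances combine reciprocally, so the combined threshold is at least as good as the better single cue. A minimal sketch (the function name and the example threshold values are illustrative, not values from the study):

```python
import math

def optimal_combined_threshold(sigma_visual, sigma_vestibular):
    """Maximum-likelihood prediction for the combined discrimination
    threshold: 1/sigma_comb^2 = 1/sigma_vis^2 + 1/sigma_vest^2."""
    return math.sqrt((sigma_visual ** 2 * sigma_vestibular ** 2)
                     / (sigma_visual ** 2 + sigma_vestibular ** 2))

# Hypothetical single-cue heading thresholds of 3 and 4 degrees predict
# a combined threshold below either one:
print(optimal_combined_threshold(3.0, 4.0))  # ≈ 2.4
```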

  5. Figure–ground discrimination behavior in Drosophila. I. Spatial organization of wing-steering responses

    PubMed Central

    Fox, Jessica L.; Aptekar, Jacob W.; Zolotova, Nadezhda M.; Shoemaker, Patrick A.; Frye, Mark A.

    2014-01-01

    The behavioral algorithms and neural subsystems for visual figure–ground discrimination are not sufficiently described in any model system. The fly visual system shares structural and functional similarity with that of vertebrates and, like vertebrates, flies robustly track visual figures in the face of ground motion. This computation is crucial for animals that pursue salient objects under the high performance requirements imposed by flight behavior. Flies smoothly track small objects and use wide-field optic flow to maintain flight-stabilizing optomotor reflexes. The spatial and temporal properties of visual figure tracking and wide-field stabilization have been characterized in flies, but how the two systems interact spatially to allow flies to actively track figures against a moving ground has not. We took a systems identification approach in flying Drosophila and measured wing-steering responses to velocity impulses of figure and ground motion independently. We constructed a spatiotemporal action field (STAF) – the behavioral analog of a spatiotemporal receptive field – revealing how the behavioral impulse responses to figure tracking and concurrent ground stabilization vary for figure motion centered at each location across the visual azimuth. The figure tracking and ground stabilization STAFs show distinct spatial tuning and temporal dynamics, confirming the independence of the two systems. When the figure tracking system is activated by a narrow vertical bar moving within the frontal field of view, ground motion is essentially ignored despite comprising over 90% of the total visual input. PMID:24198267

  6. Driving with visual field loss : an exploratory simulation study

    DOT National Transportation Integrated Search

    2009-01-01

    The goal of this study was to identify the influence of peripheral visual field loss (VFL) on driving performance in a motion-based driving simulator. Sixteen drivers (6 with VFL and 10 with normal visual fields) completed a 14 km simulated drive. Th...

  7. Effects of visual motion consistent or inconsistent with gravity on postural sway.

    PubMed

    Balestrucci, Priscilla; Daprati, Elena; Lacquaniti, Francesco; Maffei, Vincenzo

    2017-07-01

    Vision plays an important role in postural control, and visual perception of the gravity-defined vertical helps maintain upright stance. In addition, the influence of the gravity field on objects' motion is known to provide a reference for motor and non-motor behavior. However, the role of dynamic visual cues related to gravity in the control of postural balance has been little investigated. In order to understand whether visual cues about gravitational acceleration are relevant for postural control, we assessed the relation between postural sway and visual motion congruent or incongruent with gravity acceleration. Postural sway of 44 healthy volunteers was recorded by means of force platforms while they watched virtual targets moving in different directions and with different accelerations. Small but significant differences emerged in sway parameters with respect to the characteristics of target motion. Namely, for vertically accelerated targets, gravitational motion (GM) was associated with smaller oscillations of the center of pressure than anti-GM. The present findings support the hypothesis that not only static, but also dynamic visual cues about direction and magnitude of the gravitational field are relevant for balance control during upright stance.

  8. Directional asymmetries in human smooth pursuit eye movements.

    PubMed

    Ke, Sally R; Lam, Jessica; Pai, Dinesh K; Spering, Miriam

    2013-06-27

    Humans make smooth pursuit eye movements to bring the image of a moving object onto the fovea. Although pursuit accuracy is critical to prevent motion blur, the eye often falls behind the target. Previous studies suggest that pursuit accuracy differs between motion directions. Here, we systematically assess asymmetries in smooth pursuit. In experiment 1, binocular eye movements were recorded while observers (n = 20) tracked a small spot of light moving along one of four cardinal or diagonal axes across a featureless background. We analyzed pursuit latency, acceleration, peak velocity, gain, and catch-up saccade latency, number, and amplitude. In experiment 2 (n = 22), we examined the effects of spatial location and constrained stimulus motion within the upper or lower visual field. Pursuit was significantly faster (higher acceleration, peak velocity, and gain) and smoother (fewer and later catch-up saccades) in response to downward versus upward motion in both the upper and the lower visual fields. Pursuit was also more accurate and smoother in response to horizontal versus vertical motion. Our study is the first to report a consistent up-down asymmetry in human adults, regardless of visual field. Our findings suggest that pursuit asymmetries are adaptive responses to the requirements of the visual context: preferred motion directions (horizontal and downward) are more critical to our survival than nonpreferred ones.

  9. A neural model of motion processing and visual navigation by cortical area MST.

    PubMed

    Grossberg, S; Mingolla, E; Pack, C

    1999-12-01

    Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, and when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates.

  10. Local statistics of retinal optic flow for self-motion through natural sceneries.

    PubMed

    Calow, Dirk; Lappe, Markus

    2007-12-01

    Image analysis in the visual system is well adapted to the statistics of natural scenes. Investigations of natural image statistics have so far mainly focused on static features. The present study is dedicated to the measurement and the analysis of the statistics of optic flow generated on the retina during locomotion through natural environments. Natural locomotion includes bouncing and swaying of the head and eye movement reflexes that stabilize gaze onto interesting objects in the scene while walking. We investigate the dependencies of the local statistics of optic flow on the depth structure of the natural environment and on the ego-motion parameters. To measure these dependencies we estimate the mutual information between correlated data sets. We analyze the results with respect to the variation of the dependencies over the visual field, since the visual motions in the optic flow vary depending on visual field position. We find that retinal flow direction and retinal speed show only minor statistical interdependencies. Retinal speed is statistically tightly connected to the depth structure of the scene. Retinal flow direction is statistically mostly driven by the relation between the direction of gaze and the direction of ego-motion. These dependencies differ at different visual field positions such that certain areas of the visual field provide more information about ego-motion and other areas provide more information about depth. The statistical properties of natural optic flow may be used to tune the performance of artificial vision systems that imitate human behavior, and may be useful for analyzing properties of natural vision systems.

  11. MotionExplorer: exploratory search in human motion capture data based on hierarchical aggregation.

    PubMed

    Bernard, Jürgen; Wilhelm, Nils; Krüger, Björn; May, Thorsten; Schreck, Tobias; Kohlhammer, Jörn

    2013-12-01

    We present MotionExplorer, an exploratory search and analysis system for sequences of human motion in large motion capture data collections. This special type of multivariate time series data is relevant in many research fields including medicine, sports and animation. Key tasks in working with motion data include analysis of motion states and transitions, and synthesis of motion vectors by interpolation and combination. In the practice of research and application of human motion data, challenges exist in providing visual summaries and drill-down functionality for handling large motion data collections. We find that this domain can benefit from appropriate visual retrieval and analysis support to handle these tasks in the presence of large motion data. To address this need, we developed MotionExplorer together with domain experts as an exploratory search system based on interactive aggregation and visualization of motion states as a basis for data navigation, exploration, and search. Based on an overview-first type visualization, users are able to search for interesting sub-sequences of motion based on a query-by-example metaphor, and explore search results by details on demand. We developed MotionExplorer in close collaboration with the targeted users who are researchers working on human motion synthesis and analysis, including a summative field study. Additionally, we conducted a laboratory design study to substantially improve MotionExplorer towards an intuitive, usable and robust design. MotionExplorer enables the search in human motion capture data with only a few mouse clicks. The researchers unanimously confirm that the system can efficiently support their work.

  12. Role of inter-hemispheric transfer in generating visual evoked potentials in V1-damaged brain hemispheres

    PubMed Central

    Kavcic, Voyko; Triplett, Regina L.; Das, Anasuya; Martin, Tim; Huxlin, Krystel R.

    2015-01-01

    Partial cortical blindness is a visual deficit caused by unilateral damage to the primary visual cortex, a condition previously considered beyond hopes of rehabilitation. However, recent data demonstrate that patients may recover both simple and global motion discrimination following intensive training in their blind field. The present experiments characterized motion-induced neural activity of cortically blind (CB) subjects prior to the onset of visual rehabilitation. This was done to provide information about visual processing capabilities available to mediate training-induced visual improvements. Visual Evoked Potentials (VEPs) were recorded from two experimental groups consisting of 9 CB subjects and 9 age-matched, visually-intact controls. VEPs were collected following lateralized stimulus presentation to each of the 4 visual field quadrants. VEP waveforms were examined for both stimulus-onset (SO) and motion-onset (MO) related components in postero-lateral electrodes. While stimulus presentation to intact regions of the visual field elicited normal SO-P1, SO-N1, SO-P2 and MO-N2 amplitudes and latencies in contralateral brain regions of CB subjects, these components were not observed contralateral to stimulus presentation in blind quadrants of the visual field. In damaged brain hemispheres, SO-VEPs were only recorded following stimulus presentation to intact visual field quadrants, via inter-hemispheric transfer. MO-VEPs were only recorded from damaged left brain hemispheres, possibly reflecting a native left/right asymmetry in inter-hemispheric connections. The present findings suggest that damaged brain hemispheres contain areas capable of responding to visual stimulation. However, in the absence of training or rehabilitation, these areas only generate detectable VEPs in response to stimulation of the intact hemifield of vision. PMID:25575450

  13. Modeling a space-variant cortical representation for apparent motion.

    PubMed

    Wurbs, Jeremy; Mingolla, Ennio; Yazdanbakhsh, Arash

    2013-08-06

    Receptive field sizes of neurons in early primate visual areas increase with eccentricity, as does temporal processing speed. The fovea is evidently specialized for slow, fine movements while the periphery is suited for fast, coarse movements. In either the fovea or periphery discrete flashes can produce motion percepts. Grossberg and Rudd (1989) used traveling Gaussian activity profiles to model long-range apparent motion percepts. We propose a neural model constrained by physiological data to explain how signals from retinal ganglion cells to V1 affect the perception of motion as a function of eccentricity. Our model incorporates cortical magnification, receptive field overlap and scatter, and spatial and temporal response characteristics of retinal ganglion cells for cortical processing of motion. Consistent with the finding of Baker and Braddick (1985), in our model the maximum flash distance that is perceived as an apparent motion (Dmax) increases linearly as a function of eccentricity. Baker and Braddick (1985) made qualitative predictions about the functional significance of both stimulus and visual system parameters that constrain motion perception, such as an increase in the range of detectable motions as a function of eccentricity and the likely role of higher visual processes in determining Dmax. We generate corresponding quantitative predictions for those functional dependencies for individual aspects of motion processing. Simulation results indicate that the early visual pathway can explain the qualitative linear increase of Dmax data without reliance on extrastriate areas, but that those higher visual areas may serve as a modulatory influence on the exact Dmax increase.
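    The qualitative result above is that Dmax increases linearly with eccentricity, which can be expressed as a one-line model. The slope and intercept below are illustrative placeholders, not fitted values from the study:

```python
def dmax_deg(eccentricity_deg, slope=0.6, intercept=1.5):
    """Largest flash displacement (in degrees of visual angle) still
    perceived as apparent motion, modeled as a linear function of
    retinal eccentricity. Slope and intercept are placeholders chosen
    for illustration only."""
    return intercept + slope * eccentricity_deg

# Larger eccentricities admit larger displacements:
print(dmax_deg(0.0), dmax_deg(10.0))  # 1.5 7.5
```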

  14. Attraction of posture and motion-trajectory elements of conspecific biological motion in medaka fish.

    PubMed

    Shibai, Atsushi; Arimoto, Tsunehiro; Yoshinaga, Tsukasa; Tsuchizawa, Yuta; Khureltulga, Dashdavaa; Brown, Zuben P; Kakizuka, Taishi; Hosoda, Kazufumi

    2018-06-05

    Visual recognition of conspecifics is necessary for a wide range of social behaviours in many animals. Medaka (Japanese rice fish), a commonly used model organism, are known to be attracted by the biological motion of conspecifics. However, biological motion is a composite of both body-shape motion and entire-field motion trajectory (i.e., posture or motion-trajectory elements, respectively), and it has not been revealed which element mediates the attractiveness. Here, we show that either posture or motion-trajectory elements alone can attract medaka. We decomposed biological motion of the medaka into the two elements and synthesized visual stimuli that contain both, either, or none of the two elements. We found that medaka were attracted by visual stimuli that contain at least one of the two elements. In the context of other known static visual information regarding the medaka, the potential multiplicity of information regarding conspecific recognition has further accumulated. Our strategy of decomposing biological motion into these partial elements is applicable to other animals, and further studies using this technique will enhance the basic understanding of visual recognition of conspecifics.

  15. Vestibular nuclei and cerebellum put visual gravitational motion in context.

    PubMed

    Miller, William L; Maffei, Vincenzo; Bosco, Gianfranco; Iosa, Marco; Zago, Myrka; Macaluso, Emiliano; Lacquaniti, Francesco

    2008-04-01

    Animal survival in the forest, and human success on the sports field, often depend on the ability to seize a target on the fly. All bodies fall at the same rate in the gravitational field, but the corresponding retinal motion varies with apparent viewing distance. How then does the brain predict time-to-collision under gravity? A perspective context from natural or pictorial settings might afford accurate predictions of gravity's effects via the recovery of an environmental reference from the scene structure. We report that embedding motion in a pictorial scene facilitates interception of gravitational acceleration over unnatural acceleration, whereas a blank scene eliminates such bias. Functional magnetic resonance imaging (fMRI) revealed blood-oxygen-level-dependent correlates of these visual context effects on gravitational motion processing in the vestibular nuclei and posterior cerebellar vermis. Our results suggest an early stage of integration of high-level visual analysis with gravity-related motion information, which may represent the substrate for perceptual constancy of ubiquitous gravitational motion.

  16. Close similarity between spatiotemporal frequency tunings of human cortical responses and involuntary manual following responses to visual motion.

    PubMed

    Amano, Kaoru; Kimura, Toshitaka; Nishida, Shin'ya; Takeda, Tsunehiro; Gomi, Hiroaki

    2009-02-01

    The human brain uses visual motion inputs not only for generating subjective sensation of motion but also for directly guiding involuntary actions. For instance, during arm reaching, a large-field visual motion is quickly and involuntarily transformed into a manual response in the direction of visual motion (manual following response, MFR). Previous attempts to correlate motion-evoked cortical activities, revealed by brain imaging techniques, with conscious motion perception have resulted only in partial success. In contrast, here we show a surprising degree of similarity between the MFR and the population neural activity measured by magnetoencephalography (MEG). We measured the MFR and MEG induced by the same motion onset of a large-field sinusoidal drifting grating while changing the spatiotemporal frequency of the grating. The initial transient phase of these two responses had very similar spatiotemporal tunings. Specifically, both the MEG and MFR amplitudes increased as the spatial frequency was decreased to, at most, 0.05 c/deg, or as the temporal frequency was increased to, at least, 10 Hz. We also found quantitative agreement in peak latency (approximately 100-150 ms) between the MEG and MFR, and correlated changes in both responses against spatiotemporal frequency changes. In comparison with these two responses, conscious visual motion detection is known to be most sensitive (i.e., have the lowest detection threshold) at higher spatial frequencies and have longer and more variable response latencies. Our results suggest a close relationship between the properties of involuntary motor responses and motion-evoked cortical activity as reflected by the MEG.

  17. Whole-field visual motion drives swimming in larval zebrafish via a stochastic process

    PubMed Central

    Portugues, Ruben; Haesemeyer, Martin; Blum, Mirella L.; Engert, Florian

    2015-01-01

    Caudo-rostral whole-field visual motion elicits forward locomotion in many organisms, including larval zebrafish. Here, we investigate the dependence of the latency to initiate this forward swimming on the speed of the visual motion. We show that latency is highly dependent on speed for slow speeds (<10 mm s−1) and then plateaus for higher values. Typical latencies are >1.5 s, which is much longer than neuronal transduction processes. What mechanisms underlie these long latencies? We propose two alternative, biologically inspired models that could account for this latency to initiate swimming: an integrate-and-fire model, which is history dependent, and a stochastic Poisson model, which has no history dependence. We use these models to predict the behavior of larvae when presented with whole-field motion of varying speed and find that the stochastic process shows better agreement with the experimental data. Finally, we discuss possible neuronal implementations of these models. PMID:25792753

  18. Whole-field visual motion drives swimming in larval zebrafish via a stochastic process.

    PubMed

    Portugues, Ruben; Haesemeyer, Martin; Blum, Mirella L; Engert, Florian

    2015-05-01

    Caudo-rostral whole-field visual motion elicits forward locomotion in many organisms, including larval zebrafish. Here, we investigate the dependence of the latency to initiate this forward swimming on the speed of the visual motion. We show that latency is highly dependent on speed for slow speeds (<10 mm s−1) and then plateaus for higher values. Typical latencies are >1.5 s, which is much longer than neuronal transduction processes. What mechanisms underlie these long latencies? We propose two alternative, biologically inspired models that could account for this latency to initiate swimming: an integrate-and-fire model, which is history dependent, and a stochastic Poisson model, which has no history dependence. We use these models to predict the behavior of larvae when presented with whole-field motion of varying speed and find that the stochastic process shows better agreement with the experimental data. Finally, we discuss possible neuronal implementations of these models. © 2015. Published by The Company of Biologists Ltd.
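    The stochastic Poisson model favored by the data is history-independent: at each instant the larva initiates a swim with a constant hazard rate, so latencies are exponentially distributed with mean 1/rate. A minimal simulation sketch (the rate value is arbitrary, and its dependence on stimulus speed is left abstract; this is an illustration, not the authors' implementation):

```python
import random

def poisson_swim_latency(rate_per_s, rng):
    """Draw one swim-initiation latency from the history-independent
    (Poisson) model: a constant hazard rate yields exponentially
    distributed latencies with mean 1/rate."""
    return rng.expovariate(rate_per_s)

rng = random.Random(0)
latencies = [poisson_swim_latency(0.5, rng) for _ in range(100_000)]
# The sample mean approaches the theoretical mean of 1/rate = 2 s:
print(sum(latencies) / len(latencies))
```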

  19. Action Video Games Improve Direction Discrimination of Parafoveal Translational Global Motion but Not Reaction Times.

    PubMed

    Pavan, Andrea; Boyce, Matthew; Ghin, Filippo

    2016-10-01

    Playing action video games enhances visual motion perception. However, there is psychophysical evidence that action video games do not improve motion sensitivity for translational global moving patterns presented in the fovea. This study investigates global motion perception in action video game players and compares their performance to that of non-action video game players and non-video game players. Stimuli were random dot kinematograms presented in the parafovea. Observers discriminated the motion direction of a target random dot kinematogram presented in one of the four visual quadrants. Action video game players showed lower motion coherence thresholds than the other groups. However, when the task was performed at threshold, we did not find differences between groups in terms of distributions of reaction times. These results suggest that action video games improve visual motion sensitivity in the near periphery of the visual field, rather than response speed. © The Author(s) 2016.

  20. Peripheral Vision of Youths with Low Vision: Motion Perception, Crowding, and Visual Search

    PubMed Central

    Tadin, Duje; Nyquist, Jeffrey B.; Lusk, Kelly E.; Corn, Anne L.; Lappin, Joseph S.

    2012-01-01

    Purpose. Effects of low vision on peripheral visual function are poorly understood, especially in children whose visual skills are still developing. The aim of this study was to measure both central and peripheral visual functions in youths with typical and low vision. Of specific interest was the extent to which measures of foveal function predict performance of peripheral tasks. Methods. We assessed central and peripheral visual functions in youths with typical vision (n = 7, ages 10–17) and low vision (n = 24, ages 9–18). Experimental measures used both static and moving stimuli and included visual crowding, visual search, motion acuity, motion direction discrimination, and multitarget motion comparison. Results. In most tasks, visual function was impaired in youths with low vision. Substantial differences, however, were found both between participant groups and, importantly, across different tasks within participant groups. Foveal visual acuity was a modest predictor of peripheral form vision and motion sensitivity in either the central or peripheral field. Despite exhibiting normal motion discriminations in fovea, motion sensitivity of youths with low vision deteriorated in the periphery. This contrasted with typically sighted participants, who showed improved motion sensitivity with increasing eccentricity. Visual search was greatly impaired in youths with low vision. Conclusions. Our results reveal a complex pattern of visual deficits in peripheral vision and indicate a significant role of attentional mechanisms in observed impairments. These deficits were not adequately captured by measures of foveal function, arguing for the importance of independently assessing peripheral visual function. PMID:22836766

  1. Peripheral vision of youths with low vision: motion perception, crowding, and visual search.

    PubMed

    Tadin, Duje; Nyquist, Jeffrey B; Lusk, Kelly E; Corn, Anne L; Lappin, Joseph S

    2012-08-24

    Effects of low vision on peripheral visual function are poorly understood, especially in children whose visual skills are still developing. The aim of this study was to measure both central and peripheral visual functions in youths with typical and low vision. Of specific interest was the extent to which measures of foveal function predict performance of peripheral tasks. We assessed central and peripheral visual functions in youths with typical vision (n = 7, ages 10-17) and low vision (n = 24, ages 9-18). Experimental measures used both static and moving stimuli and included visual crowding, visual search, motion acuity, motion direction discrimination, and multitarget motion comparison. In most tasks, visual function was impaired in youths with low vision. Substantial differences, however, were found both between participant groups and, importantly, across different tasks within participant groups. Foveal visual acuity was a modest predictor of peripheral form vision and motion sensitivity in either the central or peripheral field. Despite exhibiting normal motion discriminations in fovea, motion sensitivity of youths with low vision deteriorated in the periphery. This contrasted with typically sighted participants, who showed improved motion sensitivity with increasing eccentricity. Visual search was greatly impaired in youths with low vision. Our results reveal a complex pattern of visual deficits in peripheral vision and indicate a significant role of attentional mechanisms in observed impairments. These deficits were not adequately captured by measures of foveal function, arguing for the importance of independently assessing peripheral visual function.

  2. A Class of Visual Neurons with Wide-Field Properties Is Required for Local Motion Detection.

    PubMed

    Fisher, Yvette E; Leong, Jonathan C S; Sporar, Katja; Ketkar, Madhura D; Gohl, Daryl M; Clandinin, Thomas R; Silies, Marion

    2015-12-21

    Visual motion cues are used by many animals to guide navigation across a wide range of environments. Long-standing theoretical models have made predictions about the computations that compare light signals across space and time to detect motion. Using connectomic and physiological approaches, candidate circuits that can implement various algorithmic steps have been proposed in the Drosophila visual system. These pathways connect photoreceptors, via interneurons in the lamina and the medulla, to direction-selective cells in the lobula and lobula plate. However, the functional architecture of these circuits remains incompletely understood. Here, we use a forward genetic approach to identify the medulla neuron Tm9 as critical for motion-evoked behavioral responses. Using in vivo calcium imaging combined with genetic silencing, we place Tm9 within motion-detecting circuitry. Tm9 receives functional inputs from the lamina neurons L3 and, unexpectedly, L1 and passes information onto the direction-selective T5 neuron. Whereas the morphology of Tm9 suggested that this cell would inform circuits about local points in space, we found that the Tm9 spatial receptive field is large. Thus, this circuit informs elementary motion detectors about a wide region of the visual scene. In addition, Tm9 exhibits sustained responses that provide a tonic signal about incoming light patterns. Silencing Tm9 dramatically reduces the response amplitude of T5 neurons under a broad range of different motion conditions. Thus, our data demonstrate that sustained and wide-field signals are essential for elementary motion processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
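The "long-standing theoretical models" this abstract refers to include the Hassenstein-Reichardt correlator, which compares a delayed signal from one spatial location with an undelayed signal from its neighbor. The sketch below is a minimal discrete-time illustration of that general scheme, not of the paper's Drosophila circuit data; the one-step delay and toy edge stimulus are arbitrary choices.

```python
def reichardt_correlator(a, b, delay=1):
    """Opponent motion signal from two adjacent photoreceptor
    signals a and b. Positive output indicates motion from a
    toward b; the mirror-symmetric term is subtracted to give
    direction selectivity."""
    out = []
    for t in range(delay, min(len(a), len(b))):
        out.append(a[t - delay] * b[t] - b[t - delay] * a[t])
    return out

# A bright edge passing first over a, then over b (motion from a
# to b) yields a net positive response; the reverse order yields
# a net negative response.
a = [0, 1, 0, 0, 0]
b = [0, 0, 1, 0, 0]
rightward = sum(reichardt_correlator(a, b))
leftward = sum(reichardt_correlator(b, a))
```

The opponent subtraction is what makes the detector report direction rather than mere change; wide-field inputs like Tm9 would feed many such local comparisons.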

  3. Manual control of yaw motion with combined visual and vestibular cues

    NASA Technical Reports Server (NTRS)

    Zacharias, G. L.; Young, L. R.

    1977-01-01

    Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation was modelled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A correction to the frequency responses is provided by a separate measurement of manual control performance in an analogous visual pursuit nulling task. The resulting dual-input describing function for motion perception dependence on combined cue presentation supports the complementary model, in which vestibular cues dominate sensation at frequencies above 0.05 Hz. The describing function model is extended by the proposal of a non-linear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues.
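The parallel-channel model described above, with visual and vestibular pathways summing in a complementary manner, can be sketched numerically as a complementary filter. The code below is an illustrative toy, not the authors' implementation; the 0.05 Hz crossover is taken from the abstract, all other values are assumptions.

```python
import math

def complementary_estimate(visual, vestibular, dt, f_c=0.05):
    """Fuse a visual velocity cue (low-pass filtered) with a
    vestibular cue (high-pass filtered) using matched first-order
    filters sharing crossover frequency f_c in Hz, so the two
    channels sum in a complementary manner."""
    tau = 1.0 / (2 * math.pi * f_c)   # shared filter time constant
    alpha = dt / (tau + dt)           # first-order smoothing factor
    est, lp_vis, lp_vest = [], 0.0, 0.0
    for v, g in zip(visual, vestibular):
        lp_vis += alpha * (v - lp_vis)     # low-pass visual channel
        lp_vest += alpha * (g - lp_vest)
        hp_vest = g - lp_vest              # high-pass vestibular channel
        est.append(lp_vis + hp_vest)       # complementary sum
    return est

# When both cues report the same constant self-velocity, the fused
# estimate reproduces it; when the cues disagree, the vestibular
# channel dominates above f_c and the visual channel below it.
dt = 0.01
fused = complementary_estimate([10.0] * 2000, [10.0] * 2000, dt)
```

Because the two filters are exactly complementary, consistent inputs pass through undistorted, which is the property the dual-input describing function supports.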

  4. Non-verbal IQ is correlated with visual field advantages for short duration coherent motion detection in deaf signers with varied ASL exposure and etiologies of deafness.

    PubMed

    Samar, Vincent J; Parasnis, Ila

    2007-12-01

    Studies have reported a right visual field (RVF) advantage for coherent motion detection by deaf and hearing signers but not non-signers. Yet two studies [Bosworth, R. G., & Dobkins, K. R. (2002). Visual field asymmetries for motion processing in deaf and hearing signers. Brain and Cognition, 49, 170-181; Samar, V. J., & Parasnis, I. (2005). Dorsal stream deficits suggest hidden dyslexia among deaf poor readers: Correlated evidence from reduced perceptual speed and elevated coherent motion detection thresholds. Brain and Cognition, 58, 300-311.] reported a small, non-significant RVF advantage for deaf signers when short duration motion stimuli were used (200-250 ms). Samar and Parasnis (2005) reported that this small RVF advantage became significant when non-verbal IQ was statistically controlled. This paper presents extended analyses of the correlation between non-verbal IQ and visual field asymmetries in the data set of Samar and Parasnis (2005). We speculate that this correlation might plausibly be driven by individual differences either in age of acquisition of American Sign Language (ASL) or in the degree of neurodevelopmental insult associated with various etiologies of deafness. Limited additional analyses are presented that indicate a need for further research on the cause of this apparent IQ-laterality relationship. Some potential implications of this relationship for lateralization studies of deaf signers are discussed. Controlling non-verbal IQ may improve the reliability of short duration coherent motion tasks to detect adaptive dorsal stream lateralization due to exposure to ASL in deaf research participants.

  5. Tracking without perceiving: a dissociation between eye movements and motion perception.

    PubMed

    Spering, Miriam; Pomplun, Marc; Carrasco, Marisa

    2011-02-01

    Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept.
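The vector-average computation that the reflexive eye movements followed in this study is straightforward to illustrate. The snippet below is a generic sketch (stimulus directions and speeds are hypothetical): two orthogonal component motions of equal strength average to a 45-degree pattern-motion direction, which is what the eyes tracked while perception reported a single component.

```python
import math

def vector_average_direction(dirs_deg, speeds):
    """Direction (degrees, 0-360) of the vector average of a set of
    component motions given their directions and speeds."""
    vx = sum(s * math.cos(math.radians(d)) for d, s in zip(dirs_deg, speeds))
    vy = sum(s * math.sin(math.radians(d)) for d, s in zip(dirs_deg, speeds))
    return math.degrees(math.atan2(vy, vx)) % 360

# Two orthogonally drifting gratings of equal strength: the vector
# average (pattern motion) lies midway between the components.
pattern_dir = vector_average_direction([0, 90], [1.0, 1.0])
component_dir = vector_average_direction([0], [1.0])
```

The dissociation in the study is precisely that eye movements landed near `pattern_dir` while perceptual reports clustered at the component directions.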

  6. Tracking Without Perceiving: A Dissociation Between Eye Movements and Motion Perception

    PubMed Central

    Spering, Miriam; Pomplun, Marc; Carrasco, Marisa

    2011-01-01

    Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept. PMID:21189353

  7. Electromagnetic tracking of motion in the proximity of computer generated graphical stimuli: a tutorial.

    PubMed

    Schnabel, Ulf H; Hegenloh, Michael; Müller, Hermann J; Zehetleitner, Michael

    2013-09-01

    Electromagnetic motion-tracking systems have the advantage of capturing the tempo-spatial kinematics of movements independently of the visibility of the sensors. However, they are limited in that they cannot be used in the proximity of electromagnetic field sources, such as computer monitors. This prevents exploiting the tracking potential of the sensor system together with that of computer-generated visual stimulation. Here we present a solution for presenting computer-generated visual stimulation that does not distort the electromagnetic field required for precise motion tracking, by means of a back projection medium. In one experiment, we verify that cathode ray tube monitors, as well as thin-film-transistor monitors, distort electromagnetic sensor signals even at a distance of 18 cm. Our back projection medium, by contrast, leads to no distortion of the motion-tracking signals even when the sensor is touching the medium. This novel solution permits combining the advantages of electromagnetic motion tracking with computer-generated visual stimulation.

  8. Decoding the direction of imagined visual motion using 7 T ultra-high field fMRI

    PubMed Central

    Emmerling, Thomas C.; Zimmermann, Jan; Sorger, Bettina; Frost, Martin A.; Goebel, Rainer

    2016-01-01

    There is a long-standing debate about the neurocognitive implementation of mental imagery. One form of mental imagery is the imagery of visual motion, which is of interest due to its naturalistic and dynamic character. However, so far only the mere occurrence rather than the specific content of motion imagery was shown to be detectable. In the current study, the application of multi-voxel pattern analysis to high-resolution functional data of 12 subjects acquired with ultra-high field 7 T functional magnetic resonance imaging allowed us to show that imagery of visual motion can indeed activate the earliest levels of the visual hierarchy, but the extent thereof varies highly between subjects. Our approach enabled classification not only of complex imagery, but also of its actual contents, in that the direction of imagined motion out of four options was successfully identified in two thirds of the subjects and with accuracies of up to 91.3% in individual subjects. A searchlight analysis confirmed the local origin of decodable information in striate and extra-striate cortex. These high-accuracy findings not only shed new light on a central question in vision science on the constituents of mental imagery, but also show for the first time that the specific sub-categorical content of visual motion imagery is reliably decodable from brain imaging data on a single-subject level. PMID:26481673
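Multi-voxel pattern analysis of the kind used here classifies the imagined direction from the spatial pattern of voxel responses. The toy below is only a schematic stand-in for the study's pipeline: it simulates direction-specific voxel patterns and decodes them with a simple nearest-centroid classifier (the voxel count, noise level, and classifier choice are all assumptions, not the authors' methods).

```python
import random

random.seed(1)
N_VOXELS = 20
DIRECTIONS = ["up", "down", "left", "right"]  # four imagined directions

# Hypothetical direction-specific multi-voxel response templates.
templates = {d: [random.gauss(0, 1) for _ in range(N_VOXELS)]
             for d in DIRECTIONS}

def trial(direction, noise=0.5):
    """One simulated trial: template pattern plus Gaussian noise."""
    return [m + random.gauss(0, noise) for m in templates[direction]]

def train_centroids(trials):
    """Mean pattern per label from (label, pattern) training trials."""
    sums, counts = {}, {}
    for label, p in trials:
        acc = sums.setdefault(label, [0.0] * N_VOXELS)
        for i, x in enumerate(p):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {lab: [x / counts[lab] for x in acc] for lab, acc in sums.items()}

def classify(pattern, centroids):
    """Assign the label whose centroid is closest in squared distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist2(pattern, centroids[lab]))

train = [(d, trial(d)) for d in DIRECTIONS for _ in range(20)]
test = [(d, trial(d)) for d in DIRECTIONS for _ in range(20)]
centroids = train_centroids(train)
accuracy = sum(classify(p, centroids) == d for d, p in test) / len(test)
```

Chance level with four directions is 25%, so any decoding accuracy reliably above that, as in the study's two thirds of subjects, indicates that direction-specific information is present in the voxel patterns.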

  9. Spatiotemporal Filter for Visual Motion Integration from Pursuit Eye Movements in Humans and Monkeys

    PubMed Central

    Liu, Bing

    2017-01-01

    Despite the enduring interest in motion integration, a direct measure of the space–time filter that the brain imposes on a visual scene has been elusive. This is perhaps because of the challenge of estimating a 3D function from perceptual reports in psychophysical tasks. We take a different approach. We exploit the close connection between visual motion estimates and smooth pursuit eye movements to measure stimulus–response correlations across space and time, computing the linear space–time filter for global motion direction in humans and monkeys. Although derived from eye movements, we find that the filter predicts perceptual motion estimates quite well. To distinguish visual from motor contributions to the temporal duration of the pursuit motion filter, we recorded single-unit responses in the monkey middle temporal cortical area (MT). We find that pursuit response delays are consistent with the distribution of cortical neuron latencies and that temporal motion integration for pursuit is consistent with a short integration MT subpopulation. Remarkably, the visual system appears to preferentially weight motion signals across a narrow range of foveal eccentricities rather than uniformly over the whole visual field, with a transiently enhanced contribution from locations along the direction of motion. We find that the visual system is most sensitive to motion falling at approximately one-third the radius of the stimulus aperture. Hypothesizing that the visual drive for pursuit is related to the filtered motion energy in a motion stimulus, we compare measured and predicted eye acceleration across several other target forms. SIGNIFICANCE STATEMENT A compact model of the spatial and temporal processing underlying global motion perception has been elusive. We used visually driven smooth eye movements to find the 3D space–time function that best predicts both eye movements and perception of translating dot patterns. 
We found that the visual system does not appear to use all available motion signals uniformly, but rather weights motion preferentially in a narrow band at approximately one-third the radius of the stimulus. Although not universal, the filter predicts responses to other types of stimuli, demonstrating a remarkable degree of generalization that may lead to a deeper understanding of visual motion processing. PMID:28003348
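The study's core move, estimating a linear motion filter from stimulus-response correlations, can be illustrated in one temporal dimension. The sketch below is a generic system-identification toy under assumed conditions (white-noise stimulus, known linear kernel), not the authors' 3-D space-time analysis: cross-correlating the response with the stimulus at each lag recovers the kernel.

```python
import random

random.seed(0)

# Hypothetical temporal filter to be recovered (assumed linear system).
true_kernel = [0.0, 0.5, 1.0, 0.5, 0.1]
K = len(true_kernel)

# White-noise "motion stimulus" and the filtered "pursuit response".
n = 20000
stimulus = [random.gauss(0, 1) for _ in range(n)]
response = [
    sum(true_kernel[k] * stimulus[t - k] for k in range(K))
    for t in range(K - 1, n)
]

# For a white stimulus, the stimulus-response cross-correlation at
# each lag equals the kernel value at that lag (up to sampling noise).
m = len(response)
est_kernel = []
for lag in range(K):
    c = sum(response[t] * stimulus[t + K - 1 - lag] for t in range(m)) / m
    est_kernel.append(c)
```

The same logic, applied across space and time to eye-velocity responses, yields the 3-D filter the paper reports; the narrow eccentricity weighting appears as structure in the spatial dimensions of that filter.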

  10. Insect Detection of Small Targets Moving in Visual Clutter

    PubMed Central

    Barnett, Paul D; O'Carroll, David C

    2006-01-01

    Detection of targets that move within visual clutter is a common task for animals searching for prey or conspecifics, a task made even more difficult when a moving pursuer needs to analyze targets against the motion of background texture (clutter). Despite the limited optical acuity of the compound eye of insects, this challenging task seems to have been solved by their tiny visual system. Here we describe neurons found in the male hoverfly,Eristalis tenax, that respond selectively to small moving targets. Although many of these target neurons are inhibited by the motion of a background pattern, others respond to target motion within the receptive field under a surprisingly large range of background motion stimuli. Some neurons respond whether or not there is a speed differential between target and background. Analysis of responses to very small targets (smaller than the size of the visual field of single photoreceptors) or those targets with reduced contrast shows that these neurons have extraordinarily high contrast sensitivity. Our data suggest that rejection of background motion may result from extreme selectivity for small targets contrasting against local patches of the background, combined with this high sensitivity, such that background patterns rarely contain features that satisfactorily drive the neuron. PMID:16448249

  11. Altered transfer of visual motion information to parietal association cortex in untreated first-episode psychosis: Implications for pursuit eye tracking

    PubMed Central

    Lencer, Rebekka; Keedy, Sarah K.; Reilly, James L.; McDonough, Bruce E.; Harris, Margret S. H.; Sprenger, Andreas; Sweeney, John A.

    2011-01-01

    Visual motion processing and its use for pursuit eye movement control represent a valuable model for studying the use of sensory input for action planning. In psychotic disorders, alterations of visual motion perception have been suggested to cause pursuit eye tracking deficits. We evaluated this system in functional neuroimaging studies of untreated first-episode schizophrenia (N=24), psychotic bipolar disorder patients (N=13) and healthy controls (N=20). During a passive visual motion processing task, both patient groups showed reduced activation in the posterior parietal projection fields of motion-sensitive extrastriate area V5, but not in V5 itself. This suggests reduced bottom-up transfer of visual motion information from extrastriate cortex to perceptual systems in parietal association cortex. During active pursuit, activation was enhanced in anterior intraparietal sulcus and insula in both patient groups, and in dorsolateral prefrontal cortex and dorsomedial thalamus in schizophrenia patients. This may result from increased demands on sensorimotor systems for pursuit control due to the limited availability of perceptual motion information about target speed and tracking error. Visual motion information transfer deficits to higher-level association cortex may contribute to well-established pursuit tracking abnormalities, and perhaps to a wider array of alterations in perception and action planning in psychotic disorders. PMID:21873035

  12. Direct evidence for attention-dependent influences of the frontal eye-fields on feature-responsive visual cortex.

    PubMed

    Heinen, Klaartje; Feredoes, Eva; Weiskopf, Nikolaus; Ruff, Christian C; Driver, Jon

    2014-11-01

    Voluntary selective attention can prioritize different features in a visual scene. The frontal eye-fields (FEF) are one potential source of such feature-specific top-down signals, but causal evidence for influences on visual cortex (as was shown for "spatial" attention) has remained elusive. Here, we show that transcranial magnetic stimulation (TMS) applied to right FEF increased the blood oxygen level-dependent (BOLD) signals in visual areas processing "target feature" but not in "distracter feature"-processing regions. TMS-induced BOLD signals increase in motion-responsive visual cortex (MT+) when motion was attended in a display with moving dots superimposed on face stimuli, but in face-responsive fusiform area (FFA) when faces were attended to. These TMS effects on BOLD signal in both regions were negatively related to performance (on the motion task), supporting the behavioral relevance of this pathway. Our findings provide new causal evidence for the human FEF in the control of nonspatial "feature"-based attention, mediated by dynamic influences on feature-specific visual cortex that vary with the currently attended property. © The Author 2013. Published by Oxford University Press.

  13. Modeling and visualization of carrier motion in organic films by optical second harmonic generation and Maxwell-displacement current

    NASA Astrophysics Data System (ADS)

    Iwamoto, Mitsumasa; Manaka, Takaaki; Taguchi, Dai

    2015-09-01

    The probing and modeling of carrier motions in materials as well as in electronic devices is a fundamental research subject in science and electronics. According to the Maxwell electromagnetic field theory, carriers are a source of electric field. Therefore, by probing the dielectric polarization caused by the electric field arising from moving carriers and dipoles, we can find a way to visualize the carrier motions in materials and in devices. The techniques used here are an electrical Maxwell-displacement current (MDC) measurement and a novel optical method based on the electric field induced optical second harmonic generation (EFISHG) measurement. The MDC measurement probes changes of induced charge on electrodes, while the EFISHG probes nonlinear polarization induced in organic active layers due to the coupling of electron clouds of molecules and electro-magnetic waves of an incident laser beam in the presence of a DC field caused by electrons and holes. Both measurements allow us to probe dynamical carrier motions in solids through the detection of dielectric polarization phenomena originated from dipolar motions and electron transport. In this topical review, on the basis of Maxwell’s electro-magnetism theory of 1873, which stems from Faraday’s idea, the concept for probing electron and hole transport in solids by using the EFISHG is discussed in comparison with the conventional time of flight (TOF) measurement. We then visualize carrier transit in organic devices, i.e. organic field effect transistors, organic light emitting diodes, organic solar cells, and others. We also show that visualizing an EFISHG microscopic image is a novel way for characterizing anisotropic carrier transport in organic thin films. We also discuss the concept of the detection of rotational dipolar motions in monolayers by means of the MDC measurement, which is capable of probing the change of dielectric spontaneous polarization formed by dipoles in organic monolayers. 
Finally, we conclude that the ideas and experiments on EFISHG and MDC lead to a novel way of analyzing the dynamical motions of electrons, holes, and dipoles in solids, and are thus applicable to organic electronic devices.

  14. Stimulus size and eccentricity in visually induced perception of horizontally translational self-motion.

    PubMed

    Nakamura, S; Shimojo, S

    1998-10-01

    The effects of the size and eccentricity of the visual stimulus upon visually induced perception of self-motion (vection) were examined with various sizes of central and peripheral visual stimulation. Analysis indicated that the strength of vection increased linearly with the size of the area in which the moving pattern was presented, but there was no difference in vection strength between central and peripheral stimuli when stimulus sizes were the same. Thus, the effect of stimulus size is homogeneous across eccentricities in the visual field.

  15. Motion-form interactions beyond the motion integration level: evidence for interactions between orientation and optic flow signals.

    PubMed

    Pavan, Andrea; Marotti, Rosilari Bellacosa; Mather, George

    2013-05-31

    Motion and form encoding are closely coupled in the visual system. A number of physiological studies have shown that neurons in the striate and extrastriate cortex (e.g., V1 and MT) are selective for motion direction parallel to their preferred orientation, but some neurons also respond to motion orthogonal to their preferred spatial orientation. Recent psychophysical research (Mather, Pavan, Bellacosa, & Casco, 2012) has demonstrated that the strength of adaptation to two fields of transparently moving dots is modulated by simultaneously presented orientation signals, suggesting that the interaction occurs at the level of motion integrating receptive fields in the extrastriate cortex. In the present psychophysical study, we investigated whether motion-form interactions take place at a higher level of neural processing where optic flow components are extracted. In Experiment 1, we measured the duration of the motion aftereffect (MAE) generated by contracting or expanding dot fields in the presence of either radial (parallel) or concentric (orthogonal) counterphase pedestal gratings. To tap the stage at which optic flow is extracted, we measured the duration of the phantom MAE (Weisstein, Maguire, & Berbaum, 1977) in which we adapted and tested different parts of the visual field, with orientation signals presented either in the adapting (Experiment 2) or nonadapting (Experiments 3 and 4) sectors. Overall, the results showed that motion adaptation is suppressed most by orientation signals orthogonal to optic flow direction, suggesting that motion-form interactions also take place at the global motion level where optic flow is extracted.

  16. Filling gaps in visual motion for target capture

    PubMed Central

    Bosco, Gianfranco; Delle Monache, Sergio; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka; Lacquaniti, Francesco

    2015-01-01

    A remarkable challenge our brain must face constantly when interacting with the environment is represented by ambiguous and, at times, even missing sensory information. This is particularly compelling for visual information, being the main sensory system we rely upon to gather cues about the external world. It is not uncommon, for example, that objects catching our attention may disappear temporarily from view, occluded by visual obstacles in the foreground. Nevertheless, we are often able to keep our gaze on them throughout the occlusion or even catch them on the fly in the face of the transient lack of visual motion information. This implies that the brain can fill the gaps of missing sensory information by extrapolating the object motion through the occlusion. In recent years, much experimental evidence has been accumulated that both perceptual and motor processes exploit visual motion extrapolation mechanisms. Moreover, neurophysiological and neuroimaging studies have identified brain regions potentially involved in the predictive representation of the occluded target motion. Within this framework, ocular pursuit and manual interceptive behavior have proven to be useful experimental models for investigating visual extrapolation mechanisms. Studies in these fields have pointed out that visual motion extrapolation processes depend on manifold information related to short-term memory representations of the target motion before the occlusion, as well as to longer term representations derived from previous experience with the environment. We will review recent oculomotor and manual interception literature to provide up-to-date views on the neurophysiological underpinnings of visual motion extrapolation. PMID:25755637

  17. Filling gaps in visual motion for target capture.

    PubMed

    Bosco, Gianfranco; Monache, Sergio Delle; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka; Lacquaniti, Francesco

    2015-01-01

    A remarkable challenge our brain must face constantly when interacting with the environment is represented by ambiguous and, at times, even missing sensory information. This is particularly compelling for visual information, being the main sensory system we rely upon to gather cues about the external world. It is not uncommon, for example, that objects catching our attention may disappear temporarily from view, occluded by visual obstacles in the foreground. Nevertheless, we are often able to keep our gaze on them throughout the occlusion or even catch them on the fly in the face of the transient lack of visual motion information. This implies that the brain can fill the gaps of missing sensory information by extrapolating the object motion through the occlusion. In recent years, much experimental evidence has been accumulated that both perceptual and motor processes exploit visual motion extrapolation mechanisms. Moreover, neurophysiological and neuroimaging studies have identified brain regions potentially involved in the predictive representation of the occluded target motion. Within this framework, ocular pursuit and manual interceptive behavior have proven to be useful experimental models for investigating visual extrapolation mechanisms. Studies in these fields have pointed out that visual motion extrapolation processes depend on manifold information related to short-term memory representations of the target motion before the occlusion, as well as to longer term representations derived from previous experience with the environment. We will review recent oculomotor and manual interception literature to provide up-to-date views on the neurophysiological underpinnings of visual motion extrapolation.

  18. Acoustic facilitation of object movement detection during self-motion

    PubMed Central

    Calabro, F. J.; Soto-Faraco, S.; Vaina, L. M.

    2011-01-01

    In humans, as well as most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations. PMID:21307050
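The flow-parsing mechanism described above, estimating the self-motion component of optic flow and subtracting it, reduces to vector arithmetic in a simple case. The sketch below uses a toy radial-expansion model of forward translation; the positions, expansion rate, and retinal vectors are hypothetical values, not data from the study.

```python
# During forward self-motion, a scene point at position (x, y)
# relative to the heading direction produces radial outward retinal
# flow roughly proportional to its eccentricity (toy model).

def self_motion_flow(x, y, expansion_rate=0.2):
    """Retinal flow vector induced by forward observer translation."""
    return (expansion_rate * x, expansion_rate * y)

def parse_object_motion(retinal_vec, position):
    """Flow parsing: subtract the estimated self-motion component
    from the measured retinal motion of a target."""
    fx, fy = self_motion_flow(*position)
    return (retinal_vec[0] - fx, retinal_vec[1] - fy)

# A target at (5, 0) with measured retinal motion (1.0, 0.3): after
# removing the (1.0, 0.0) expansion component, the residual (0.0, 0.3)
# is the target's own scene-relative motion.
residual = parse_object_motion((1.0, 0.3), (5.0, 0.0))
```

In the study, a congruent moving sound effectively made this residual easier to detect, which is why co-localized auditory cues improved object-motion detection.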

  19. Impaired Velocity Processing Reveals an Agnosia for Motion in Depth.

    PubMed

    Barendregt, Martijn; Dumoulin, Serge O; Rokers, Bas

    2016-11-01

    Many individuals with normal visual acuity are unable to discriminate the direction of 3-D motion in a portion of their visual field, a deficit previously referred to as a stereomotion scotoma. The origin of this visual deficit has remained unclear. We hypothesized that the impairment is due to a failure in the processing of one of the two binocular cues to motion in depth: changes in binocular disparity over time or interocular velocity differences. We isolated the contributions of these two cues and found that sensitivity to interocular velocity differences, but not changes in binocular disparity, varied systematically with observers' ability to judge motion direction. We therefore conclude that the inability to interpret motion in depth is due to a failure in the neural mechanisms that combine velocity signals from the two eyes. Given these results, we argue that the deficit should be considered a prevalent but previously unrecognized agnosia specific to the perception of visual motion. © The Author(s) 2016.
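The two binocular cues the study isolates can be made concrete with a small numeric example. All values below are hypothetical; for geometrically consistent input the change-of-disparity (CD) and interocular velocity difference (IOVD) cues agree numerically, and the study's finding is that observers' deficits track the IOVD cue specifically.

```python
def changing_disparity_cue(disparities, dt):
    """CD cue: rate of change of binocular disparity over time."""
    return [(d2 - d1) / dt for d1, d2 in zip(disparities, disparities[1:])]

def iovd_cue(left_velocities, right_velocities):
    """IOVD cue: difference between the two eyes' retinal velocities."""
    return [vl - vr for vl, vr in zip(left_velocities, right_velocities)]

# A target approaching the head midline: the two eyes see equal and
# opposite horizontal image motion (hypothetical positions, dt in s).
dt = 0.1
left_pos = [0.0, 0.1, 0.2, 0.3]
right_pos = [0.0, -0.1, -0.2, -0.3]

disparity = [l - r for l, r in zip(left_pos, right_pos)]
vl = [(b - a) / dt for a, b in zip(left_pos, left_pos[1:])]
vr = [(b - a) / dt for a, b in zip(right_pos, right_pos[1:])]

cd = changing_disparity_cue(disparity, dt)
iovd = iovd_cue(vl, vr)
```

Because the cues coincide for real motion in depth, dissociating them experimentally (as the authors did) requires stimuli that selectively disrupt one cue while preserving the other.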

  20. Falcons pursue prey using visual motion cues: new perspectives from animal-borne cameras

    PubMed Central

    Kane, Suzanne Amador; Zamani, Marjon

    2014-01-01

    This study reports on experiments on falcons wearing miniature videocameras mounted on their backs or heads while pursuing flying prey. Videos of hunts by a gyrfalcon (Falco rusticolus), gyrfalcon (F. rusticolus)/Saker falcon (F. cherrug) hybrids and peregrine falcons (F. peregrinus) were analyzed to determine apparent prey positions on their visual fields during pursuits. These video data were then interpreted using computer simulations of pursuit steering laws observed in insects and mammals. A comparison of the empirical and modeling data indicates that falcons use cues due to the apparent motion of prey on the falcon's visual field to track and capture flying prey via a form of motion camouflage. The falcons also were found to maintain their prey's image at visual angles consistent with using their shallow fovea. These results should prove relevant for understanding the co-evolution of pursuit and evasion, as well as the development of computer models of predation and the integration of sensory and locomotion systems in biomimetic robots. PMID:24431144

  1. Falcons pursue prey using visual motion cues: new perspectives from animal-borne cameras.

    PubMed

    Kane, Suzanne Amador; Zamani, Marjon

    2014-01-15

    This study reports on experiments on falcons wearing miniature videocameras mounted on their backs or heads while pursuing flying prey. Videos of hunts by a gyrfalcon (Falco rusticolus), gyrfalcon (F. rusticolus)/Saker falcon (F. cherrug) hybrids and peregrine falcons (F. peregrinus) were analyzed to determine apparent prey positions on their visual fields during pursuits. These video data were then interpreted using computer simulations of pursuit steering laws observed in insects and mammals. A comparison of the empirical and modeling data indicates that falcons use cues due to the apparent motion of prey on the falcon's visual field to track and capture flying prey via a form of motion camouflage. The falcons also were found to maintain their prey's image at visual angles consistent with using their shallow fovea. These results should prove relevant for understanding the co-evolution of pursuit and evasion, as well as the development of computer models of predation and the integration of sensory and locomotion systems in biomimetic robots.

  2. Simulated self-motion in a visual gravity field: sensitivity to vertical and horizontal heading in the human brain.

    PubMed

    Indovina, Iole; Maffei, Vincenzo; Pauwels, Karl; Macaluso, Emiliano; Orban, Guy A; Lacquaniti, Francesco

    2013-05-01

    Multiple visual signals are relevant to perception of heading direction. While the role of optic flow and depth cues has been studied extensively, little is known about the visual effects of gravity on heading perception. We used fMRI to investigate the contribution of gravity-related visual cues on the processing of vertical versus horizontal apparent self-motion. Participants experienced virtual roller-coaster rides in different scenarios, at constant speed or 1g-acceleration/deceleration. Imaging results showed that vertical self-motion coherent with gravity engaged the posterior insula and other brain regions that have been previously associated with vertical object motion under gravity. This selective pattern of activation was also found in a second experiment that included rectilinear motion in tunnels, whose direction was cued by the preceding open-air curves only. We argue that the posterior insula might perform high-order computations on visual motion patterns, combining different sensory cues and prior information about the effects of gravity. Medial-temporal regions including para-hippocampus and hippocampus were more activated by horizontal motion, preferably at constant speed, consistent with a role in inertial navigation. Overall, the results suggest partially distinct neural representations of the cardinal axes of self-motion (horizontal and vertical). Copyright © 2013 Elsevier Inc. All rights reserved.

  3. A Role for Mouse Primary Visual Cortex in Motion Perception.

    PubMed

    Marques, Tiago; Summers, Mathew T; Fioreze, Gabriela; Fridman, Marina; Dias, Rodrigo F; Feller, Marla B; Petreanu, Leopoldo

    2018-06-04

    Visual motion is an ethologically important stimulus throughout the animal kingdom. In primates, motion perception relies on specific higher-order cortical regions. Although mouse primary visual cortex (V1) and higher-order visual areas show direction-selective (DS) responses, their role in motion perception remains unknown. Here, we tested whether V1 is involved in motion perception in mice. We developed a head-fixed discrimination task in which mice must report their perceived direction of motion from random dot kinematograms (RDKs). After training, mice made around 90% correct choices for stimuli with high coherence and performed significantly above chance for 16% coherent RDKs. Accuracy increased with both stimulus duration and visual field coverage of the stimulus, suggesting that mice in this task integrate motion information in time and space. Retinal recordings showed that thalamically projecting On-Off DS ganglion cells display DS responses when stimulated with RDKs. Two-photon calcium imaging revealed that neurons in layer (L) 2/3 of V1 display strong DS tuning in response to this stimulus. Thus, RDKs engage motion-sensitive retinal circuits as well as downstream visual cortical areas. Contralateral V1 activity played a key role in this motion direction discrimination task because its reversible inactivation with muscimol led to a significant reduction in performance. Neurometric-psychometric comparisons showed that an ideal observer could solve the task with the information encoded in DS L2/3 neurons. Motion discrimination of RDKs presents a powerful behavioral tool for dissecting the role of retino-forebrain circuits in motion processing. Copyright © 2018 Elsevier Ltd. All rights reserved.
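The random dot kinematogram stimulus used in this task can be sketched in a few lines. In the toy frame update below (illustrative Python; dot count, step size, and aperture size are arbitrary), a `coherence` fraction of dots steps in the signal direction while the rest step in random directions. Real RDKs typically also resample which dots carry the signal on every frame.

```python
import math
import random

def rdk_step(dots, coherence, direction=(1.0, 0.0), step=2.0,
             size=200.0, rng=None):
    """Advance one frame of a random dot kinematogram (RDK).
    The first round(coherence * N) dots move along `direction`;
    the remainder move in uniformly random directions. Dots wrap
    at the aperture edges."""
    rng = rng or random.Random(0)
    n_signal = int(round(coherence * len(dots)))
    out = []
    for i, (x, y) in enumerate(dots):
        if i < n_signal:
            dx, dy = direction        # signal dot
        else:
            a = rng.uniform(0.0, 2.0 * math.pi)
            dx, dy = math.cos(a), math.sin(a)  # noise dot
        out.append(((x + step * dx) % size, (y + step * dy) % size))
    return out
```

At 16% coherence, as in the abstract's weakest condition, only 16 of every 100 dots move coherently per frame, so the direction must be recovered by pooling over space and time.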

  4. Repeatability of automated perimetry: a comparison between standard automated perimetry with stimulus size III and V, matrix, and motion perimetry.

    PubMed

    Wall, Michael; Woodward, Kimberly R; Doyle, Carrie K; Artes, Paul H

    2009-02-01

    Standard automated perimetry (SAP) shows a marked increase in variability in damaged areas of the visual field. This study was conducted to test the hypothesis that larger stimuli are associated with more uniform variability, by investigating the retest variability of four perimetry tests: standard automated perimetry size III (SAP III), with the SITA standard strategy; SAP size V (SAP V), with the full-threshold strategy; Matrix (FDT II); and Motion perimetry. One eye each of 120 patients with glaucoma was examined on the same day with these four perimetric tests and retested 1 to 8 weeks later. The decibel scales were adjusted to make the tests' scales numerically similar. Retest variability was examined by establishing the distributions of retest threshold estimates, for each threshold level observed at the first test. The 5th and 95th percentiles of the retest distribution were used as point-wise limits of retest variability. Regression analyses were performed to quantify the relationship between visual field sensitivity and variability. With SAP III, the retest variability increased substantially with decreasing sensitivity. Corresponding increases with SAP V, Matrix, and Motion perimetry were considerably smaller or absent. With SAP III, sensitivity explained 22% of the retest variability (r(2)), whereas corresponding data for SAP V, Matrix, and Motion perimetry were 12%, 2%, and 2%, respectively. Variability of Matrix and Motion perimetry does not increase as substantially as that of SAP III in damaged areas of the visual field. Increased sampling with the larger stimuli of these techniques is the likely explanation for this finding. These properties may make these stimuli excellent candidates for early detection of visual field progression.
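The point-wise variability analysis can be re-implemented compactly. In the sketch below (our own simplification, not the authors' code), retest thresholds are pooled for all locations whose first-test estimate fell within a bin around a given level, and the 5th and 95th percentiles of that pooled distribution serve as the retest limits; the bin width and quantile method are assumptions.

```python
import statistics

def retest_limits(test_vals, retest_vals, level, tol=1.0):
    """Point-wise 5th/95th percentile limits of retest thresholds, for
    locations whose first-test estimate fell within +/- tol dB of `level`.
    Returns None when too few points fall in the bin."""
    pooled = sorted(r for t, r in zip(test_vals, retest_vals)
                    if abs(t - level) <= tol)
    if len(pooled) < 2:
        return None
    q = statistics.quantiles(pooled, n=20)  # cut points at 5% steps
    return q[0], q[-1]                      # 5th and 95th percentiles
```

Applying this at every observed first-test level traces the variability envelope; an envelope that stays narrow at low sensitivities is what the abstract reports for the larger (size V, Matrix, Motion) stimuli.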

  5. Motion-form interactions beyond the motion integration level: Evidence for interactions between orientation and optic flow signals

    PubMed Central

    Pavan, Andrea; Marotti, Rosilari Bellacosa; Mather, George

    2013-01-01

    Motion and form encoding are closely coupled in the visual system. A number of physiological studies have shown that neurons in the striate and extrastriate cortex (e.g., V1 and MT) are selective for motion direction parallel to their preferred orientation, but some neurons also respond to motion orthogonal to their preferred spatial orientation. Recent psychophysical research (Mather, Pavan, Bellacosa, & Casco, 2012) has demonstrated that the strength of adaptation to two fields of transparently moving dots is modulated by simultaneously presented orientation signals, suggesting that the interaction occurs at the level of motion integrating receptive fields in the extrastriate cortex. In the present psychophysical study, we investigated whether motion-form interactions take place at a higher level of neural processing where optic flow components are extracted. In Experiment 1, we measured the duration of the motion aftereffect (MAE) generated by contracting or expanding dot fields in the presence of either radial (parallel) or concentric (orthogonal) counterphase pedestal gratings. To tap the stage at which optic flow is extracted, we measured the duration of the phantom MAE (Weisstein, Maguire, & Berbaum, 1977) in which we adapted and tested different parts of the visual field, with orientation signals presented either in the adapting (Experiment 2) or nonadapting (Experiments 3 and 4) sectors. Overall, the results showed that motion adaptation is suppressed most by orientation signals orthogonal to optic flow direction, suggesting that motion-form interactions also take place at the global motion level where optic flow is extracted. PMID:23729767

  6. Flight-path estimation in passive low-altitude flight by visual cues

    NASA Technical Reports Server (NTRS)

    Grunwald, Arthur J.; Kohn, S.

    1993-01-01

    A series of experiments was conducted, in which subjects had to estimate the flight path while passively being flown in straight or in curved motion over several types of nominally flat, textured terrain. Three computer-generated terrain types were investigated: (1) a random 'pole' field, (2) a flat field consisting of random rectangular patches, and (3) a field of random parallelepipeds. Experimental parameters were the velocity-to-height (V/h) ratio, the viewing distance, and the terrain type. Furthermore, the effect of obscuring parts of the visual field was investigated. Assumptions were made about the basic visual-field information by analyzing the pattern of line-of-sight (LOS) rate vectors in the visual field. The experimental results support these assumptions and show that, for both a straight as well as a curved flight path, the estimation accuracy and estimation times improve with the V/h ratio. Error scores for the curved flight path are found to be about 3 deg in visual angle higher than for the straight flight path, and the sensitivity to the V/h ratio is found to be considerably larger. For the straight motion, the flight path could be estimated successfully from local areas in the far field. Curved flight-path estimates have to rely on the entire LOS rate pattern.
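The dependence of the visual-field information on the velocity-to-height ratio can be derived directly. For level flight at speed v and height h over flat terrain, a ground point a horizontal distance x ahead lies at depression angle theta = atan(h/x); differentiating with respect to time gives the line-of-sight rate theta_dot = (v/h) * sin(theta)^2, so the entire LOS-rate pattern scales with v/h. A minimal numerical check (illustrative only):

```python
import math

def los_rate(v, h, x):
    """LOS angular rate (rad/s) to a ground point a horizontal distance x
    ahead, for level flight at speed v and height h. Algebraically equal
    to v*h / (x**2 + h**2); written as (v/h)*sin(theta)**2 to make the
    v/h scaling explicit."""
    theta = math.atan2(h, x)
    return (v / h) * math.sin(theta) ** 2
```

Scaling v and h together (fixed v/h) leaves the rate at any given visual angle unchanged, which is why estimation performance in such tasks tracks the V/h ratio rather than speed or height alone.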

  7. Interactions between motion and form processing in the human visual system.

    PubMed

    Mather, George; Pavan, Andrea; Bellacosa Marotti, Rosilari; Campana, Gianluca; Casco, Clara

    2013-01-01

    The predominant view of motion and form processing in the human visual system assumes that these two attributes are handled by separate and independent modules. Motion processing involves filtering by direction-selective sensors, followed by integration to solve the aperture problem. Form processing involves filtering by orientation-selective and size-selective receptive fields, followed by integration to encode object shape. It has long been known that motion signals can influence form processing in the well-known Gestalt principle of common fate; texture elements which share a common motion property are grouped into a single contour or texture region. However, recent research in psychophysics and neuroscience indicates that the influence of form signals on motion processing is more extensive than previously thought. First, the salience and apparent direction of moving lines depends on how the local orientation and direction of motion combine to match the receptive field properties of motion-selective neurons. Second, orientation signals generated by "motion-streaks" influence motion processing; motion sensitivity, apparent direction and adaptation are affected by simultaneously present orientation signals. Third, form signals generated by human body shape influence biological motion processing, as revealed by studies using point-light motion stimuli. Thus, form-motion integration seems to occur at several different levels of cortical processing, from V1 to STS.

  8. Interactions between motion and form processing in the human visual system

    PubMed Central

    Mather, George; Pavan, Andrea; Bellacosa Marotti, Rosilari; Campana, Gianluca; Casco, Clara

    2013-01-01

    The predominant view of motion and form processing in the human visual system assumes that these two attributes are handled by separate and independent modules. Motion processing involves filtering by direction-selective sensors, followed by integration to solve the aperture problem. Form processing involves filtering by orientation-selective and size-selective receptive fields, followed by integration to encode object shape. It has long been known that motion signals can influence form processing in the well-known Gestalt principle of common fate; texture elements which share a common motion property are grouped into a single contour or texture region. However, recent research in psychophysics and neuroscience indicates that the influence of form signals on motion processing is more extensive than previously thought. First, the salience and apparent direction of moving lines depends on how the local orientation and direction of motion combine to match the receptive field properties of motion-selective neurons. Second, orientation signals generated by “motion-streaks” influence motion processing; motion sensitivity, apparent direction and adaptation are affected by simultaneously present orientation signals. Third, form signals generated by human body shape influence biological motion processing, as revealed by studies using point-light motion stimuli. Thus, form-motion integration seems to occur at several different levels of cortical processing, from V1 to STS. PMID:23730286

  9. Exploring direct 3D interaction for full horizontal parallax light field displays using leap motion controller.

    PubMed

    Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega

    2015-04-14

    This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they are emitted from scene points. Each scene point is rendered individually resulting in more realistic and accurate 3D visualization compared to other 3D displaying technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gesture tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup was also evaluated in a user study with test subjects. The results of the study revealed high user preference for free hand interaction with light field display as well as relatively low cognitive demand of this technique. Further, our results also revealed some limitations and adjustments of the proposed setup to be addressed in future work.

  10. Visuomotor adaptation to a visual rotation is gravity dependent.

    PubMed

    Toma, Simone; Sciutti, Alessandra; Papaxanthis, Charalambos; Pozzo, Thierry

    2015-03-15

    Humans perform vertical and horizontal arm motions with different temporal patterns. The specific velocity profiles are chosen by the central nervous system by integrating the gravitational force field to minimize energy expenditure. However, what happens when a visuomotor rotation is applied, so that a motion performed in the horizontal plane is perceived as vertical? We investigated the dynamic of the adaptation of the spatial and temporal properties of a pointing motion during prolonged exposure to a 90° visuomotor rotation, where a horizontal movement was associated with a vertical visual feedback. We found that participants immediately adapted the spatial parameters of motion to the conflicting visual scene in order to keep their arm trajectory straight. In contrast, the initial symmetric velocity profiles specific for a horizontal motion were progressively modified during the conflict exposure, becoming more asymmetric and similar to those appropriate for a vertical motion. Importantly, this visual effect that increased with repetitions was not followed by a consistent aftereffect when the conflicting visual feedback was absent (catch and washout trials). In a control experiment we demonstrated that an intrinsic representation of the temporal structure of perceived vertical motions could provide the error signal allowing for this progressive adaptation of motion timing. These findings suggest that gravity strongly constrains motor learning and the reweighting process between visual and proprioceptive sensory inputs, leading to the selection of a motor plan that is suboptimal in terms of energy expenditure. Copyright © 2015 the American Physiological Society.

  11. Postural and Spatial Orientation Driven by Virtual Reality

    PubMed Central

    Keshner, Emily A.; Kenyon, Robert V.

    2009-01-01

    Orientation in space is a perceptual variable intimately related to postural orientation that relies on visual and vestibular signals to correctly identify our position relative to vertical. We have combined a virtual environment with motion of a posture platform to produce visual-vestibular conditions that allow us to explore how motion of the visual environment may affect perception of vertical and, consequently, affect postural stabilizing responses. In order to involve a higher level perceptual process, we needed to create a visual environment that was immersive. We did this by developing visual scenes that possess contextual information using color, texture, and 3-dimensional structures. Update latency of the visual scene was close to physiological latencies of the vestibulo-ocular reflex. Using this system we found that even when healthy young adults stand and walk on a stable support surface, they are unable to ignore wide field of view visual motion and they adapt their postural orientation to the parameters of the visual motion. Balance training within our environment elicited measurable rehabilitation outcomes. Thus we believe that virtual environments can serve as a clinical tool for evaluation and training of movement in situations that closely reflect conditions found in the physical world. PMID:19592796

  12. The effect of internal and external fields of view on visually induced motion sickness.

    PubMed

    Bos, Jelte E; de Vries, Sjoerd C; van Emmerik, Martijn L; Groen, Eric L

    2010-07-01

    Field of view (FOV) is said to affect visually induced motion sickness. FOV, however, is characterized by an internal setting used by the graphics generator (iFOV) and an external factor determined by screen size and viewing distance (eFOV). We hypothesized that especially the incongruence between iFOV and eFOV would lead to sickness. To that end we used a computer game environment with different iFOV and eFOV settings, and found the opposite effect. We speculate that the relative large differences between iFOV and eFOV used in this experiment caused the discrepancy, as may be explained by assuming an observer model controlling body motion. Copyright 2009 Elsevier Ltd. All rights reserved.

  13. Perceived change in orientation from optic flow in the central visual field

    NASA Technical Reports Server (NTRS)

    Dyre, Brian P.; Andersen, George J.

    1988-01-01

    The effects of internal depth within a simulation display on perceived changes in orientation have been studied. Subjects monocularly viewed displays simulating observer motion within a volume of randomly positioned points through a window which limited the field of view to 15 deg. Changes in perceived spatial orientation were measured by changes in posture. The extent of internal depth within the display, the presence or absence of visual information specifying change in orientation, and the frequency of motion supplied by the display were examined. It was found that increased sway occurred at frequencies equal to or below 0.375 Hz when motion at these frequencies was displayed. The extent of internal depth had no effect on the perception of changing orientation.

  14. Self-Management of Patient Body Position, Pose, and Motion Using Wide-Field, Real-Time Optical Measurement Feedback: Results of a Volunteer Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parkhurst, James M.; Price, Gareth J. (E-mail: gareth.price@christie.nhs.uk; Faculty of Medical and Human Sciences, Manchester Academic Health Sciences Centre, University of Manchester, Manchester)

    2013-12-01

    Purpose: We present the results of a clinical feasibility study, performed in 10 healthy volunteers undergoing a simulated treatment over 3 sessions, to investigate the use of a wide-field visual feedback technique intended to help patients control their pose while reducing motion during radiation therapy treatment. Methods and Materials: An optical surface sensor is used to capture wide-area measurements of a subject's body surface with visualizations of these data displayed back to them in real time. In this study we hypothesize that this active feedback mechanism will enable patients to control their motion and help them maintain their setup pose and position. A capability hierarchy of 3 different level-of-detail abstractions of the measured surface data is systematically compared. Results: Use of the device enabled volunteers to increase their conformance to a reference surface, as measured by decreased variability across their body surfaces. The use of visual feedback also enabled volunteers to reduce their respiratory motion amplitude to 1.7 ± 0.6 mm compared with 2.7 ± 1.4 mm without visual feedback. Conclusions: The use of live feedback of their optically measured body surfaces enabled a set of volunteers to better manage their pose and motion when compared with free breathing. The method is suitable to be taken forward to patient studies.

  15. Vection: the contributions of absolute and relative visual motion.

    PubMed

    Howard, I P; Howard, A

    1994-01-01

    Inspection of a visual scene rotating about the vertical body axis induces a compelling sense of self rotation, or circular vection. Circular vection is suppressed by stationary objects seen beyond the moving display but not by stationary objects in the foreground. We hypothesised that stationary objects in the foreground facilitate vection because they introduce a relative-motion signal into what would otherwise be an absolute-motion signal. Vection latency and magnitude were measured with a full-field moving display and with stationary objects of various sizes and at various positions in the visual field. The results confirmed the hypothesis. Vection latency was longer when there were no stationary objects in view than when stationary objects were in view. The effect of stationary objects was particularly evident at low stimulus velocities. At low velocities a small stationary point significantly increased vection magnitude in spite of the fact that, at higher stimulus velocities and with other stationary objects in view, fixation on a stationary point, if anything, reduced vection. Changing the position of the stationary objects in the field of view did not affect vection latencies or magnitudes.

  16. Video quality assessment method motivated by human visual perception

    NASA Astrophysics Data System (ADS)

    He, Meiling; Jiang, Gangyi; Yu, Mei; Song, Yang; Peng, Zongju; Shao, Feng

    2016-11-01

    Research on video quality assessment (VQA) plays a crucial role in improving the efficiency of video coding and the performance of video processing. It is well acknowledged that the motion energy model generates motion energy responses in a middle temporal area by simulating the receptive field of neurons in V1 for the motion perception of the human visual system. Motivated by the biological evidence for visual motion perception, a VQA method is proposed in this paper, which comprises the motion perception quality index and the spatial index. To be more specific, the motion energy model is applied to evaluate the temporal distortion severity of each frequency component generated from the difference of Gaussian filter bank, which produces the motion perception quality index, and the gradient similarity measure is used to evaluate the spatial distortion of the video sequence to get the spatial quality index. The experimental results on the LIVE, CSIQ, and IVP video databases demonstrate that the random forests regression technique trained by the generated quality indices corresponds closely to human visual perception and offers significant improvements over comparable well-performing methods. The proposed method has higher consistency with subjective perception and higher generalization capability.
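The spatial index described above rests on a gradient similarity measure. The sketch below is a generic, pure-Python version of that idea (the forward-difference gradients and the stabilizing constant c are our assumptions; the paper's exact filters and constants may differ): per-pixel similarity is 1 when the two frames' gradient magnitudes match and falls toward 0 as they diverge.

```python
def gradient_similarity(a, b, c=0.01):
    """SSIM-style gradient similarity map between two same-sized 2-D
    frames a and b (lists of lists of floats). Uses clamped forward
    differences for gradients; c avoids division by zero in flat regions."""
    h, w = len(a), len(a[0])
    def gmag(img, i, j):
        gx = img[i][min(j + 1, w - 1)] - img[i][j]
        gy = img[min(i + 1, h - 1)][j] - img[i][j]
        return (gx * gx + gy * gy) ** 0.5
    return [[(2 * gmag(a, i, j) * gmag(b, i, j) + c) /
             (gmag(a, i, j) ** 2 + gmag(b, i, j) ** 2 + c)
             for j in range(w)] for i in range(h)]
```

Averaging such a map over each frame, then over the sequence, yields a single spatial-distortion score of the kind the method feeds to its regressor.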

  17. Impaired Activation of Visual Attention Network for Motion Salience Is Accompanied by Reduced Functional Connectivity between Frontal Eye Fields and Visual Cortex in Strabismic Amblyopia

    PubMed Central

    Wang, Hao; Crewther, Sheila G.; Liang, Minglong; Laycock, Robin; Yu, Tao; Alexander, Bonnie; Crewther, David P.; Wang, Jian; Yin, Zhengqin

    2017-01-01

    Strabismic amblyopia is now acknowledged to be more than a simple loss of acuity and to involve alterations in visually driven attention, though whether this applies to both stimulus-driven and goal-directed attention has not been explored. Hence we investigated monocular threshold performance during a motion salience-driven attention task involving detection of a coherent dot motion target in one of four quadrants in adult controls and those with strabismic amblyopia. Psychophysical motion thresholds were impaired for the strabismic amblyopic eye, requiring longer inspection time and consequently slower target speed for detection compared to the fellow eye or control eyes. We compared fMRI activation and functional connectivity between four ROIs of the occipital-parieto-frontal visual attention network [primary visual cortex (V1), motion sensitive area V5, intraparietal sulcus (IPS) and frontal eye fields (FEF)], during a suprathreshold version of the motion-driven attention task, and also a simple goal-directed task, requiring voluntary saccades to targets randomly appearing along a horizontal line. Activation was compared when viewed monocularly by controls and the amblyopic and its fellow eye in strabismics. BOLD activation was weaker in IPS, FEF and V5 for both tasks when viewing through the amblyopic eye compared to viewing through the fellow eye or control participants' non-dominant eye. No difference in V1 activation was seen between the amblyopic and fellow eye, nor between the two eyes of control participants during the motion salience task, though V1 activation was significantly less through the amblyopic eye than through the fellow eye and control group non-dominant eye viewing during the voluntary saccade task. Specifically, FEF showed reduced functional connectivity with visual cortical nodes during the motion salience task through the amblyopic eye, despite suprathreshold detection performance. This suggests that the reduced ability of the amblyopic eye to activate the frontal components of the attention networks may help explain the aberrant control of visual attention and eye movements in amblyopes. PMID:28484381

  18. Visual motion perception predicts driving hazard perception ability.

    PubMed

    Lacherez, Philippe; Au, Sandra; Wood, Joanne M

    2014-02-01

    To examine the basis of previous findings of an association between indices of driving safety and visual motion sensitivity and to examine whether this association could be explained by low-level changes in visual function. A total of 36 visually normal participants (aged 19-80 years) completed a battery of standard vision tests including visual acuity, contrast sensitivity and automated visual fields and two tests of motion perception including sensitivity for movement of a drifting Gabor stimulus and sensitivity for displacement in a random dot kinematogram (Dmin). Participants also completed a hazard perception test (HPT), which measured participants' response times to hazards embedded in video recordings of real-world driving, which has been shown to be linked to crash risk. Dmin for the random dot stimulus ranged from -0.88 to -0.12 log minutes of arc, and the minimum drift rate for the Gabor stimulus ranged from 0.01 to 0.35 cycles per second. Both measures of motion sensitivity significantly predicted response times on the HPT. In addition, while the relationship involving the HPT and motion sensitivity for the random dot kinematogram was partially explained by the other visual function measures, the relationship with sensitivity for detection of the drifting Gabor stimulus remained significant even after controlling for these variables. These findings suggest that motion perception plays an important role in the visual perception of driving-relevant hazards independent of other areas of visual function and should be further explored as a predictive test of driving safety. Future research should explore the causes of reduced motion perception to develop better interventions to improve road safety. © 2012 The Authors. Acta Ophthalmologica © 2012 Acta Ophthalmologica Scandinavica Foundation.

  19. Visual recovery in cortical blindness is limited by high internal noise

    PubMed Central

    Cavanaugh, Matthew R.; Zhang, Ruyuan; Melnick, Michael D.; Das, Anasuya; Roberts, Mariel; Tadin, Duje; Carrasco, Marisa; Huxlin, Krystel R.

    2015-01-01

    Damage to the primary visual cortex typically causes cortical blindness (CB) in the hemifield contralateral to the damaged hemisphere. Recent evidence indicates that visual training can partially reverse CB at trained locations. Whereas training induces near-complete recovery of coarse direction and orientation discriminations, deficits in fine motion processing remain. Here, we systematically disentangle components of the perceptual inefficiencies present in CB fields before and after coarse direction discrimination training. In seven human CB subjects, we measured threshold versus noise functions before and after coarse direction discrimination training in the blind field and at corresponding intact field locations. Threshold versus noise functions were analyzed within the framework of the linear amplifier model and the perceptual template model. Linear amplifier model analysis identified internal noise as a key factor differentiating motion processing across the tested areas, with visual training reducing internal noise in the blind field. Differences in internal noise also explained residual perceptual deficits at retrained locations. These findings were confirmed with perceptual template model analysis, which further revealed that the major residual deficits between retrained and intact field locations could be explained by differences in internal additive noise. There were no significant differences in multiplicative noise or the ability to process external noise. Together, these results highlight the critical role of altered internal noise processing in mediating training-induced visual recovery in CB fields, and may explain residual perceptual deficits relative to intact regions of the visual field. PMID:26389544
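The linear amplifier model used in this analysis has a compact closed form. In the hedged sketch below (a generic parameterization; the symbols are ours, not necessarily the paper's), the contrast threshold at a fixed performance level d' is c = (d'/beta) * sqrt(N_ext^2 + N_eq^2): thresholds are flat at low external noise, where the equivalent internal noise N_eq dominates, and rise in proportion to N_ext once it exceeds N_eq. A larger fitted N_eq therefore elevates the flat segment, the signature attributed here to blind-field locations.

```python
import math

def lam_threshold(n_ext, n_eq, beta, d_prime=1.0):
    """Contrast threshold predicted by the linear amplifier model:
    c = (d'/beta) * sqrt(N_ext**2 + N_eq**2).
    n_ext: external noise contrast; n_eq: equivalent internal noise;
    beta: gain (calculation efficiency term). Generic parameter names."""
    return (d_prime / beta) * math.sqrt(n_ext ** 2 + n_eq ** 2)
```

Comparing fitted n_eq between blind-field and intact-field threshold-versus-noise curves, before and after training, is the kind of comparison the abstract describes.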

  20. Real-time monitoring and visualization of the multi-dimensional motion of an anisotropic nanoparticle

    NASA Astrophysics Data System (ADS)

    Go, Gi-Hyun; Heo, Seungjin; Cho, Jong-Hoi; Yoo, Yang-Seok; Kim, Minkwan; Park, Chung-Hyun; Cho, Yong-Hoon

    2017-03-01

    As interest in anisotropic particles has increased in various research fields, methods of tracking such particles have become increasingly desirable. Here, we present a new and intuitive method to monitor the Brownian motion of a nanowire, which can construct and visualize multi-dimensional motion of a nanowire confined in an optical trap, using a dual particle tracking system. We measured the isolated angular fluctuations and translational motion of the nanowire in the optical trap, and determined its physical properties, such as stiffness and torque constants, depending on laser power and polarization direction. This has wide implications in nanoscience and nanotechnology with levitated anisotropic nanoparticles.

  1. Multiple-stage ambiguity in motion perception reveals global computation of local motion directions.

    PubMed

    Rider, Andrew T; Nishida, Shin'ya; Johnston, Alan

    2016-12-01

    The motion of a 1D image feature, such as a line, seen through a small aperture, or the small receptive field of a neural motion sensor, is underconstrained, and it is not possible to derive the true motion direction from a single local measurement. This is referred to as the aperture problem. How the visual system solves the aperture problem is a fundamental question in visual motion research. In the estimation of motion vectors through integration of ambiguous local motion measurements at different positions, conventional theories assume that the object motion is a rigid translation, with motion signals sharing a common motion vector within the spatial region over which the aperture problem is solved. However, this strategy fails for global rotation. Here we show that the human visual system can estimate global rotation directly through spatial pooling of locally ambiguous measurements, without an intervening step that computes local motion vectors. We designed a novel ambiguous global flow stimulus, which is globally as well as locally ambiguous. The global ambiguity implies that the stimulus is simultaneously consistent with both a global rigid translation and an infinite number of global rigid rotations. By the standard view, the motion should always be seen as a global translation, but it appears to shift from translation to rotation as observers shift fixation. This finding indicates that the visual system can estimate local vectors using a global rotation constraint, and suggests that local motion ambiguity may not be resolved until consistencies with multiple global motion patterns are assessed.
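The rigid-translation strategy that this abstract contrasts with rotation pooling can be written down explicitly. Each aperture measurement constrains only the velocity component along the contour normal, n_i . v = s_i; under a common-translation assumption, the full vector is the intersection of these constraint lines, solvable by least squares. A minimal sketch (our illustration, not the authors' model):

```python
def solve_aperture(normals, speeds):
    """Recover a full 2-D motion vector from locally ambiguous 1-D
    measurements: each aperture yields only the speed s_i along its
    contour normal n_i = (nx, ny), i.e. n_i . v = s_i. Solves the
    2x2 normal equations; singular if all normals are parallel
    (the aperture problem remains unresolved in that case)."""
    a11 = sum(nx * nx for nx, ny in normals)
    a12 = sum(nx * ny for nx, ny in normals)
    a22 = sum(ny * ny for nx, ny in normals)
    b1 = sum(nx * s for (nx, ny), s in zip(normals, speeds))
    b2 = sum(ny * s for (nx, ny), s in zip(normals, speeds))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

The stimulus in this study is constructed so that the same set of (n_i, s_i) measurements is also consistent with global rotations, which is exactly why a translation-only solver of this kind cannot account for the perceived shift from translation to rotation.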

  2. Visual motion integration by neurons in the middle temporal area of a New World monkey, the marmoset

    PubMed Central

    Solomon, Selina S; Tailby, Chris; Gharaei, Saba; Camp, Aaron J; Bourne, James A; Solomon, Samuel G

    2011-01-01

    The middle temporal area (MT/V5) is an anatomically distinct region of primate visual cortex that is specialized for the processing of image motion. It is generally thought that some neurons in area MT are capable of signalling the motion of complex patterns, but this has only been established in the macaque monkey. We made extracellular recordings from single units in area MT of anaesthetized marmosets, a New World monkey. We show through quantitative analyses that some neurons (35 of 185; 19%) are capable of signalling pattern motion (‘pattern cells’). Across several dimensions, the visual response of pattern cells in marmosets is indistinguishable from that of pattern cells in macaques. Other neurons respond to the motion of oriented contours in a pattern (‘component cells’) or show intermediate properties. In addition, we encountered a subset of neurons (22 of 185; 12%) insensitive to sinusoidal gratings but very responsive to plaids and other two-dimensional patterns and otherwise indistinguishable from pattern cells. We compared the response of each cell class to drifting gratings and dot fields. In pattern cells, directional selectivity was similar for gratings and dot fields; in component cells, directional selectivity was weaker for dot fields than gratings. Pattern cells were more likely to have stronger suppressive surrounds, prefer lower spatial frequencies and prefer higher speeds than component cells. We conclude that pattern motion sensitivity is a feature of some neurons in area MT of both New and Old World monkeys, suggesting that this functional property is an important stage in motion analysis and is likely to be conserved in humans. PMID:21946851
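    The pattern/component classification referred to here is conventionally made by comparing a cell's plaid tuning curve against two predictions via partial correlations. The sketch below is illustrative only, using made-up von Mises tuning rather than the authors' data or exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
dirs = np.arange(0, 360, 30)  # drift directions sampled every 30 degrees

def von_mises(theta, mu, kappa=2.0):
    """Toy direction-tuning curve peaked at mu (degrees)."""
    return np.exp(kappa * np.cos(np.deg2rad(theta - mu)))

grating = von_mises(dirs, 90)              # tuning measured with gratings
pattern_pred = grating                     # pattern cell follows the plaid direction
# Component cell follows the two gratings of a 120-degree plaid:
component_pred = von_mises(dirs, 90 - 60) + von_mises(dirs, 90 + 60)

def partial_corr(r_xy, r_xz, r_yz):
    """Correlation of x and y with the influence of z removed."""
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

def classify(plaid_resp):
    """Label a plaid tuning curve as pattern- or component-like."""
    rp = np.corrcoef(plaid_resp, pattern_pred)[0, 1]
    rc = np.corrcoef(plaid_resp, component_pred)[0, 1]
    rpc = np.corrcoef(pattern_pred, component_pred)[0, 1]
    return "pattern" if partial_corr(rp, rc, rpc) > partial_corr(rc, rp, rpc) else "component"

noisy_pattern = pattern_pred + 0.1 * rng.standard_normal(dirs.size)
noisy_component = component_pred + 0.1 * rng.standard_normal(dirs.size)
```

    The partial correlation is needed because the two predictions are themselves correlated; a raw correlation with either one would double-count their shared shape.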

  3. Is it just motion that silences awareness of other visual changes?

    PubMed

    Peirce, Jonathan W

    2013-06-28

    When an array of visual elements is changing color, size, or shape incoherently, the changes are typically quite visible even when the overall color, size, or shape statistics of the field may not have changed. When the dots also move, however, the changes become much less apparent; awareness of them is "silenced" (Suchow & Alvarez, 2011). This finding might indicate that the perception of motion is of particular importance to the visual system, such that it is given priority in processing over other forms of visual change. Here we test whether that is the case by examining the converse: whether awareness of motion signals can be silenced by potent coherent changes in color or size. We find that they can, and with very similar effects, indicating that motion is not critical for silencing. Suchow and Alvarez's dots always moved in the same direction with the same speed, causing them to be grouped as a single entity. We also tested whether this coherence was a necessary component of the silencing effect. It is not; when the dot speeds are randomly selected, such that no coherent motion is present, the silencing effect remains. It is clear that neither motion nor grouping is directly responsible for the silencing effect. Silencing can be generated from any potent visual change.

  4. Computational model for perception of objects and motions.

    PubMed

    Yang, WenLu; Zhang, LiQing; Ma, LiBo

    2008-06-01

    Perception of objects and motions in the visual scene is one of the basic problems in the visual system. There exist 'What' and 'Where' pathways in the superior visual cortex, starting from the simple cells in the primary visual cortex. The former is able to perceive objects such as forms, color, and texture, and the latter perceives 'where', for example, velocity and direction of spatial movement of objects. This paper explores brain-like computational architectures of visual information processing. We propose a visual perceptual model and a computational mechanism for training it. The computational model is a three-layer network. The first layer is the input layer, which receives the stimuli from natural environments. The second layer represents the internal neural information. The connections between the first layer and the second layer, called the receptive fields of neurons, are self-adaptively learned based on the principle of sparse neural representation. To this end, we introduce the Kullback-Leibler divergence as the measure of independence between neural responses and derive the learning algorithm by minimizing the cost function. The proposed algorithm is applied to train the basis functions, namely receptive fields, which become localized, oriented, and bandpass. The resultant receptive fields of neurons in the second layer resemble those of simple cells in the primary visual cortex. Based on these basis functions, we further construct the third layer for perception of what and where in the superior visual cortex. The proposed model is able to perceive objects and their motions with high accuracy and strong robustness against additive noise. Computer simulation results in the final section show the feasibility of the proposed perceptual model and the high efficiency of the learning algorithm.
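    The abstract gives no equations, so as an illustrative analogue only: the familiar KL-divergence sparsity penalty on mean activations (as used in sparse autoencoders) captures the idea of penalizing non-sparse neural responses during basis-function learning. All names and values below are ours:

```python
import numpy as np

def kl_sparsity(rho, rho_hat):
    """Sum over units of KL(rho || rho_hat) for Bernoulli firing rates:
    zero only when each unit's mean response equals the target rate rho."""
    rho_hat = np.clip(rho_hat, 1e-8, 1 - 1e-8)
    return float(np.sum(rho * np.log(rho / rho_hat)
                        + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))))

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64))        # toy stand-ins for 8x8 image patches
W = 0.1 * rng.standard_normal((64, 32))   # candidate receptive fields (basis functions)

A = 1.0 / (1.0 + np.exp(-X @ W))          # sigmoidal responses of 32 model neurons
penalty = kl_sparsity(0.05, A.mean(axis=0))

# Gradient descent on (reconstruction error + this penalty) pushes the
# columns of W toward sparse codes; in the paper's account the learned
# receptive fields come out localized, oriented, and bandpass.
```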

  5. Steering microtubule shuttle transport with dynamically controlled magnetic fields

    DOE PAGES

    Mahajan, K. D.; Ruan, G.; Dorcéna, C. J.; ...

    2016-03-23

    Nanoscale control of matter is critical to the design of integrated nanosystems. Here, we describe a method to dynamically control directionality of microtubule (MT) motion using programmable magnetic fields. MTs are combined with magnetic quantum dots (i.e., MagDots) that are manipulated by external magnetic fields provided by magnetic nanowires. MT shuttles thus undergo both ATP-driven and externally-directed motion with a fluorescence component that permits simultaneous visualization of shuttle motion. This technology is used to alter the trajectory of MTs in motion and to pin MT motion. Ultimately, such an approach could be used to evaluate the MT-kinesin transport system and could serve as the basis for improved lab-on-a-chip technologies based on MT transport.

  6. Exploring Direct 3D Interaction for Full Horizontal Parallax Light Field Displays Using Leap Motion Controller

    PubMed Central

    Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega

    2015-01-01

    This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they are emitted from scene points. Each scene point is rendered individually resulting in more realistic and accurate 3D visualization compared to other 3D displaying technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gesture tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup was also evaluated in a user study with test subjects. The results of the study revealed high user preference for free hand interaction with light field display as well as relatively low cognitive demand of this technique. Further, our results also revealed some limitations and adjustments of the proposed setup to be addressed in future work. PMID:25875189

  7. Increase in MST activity correlates with visual motion learning: A functional MRI study of perceptual learning

    PubMed Central

    Larcombe, Stephanie J.; Kennard, Chris

    2017-01-01

    Repeated practice of a specific task can improve visual performance, but the neural mechanisms underlying this improvement in performance are not yet well understood. Here we trained healthy participants on a visual motion task daily for 5 days in one visual hemifield. Before and after training, we used functional magnetic resonance imaging (fMRI) to measure the change in neural activity. We also imaged a control group of participants on two occasions who did not receive any task training. While in the MRI scanner, all participants completed the motion task in the trained and untrained visual hemifields separately. Following training, participants improved their ability to discriminate motion direction in the trained hemifield and, to a lesser extent, in the untrained hemifield. The amount of task learning correlated positively with the change in activity in the medial superior temporal (MST) area. MST is the anterior portion of the human motion complex (hMT+). MST changes were localized to the hemisphere contralateral to the region of the visual field, where perceptual training was delivered. Visual areas V2 and V3a showed an increase in activity between the first and second scan in the training group, but this was not correlated with performance. The contralateral anterior hippocampus and bilateral dorsolateral prefrontal cortex (DLPFC) and frontal pole showed changes in neural activity that also correlated with the amount of task learning. These findings emphasize the importance of MST in perceptual learning of a visual motion task. Hum Brain Mapp 39:145–156, 2018. © 2017 Wiley Periodicals, Inc. PMID:28963815

  8. Visual and motion cueing in helicopter simulation

    NASA Technical Reports Server (NTRS)

    Bray, R. S.

    1985-01-01

    Early experience in fixed-cockpit simulators, with limited field of view, demonstrated the basic difficulties of simulating helicopter flight at the level of subjective fidelity required for confident evaluation of vehicle characteristics. More recent programs, utilizing large-amplitude cockpit motion and a multiwindow visual-simulation system, have received a much higher degree of pilot acceptance. However, none of these simulations has presented critical visual-flight tasks that have been accepted by the pilots as the full equivalent of flight. In this paper, the visual cues presented in the simulator are compared with those of flight in an attempt to identify deficiencies that contribute significantly to these assessments. For the low-amplitude maneuvering tasks normally associated with the hover mode, the unique motion capabilities of the Vertical Motion Simulator (VMS) at Ames Research Center permit nearly a full representation of vehicle motion. Especially appreciated in these tasks are the vertical-acceleration responses to collective control. For larger-amplitude maneuvering, motion fidelity must suffer diminution through direct attenuation, through high-pass ("washout") filtering of the computed cockpit accelerations, or both. Experiments were conducted in an attempt to determine the effects of these distortions on pilot performance of height-control tasks.

  9. Motion of a Charged Particle in a Constant and Uniform Electromagnetic Field

    ERIC Educational Resources Information Center

    Ladino, L. A.; Rondón, S. H.; Orduz, P.

    2015-01-01

    This paper focuses on the use of software developed by the authors that allows the visualization of the motion of a charged particle under the influence of magnetic and electric fields in 3D, at a level suitable for introductory physics courses. The software offers the possibility of studying a great number of physical situations that can…

  10. Visual perception and interception of falling objects: a review of evidence for an internal model of gravity.

    PubMed

    Zago, Myrka; Lacquaniti, Francesco

    2005-09-01

    Prevailing views on how we time the interception of a moving object assume that the visual inputs are informationally sufficient to estimate the time-to-contact from the object's kinematics. However, there are limitations in the visual system that raise questions about the general validity of these theories. Most notably, vision is poorly sensitive to arbitrary accelerations. How then does the brain deal with the motion of objects accelerated by Earth's gravity? Here we review evidence in favor of the view that the brain makes the best estimate about target motion based on visually measured kinematics and an a priori guess about the causes of motion. According to this theory, a predictive model is used to extrapolate time-to-contact from the expected kinetics in the Earth's gravitational field.
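    The gist of the gravity-extrapolation account can be shown with a toy computation (the values and function names below are ours): a first-order estimate from visual kinematics alone overestimates time-to-contact for a falling object, while extrapolating with an internalized g predicts the earlier, correct contact time.

```python
import math

G = 9.81  # m/s^2, the internalized gravitational acceleration

def ttc_first_order(distance, speed):
    """Time-to-contact assuming constant velocity -- all that vision,
    being poorly sensitive to acceleration, would reliably support."""
    return distance / speed

def ttc_with_gravity(distance, speed):
    """Time-to-contact for gravitational acceleration:
    solves distance = speed*t + 0.5*G*t**2 for the positive root."""
    return (-speed + math.sqrt(speed**2 + 2.0 * G * distance)) / G

# Ball 2 m above the hand, currently moving downward at 3 m/s:
naive = ttc_first_order(2.0, 3.0)     # ~0.67 s, too late to start the catch
gravity = ttc_with_gravity(2.0, 3.0)  # ~0.40 s, the actual contact time
```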

  11. Re-examining overlap between tactile and visual motion responses within hMT+ and STS

    PubMed Central

    Jiang, Fang; Beauchamp, Michael S.; Fine, Ione

    2015-01-01

    Here we examine overlap between tactile and visual motion BOLD responses within the human MT+ complex. Although several studies have reported tactile responses overlapping with hMT+, many used group average analyses, leaving it unclear whether these responses were restricted to sub-regions of hMT+. Moreover, previous studies either employed a tactile task or passive stimulation, leaving it unclear whether or not tactile responses in hMT+ are simply the consequence of visual imagery. Here we carried out a replication of one of the classic papers finding tactile responses in hMT+ (Hagen et al. 2002). We mapped MT and MST in individual subjects using visual field localizers. We then examined responses to tactile motion on the arm, either presented passively or in the presence of a visual task performed at fixation designed to minimize visualization of the concurrent tactile stimulation. To our surprise, without a visual task, we found only weak tactile motion responses in MT (6% of voxels showing tactile responses) and MST (2% of voxels). With an unrelated visual task designed to withdraw attention from the tactile modality, responses in MST reduced to almost nothing (<1% regions). Consistent with previous results, we did observe tactile responses in STS regions superior and anterior to hMT+. Despite the lack of individual overlap, group averaged responses produced strong spurious overlap between tactile and visual motion responses within hMT+ that resembled those observed in previous studies. The weak nature of tactile responses in hMT+ (and their abolition by withdrawal of attention) suggests that hMT+ may not serve as a supramodal motion processing module. PMID:26123373

  12. Dendro-dendritic interactions between motion-sensitive large-field neurons in the fly.

    PubMed

    Haag, Juergen; Borst, Alexander

    2002-04-15

    For visual course control, flies rely on a set of motion-sensitive neurons called lobula plate tangential cells (LPTCs). Among these cells, the so-called CH (centrifugal horizontal) cells shape by their inhibitory action the receptive field properties of other LPTCs called FD (figure detection) cells specialized for figure-ground discrimination based on relative motion. Studying the ipsilateral input circuitry of CH cells by means of dual-electrode and combined electrical-optical recordings, we find that CH cells receive graded input from HS (large-field horizontal system) cells via dendro-dendritic electrical synapses. This particular wiring scheme leads to a spatial blur of the motion image on the CH cell dendrite, and, after inhibiting FD cells, to an enhancement of motion contrast. This could be crucial for enabling FD cells to discriminate object from self motion.

  13. Structure from Motion

    DTIC Science & Technology

    1988-11-17

    …ambiguity in the recognition of partially occluded objects. …constraints involved in the problem. More information can be found in [1]. Motion-based segmentation: edge detection algorithms based on visual motion…

  14. Higher-order neural processing tunes motion neurons to visual ecology in three species of hawkmoths.

    PubMed

    Stöckl, A L; O'Carroll, D; Warrant, E J

    2017-06-28

    To sample information optimally, sensory systems must adapt to the ecological demands of each animal species. These adaptations can occur peripherally, in the anatomical structures of sensory organs and their receptors, and centrally, as higher-order neural processing in the brain. While a rich body of investigations has focused on peripheral adaptations, our understanding is sparse when it comes to central mechanisms. We quantified how peripheral adaptations in the eyes, and central adaptations in the wide-field motion vision system, set the trade-off between resolution and sensitivity in three species of hawkmoths active at very different light levels: nocturnal Deilephila elpenor, crepuscular Manduca sexta, and diurnal Macroglossum stellatarum. Using optical measurements and physiological recordings from the photoreceptors and wide-field motion neurons in the lobula complex, we demonstrate that all three species use spatial and temporal summation to improve visual performance in dim light. The diurnal Macroglossum relies least on summation, but can only see at brighter intensities. Manduca, with large sensitive eyes, relies less on neural summation than the smaller-eyed Deilephila, but both species attain similar visual performance at nocturnal light levels. Our results reveal how the visual systems of these three hawkmoth species are intimately matched to their visual ecologies. © 2017 The Author(s).

  15. Global versus local adaptation in fly motion-sensitive neurons

    PubMed Central

    Neri, Peter; Laughlin, Simon B

    2005-01-01

    Flies, like humans, experience a well-known consequence of adaptation to visual motion, the waterfall illusion. Direction-selective neurons in the fly lobula plate permit a detailed analysis of the mechanisms responsible for motion adaptation and their function. Most of these neurons are spatially non-opponent, they sum responses to motion in the preferred direction across their entire receptive field, and adaptation depresses responses by subtraction and by reducing contrast gain. When we adapted a small area of the receptive field to motion in its anti-preferred direction, we discovered that directional gain at unadapted regions was enhanced. This novel phenomenon shows that neuronal responses to the direction of stimulation in one area of the receptive field are dynamically adjusted to the history of stimulation both within and outside that area. PMID:16191636

  16. Memory and decision making in the frontal cortex during visual motion processing for smooth pursuit eye movements.

    PubMed

    Shichinohe, Natsuko; Akao, Teppei; Kurkin, Sergei; Fukushima, Junko; Kaneko, Chris R S; Fukushima, Kikuro

    2009-06-11

    Cortical motor areas are thought to contribute "higher-order processing," but what that processing might include is unknown. Previous studies of the smooth pursuit-related discharge of supplementary eye field (SEF) neurons have not distinguished activity associated with the preparation for pursuit from discharge related to processing or memory of the target motion signals. Using a memory-based task designed to separate these components, we show that the SEF contains signals coding retinal image-slip-velocity, memory, and assessment of visual motion direction, the decision of whether to pursue, and the preparation for pursuit eye movements. Bilateral muscimol injection into SEF resulted in directional errors in smooth pursuit, errors of whether to pursue, and impairment of initial correct eye movements. These results suggest an important role for the SEF in memory and assessment of visual motion direction and the programming of appropriate pursuit eye movements.

  17. Analysis procedures and subjective flight results of a simulator validation and cue fidelity experiment

    NASA Technical Reports Server (NTRS)

    Carr, Peter C.; Mckissick, Burnell T.

    1988-01-01

    A joint experiment to investigate simulator validation and cue fidelity was conducted by the Dryden Flight Research Facility of NASA Ames Research Center (Ames-Dryden) and NASA Langley Research Center. The primary objective was to validate the use of a closed-loop pilot-vehicle mathematical model as an analytical tool for optimizing the tradeoff between simulator fidelity requirements and simulator cost. The validation process includes comparing model predictions with simulation and flight test results to evaluate various hypotheses for differences in motion and visual cues and information transfer. A group of five pilots flew air-to-air tracking maneuvers in the Langley differential maneuvering simulator and visual motion simulator and in an F-14 aircraft at Ames-Dryden. The simulators used motion and visual cueing devices including a g-seat, a helmet loader, wide field-of-view horizon, and a motion base platform.

  18. Self-Monitoring of Gaze in High Functioning Autism

    ERIC Educational Resources Information Center

    Grynszpan, Ouriel; Nadel, Jacqueline; Martin, Jean-Claude; Simonin, Jerome; Bailleul, Pauline; Wang, Yun; Gepner, Daniel; Le Barillier, Florence; Constant, Jacques

    2012-01-01

    Atypical visual behaviour has been recently proposed to account for much of social misunderstanding in autism. Using an eye-tracking system and a gaze-contingent lens display, the present study explores self-monitoring of eye motion in two conditions: free visual exploration and guided exploration via blurring the visual field except for the focal…

  19. Comparison of the visual perception of a runway model in pilots and nonpilots during simulated night landing approaches.

    DOT National Transportation Integrated Search

    1978-03-01

    At night, reduced visual cues may promote illusions and a dangerous tendency for pilots to fly low during approaches to landing. Relative motion parallax (a difference in rate of apparent movement of objects in the visual field), a cue that can contr...

  20. Increase in MST activity correlates with visual motion learning: A functional MRI study of perceptual learning.

    PubMed

    Larcombe, Stephanie J; Kennard, Chris; Bridge, Holly

    2018-01-01

    Repeated practice of a specific task can improve visual performance, but the neural mechanisms underlying this improvement in performance are not yet well understood. Here we trained healthy participants on a visual motion task daily for 5 days in one visual hemifield. Before and after training, we used functional magnetic resonance imaging (fMRI) to measure the change in neural activity. We also imaged a control group of participants on two occasions who did not receive any task training. While in the MRI scanner, all participants completed the motion task in the trained and untrained visual hemifields separately. Following training, participants improved their ability to discriminate motion direction in the trained hemifield and, to a lesser extent, in the untrained hemifield. The amount of task learning correlated positively with the change in activity in the medial superior temporal (MST) area. MST is the anterior portion of the human motion complex (hMT+). MST changes were localized to the hemisphere contralateral to the region of the visual field, where perceptual training was delivered. Visual areas V2 and V3a showed an increase in activity between the first and second scan in the training group, but this was not correlated with performance. The contralateral anterior hippocampus and bilateral dorsolateral prefrontal cortex (DLPFC) and frontal pole showed changes in neural activity that also correlated with the amount of task learning. These findings emphasize the importance of MST in perceptual learning of a visual motion task. Hum Brain Mapp 39:145-156, 2018. © 2017 Wiley Periodicals, Inc. © 2017 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  1. Adaptive Gaze Strategies for Locomotion with Constricted Visual Field

    PubMed Central

    Authié, Colas N.; Berthoz, Alain; Sahel, José-Alain; Safran, Avinoam B.

    2017-01-01

    In retinitis pigmentosa (RP), loss of peripheral visual field accounts for most difficulties encountered in visuo-motor coordination during locomotion. The purpose of this study was to accurately assess the impact of peripheral visual field loss on gaze strategies during locomotion, and to identify compensatory mechanisms. Nine RP subjects presenting a central visual field limited to 10–25° in diameter, and nine healthy subjects, were asked to walk in one of three directions—straight ahead to a visual target, leftward and rightward through a door frame—with or without an obstacle on the way. Whole-body kinematics were recorded by motion capture, and gaze direction in space was reconstructed using an eye-tracker. Changes in gaze strategies were identified in RP subjects, including extensive exploration prior to walking, frequent fixations of the ground (even when knowing no obstacle was present), of door edges (essentially the proximal one), and of obstacle edges and corners, as well as alternating fixations of the door edges when approaching the door. This was associated with more frequent, sometimes larger rapid eye movements, larger movements, and forward tilting of the head. Despite the visual handicap, the trajectory geometry was identical between groups, with a small decrease in walking speed in RP subjects. These findings identify adaptive changes in sensory-motor coordination that serve to ensure visual awareness of the surroundings, detect changes in spatial configuration, collect information for self-motion, update the postural reference frame, and update egocentric distances to environmental objects. They are of crucial importance for the design of optimized rehabilitation procedures. PMID:28798674

  2. Evaluation of adaptation to visually induced motion sickness based on the maximum cross-correlation between pulse transmission time and heart rate.

    PubMed

    Sugita, Norihiro; Yoshizawa, Makoto; Abe, Makoto; Tanaka, Akira; Watanabe, Takashi; Chiba, Shigeru; Yambe, Tomoyuki; Nitta, Shin-ichi

    2007-09-28

    Computer graphics and virtual reality techniques are useful for developing automatic and effective rehabilitation systems. However, virtual environments that include unstable visual images presented on a wide-field screen or a head-mounted display tend to induce motion sickness. Motion sickness induced while using a rehabilitation system not only inhibits effective training but may also harm patients' health. Few studies have objectively evaluated the effects of repeated exposure to these stimuli on humans. The purpose of this study was to investigate adaptation to visually induced motion sickness using physiological data. An experiment was carried out in which the same video image was presented to human subjects three times. We evaluated changes in the intensity of motion sickness by a subjective score and by the physiological index rho(max), defined as the maximum cross-correlation coefficient between heart rate and pulse wave transmission time, which is considered to reflect autonomic nervous activity. The results showed adaptation to visually induced motion sickness with repeated presentation of the same image in both the subjective and objective indices. However, there were some subjects whose intensity of sickness increased. It was also possible to identify the parts of the video image related to motion sickness by analyzing changes in rho(max) over time. The physiological index rho(max) should be a good index for assessing the adaptation process to visually induced motion sickness and may be useful in checking the safety of rehabilitation systems employing new image technologies.
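    A minimal sketch of such a maximum cross-correlation index, with synthetic signals standing in for heart rate and pulse wave transmission time (not the authors' implementation):

```python
import numpy as np

def lagged_corr(x, y, lag):
    """Pearson correlation of x against y shifted by `lag` samples."""
    if lag > 0:
        x, y = x[lag:], y[:-lag]
    elif lag < 0:
        x, y = x[:lag], y[-lag:]
    return float(np.corrcoef(x, y)[0, 1])

def rho_max(x, y, max_lag=10):
    """Maximum cross-correlation coefficient over lags in [-max_lag, max_lag],
    an illustrative analogue of the rho(max) index described above."""
    return max(lagged_corr(x, y, k) for k in range(-max_lag, max_lag + 1))

rng = np.random.default_rng(1)
t = np.arange(200)
hr = np.sin(0.1 * t) + 0.1 * rng.standard_normal(t.size)  # stand-in heart rate
ptt = np.roll(hr, 3) + 0.1 * rng.standard_normal(t.size)  # delayed, noisy copy

# Tracking rho_max over successive windows of a recording is what lets
# one relate stretches of the video image to the autonomic response.
```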

  3. Differential responses in dorsal visual cortex to motion and disparity depth cues

    PubMed Central

    Arnoldussen, David M.; Goossens, Jeroen; van den Berg, Albert V.

    2013-01-01

    We investigated how interactions between monocular motion parallax and binocular cues to depth vary in human motion areas for wide-field visual motion stimuli (110 × 100°). We used fMRI with an extensive 2 × 3 × 2 factorial blocked design in which we combined two types of self-motion (translational motion and translational + rotational motion), with three categories of motion inflicted by the degree of noise (self-motion, distorted self-motion, and multiple object-motion), and two different view modes of the flow patterns (stereo and synoptic viewing). Interactions between disparity and motion category revealed distinct contributions to self- and object-motion processing in 3D. For cortical areas V6 and CSv, but not the anterior part of MT+ with bilateral visual responsiveness (MT+/b), we found a disparity-dependent effect of rotational flow and noise: When self-motion perception was degraded by adding rotational flow and moderate levels of noise, the BOLD responses were reduced compared with translational self-motion alone, but this reduction was cancelled by adding stereo information which also rescued the subject's self-motion percept. At high noise levels, when the self-motion percept gave way to a swarm of moving objects, the BOLD signal strongly increased compared to self-motion in areas MT+/b and V6, but only for stereo in the latter. BOLD response did not increase for either view mode in CSv. These different response patterns indicate different contributions of areas V6, MT+/b, and CSv to the processing of self-motion perception and the processing of multiple independent motions. PMID:24339808

  4. Emulating the visual receptive-field properties of MST neurons with a template model of heading estimation

    NASA Technical Reports Server (NTRS)

    Perrone, J. A.; Stone, L. S.

    1998-01-01

    We have proposed previously a computational neural-network model by which the complex patterns of retinal image motion generated during locomotion (optic flow) can be processed by specialized detectors acting as templates for specific instances of self-motion. The detectors in this template model respond to global optic flow by sampling image motion over a large portion of the visual field through networks of local motion sensors with properties similar to those of neurons found in the middle temporal (MT) area of primate extrastriate visual cortex. These detectors, arranged within cortical-like maps, were designed to extract self-translation (heading) and self-rotation, as well as the scene layout (relative distances) ahead of a moving observer. We then postulated that heading from optic flow is directly encoded by individual neurons acting as heading detectors within the medial superior temporal (MST) area. Others have questioned whether individual MST neurons can perform this function because some of their receptive-field properties seem inconsistent with this role. To resolve this issue, we systematically compared MST responses with those of detectors from two different configurations of the model under matched stimulus conditions. We found that the characteristic physiological properties of MST neurons can be explained by the template model. We conclude that MST neurons are well suited to support self-motion estimation via a direct encoding of heading and that the template model provides an explicit set of testable hypotheses that can guide future exploration of MST and adjacent areas within the superior temporal sulcus.
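    With hypothetical names and a pure expansion flow, the matching principle the model describes (each heading detector sums the agreement between the observed flow and its preferred radial template) can be sketched as:

```python
import numpy as np

def radial_template(points, focus):
    """Unit flow vectors radiating away from a candidate focus of expansion."""
    d = points - focus
    n = np.linalg.norm(d, axis=1, keepdims=True)
    n[n == 0.0] = 1.0  # avoid division by zero at the focus itself
    return d / n

def estimate_heading(points, flow, candidate_foci):
    """Return the candidate focus whose template best matches the flow,
    scored by a summed dot product over all sampled image points."""
    scores = [float(np.sum(radial_template(points, f) * flow))
              for f in candidate_foci]
    return candidate_foci[int(np.argmax(scores))]

# Flow sampled on a grid; the true heading produces expansion about (2, 1).
xs, ys = np.meshgrid(np.linspace(-10, 10, 15), np.linspace(-10, 10, 15))
points = np.column_stack([xs.ravel(), ys.ravel()])
flow = radial_template(points, np.array([2.0, 1.0]))

candidates = [np.array([x, y], dtype=float)
              for x in range(-5, 6) for y in range(-5, 6)]
best = estimate_heading(points, flow, candidates)
```

    Real MT-like inputs would be noisy, speed-tuned local motion signals rather than exact unit vectors, but the argmax over a map of template scores is the same operation the detectors perform.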

  5. The Complex Structure of Receptive Fields in the Middle Temporal Area

    PubMed Central

    Richert, Micah; Albright, Thomas D.; Krekelberg, Bart

    2012-01-01

    Neurons in the middle temporal area (MT) are often viewed as motion detectors that prefer a single direction of motion in a single region of space. This assumption plays an important role in our understanding of visual processing, and models of motion processing in particular. We used extracellular recordings in area MT of awake, behaving monkeys (M. mulatta) to test this assumption with a novel reverse correlation approach. Nearly half of the MT neurons in our sample deviated significantly from the classical view. First, in many cells, direction preference changed with the location of the stimulus within the receptive field. Second, the spatial response profile often had multiple peaks with apparent gaps in between. This shows that visual motion analysis in MT has access to motion detectors that are more complex than commonly thought. This complexity could be a mere byproduct of imperfect development, but can also be understood as the natural consequence of the non-linear, recurrent interactions among laterally connected MT neurons. An important direction for future research is to investigate whether these inhomogeneities are advantageous, how they can be incorporated into models of motion detection, and whether they can provide quantitative insight into the underlying effective connectivity. PMID:23508640

  6. Dissection of Drosophila Visual Circuits Implicated in Figure Motion

    NASA Astrophysics Data System (ADS)

    Kelley, Ross G.

    The Drosophila visual system offers a model to study the foundations of how motion signals are computed from raw visual input and transformed into behavioral output. My studies focus on how specific cells in the Drosophila nervous system implement this input-output transformation. The individual cell types are known from classical studies using Golgi impregnations, but the assembly of motion processing circuits and the behavioral outputs remain poorly understood. Using an electronic flight simulator for flies and a white-noise analysis developed by Aptekar et al., I screen specific neurons in the optic lobes for their behavioral roles. This approach measures wing responses to both the spatial and temporal dynamics of motion signals, yielding Spatiotemporal Action Fields (STAFs) across the entire visual panorama. Genetically inactivating a distinct group of cells in the third optic ganglion (the lobula plate), the Horizontal System (HS) cell group, produced a robust phenotype in the STAF analysis. Using the Gal4-UAS transgene expression system, we selectively inactivated the HS cells by expressing inward-rectifying potassium channels (Kir2.1) in their membranes, hyperpolarizing the cells and preventing them from participating in synaptic signaling. These experiments show that mutants lose steering responses to several distinct categories of figure motion and show reduced responses to figure motion set against a contrasting moving background, highlighting the role of HS cells in figure-tracking behavior. Finally, a synapse-inactivating protein, tetanus toxin (TNT), expressed in the HS cell group, produces a behavioral phenotype different from that of Kir2.1 overexpression. TNT, a bacterial neurotoxin, cleaves SNARE proteins, abolishing the synaptic output of the cell while leaving the dendrites intact and signaling normally, thereby preserving the dendro-dendritic interactions known to sculpt the visual receptive fields of these cells. The two distinct phenotypes produced by these genetically targeted silencers differentiate the functional roles of dendritic integration and axonal output in this important cell group.

  7. Stroboscopic Vision as a Treatment for Retinal Slip Induced Motion Sickness

    NASA Technical Reports Server (NTRS)

    Reschke, M. F.; Somers, J. T.; Ford, G.; Krnavek, J. M.; Hwang, E. J.; Leigh, R. J.; Estrada, A.

    2007-01-01

    Motion sickness in the general population is a significant problem driven by the increasingly more sophisticated modes of transportation, visual displays, and virtual reality environments. It is important to investigate non-pharmacological alternatives for the prevention of motion sickness for individuals who cannot tolerate the available anti-motion sickness drugs, or who are precluded from medication because of different operational environments. Based on the initial work of Melvill Jones, in which post hoc results indicated that motion sickness symptoms were prevented during visual reversal testing when stroboscopic vision was used to prevent retinal slip, we have evaluated stroboscopic vision as a method of preventing motion sickness in a number of different environments. Specifically, we have undertaken a five part study that was designed to investigate the effect of stroboscopic vision (either with a strobe light or LCD shutter glasses) on motion sickness while: (1) using visual field reversal, (2) reading while riding in a car (with or without external vision present), (3) making large pitch head movements during parabolic flight, (4) during exposure to rough seas in a small boat, and (5) seated and reading in the cabin area of a UH60 Black Hawk Helicopter during 20 min of provocative flight patterns.

  8. Phantom motion after effects--evidence of detectors for the analysis of optic flow.

    PubMed

    Snowden, R J; Milne, A B

    1997-10-01

    Electrophysiological recording from the extrastriate cortex of non-human primates has revealed neurons that have large receptive fields and are sensitive to various components of object or self movement, such as translations, rotations and expansion/contractions. If these mechanisms exist in human vision, they might be susceptible to adaptation that generates motion aftereffects (MAEs). Indeed, it might be possible to adapt the mechanism in one part of the visual field and reveal what we term a 'phantom MAE' in another part. The existence of phantom MAEs was probed by adapting to a pattern that contained motion in only two non-adjacent 'quarter' segments and then testing using patterns that had elements in only the other two segments. We also tested for the more conventional 'concrete' MAE by testing in the same two segments that had adapted. The strength of each MAE was quantified by measuring the percentage of dots that had to be moved in the opposite direction to the MAE in order to nullify it. Four experiments tested rotational motion, expansion/contraction motion, translational motion and a 'rotation' that consisted simply of the two segments that contained only translational motions of opposing direction. Compared to a baseline measurement where no adaptation took place, all subjects in all experiments exhibited both concrete and phantom MAEs, with the size of the latter approximately half that of the former. Adaptation to two segments that contained upward and downward motion induced the perception of leftward and rightward motion in another part of the visual field. This strongly suggests there are mechanisms in human vision that are sensitive to complex motions such as rotations.

  9. Summation of visual motion across eye movements reflects a nonspatial decision mechanism.

    PubMed

    Morris, Adam P; Liu, Charles C; Cropper, Simon J; Forte, Jason D; Krekelberg, Bart; Mattingley, Jason B

    2010-07-21

    Human vision remains perceptually stable even though retinal inputs change rapidly with each eye movement. Although the neural basis of visual stability remains unknown, a recent psychophysical study pointed to the existence of visual feature-representations anchored in environmental rather than retinal coordinates (e.g., "spatiotopic" receptive fields; Melcher and Morrone, 2003). In that study, sensitivity to a moving stimulus presented after a saccadic eye movement was enhanced when preceded by another moving stimulus at the same spatial location before the saccade. The finding is consistent with spatiotopic sensory integration, but it could also have arisen from a probabilistic improvement in performance due to the presence of more than one motion signal for the perceptual decision. Here we show that this statistical advantage accounts completely for summation effects in this task. We first demonstrate that measurements of summation are confounded by noise related to an observer's uncertainty about motion onset times. When this uncertainty is minimized, comparable summation is observed regardless of whether two motion signals occupy the same or different locations in space, and whether they contain the same or opposite directions of motion. These results are incompatible with the tuning properties of motion-sensitive sensory neurons and provide no evidence for a spatiotopic representation of visual motion. Instead, summation in this context reflects a decision mechanism that uses abstract representations of sensory events to optimize choice behavior.
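    The statistical advantage the authors invoke can be illustrated with a minimal simulation (the unit d' and Gaussian evidence are illustrative assumptions, not the study's data): pooling two independent samples of the same abstract evidence at the decision stage improves accuracy, with no spatiotopic sensory integration required.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
d_prime = 1.0   # assumed sensitivity to a single motion signal

# Two independent noisy samples of the same abstract motion evidence --
# no shared spatial location or sensory integration is implied.
s1 = rng.normal(d_prime, 1.0, n)
s2 = rng.normal(d_prime, 1.0, n)

p_one = np.mean(s1 > 0)                 # decide from one signal
p_two = np.mean((s1 + s2) / 2 > 0)      # pool both at the decision stage

print(f"one signal:  {p_one:.3f}")      # ~0.84, i.e. Phi(d')
print(f"two signals: {p_two:.3f}")      # ~0.92, i.e. Phi(d' * sqrt(2))
```

    This sqrt(2) gain in effective d' is exactly the kind of summation that can masquerade as spatiotopic integration when only one signal condition is compared against two.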

  10. Can biological motion research provide insight on how to reduce friendly fire incidents?

    PubMed

    Steel, Kylie A; Baxter, David; Dogramaci, Sera; Cobley, Stephen; Ellem, Eathan

    2016-10-01

    The ability to accurately detect, perceive, and recognize biological motion can be associated with a fundamental drive for survival, and it is a significant interest for perception researchers. This field examines various perceptual features of motion and has been assessed and applied in several real-world contexts (e.g., biometric, sport). Unexplored applications still exist however, including the military issue of friendly fire. There are many causes and processes leading to friendly fire and specific challenges that are associated with visual information extraction during engagement, such as brief glimpses, low acuity, camouflage, and uniform deception. Furthermore, visual information must often be processed under highly stressful (potentially threatening), time-constrained conditions that present a significant problem for soldiers. Biological motion research and anecdotal evidence from experienced combatants suggests that intentions, emotions, identities of human motion can be identified and discriminated, even when visual display is degraded or limited. Furthermore, research suggests that perceptual discriminatory capability of movement under visually constrained conditions is trainable. Therefore, given the limited military research linked to biological motion and friendly fire, an opportunity for cross-disciplinary investigations exists. The focus of this paper is twofold: first, to provide evidence for the possible link between biological motion factors and friendly fire, and second, to propose conceptual and methodological considerations and recommendations for perceptual-cognitive training within current military programs.

  11. Dual processing of visual rotation for bipedal stance control.

    PubMed

    Day, Brian L; Muller, Timothy; Offord, Joanna; Di Giulio, Irene

    2016-10-01

    When standing, the gain of the body-movement response to a sinusoidally moving visual scene has been shown to get smaller with faster stimuli, possibly through changes in the apportioning of visual flow to self-motion or environment motion. We investigated whether visual-flow speed similarly influences the postural response to a discrete, unidirectional rotation of the visual scene in the frontal plane. Contrary to expectation, the evoked postural response consisted of two sequential components with opposite relationships to visual motion speed. With faster visual rotation the early component became smaller, not through a change in gain but by changes in its temporal structure, while the later component grew larger. We propose that the early component arises from the balance control system minimising apparent self-motion, while the later component stems from the postural system realigning the body with gravity. The source of visual motion is inherently ambiguous such that movement of objects in the environment can evoke self-motion illusions and postural adjustments. Theoretically, the brain can mitigate this problem by combining visual signals with other types of information. A Bayesian model that achieves this was previously proposed and predicts a decreasing gain of postural response with increasing visual motion speed. Here we test this prediction for discrete, unidirectional, full-field visual rotations in the frontal plane of standing subjects. The speed (0.75-48 deg s⁻¹) and direction of visual rotation were pseudo-randomly varied and mediolateral responses were measured from displacements of the trunk and horizontal ground reaction forces. The behaviour evoked by this visual rotation was more complex than has hitherto been reported, consisting broadly of two consecutive components with respective latencies of ∼190 ms and >0.7 s. Both components were sensitive to visual rotation speed, but with diametrically opposite relationships. 
Thus, the early component decreased with faster visual rotation, while the later component increased. Furthermore, the decrease in size of the early component was not achieved by a simple attenuation of gain, but by a change in its temporal structure. We conclude that the two components represent expressions of different motor functions, both pertinent to the control of bipedal stance. We propose that the early response stems from the balance control system attempting to minimise unintended body motion, while the later response arises from the postural control system attempting to align the body with gravity. © 2016 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.

  12. Effects of set-size and selective spatial attention on motion processing.

    PubMed

    Dobkins, K R; Bosworth, R G

    2001-05-01

    In order to investigate the effects of divided attention and selective spatial attention on motion processing, we obtained direction-of-motion thresholds using a stochastic motion display under various attentional manipulations and stimulus durations (100-600 ms). To investigate divided attention, we compared motion thresholds obtained when a single motion stimulus was presented in the visual field (set-size=1) to those obtained when the motion stimulus was presented amongst three confusable noise distractors (set-size=4). The magnitude of the observed detriment in performance with an increase in set-size from 1 to 4 could be accounted for by a simple decision model based on signal detection theory, which assumes that attentional resources are not limited in capacity. To investigate selective attention, we compared motion thresholds obtained when a valid pre-cue alerted the subject to the location of the to-be-presented motion stimulus to those obtained when no pre-cue was provided. As expected, the effect of pre-cueing was large when the visual field contained noise distractors, an effect we attribute to "noise reduction" (i.e. the pre-cue allows subjects to exclude irrelevant distractors that would otherwise impair performance). In the single motion stimulus display, we found a significant benefit of pre-cueing only at short durations (≤150 ms), a result that can potentially be explained by a "time-to-orient" hypothesis (i.e. the pre-cue improves performance by eliminating the time it takes to orient attention to a peripheral stimulus at its onset, thereby increasing the time spent processing the stimulus). Thus, our results suggest that the visual motion system can analyze several stimuli simultaneously without limitations on sensory processing per se, and that spatial pre-cueing serves to reduce the effects of distractors and perhaps increase the effective processing time of the stimulus.
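    An unlimited-capacity decision model of this kind can be sketched as follows (the specific rule, report the direction of the location with the strongest absolute response, and the unit-variance Gaussian noise are illustrative assumptions): accuracy falls with set size purely because more noise sources compete at the decision stage, with no sensory capacity limit.

```python
import numpy as np

def percent_correct(d_prime, set_size, n=200_000, seed=2):
    # Signal-detection decision model with unlimited sensory capacity.
    rng = np.random.default_rng(seed)
    resp = rng.normal(0.0, 1.0, (n, set_size))
    resp[:, 0] += d_prime                  # location 0 carries the motion signal
    pick = np.abs(resp).argmax(axis=1)     # monitor all locations, take strongest
    chosen = resp[np.arange(n), pick]
    return float(np.mean(chosen > 0))      # report that location's direction

p1 = percent_correct(1.0, 1)
p4 = percent_correct(1.0, 4)
print(p1, p4)   # accuracy drops from set-size 1 to 4 despite unlimited capacity
```

    The predicted decrement with distractors present, and its disappearance once a valid pre-cue lets the observer discount the noise locations, is the "noise reduction" account described above.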

  13. Effects of Attention and Laterality on Motion and Orientation Discrimination in Deaf Signers

    ERIC Educational Resources Information Center

    Bosworth, Rain G.; Petrich, Jennifer A. F.; Dobkins, Karen R.

    2013-01-01

    Previous studies have asked whether visual sensitivity and attentional processing in deaf signers are enhanced or altered as a result of their different sensory experiences during development, i.e., auditory deprivation and exposure to a visual language. In particular, deaf and hearing signers have been shown to exhibit a right visual field/left…

  14. Distinct fMRI Responses to Self-Induced versus Stimulus Motion during Free Viewing in the Macaque

    PubMed Central

    Kaneko, Takaaki; Saleem, Kadharbatcha S.; Berman, Rebecca A.; Leopold, David A.

    2016-01-01

    Visual motion responses in the brain are shaped by two distinct sources: the physical movement of objects in the environment and motion resulting from one's own actions. The latter source, termed visual reafference, stems from movements of the head and body, and in primates from the frequent saccadic eye movements that mark natural vision. To study the relative contribution of reafferent and stimulus motion during natural vision, we measured fMRI activity in the brains of two macaques as they freely viewed >50 hours of naturalistic video footage depicting dynamic social interactions. We used eye movements obtained during scanning to estimate the level of reafferent retinal motion at each moment in time. We also estimated the net stimulus motion by analyzing the video content during the same time periods. Mapping the responses to these distinct sources of retinal motion, we found a striking dissociation in the distribution of visual responses throughout the brain. Reafferent motion drove fMRI activity in the early retinotopic areas V1, V2, V3, and V4, particularly in their central visual field representations, as well as lateral aspects of the caudal inferotemporal cortex (area TEO). However, stimulus motion dominated fMRI responses in the superior temporal sulcus, including areas MT, MST, and FST as well as more rostral areas. We discuss this pronounced separation of motion processing in the context of natural vision, saccadic suppression, and the brain's utilization of corollary discharge signals. SIGNIFICANCE STATEMENT Visual motion arises not only from events in the external world, but also from the movements of the observer. For example, even if objects are stationary in the world, the act of walking through a room or shifting one's eyes causes motion on the retina. This “reafferent” motion propagates into the brain as signals that must be interpreted in the context of real object motion. 
The delineation of whole-brain responses to stimulus versus self-generated retinal motion signals is critical for understanding visual perception and is of pragmatic importance given the increasing use of naturalistic viewing paradigms. The present study uses fMRI to demonstrate that the brain exhibits a fundamentally different pattern of responses to these two sources of retinal motion. PMID:27629710

  15. Distinct fMRI Responses to Self-Induced versus Stimulus Motion during Free Viewing in the Macaque.

    PubMed

    Russ, Brian E; Kaneko, Takaaki; Saleem, Kadharbatcha S; Berman, Rebecca A; Leopold, David A

    2016-09-14

    Visual motion responses in the brain are shaped by two distinct sources: the physical movement of objects in the environment and motion resulting from one's own actions. The latter source, termed visual reafference, stems from movements of the head and body, and in primates from the frequent saccadic eye movements that mark natural vision. To study the relative contribution of reafferent and stimulus motion during natural vision, we measured fMRI activity in the brains of two macaques as they freely viewed >50 hours of naturalistic video footage depicting dynamic social interactions. We used eye movements obtained during scanning to estimate the level of reafferent retinal motion at each moment in time. We also estimated the net stimulus motion by analyzing the video content during the same time periods. Mapping the responses to these distinct sources of retinal motion, we found a striking dissociation in the distribution of visual responses throughout the brain. Reafferent motion drove fMRI activity in the early retinotopic areas V1, V2, V3, and V4, particularly in their central visual field representations, as well as lateral aspects of the caudal inferotemporal cortex (area TEO). However, stimulus motion dominated fMRI responses in the superior temporal sulcus, including areas MT, MST, and FST as well as more rostral areas. We discuss this pronounced separation of motion processing in the context of natural vision, saccadic suppression, and the brain's utilization of corollary discharge signals. Visual motion arises not only from events in the external world, but also from the movements of the observer. For example, even if objects are stationary in the world, the act of walking through a room or shifting one's eyes causes motion on the retina. This "reafferent" motion propagates into the brain as signals that must be interpreted in the context of real object motion. 
The delineation of whole-brain responses to stimulus versus self-generated retinal motion signals is critical for understanding visual perception and is of pragmatic importance given the increasing use of naturalistic viewing paradigms. The present study uses fMRI to demonstrate that the brain exhibits a fundamentally different pattern of responses to these two sources of retinal motion.

  16. On-chip visual perception of motion: a bio-inspired connectionist model on FPGA.

    PubMed

    Torres-Huitzil, César; Girau, Bernard; Castellanos-Sánchez, Claudio

    2005-01-01

    Visual motion provides useful information for understanding the dynamics of a scene, allowing intelligent systems to interact with their environment. Motion computation is usually subject to real-time requirements that demand the design and implementation of specific hardware architectures. In this paper, the design of a hardware architecture for a bio-inspired neural model of motion estimation is presented. The motion estimation is based on a strongly localized bio-inspired connectionist model with a particular adaptation of spatio-temporal Gabor-like filtering. The architecture comprises three main modules that perform spatial, temporal, and excitatory-inhibitory connectionist processing. The biomimetic architecture is modeled, simulated and validated in VHDL. The synthesis results on a Field Programmable Gate Array (FPGA) device show that real-time performance is achievable at an affordable silicon area.
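    The spatio-temporal Gabor-like filtering at the heart of such models can be sketched in a few lines (1-D space for brevity; the frequencies and bandwidths below are illustrative, not the paper's parameters): a quadrature pair of space-time Gabors responds far more strongly to a grating drifting in its preferred direction than to the same grating drifting the opposite way.

```python
import numpy as np

def gabor_st(x, t, sf=0.1, tf=0.05, sigma_x=8.0, sigma_t=12.0, phase=0.0):
    # Space-time Gabor: a drifting sinusoid under a separable Gaussian envelope.
    X, T = np.meshgrid(x, t, indexing="ij")
    env = np.exp(-X**2 / (2 * sigma_x**2) - T**2 / (2 * sigma_t**2))
    return env * np.cos(2 * np.pi * (sf * X - tf * T) + phase)

x = np.arange(-32.0, 33.0)   # space (pixels)
t = np.arange(-32.0, 33.0)   # time (frames)
even = gabor_st(x, t, phase=0.0)
odd = gabor_st(x, t, phase=np.pi / 2)   # quadrature partner

def energy(direction):
    # Motion energy of a full-field grating drifting in +1 or -1 direction.
    stim = np.cos(2 * np.pi * (0.1 * x[:, None] - direction * 0.05 * t[None, :]))
    return (even * stim).sum() ** 2 + (odd * stim).sum() ** 2

print(energy(+1) > energy(-1))   # True: the preferred direction dominates
```

    Summing the squared even and odd responses makes the output phase-invariant, which is what makes such filters usable as local motion sensors in hardware pipelines.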

  17. The influence of an immersive virtual environment on the segmental organization of postural stabilizing responses.

    PubMed

    Keshner, E A; Kenyon, R V

    2000-01-01

    We examined the effect of a 3-dimensional stereoscopic scene on segmental stabilization. Eight subjects participated in static sway and locomotion experiments with a visual scene that moved sinusoidally or at constant velocity about the pitch or roll axes. Segmental displacements, Fast Fourier Transforms, and Root Mean Square values were calculated. In both pitch and roll, subjects exhibited greater magnitudes of motion in head and trunk than ankle. Smaller amplitudes and frequent phase reversals suggested control of the ankle by segmental proprioceptive inputs and ground reaction forces rather than by the visual-vestibular signals. Postural controllers may set limits of motion at each body segment rather than be governed solely by a perception of the visual vertical. Two locomotor strategies were also exhibited, implying that some subjects could override the effect of the roll axis optic flow field. Our results demonstrate task dependent differences that argue against using static postural responses to moving visual fields when assessing more dynamic tasks.

  18. Emulating the Visual Receptive Field Properties of MST Neurons with a Template Model of Heading Estimation

    NASA Technical Reports Server (NTRS)

    Perrone, John A.; Stone, Leland S.

    1997-01-01

    We have previously proposed a computational neural-network model by which the complex patterns of retinal image motion generated during locomotion (optic flow) can be processed by specialized detectors acting as templates for specific instances of self-motion. The detectors in this template model respond to global optic flow by sampling image motion over a large portion of the visual field through networks of local motion sensors with properties similar to neurons found in the middle temporal (MT) area of primate extrastriate visual cortex. The model detectors were designed to extract self-translation (heading), self-rotation, as well as the scene layout (relative distances) ahead of a moving observer, and are arranged in cortical-like heading maps to perform this function. Heading estimation from optic flow has been postulated by some to be implemented within the medial superior temporal (MST) area. Others have questioned whether MST neurons can fulfill this role because some of their receptive-field properties appear inconsistent with a role in heading estimation. To resolve this issue, we systematically compared MST single-unit responses with the outputs of model detectors under matched stimulus conditions. We found that the basic physiological properties of MST neurons can be explained by the template model. We conclude that MST neurons are well suited to support heading estimation and that the template model provides an explicit set of testable hypotheses which can guide future exploration of MST and adjacent areas within the primate superior temporal sulcus.
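    The core template computation can be sketched as follows (the pure-translation flow, the cosine matching rule, and the candidate grid are simplifying assumptions; the full model also handles rotation and depth): each heading detector sums the agreement between local motion directions and the radial pattern its template expects, and the best-matching template marks the focus of expansion.

```python
import numpy as np

rng = np.random.default_rng(3)

# Pure observer translation makes image motion radiate from the heading
# point (focus of expansion). Only directions are used here; the
# depth-dependent magnitudes are omitted for simplicity.
true_foe = np.array([0.3, -0.2])
pts = rng.uniform(-1, 1, (500, 2))
flow = pts - true_foe
flow /= np.linalg.norm(flow, axis=1, keepdims=True)

# Template bank: one detector per candidate heading, each summing the
# cosine match between local motion and its expected radial direction.
cands = [np.array([cx, cy])
         for cx in np.linspace(-1, 1, 21)
         for cy in np.linspace(-1, 1, 21)]

def template_response(foe):
    expected = pts - foe
    expected /= np.linalg.norm(expected, axis=1, keepdims=True)
    return np.sum(expected * flow)

best = max(cands, key=template_response)
print(best)   # the winning template sits at the true focus of expansion
```

    The winner-take-all readout over a map of such detectors is the sense in which heading is "directly encoded by individual neurons" in the model.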

  19. Improved Visual Cognition through Stroboscopic Training

    PubMed Central

    Appelbaum, L. Gregory; Schroeder, Julia E.; Cain, Matthew S.; Mitroff, Stephen R.

    2011-01-01

    Humans have a remarkable capacity to learn and adapt, but surprisingly little research has demonstrated generalized learning in which new skills and strategies can be used flexibly across a range of tasks and contexts. In the present work we examined whether generalized learning could result from visual–motor training under stroboscopic visual conditions. Individuals were assigned to either an experimental condition that trained with stroboscopic eyewear or to a control condition that underwent identical training with non-stroboscopic eyewear. The training consisted of multiple sessions of athletic activities during which participants performed simple drills such as throwing and catching. To determine if training led to generalized benefits, we used computerized measures to assess perceptual and cognitive abilities on a variety of tasks before and after training. Computer-based assessments included measures of visual sensitivity (central and peripheral motion coherence thresholds), transient spatial attention (a useful field of view – dual task paradigm), and sustained attention (multiple-object tracking). Results revealed that stroboscopic training led to significantly greater re-test improvement in central visual field motion sensitivity and transient attention abilities. No training benefits were observed for peripheral motion sensitivity or peripheral transient attention abilities, nor were benefits seen for sustained attention during multiple-object tracking. These findings suggest that stroboscopic training can effectively improve some, but not all aspects of visual perception and attention. PMID:22059078

  20. Oscillatory flow in the cochlea visualized by a magnetic resonance imaging technique.

    PubMed

    Denk, W; Keolian, R M; Ogawa, S; Jelinski, L W

    1993-02-15

    We report a magnetic resonance imaging technique that directly measures motion of cochlear fluids. It uses oscillating magnetic field gradients phase-locked to an external stimulus to selectively visualize and quantify oscillatory fluid motion. It is not invasive, and it does not require optical line-of-sight access to the inner ear. It permits the detection of displacements far smaller than the spatial resolution. The method is demonstrated on a phantom and on living rats. It is projected to have applications for auditory research, for the visualization of vocal tract dynamics during speech and singing, and for determination of the spatial distribution of mechanical relaxations in materials.
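    The underlying principle, spin phase accrued under a gradient oscillating in lockstep with the motion, can be checked with a small numerical sketch (the gradient strength, frequency, and displacement amplitude are illustrative values, not the paper's): even a 1 μm oscillation, far below voxel resolution, accumulates a measurable phase over a few stimulus cycles.

```python
import numpy as np

gamma = 2 * np.pi * 42.58e6   # 1H gyromagnetic ratio, rad s^-1 T^-1
f = 200.0                     # stimulus and gradient frequency, Hz
G0 = 10e-3                    # gradient amplitude, T/m
x0 = 1e-6                     # 1 micron displacement amplitude
cycles = 8

t = np.linspace(0.0, cycles / f, 20001)
dt = t[1] - t[0]
G = G0 * np.sin(2 * np.pi * f * t)
x = x0 * np.sin(2 * np.pi * f * t)   # fluid motion phase-locked to the gradient

# Accumulated spin phase: phi = gamma * integral of G(t) * x(t) dt.
# With G and x in phase, the sin^2 integrand accrues phase every cycle.
phi = gamma * np.sum(G * x) * dt
print(f"phase from a 1 um oscillation: {phi:.3f} rad")
```

    Motion in quadrature with the gradient would accrue no net phase, which is why the phase-locked acquisition selectively encodes the oscillatory component of the flow.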

  1. Selectivity to Translational Egomotion in Human Brain Motion Areas

    PubMed Central

    Pitzalis, Sabrina; Sdoia, Stefano; Bultrini, Alessandro; Committeri, Giorgia; Di Russo, Francesco; Fattori, Patrizia; Galletti, Claudio; Galati, Gaspare

    2013-01-01

    The optic flow generated when a person moves through the environment can be locally decomposed into several basic components, including radial, circular, translational and spiral motion. Since their analysis plays an important part in the visual perception and control of locomotion and posture, it is likely that some brain regions in the primate dorsal visual pathway are specialized to distinguish among them. The aim of this study is to explore the sensitivity to different types of egomotion-compatible visual stimulations in the human motion-sensitive regions of the brain. Event-related fMRI experiments, 3D motion and wide-field stimulation, functional localizers and brain mapping methods were used to study the sensitivity of six distinct motion areas (V6, MT, MST+, V3A, CSv and an Intra-Parietal Sulcus motion [IPSmot] region) to different types of optic flow stimuli. Results show that only areas V6, MST+ and IPSmot are specialized in distinguishing among the various types of flow patterns, with a high response for the translational flow which was maximum in V6 and IPSmot and less marked in MST+. Given that during egomotion the translational optic flow conveys differential information about the near and far external objects, areas V6 and IPSmot likely process visual egomotion signals to extract information about the relative distance of objects with respect to the observer. Since area V6 is also involved in distinguishing object-motion from self-motion, it could provide information about location in space of moving and static objects during self-motion, particularly in a dynamically unstable environment. PMID:23577096
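    Why detectors can be specialized for these components is easy to see in a toy construction (the grid sampling and the specific fields are illustrative): over a field symmetric about fixation, the radial, circular, and translational patterns are mutually orthogonal, so a detector matched to one of them draws nothing from the others.

```python
import numpy as np

# Sample image locations on a grid symmetric about the center.
g = np.linspace(-1, 1, 20)
X, Y = np.meshgrid(g, g)
r = np.sqrt(X**2 + Y**2) + 1e-9   # guard against division by zero

radial = np.stack([X / r, Y / r])              # expansion from the center
circular = np.stack([-Y / r, X / r])           # rotation about the center
translational = np.stack([np.ones_like(X),     # uniform rightward flow
                          np.zeros_like(X)])

def inner(a, b):
    # Inner product of two flow fields summed over the whole field.
    return float((a * b).sum())

print(inner(radial, circular))          # 0: orthogonal at every point
print(inner(radial, translational))     # ~0 by symmetry over the field
print(inner(circular, translational))   # ~0 by symmetry over the field
```

    A spiral pattern is simply a weighted sum of the radial and circular fields, which is why it can engage intermediate detectors.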

  2. New rules for visual selection: Isolating procedural attention.

    PubMed

    Ramamurthy, Mahalakshmi; Blaser, Erik

    2017-02-01

    High performance in well-practiced, everyday tasks-driving, sports, gaming-suggests a kind of procedural attention that can allocate processing resources to behaviorally relevant information in an unsupervised manner. Here we show that training can lead to a new, automatic attentional selection rule that operates in the absence of bottom-up, salience-driven triggers and willful top-down selection. Taking advantage of the fact that attention modulates motion aftereffects, observers were presented with a bivectorial display with overlapping, iso-salient red and green dot fields moving to the right and left, respectively, while distracted by a demanding auditory two-back memory task. Before training, since the motion vectors canceled each other out, no net motion aftereffect (MAE) was found. However, after 3 days (0.5 hr/day) of training, during which observers practiced selectively attending to the red, rightward field, a significant net MAE was observed-even when top-down selection was again distracted. Further experiments showed that these results were not due to perceptual learning, and that the new rule targeted the motion, and not the color of the target dot field, and global, not local, motion signals; thus, the new rule was: "select the rightward field." This study builds on recent work on selection history-driven and reward-driven biases, but uses a novel paradigm where the allocation of visual processing resources are measured passively, offline, and when the observer's ability to execute top-down selection is defeated.

  3. Cholinergic enhancement augments magnitude and specificity of visual perceptual learning in healthy humans

    PubMed Central

    Rokem, Ariel; Silver, Michael A.

    2010-01-01

    Summary Learning through experience underlies the ability to adapt to novel tasks and unfamiliar environments. However, learning must be regulated so that relevant aspects of the environment are selectively encoded. Acetylcholine (ACh) has been suggested to regulate learning by enhancing the responses of sensory cortical neurons to behaviorally-relevant stimuli [1]. In this study, we increased synaptic levels of ACh in the brains of healthy human subjects with the cholinesterase inhibitor donepezil (trade name: Aricept) and measured the effects of this cholinergic enhancement on visual perceptual learning. Each subject completed two five-day courses of training on a motion direction discrimination task [2], once while ingesting 5 mg of donepezil before every training session and once while placebo was administered. We found that cholinergic enhancement augmented perceptual learning for stimuli having the same direction of motion and visual field location used during training. In addition, perceptual learning under donepezil was more selective to the trained direction of motion and visual field location. These results, combined with previous studies demonstrating an increase in neuronal selectivity following cholinergic enhancement [3–5], suggest a possible mechanism by which ACh augments neural plasticity by directing activity to populations of neurons that encode behaviorally-relevant stimulus features. PMID:20850321

  4. Ethylene glycol revisited: Molecular dynamics simulations and visualization of the liquid and its hydrogen-bond network☆

    PubMed Central

    Kaiser, Alexander; Ismailova, Oksana; Koskela, Antti; Huber, Stefan E.; Ritter, Marcel; Cosenza, Biagio; Benger, Werner; Nazmutdinov, Renat; Probst, Michael

    2014-01-01

    Molecular dynamics simulations of liquid ethylene glycol described by the OPLS-AA force field were performed to gain insight into its hydrogen-bond structure. We use the population correlation function as a statistical measure of hydrogen-bond lifetime. To better understand the complicated hydrogen bonding, we developed new molecular visualization tools within the Vish Visualization shell and used them to visualize the life of each individual hydrogen bond. With this tool, hydrogen-bond formation and breaking, as well as clustering and chain formation in hydrogen-bonded liquids, can be observed directly. Liquid ethylene glycol at room temperature does not show significant clustering or chain building. Hydrogen bonds often break due to the rotational and vibrational motions of the molecules, leading to an H-bond half-life of approximately 1.5 ps. However, most of the H-bonds are subsequently re-formed, so that after 50 ps only 40% of them are irreversibly broken by diffusional motion. The hydrogen-bond half-life due to diffusional motion is 80.3 ps. The work was preceded by a careful check of various OPLS-based force fields used in the literature; they were found to lead to quite different angular and H-bond distributions. PMID:24748697
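
    The lifetime analysis above rests on a population correlation function. A minimal sketch of the intermittent variant, assuming a boolean bond-existence matrix rather than the authors' actual trajectory analysis:

```python
import numpy as np

def population_correlation(h):
    """Intermittent H-bond population correlation C(t) = <h(0)h(t)> / <h>,
    averaged over all time origins and bonds.  h[i, t] is True when bond i
    exists in frame t.  (Illustrative; the paper's exact estimator may differ.)"""
    h = h.astype(float)
    n_frames = h.shape[1]
    norm = h.mean()
    return np.array([(h[:, :n_frames - dt] * h[:, dt:]).mean() / norm
                     for dt in range(n_frames)])

h_const = np.ones((3, 10), dtype=bool)        # bonds that never break
c_const = population_correlation(h_const)     # C(t) = 1 at every lag
rng = np.random.default_rng(0)
h_rand = rng.random((2000, 50)) < 0.5         # memoryless 50% occupancy
c_rand = population_correlation(h_rand)       # C(t > 0) near 0.5
```

    A half-life such as the ~1.5 ps quoted above would then be read off as the first lag at which C(t) drops below 0.5.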

  5. A Compact VLSI System for Bio-Inspired Visual Motion Estimation.

    PubMed

    Shi, Cong; Luo, Gang

    2018-04-01

    This paper proposes a bio-inspired visual motion estimation algorithm based on motion energy, along with its compact very-large-scale integration (VLSI) architecture using low-cost embedded systems. The algorithm mimics the motion perception functions of retina, V1, and MT neurons in the primate visual system. It involves operations of ternary edge extraction, spatiotemporal filtering, motion energy extraction, and velocity integration. Moreover, we propose the concept of a confidence map to indicate the reliability of the estimation result at each probing location. Our algorithm involves only additions and multiplications during runtime, which is suitable for low-cost hardware implementation. The proposed VLSI architecture employs multiple (frame, pixel, and operation) levels of pipelining and massively parallel processing arrays to boost system performance. The array unit circuits are optimized to minimize hardware resource consumption. We have prototyped the proposed architecture on a low-cost field-programmable gate array platform (Zynq 7020) running at a 53-MHz clock frequency. It achieved 30-frame/s real-time performance for velocity estimation at 160 × 120 probing locations. A comprehensive evaluation experiment showed that the velocity estimated by our prototype has relatively small errors (average endpoint error < 0.5 pixel and angular error < 10°) for most motion cases.
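
    The motion-energy stage can be illustrated with a correlation-type detector, which for sinusoidal inputs yields the same opponent output as an energy model; this is a minimal stand-in, not the paper's ternary-edge pipeline:

```python
import numpy as np

def opponent_motion_response(frames, delay=1):
    """Opponent correlation (Reichardt-style) detector over a 1-D pixel row.
    frames: array (n_frames, n_pixels).  A positive mean response indicates
    net rightward motion; a negative response indicates leftward motion."""
    a = frames[:-delay, :-1]   # left pixel, earlier frame
    b = frames[delay:, 1:]     # right pixel, later frame
    c = frames[:-delay, 1:]    # right pixel, earlier frame
    d = frames[delay:, :-1]    # left pixel, later frame
    return float((a * b - c * d).mean())

x = np.arange(64)
frames_right = np.array([np.sin(0.3 * x - 0.2 * t) for t in range(50)])
frames_left = np.array([np.sin(0.3 * x + 0.2 * t) for t in range(50)])
```

    For the rightward-drifting sinusoid the opponent response is positive, and for the leftward drift it is negative, which is the quantity a velocity-integration stage would pool over space.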

  6. Biomechanical ToolKit: Open-source framework to visualize and process biomechanical data.

    PubMed

    Barre, Arnaud; Armand, Stéphane

    2014-04-01

    The C3D file format is widely used in the biomechanical field by companies and laboratories to store motion capture system data. However, few software packages can visualize and modify the entirety of the data in a C3D file. Our objective was to develop an open-source and multi-platform framework to read, write, modify and visualize data from any motion analysis system using the standard (C3D) file format as well as proprietary file formats (used by many companies producing motion capture systems). The Biomechanical ToolKit (BTK) was developed to provide cost-effective and efficient tools for the biomechanical community to easily deal with motion analysis data. A large panel of operations is available to read, modify and process data through a C++ API, bindings for high-level languages (Matlab, Octave, and Python), and a standalone application (Mokka). All these tools are open-source and cross-platform and run on all major operating systems (Windows, Linux, Mac OS X). Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  7. Visual Basic VPython Interface: Charged Particle in a Magnetic Field

    NASA Astrophysics Data System (ADS)

    Prayaga, Chandra

    2006-12-01

    A simple Visual Basic (VB) to VPython interface is described and illustrated with the example of a charged particle in a magnetic field. The interface passes data to Python through a text file that Python reads. The first component of the interface is a user-friendly data entry screen designed in VB, in which the user can input values of the charge, mass, initial position and initial velocity of the particle, and the magnetic field. A command button is coded to write these values to a text file, and another command button starts the VPython program, which reads the data from the text file, numerically solves the equation of motion, and provides the 3D graphics animation. Students can use the interface to run the program several times with different data and observe the changes in the motion.
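
    The text-file handshake can be sketched in a few lines of Python. The parameter names and file layout below are illustrative, not the article's actual format, and a Boris pusher stands in for whatever integrator the VPython program uses:

```python
import os
import tempfile
import numpy as np

# The VB front end would write a plain-text parameter file like this one.
path = os.path.join(tempfile.gettempdir(), "particle_params.txt")
with open(path, "w") as f:
    f.write("q 1.0\nm 1.0\nB 0.0 0.0 1.0\nr0 0.0 0.0 0.0\nv0 1.0 0.0 0.0\n")

params = {}
with open(path) as f:
    for line in f:
        name, *vals = line.split()
        params[name] = np.array([float(v) for v in vals])

q, m = params["q"][0], params["m"][0]
B, r, v = params["B"], params["r0"].copy(), params["v0"].copy()

dt, steps = 1e-3, 1000
for _ in range(steps):
    # Boris rotation of v about B (no electric field), then position drift.
    t_vec = (q * B / m) * (dt / 2)
    s_vec = 2 * t_vec / (1 + t_vec @ t_vec)
    v_prime = v + np.cross(v, t_vec)
    v = v + np.cross(v_prime, s_vec)
    r = r + v * dt
```

    For these parameters the gyroradius is mv/(qB) = 1, so the trajectory stays on a unit circle about (0, −1, 0), and the rotation step conserves |v| exactly.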

  8. Motion perception tasks as potential correlates to driving difficulty in the elderly

    NASA Astrophysics Data System (ADS)

    Raghuram, A.; Lakshminarayanan, V.

    2006-09-01

    Changes in demographics indicate that the population older than 65 is on the rise because of the aging of the ‘baby boom’ generation. This aging trend and driving-related accident statistics reveal the need for procedures and tests that would assess the driving ability of older adults and predict whether they would be safe or unsafe drivers. The literature shows that an attention-based test, the useful field of view (UFOV), was a significant predictor of accident rates compared with other visual function tests. The present study evaluates a qualitative trend in using motion perception tasks as potential visual perceptual correlates for screening elderly drivers who might have difficulty driving. Data were collected from 15 older subjects with a mean age of 71. The motion perception tasks included speed discrimination with radial and lamellar motion, time to collision using prediction motion, and estimation of the direction of heading. A motion index score, indicative of performance on all of the above-mentioned motion tasks, was calculated. Visual attention was assessed using the UFOV. A driving habit questionnaire was also administered for self-report of driving difficulties and accident rates. A qualitative trend based on frequency distributions shows that thresholds on the motion perception tasks succeed in identifying subjects who reported difficulty in certain aspects of driving and had had accidents. The correlation between UFOV and motion index scores was not significant, indicating that the two paradigms probably tap different aspects of visual information processing that are crucial to driving behaviour. Together, the UFOV and motion perception tasks may be a better predictor of at-risk or safe drivers than either measure alone.

  9. Blurred digital mammography images: an analysis of technical recall and observer detection performance.

    PubMed

    Ma, Wang Kei; Borgen, Rita; Kelly, Judith; Millington, Sara; Hilton, Beverley; Aspin, Rob; Lança, Carla; Hogg, Peter

    2017-03-01

    Blurred images in full-field digital mammography are a problem in the UK Breast Screening Programme. Technical recalls may be due to blurring not being seen on the lower resolution monitors used for review. This study assesses the visual detection of blurring on a 2.3-MP monitor and a 5-MP reporting grade monitor and proposes an observer standard for the visual detection of blurring on a 5-MP reporting grade monitor. 28 observers assessed 120 images for blurring; 20 images had no blurring present, whereas 100 images had blurring imposed through mathematical simulation at 0.2, 0.4, 0.6, 0.8 and 1.0 mm levels of motion. The technical recall rate for each monitor and the angular size at each level of motion were calculated. χ2 tests were used to test whether significant differences in blurring detection existed between the 2.3- and 5-MP monitors. The technical recall rates for the 2.3- and 5-MP monitors were 20.3% and 9.1%, respectively. The angular size for 0.2- to 1-mm motion varied from 55 to 275 arc s. The minimum amount of motion for visual detection of blurring in this study was 0.4 mm. For 0.2-mm simulated motion, there was no significant difference [χ2(1, N = 1095) = 1.61, p = 0.20] in blurring detection between the 2.3- and 5-MP monitors. According to this study, monitors ≤2.3 MP are not suitable for technical review of full-field digital mammography images for the detection of blur. Advances in knowledge: This research proposes the first observer standard for the visual detection of blurring.
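
    The quoted angular sizes (55–275 arc s) follow from simple trigonometry once a viewing distance is assumed; the 0.75 m default below is an assumption that reproduces the reported range, and the 2×2 χ2 statistic is the textbook formula rather than the authors' exact analysis:

```python
import math

def blur_angular_size_arcsec(motion_mm, viewing_distance_m=0.75):
    """Angular subtense of a blur displacement at the observer's eye.
    The default 0.75 m viewing distance is an illustrative assumption."""
    return math.degrees(math.atan(motion_mm / 1000.0 / viewing_distance_m)) * 3600.0

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 contingency table
    [[a, b], [c, d]] (no continuity correction)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
```

    With the assumed distance, blur_angular_size_arcsec(0.2) is about 55 arc s and blur_angular_size_arcsec(1.0) about 275 arc s, matching the range reported above.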

  10. Blurred digital mammography images: an analysis of technical recall and observer detection performance

    PubMed Central

    Ma, Wang Kei; Borgen, Rita; Kelly, Judith; Millington, Sara; Hilton, Beverley; Aspin, Rob; Lança, Carla; Hogg, Peter

    2017-01-01

    Objective: Blurred images in full-field digital mammography are a problem in the UK Breast Screening Programme. Technical recalls may be due to blurring not being seen on the lower resolution monitors used for review. This study assesses the visual detection of blurring on a 2.3-MP monitor and a 5-MP reporting grade monitor and proposes an observer standard for the visual detection of blurring on a 5-MP reporting grade monitor. Methods: 28 observers assessed 120 images for blurring; 20 images had no blurring present, whereas 100 images had blurring imposed through mathematical simulation at 0.2, 0.4, 0.6, 0.8 and 1.0 mm levels of motion. The technical recall rate for each monitor and the angular size at each level of motion were calculated. χ2 tests were used to test whether significant differences in blurring detection existed between the 2.3- and 5-MP monitors. Results: The technical recall rates for the 2.3- and 5-MP monitors were 20.3% and 9.1%, respectively. The angular size for 0.2- to 1-mm motion varied from 55 to 275 arc s. The minimum amount of motion for visual detection of blurring in this study was 0.4 mm. For 0.2-mm simulated motion, there was no significant difference [χ2(1, N = 1095) = 1.61, p = 0.20] in blurring detection between the 2.3- and 5-MP monitors. Conclusion: According to this study, monitors ≤2.3 MP are not suitable for technical review of full-field digital mammography images for the detection of blur. Advances in knowledge: This research proposes the first observer standard for the visual detection of blurring. PMID:28134567

  11. Motion cue effects on human pilot dynamics in manual control

    NASA Technical Reports Server (NTRS)

    Washizu, K.; Tanaka, K.; Endo, S.; Itoko, T.

    1977-01-01

    Two experiments were conducted to study the effects of motion cues on human pilots during tracking tasks. The moving-base simulator of the National Aerospace Laboratory was employed as the motion cue device, and an attitude director indicator or a projected visual field was employed as the visual cue device. The chosen controlled elements were second-order unstable systems. It was confirmed that with the aid of motion cues the pilot workload was lessened and consequently the limits of human controllability were enlarged. To clarify the mechanism of these effects, the describing functions of the human pilots were identified using spectral and time-domain analyses. The results of these analyses suggest that the sensory system for motion cues can effectively extract derivative information from the signal, which is consistent with existing physiological knowledge.

  12. Postural Instability Induced by Visual Motion Stimuli in Patients With Vestibular Migraine

    PubMed Central

    Lim, Yong-Hyun; Kim, Ji-Soo; Lee, Ho-Won; Kim, Sung-Hee

    2018-01-01

    Patients with vestibular migraine are susceptible to motion sickness. This study aimed to determine whether the severity of postural instability is related to susceptibility to motion sickness. We used a visual motion paradigm with two conditions of the stimulated retinal field and the head posture to quantify postural stability while maintaining a static stance in 18 patients with vestibular migraine and in 13 age-matched healthy subjects. Three parameters of postural stability showed differences between VM patients and controls: RMS velocity (0.34 ± 0.02 cm/s vs. 0.28 ± 0.02 cm/s), RMS acceleration (8.94 ± 0.74 cm/s2 vs. 6.69 ± 0.87 cm/s2), and sway area (1.77 ± 0.22 cm2 vs. 1.04 ± 0.25 cm2). Patients with vestibular migraine showed marked postural instability of the head and neck when visual stimuli were presented in the retinal periphery. The pseudo-Coriolis effect induced by head roll tilt was not responsible for the main differences in postural instability between patients and controls. Patients with vestibular migraine showed a higher visual dependency and low stability of the postural control system when maintaining quiet standing, which may be related to susceptibility to motion sickness. PMID:29930534
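
    The stabilometry measures reported above (RMS velocity, sway area) can be sketched from a centre-of-pressure trace. The 95% confidence-ellipse formula is a standard posturography convention and may not match the authors' exact computation:

```python
import numpy as np

def sway_metrics(cop, fs):
    """RMS sway velocity (cm/s) and 95% confidence-ellipse sway area (cm^2)
    from a centre-of-pressure trace `cop` of shape (n_samples, 2), in cm."""
    vel = np.diff(cop, axis=0) * fs                      # planar velocity, cm/s
    rms_velocity = np.sqrt((vel ** 2).sum(axis=1).mean())
    cov = np.cov((cop - cop.mean(axis=0)).T)
    # Bivariate-normal 95% ellipse: area = pi * chi2_{2,0.95} * sqrt(det cov)
    sway_area = np.pi * 5.991 * np.sqrt(np.linalg.det(cov))
    return rms_velocity, sway_area

rng = np.random.default_rng(0)
cop = rng.normal(0.0, [1.0, 0.5], size=(100000, 2))      # synthetic quiet stance
rms_v, area = sway_metrics(cop, fs=100.0)
```

    For the synthetic Gaussian trace above, the ellipse area converges to π · 5.991 · (1.0 · 0.5), which is a useful sanity check on the estimator.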

  13. Postural Instability Induced by Visual Motion Stimuli in Patients With Vestibular Migraine.

    PubMed

    Lim, Yong-Hyun; Kim, Ji-Soo; Lee, Ho-Won; Kim, Sung-Hee

    2018-01-01

    Patients with vestibular migraine are susceptible to motion sickness. This study aimed to determine whether the severity of postural instability is related to susceptibility to motion sickness. We used a visual motion paradigm with two conditions of the stimulated retinal field and the head posture to quantify postural stability while maintaining a static stance in 18 patients with vestibular migraine and in 13 age-matched healthy subjects. Three parameters of postural stability showed differences between VM patients and controls: RMS velocity (0.34 ± 0.02 cm/s vs. 0.28 ± 0.02 cm/s), RMS acceleration (8.94 ± 0.74 cm/s2 vs. 6.69 ± 0.87 cm/s2), and sway area (1.77 ± 0.22 cm2 vs. 1.04 ± 0.25 cm2). Patients with vestibular migraine showed marked postural instability of the head and neck when visual stimuli were presented in the retinal periphery. The pseudo-Coriolis effect induced by head roll tilt was not responsible for the main differences in postural instability between patients and controls. Patients with vestibular migraine showed a higher visual dependency and low stability of the postural control system when maintaining quiet standing, which may be related to susceptibility to motion sickness.

  14. A neurocomputational model of figure-ground discrimination and target tracking.

    PubMed

    Sun, H; Liu, L; Guo, A

    1999-01-01

    A neurocomputational model is presented for figure-ground discrimination and target tracking. The model incorporates elementary motion detectors of the correlation type, computational modules for saccadic and smooth pursuit eye movements, an oscillatory neural-network motion perception module and a selective attention module. It is shown that through oscillatory amplitude and frequency encoding, and selective synchronization of phase oscillators, the figure and the ground can be successfully discriminated from each other. The receptive fields developed by hidden units of the networks were surprisingly similar to the actual receptive fields and columnar organization found in the primate visual cortex. It is suggested that equivalent mechanisms may exist in the primate visual cortex to discriminate figure from ground in both temporal and spatial domains.

  15. Research on integration of visual and motion cues for flight simulation and ride quality investigation

    NASA Technical Reports Server (NTRS)

    Young, L. R.; Oman, C. M.; Curry, R. E.

    1977-01-01

    Vestibular perception and the integration of several sensory inputs in simulation were studied. The relationship between tilt sensations induced by moving visual fields and those produced by actual body tilt is discussed. Linear-vection studies were included, and the application of the vestibular model to the perception of orientation based on motion cues is presented. Other areas of examination include visual cues in the approach to landing and a comparison of linear and nonlinear washout filters using a model of the human vestibular system.
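
    The washout filters compared in such studies return a motion platform to neutral after reproducing motion onsets. A minimal linear (first-order high-pass) washout, discretized by backward Euler, can be sketched as follows; the study's actual filter structures are not specified here:

```python
import numpy as np

def washout_highpass(u, dt, tau):
    """First-order high-pass washout s/(s + 1/tau), backward-Euler discretized.
    Passes motion onsets to the platform but bleeds any sustained command
    back toward zero with time constant tau."""
    a = tau / (tau + dt)
    y = np.zeros_like(u, dtype=float)
    for k in range(1, len(u)):
        y[k] = a * (y[k - 1] + u[k] - u[k - 1])
    return y

u = np.concatenate([[0.0], np.ones(999)])    # sustained acceleration demand
y = washout_highpass(u, dt=0.01, tau=1.0)    # onset passes, then washes out
```

    The step onset reaches the platform almost at full amplitude, then decays toward zero within a few time constants, which is exactly the behavior a nonlinear washout tries to improve upon.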

  16. Differential effect of visual motion adaptation upon visual cortical excitability.

    PubMed

    Lubeck, Astrid J A; Van Ombergen, Angelique; Ahmad, Hena; Bos, Jelte E; Wuyts, Floris L; Bronstein, Adolfo M; Arshad, Qadeer

    2017-03-01

    The objectives of this study were 1) to probe the effects of visual motion adaptation on early visual and V5/MT cortical excitability and 2) to investigate whether changes in cortical excitability following visual motion adaptation are related to the degree of visual dependency, i.e., an overreliance on visual cues compared with vestibular or proprioceptive cues. Participants were exposed to a roll motion visual stimulus before, during, and after visual motion adaptation. At these stages, 20 transcranial magnetic stimulation (TMS) pulses at phosphene threshold values were applied over early visual and V5/MT cortical areas, from which the probability of eliciting a phosphene was calculated. Before and after adaptation, participants aligned the subjective visual vertical in front of the roll motion stimulus as a marker of visual dependency. During adaptation, early visual cortex excitability decreased whereas V5/MT excitability increased. After adaptation, both early visual and V5/MT excitability were increased. The roll motion-induced tilt of the subjective visual vertical (visual dependence) was not influenced by visual motion adaptation and did not correlate with phosphene threshold or visual cortex excitability. We conclude that early visual and V5/MT cortical excitability is differentially affected by visual motion adaptation. Furthermore, excitability in the early or late visual cortex is not associated with an increase in visual reliance during spatial orientation. Our findings complement earlier studies that have probed visual cortical excitability following motion adaptation and highlight the differential roles of the early visual cortex and V5/MT in visual motion processing. NEW & NOTEWORTHY We examined the influence of visual motion adaptation on visual cortex excitability and found a differential effect in V1/V2 compared with V5/MT. Changes in visual excitability following motion adaptation were not related to the degree of an individual's visual dependency. Copyright © 2017 the American Physiological Society.

  17. Open angle glaucoma effects on preattentive visual search efficiency for flicker, motion displacement and orientation pop-out tasks.

    PubMed

    Loughman, James; Davison, Peter; Flitcroft, Ian

    2007-11-01

    Preattentive visual search (PAVS) describes rapid and efficient retinal and neural processing capable of immediate target detection in the visual field. Damage to the nerve fibre layer or visual pathway might reduce the efficiency with which the visual system performs such analysis. The purpose of this study was to test the hypothesis that patients with glaucoma are impaired on parallel search tasks, and that this would serve to distinguish glaucoma in early cases. Three groups of observers (glaucoma patients, suspect and normal individuals) were examined, using computer-generated flicker, orientation, and vertical motion displacement targets to assess PAVS efficiency. The task required rapid and accurate localisation of a singularity embedded in a field of 119 homogeneous distractors on either the left- or right-hand side of a computer monitor. All subjects also completed a choice reaction time (CRT) task. Independent-samples t-tests revealed PAVS efficiency to be significantly impaired in the glaucoma group compared with both normal and suspect individuals. Performance was impaired in all types of glaucoma tested. Analysis between normal and suspect individuals revealed a significant difference only for motion displacement response times. Similar analysis using a PAVS/CRT index confirmed the glaucoma findings but also showed statistically significant differences between suspect and normal individuals across all target types. A test of PAVS efficiency appears capable of differentiating early glaucoma from both normal and suspect cases. Analysis incorporating a PAVS/CRT index enhances the diagnostic capacity to differentiate normal from suspect cases.

  18. Visual Field Map Clusters in High-Order Visual Processing: Organization of V3A/V3B and a New Cloverleaf Cluster in the Posterior Superior Temporal Sulcus

    PubMed Central

    Barton, Brian; Brewer, Alyssa A.

    2017-01-01

    The cortical hierarchy of the human visual system has been shown to be organized around retinal spatial coordinates throughout much of low- and mid-level visual processing. These regions contain visual field maps (VFMs) that each follows the organization of the retina, with neighboring aspects of the visual field processed in neighboring cortical locations. On a larger, macrostructural scale, groups of such sensory cortical field maps (CFMs) in both the visual and auditory systems are organized into roughly circular cloverleaf clusters. CFMs within clusters tend to share properties such as receptive field distribution, cortical magnification, and processing specialization. Here we use fMRI and population receptive field (pRF) modeling to investigate the extent of VFM and cluster organization with an examination of higher-level visual processing in temporal cortex and compare these measurements to mid-level visual processing in dorsal occipital cortex. In human temporal cortex, the posterior superior temporal sulcus (pSTS) has been implicated in various neuroimaging studies as subserving higher-order vision, including face processing, biological motion perception, and multimodal audiovisual integration. In human dorsal occipital cortex, the transverse occipital sulcus (TOS) contains the V3A/B cluster, which comprises two VFMs subserving mid-level motion perception and visuospatial attention. For the first time, we present the organization of VFMs in pSTS in a cloverleaf cluster. This pSTS cluster contains four VFMs bilaterally: pSTS-1:4. We characterize these pSTS VFMs as relatively small at ∼125 mm2 with relatively large pRF sizes of ∼2–8° of visual angle across the central 10° of the visual field. V3A and V3B are ∼230 mm2 in surface area, with pRF sizes here similarly ∼1–8° of visual angle across the same region. 
    In addition, cortical magnification measurements show that a larger proportion of the pSTS VFM surface area is devoted to the peripheral visual field than in the V3A/B cluster. Reliability measurements of VFMs in pSTS and V3A/B reveal that these cloverleaf clusters are remarkably consistent and functionally differentiable. Our findings add to the growing number of measurements of widespread sensory CFMs organized into cloverleaf clusters, indicating that CFMs and cloverleaf clusters may both be fundamental organizing principles in cortical sensory processing. PMID:28293182
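
    Population receptive field modeling of the kind used above predicts a voxel's response as the overlap between an isotropic Gaussian pRF and each stimulus frame. A minimal sketch, with no hemodynamic convolution and a hypothetical sweeping-bar stimulus:

```python
import numpy as np

def prf_prediction(x0, y0, sigma, stim_frames, grid):
    """Predicted response time course of a Gaussian pRF centred at (x0, y0)
    with size sigma (deg of visual angle), for binary stimulus frames."""
    xx, yy = grid
    prf = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))
    return np.array([(frame * prf).sum() for frame in stim_frames])

xs = np.linspace(-10, 10, 81)                 # central 20 deg of visual field
grid = np.meshgrid(xs, xs)
bar_centres = np.linspace(-8, 8, 17)
frames = [np.abs(grid[0] - cx) < 1.0 for cx in bar_centres]   # sweeping bar
response = prf_prediction(4.0, 0.0, 2.0, frames, grid)
```

    The predicted response peaks when the bar crosses the pRF centre; fitting x0, y0, and sigma to measured time courses is what yields the centre positions and pRF sizes quoted above.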

  19. Evidence that primary visual cortex is required for image, orientation, and motion discrimination by rats.

    PubMed

    Petruno, Sarah K; Clark, Robert E; Reinagel, Pamela

    2013-01-01

    The pigmented Long-Evans rat has proven to be an excellent subject for studying visually guided behavior including quantitative visual psychophysics. This observation, together with its experimental accessibility and its close homology to the mouse, has made it an attractive model system in which to dissect the thalamic and cortical circuits underlying visual perception. Given that visually guided behavior in the absence of primary visual cortex has been described in the literature, however, it is an empirical question whether specific visual behaviors will depend on primary visual cortex in the rat. Here we tested the effects of cortical lesions on performance of two-alternative forced-choice visual discriminations by Long-Evans rats. We present data from one highly informative subject that learned several visual tasks and then received a bilateral lesion ablating >90% of primary visual cortex. After the lesion, this subject had a profound and persistent deficit in complex image discrimination, orientation discrimination, and full-field optic flow motion discrimination, compared with both pre-lesion performance and sham-lesion controls. Performance was intact, however, on another visual two-alternative forced-choice task that required approaching a salient visual target. A second highly informative subject learned several visual tasks prior to receiving a lesion ablating >90% of medial extrastriate cortex. This subject showed no impairment on any of the four task categories. Taken together, our data provide evidence that these image, orientation, and motion discrimination tasks require primary visual cortex in the Long-Evans rat, whereas approaching a salient visual target does not.

  20. A Tool for the Analysis of Motion Picture Film or Video Tape.

    ERIC Educational Resources Information Center

    Ekman, Paul; Friesen, Wallace V.

    1969-01-01

    A visual information display and retrieval system (VID-R) is described for application to visual records. VID-R searches and retrieves events by time address (location) or by previously stored observations or measurements. Fields are labeled by writing discriminable binary addresses on the horizontal lines outside the normal viewing area. The…

  1. Pilot/vehicle model analysis of visual and motion cue requirements in flight simulation. [helicopter hovering

    NASA Technical Reports Server (NTRS)

    Baron, S.; Lancraft, R.; Zacharias, G.

    1980-01-01

    The optimal control model (OCM) of the human operator is used to predict the effect of simulator characteristics on pilot performance and workload. The piloting task studied is helicopter hover. Among the simulator characteristics considered were (computer generated) visual display resolution, field of view and time delay.

  2. Anticipatory Attentional Suppression of Visual Features Indexed by Oscillatory Alpha-Band Power Increases: A High-Density Electrical Mapping Study

    PubMed Central

    Snyder, Adam C.; Foxe, John J.

    2010-01-01

    Retinotopically specific increases in alpha-band (~10 Hz) oscillatory power have been strongly implicated in the suppression of processing for irrelevant parts of the visual field during the deployment of visuospatial attention. Here, we asked whether this alpha suppression mechanism also plays a role in the nonspatial anticipatory biasing of feature-based attention. Visual word cues informed subjects what the task-relevant feature of an upcoming visual stimulus (S2) was, while high-density electroencephalographic recordings were acquired. We examined anticipatory oscillatory activity in the Cue-to-S2 interval (~2 s). Subjects were cued on a trial-by-trial basis to attend to either the color or direction of motion of an upcoming dot field array, and to respond when they detected that a subset of the dots differed from the majority along the target feature dimension. We used the features of color and motion, expressly because they have well known, spatially separated cortical processing areas, to distinguish shifts in alpha power over areas processing each feature. Alpha power from dorsal regions increased when motion was the irrelevant feature (i.e., color was cued), and alpha power from ventral regions increased when color was irrelevant. Thus, alpha-suppression mechanisms appear to operate during feature-based selection in much the same manner as has been shown for space-based attention. PMID:20237273
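
    Anticipatory alpha-band effects like those above are quantified as band-limited power. A minimal periodogram-based band-power estimate on a synthetic signal (the study used high-density EEG and more refined time-frequency methods):

```python
import numpy as np

def bandpower(signal, fs, f_lo, f_hi):
    """Total periodogram power in the [f_lo, f_hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(psd[band].sum())

fs = 250.0
t = np.arange(0, 2, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 25 * t)  # synthetic
alpha = bandpower(eeg, fs, 8.0, 12.0)    # dominated by the 10 Hz component
beta = bandpower(eeg, fs, 20.0, 30.0)
```

    Comparing alpha power over dorsal versus ventral sensors during the cue-to-S2 interval is the kind of contrast the study's retinotopic analysis formalizes.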

  3. Local and Global Correlations between Neurons in the Middle Temporal Area of Primate Visual Cortex.

    PubMed

    Solomon, Selina S; Chen, Spencer C; Morley, John W; Solomon, Samuel G

    2015-09-01

    In humans and other primates, the analysis of visual motion includes populations of neurons in the middle temporal (MT) area of visual cortex. Motion analysis will be constrained by the structure of neural correlations in these populations. Here, we use multi-electrode arrays to measure correlations in the anesthetized marmoset, a New World monkey in which area MT lies exposed on the cortical surface. We measured correlations in the spike count between pairs of neurons and within populations of neurons, for moving dot fields and moving gratings. Correlations were weaker in area MT than in area V1. The magnitude of correlations in area MT diminished with distance between receptive fields and with difference in preferred direction. Correlations during presentation of moving gratings were stronger than those during presentation of moving dot fields, extended further across cortex, and were less dependent on the functional properties of neurons. Analysis of the timescales of correlation suggests the presence of two mechanisms. A local mechanism, associated with near-synchronous spiking activity, is strongest in nearby neurons with similar direction preference and is independent of visual stimulus. A global mechanism, operating over larger spatial scales and longer timescales, is independent of direction preference and is modulated by the type of visual stimulus presented. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
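
    Spike-count (noise) correlation between a pair of neurons is the Pearson correlation of their trial-by-trial counts. A sketch with synthetic Poisson counts sharing a trial-to-trial gain, standing in for the marmoset recordings:

```python
import numpy as np

def spike_count_correlation(counts_a, counts_b):
    """Pearson correlation of trial-by-trial spike counts (r_sc)."""
    return float(np.corrcoef(counts_a, counts_b)[0, 1])

rng = np.random.default_rng(1)
gain = rng.gamma(shape=10.0, scale=0.1, size=20000)   # shared slow modulation
pair_shared = (rng.poisson(5.0 * gain), rng.poisson(5.0 * gain))
pair_indep = (rng.poisson(5.0, 20000), rng.poisson(5.0, 20000))
r_shared = spike_count_correlation(*pair_shared)      # positive
r_indep = spike_count_correlation(*pair_indep)        # near zero
```

    Computing r_sc as a function of receptive-field distance and direction-preference difference gives the dependence described in the abstract; timescale separation requires additionally binning counts at multiple window lengths.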

  4. The role of human ventral visual cortex in motion perception

    PubMed Central

    Saygin, Ayse P.; Lorenzi, Lauren J.; Egan, Ryan; Rees, Geraint; Behrmann, Marlene

    2013-01-01

    Visual motion perception is fundamental to many aspects of vision. It has long been associated with the dorsal (parietal) pathway, and the involvement of the ventral ‘form’ (temporal) visual pathway has not been considered critical for normal motion perception. Here, we evaluated this view by examining whether circumscribed damage to ventral visual cortex impaired motion perception. The perception of motion in basic, non-form tasks (motion coherence and motion detection) and complex structure-from-motion, for a wide range of motion speeds, all centrally displayed, was assessed in five patients with a circumscribed lesion to either the right or left ventral visual pathway. Patients with a right, but not with a left, ventral visual lesion displayed widespread impairments in central motion perception even for non-form motion, for both slow and fast speeds, and this held true independent of the integrity of areas MT/V5, V3A or parietal regions. In contrast with the traditional view in which only the dorsal visual stream is critical for motion perception, these novel findings implicate a more distributed circuit in which the integrity of the right ventral visual pathway is also necessary even for the perception of non-form motion. PMID:23983030

  5. Wide-field motion tuning in nocturnal hawkmoths

    PubMed Central

    Theobald, Jamie C.; Warrant, Eric J.; O'Carroll, David C.

    2010-01-01

    Nocturnal hawkmoths are known for impressive visually guided behaviours in dim light, such as hovering while feeding from nectar-bearing flowers. This requires tight visual feedback to estimate and counter relative motion. Discrimination of low velocities, as required for stable hovering flight, is fundamentally limited by spatial resolution, yet in the evolution of eyes for nocturnal vision, maintenance of high spatial acuity compromises absolute sensitivity. To investigate these trade-offs, we compared responses of wide-field motion-sensitive neurons in three species of hawkmoth: Manduca sexta (a crepuscular hoverer), Deilephila elpenor (a fully nocturnal hoverer) and Acherontia atropos (a fully nocturnal hawkmoth that does not hover as it feeds uniquely from honey in bees' nests). We show that despite smaller eyes, the motion pathway of D. elpenor is tuned to higher spatial frequencies and lower temporal frequencies than A. atropos, consistent with D. elpenor's need to detect low velocities for hovering. Acherontia atropos, however, presumably evolved low-light sensitivity without sacrificing temporal acuity. Manduca sexta, active at higher light levels, is tuned to the highest spatial frequencies of the three and temporal frequencies comparable with A. atropos. This yields similar tuning to low velocities as in D. elpenor, but with the advantage of shorter neural delays in processing motion. PMID:19906663
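
    The velocity implied by spatiotemporal tuning is simply the ratio of the temporal-frequency optimum to the spatial-frequency optimum, which is how tuning pairs with different spatial and temporal optima can converge on similar low-velocity tuning. The numbers below are illustrative, not the measured optima:

```python
def optimal_velocity(tf_hz, sf_cpd):
    """Velocity optimum in deg/s implied by a temporal-frequency optimum (Hz)
    and a spatial-frequency optimum (cycles/deg): v = TF / SF."""
    return tf_hz / sf_cpd

# Hypothetical optima: different SF/TF pairs, same implied velocity tuning.
v_a = optimal_velocity(1.0, 0.5)   # lower SF, lower TF
v_b = optimal_velocity(2.0, 1.0)   # higher SF, higher TF
```

    Shifting both optima upward leaves the velocity optimum unchanged while shortening neural delays, which is the trade-off the abstract attributes to M. sexta.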

  6. Lie group model neuromorphic geometric engine for real-time terrain reconstruction from stereoscopic aerial photos

    NASA Astrophysics Data System (ADS)

    Tsao, Thomas R.; Tsao, Doris

    1997-04-01

    In the 1980s, neurobiologists suggested a simple mechanism in primate visual cortex for maintaining a stable and invariant representation of a moving object: the receptive fields of visual neurons transform in real time in response to motion. When the visual stimulus changes due to motion, the geometric transform of the stimulus triggers a dual transform of the receptive field, which compensates for the geometric variation in the stimulus. This process can be modelled using a Lie group method. A massive array of affine parameter sensing circuits functions as a smart sensor tightly coupled to the passive imaging sensor (retina). The neural geometric engine is a neuromorphic computing device simulating our Lie group model of spatial perception in primate primary visual cortex. We developed a computer simulation, experimented on realistic and synthetic image data, and performed preliminary research on using analog VLSI technology to implement the neural geometric engine. We benchmarked the engine on DMA terrain data against their results and built an analog integrated circuit to verify the computational structure of the engine. When fully implemented on an analog VLSI chip, it will be able to accurately reconstruct a 3D terrain surface in real time from stereoscopic imagery.
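The dual-transform idea can be sketched in a toy one-dimensional form: when the stimulus undergoes a group transform (here a circular translation), applying the same transform to the receptive field leaves the inner-product response unchanged. This is a minimal illustration of the invariance principle, not the engine's actual affine circuitry:

```python
import numpy as np

# Toy 1-D "dual transform": shifting both the stimulus and the receptive field
# by the same group element preserves the response, so the representation of
# the moving object stays stable.
rng = np.random.default_rng(0)
stimulus = rng.standard_normal(64)
receptive_field = rng.standard_normal(64)

response_before = float(receptive_field @ stimulus)

shift = 5                                     # group element g: translate by 5
moved_stimulus = np.roll(stimulus, shift)     # stimulus transformed by g
dual_field = np.roll(receptive_field, shift)  # dual transform of the field

response_after = float(dual_field @ moved_stimulus)
assert np.isclose(response_before, response_after)  # representation is stable
```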

  7. Self-synchronizing Schlieren photography and interferometry for the visualization of unsteady transonic flows

    NASA Technical Reports Server (NTRS)

    Kadlec, R.

    1979-01-01

    The use of self-synchronizing stroboscopic Schlieren and laser interferometer systems to obtain quantitative space-time measurements of distinguished flow surfaces, streakline patterns, and the density field of two-dimensional flows with periodic content was investigated. A large-field, single-path stroboscopic Schlieren system was designed, constructed, and successfully applied to visualize four periodic flows: the near wake behind an oscillating airfoil, edge-tone sound generation, a 2-D planar wall jet, and an axisymmetric pulsed sonic jet. This visualization technique provides an effective means of studying quasi-periodic flows in real time. Because the image on the viewing screen is a spatial signal average of the coherent periodic motion rather than a single realization, the high-speed motion of a quasi-periodic flow can be reconstructed by recording photographs of the flow at different fixed time delays within one cycle. The preliminary design and construction of a self-synchronizing stroboscopic laser interferometer with a modified Mach-Zehnder optical system is also reported.

  8. Vection is the main contributor to motion sickness induced by visual yaw rotation: Implications for conflict and eye movement theories

    PubMed Central

    Pretto, Paolo; Oberfeld, Daniel; Hecht, Heiko; Bülthoff, Heinrich H.

    2017-01-01

    This study investigated the role of vection (i.e., a visually induced sense of self-motion), optokinetic nystagmus (OKN), and inadvertent head movements in visually induced motion sickness (VIMS), evoked by yaw rotation of the visual surround. These three elements have all been proposed as contributing factors in VIMS, as they can be linked to different motion sickness theories. However, a full understanding of the role of each factor is still lacking because independent manipulation has proven difficult in the past. We adopted an integrative approach to the problem by obtaining measures of potentially relevant parameters in four experimental conditions and subsequently combining them in a linear mixed regression model. To that end, participants were exposed to visual yaw rotation in four separate sessions. Using a full factorial design, the OKN was manipulated by a fixation target (present/absent), and vection strength by introducing a conflict in the motion direction of the central and peripheral field of view (present/absent). In all conditions, head movements were minimized as much as possible. Measured parameters included vection strength, vection variability, OKN slow phase velocity, OKN frequency, the number of inadvertent head movements, and inadvertent head tilt. Results show that VIMS increases with vection strength, but that this relation varies among participants (R2 = 0.48). Regression parameters for vection variability, head and eye movement parameters were not significant. These results may seem to be in line with the Sensory Conflict theory on motion sickness, but we argue that a more detailed definition of the exact nature of the conflict is required to fully appreciate the relationship between vection and VIMS. PMID:28380077

  9. Mobile assistive technologies for the visually impaired.

    PubMed

    Hakobyan, Lilit; Lumsden, Jo; O'Sullivan, Dympna; Bartlett, Hannah

    2013-01-01

    There are around 285 million visually impaired people worldwide, and around 370,000 people are registered as blind or partially sighted in the UK. Ongoing advances in information technology (IT) are increasing the scope for IT-based mobile assistive technologies to facilitate the independence, safety, and improved quality of life of the visually impaired. Research is being directed at making mobile phones and other handheld devices accessible via our haptic (touch) and audio sensory channels. We review research and innovation within the field of mobile assistive technology for the visually impaired and, in so doing, highlight the need for successful collaboration between clinical expertise, computer science, and domain users to realize fully the potential benefits of such technologies. We initially reflect on research that has been conducted to make mobile phones more accessible to people with vision loss. We then discuss innovative assistive applications designed for the visually impaired that are either delivered via mainstream devices and can be used while in motion (e.g., mobile phones) or are embedded within an environment that may be in motion (e.g., public transport) or within which the user may be in motion (e.g., smart homes). Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Effects of Spatial and Feature Attention on Disparity-Rendered Structure-From-Motion Stimuli in the Human Visual Cortex

    PubMed Central

    Ip, Ifan Betina; Bridge, Holly; Parker, Andrew J.

    2014-01-01

    An important advance in the study of visual attention has been the identification of a non-spatial component of attention that enhances the response to similar features or objects across the visual field. Here we test whether this non-spatial component can co-select individual features that are perceptually bound into a coherent object. We combined human psychophysics and functional magnetic resonance imaging (fMRI) to demonstrate the ability to co-select individual features from perceptually coherent objects. Our study used binocular disparity and visual motion to define disparity structure-from-motion (dSFM) stimuli. Although the spatial attention system induced strong modulations of the fMRI response in visual regions, the non-spatial system’s ability to co-select features of the dSFM stimulus was less pronounced and variable across subjects. Our results demonstrate that feature and global feature attention effects are variable across participants, suggesting that the feature attention system may be limited in its ability to automatically select features within the attended object. Careful comparison of the task design suggests that even minor differences in the perceptual task may be critical in revealing the presence of global feature attention. PMID:24936974

  11. Transformation-aware perceptual image metric

    NASA Astrophysics Data System (ADS)

    Kellnhofer, Petr; Ritschel, Tobias; Myszkowski, Karol; Seidel, Hans-Peter

    2016-09-01

    Predicting human visual perception has several applications such as compression, rendering, editing, and retargeting. Current approaches, however, ignore the fact that the human visual system compensates for geometric transformations, e.g., we see that an image and a rotated copy are identical. Instead, they will report a large, false-positive difference. At the same time, if the transformations become too strong or too spatially incoherent, comparing two images gets increasingly difficult. Between these two extrema, we propose a system to quantify the effect of transformations, not only on the perception of image differences but also on saliency and motion parallax. To this end, we first fit local homographies to a given optical flow field, and then convert this field into a field of elementary transformations, such as translation, rotation, scaling, and perspective. We conduct a perceptual experiment quantifying the increase of difficulty when compensating for elementary transformations. Transformation entropy is proposed as a measure of complexity in a flow field. This representation is then used for applications, such as comparison of nonaligned images, where transformations cause threshold elevation, detection of salient transformations, and a model of perceived motion parallax. Applications of our approach are a perceptual level-of-detail for real-time rendering and viewpoint selection based on perceived motion parallax.
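The conversion of a fitted local transform into elementary components can be sketched with a standard SVD-based polar factorization of the linear part of the fit; this is an assumed illustration, not necessarily the paper's exact decomposition:

```python
import numpy as np

# Split the linear part A of a local 2-D affine/homography fit into elementary
# transforms via A = R @ S: a rotation R times a symmetric stretch S.
# Assumes det(A) > 0 (no reflection).
def decompose_affine(A):
    U, sigma, Vt = np.linalg.svd(A)
    R = U @ Vt                       # nearest rotation to A
    S = Vt.T @ np.diag(sigma) @ Vt   # symmetric scale/shear component
    angle = np.arctan2(R[1, 0], R[0, 0])
    return angle, sigma              # rotation angle and principal scales

A = np.array([[0.0, -2.0],
              [2.0,  0.0]])          # 90-degree rotation combined with 2x scaling
angle, scales = decompose_affine(A)
assert np.isclose(angle, np.pi / 2)
assert np.allclose(scales, [2.0, 2.0])
```

Applied patch-wise over the fitted homography field, a decomposition like this yields the per-location elementary transforms over which a complexity measure such as transformation entropy can then be computed.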

  12. Basic quantitative assessment of visual performance in patients with very low vision.

    PubMed

    Bach, Michael; Wilke, Michaela; Wilhelm, Barbara; Zrenner, Eberhart; Wilke, Robert

    2010-02-01

    A variety of approaches to developing visual prostheses are being pursued: subretinal, epiretinal, via the optic nerve, or via the visual cortex. This report presents a method of comparing their efficacy at genuinely improving visual function, starting at no light perception (NLP). A test battery (a computer program, Basic Assessment of Light and Motion [BaLM]) was developed in four basic visual dimensions: (1) light perception (light/no light), with an unstructured large-field stimulus; (2) temporal resolution, with single versus double flash discrimination; (3) localization of light, where a wedge extends from the center into four possible directions; and (4) motion, with a coarse pattern moving in one of four directions. Two- or four-alternative, forced-choice paradigms were used. The participants' responses were self-paced and delivered with a keypad. The feasibility of the BaLM was tested in 73 eyes of 51 patients with low vision. The light and time test modules discriminated between NLP and light perception (LP). The localization and motion modules showed no significant response for NLP but discriminated between LP and hand movement (HM). All four modules reached their ceilings in the acuity categories higher than HM. BaLM results systematically differed between the very-low-acuity categories NLP, LP, and HM. Light and time yielded similar results, as did localization and motion; still, for assessing the visual prostheses with differing temporal characteristics, they are not redundant. The results suggest that this simple test battery provides a quantitative assessment of visual function in the very-low-vision range from NLP to HM.
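Scoring in forced-choice batteries of this kind reduces to testing whether the number of correct responses exceeds the chance level (1/2 for two alternatives, 1/4 for four). A minimal sketch using an exact one-sided binomial test (illustrative; not the BaLM implementation):

```python
from math import comb

# Exact one-sided binomial test: probability of n_correct or more correct
# responses in n_trials under pure guessing with 1/n_alternatives chance.
def p_above_chance(n_correct, n_trials, n_alternatives):
    p = 1.0 / n_alternatives
    return sum(comb(n_trials, k) * p**k * (1 - p) ** (n_trials - k)
               for k in range(n_correct, n_trials + 1))

# 14 of 20 correct on a 4AFC direction task is well above the 25% chance level.
assert p_above_chance(14, 20, 4) < 0.001
# 6 of 20 correct is consistent with guessing.
assert p_above_chance(6, 20, 4) > 0.05
```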

  13. Flow Visualization Studies in the Novacor Left Ventricular Assist System CRADA PC91-002, Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borovetz, H.S.; Shaffer, F.; Schaub, R.

    This paper discusses a series of experiments to visualize and measure flow fields in the Novacor left ventricular assist system (LVAS). The experiments utilize a multiple-exposure optical imaging technique called fluorescent image tracking velocimetry (FITV) to track the motion of small, neutrally buoyant particles in a flowing fluid.

  14. ARC-2008-ACD08-0157-005

    NASA Image and Video Library

    2008-07-28

    NASA Associate Administrator for Aeronautics Jaiwon Shin visits Ames Research Center and tours the Vertical Motion Simulator (VMS, T-cab); Moffett Field's Hangar 1 is shown in the VMS visual scene.

  15. Plasticity Beyond V1: Reinforcement of Motion Perception upon Binocular Central Retinal Lesions in Adulthood.

    PubMed

    Burnat, Kalina; Hu, Tjing-Tjing; Kossut, Małgorzata; Eysel, Ulf T; Arckens, Lutgarde

    2017-09-13

    Induction of a central retinal lesion in both eyes of adult mammals is a model for macular degeneration and leads to retinotopic map reorganization in the primary visual cortex (V1). Here we characterized the spatiotemporal dynamics of molecular activity levels in the central and peripheral representations of five higher-order visual areas, V2/18, V3/19, V4/21a, V5/PMLS, and area 7, as well as V1/17, in adult cats with central 10° retinal lesions (both sexes), by means of real-time PCR for the neuronal activity reporter gene zif268. The lesions elicited a similar, permanent reduction in activity in the center of the lesion projection zone (LPZ) of areas V1/17, V2/18, V3/19, and V4/21a, but not in the motion-driven V5/PMLS, which instead displayed an increase in molecular activity at 3 months postlesion, independent of visual field coordinates. Area 7 displayed decreased activity in its LPZ only in the first weeks postlesion, and increased activity in its periphery from 1 month onward. We therefore examined the impact of central vision loss on motion perception using random dot kinematograms to test the capacity for form-from-motion detection based on direction and velocity cues. We found that the central retinal lesions either do not impair motion detection or even result in better performance, specifically when motion discrimination was based on velocity discrimination. In conclusion, we propose that central retinal damage leads to enhanced peripheral vision by sensitizing the visual system for motion processing relying on feedback from V5/PMLS and area 7. SIGNIFICANCE STATEMENT Central retinal lesions, a model for macular degeneration, result in functional reorganization of the primary visual cortex. Examining the level of cortical reactivation with the molecular activity marker zif268 revealed reorganization in visual areas outside V1. 
Retinotopic lesion projection zones typically display an initial depression in zif268 expression, followed by partial recovery with postlesion time. Only the motion-sensitive area V5/PMLS shows no decrease, and even a significant activity increase at 3 months post-retinal lesion. Behavioral tests of motion perception found no impairment and even better sensitivity to higher random dot stimulus velocities. We demonstrate that the loss of central vision induces functional mobilization of motion-sensitive visual cortex, resulting in enhanced perception of moving stimuli. Copyright © 2017 the authors 0270-6474/17/378989-11$15.00/0.

  16. Conjunctions between motion and disparity are encoded with the same spatial resolution as disparity alone.

    PubMed

    Allenmark, Fredrik; Read, Jenny C A

    2012-10-10

    Neurons in cortical area MT respond well to transparent streaming motion in distinct depth planes, such as caused by observer self-motion, but do not contain subregions excited by opposite directions of motion. We therefore predicted that spatial resolution for transparent motion/disparity conjunctions would be limited by the size of MT receptive fields, just as spatial resolution for disparity is limited by the much smaller receptive fields found in primary visual cortex, V1. We measured this using a novel "joint motion/disparity grating," on which human observers detected motion/disparity conjunctions in transparent random-dot patterns containing dots streaming in opposite directions on two depth planes. Surprisingly, observers showed the same spatial resolution for these as for pure disparity gratings. We estimate the limiting receptive field diameter at 11 arcmin, similar to V1 and much smaller than MT. Higher internal noise for detecting joint motion/disparity produces a slightly lower high-frequency cutoff of 2.5 cycles per degree (cpd) versus 3.3 cpd for disparity. This suggests that information on motion/disparity conjunctions is available in the population activity of V1 and that this information can be decoded for perception even when it is invisible to neurons in MT.

  17. Continuous visual field motion impacts the postural responses of older and younger women during and after support surface tilt

    PubMed Central

    Lauer, Richard T.; Keshner, Emily A.

    2011-01-01

    The effect of continuous visual flow on the ability to regain and maintain postural orientation was examined. Fourteen young (20–39 years old) and 14 older women (60–79 years old) stood quietly during 3° (30°/s) dorsiflexion tilt of the support surface combined with 30° and 45°/s upward or downward pitch rotations of the visual field. The support surface was held tilted for 30 s and then returned to neutral over a 30-s period while the visual field continued to rotate. Segmental displacement and bilateral tibialis anterior and gastrocnemius muscle EMG responses were recorded. Continuous wavelet transforms were calculated for each muscle EMG response. An instantaneous mean frequency curve (IMNF) of muscle activity, center of mass (COM), center of pressure (COP), and angular excursion at the hip and ankle were used in a functional principal component analysis (fPCA). Functional component weights were calculated and compared with mixed model repeated measures ANOVAs. The fPCA revealed greatest mathematical differences in COM and COP responses between groups or conditions during the period that the platform transitioned from the sustained tilt to a return to neutral position. Muscle EMG responses differed most in the period following support surface tilt indicating that muscle activity increased to support stabilization against the visual flow. Older women exhibited significantly larger COM and COP responses in the direction of visual field motion and less muscle modulation when the platform returned to neutral than younger women. Results on a Rod and Frame test indicated that older women were significantly more visually dependent than the younger women. We concluded that a stiffer body combined with heightened visual sensitivity in older women critically interferes with their ability to counteract posturally destabilizing environments. PMID:21479659

  18. Active contour-based visual tracking by integrating colors, shapes, and motions.

    PubMed

    Hu, Weiming; Zhou, Xue; Li, Wei; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen

    2013-05-01

    In this paper, we present a framework for active contour-based visual tracking using level sets. The main components of our framework include contour-based tracking initialization, color-based contour evolution, adaptive shape-based contour evolution for non-periodic motions, dynamic shape-based contour evolution for periodic motions, and the handling of abrupt motions. For the initialization of contour-based tracking, we develop an optical flow-based algorithm for automatically initializing contours at the first frame. For the color-based contour evolution, Markov random field theory is used to measure correlations between values of neighboring pixels for posterior probability estimation. For adaptive shape-based contour evolution, the global shape information and the local color information are combined to hierarchically evolve the contour, and a flexible shape updating model is constructed. For the dynamic shape-based contour evolution, a shape mode transition matrix is learnt to characterize the temporal correlations of object shapes. For the handling of abrupt motions, particle swarm optimization is adopted to capture the global motion which is applied to the contour in the current frame to produce an initial contour in the next frame.

  19. Speed tuning of motion segmentation and discrimination

    NASA Technical Reports Server (NTRS)

    Masson, G. S.; Mestre, D. R.; Stone, L. S.

    1999-01-01

    Motion transparency requires that the visual system distinguish different motion vectors and selectively integrate similar motion vectors over space into the perception of multiple surfaces moving through or over each other. Using large-field (7 degrees x 7 degrees) displays containing two populations of random-dots moving in the same (horizontal) direction but at different speeds, we examined speed-based segmentation by measuring the speed difference above which observers can perceive two moving surfaces. We systematically investigated this 'speed-segmentation' threshold as a function of speed and stimulus duration, and found that it increases sharply for speeds above approximately 8 degrees/s. In addition, speed-segmentation thresholds decrease with stimulus duration out to approximately 200 ms. In contrast, under matched conditions, speed-discrimination thresholds stay low at least out to 16 degrees/s and decrease with increasing stimulus duration at a faster rate than for speed segmentation. Thus, motion segmentation and motion discrimination exhibit different speed selectivity and different temporal integration characteristics. Results are discussed in terms of the speed preferences of different neuronal populations within the primate visual cortex.

  20. General principles in motion vision: color blindness of object motion depends on pattern velocity in honeybee and goldfish.

    PubMed

    Stojcev, Maja; Radtke, Nils; D'Amaro, Daniele; Dyer, Adrian G; Neumeyer, Christa

    2011-07-01

    Visual systems can undergo striking adaptations to specific visual environments during evolution, but they can also be very "conservative." This seems to be the case in motion vision, which is surprisingly similar in species as distant as honeybee and goldfish. In both visual systems, motion vision measured with the optomotor response is color blind and mediated by one photoreceptor type only. Here, we ask whether this is also the case if the moving stimulus is restricted to a small part of the visual field, and test what influence velocity may have on chromatic motion perception. Honeybees were trained to discriminate between clockwise- and counterclockwise-rotating sector disks. Six types of disk stimuli differing in green receptor contrast were tested using three different rotational velocities. When green receptor contrast was at a minimum, bees were able to discriminate rotation directions with all colored disks at slow velocities of 6 and 12 Hz contrast frequency but not at a relatively high velocity of 24 Hz. In the goldfish experiment, the animals were trained to detect a moving red or blue disk presented in a green surround. Discrimination between this stimulus and a homogeneous green background was poor when the M-cone type was not or only slightly modulated, at a high stimulus velocity (7 cm/s). However, discrimination improved at slower stimulus velocities (4 and 2 cm/s). These behavioral results indicate that there is potentially an object motion system in both honeybee and goldfish, which is able to incorporate color information at relatively low velocities but is color blind at higher speeds. We thus propose that both honeybees and goldfish have multiple subsystems of object motion, which include achromatic as well as chromatic processing.

  1. Influence of combined visual and vestibular cues on human perception and control of horizontal rotation

    NASA Technical Reports Server (NTRS)

    Zacharias, G. L.; Young, L. R.

    1981-01-01

    Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation is modeled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A dual-input describing function analysis supports the complementary model; vestibular cues dominate sensation at higher frequencies. The describing function model is extended by the proposal of a nonlinear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues.
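The parallel-channel complementary model can be sketched as a pair of matched first-order filters: a low-pass on the visual channel and the complementary high-pass on the vestibular channel, summing to the self-rotation estimate. This is a minimal discrete-time sketch with an assumed smoothing constant, not the paper's fitted describing-function model:

```python
import numpy as np

# Complementary fusion: low-pass(visual) + high-pass(vestibular). The two
# one-pole filters share the pole, so their transfer functions sum to unity
# and congruent inputs are reconstructed exactly; vestibular cues dominate
# the estimate at high frequencies, visual cues at low frequencies.
def complementary_estimate(visual, vestibular, alpha=0.9):
    low, high, prev_ves = 0.0, 0.0, 0.0
    estimate = []
    for v_vis, v_ves in zip(visual, vestibular):
        low = alpha * low + (1 - alpha) * v_vis   # low-pass: visual channel
        high = alpha * (high + v_ves - prev_ves)  # high-pass: vestibular channel
        prev_ves = v_ves
        estimate.append(low + high)
    return estimate

# Congruent constant rotation cue on both channels is recovered exactly.
est = complementary_estimate([2.0] * 10, [2.0] * 10)
assert np.allclose(est, 2.0)
```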

  2. Audiovisual associations alter the perception of low-level visual motion

    PubMed Central

    Kafaligonul, Hulusi; Oluk, Can

    2015-01-01

    Motion perception is a pervasive aspect of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions isolating low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed, and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level, attention-based motion system and that early-level visual motion processing also plays a role. PMID:25873869

  3. Simple quality assurance method of dynamic tumor tracking with the gimbaled linac system using a light field.

    PubMed

    Miura, Hideharu; Ozawa, Shuichi; Hayata, Masahiro; Tsuda, Shintaro; Yamada, Kiyoshi; Nagata, Yasushi

    2016-09-08

    We propose a simple visual method for evaluating the dynamic tumor tracking (DTT) accuracy of a gimbal mechanism using a light field. A single photon beam was set with a field size of 30 × 30 mm2 at a gantry angle of 90°. The center of a cube phantom was set up at the isocenter of a motion table, and 4D modeling was performed based on the tumor and infrared (IR) marker motion. After 4D modeling, the cube phantom was replaced with a sheet of paper, which was placed perpendicularly, and a light field was projected onto the sheet of paper. The light field was recorded using a web camera in a treatment room kept as dark as possible. Calculated images from each image obtained using the camera were summed to compose a total summation image. Sinusoidal motion sequences were produced by moving the phantom with a fixed amplitude of 20 mm and breathing periods of 2, 4, 6, and 8 s. The light field was projected on the sheet of paper under three conditions: with the moving phantom and DTT based on the motion of the phantom, with the moving phantom and non-DTT, and with a stationary phantom for comparison. The tracking errors measured using the light field were 1.12 ± 0.72, 0.31 ± 0.19, 0.27 ± 0.12, and 0.15 ± 0.09 mm for breathing periods of 2, 4, 6, and 8 s, respectively; tracking accuracy thus depended on the breathing period. We propose a simple quality assurance (QA) process for the tracking accuracy of a gimbal mechanism system using a light field and web camera. Our method can assess tracking accuracy using a light field without irradiation and clearly visualize distributions in the manner of film dosimetry. © 2016 The Authors.
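The summation-image step can be sketched as a simple frame accumulation: if the field edge is tracked perfectly it stays sharp after summation, while residual tracking error smears it into a penumbra whose width reflects the error. A minimal sketch on hypothetical 1-D "frames":

```python
import numpy as np

# Accumulate camera frames of the light field into one summation image.
def summation_image(frames):
    return np.sum(np.asarray(frames, dtype=float), axis=0)

# A perfectly tracked field edge stays sharp after summation...
steady = [np.array([0, 0, 1, 1, 1])] * 4
# ...while an edge that jitters over the breathing cycle smears across pixels.
jitter = [np.array([0, 0, 1, 1, 1]), np.array([0, 1, 1, 1, 0]),
          np.array([0, 0, 1, 1, 1]), np.array([1, 1, 1, 0, 0])]
assert np.array_equal(summation_image(steady), [0, 0, 4, 4, 4])
assert np.array_equal(summation_image(jitter), [1, 2, 4, 3, 2])
```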

  4. Three Small-Receptive-Field Ganglion Cells in the Mouse Retina Are Distinctly Tuned to Size, Speed, and Object Motion

    PubMed Central

    Jacoby, Jason

    2017-01-01

    Retinal ganglion cells (RGCs) are frequently divided into functional types by their ability to extract and relay specific features from a visual scene, such as the capacity to discern local or global motion, direction of motion, stimulus orientation, contrast or uniformity, or the presence of large or small objects. Here we introduce three previously uncharacterized, nondirection-selective ON–OFF RGC types that represent a distinct set of feature detectors in the mouse retina. The three high-definition (HD) RGCs possess small receptive-field centers and strong surround suppression. They respond selectively to objects of specific sizes, speeds, and types of motion. We present comprehensive morphological characterization of the HD RGCs and physiological recordings of their light responses, receptive-field size and structure, and synaptic mechanisms of surround suppression. We also explore the similarities and differences between the HD RGCs and a well characterized RGC with a comparably small receptive field, the local edge detector, in response to moving objects and textures. We model populations of each RGC type to study how they differ in their performance tracking a moving object. These results, besides introducing three new RGC types that together constitute a substantial fraction of mouse RGCs, provide insights into the role of different circuits in shaping RGC receptive fields and establish a foundation for continued study of the mechanisms of surround suppression and the neural basis of motion detection. SIGNIFICANCE STATEMENT The output cells of the retina, retinal ganglion cells (RGCs), are a diverse group of ∼40 distinct neuron types that are often assigned “feature detection” profiles based on the specific aspects of the visual scene to which they respond. Here we describe, for the first time, morphological and physiological characterization of three new RGC types in the mouse retina, substantially augmenting our understanding of feature selectivity. 
Experiments and modeling show that while these three “high-definition” RGCs share certain receptive-field properties, they also have distinct tuning to the size, speed, and type of motion on the retina, enabling them to occupy different niches in stimulus space. PMID:28100743

  5. Neural mechanisms underlying sound-induced visual motion perception: An fMRI study.

    PubMed

    Hidaka, Souta; Higuchi, Satomi; Teramoto, Wataru; Sugita, Yoichi

    2017-07-01

    Studies of crossmodal interactions in motion perception have reported activation in several brain areas, including those related to motion processing and/or sensory association, in response to multimodal (e.g., visual and auditory) stimuli that were both in motion. Recent studies have demonstrated that sounds can trigger illusory visual apparent motion to static visual stimuli (sound-induced visual motion: SIVM): a visual stimulus blinking at a fixed location is perceived to be moving laterally when an alternating left-right sound is also present. Here, we investigated brain activity related to the perception of SIVM using a 7T functional magnetic resonance imaging technique. Specifically, we focused on the patterns of neural activities in SIVM and visually induced visual apparent motion (VIVM). We observed shared activations in the middle occipital area (V5/hMT), which is thought to be involved in visual motion processing, for SIVM and VIVM. Moreover, as compared to VIVM, SIVM resulted in greater activation in the superior temporal area and dominant functional connectivity between the V5/hMT area and the areas related to auditory and crossmodal motion processing. These findings indicate that similar but partially different neural mechanisms could be involved in auditory-induced and visually induced motion perception, and that neural signals in auditory, visual, and crossmodal motion processing areas closely and directly interact in the perception of SIVM. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Influence of moving visual environment on sit-to-stand kinematics in children and adults.

    PubMed

    Slaboda, Jill C; Barton, Joseph E; Keshner, Emily A

    2009-08-01

    The effect of visual field motion on the sit-to-stand kinematics of adults and children was investigated. Children (8 to 12 years of age) and adults (21 to 49 years of age) were seated in a virtual environment that rotated in the pitch and roll directions. Participants stood up either (1) concurrent with the onset of visual motion, (2) after an immersion period in the moving visual environment, or (3) without visual input. Angular velocities of the head with respect to the trunk, and of the trunk with respect to the environment, were calculated, as was the center of mass of the head and trunk. Both adults and children reduced head and trunk angular velocity after immersion in the moving visual environment. Unlike adults, children demonstrated significant differences in displacement of the head center of mass during the immersion and concurrent trials when compared to trials without visual input. Results suggest a time-dependent effect of vision on sit-to-stand kinematics in adults, whereas children are influenced by the immediate presence or absence of vision.

  7. Fast instantaneous center of rotation estimation algorithm for a skid-steered robot

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.

    2015-05-01

    Skid-steered robots are widely used as mobile platforms for machine vision systems. However, it is hard to achieve stable motion of such robots along a desired trajectory due to unpredictable wheel slip. It is possible to compensate for the wheel slip and stabilize the motion of the robot using visual odometry. This paper presents a fast optical-flow-based algorithm for estimating the instantaneous center of rotation and the angular and longitudinal speed of the robot. The proposed algorithm is based on the Horn-Schunck variational optical flow estimation method. The instantaneous center of rotation and the motion of the robot are estimated by back-projecting the optical flow field onto the ground surface. The developed algorithm was tested on a skid-steered mobile robot. The robot is based on a mobile platform that includes two pairs of differentially driven motors and a motor controller. A monocular visual odometry system consisting of a single-board computer and a low-cost webcam is mounted on the mobile platform. A state-space model of the robot was derived using standard black-box system identification. The input (commands) and the output (motion) were recorded using a dedicated external motion capture system. The obtained model was used to control the robot without visual odometry data. The paper concludes with an assessment of the algorithm's quality, comparing the trajectories estimated by the algorithm with data from the motion capture system.
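For a planar ground surface, the back-projection step reduces to a linear least-squares problem: every ground-plane flow vector of a rigidly moving vehicle satisfies v_i = w * J @ (p_i - c), where c is the instantaneous center of rotation, w the angular speed, and J the 90-degree rotation matrix. A minimal NumPy sketch of this fit (all names are illustrative, not from the paper's implementation):

```python
import numpy as np

def estimate_icr(points, flows):
    """Fit planar rigid motion v_i = w * J @ (p_i - c) to ground-plane
    flow vectors (both (n, 2) arrays) by linear least squares; returns
    the angular speed w and the instantaneous center of rotation c."""
    # Re-parametrize: v = w * J @ p + b with b = -w * J @ c, so the
    # unknowns x = [w, bx, by] enter linearly.
    n = len(points)
    A = np.zeros((2 * n, 3))
    A[0::2, 0] = -points[:, 1]   # vx = -w * py + bx
    A[0::2, 1] = 1.0
    A[1::2, 0] = points[:, 0]    # vy =  w * px + by
    A[1::2, 2] = 1.0
    w, bx, by = np.linalg.lstsq(A, flows.reshape(-1), rcond=None)[0]
    return w, np.array([-by / w, bx / w])   # c = J @ b / w
```

Because the model is linear in (w, b), the fit is robust to the noisy, dense flow fields produced by Horn-Schunck, and outlier flow vectors could be handled with a RANSAC loop around the same solver.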

  8. Smelling directions: Olfaction modulates ambiguous visual motion perception

    PubMed Central

    Kuang, Shenbing; Zhang, Tao

    2014-01-01

    The sense of smell is often accompanied by simultaneous visual sensations. Previous studies have documented enhanced olfactory performance in the concurrent presence of congruent color- or shape-related visual cues, and facilitated visual object perception when congruent smells are simultaneously present. These visual object-olfaction interactions suggest the existence of couplings between the olfactory pathway and the visual ventral processing stream. However, it is not known whether olfaction can modulate visual motion perception, a function that is related to the visual dorsal stream. We tested this possibility by examining the influence of olfactory cues on the perception of ambiguous visual motion signals. We showed that, after introducing an association between motion directions and olfactory cues, olfaction could indeed bias the perception of ambiguous visual motion. Our result that olfaction modulates visual motion processing adds to the current knowledge of cross-modal interactions and implies a possible functional linkage between the olfactory system and the visual dorsal pathway. PMID:25052162

  9. Crawling and walking infants see the world differently

    PubMed Central

    Kretch, Kari S.; Franchak, John M.; Adolph, Karen E.

    2013-01-01

    How does visual experience change over development? To investigate changes in visual input over the developmental transition from crawling to walking, thirty 13-month-olds crawled or walked down a straight path wearing a head-mounted eye-tracker that recorded gaze direction and head-centered field of view. Thirteen additional infants wore a motion-tracker that recorded head orientation. Compared with walkers, crawlers’ field of view contained fewer walls and more floor. Walkers directed gaze straight ahead at caregivers, whereas crawlers looked down at the floor. Crawlers obtained visual information about targets at higher elevations—caregivers and toys—by craning their heads upward and sitting up to bring the room into view. Findings indicate that visual experiences are intimately tied to infants’ posture. PMID:24341362

  10. Are visual peripheries forever young?

    PubMed

    Burnat, Kalina

    2015-01-01

    The paper presents a concept of lifelong plasticity of peripheral vision. Central vision processing is accepted as critical and irreplaceable for normal perception in humans. While peripheral processing chiefly carries information about the features of motion stimuli and redirects foveal attention to new objects, it can also take over functions typical of central vision. Here I review data showing the plasticity of peripheral vision found in functional, developmental, and comparative studies. Even though it is well established that afferent projections from central and peripheral retinal regions do not form simultaneously during early postnatal life, central vision is commonly used as a general model of the development of the visual system. Based on clinical studies and visually deprived animal models, I describe how central and peripheral visual field representations rely separately on early visual experience. Peripheral visual processing (motion) is more affected by binocular visual deprivation than central visual processing (spatial resolution). In addition, our own experimental findings show the possible recruitment of coarse peripheral vision for fine spatial analysis. Accordingly, I hypothesize that the balance between central and peripheral visual processing, established in the course of development, is susceptible to plastic adaptations during the entire life span, with peripheral vision capable of taking over central processing.

  11. The cortical basis of true memory and false memory for motion.

    PubMed

    Karanian, Jessica M; Slotnick, Scott D

    2014-02-01

    Behavioral evidence indicates that false memory, like true memory, can be rich in sensory detail. By contrast, there is fMRI evidence that true memory for visual information produces greater activity in earlier visual regions than false memory, which suggests true memory is associated with greater sensory detail. However, false memory in previous fMRI paradigms may have lacked sufficient sensory detail to recruit earlier visual processing regions. To investigate this possibility in the present fMRI study, we employed a paradigm that produced feature-specific false memory with a high degree of visual detail. During the encoding phase, moving or stationary abstract shapes were presented to the left or right of fixation. During the retrieval phase, shapes from encoding were presented at fixation and participants classified each item as previously "moving" or "stationary" within each visual field. Consistent with previous fMRI findings, true memory but not false memory for motion activated motion processing region MT+, while both true memory and false memory activated later cortical processing regions. In addition, false memory but not true memory for motion activated language processing regions. The present findings indicate that true memory activates earlier visual regions to a greater degree than false memory, even under conditions of detailed retrieval. Thus, the dissociation between previous behavioral findings and fMRI findings does not appear to be task dependent. Future work will be needed to assess whether the same pattern of true memory and false memory activity is observed for different sensory modalities. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. Alpha oscillations correlate with the successful inhibition of unattended stimuli.

    PubMed

    Händel, Barbara F; Haarmeier, Thomas; Jensen, Ole

    2011-09-01

    Because the human visual system is continually being bombarded with inputs, it is necessary to have effective mechanisms for filtering out irrelevant information. This is partly achieved by the allocation of attention, allowing the visual system to process relevant input while blocking out irrelevant input. What is the physiological substrate of attentional allocation? It has been proposed that alpha activity reflects functional inhibition. Here we asked if inhibition by alpha oscillations has behavioral consequences for suppressing the perception of unattended input. To this end, we investigated the influence of alpha activity on motion processing in two attentional conditions using magnetoencephalography. The visual stimuli used consisted of two random-dot kinematograms presented simultaneously to the left and right visual hemifields. Subjects were cued to covertly attend the left or right kinematogram. After 1.5 sec, a second cue tested whether subjects could report the direction of coherent motion in the attended (80%) or unattended hemifield (20%). Occipital alpha power was higher contralateral to the unattended side than to the attended side, thus suggesting inhibition of the unattended hemifield. Our key finding is that this alpha lateralization in the 20% invalidly cued trials did correlate with the perception of motion direction: Subjects with pronounced alpha lateralization were worse at detecting motion direction in the unattended hemifield. In contrast, lateralization did not correlate with visual discrimination in the attended visual hemifield. Our findings emphasize the suppressive nature of alpha oscillations and suggest that processing of inputs outside the field of attention is weakened by means of increased alpha activity.

  13. The influence of naturalistic, directionally non-specific motion on the spatial deployment of visual attention in right-hemispheric stroke.

    PubMed

    Cazzoli, Dario; Hopfner, Simone; Preisig, Basil; Zito, Giuseppe; Vanbellingen, Tim; Jäger, Michael; Nef, Tobias; Mosimann, Urs; Bohlhalter, Stephan; Müri, René M; Nyffeler, Thomas

    2016-11-01

    An impairment of the spatial deployment of visual attention during exploration of static (i.e., motionless) stimuli is a common finding after an acute, right-hemispheric stroke. However, less is known about how these deficits: (a) are modulated through naturalistic motion (i.e., without directional, specific spatial features); and, (b) evolve in the subacute/chronic post-stroke phase. In the present study, we investigated free visual exploration in three patient groups with subacute/chronic right-hemispheric stroke and in healthy subjects. The first group included patients with left visual neglect and a left visual field defect (VFD), the second patients with a left VFD but no neglect, and the third patients without neglect or VFD. Eye movements were measured in all participants while they freely explored a traffic scene without (static condition) and with (dynamic condition) naturalistic motion, i.e., cars moving from the right or left. In the static condition, all patient groups showed similar deployment of visual exploration (i.e., as measured by the cumulative fixation duration) as compared to healthy subjects, suggesting that recovery processes took place, with normal spatial allocation of attention. However, the more demanding dynamic condition with moving cars elicited different re-distribution patterns of visual attention, quite similar to those typically observed in acute stroke. Neglect patients with VFD showed a significant decrease of visual exploration in the contralesional space, whereas patients with VFD but no neglect showed a significant increase of visual exploration in the contralesional space. No differences, as compared to healthy subjects, were found in patients without neglect or VFD. These results suggest that naturalistic motion, without directional, specific spatial features, may critically influence the spatial distribution of visual attention in subacute/chronic stroke patients. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Experimental investigation of the visual field dependency in the erect and supine positions

    NASA Technical Reports Server (NTRS)

    Lichtenstein, J. H.; Saucer, R. T.

    1972-01-01

    The increasing utilization of simulators in many fields, in addition to aeronautics and space, requires the efficient use of these devices. It seemed that personnel highly influenced by the visual scene would make desirable subjects, particularly for those simulators without sufficient motion cues. In order to evaluate this concept, some measure of the degree of influence of the visual field on the subject is necessary. As part of this undertaking, 37 male and female subjects, including eight test pilots, were tested for their visual field dependency or independency. A version of Witkin's rod and frame apparatus was used for the tests. The results showed that nearly all the test subjects exhibited some degree of field dependency, the degree varying from very high field dependency to nearly zero field dependency in a normal distribution. The results for the test pilots were scattered throughout a range similar to the results for the bulk of male subjects. The few female subjects exhibited a higher field dependency than the male subjects. The male subjects exhibited a greater field dependency in the supine position than in the erect position, whereas the field dependency of the female subjects changed only slightly.

  15. Decoding complex flow-field patterns in visual working memory.

    PubMed

    Christophel, Thomas B; Haynes, John-Dylan

    2014-05-01

    There has been a long history of research on visual working memory. Whereas early studies have focused on the role of lateral prefrontal cortex in the storage of sensory information, this has been challenged by research in humans that has directly assessed the encoding of perceptual contents, pointing towards a role of visual and parietal regions during storage. In a previous study we used pattern classification to investigate the storage of complex visual color patterns across delay periods. This revealed coding of such contents in early visual and parietal brain regions. Here we aim to investigate whether the involvement of visual and parietal cortex is also observable for other types of complex, visuo-spatial pattern stimuli. Specifically, we used a combination of fMRI and multivariate classification to investigate the retention of complex flow-field stimuli defined by the spatial patterning of motion trajectories of random dots. Subjects were trained to memorize the precise spatial layout of these stimuli and to retain this information during an extended delay. We used a multivariate decoding approach to identify brain regions where spatial patterns of activity encoded the memorized stimuli. Content-specific memory signals were observable in motion sensitive visual area MT+ and in posterior parietal cortex that might encode spatial information in a modality independent manner. Interestingly, we also found information about the memorized visual stimulus in somatosensory cortex, suggesting a potential crossmodal contribution to memory. Our findings thus indicate that working memory storage of visual percepts might be distributed across unimodal, multimodal and even crossmodal brain regions. Copyright © 2014 Elsevier Inc. All rights reserved.
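The multivariate decoding described above can be illustrated with a minimal stand-in: a cross-validated nearest-centroid classifier applied to trial-by-voxel delay-period patterns. The study used more elaborate pattern classifiers; the function name, fold scheme, and data layout here are illustrative only:

```python
import numpy as np

def delay_decoding_accuracy(patterns, labels, n_folds=5):
    """Cross-validated nearest-centroid decoding of the memorized stimulus
    from delay-period activity patterns.
    patterns: (trials, voxels) array; labels: (trials,) int array."""
    rng = np.random.default_rng(0)
    idx = rng.permutation(len(labels))
    folds = np.array_split(idx, n_folds)
    correct = 0
    for test in folds:
        train = np.setdiff1d(idx, test)
        classes = np.unique(labels[train])
        # One mean activity pattern (centroid) per stimulus class.
        centroids = np.stack([patterns[train][labels[train] == c].mean(0)
                              for c in classes])
        for i in test:
            d = np.linalg.norm(centroids - patterns[i], axis=1)
            correct += classes[np.argmin(d)] == labels[i]
    return correct / len(labels)
```

Above-chance accuracy in a region (e.g., MT+ or posterior parietal cortex) is the evidence that its delay-period activity carries content-specific memory signals.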

  16. Estimation of slipping organ motion by registration with direction-dependent regularization.

    PubMed

    Schmidt-Richberg, Alexander; Werner, René; Handels, Heinz; Ehrhardt, Jan

    2012-01-01

    Accurate estimation of respiratory motion is essential for many applications in medical 4D imaging, for example for radiotherapy of thoracic and abdominal tumors. It is usually done by non-linear registration of image scans at different states of the breathing cycle but without further modeling of specific physiological motion properties. In this context, the accurate computation of respiration-driven lung motion is especially challenging because this organ is sliding along the surrounding tissue during the breathing cycle, leading to discontinuities in the motion field. Without considering this property in the registration model, common intensity-based algorithms cause incorrect estimation along the object boundaries. In this paper, we present a model for incorporating slipping motion in image registration. Extending the common diffusion registration by distinguishing between normal- and tangential-directed motion, we are able to estimate slipping motion at the organ boundaries while preventing gaps and ensuring smooth motion fields inside and outside. We further present an algorithm for a fully automatic detection of discontinuities in the motion field, which does not rely on a prior segmentation of the organ. We evaluate the approach for the estimation of lung motion based on 23 inspiration/expiration pairs of thoracic CT images. The results show a visually more plausible motion estimation. Moreover, the target registration error is quantified using manually defined landmarks and a significant improvement over the standard diffusion regularization is shown. Copyright © 2011 Elsevier B.V. All rights reserved.
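The direction-dependent regularization can be sketched as a toy diffusion step that decomposes the displacement field into components normal and tangential to the organ surface, smoothing the normal component across the boundary but the tangential component only within each side, so that sliding discontinuities survive. A minimal NumPy sketch under simplifying assumptions (known mask and surface normals; a 3x3 box filter standing in for the diffusion kernel; all names illustrative, not the paper's implementation):

```python
import numpy as np

def box_smooth(f):
    """One 3x3 box-filter diffusion step with edge replication."""
    p = np.pad(f, 1, mode='edge')
    h, w = f.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def sliding_regularization_step(u, normals, mask):
    """Toy direction-dependent diffusion step on a 2-D displacement field.
    u, normals: (H, W, 2); mask: (H, W) bool organ segmentation.
    The normal component is smoothed across the organ boundary (keeps the
    surfaces in contact); the tangential component is smoothed separately
    inside and outside the mask, preserving a sliding discontinuity."""
    u_n = (u * normals).sum(-1, keepdims=True) * normals   # normal part
    u_t = u - u_n                                          # tangential part
    out = np.stack([box_smooth(u_n[..., k]) for k in range(2)], axis=-1)
    for region in (mask, ~mask):
        # Masked smoothing: normalize by the smoothed region indicator so
        # values do not leak across the boundary.
        w = box_smooth(region.astype(float))
        for k in range(2):
            num = box_smooth(u_t[..., k] * region)
            out[..., k] += np.where(region, num / np.maximum(w, 1e-8), 0.0)
    return out
```

A plain (direction-independent) diffusion step would blur the tangential jump at the boundary, which is exactly the incorrect estimation along sliding organ surfaces that the paper's model avoids.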

  17. Feature-based attentional modulations in the absence of direct visual stimulation.

    PubMed

    Serences, John T; Boynton, Geoffrey M

    2007-07-19

    When faced with a crowded visual scene, observers must selectively attend to behaviorally relevant objects to avoid sensory overload. Often this selection process is guided by prior knowledge of a target-defining feature (e.g., the color red when looking for an apple), which enhances the firing rate of visual neurons that are selective for the attended feature. Here, we used functional magnetic resonance imaging and a pattern classification algorithm to predict the attentional state of human observers as they monitored a visual feature (one of two directions of motion). We find that feature-specific attention effects spread across the visual field, even to regions of the scene that do not contain a stimulus. This spread of feature-based attention to empty regions of space may facilitate the perception of behaviorally relevant stimuli by increasing sensitivity to attended features at all locations in the visual field.

  18. People can understand descriptions of motion without activating visual motion brain regions

    PubMed Central

    Dravida, Swethasri; Saxe, Rebecca; Bedny, Marina

    2013-01-01

    What is the relationship between our perceptual and linguistic neural representations of the same event? We approached this question by asking whether visual perception of motion and understanding linguistic depictions of motion rely on the same neural architecture. The same group of participants took part in two language tasks and one visual task. In task 1, participants made semantic similarity judgments with high motion (e.g., “to bounce”) and low motion (e.g., “to look”) words. In task 2, participants made plausibility judgments for passages describing movement (“A centaur hurled a spear … ”) or cognitive events (“A gentleman loved cheese …”). Task 3 was a visual motion localizer in which participants viewed animations of point-light walkers, randomly moving dots, and stationary dots changing in luminance. Based on the visual motion localizer we identified classic visual motion areas of the temporal (MT/MST and STS) and parietal cortex (inferior and superior parietal lobules). We find that these visual cortical areas are largely distinct from neural responses to linguistic depictions of motion. Motion words did not activate any part of the visual motion system. Motion passages produced a small response in the right superior parietal lobule, but none of the temporal motion regions. These results suggest that (1) as compared to words, rich language stimuli such as passages are more likely to evoke mental imagery and more likely to affect perceptual circuits and (2) effects of language on the visual system are more likely in secondary perceptual areas as compared to early sensory areas. We conclude that language and visual perception constitute distinct but interacting systems. PMID:24009592

  19. Neural Action Fields for Optic Flow Based Navigation: A Simulation Study of the Fly Lobula Plate Network

    PubMed Central

    Borst, Alexander; Weber, Franz

    2011-01-01

    Optic flow based navigation is a fundamental way of visual course control described in many different species including man. In the fly, an essential part of optic flow analysis is performed in the lobula plate, a retinotopic map of motion in the environment. There, the so-called lobula plate tangential cells possess large receptive fields with different preferred directions in different parts of the visual field. Previous studies demonstrated an extensive connectivity between different tangential cells, providing, in principle, the structural basis for their large and complex receptive fields. We present a network simulation of the tangential cells, comprising most of the neurons studied so far (22 on each hemisphere) with all the known connectivity between them. On their dendrite, model neurons receive input from a retinotopic array of Reichardt-type motion detectors. Model neurons exhibit receptive fields much like their natural counterparts, demonstrating that the connectivity between the lobula plate tangential cells indeed can account for their complex receptive field structure. We describe the tuning of a model neuron to particular types of ego-motion (rotation as well as translation around/along a given body axis) by its ‘action field’. As we show for model neurons of the vertical system (VS-cells), each of them displays a different type of action field, i.e., responds maximally when the fly is rotating around a particular body axis. However, the tuning width of the rotational action fields is relatively broad, comparable to the one with dendritic input only. The additional intra-lobula-plate connectivity mainly reduces their translational action field amplitude, i.e., their sensitivity to translational movements along any body axis of the fly. PMID:21305019
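The Reichardt-type motion detectors that feed the model dendrites are correlation detectors: each multiplies the luminance signal at one retinotopic point with a delayed (low-pass-filtered) copy of the signal from a neighboring point, and subtracts the mirror-symmetric product. A minimal one-dimensional sketch (illustrative, not the paper's implementation; first-order low-pass as the delay stage):

```python
import numpy as np

def reichardt_response(signal, dt=1.0, tau=1.0):
    """Opponent Reichardt correlator for a (time, space) luminance array.
    Each detector multiplies one input with a low-pass-filtered (delayed)
    copy of its neighbor's input and subtracts the mirror-symmetric
    product; positive output indicates motion toward increasing space
    index ("rightward")."""
    a = 1.0 - np.exp(-dt / tau)          # first-order low-pass coefficient
    lp = np.zeros_like(signal)
    for t in range(1, signal.shape[0]):  # recursive low-pass over time
        lp[t] = lp[t - 1] + a * (signal[t] - lp[t - 1])
    left, right = signal[:, :-1], signal[:, 1:]
    lp_l, lp_r = lp[:, :-1], lp[:, 1:]
    return lp_l * right - lp_r * left    # (time, space-1) opponent output
```

A retinotopic array of such outputs is what the model tangential cells integrate on their dendrites; summing the outputs over space and time gives a direction-signed motion estimate for that patch of the visual field.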

  20. Effects of auditory information on self-motion perception during simultaneous presentation of visual shearing motion

    PubMed Central

    Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu

    2015-01-01

    Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828

  1. Direct Visualization of Valence Electron Motion Using Strong-Field Photoelectron Holography

    NASA Astrophysics Data System (ADS)

    He, Mingrui; Li, Yang; Zhou, Yueming; Li, Min; Cao, Wei; Lu, Peixiang

    2018-03-01

    Watching the valence electron move in molecules on its intrinsic timescale has been one of the central goals of attosecond science, and it requires measurements with subatomic spatial and attosecond temporal resolutions. Time-resolved photoelectron holography in strong-field tunneling ionization holds the promise of accessing this realm, but it has hitherto remained a challenging task. Here we reveal how the information of valence electron motion is encoded in the hologram of the photoelectron momentum distribution (PEMD) and develop a novel retrieval approach. As a demonstration, applying it to the PEMDs obtained by solving the time-dependent Schrödinger equation for the prototypical molecule H2+, the attosecond charge migration is directly visualized with picometer spatial and attosecond temporal resolutions. Our method represents a general approach for monitoring attosecond charge migration in more complex polyatomic and biological molecules, which is one of the central tasks in the newly emerging field of attosecond chemistry.

  2. Direct Visualization of Valence Electron Motion Using Strong-Field Photoelectron Holography.

    PubMed

    He, Mingrui; Li, Yang; Zhou, Yueming; Li, Min; Cao, Wei; Lu, Peixiang

    2018-03-30

    Watching the valence electron move in molecules on its intrinsic timescale has been one of the central goals of attosecond science, and it requires measurements with subatomic spatial and attosecond temporal resolutions. Time-resolved photoelectron holography in strong-field tunneling ionization holds the promise of accessing this realm, but it has hitherto remained a challenging task. Here we reveal how the information of valence electron motion is encoded in the hologram of the photoelectron momentum distribution (PEMD) and develop a novel retrieval approach. As a demonstration, applying it to the PEMDs obtained by solving the time-dependent Schrödinger equation for the prototypical molecule H2+, the attosecond charge migration is directly visualized with picometer spatial and attosecond temporal resolutions. Our method represents a general approach for monitoring attosecond charge migration in more complex polyatomic and biological molecules, which is one of the central tasks in the newly emerging field of attosecond chemistry.

  3. Determinants of motion response anisotropies in human early visual cortex: the role of configuration and eccentricity.

    PubMed

    Maloney, Ryan T; Watson, Tamara L; Clifford, Colin W G

    2014-10-15

    Anisotropies in the cortical representation of various stimulus parameters can reveal the fundamental mechanisms by which sensory properties are analysed and coded by the brain. One example is the preference for motion radial to the point of fixation (i.e. centripetal or centrifugal) exhibited in mammalian visual cortex. In two experiments, this study used functional magnetic resonance imaging (fMRI) to explore the determinants of these radial biases for motion in functionally-defined areas of human early visual cortex, and in particular their dependence upon eccentricity which has been indicated in recent reports. In one experiment, the cortical response to wide-field random dot kinematograms forming 16 different complex motion patterns (including centrifugal, centripetal, rotational and spiral motion) was measured. The response was analysed according to preferred eccentricity within four different eccentricity ranges. Response anisotropies were characterised by enhanced activity for centripetal or centrifugal patterns that changed systematically with eccentricity in visual areas V1-V3 and hV4 (but not V3A/B or V5/MT+). Responses evolved from a preference for centrifugal over centripetal patterns close to the fovea, to a preference for centripetal over centrifugal at the most peripheral region stimulated, in agreement with previous work. These effects were strongest in V2 and V3. In a second experiment, the stimuli were restricted to within narrow annuli either close to the fovea (0.75-1.88°) or further in the periphery (4.82-6.28°), in a way that preserved the local motion information available in the first experiment. In this configuration a preference for radial motion (centripetal or centrifugal) persisted but the dependence upon eccentricity disappeared. Again this was clearest in V2 and V3. 
A novel interpretation of the dependence upon eccentricity of motion anisotropies in early visual cortex is offered that takes into account the spatiotemporal "predictability" of the moving pattern. Such stimulus predictability, and its relationship to models of predictive coding, has found considerable support in recent years in accounting for a number of other perceptual and neural phenomena. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Validation of an inertial measurement unit for the measurement of jump count and height.

    PubMed

    MacDonald, Kerry; Bahr, Roald; Baltich, Jennifer; Whittaker, Jackie L; Meeuwisse, Willem H

    2017-05-01

    To validate the use of an inertial measurement unit (IMU) for the collection of total jump count, and to assess the validity of an IMU for the measurement of jump height against 3-D motion analysis. Cross-sectional validation study. 3-D motion-capture laboratory and field-based settings. Thirteen elite adolescent volleyball players. Participants performed structured drills, played a four-set volleyball match, and performed twelve countermovement jumps. Jump counts from structured drills and match play were validated against visual counts from recorded video. Jump height during the countermovement jumps was validated against concurrent 3-D motion-capture data. The IMU device captured more total jumps (1032) than visual inspection (977) during match play. During structured practice, device jump count sensitivity was strong (96.8%) while specificity was perfect (100%). The IMU underestimated jump height compared to 3-D motion capture, with mean differences for maximal and submaximal jumps of 2.5 cm (95% CI: 1.3 to 3.8) and 4.1 cm (3.1 to 5.1), respectively. The IMU offers a valid measuring tool for jump count. Although the IMU underestimates maximal and submaximal jump height, our findings demonstrate its practical utility for field-based measurement of jump load. Copyright © 2016 Elsevier Ltd. All rights reserved.
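A sensitivity figure of this kind follows from matching device-detected jump times one-to-one against the video reference within a small time window: matched detections are true positives, unmatched reference jumps are misses. A sketch of such a matching (function name, greedy strategy, and tolerance are illustrative; the paper does not specify its matching procedure):

```python
def jump_count_validity(device_times, reference_times, tol=0.5):
    """Greedy one-to-one matching of device-detected jump times against
    reference (video) jump times within a tolerance window (seconds).
    Returns (sensitivity, false_positive_count)."""
    ref = sorted(reference_times)
    used = [False] * len(ref)
    tp = 0
    for t in sorted(device_times):
        for i, r in enumerate(ref):
            if not used[i] and abs(t - r) <= tol:
                used[i] = True          # each reference jump matched once
                tp += 1
                break
    sensitivity = tp / len(ref) if ref else 0.0
    return sensitivity, len(device_times) - tp
```

With event-level matching like this, a device can simultaneously over-count overall (extra detections, as in the 1032 vs. 977 match-play counts) while still achieving high sensitivity on the jumps that actually occurred.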

  5. “Global” visual training and extent of transfer in amblyopic macaque monkeys

    PubMed Central

    Kiorpes, Lynne; Mangal, Paul

    2015-01-01

    Perceptual learning is gaining acceptance as a potential treatment for amblyopia in adults and children beyond the critical period. Many perceptual learning paradigms result in very specific improvement that does not generalize beyond the training stimulus, closely related stimuli, or visual field location. To be of use in amblyopia, a less specific effect is needed. To address this problem, we designed a more general training paradigm intended to effect improvement in visual sensitivity across tasks and domains. We used a “global” visual stimulus, random dot motion direction discrimination with 6 training conditions, and tested for posttraining improvement on a motion detection task and 3 spatial domain tasks (contrast sensitivity, Vernier acuity, Glass pattern detection). Four amblyopic macaques practiced the motion discrimination with their amblyopic eye for at least 20,000 trials. All showed improvement, defined as a change of at least a factor of 2, on the trained task. In addition, all animals showed improvements in sensitivity on at least some of the transfer test conditions, mainly the motion detection task; transfer to the spatial domain was inconsistent but best at fine spatial scales. However, the improvement on the transfer tasks was largely not retained at long-term follow-up. Our generalized training approach is promising for amblyopia treatment, but sustaining improved performance may require additional intervention. PMID:26505868

  6. Hummingbirds control hovering flight by stabilizing visual motion.

    PubMed

    Goller, Benjamin; Altshuler, Douglas L

    2014-12-23

    Relatively little is known about how sensory information is used for controlling flight in birds. A powerful method is to immerse an animal in a dynamic virtual reality environment to examine behavioral responses. Here, we investigated the role of vision during free-flight hovering in hummingbirds to determine how optic flow--image movement across the retina--is used to control body position. We filmed hummingbirds hovering in front of a projection screen with the prediction that projecting moving patterns would disrupt hovering stability but stationary patterns would allow the hummingbird to stabilize position. When hovering in the presence of moving gratings and spirals, hummingbirds lost positional stability and responded to the specific orientation of the moving visual stimulus. There was no loss of stability with stationary versions of the same stimulus patterns. When exposed to a single stimulus many times or to a weakened stimulus that combined a moving spiral with a stationary checkerboard, the response to looming motion declined. However, even minimal visual motion was sufficient to cause a loss of positional stability despite prominent stationary features. Collectively, these experiments demonstrate that hummingbirds control hovering position by stabilizing motions in their visual field. The high sensitivity and persistence of this disruptive response are surprising, given that the hummingbird brain is highly specialized for sensory processing and spatial mapping, providing other potential mechanisms for controlling position.

  7. Mechanisms for Rapid Adaptive Control of Motion Processing in Macaque Visual Cortex.

    PubMed

    McLelland, Douglas; Baker, Pamela M; Ahmed, Bashir; Kohn, Adam; Bair, Wyeth

    2015-07-15

    A key feature of neural networks is their ability to rapidly adjust their function, including signal gain and temporal dynamics, in response to changes in sensory inputs. These adjustments are thought to be important for optimizing the sensitivity of the system, yet their mechanisms remain poorly understood. We studied adaptive changes in temporal integration in direction-selective cells in macaque primary visual cortex, where specific hypotheses have been proposed to account for rapid adaptation. By independently stimulating direction-specific channels, we found that the control of temporal integration of motion at one direction was independent of motion signals driven at the orthogonal direction. We also found that individual neurons can simultaneously support two different profiles of temporal integration for motion in orthogonal directions. These findings rule out a broad range of adaptive mechanisms as being key to the control of temporal integration, including untuned normalization and nonlinearities of spike generation and somatic adaptation in the recorded direction-selective cells. Such mechanisms are too broadly tuned, or occur too far downstream, to explain the channel-specific and multiplexed temporal integration that we observe in single neurons. Instead, we are compelled to conclude that parallel processing pathways are involved, and we demonstrate one such circuit using a computer model. This solution allows processing in different direction/orientation channels to be separately optimized and is sensible given that, under typical motion conditions (e.g., translation or looming), speed on the retina is a function of the orientation of image components. Many neurons in visual cortex are understood in terms of their spatial and temporal receptive fields. It is now known that the spatiotemporal integration underlying visual responses is not fixed but depends on the visual input. 
For example, neurons that respond selectively to motion direction integrate signals over a shorter time window when visual motion is fast and a longer window when motion is slow. We investigated the mechanisms underlying this useful adaptation by recording from neurons as they responded to stimuli moving in two different directions at different speeds. Computer simulations of our results enabled us to rule out several candidate theories in favor of a model that integrates across multiple parallel channels that operate at different time scales. Copyright © 2015 the authors 0270-6474/15/3510268-13$15.00/0.

  8. Visual Search for Motion-Form Conjunctions: Selective Attention to Movement Direction.

    PubMed

    Von Mühlenen, Adrian; Müller, Hermann J

    1999-07-01

    In 2 experiments requiring visual search for conjunctions of motion and form, the authors reinvestigated whether motion-based filtering (e.g., P. McLeod, J. Driver, Z. Dienes, & J. Crisp, 1991) is direction selective and whether cuing of the target direction promotes efficient search performance. In both experiments, the authors varied the number of movement directions in the display and the predictability of the target direction. Search was less efficient when items moved in multiple (2, 3, and 4) directions as compared with just 1 direction. Furthermore, precuing of the target direction facilitated the search, even with "wrap-around" displays, relatively more when items moved in multiple directions. The authors proposed 2 principles to explain that pattern of effects: (a) interference on direction computation between items moving in different directions (e.g., N. Qian & R. A. Andersen, 1994) and (b) selective direction tuning of motion detectors involving a receptive-field contraction (cf. J. Moran & R. Desimone, 1985; S. Treue & J. H. R. Maunsell, 1996).

  9. Ball Rolling on a Turntable: Analog for Charged Particle Dynamics.

    ERIC Educational Resources Information Center

    Burns, Joseph A.

    1981-01-01

    Describes how a ball's motion duplicates that of a charged particle moving through a magnetic field and thereby allows students to visualize directly many aspects of charged particle dynamics otherwise not accessible to them. (Author/JN)

  10. Characteristics of dynamic processing in the visual field of patients with age-related maculopathy

    PubMed Central

    Eisenbarth, Werner; MacKeben, Manfred; Poggel, Dorothe A.

    2007-01-01

    Purpose To investigate the characteristics of dynamic processing in the visual field of patients with age-related maculopathy (ARM) by measuring motion sensitivity, double-pulse resolution (DPR), and critical flicker fusion. Methods Fourteen subjects with ARM (18 eyes), 14 age-matched controls (19 eyes), and 7 young controls (8 eyes) served as subjects. Motion contrast thresholds were determined by a four-alternative forced-choice (4AFC) staircase procedure with a modification by Kernbach for presenting a plaid (size = 3.8°) moving within a stationary spatial and temporal Gaussian envelope in one of four directions. Measurements were performed on the horizontal meridian at 10°, 20°, 30°, 40°, and 60° eccentricity. DPR was defined as the minimal temporal gap detectable by the subject using a 9-fold interleaved adaptive procedure, with stimuli positioned on concentric rings at 5°, 10°, and 20° eccentricity on the principal and oblique meridians. Critical flicker fusion (CFF) thresholds and the Lanthony D-15 color vision test were administered foveally; because of the retinal lesions caused by ARM, subjects were free to use the fovea or whichever retinal area they needed instead. All measurements were performed under photopic conditions. Results Motion contrast sensitivity in subjects with ARM was markedly reduced (0.23–0.66 log units, p < 0.01), not only in the macula but in a region up to 20° eccentricity. In the two control groups, motion contrast sensitivity systematically declined with retinal eccentricity (0.009–0.032 log units/degree) and with age (0.01 log units/year). Double-pulse thresholds in healthy subjects were approximately constant in the central visual field and increased outside a radius of 10° (1.73 ms/degree). DPR thresholds were elevated in subjects with ARM (by 23–32 ms, p < 0.01) up to 20° eccentricity, and their foveal CFFs were increased by 5.5 Hz or 14% (p < 0.01) as compared with age-matched controls. 
Conclusions Dynamic processing properties in subjects with ARM are severely impaired in the central visual field up to 20° eccentricity, which is clearly beyond the borders of the macula. PMID:17882447
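
    Adaptive staircases of the kind used in this record's methods can be sketched in outline. The following is a generic 3-down/1-up simulation against a synthetic 4AFC observer; the Kernbach modification and all study-specific parameters are not reproduced, and every function name and value here is a hypothetical illustration:

```python
import random

def staircase_4afc(threshold=0.30, start=1.0, step=0.05,
                   n_trials=60, seed=1):
    """Generic 3-down/1-up staircase run against a synthetic 4AFC
    observer: contrast drops after 3 consecutive correct responses
    and rises after every error; the threshold estimate is the mean
    of the last few reversal points."""
    rng = random.Random(seed)
    contrast, run, reversals, last_dir = start, 0, [], None
    for _ in range(n_trials):
        # Idealized observer: nearly always correct above threshold,
        # at chance (25% in a 4AFC task) below it.
        correct = rng.random() < (0.99 if contrast >= threshold else 0.25)
        if correct:
            run += 1
            if run == 3:                       # 3 correct -> harder
                run = 0
                if last_dir == "up":
                    reversals.append(contrast)
                contrast = max(contrast - step, 0.0)
                last_dir = "down"
        else:                                  # 1 error -> easier
            run = 0
            if last_dir == "down":
                reversals.append(contrast)
            contrast += step
            last_dir = "up"
    tail = reversals[-4:] if reversals else [contrast]
    return sum(tail) / len(tail)
```

    A 3-down/1-up rule converges near the 79.4% correct point of the psychometric function; interleaving several such tracks, as in the DPR procedure above, reduces response bias.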

  11. Visuomotor Transformation in the Fly Gaze Stabilization System

    PubMed Central

    Huston, Stephen J; Krapp, Holger G

    2008-01-01

    For sensory signals to control an animal's behavior, they must first be transformed into a format appropriate for use by its motor systems. This fundamental problem is faced by all animals, including humans. Beyond simple reflexes, little is known about how such sensorimotor transformations take place. Here we describe how the outputs of a well-characterized population of fly visual interneurons, lobula plate tangential cells (LPTCs), are used by the animal's gaze-stabilizing neck motor system. The LPTCs respond to visual input arising from both self-rotations and translations of the fly. The neck motor system however is involved in gaze stabilization and thus mainly controls compensatory head rotations. We investigated how the neck motor system is able to selectively extract rotation information from the mixed responses of the LPTCs. We recorded extracellularly from fly neck motor neurons (NMNs) and mapped the directional preferences across their extended visual receptive fields. Our results suggest that—like the tangential cells—NMNs are tuned to panoramic retinal image shifts, or optic flow fields, which occur when the fly rotates about particular body axes. In many cases, tangential cells and motor neurons appear to be tuned to similar axes of rotation, resulting in a correlation between the coordinate systems the two neural populations employ. However, in contrast to the primarily monocular receptive fields of the tangential cells, most NMNs are sensitive to visual motion presented to either eye. This results in the NMNs being more selective for rotation than the LPTCs. Thus, the neck motor system increases its rotation selectivity by a comparatively simple mechanism: the integration of binocular visual motion information. PMID:18651791

  12. Chromatic and achromatic visual fields in relation to choroidal thickness in patients with high myopia: A pilot study.

    PubMed

    García-Domene, M C; Luque, M J; Díez-Ajenjo, M A; Desco-Esteban, M C; Artigas, J M

    2018-02-01

    To analyse the relationship between choroidal thickness and the visual perception of patients with high myopia but without retinal damage. All patients underwent ophthalmic evaluation including a slit lamp examination and dilated ophthalmoscopy, subjective refraction, best corrected visual acuity, axial length, optical coherence tomography, contrast sensitivity function and sensitivity of the visual pathways. We included eleven eyes of subjects with high myopia. There were statistically significant correlations between choroidal thickness and almost all of the contrast sensitivity values. The sensitivity of the magnocellular and koniocellular pathways is the most affected, and the homogeneity of the sensitivity of the magnocellular pathway depends on the choroidal thickness; when the thickness decreases, the sensitivity impairment extends from the center to the periphery of the visual field. Patients with high myopia without any fundus changes have visual impairments. We have found that choroidal thickness correlates with perceptual parameters such as contrast sensitivity or the mean defect and pattern standard deviation of the visual fields of some visual pathways. Our study shows that the magnocellular and koniocellular pathways are the most affected, so that these patients have impairment in motion perception and blue-yellow contrast perception. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  13. Smart unattended sensor networks with scene understanding capabilities

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2006-05-01

    Unattended sensor systems are new technologies intended to provide enhanced situation awareness to military and law enforcement agencies. A network of such sensors cannot be very effective in field conditions if it can only transmit visual information to human operators or alert them to motion. In real field conditions, events may happen in many nodes of a network simultaneously, but the number of control personnel is always limited, and the attention of human operators may be drawn to particular network nodes while a more dangerous threat goes unnoticed in other nodes. Sensor networks would be more effective if equipped with a system similar to human vision in its ability to understand visual information. Human vision relies on a rough but wide peripheral system that tracks motion and regions of interest, a narrow but precise foveal system that analyzes and recognizes objects in the center of the selected region of interest, and visual intelligence that provides scene and object context and resolves ambiguity and uncertainty in the visual information. Biologically inspired Network-Symbolic models convert image information into an 'understandable' Network-Symbolic format, which is similar to relational knowledge models. The equivalent of the interaction between peripheral and foveal systems in the network-symbolic system is achieved via interaction between Visual and Object Buffers and the top-level knowledge system.

  14. Modeling and measuring the visual detection of ecologically relevant motion by an Anolis lizard.

    PubMed

    Pallus, Adam C; Fleishman, Leo J; Castonguay, Philip M

    2010-01-01

    Motion in the visual periphery of lizards, and other animals, often causes a shift of visual attention toward the moving object. This behavioral response must be tuned more strongly to relevant motion (predators, prey, conspecifics) than to irrelevant motion (windblown vegetation). Early stages of visual motion detection rely on simple local circuits known as elementary motion detectors (EMDs). We presented videos of natural motion patterns, including prey, predators and windblown vegetation, to a computer model consisting of a grid of correlation-type EMDs. We systematically varied the model parameters and quantified the relative response to the different classes of motion. We carried out behavioral experiments with the lizard Anolis sagrei and determined that their visual response could be modeled with a grid of correlation-type EMDs with a spacing parameter of 0.3 degrees visual angle and a time constant of 0.1 s. The model with these parameters gave substantially stronger responses to relevant motion patterns than to windblown vegetation under equivalent conditions. However, the model is sensitive to local contrast and viewer-object distance. Therefore, additional neural processing is probably required for the visual system to reliably distinguish relevant from irrelevant motion under the full range of natural conditions.
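
    The correlation-type EMD at the core of the model above can be illustrated with a minimal one-dimensional Reichardt detector. The 0.1 s time constant follows the abstract, but the full 2D grid, the 0.3° spacing, and all signal and parameter names below are illustrative assumptions, not the authors' implementation:

```python
import math

def reichardt_emd(left, right, dt=0.01, tau=0.1):
    """Correlation-type EMD (Reichardt model): each input is low-pass
    filtered (time constant tau, 0.1 s here) to act as a delayed copy,
    then correlated with the undelayed signal from the neighbouring
    point; subtracting the mirror-symmetric product gives a signed
    direction-selective response."""
    alpha = dt / (tau + dt)          # first-order low-pass coefficient
    lp_l = lp_r = 0.0
    out = []
    for l, r in zip(left, right):
        lp_l += alpha * (l - lp_l)   # delayed copy of left input
        lp_r += alpha * (r - lp_r)   # delayed copy of right input
        out.append(lp_l * r - lp_r * l)
    return out

# A 2 Hz grating drifting left-to-right reaches the right sampling
# point 50 ms later, so the time-averaged EMD output is positive.
t = [i * 0.01 for i in range(500)]
left = [math.sin(2 * math.pi * 2 * x) for x in t]
right = [math.sin(2 * math.pi * 2 * (x - 0.05)) for x in t]
response = sum(reichardt_emd(left, right)) / len(t)
```

    Reversing the two inputs flips the sign of the summed output, which is what makes the detector direction selective.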

  15. Accuracy and Tuning of Flow Parsing for Visual Perception of Object Motion During Self-Motion

    PubMed Central

    Niehorster, Diederick C.

    2017-01-01

    How do we perceive object motion during self-motion using visual information alone? Previous studies have reported that the visual system can use optic flow to identify and globally subtract the retinal motion component resulting from self-motion to recover scene-relative object motion, a process called flow parsing. In this article, we developed a retinal motion nulling method to directly measure and quantify the magnitude of flow parsing (i.e., flow parsing gain) in various scenarios to examine the accuracy and tuning of flow parsing for the visual perception of object motion during self-motion. We found that flow parsing gains were below unity for all displays in all experiments, and that increasing self-motion and object motion speed did not alter flow parsing gain. We conclude that visual information alone is not sufficient for the accurate perception of scene-relative motion during self-motion. Although flow parsing performs global subtraction, its accuracy also depends on local motion information in the retinal vicinity of the moving object. Furthermore, the flow parsing gain was constant across common self-motion and object motion speeds. These results can be used to inform and validate computational models of flow parsing. PMID:28567272
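
    The flow parsing gain measured by retinal motion nulling can be expressed compactly. This sketch assumes a one-dimensional simplification, and all function names and the numbers in the example are hypothetical rather than taken from the article:

```python
def perceived_scene_motion(retinal_obj_vel, flow_at_obj, gain):
    """Flow parsing as partial global subtraction: a fraction (the
    flow parsing gain) of the self-motion flow component at the
    object's location is subtracted from the object's retinal
    velocity to estimate scene-relative velocity (deg/s, 1-D)."""
    return retinal_obj_vel - gain * flow_at_obj

def nulling_gain(nulling_retinal_vel, flow_at_obj):
    """Retinal motion nulling: the gain is the retinal velocity at
    which the object appears scene-stationary, divided by the flow
    component that unity-gain subtraction would fully cancel."""
    return nulling_retinal_vel / flow_at_obj

# Hypothetical numbers: self-motion produces 4 deg/s of flow at the
# object's location, and the object appears scene-stationary at a
# retinal velocity of 3 deg/s, giving a gain of 0.75 (below unity).
gain = nulling_gain(3.0, 4.0)
```

    A gain below unity means subtraction is incomplete, so a truly scene-stationary object would be perceived as moving slightly against the direction of self-motion.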

  16. 4D cone-beam CT reconstruction using multi-organ meshes for sliding motion modeling

    NASA Astrophysics Data System (ADS)

    Zhong, Zichun; Gu, Xuejun; Mao, Weihua; Wang, Jing

    2016-02-01

    A simultaneous motion estimation and image reconstruction (SMEIR) strategy was proposed for 4D cone-beam CT (4D-CBCT) reconstruction and showed excellent results in both phantom and lung cancer patient studies. In the original SMEIR algorithm, the deformation vector field (DVF) was defined on a voxel grid and estimated by enforcing a global smoothness regularization term on the motion fields. The objective of this work is to improve the computational efficiency and motion estimation accuracy of SMEIR for 4D-CBCT by developing a multi-organ meshing model. Feature-based adaptive meshes were generated to reduce the number of unknowns in the DVF estimation and accurately capture the organ shapes and motion. Additionally, the discontinuity in the motion fields between different organs during respiration was explicitly considered in the multi-organ mesh model. This helps with the accurate visualization and motion estimation of tumors at the organ boundaries in 4D-CBCT. To further improve the computational efficiency, a GPU-based parallel implementation was designed. The performance of the proposed algorithm was evaluated on a synthetic sliding motion phantom, a 4D NCAT phantom, and four lung cancer patients. The proposed multi-organ mesh-based strategy outperformed the conventional Feldkamp-Davis-Kress, iterative total variation minimization, original SMEIR and single meshing methods in both qualitative and quantitative evaluations.

  17. 4D cone-beam CT reconstruction using multi-organ meshes for sliding motion modeling.

    PubMed

    Zhong, Zichun; Gu, Xuejun; Mao, Weihua; Wang, Jing

    2016-02-07

    A simultaneous motion estimation and image reconstruction (SMEIR) strategy was proposed for 4D cone-beam CT (4D-CBCT) reconstruction and showed excellent results in both phantom and lung cancer patient studies. In the original SMEIR algorithm, the deformation vector field (DVF) was defined on a voxel grid and estimated by enforcing a global smoothness regularization term on the motion fields. The objective of this work is to improve the computational efficiency and motion estimation accuracy of SMEIR for 4D-CBCT by developing a multi-organ meshing model. Feature-based adaptive meshes were generated to reduce the number of unknowns in the DVF estimation and accurately capture the organ shapes and motion. Additionally, the discontinuity in the motion fields between different organs during respiration was explicitly considered in the multi-organ mesh model. This helps with the accurate visualization and motion estimation of tumors at the organ boundaries in 4D-CBCT. To further improve the computational efficiency, a GPU-based parallel implementation was designed. The performance of the proposed algorithm was evaluated on a synthetic sliding motion phantom, a 4D NCAT phantom, and four lung cancer patients. The proposed multi-organ mesh-based strategy outperformed the conventional Feldkamp-Davis-Kress, iterative total variation minimization, original SMEIR and single meshing methods in both qualitative and quantitative evaluations.

  18. 4D cone-beam CT reconstruction using multi-organ meshes for sliding motion modeling

    PubMed Central

    Zhong, Zichun; Gu, Xuejun; Mao, Weihua; Wang, Jing

    2016-01-01

    A simultaneous motion estimation and image reconstruction (SMEIR) strategy was proposed for 4D cone-beam CT (4D-CBCT) reconstruction and showed excellent results in both phantom and lung cancer patient studies. In the original SMEIR algorithm, the deformation vector field (DVF) was defined on a voxel grid and estimated by enforcing a global smoothness regularization term on the motion fields. The objective of this work is to improve the computational efficiency and motion estimation accuracy of SMEIR for 4D-CBCT by developing a multi-organ meshing model. Feature-based adaptive meshes were generated to reduce the number of unknowns in the DVF estimation and accurately capture the organ shapes and motion. Additionally, the discontinuity in the motion fields between different organs during respiration was explicitly considered in the multi-organ mesh model. This helps with the accurate visualization and motion estimation of tumors at the organ boundaries in 4D-CBCT. To further improve the computational efficiency, a GPU-based parallel implementation was designed. The performance of the proposed algorithm was evaluated on a synthetic sliding motion phantom, a 4D NCAT phantom, and four lung cancer patients. The proposed multi-organ mesh-based strategy outperformed the conventional Feldkamp–Davis–Kress, iterative total variation minimization, original SMEIR and single meshing methods in both qualitative and quantitative evaluations. PMID:26758496

  19. Postural time-to-contact as a precursor of visually induced motion sickness.

    PubMed

    Li, Ruixuan; Walter, Hannah; Curry, Christopher; Rath, Ruth; Peterson, Nicolette; Stoffregen, Thomas A

    2018-06-01

    The postural instability theory of motion sickness predicts that subjective symptoms of motion sickness will be preceded by unstable control of posture. In previous studies, this prediction has been confirmed with measures of the spatial magnitude and the temporal dynamics of postural activity. In the present study, we examine whether precursors of visually induced motion sickness might exist in postural time-to-contact, a measure of postural activity that is related to the risk of falling. Standing participants were exposed to oscillating visual motion stimuli in a standard laboratory protocol. Both before and during exposure to visual motion stimuli, we monitored the kinematics of the body's center of pressure. We predicted that postural activity would differ between participants who reported motion sickness and those who did not, and that these differences would exist before participants experienced subjective symptoms of motion sickness. During exposure to visual motion stimuli, the multifractality of sway differed between the Well and Sick groups. Postural time-to-contact differed between the Well and Sick groups during exposure to visual motion stimuli, but also before exposure to any motion stimuli. The results provide a qualitatively new type of support for the postural instability theory of motion sickness.

  20. Evidence for auditory-visual processing specific to biological motion.

    PubMed

    Wuerger, Sophie M; Crocker-Buque, Alexander; Meyer, Georg F

    2012-01-01

    Biological motion is usually associated with highly correlated sensory signals from more than one modality: an approaching human walker will not only have a visual representation, namely an increase in the retinal size of the walker's image, but also a synchronous auditory signal since the walker's footsteps will grow louder. We investigated whether the multisensory processing of biological motion is subject to different constraints than ecologically invalid motion. Observers were presented with a visual point-light walker and/or synchronised auditory footsteps; the walker was either approaching the observer (looming motion) or walking away (receding motion). A scrambled point-light walker served as a control. Observers were asked to detect the walker's motion as quickly and as accurately as possible. In Experiment 1 we tested whether the reaction time advantage due to redundant information in the auditory and visual modalities is specific to biological motion. We found no evidence for such an effect: the reaction time reduction was accounted for by statistical facilitation for both biological and scrambled motion. In Experiment 2, we dissociated the auditory and visual information and tested whether inconsistent motion directions across the auditory and visual modalities yield longer reaction times than consistent motion directions. Here we find an effect specific to biological motion: motion incongruency leads to longer reaction times only when the visual walker is intact and recognisable as a human figure. If the figure of the walker is abolished by scrambling, motion incongruency has no effect on the speed of the observers' judgments. In conjunction with Experiment 1, this suggests that conflicting auditory-visual motion information about an intact human walker leads to interference, thereby delaying the response.
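
    The statistical facilitation account invoked in Experiment 1 can be illustrated with a toy race-model simulation: bimodal trials finish when the faster of the two modality-specific processes finishes, so mean reaction time drops without any integration. The distributions and parameters below are hypothetical, and real race-model tests compare full RT distributions (Miller's inequality), not just means:

```python
import random

def race_model_rt(mu_a=320.0, mu_v=300.0, sigma=40.0, n=20000, seed=0):
    """Race model of statistical facilitation: on bimodal trials the
    response is triggered by whichever modality finishes first, so
    the mean of min(auditory, visual) finishing times is faster than
    either unimodal mean even without multisensory integration.
    Returns mean auditory, visual, and bimodal RTs (ms)."""
    rng = random.Random(seed)
    audio = [rng.gauss(mu_a, sigma) for _ in range(n)]
    visual = [rng.gauss(mu_v, sigma) for _ in range(n)]
    bimodal = [min(a, v) for a, v in zip(audio, visual)]
    mean = lambda xs: sum(xs) / len(xs)
    return mean(audio), mean(visual), mean(bimodal)
```

    Any bimodal speed-up beyond what this race between independent channels predicts would be evidence for genuine auditory-visual integration.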

  1. Sparing of Sensitivity to Biological Motion but Not of Global Motion after Early Visual Deprivation

    ERIC Educational Resources Information Center

    Hadad, Bat-Sheva; Maurer, Daphne; Lewis, Terri L.

    2012-01-01

    Patients deprived of visual experience during infancy by dense bilateral congenital cataracts later show marked deficits in the perception of global motion (dorsal visual stream) and global form (ventral visual stream). We expected that they would also show marked deficits in sensitivity to biological motion, which is normally processed in the…

  2. Visualization of the collective vortex-like motions in liquid argon and water: Molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Anikeenko, A. V.; Malenkov, G. G.; Naberukhin, Yu. I.

    2018-03-01

    We propose a new measure of collectivity of molecular motion in the liquid: the average vector of displacement of the particles, ⟨ΔR⟩, which initially have been localized within a sphere of radius Rsph and then have executed the diffusive motion during a time interval Δt. The more correlated the motion of the particles is, the longer will be the vector ⟨ΔR⟩. We visualize the picture of collective motions in molecular dynamics (MD) models of liquids by constructing the ⟨ΔR⟩ vectors and pinning them to the sites of the uniform grid which divides each of the edges of the model box into equal parts. MD models of liquid argon and water have been studied by this method. Qualitatively, the patterns of ⟨ΔR⟩ vectors are similar for these two liquids but differ in minor details. The most important result of our research is the revealing of the aggregates of ⟨ΔR⟩ vectors which have the form of extended flows which sometimes look like the parts of vortices. These vortex-like clusters of ⟨ΔR⟩ vectors have the mesoscopic size (of the order of 10 nm) and persist for tens of picoseconds. Dependence of the ⟨ΔR⟩ vector field on parameters Rsph, Δt, and on the model size has been investigated. This field in the models of liquids differs essentially from that in a random-walk model.
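
    The ⟨ΔR⟩ measure defined above can be sketched directly. This minimal version averages the displacement vectors of the particles that start inside one sphere and omits the grid pinning and all MD machinery; the function name and the test coordinates are hypothetical:

```python
def mean_displacement(pos0, pos1, center, r_sph):
    """Collectivity measure <dR>: average displacement vector over
    the particles whose initial positions lie within a sphere of
    radius r_sph around `center`. pos0/pos1 are lists of 3-tuples of
    particle coordinates at the start and end of the interval."""
    disp, count = [0.0, 0.0, 0.0], 0
    for p0, p1 in zip(pos0, pos1):
        d2 = sum((a - c) ** 2 for a, c in zip(p0, center))
        if d2 <= r_sph ** 2:                  # inside the sphere
            count += 1
            for i in range(3):
                disp[i] += p1[i] - p0[i]
    if count == 0:
        return None
    return [d / count for d in disp]

# Two particles start inside the unit sphere and drift along +x;
# a third starts outside and is ignored.
dR = mean_displacement(
    [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (5.0, 0.0, 0.0)],
    [(1.0, 0.0, 0.0), (1.0, 0.0, 0.0), (5.0, 1.0, 0.0)],
    (0.0, 0.0, 0.0), 1.0)
```

    For uncorrelated diffusion the individual displacements cancel and ⟨ΔR⟩ stays short; correlated, flow-like motion leaves a long resultant vector.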

  3. Tuning self-motion perception in virtual reality with visual illusions.

    PubMed

    Bruder, Gerd; Steinicke, Frank; Wieland, Phil; Lappe, Markus

    2012-07-01

    Motion perception in immersive virtual environments significantly differs from the real world. For example, previous work has shown that users tend to underestimate travel distances in virtual environments (VEs). As a solution to this problem, researchers proposed to scale the mapped virtual camera motion relative to the tracked real-world movement of a user until real and virtual motion are perceived as equal, i.e., real-world movements could be mapped with a larger gain to the VE in order to compensate for the underestimation. However, introducing discrepancies between real and virtual motion can become a problem, in particular, due to misalignments of both worlds and distorted space cognition. In this paper, we describe a different approach that introduces apparent self-motion illusions by manipulating optic flow fields during movements in VEs. These manipulations can affect self-motion perception in VEs, but omit a quantitative discrepancy between real and virtual motions. In particular, we consider to which regions of the virtual view these apparent self-motion illusions can be applied, i.e., the ground plane or peripheral vision. Therefore, we introduce four illusions and show in experiments that optic flow manipulation can significantly affect users' self-motion judgments. Furthermore, we show that with such manipulations of optic flow fields the underestimation of travel distances can be compensated.

  4. Bumblebees measure optic flow for position and speed control flexibly within the frontal visual field.

    PubMed

    Linander, Nellie; Dacke, Marie; Baird, Emily

    2015-04-01

    When flying through narrow spaces, insects control their position by balancing the magnitude of apparent image motion (optic flow) experienced in each eye and their speed by holding this value about a desired set point. Previously, it has been shown that when bumblebees encounter sudden changes in the proximity to nearby surfaces - as indicated by a change in the magnitude of optic flow on each side of the visual field - they adjust their flight speed well before the change, suggesting that they measure optic flow for speed control at low visual angles in the frontal visual field. Here, we investigated the effect that sudden changes in the magnitude of translational optic flow have on both position and speed control in bumblebees if these changes are asymmetrical; that is, if they occur only on one side of the visual field. Our results reveal that the visual region over which bumblebees respond to optic flow cues for flight control is not dictated by a set viewing angle. Instead, bumblebees appear to use the maximum magnitude of translational optic flow experienced in the frontal visual field. This strategy ensures that bumblebees use the translational optic flow generated by the nearest obstacles - that is, those with which they have the highest risk of colliding - to control flight. © 2015. Published by The Company of Biologists Ltd.
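
    The control strategy suggested by these findings (balancing the flow magnitude on the two sides, and holding the maximum translational flow in the frontal field at a set point) can be sketched as a toy proportional controller. The gains, set point, and function names are illustrative assumptions, not parameters from the study:

```python
def flight_commands(flow_left, flow_right, flow_frontal,
                    set_point=3.0, k_pos=0.5, k_speed=0.2):
    """Optic-flow-based position and speed control sketch:
    - lateral command steers away from the side with larger apparent
      image motion (flow balancing); positive means steer left;
    - speed command slows the agent when the maximum translational
      flow sampled across the frontal field exceeds the set point,
      so the nearest obstacle dominates, as the abstract suggests.
    Flow values are in arbitrary rad/s units."""
    lateral_cmd = k_pos * (flow_right - flow_left)
    speed_cmd = k_speed * (set_point - max(flow_frontal))
    return lateral_cmd, speed_cmd

# Hypothetical sample: stronger flow on the right, and a nearby
# obstacle producing a frontal flow peak above the set point.
lat, spd = flight_commands(2.0, 4.0, [1.0, 5.0, 2.0])
```

    Using the maximum rather than a flow value at a fixed viewing angle is what lets the controller react to the nearest surface wherever it appears in the frontal field.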

  5. A synchronous strobed laser light sheet for helicopter model rotor flow visualization

    NASA Technical Reports Server (NTRS)

    Leighty, Bradley D.; Rhodes, David B.; Jones, Stephen B.; Franke, John M.

    1990-01-01

    A synchronous, strobed laser light sheet has been developed for use in flow visualization of a helicopter rotor model. The light sheet strobe circuit included selectable blade position, strobe duration, and multiple pulses per revolution for rotors having 2 to 9 blades. The flow was seeded with propylene glycol. Between runs, a calibration grid board was placed in the plane of the laser sheet and recorded with the video camera at the position used to record the flow field. A slip-sync mode permitted slow motion visualization of the flow field over complete rotations of the rotor. The system was used to make two-dimensional flow field cuts of a four-bladed rotor operating at advance ratio of 0.37 at wind tunnel speeds up to 79.25 meters per second (260 feet per second).

  6. Modelling effects on grid cells of sensory input during self‐motion

    PubMed Central

    Raudies, Florian; Hinman, James R.

    2016-01-01

    The neural coding of spatial location for memory function may involve grid cells in the medial entorhinal cortex, but the mechanism of generating the spatial responses of grid cells remains unclear. This review describes some current theories and experimental data concerning the role of sensory input in generating the regular spatial firing patterns of grid cells, and changes in grid cell firing fields with movement of environmental barriers. As described here, the influence of visual features on spatial firing could involve either computations of self‐motion based on optic flow, or computations of absolute position based on the angle and distance of static visual cues. Due to anatomical selectivity of retinotopic processing, the sensory features on the walls of an environment may have a stronger effect on ventral grid cells that have wider spaced firing fields, whereas the sensory features on the ground plane may influence the firing of dorsal grid cells with narrower spacing between firing fields. These sensory influences could contribute to the potential functional role of grid cells in guiding goal‐directed navigation. PMID:27094096

  7. Binding of motion and colour is early and automatic.

    PubMed

    Blaser, Erik; Papathomas, Thomas; Vidnyánszky, Zoltán

    2005-04-01

    At what stages of the human visual hierarchy different features are bound together, and whether this binding requires attention, is still highly debated. We used a colour-contingent motion after-effect (CCMAE) to study the binding of colour and motion signals. The logic of our approach was as follows: if CCMAEs can be evoked by targeted adaptation of early motion processing stages, without allowing for feedback from higher motion integration stages, then this would support our hypothesis that colour and motion are bound automatically on the basis of spatiotemporally local information. Our results show for the first time that CCMAEs can be evoked by adaptation to a locally paired opposite-motion dot display, a stimulus that, importantly, is known to trigger direction-specific responses in the primary visual cortex yet results in strong inhibition of the directional responses in area MT of macaques as well as in area MT+ in humans and, indeed, is perceived only as motionless flicker. The magnitude of the CCMAE in the locally paired condition was not significantly different from control conditions where the different directions were spatiotemporally separated (i.e. not locally paired) and therefore perceived as two moving fields. These findings provide evidence that adaptation at an early, local motion stage, and only adaptation at this stage, underlies this CCMAE, which in turn implies that spatiotemporally coincident colour and motion signals are bound automatically, most probably as early as cortical area V1, even when the association between colour and motion is perceptually inaccessible.

  8. Individual differences in visual motion perception and neurotransmitter concentrations in the human brain.

    PubMed

    Takeuchi, Tatsuto; Yoshimoto, Sanae; Shimada, Yasuhiro; Kochiyama, Takanori; Kondo, Hirohito M

    2017-02-19

    Recent studies have shown that interindividual variability can be a rich source of information regarding the mechanisms of human visual perception. In this study, we examined the mechanisms underlying interindividual variability in the perception of visual motion, one of the fundamental components of visual scene analysis, by measuring neurotransmitter concentrations using magnetic resonance spectroscopy. First, by psychophysically examining two types of motion phenomena, motion assimilation and motion contrast, we found that, following the presentation of the same stimulus, some participants perceived motion assimilation, while others perceived motion contrast. Furthermore, we found that the concentration of the excitatory neurotransmitter glutamate-glutamine (Glx) in the dorsolateral prefrontal cortex (Brodmann area 46) was positively correlated with a participant's tendency toward motion assimilation over motion contrast; however, this effect was not observed in the visual areas. The concentration of the inhibitory neurotransmitter γ-aminobutyric acid had only a weak effect compared with that of Glx. We conclude that excitatory processes in suprasensory areas are important for an individual's tendency to perceive one of two antagonistic visual motion phenomena. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).

  9. PERCEPTION AND TELEVISION--PHYSIOLOGICAL FACTORS OF TELEVISION VIEWING.

    ERIC Educational Resources Information Center

    GUBA, EGON; AND OTHERS

    An experimental system was developed for recording eye-movement data. Raw data were in the form of motion pictures taken of the monitor of a closed-loop television system. A television camera was mounted so as to capture the subjects' field of view. The eye marker appeared as a small spot of light and indicated the point in the visual field at which the subject…

  10. 14 CFR 141.41 - Flight simulators, flight training devices, and training aids.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... freedom of motion system; (4) Use a visual system that provides at least a 45-degree horizontal field of view and a 30-degree vertical field of view simultaneously for each pilot; and (5) Have been evaluated... aircraft, or set of aircraft, in an open flight deck area or in an enclosed cockpit, including the hardware...

  11. Causal evidence for retina dependent and independent visual motion computations in mouse cortex

    PubMed Central

    Hillier, Daniel; Fiscella, Michele; Drinnenberg, Antonia; Trenholm, Stuart; Rompani, Santiago B.; Raics, Zoltan; Katona, Gergely; Juettner, Josephine; Hierlemann, Andreas; Rozsa, Balazs; Roska, Botond

    2017-01-01

    How neuronal computations in the sensory periphery contribute to computations in the cortex is not well understood. We examined this question in the context of visual-motion processing in the retina and primary visual cortex (V1) of mice. We disrupted retinal direction selectivity – either exclusively along the horizontal axis using FRMD7 mutants or along all directions by ablating starburst amacrine cells – and monitored neuronal activity in layer 2/3 of V1 during stimulation with visual motion. In control mice, we found an overrepresentation of cortical cells preferring posterior visual motion, the dominant motion direction an animal experiences when it moves forward. In mice with disrupted retinal direction selectivity, the overrepresentation of posterior-motion-preferring cortical cells disappeared, and their response at higher stimulus speeds was reduced. This work reveals the existence of two functionally distinct, sensory-periphery-dependent and -independent computations of visual motion in the cortex. PMID:28530661

  12. Interpretation of the function of the striate cortex

    NASA Astrophysics Data System (ADS)

    Garner, Bernardette M.; Paplinski, Andrew P.

    2000-04-01

    Biological neural networks do not require retraining every time objects move in the visual field. Conventional computer neural networks do not share this shift-invariance. The brain compensates for movements of the head, body, eyes and objects by allowing the sensory data to be tracked across the visual field. The neurons in the striate cortex respond to objects moving across the field of vision, as is seen in many experiments. It is proposed that the neurons in the striate cortex allow the continuous angle changes needed to compensate for changes in orientation of the head, eyes and the motion of objects in the field of vision. It is hypothesized that the neurons in the striate cortex form a system that allows for the translation, some rotation and scaling of objects, and provides a continuity of objects as they move relative to other objects. The neurons in the striate cortex respond to features which are fundamental to sight, such as orientation of lines, direction of motion, color and contrast. The neurons that respond to these features are arranged on the cortex in a way that depends on the features they are responding to and on the area of the retina from which they receive their inputs.

  13. Nonlinear circuits for naturalistic visual motion estimation

    PubMed Central

    Fitzgerald, James E; Clark, Damon A

    2015-01-01

    Many animals use visual signals to estimate motion. Canonical models suppose that animals estimate motion by cross-correlating pairs of spatiotemporally separated visual signals, but recent experiments indicate that humans and flies perceive motion from higher-order correlations that signify motion in natural environments. Here we show how biologically plausible processing motifs in neural circuits could be tuned to extract this information. We emphasize how known aspects of Drosophila's visual circuitry could embody this tuning and predict fly behavior. We find that segregating motion signals into ON/OFF channels can enhance estimation accuracy by accounting for natural light/dark asymmetries. Furthermore, a diversity of inputs to motion detecting neurons can provide access to more complex higher-order correlations. Collectively, these results illustrate how non-canonical computations improve motion estimation with naturalistic inputs. This argues that the complexity of the fly's motion computations, implemented in its elaborate circuits, represents a valuable feature of its visual motion estimator. DOI: http://dx.doi.org/10.7554/eLife.09123.001 PMID:26499494
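
    The canonical pairwise cross-correlation model this abstract contrasts with is the Hassenstein-Reichardt correlator. A minimal sketch of that baseline (the pure time-delay standing in for a low-pass delay filter, and the test signals, are simplifying assumptions):

    ```python
    import numpy as np

    def hassenstein_reichardt(left_sig, right_sig, delay=1):
        """Minimal Hassenstein-Reichardt correlator sketch.

        left_sig, right_sig: 1-D luminance signals from two neighbouring
        photoreceptors; delay: temporal offset in samples, standing in for
        the low-pass delay filter of the full model. Positive output means
        motion from the left receptor toward the right one.
        """
        l = np.asarray(left_sig, dtype=float)
        r = np.asarray(right_sig, dtype=float)
        fwd = l[:-delay] * r[delay:]   # delayed left meets current right
        bwd = r[:-delay] * l[delay:]   # mirror-symmetric opponent arm
        return float(np.mean(fwd - bwd))

    # A Gaussian luminance pulse sweeping left-to-right: the right receptor
    # sees the same pulse one sample later than the left receptor does.
    t = np.arange(50)
    left = np.exp(-0.5 * ((t - 20) / 3.0) ** 2)
    right = np.exp(-0.5 * ((t - 21) / 3.0) ** 2)
    motion_lr = hassenstein_reichardt(left, right)   # > 0: rightward motion
    ```

    Because this detector multiplies only signal pairs, it is blind to the higher-order correlations and ON/OFF asymmetries that the paper argues flies exploit.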

  14. A visually guided collision warning system with a neuromorphic architecture.

    PubMed

    Okuno, Hirotsugu; Yagi, Tetsuya

    2008-12-01

    We have designed a visually guided collision warning system with a neuromorphic architecture, employing an algorithm inspired by the visual nervous system of locusts. The system was implemented with mixed analog-digital integrated circuits consisting of an analog resistive network and field-programmable gate array (FPGA) circuits. The resistive network instantaneously processes the interaction between laterally spreading excitatory and inhibitory signals, which is essential for real-time computation of collision avoidance with low power consumption and compact hardware. The system responded selectively to approaching objects in simulated movie images at close range. However, when installed in a miniature mobile car, the system was confronted with serious noise problems caused by vibratory ego-motion. To overcome this problem, we extended the algorithm, which is also implementable in FPGA circuits, so that the system responds robustly during ego-motion.

  15. Thinking in z-space: flatness and spatial narrativity

    NASA Astrophysics Data System (ADS)

    Zone, Ray

    2012-03-01

    Now that digital technology has accessed the Z-space in cinema, narrative artistry is at a loss. Motion picture professionals can no longer readily resort to familiar tools. A new language and new linguistics for Z-axis storytelling are necessary. After first examining the roots of monocular thinking in painting, prior modes of visual narrative in two-dimensional cinema obviating true binocular stereopsis can be explored, particularly montage, camera motion and depth of field, with historic examples. Special attention is paid to the manner in which monocular cues for depth have been exploited to infer depth on a planar screen. Both the artistic potential and visual limitations of actual stereoscopic depth as a filmmaking language are interrogated. After an examination of the historic basis of monocular thinking in visual culture, a context for artistic exploration of the use of the z-axis as a heightened means of creating dramatic and emotional impact upon the viewer is illustrated.

  16. A novel role for visual perspective cues in the neural computation of depth.

    PubMed

    Kim, HyungGoo R; Angelaki, Dora E; DeAngelis, Gregory C

    2015-01-01

    As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extraretinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We found that incorporating these 'dynamic perspective' cues allowed the visual system to generate selectivity for depth sign from motion parallax in macaque cortical area MT, a computation that was previously thought to require extraretinal signals regarding eye velocity. Our findings suggest neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations.

  17. The sensory components of high-capacity iconic memory and visual working memory.

    PubMed

    Bradley, Claire; Pearson, Joel

    2012-01-01

    Early visual memory can be split into two primary components: a high-capacity, short-lived iconic memory followed by a limited-capacity visual working memory that can last many seconds. Whereas a large number of studies have investigated visual working memory for low-level sensory features, much research on iconic memory has used more "high-level" alphanumeric stimuli such as letters or numbers. These two forms of memory are typically examined separately, despite an intrinsic overlap in their characteristics. Here, we used a purely sensory paradigm to examine visual short-term memory for 10 homogeneous items of three different visual features (color, orientation and motion) across a range of durations from 0 to 6 s. We found that the amount of information stored in iconic memory is smaller for motion than for color or orientation. Performance declined exponentially with longer storage durations and reached chance levels after ∼2 s. Further experiments showed that performance for the 10 items at 1 s was contingent on unperturbed attentional resources. In addition, for orientation stimuli, performance was contingent on the location of stimuli in the visual field, especially for short cue delays. Overall, our results suggest a smooth transition between an automatic, high-capacity, feature-specific sensory-iconic memory, and an effortful "lower-capacity" visual working memory.

  18. Vapour pressure and adiabatic cooling from champagne: slow-motion visualization of gas thermodynamics

    NASA Astrophysics Data System (ADS)

    Vollmer, Michael; Möllmann, Klaus-Peter

    2012-09-01

    We present two simple demonstration experiments recorded with high-speed cameras in the fields of gas dynamics and thermal physics. The experiments feature vapour pressure effects as well as adiabatic cooling observed upon opening a bottle of champagne.

  19. Motion/visual cueing requirements for vortex encounters during simulated transport visual approach and landing

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Bowles, R. L.

    1983-01-01

    This paper addresses the issues of motion/visual cueing fidelity requirements for vortex encounters during simulated transport visual approaches and landings. Four simulator configurations were utilized to provide objective performance measures during simulated vortex penetrations, and subjective comments from pilots were collected. The configurations used were as follows: fixed base with visual degradation (delay), fixed base with no visual degradation, moving base with visual degradation (delay), and moving base with no visual degradation. The statistical comparisons of the objective measures and the subjective pilot opinions indicated that although both minimum visual delay and motion cueing are recommended for the vortex penetration task, the visual-scene delay characteristics were not as significant a fidelity factor as was the presence of motion cues. However, this indication was applicable to a restricted task, and to transport aircraft. Although they were statistically significant, the effects of visual delay and motion cueing on the touchdown-related measures were considered to be of no practical consequence.

  20. Cognitive processes involved in smooth pursuit eye movements: behavioral evidence, neural substrate and clinical correlation

    PubMed Central

    Fukushima, Kikuro; Fukushima, Junko; Warabi, Tateo; Barnes, Graham R.

    2013-01-01

    Smooth-pursuit eye movements allow primates to track moving objects. Efficient pursuit requires appropriate target selection and predictive compensation for inherent processing delays. Prediction depends on expectation of future object motion, storage of motion information and use of extra-retinal mechanisms in addition to visual feedback. We present behavioral evidence of how cognitive processes are involved in predictive pursuit in normal humans and then describe neuronal responses in monkeys and behavioral responses in patients using a new technique to test these cognitive controls. The new technique examines the neural substrate of working memory and movement preparation for predictive pursuit by using a memory-based task in macaque monkeys trained to pursue (go) or not pursue (no-go) according to a go/no-go cue, in a direction based on memory of a previously presented visual motion display. Single-unit task-related neuronal activity was examined in medial superior temporal cortex (MST), supplementary eye fields (SEF), caudal frontal eye fields (FEF), cerebellar dorsal vermis lobules VI–VII, caudal fastigial nuclei (cFN), and floccular region. Neuronal activity reflecting working memory of visual motion direction and go/no-go selection was found predominantly in SEF, cerebellar dorsal vermis and cFN, whereas movement preparation related signals were found predominantly in caudal FEF and the same cerebellar areas. Chemical inactivation produced effects consistent with differences in signals represented in each area. When applied to patients with Parkinson's disease (PD), the task revealed deficits in movement preparation but not working memory. In contrast, patients with frontal cortical or cerebellar dysfunction had high error rates, suggesting impaired working memory. 
We show how neuronal activity may be explained by models of retinal and extra-retinal interaction in target selection and predictive control and thus aid understanding of underlying pathophysiology. PMID:23515488

  1. Conveying Looming with a Localized Tactile Cue

    DTIC Science & Technology

    2015-04-01

    leaning and reflexive head righting required at different speeds of linear or angular motion, the angle of contact of the foot to the substrate (e.g...approach information (e.g., relative distance updates) prior to actual contact , as has been reported for visual and auditory displays. A few studies have...Jacobs, 2013). Cancar et al. asked 12 subjects to estimate time-to- contact of a radially-expanding tactile or visual flow field representing a

  2. Can walking motions improve visually induced rotational self-motion illusions in virtual reality?

    PubMed

    Riecke, Bernhard E; Freiberg, Jacob B; Grechkin, Timofey Y

    2015-02-04

    Illusions of self-motion (vection) can provide compelling sensations of moving through virtual environments without the need for complex motion simulators or large tracked physical walking spaces. Here we explore the interaction between biomechanical cues (stepping along a rotating circular treadmill) and visual cues (viewing simulated self-rotation) for providing stationary users a compelling sensation of rotational self-motion (circular vection). When tested individually, biomechanical and visual cues were similarly effective in eliciting self-motion illusions. However, in combination they yielded significantly more intense self-motion illusions. These findings provide the first compelling evidence that walking motions can be used to significantly enhance visually induced rotational self-motion perception in virtual environments (and vice versa) without having to provide for physical self-motion or motion platforms. This is noteworthy, as linear treadmills have been found to actually impair visually induced translational self-motion perception (Ash, Palmisano, Apthorp, & Allison, 2013). Given the predominant focus on linear walking interfaces for virtual-reality locomotion, our findings suggest that investigating circular and curvilinear walking interfaces offers a promising direction for future research and development and can help to enhance self-motion illusions, presence and immersion in virtual-reality systems. © 2015 ARVO.

  3. MPI CyberMotion Simulator: implementation of a novel motion simulator to investigate multisensory path integration in three dimensions.

    PubMed

    Barnett-Cowan, Michael; Meilinger, Tobias; Vidal, Manuel; Teufel, Harald; Bülthoff, Heinrich H

    2012-05-10

    Path integration is a process in which self-motion is integrated over time to obtain an estimate of one's current position relative to a starting point (1). Humans can do path integration based exclusively on visual (2-3), auditory (4), or inertial cues (5). However, with multiple cues present, inertial cues - particularly kinaesthetic - seem to dominate (6-7). In the absence of vision, humans tend to overestimate short distances (<5 m) and turning angles (<30°), but underestimate longer ones (5). Movement through physical space therefore does not seem to be accurately represented by the brain. Extensive work has been done on evaluating path integration in the horizontal plane, but little is known about vertical movement (see (3) for virtual movement from vision alone). One reason for this is that traditional motion simulators have a small range of motion restricted mainly to the horizontal plane. Here we take advantage of a motion simulator (8-9) with a large range of motion to assess whether path integration is similar between horizontal and vertical planes. The relative contributions of inertial and visual cues for path navigation were also assessed. Sixteen observers sat upright in a seat mounted to the flange of a modified KUKA anthropomorphic robot arm. Sensory information was manipulated by providing visual (optic flow, limited lifetime star field), vestibular-kinaesthetic (passive self-motion with eyes closed), or visual and vestibular-kinaesthetic motion cues. Movement trajectories in the horizontal, sagittal and frontal planes consisted of two segment lengths (1st: 0.4 m, 2nd: 1 m; ±0.24 m/s² peak acceleration). The angle of the two segments was either 45° or 90°. Observers pointed back to their origin by moving an arrow that was superimposed on an avatar presented on the screen. Observers were more likely to underestimate angle size for movement in the horizontal plane compared to the vertical planes.
In the frontal plane, observers were more likely to overestimate angle size, while there was no such bias in the sagittal plane. Finally, observers responded more slowly when answering based on vestibular-kinaesthetic information alone. Human path integration based on vestibular-kinaesthetic information alone thus takes longer than when visual information is present. The finding that pointing was consistent with underestimating the angle moved through in the horizontal plane and overestimating it in the vertical planes suggests that the neural representation of self-motion through space is non-symmetrical, which may relate to the fact that humans experience movement mostly within the horizontal plane.
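
    For the two-segment trajectories used in this paradigm, the ground-truth homing direction follows from simple vector geometry. A sketch under assumed conventions (counterclockwise-positive turn angle, pointing direction expressed relative to the final heading; none of this is taken from the paper itself):

    ```python
    import math

    def homing_angle(seg1, seg2, turn_deg):
        """Ground-truth pointing-back direction after a two-segment path.

        The observer moves seg1 metres straight ahead, turns by turn_deg
        (counterclockwise positive), then moves seg2 metres. Returns the
        angle in degrees of the vector pointing back to the start,
        relative to the final heading (positive = to the left).
        """
        # End position in a frame where segment 1 runs along +x.
        x = seg1 + seg2 * math.cos(math.radians(turn_deg))
        y = seg2 * math.sin(math.radians(turn_deg))
        # World-frame direction back to the origin.
        back = math.degrees(math.atan2(-y, -x))
        # Express relative to the final heading, wrapped to (-180, 180].
        return ((back - turn_deg + 180.0) % 360.0) - 180.0

    # Study trajectory: 0.4 m forward, a 90-degree turn, then 1 m.
    angle = homing_angle(0.4, 1.0, 90.0)   # ~158 deg: behind and to the left
    ```

    Deviations of observers' pointing responses from this geometric answer are what reveal the under- and overestimation biases reported above.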

  4. Perception of Visual Speed While Moving

    ERIC Educational Resources Information Center

    Durgin, Frank H.; Gigone, Krista; Scott, Rebecca

    2005-01-01

    During self-motion, the world normally appears stationary. In part, this may be due to reductions in visual motion signals during self-motion. In 8 experiments, the authors used magnitude estimation to characterize changes in visual speed perception as a result of biomechanical self-motion alone (treadmill walking), physical translation alone…

  5. Anticipatory Smooth Eye Movements in Autism Spectrum Disorder

    PubMed Central

    Aitkin, Cordelia D.; Santos, Elio M.; Kowler, Eileen

    2013-01-01

    Smooth pursuit eye movements are important for vision because they maintain the line of sight on targets that move smoothly within the visual field. Smooth pursuit is driven by neural representations of motion, including a surprisingly strong influence of high-level signals representing expected motion. We studied anticipatory smooth eye movements (defined as smooth eye movements in the direction of expected future motion) produced by salient visual cues in a group of high-functioning observers with Autism Spectrum Disorder (ASD), a condition that has been associated with difficulties in either generating predictions, or translating predictions into effective motor commands. Eye movements were recorded while participants pursued the motion of a disc that moved within an outline drawing of an inverted Y-shaped tube. The cue to the motion path was a visual barrier that blocked the untraveled branch (right or left) of the tube. ASD participants showed strong anticipatory smooth eye movements whose velocity was the same as that of a group of neurotypical participants. Anticipatory smooth eye movements appeared on the very first cued trial, indicating that trial-by-trial learning was not responsible for the responses. These results are significant because they show that anticipatory capacities are intact in high-functioning ASD in cases where the cue to the motion path is highly salient and unambiguous. Once the ability to generate anticipatory pursuit is demonstrated, the study of the anticipatory responses with a variety of types of cues provides a window into the perceptual or cognitive processes that underlie the interpretation of events in natural environments or social situations. PMID:24376667

  6. Anticipatory smooth eye movements in autism spectrum disorder.

    PubMed

    Aitkin, Cordelia D; Santos, Elio M; Kowler, Eileen

    2013-01-01

    Smooth pursuit eye movements are important for vision because they maintain the line of sight on targets that move smoothly within the visual field. Smooth pursuit is driven by neural representations of motion, including a surprisingly strong influence of high-level signals representing expected motion. We studied anticipatory smooth eye movements (defined as smooth eye movements in the direction of expected future motion) produced by salient visual cues in a group of high-functioning observers with Autism Spectrum Disorder (ASD), a condition that has been associated with difficulties in either generating predictions, or translating predictions into effective motor commands. Eye movements were recorded while participants pursued the motion of a disc that moved within an outline drawing of an inverted Y-shaped tube. The cue to the motion path was a visual barrier that blocked the untraveled branch (right or left) of the tube. ASD participants showed strong anticipatory smooth eye movements whose velocity was the same as that of a group of neurotypical participants. Anticipatory smooth eye movements appeared on the very first cued trial, indicating that trial-by-trial learning was not responsible for the responses. These results are significant because they show that anticipatory capacities are intact in high-functioning ASD in cases where the cue to the motion path is highly salient and unambiguous. Once the ability to generate anticipatory pursuit is demonstrated, the study of the anticipatory responses with a variety of types of cues provides a window into the perceptual or cognitive processes that underlie the interpretation of events in natural environments or social situations.

  7. Motion-artifact-robust, polarization-resolved second-harmonic-generation microscopy based on rapid polarization switching with electro-optic Pockels cell and its application to in vivo visualization of collagen fiber orientation in human facial skin

    PubMed Central

    Tanaka, Yuji; Hase, Eiji; Fukushima, Shuichiro; Ogura, Yuki; Yamashita, Toyonobu; Hirao, Tetsuji; Araki, Tsutomu; Yasui, Takeshi

    2014-01-01

    Polarization-resolved second-harmonic-generation (PR-SHG) microscopy is a powerful tool for investigating collagen fiber orientation quantitatively with low invasiveness. However, the waiting time for the mechanical polarization rotation makes it too sensitive to motion artifacts and hence has hampered its use in various applications in vivo. In the work described in this article, we constructed a motion-artifact-robust PR-SHG microscope based on rapid polarization switching at every pixel with an electro-optic Pockels cell (PC), in synchronization with step-wise raster scanning of the focus spot and alternate acquisition of vertically and horizontally polarization-resolved SHG signals. The constructed PC-based PR-SHG microscope enabled us to visualize orientation mapping of dermal collagen fiber in human facial skin in vivo without the influence of motion artifacts. Furthermore, it suggested a location and/or age dependence of the collagen fiber orientation in human facial skin. The robustness to motion artifacts in the collagen orientation measurement will expand the application scope of SHG microscopy in dermatology and collagen-related fields. PMID:24761292

  8. WiseView: Visualizing motion and variability of faint WISE sources

    NASA Astrophysics Data System (ADS)

    Caselden, Dan; Westin, Paul, III; Meisner, Aaron; Kuchner, Marc; Colin, Guillaume

    2018-06-01

    WiseView renders image blinks of Wide-field Infrared Survey Explorer (WISE) coadds spanning a multi-year time baseline in a browser. The software allows for easy visual identification of motion and variability for sources far beyond the single-frame detection limit, a key threshold not surmounted by many studies. WiseView transparently gathers small image cutouts drawn from many terabytes of unWISE coadds, facilitating access to this large and unique dataset. Users need only input the coordinates of interest and can interactively tune parameters including the image stretch, colormap and blink rate. WiseView was developed in the context of the Backyard Worlds: Planet 9 citizen science project, and has enabled hundreds of brown dwarf candidate discoveries by citizen scientists and professional astronomers.

  9. Visually induced self-motion sensation adapts rapidly to left-right reversal of vision

    NASA Technical Reports Server (NTRS)

    Oman, C. M.; Bock, O. L.

    1981-01-01

    Three experiments were conducted using 15 adult volunteers with no overt oculomotor or vestibular disorders. In all experiments, left-right vision reversal was achieved using prism goggles, which permitted a binocular field of vision subtending approximately 45 deg horizontally and 28 deg vertically. In all experiments, circular vection (CV) was tested before and immediately after a period of exposure to reversed vision. After one to three hours of active movement while wearing vision-reversing goggles, 10 of 15 (stationary) human subjects viewing a moving stripe display experienced a self-rotation illusion in the same direction as seen stripe motion, rather than in the opposite (normal) direction, demonstrating that the central neural pathways that process visual self-rotation cues can undergo rapid adaptive modification.

  10. Predator pursuit strategies: how do falcons and hawks chase prey?

    NASA Astrophysics Data System (ADS)

    Kane, Suzanne Amador; Zamani, Marjon; Fulton, Andrew; Rosenthal, Lee

    2014-03-01

    This study reports on experiments on falcons, goshawks and red-tailed hawks wearing miniature video cameras mounted on their backs or heads while pursuing flying or ground-based prey. Videos of hunts recorded by the raptors were analyzed to determine apparent prey positions on their visual fields during pursuits. These video data were then interpreted using computer simulations of pursuit steering laws observed in insects and mammals. A comparison of the empirical and modeling data indicates that falcons use cues from the apparent motion of prey on the falcon's visual field to track and capture flying prey via a form of motion camouflage. The falcons were also found to maintain their prey's image at visual angles consistent with use of their shallow fovea. Results for goshawks and red-tailed hawks were analyzed for a comparative study of how pursuits of ground-based prey by accipiters and buteos differ from those used by falcons chasing flying prey. These results should prove relevant for understanding the coevolution of pursuit and evasion, the development of computer models of predation on flocks, and the integration of sensory and locomotion systems in biomimetic robots.
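
    The steering-law comparison can be made concrete with a toy 2-D chase. The sketch below implements only classical pure pursuit (heading straight at the prey's current position); motion camouflage, by contrast, holds the prey at a fixed bearing on the pursuer's visual field. All parameters are illustrative and are not the authors' model:

    ```python
    import math

    def pure_pursuit(pred, prey, prey_vel, pred_speed, dt=0.01, max_steps=10000):
        """Toy 2-D classical pursuit: the predator always heads straight
        at the prey's current position. Returns steps until capture,
        or None if the prey is not caught within max_steps."""
        px, py = pred
        qx, qy = prey
        for step in range(max_steps):
            dx, dy = qx - px, qy - py
            dist = math.hypot(dx, dy)
            if dist < 0.1:               # arbitrary capture radius
                return step
            # Steer directly at the prey; this is the defining rule of
            # pure pursuit, as opposed to constant-bearing strategies.
            px += pred_speed * dx / dist * dt
            py += pred_speed * dy / dist * dt
            qx += prey_vel[0] * dt
            qy += prey_vel[1] * dt
        return None

    # A faster predator chasing prey that flees along +x at 5 m/s:
    # capture is guaranteed because the closing speed stays positive.
    steps = pure_pursuit((0.0, 0.0), (10.0, 5.0), (5.0, 0.0), pred_speed=8.0)
    ```

    Fitting trajectories like these against the head-camera prey tracks is, in spirit, how the simulations referred to above discriminate among candidate steering laws.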

  11. Feature-based interference from unattended visual field during attentional tracking in younger and older adults.

    PubMed

    Störmer, Viola S; Li, Shu-Chen; Heekeren, Hauke R; Lindenberger, Ulman

    2011-02-01

    The ability to attend to multiple objects that move in the visual field is important for many aspects of daily functioning. The attentional capacity for such dynamic tracking, however, is highly limited and undergoes age-related decline. Several aspects of the tracking process can influence performance. Here, we investigated effects of feature-based interference from distractor objects that appear in unattended regions of the visual field with a hemifield-tracking task. Younger and older participants performed an attentional tracking task in one hemifield while distractor objects were concurrently presented in the unattended hemifield. Feature similarity between objects in the attended and unattended hemifields as well as motion speed and the number of to-be-tracked objects were parametrically manipulated. The results show that increasing feature overlap leads to greater interference from the unattended visual field. This effect of feature-based interference was only present in the slow speed condition, indicating that the interference is mainly modulated by perceptual demands. High-performing older adults showed a similar interference effect as younger adults, whereas low-performing older adults showed poor tracking performance overall.

  12. [Atypical optic neuritis in systemic lupus erythematosus (SLE)].

    PubMed

    Eckstein, A; Kötter, I; Wilhelm, H

    1995-11-01

    A 67-year-old woman experienced acute unilateral visual loss accompanied by pain with eye movements. There was a marked relative afferent pupillary defect and a nerve fiber bundle defect in the upper half of the visual field. Optic discs were normal. After 4 days vision worsened to motion detection and only a temporal island was left in the visual field. The optic disc margin was blurred. She had been suffering from renal insufficiency for thirty years. Immunoserologic examination revealed elevated ANA and DS-DNA antibody titers. Optic neuritis in systemic lupus erythematosus was diagnosed; it is termed atypical because of its association with a systemic disease and the patient's advanced age. The patient was treated with 100 mg prednisolone/day, which was then slowly tapered. Within 6 weeks visual acuity improved to 0.6 and the visual field normalized except for a small nerve fiber bundle defect. Autoimmune optic neuritis often responds to treatment with corticosteroids. Early onset of treatment is important. Immunopathologic examinations are an important diagnostic tool in atypical optic neuritis. Their results may even have consequences for the treatment of the underlying disease.

  13. Demonstrating the Potential for Dynamic Auditory Stimulation to Contribute to Motion Sickness

    PubMed Central

    Keshavarz, Behrang; Hettinger, Lawrence J.; Kennedy, Robert S.; Campos, Jennifer L.

    2014-01-01

    Auditory cues can create the illusion of self-motion (vection) in the absence of visual or physical stimulation. The present study aimed to determine whether auditory cues alone can also elicit motion sickness and how auditory cues contribute to motion sickness when added to visual motion stimuli. Twenty participants were seated in front of a curved projection display and were exposed to a virtual scene that constantly rotated around the participant's vertical axis. The virtual scene contained either visual-only, auditory-only, or a combination of corresponding visual and auditory cues. All participants performed all three conditions in a counterbalanced order. Participants tilted their heads alternately towards the right or left shoulder in all conditions during stimulus exposure in order to create pseudo-Coriolis effects and to maximize the likelihood for motion sickness. Measurements of motion sickness (onset, severity), vection (latency, strength, duration), and postural steadiness (center of pressure) were recorded. Results showed that adding auditory cues to the visual stimuli did not, on average, affect motion sickness and postural steadiness, but it did reduce vection onset times and increase vection strength compared to pure visual or pure auditory stimulation. Eighteen of the 20 participants reported at least slight motion sickness in the two conditions that included visual stimuli. More interestingly, six participants also reported slight motion sickness during pure auditory stimulation and two of the six participants stopped the pure auditory test session due to motion sickness. The present study is the first to demonstrate that motion sickness may be caused by pure auditory stimulation, which we refer to as “auditorily induced motion sickness”. PMID:24983752

  14. Premotor cortex is sensitive to auditory-visual congruence for biological motion.

    PubMed

    Wuerger, Sophie M; Parkes, Laura; Lewis, Penelope A; Crocker-Buque, Alex; Rutschmann, Roland; Meyer, Georg F

    2012-03-01

    The auditory and visual perception systems have developed special processing strategies for ecologically valid motion stimuli, utilizing some of the statistical properties of the real world. A well-known example is the perception of biological motion, for example, the perception of a human walker. The aim of the current study was to identify the cortical network involved in the integration of auditory and visual biological motion signals. We first determined the cortical regions of auditory and visual coactivation (Experiment 1); a conjunction analysis based on unimodal brain activations identified four regions: middle temporal area, inferior parietal lobule, ventral premotor cortex, and cerebellum. The brain activations arising from bimodal motion stimuli (Experiment 2) were then analyzed within these regions of coactivation. Auditory footsteps were presented concurrently with either an intact visual point-light walker (biological motion) or a scrambled point-light walker; auditory and visual motion in depth (walking direction) could either be congruent or incongruent. Our main finding is that motion incongruency (across modalities) increases the activity in the ventral premotor cortex, but only if the visual point-light walker is intact. Our results extend our current knowledge by providing new evidence consistent with the idea that the premotor area assimilates information across the auditory and visual modalities by comparing the incoming sensory input with an internal representation.

  15. Neural Representation of Motion-In-Depth in Area MT

    PubMed Central

    Sanada, Takahisa M.

    2014-01-01

    Neural processing of 2D visual motion has been studied extensively, but relatively little is known about how visual cortical neurons represent visual motion trajectories that include a component toward or away from the observer (motion in depth). Psychophysical studies have demonstrated that humans perceive motion in depth based on both changes in binocular disparity over time (CD cue) and interocular velocity differences (IOVD cue). However, evidence for neurons that represent motion in depth has been limited, especially in primates, and it is unknown whether such neurons make use of CD or IOVD cues. We show that approximately one-half of neurons in macaque area MT are selective for the direction of motion in depth, and that this selectivity is driven primarily by IOVD cues, with a small contribution from the CD cue. Our results establish that area MT, a central hub of the primate visual motion processing system, contains a 3D representation of visual motion. PMID:25411481
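
    The relation between the two binocular cues can be made concrete with a small geometric sketch (illustrative only; values such as the 6.4 cm interocular distance are typical figures, not taken from the study, and the function names are hypothetical). On the image, the change of disparity over time and the interocular velocity difference are algebraically the same quantity; the distinction lies in which primitives a neural mechanism actually computes: disparities tracked over time (CD) versus monocular image velocities compared across the eyes (IOVD).

```python
import math

IOD = 6.4  # interocular distance in cm (a typical adult value, assumed here)

def image_angle(eye_x, obj_x, obj_z):
    # horizontal visual direction of the object in one eye, in radians
    return math.atan2(obj_x - eye_x, obj_z)

def mid_cues(obj_x, obj_z, vz, dt=1e-3):
    """Left/right image velocities plus the CD and IOVD cues for motion in depth."""
    thL0 = image_angle(-IOD / 2, obj_x, obj_z)
    thR0 = image_angle(+IOD / 2, obj_x, obj_z)
    z1 = obj_z + vz * dt                      # object after a small time step
    thL1 = image_angle(-IOD / 2, obj_x, z1)
    thR1 = image_angle(+IOD / 2, obj_x, z1)
    vL = (thL1 - thL0) / dt                   # left-eye image velocity
    vR = (thR1 - thR0) / dt                   # right-eye image velocity
    cd = ((thL1 - thR1) - (thL0 - thR0)) / dt # change of disparity over time
    iovd = vL - vR                            # interocular velocity difference
    return vL, vR, cd, iovd
```

    For an object approaching along the midline the two eyes see opposite image motion (vL > 0, vR < 0), the signature that IOVD-selective neurons can exploit even though the CD and IOVD quantities agree numerically.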

  16. Filling-in visual motion with sounds.

    PubMed

    Väljamäe, A; Soto-Faraco, S

    2008-10-01

    Information about the motion of objects can be extracted by multiple sensory modalities, and, as a consequence, object motion perception typically involves the integration of multi-sensory information. Often, in naturalistic settings, the flow of such information can be rather discontinuous (e.g. a cat racing through the furniture in a cluttered room is partly seen and partly heard). This study addressed audio-visual interactions in the perception of time-sampled object motion by measuring adaptation after-effects. We found significant auditory after-effects following adaptation to unisensory auditory and visual motion in depth, sampled at 12.5 Hz. The visually induced (cross-modal) auditory motion after-effect was eliminated if visual adaptors flashed at half of the rate (6.25 Hz). Remarkably, the addition of the high-rate acoustic flutter (12.5 Hz) to this ineffective, sparsely time-sampled, visual adaptor restored the auditory after-effect to a level comparable to what was seen with high-rate bimodal adaptors (flashes and beeps). Our results suggest that this auditory-induced reinstatement of the motion after-effect from the poor visual signals resulted from the occurrence of sound-induced illusory flashes. This effect was found to be dependent both on the directional congruency between modalities and on the rate of auditory flutter. The auditory filling-in of time-sampled visual motion supports the feasibility of using reduced frame rate visual content in multisensory broadcasting and virtual reality applications.

  17. EFFECTS OF STROBOSCOPIC VISUAL TRAINING ON VISUAL ATTENTION, MOTION PERCEPTION, AND CATCHING PERFORMANCE.

    PubMed

    Wilkins, Luke; Gray, Rob

    2015-08-01

    It has been shown recently that stroboscopic visual training can improve visual-perceptual abilities, such as central field motion sensitivity and anticipatory timing. Such training should also improve a sports skill that relies on these perceptual abilities, namely ball catching. Thirty athletes (12 women, 18 men; M age=22.5 yr., SD=4.7) were assigned to one of two types of stroboscopic training groups: a variable strobe rate (VSR) group for which the off-time of the glasses was systematically increased (as in previous research) and a constant strobe rate (CSR) group for which the glasses were always set at the shortest off-time. Training involved simple, tennis ball-catching drills (9×20 min.) occurring over a 6-wk. period. Before and after training, participants completed a one-handed ball-catching task and the Useful Field of View (UFOV) and the Motion in Depth Sensitivity (MIDS) tests. Since the CSR condition used in the present study has been shown to have no effect on catching performance, it was predicted that the VSR group would show significantly greater improvement from pre- to post-training. There were no significant differences between the CSR and VSR groups on any of the tests. However, changes in catching performance (total balls caught) from pre- to post-training were significantly correlated with changes in scores for the UFOV single-task and MIDS tests. That is, regardless of group, participants whose perceptual-cognitive performance improved in the post-test were significantly more likely to improve their catching performance. This suggests that the perceptual changes observed in previous stroboscopic training studies may be linked to changes in sports skill performance.

  18. A stingless bee can use visual odometry to estimate both height and distance.

    PubMed

    Eckles, M A; Roubik, D W; Nieh, J C

    2012-09-15

    Bees move and forage within three dimensions and rely heavily on vision for navigation. The use of vision-based odometry has been studied extensively in horizontal distance measurement, but not vertical distance measurement. The honey bee Apis mellifera and the stingless bee Melipona seminigra measure distance visually using optic flow-movement of images as they pass across the retina. The honey bees gauge height using image motion in the ventral visual field. The stingless bees forage at different tropical forest canopy levels, ranging up to 40 m at our site. Thus, estimating height would be advantageous. We provide the first evidence that the stingless bee Melipona panamica utilizes optic flow information to gauge not only distance traveled but also height above ground, by processing information primarily from the lateral visual field. After training bees to forage at a set height in a vertical tunnel lined with black and white stripes, we observed foragers that explored a new tunnel with no feeder. In a new tunnel, bees searched at the same height they were trained to. In a narrower tunnel, bees experienced more image motion and significantly lowered their search height. In a wider tunnel, bees experienced less image motion and searched at significantly greater heights. In a tunnel without optic cues, bees were disoriented and searched at random heights. A horizontal tunnel testing these variables similarly affected foraging, but bees exhibited less precision (greater variance in search positions). Accurately gauging flight height above ground may be crucial for this species and others that compete for resources located at heights ranging from ground level to the high tropical forest canopies.
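
    The integrated-optic-flow account of this behavior can be sketched numerically. This is an illustrative toy model, not the authors' analysis; the tunnel dimensions, flight speed, and function name are hypothetical. The odometer accumulates lateral image angular velocity, roughly v/d for walls at distance d, so the same accumulated total is reached at a lower height in a narrower tunnel and a greater height in a wider one:

```python
def search_height(train_height, train_halfwidth, test_halfwidth, v=1.0, dt=0.01):
    """Height at which an integrated-optic-flow odometer 'runs out' in a test tunnel."""
    # training ascent: accumulate lateral image motion (angular velocity ~ v/d)
    target, h = 0.0, 0.0
    while h < train_height:
        target += (v / train_halfwidth) * dt
        h += v * dt
    # test ascent: climb until the same total image motion has accrued
    acc, h = 0.0, 0.0
    while acc < target:
        acc += (v / test_halfwidth) * dt
        h += v * dt
    return h
```

    A bee trained at 2 m in a tunnel with 0.5 m half-width would search near 1 m in a tunnel half as wide and near 4 m in one twice as wide, matching the direction of the reported shifts in search height.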

  19. Contextual effects on motion perception and smooth pursuit eye movements.

    PubMed

    Spering, Miriam; Gegenfurtner, Karl R

    2008-08-15

    Smooth pursuit eye movements are continuous, slow rotations of the eyes that allow us to follow the motion of a visual object of interest. These movements are closely related to sensory inputs from the visual motion processing system. To track a moving object in the natural environment, its motion first has to be segregated from the motion signals provided by surrounding stimuli. Here, we review experiments on the effect of the visual context on motion processing with a focus on the relationship between motion perception and smooth pursuit eye movements. While perception and pursuit are closely linked, we show that they can behave quite distinctly when required by the visual context.

  20. Visual processing of rotary motion.

    PubMed

    Werkhoven, P; Koenderink, J J

    1991-01-01

    Local descriptions of velocity fields (e.g., rotation, divergence, and deformation) contain a wealth of information for form perception and ego motion. In spite of this, human psychophysical performance in estimating these entities has not yet been thoroughly examined. In this paper, we report on the visual discrimination of rotary motion. A sequence of image frames is used to elicit an apparent rotation of an annulus, composed of dots in the frontoparallel plane, around a fixation spot at the center of the annulus. Differential angular velocity thresholds are measured as a function of the angular velocity, the diameter of the annulus, the number of dots, the display time per frame, and the number of frames. The results show a U-shaped dependence of angular velocity discrimination on spatial scale, with minimal Weber fractions of 7%. Experiments with a scatter in the distance of the individual dots to the center of rotation demonstrate that angular velocity cannot be assessed directly; perceived angular velocity depends strongly on the distance of the dots relative to the center of rotation. We suggest that the estimation of rotary motion is mediated by local estimations of linear velocity.
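
    The paper's interpretation (rotary motion judged via local linear velocities) can be illustrated with two toy read-out rules; the function and variable names are hypothetical and the model is a sketch, not the authors' analysis. A veridical observer averages each dot's local angular velocity v/r; an observer who reads out image speed and refers it to the nominal annulus radius is biased whenever the dots are scattered in radius:

```python
def perceived_rotation(omega, radii, annulus_radius):
    """Two read-outs of rotation rate from dots at the given radii (toy model)."""
    speeds = [omega * r for r in radii]   # linear speed of each dot, v = omega * r
    # veridical read-out: average the local angular velocities v_i / r_i
    angular_readout = sum(v / r for v, r in zip(speeds, radii)) / len(radii)
    # linear-velocity read-out: mean image speed referred to the nominal radius
    linear_readout = (sum(speeds) / len(speeds)) / annulus_radius
    return angular_readout, linear_readout
```

    With dots scattered inward of a unit annulus (mean radius 0.8), the linear read-out underestimates the true angular velocity by 20%, qualitatively matching the reported dependence of perceived angular velocity on the dots' distance from the center of rotation.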

  1. Aging and visual 3-D shape recognition from motion.

    PubMed

    Norman, J Farley; Adkins, Olivia C; Dowell, Catherine J; Hoyng, Stevie C; Shain, Lindsey M; Pedersen, Lauren E; Kinnard, Jonathan D; Higginbotham, Alexia J; Gilliam, Ashley N

    2017-11-01

    Two experiments were conducted to evaluate the ability of younger and older adults to recognize 3-D object shape from patterns of optical motion. In Experiment 1, participants were required to identify dotted surfaces that rotated in depth (i.e., surface structure portrayed using the kinetic depth effect). The task difficulty was manipulated by limiting the surface point lifetimes within the stimulus apparent motion sequences. In Experiment 2, the participants identified solid, naturally shaped objects (replicas of bell peppers, Capsicum annuum) that were defined by occlusion boundary contours, patterns of specular highlights, or combined optical patterns containing both boundary contours and specular highlights. Significant and adverse effects of increased age were found in both experiments. Despite the fact that previous research has found that increases in age do not reduce solid shape discrimination, our current results indicated that the same conclusion does not hold for shape identification. We demonstrated that aging results in a reduction in the ability to visually recognize 3-D shape independent of how the 3-D structure is defined (motions of isolated points, deformations of smooth optical fields containing specular highlights, etc.).

  2. A survey of visual preprocessing and shape representation techniques

    NASA Technical Reports Server (NTRS)

    Olshausen, Bruno A.

    1988-01-01

    Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material of this survey is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention).

  3. Use of nontraditional flight displays for the reduction of central visual overload in the cockpit

    NASA Technical Reports Server (NTRS)

    Weinstein, Lisa F.; Wickens, Christopher D.

    1992-01-01

    The use of nontraditional flight displays to reduce visual overload in the cockpit was investigated in a dual-task paradigm. Three flight displays (central, peripheral, and ecological) were used between subjects for the primary tasks, and the type of secondary task (object identification or motion judgment) and the location of the task in the visual field (central or peripheral) were manipulated within groups. The two visual-spatial tasks were time-shared to study the possibility of a compatibility mapping between task type and task location. The ecological display was found to allow for the most efficient time-sharing.

  4. Optic flow-based collision-free strategies: From insects to robots.

    PubMed

    Serres, Julien R; Ruffier, Franck

    2017-09-01

    Flying insects are able to fly smartly in an unpredictable environment. It has been found that flying insects have neurons inside their tiny brains that are sensitive to visual motion, also called optic flow. Consequently, flying insects rely mainly on visual motion during flight maneuvers such as takeoff or landing, terrain following, tunnel crossing, lateral and frontal obstacle avoidance, and adjusting flight speed in a cluttered environment. Optic flow can be defined as the vector field of the apparent motion of objects, surfaces, and edges in a visual scene generated by the relative motion between an observer (an eye or a camera) and the scene. Translational optic flow is particularly interesting for short-range navigation because it depends on the ratio between (i) the relative linear speed of the visual scene with respect to the observer and (ii) the distance of the observer from obstacles in the surrounding environment, without any direct measurement of either speed or distance. In flying insects, the roll stabilization reflex and yaw saccades attenuate any rotation at the eye level in roll and yaw, respectively (i.e., they cancel rotational optic flow), ensuring pure translational optic flow between two successive saccades. Our survey focuses on the feedback loops using translational optic flow that insects employ for collision-free navigation. Optic flow is likely, over the next decade, to be one of the most important visual cues that can explain flying insects' behaviors for short-range navigation maneuvers in complex tunnels. Conversely, the biorobotic approach can help to develop innovative flight control systems for flying robots, with the aim of mimicking flying insects' abilities and better understanding their flight. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
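
    The quantities in this definition can be sketched directly. The code below is a minimal illustration, not any controller from the surveyed work; the function names and gain are hypothetical. During pure translation at speed v, a point at distance d and azimuth θ sweeps across the eye at ω = (v/d)·sin θ, and balancing the flow from the two walls of a corridor yields the classic centering behavior:

```python
import math

def translational_flow(v, distance, azimuth_deg):
    # apparent angular velocity of a scene point during pure translation:
    # omega = (v / d) * sin(azimuth) -- a ratio of speed to distance,
    # with neither quantity measured on its own
    return (v / distance) * math.sin(math.radians(azimuth_deg))

def center_in_corridor(y0, width, v=1.0, gain=0.01, steps=2000):
    """Balance lateral flow from the two side walls (90 deg azimuth on each side)."""
    y = y0                                   # distance from the left wall
    for _ in range(steps):
        left = translational_flow(v, y, 90.0)
        right = translational_flow(v, width - y, 90.0)
        y += gain * (left - right)           # drift away from the faster-moving wall
    return y
```

    Started off-center, the rule drifts the agent to the corridor midline without ever measuring speed or distance individually, which is the appeal of translational optic flow for short-range navigation.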

  5. Psychophysical and neuroimaging responses to moving stimuli in a patient with the Riddoch phenomenon due to bilateral visual cortex lesions.

    PubMed

    Arcaro, Michael J; Thaler, Lore; Quinlan, Derek J; Monaco, Simona; Khan, Sarah; Valyear, Kenneth F; Goebel, Rainer; Dutton, Gordon N; Goodale, Melvyn A; Kastner, Sabine; Culham, Jody C

    2018-05-09

    Patients with injury to early visual cortex or its inputs can display the Riddoch phenomenon: preserved awareness for moving but not stationary stimuli. We provide a detailed case report of a patient with the Riddoch phenomenon, MC. MC has extensive bilateral lesions to occipitotemporal cortex that include most early visual cortex and complete blindness in visual field perimetry testing with static targets. Nevertheless, she shows a remarkably robust preserved ability to perceive motion, enabling her to navigate through cluttered environments and perform actions like catching moving balls. Comparisons of MC's structural magnetic resonance imaging (MRI) data to a probabilistic atlas based on controls reveals that MC's lesions encompass the posterior, lateral, and ventral early visual cortex bilaterally (V1, V2, V3A/B, LO1/2, TO1/2, hV4 and VO1 in both hemispheres) as well as more extensive damage to right parietal (inferior parietal lobule) and left ventral occipitotemporal cortex (VO1, PHC1/2). She shows some sparing of anterior occipital cortex, which may account for her ability to see moving targets beyond ~15 degrees eccentricity during perimetry. Most strikingly, functional and structural MRI revealed robust and reliable spared functionality of the middle temporal motion complex (MT+) bilaterally. Moreover, consistent with her preserved ability to discriminate motion direction in psychophysical testing, MC also shows direction-selective adaptation in MT+. A variety of tests did not enable us to discern whether input to MT+ was driven by her spared anterior occipital cortex or subcortical inputs. Nevertheless, MC shows rich motion perception despite profoundly impaired static and form vision, combined with clear preservation of activation in MT+, thus supporting the role of MT+ in the Riddoch phenomenon. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Invariance of visual operations at the level of receptive fields

    PubMed Central

    Lindeberg, Tony

    2013-01-01

    The brain is able to maintain a stable perception although the visual stimuli vary substantially on the retina due to geometric transformations and lighting variations in the environment. This paper presents a theory for achieving basic invariance properties already at the level of receptive fields. Specifically, the presented framework comprises (i) local scaling transformations caused by objects of different size and at different distances to the observer, (ii) locally linearized image deformations caused by variations in the viewing direction in relation to the object, (iii) locally linearized relative motions between the object and the observer and (iv) local multiplicative intensity transformations caused by illumination variations. The receptive field model can be derived by necessity from symmetry properties of the environment and leads to predictions about receptive field profiles in good agreement with receptive field profiles measured by cell recordings in mammalian vision. Indeed, the receptive field profiles in the retina, LGN and V1 are close to the ideal profiles motivated by these idealized requirements. By complementing receptive field measurements with selection mechanisms over the parameters in the receptive field families, it is shown how true invariance of receptive field responses can be obtained under scaling transformations, affine transformations and Galilean transformations. Thereby, the framework provides a mathematically well-founded and biologically plausible model for how basic invariance properties can be achieved already at the level of receptive fields and support invariant recognition of objects and events under variations in viewpoint, retinal size, object motion and illumination. 
The theory can explain the different shapes of receptive field profiles found in biological vision, which are tuned to different sizes and orientations in the image domain as well as to different image velocities in space-time, from a requirement that the visual system should be invariant to the natural types of image transformations that occur in its environment. PMID:23894283

  7. Spatial and Global Sensory Suppression Mapping Encompassing the Central 10° Field in Anisometropic Amblyopia.

    PubMed

    Li, Jingjing; Li, Jinrong; Chen, Zidong; Liu, Jing; Yuan, Junpeng; Cai, Xiaoxiao; Deng, Daming; Yu, Minbin

    2017-01-01

    We investigated the efficacy of a novel dichoptic mapping paradigm in evaluating visual function of anisometropic amblyopes. Using standard clinical measures of visual function (visual acuity, stereo acuity, Bagolini lenses, and neutral density filters) and a novel quantitative mapping technique, 26 patients with anisometropic amblyopia (mean age = 19.15 ± 4.42 years) were assessed. Two additional psychophysical interocular suppression measurements were tested with dichoptic global motion coherence and binocular phase combination tasks. Luminance reduction was achieved by placing neutral density filters in front of the normal eye. Our study revealed that suppression changes across the central 10° visual field with mean luminance modulation in amblyopes as well as normal controls. Using simulation and an elimination of interocular suppression, we identified a novel method to effectively reflect the distribution of suppression in anisometropic amblyopia. Additionally, the new quantitative mapping technique was in good agreement with conventional clinical measures, such as interocular acuity difference (P < 0.001) and stereo acuity (P = 0.005). There was good consistency between the results of interocular suppression with the dichoptic mapping paradigm and the results of the other two psychophysical methods (suppression mapping versus binocular phase combination, P < 0.001; suppression mapping versus global motion coherence, P = 0.005). The dichoptic suppression mapping technique is an effective method to represent impaired visual function in patients with anisometropic amblyopia. It offers potential for "micro-" antisuppression mapping tests and therapies for amblyopia.

  8. Inferring the direction of implied motion depends on visual awareness

    PubMed Central

    Faivre, Nathan; Koch, Christof

    2014-01-01

    Visual awareness of an event, object, or scene is, by essence, an integrated experience, whereby different visual features composing an object (e.g., orientation, color, shape) appear as an unified percept and are processed as a whole. Here, we tested in human observers whether perceptual integration of static motion cues depends on awareness by measuring the capacity to infer the direction of motion implied by a static visible or invisible image under continuous flash suppression. Using measures of directional adaptation, we found that visible but not invisible implied motion adaptors biased the perception of real motion probes. In a control experiment, we found that invisible adaptors implying motion primed the perception of subsequent probes when they were identical (i.e., repetition priming), but not when they only shared the same direction (i.e., direction priming). Furthermore, using a model of visual processing, we argue that repetition priming effects are likely to arise as early as in the primary visual cortex. We conclude that although invisible images implying motion undergo some form of nonconscious processing, visual awareness is necessary to make inferences about motion direction. PMID:24706951

  10. Priming with real motion biases visual cortical response to bistable apparent motion

    PubMed Central

    Zhang, Qing-fang; Wen, Yunqing; Zhang, Deng; She, Liang; Wu, Jian-young; Dan, Yang; Poo, Mu-ming

    2012-01-01

    Apparent motion quartet is an ambiguous stimulus that elicits bistable perception, with the perceived motion alternating between two orthogonal paths. In human psychophysical experiments, the probability of perceiving motion in each path is greatly enhanced by a brief exposure to real motion along that path. To examine the neural mechanism underlying this priming effect, we used voltage-sensitive dye (VSD) imaging to measure the spatiotemporal activity in the primary visual cortex (V1) of awake mice. We found that a brief real motion stimulus transiently biased the cortical response to subsequent apparent motion toward the spatiotemporal pattern representing the real motion. Furthermore, intracellular recording from V1 neurons in anesthetized mice showed a similar increase in subthreshold depolarization in the neurons representing the path of real motion. Such short-term plasticity in early visual circuits may contribute to the priming effect in bistable visual perception. PMID:23188797

  11. Flight Simulation.

    DTIC Science & Technology

    1986-09-01

    TECHNICAL EVALUATION REPORT OF THE SYMPOSIUM ON "FLIGHT SIMULATION". A. M. Cook, NASA Ames Research Center. 1. INTRODUCTION. This report evaluates the 67th... John C. Ousterberry, NASA Ames Research Center, Moffett Field, California 94035, U.S.A. SUMMARY: Early AGARD papers on manned flight simulation... and development simulators. VISUAL AND MOTION CUEING IN HELICOPTER SIMULATION. Richard S. Bray, NASA Ames Research Center, Moffett Field, California

  12. Visual/motion cue mismatch in a coordinated roll maneuver

    NASA Technical Reports Server (NTRS)

    Shirachi, D. K.; Shirley, R. S.

    1981-01-01

    The effects of bandwidth differences between visual and motion cueing systems on pilot performance for a coordinated roll task were investigated. Acceptable visual and motion cue configurations and the effects of reduced motion cue scaling on pilot performance were studied to determine the scale reduction threshold at which pilot performance differed significantly from full-scale performance. It is concluded that: (1) the presence or absence of high frequency error information in the visual and/or motion display systems significantly affects pilot performance; and (2) attenuating the motion scaling, while holding other display dynamic characteristics constant, also affects pilot performance.

  13. A Unifying Motif for Spatial and Directional Surround Suppression.

    PubMed

    Liu, Liu D; Miller, Kenneth D; Pack, Christopher C

    2018-01-24

    In the visual system, the response to a stimulus in a neuron's receptive field can be modulated by stimulus context, and the strength of these contextual influences varies with stimulus intensity. Recent work has shown how a theoretical model, the stabilized supralinear network (SSN), can account for such modulatory influences, using a small set of computational mechanisms. Although the predictions of the SSN have been confirmed in primary visual cortex (V1), its computational principles apply with equal validity to any cortical structure. We have therefore tested the generality of the SSN by examining modulatory influences in the middle temporal area (MT) of macaque visual cortex, using electrophysiological recordings and pharmacological manipulations. We developed a novel stimulus that can be adjusted parametrically to be larger or smaller in the space of all possible motion directions. We found, as predicted by the SSN, that MT neurons integrate across motion directions for low-contrast stimuli, but that they exhibit suppression by the same stimuli when they are high in contrast. These results are analogous to those found in the visual cortex when stimulus size is varied in the space domain. We further tested the mechanisms of inhibition using pharmacological manipulations of inhibitory efficacy. As predicted by the SSN, local manipulation of inhibitory strength altered firing rates, but did not change the strength of surround suppression. These results are consistent with the idea that the SSN can account for modulatory influences along different stimulus dimensions and in different cortical areas. SIGNIFICANCE STATEMENT Visual neurons are selective for specific stimulus features in a region of visual space known as the receptive field, but can be modulated by stimuli outside of the receptive field. The SSN model has been proposed to account for these and other modulatory influences, and tested in V1.
As this model is not specific to any particular stimulus feature or brain region, we wondered whether similar modulatory influences might be observed for other stimulus dimensions and other regions. We tested for specific patterns of modulatory influences in the domain of motion direction, using electrophysiological recordings from MT. Our data confirm the predictions of the SSN in MT, suggesting that the SSN computations might be a generic feature of sensory cortex. Copyright © 2018 the authors 0270-6474/18/380989-11$15.00/0.

  14. A novel role for visual perspective cues in the neural computation of depth

    PubMed Central

    Kim, HyungGoo R.; Angelaki, Dora E.; DeAngelis, Gregory C.

    2014-01-01

    As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extra-retinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We demonstrate that incorporating these “dynamic perspective” cues allows the visual system to generate selectivity for depth sign from motion parallax in macaque area MT, a computation that was previously thought to require extra-retinal signals regarding eye velocity. Our findings suggest novel neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations. PMID:25436667

  15. Aging effect in pattern, motion and cognitive visual evoked potentials.

    PubMed

    Kuba, Miroslav; Kremláček, Jan; Langrová, Jana; Kubová, Zuzana; Szanyi, Jana; Vít, František

    2012-06-01

    An electrophysiological study on the effect of aging on the visual pathway and various levels of visual information processing (primary cortex, associative visual motion processing cortex and cognitive cortical areas) was performed. We examined visual evoked potentials (VEPs) to pattern-reversal, motion-onset (translation and radial motion) and visual stimuli with a cognitive task (cognitive VEPs - P300 wave) at a luminance of 17 cd/m². The most significant age-related change in a group of 150 healthy volunteers (15-85 years of age) was the increase in the P300 wave latency (2 ms per year of age). Delays of the motion-onset VEPs (0.47 ms/year in translation and 0.46 ms/year in radial motion) and the pattern-reversal VEPs (0.26 ms/year) and the reductions of their amplitudes with increasing subject age (primarily in P300) were also found to be significant. The amplitude of the motion-onset VEPs to radial motion remained the most constant parameter with increasing age. Age-related changes were stronger in males. Our results indicate that cognitive VEPs, despite the larger variability of their parameters, could be a useful criterion for an objective evaluation of the aging processes within the CNS. Possible differences in aging between the motion-processing system and the form-processing system within the visual pathway might be indicated by the more pronounced delay in the motion-onset VEPs and by their preserved size for radial motion (a biologically significant variant of motion) compared to the changes in pattern-reversal VEPs. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Computational cameras for moving iris recognition

    NASA Astrophysics Data System (ADS)

    McCloskey, Scott; Venkatesha, Sharath

    2015-05-01

    Iris-based biometric identification is increasingly used for facility access and other security applications. Like all methods that exploit visual information, however, iris systems are limited by the quality of captured images. Optical defocus due to a small depth of field (DOF) is one such challenge, as is the acquisition of sharply-focused iris images from subjects in motion. This manuscript describes the application of computational motion-deblurring cameras to the problem of moving iris capture, from the underlying theory to system considerations and performance data.

  17. Micro-calibration of space and motion by photoreceptors synchronized in parallel with cortical oscillations: A unified theory of visual perception.

    PubMed

    Jerath, Ravinder; Cearley, Shannon M; Barnes, Vernon A; Jensen, Mike

    2018-01-01

    A fundamental function of the visual system is detecting motion, yet visual perception is poorly understood. Current research has determined that the retina and ganglion cells elicit responses for motion detection; however, the underlying mechanism for this is incompletely understood. Previously we proposed that retinogeniculo-cortical oscillations and photoreceptors work in parallel to process vision. Here we propose that motion could also be processed within the retina, and not in the brain as current theory suggests. In this paper, we discuss: 1) internal neural space formation; 2) primary, secondary, and tertiary roles of vision; 3) gamma as the secondary role; and 4) synchronization and coherence. Movement within the external field is instantly detected by primary processing within the space formed by the retina, providing a unified view of the world from an internal point of view. Our new theory begins to answer questions about: 1) perception of space, erect images, and motion, 2) purpose of lateral inhibition, 3) speed of visual perception, and 4) how peripheral color vision occurs without a large population of cones located peripherally in the retina. We explain that strong oscillatory activity influences brain activity and is necessary for: 1) visual processing, and 2) formation of the internal visuospatial area necessary for visual consciousness, which could allow rods to receive precise visual and visuospatial information, while retinal waves could link the lateral geniculate body with the cortex to form a neural space formed by membrane potential-based oscillations and photoreceptors. We propose that vision is tripartite, with three components that allow a person to make sense of the world, terming them "primary, secondary, and tertiary roles" of vision.
Finally, we propose that gamma waves of higher strength and volume allow communication among the retina, thalamus, and various areas of the cortex; synchronization brings cortical faculties to the retina, while the thalamus is the link that couples the retina to the rest of the brain through gamma-oscillation activity. This novel theory lays the groundwork for further research by providing a theoretical understanding that expands the functions of the retina, photoreceptors, and retinal plexus to include the parallel processing needed to form the internal visual space that we perceive as the external world. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Visual motion detection and habitat preference in Anolis lizards.

    PubMed

    Steinberg, David S; Leal, Manuel

    2016-11-01

    The perception of visual stimuli has been a major area of inquiry in sensory ecology, and much of this work has focused on coloration. However, for visually oriented organisms, the process of visual motion detection is often equally crucial to survival and reproduction. Despite the importance of motion detection to many organisms' daily activities, the degree of interspecific variation in the perception of visual motion remains largely unexplored. Furthermore, the factors driving this potential variation (e.g., ecology or evolutionary history) along with the effects of such variation on behavior are unknown. We used a behavioral assay under laboratory conditions to quantify the visual motion detection systems of three species of Puerto Rican Anolis lizard that prefer distinct structural habitat types. We then compared our results to data previously collected for anoles from Cuba, Puerto Rico, and Central America. Our findings indicate that general visual motion detection parameters are similar across species, regardless of habitat preference or evolutionary history. We argue that these conserved sensory properties may drive the evolution of visual communication behavior in this clade.

  19. Illusory visual motion stimulus elicits postural sway in migraine patients

    PubMed Central

    Imaizumi, Shu; Honma, Motoyasu; Hibino, Haruo; Koyama, Shinichi

    2015-01-01

    Although the perception of visual motion modulates postural control, it is unknown whether illusory visual motion elicits postural sway. The present study examined the effect of illusory motion on postural sway in patients with migraine, who tend to be sensitive to it. We measured postural sway for both migraine patients and controls while they viewed static visual stimuli with and without illusory motion. The participants’ postural sway was measured when they closed their eyes either immediately after (Experiment 1), or 30 s after (Experiment 2), viewing the stimuli. The patients swayed more than the controls when they closed their eyes immediately after viewing the illusory motion (Experiment 1), and they swayed less than the controls when they closed their eyes 30 s after viewing it (Experiment 2). These results suggest that static visual stimuli with illusory motion can induce postural sway that may last for at least 30 s in patients with migraine. PMID:25972832

  20. Learning receptive fields using predictive feedback.

    PubMed

    Jehee, Janneke F M; Rothkopf, Constantin; Beck, Jeffrey M; Ballard, Dana H

    2006-01-01

    Previously, it was suggested that feedback connections from higher- to lower-level areas carry predictions of lower-level neural activities, whereas feedforward connections carry the residual error between the predictions and the actual lower-level activities [Rao, R.P.N., Ballard, D.H., 1999. Nature Neuroscience 2, 79-87.]. A computational model implementing the hypothesis learned simple cell receptive fields when exposed to natural images. Here, we use predictive feedback to explain tuning properties in medial superior temporal area (MST). We implement the hypothesis using a new, biologically plausible, algorithm based on matching pursuit, which retains all the features of the previous implementation, including its ability to efficiently encode input. When presented with natural images, the model developed receptive field properties as found in primary visual cortex. In addition, when exposed to visual motion input resulting from movements through space, the model learned receptive field properties resembling those in MST. These results corroborate the idea that predictive feedback is a general principle used by the visual system to efficiently encode natural input.
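
    The matching pursuit step referenced above is a greedy sparse-coding procedure: the residual is repeatedly projected onto a dictionary and the best-matching atom is peeled off. A minimal sketch of generic matching pursuit, assuming a toy random dictionary rather than the model's learned basis:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy sparse coding: repeatedly project the residual onto the
    best-matching dictionary atom (columns assumed unit-norm) and
    subtract that atom's contribution out of the residual."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        projections = dictionary.T @ residual
        k = np.argmax(np.abs(projections))
        coeffs[k] += projections[k]
        residual -= projections[k] * dictionary[:, k]
    return coeffs, residual

# Toy overcomplete dictionary: 16 random unit-norm atoms in 8 dimensions
rng = np.random.default_rng(1)
D = rng.normal(size=(8, 16))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 3] - 1.0 * D[:, 7]       # sparse combination of two atoms
coeffs, res = matching_pursuit(x, D, n_iter=20)
print(np.linalg.norm(res) < np.linalg.norm(x))  # True: residual shrinks as atoms are selected
```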

  1. Neural Mechanisms of Cortical Motion Computation Based on a Neuromorphic Sensory System

    PubMed Central

    Abdul-Kreem, Luma Issa; Neumann, Heiko

    2015-01-01

    The visual cortex analyzes motion information along hierarchically arranged visual areas that interact through bidirectional interconnections. This work suggests a bio-inspired visual model focusing on the interactions of the cortical areas, in which new mechanisms of feedforward and feedback processing are introduced. The model uses a neuromorphic vision sensor (silicon retina) that simulates the spike-generation functionality of the biological retina. Our model takes into account two main visual areas, namely V1 and MT, with different feature selectivities. The initial motion is estimated in model area V1 using spatiotemporal filters to locally detect the direction of motion. Here, we adapt the filtering scheme originally suggested by Adelson and Bergen to make it consistent with the spike representation of the DVS. The responses of area V1 are weighted and pooled by area MT cells, which are selective to different velocities, i.e., direction and speed. Such feature selectivity is here derived from compositions of activities in the spatio-temporal domain and integration over larger space-time regions (receptive fields). In order to account for the bidirectional coupling of cortical areas, we match properties of the feature selectivity in both areas for feedback processing. For such linkage we integrate the responses over different speeds along a particular preferred direction. Normalization of activities is carried out over the spatial as well as the feature domains to balance the activities of individual neurons in model areas V1 and MT. Our model was tested using different stimuli that moved in different directions. The results reveal that the error margin between the estimated motion and the synthetic ground truth is decreased in area MT compared with the initial estimate in area V1. In addition, the modulated V1 cell activations show an enhancement of the initial motion estimation that is steered by feedback signals from MT cells. PMID:26554589
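
    The Adelson-Bergen filtering scheme adapted by this model computes motion energy from quadrature pairs of space-time-oriented filters. A minimal sketch of the classic (non-spiking) version, with illustrative filter parameters and a drifting-grating test stimulus:

```python
import numpy as np

def gabor(x, t, fx, ft, sigma_x=0.5, sigma_t=0.5, phase=0.0):
    # Space-time oriented Gabor: responds best to a grating whose
    # phase is constant along fx*x + ft*t = const, i.e. velocity -ft/fx
    env = np.exp(-x**2 / (2 * sigma_x**2) - t**2 / (2 * sigma_t**2))
    return env * np.cos(2 * np.pi * (fx * x + ft * t) + phase)

x = np.linspace(-2, 2, 64)[:, None]   # space (arbitrary units)
t = np.linspace(-2, 2, 64)[None, :]   # time

# Quadrature pair tuned to rightward motion, and its leftward mirror
right_even = gabor(x, t, fx=1.0, ft=-1.0)
right_odd  = gabor(x, t, fx=1.0, ft=-1.0, phase=np.pi / 2)
left_even  = gabor(x, t, fx=1.0, ft=+1.0)
left_odd   = gabor(x, t, fx=1.0, ft=+1.0, phase=np.pi / 2)

# Test stimulus: a grating drifting rightward at the filters' preferred speed
stim = np.cos(2 * np.pi * (1.0 * x - 1.0 * t))

def energy(even, odd, s):
    # Motion energy = sum of squared quadrature filter responses
    return np.sum(even * s)**2 + np.sum(odd * s)**2

opponent = energy(right_even, right_odd, stim) - energy(left_even, left_odd, stim)
print(opponent > 0)  # True — opponent energy favors the rightward direction
```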

  2. Adaptation to oscillopsia: a psychophysical and questionnaire investigation.

    PubMed

    Grunfeld, E A; Morland, A B; Bronstein, A M; Gresty, M A

    2000-02-01

    In this study we explore the reasons why patients with bilateral vestibular failure report disparate degrees of oscillopsia. Twelve bilateral labyrinthine-defective (LD) subjects and twelve normal healthy controls were tested using a self- versus visual-motion psychophysical experiment. The LD subjects also completed a questionnaire designed to quantify the severity of handicap caused by oscillopsia. Additional standardized questionnaires were completed to identify the role of personality, personal beliefs and affective factors in adaptation to oscillopsia. During the psychophysical experiment subjects sat on a motorized Barany chair whilst viewing a large-field projected video image displayed on a screen in front of them. The chair and video image oscillated sinusoidally at 1 Hz in counter-phase at variable amplitudes which were controlled by the subject but constrained so that the net relative motion of the chair and video image always resulted in a sinusoid with a peak velocity of 50 degrees/s. The subject's task was to find the ratio of chair versus video image motion that subjectively produced the 'most comfortable visual image'. Eye movements were recorded during the experiment in order that the net retinal image slip at the point of maximum visual comfort could be measured. The main findings in the LD subjects were that, as a group, they selected lower chair motion amplitude settings to obtain visual comfort than did the normal control subjects. Responses to the questionnaires highlighted considerable variation in reported handicap due to oscillopsia. Greater oscillopsia handicap scores were significantly correlated with a greater external locus of control (i.e. the perception of having little control over one's health). Retinal slip speed was negatively correlated with oscillopsia handicap score, so that patients who suffered the greatest retinal slip were those least handicapped by oscillopsia.
The results suggest that adaptation to oscillopsia is partly related to the patient's personal attitude to the recovery process and partly associated with the development of tolerance to the movement of images on the retina during self-motion. The latter is likely to be related to previously described changes in visual motion sensitivity in these patients.
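
    In this counter-phase paradigm the chair and image velocities add, so fixing the peak relative velocity at 50 degrees/s fixes the sum of the two sinusoids' amplitudes while the subject adjusts their ratio. A small sketch of that constraint (the helper function and the 0.4 ratio are hypothetical, not the authors' code):

```python
import numpy as np

def amplitudes(ratio, f=1.0, peak_rel_vel=50.0):
    """Chair and image oscillation amplitudes (deg) for a given
    chair:total ratio, holding the peak relative velocity fixed.
    Counter-phase sinusoids at frequency f add their velocities,
    and a sinusoid's peak velocity is 2*pi*f times its amplitude.
    Hypothetical helper, not the authors' code."""
    total_amp = peak_rel_vel / (2 * np.pi * f)
    return ratio * total_amp, (1 - ratio) * total_amp

chair, image = amplitudes(0.4)
# Peak relative velocity is preserved regardless of the chosen ratio
print(round(2 * np.pi * 1.0 * (chair + image), 1))  # 50.0
```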

  3. On the Visual Input Driving Human Smooth-Pursuit Eye Movements

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Beutter, Brent R.; Lorenceau, Jean

    1996-01-01

    Current computational models of smooth-pursuit eye movements assume that the primary visual input is local retinal-image motion (often referred to as retinal slip). However, we show that humans can pursue object motion with considerable accuracy, even in the presence of conflicting local image motion. This finding indicates that the visual cortical area(s) controlling pursuit must be able to perform a spatio-temporal integration of local image motion into a signal related to object motion. We also provide evidence that the object-motion signal that drives pursuit is related to the signal that supports perception. We conclude that current models of pursuit should be modified to include a visual input that encodes perceived object motion and not merely retinal image motion. Finally, our findings suggest that the measurement of eye movements can be used to monitor visual perception, with particular value in applied settings as this non-intrusive approach would not require interrupting ongoing work or training.

  4. Seismic response of Pacific Park Plaza. I. Data and preliminary analysis

    USGS Publications Warehouse

    Celebi, M.; Safak, E.

    1992-01-01

    The objective of this paper is to present analyses of a set of acceleration response records obtained during the October 17, 1989 Loma Prieta earthquake (Ms = 7.1) from the 30-story, three-winged, ductile moment-resistant reinforced-concrete-framed Pacific Park Plaza Building, located in Emeryville, east of San Francisco, Calif. The building was constructed in 1983, and instrumented in 1985 with 21 channels of synchronized uniaxial accelerometers deployed throughout the structure, and three channels of accelerometers located in the free field on the north side of the building, all connected to a central recording system. In addition, a triaxial strong-motion accelerograph is deployed in the free field on the south side of the building. The predominant response modes of the building and the associated frequencies at approximately 0.4 Hz and 1.0 Hz are identified visually from the unprocessed records, and also from Fourier amplitude spectra of the processed records, which, as expected, reveal significant torsional motion. In addition, the response spectra of the free-field and basement motions are very similar. These spectra show that significant structural resonances at higher modes influence both the ground-level and the free-field motions, thus raising the question as to the definition of free-field motion, at least at this site. This part of the paper includes the preliminary analyses of the data acquired from this building. Part 2 of the paper provides detailed analyses of the data using system identification techniques.
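
    The modal-frequency identification step (reading the 0.4 Hz and 1.0 Hz peaks off Fourier amplitude spectra) can be sketched on a synthetic two-mode record; the sampling rate, amplitudes, and damping below are illustrative, and only the two modal frequencies come from the paper:

```python
import numpy as np

fs = 50.0                        # sampling rate (Hz), illustrative
t = np.arange(0, 120, 1 / fs)    # 120 s synthetic record

# Synthetic roof acceleration: two decaying modes at 0.4 Hz and 1.0 Hz
acc = (np.exp(-0.02 * t) * np.sin(2 * np.pi * 0.4 * t)
       + 0.6 * np.exp(-0.05 * t) * np.sin(2 * np.pi * 1.0 * t))

# Fourier amplitude spectrum
spec = np.abs(np.fft.rfft(acc))
freqs = np.fft.rfftfreq(len(acc), d=1 / fs)

# Pick the two largest well-separated spectral peaks as modal frequencies
order = np.argsort(spec)[::-1]
peaks = []
for i in order:
    if all(abs(freqs[i] - p) > 0.2 for p in peaks):
        peaks.append(freqs[i])
    if len(peaks) == 2:
        break
print(sorted(round(float(p), 1) for p in peaks))  # ≈ [0.4, 1.0]
```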

  5. On the Adaptation of Pelvic Motion by Applying 3-dimensional Guidance Forces Using TPAD.

    PubMed

    Kang, Jiyeon; Vashista, Vineet; Agrawal, Sunil K

    2017-09-01

    Pelvic movement is important to human locomotion as the center of mass is located near the center of the pelvis. Lateral pelvic motion plays a crucial role in shifting the center of mass over the stance leg, while swinging the other leg and keeping the body balanced. In addition, vertical pelvic movement helps to reduce metabolic energy expenditure by exchanging potential and kinetic energy during the gait cycle. However, patient groups with cerebral palsy or stroke have excessive pelvic motion that leads to high energy expenditure. In addition, they have higher chances of falls as the center of mass could deviate outside the base of support. In this paper, a novel control method is suggested using the tethered pelvic assist device (TPAD) to teach subjects to walk with a specified target pelvic trajectory while walking on a treadmill. In this method, a force field is applied to the pelvis to guide it to move on a target trajectory, and correctional forces are applied if the pelvic motion deviates excessively from the target trajectory. Three different experiments with healthy subjects were conducted to teach them to walk on a new target pelvic trajectory with the presented control method. For all three experiments, the baseline trajectory of the pelvis was experimentally determined for each participating subject. To design a target pelvic trajectory different from the baseline, Experiment I scaled up the lateral component of the baseline pelvic trajectory, while Experiment II scaled down the lateral component of the baseline trajectory. For both Experiments I and II, the controller generated a 2-D force field in the transverse plane to provide the guidance force. In this paper, seven subjects were recruited for each experiment who walked on the treadmill with the suggested control methods and visual feedback of their pelvic trajectory.
The results show that the subjects were able to learn the target pelvic trajectory in each experiment and also retained the training effects after the completion of the experiment. In Experiment III, both lateral and vertical components of the pelvic trajectory were scaled down from the baseline trajectory. The force field was extended to three dimensions in order to correct the vertical pelvic movement as well. Three subgroups (force feedback alone, visual feedback alone, and both force and visual feedback) were recruited to isolate the effects of force feedback and visual feedback and to distinguish the results from Experiments I and II. The results show that a training method that combines visual and force feedback is superior to training methods with visual or force feedback alone. We believe that the present control strategy holds potential in training and correcting abnormal pelvic movements in different patient populations.
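
    The correctional force field described above can be sketched as a dead-zone spring law: no force inside a tolerance tunnel around the target trajectory, and a restoring force outside it. The gains and tolerance below are hypothetical, not the published TPAD parameters:

```python
import numpy as np

def guidance_force(pelvis_pos, target_pos, tolerance=0.01, stiffness=300.0):
    """Dead-zone spring force field (works in 2-D or 3-D).

    No force inside the tolerance tunnel (m) around the target
    trajectory; outside it, a spring-like force (N) pulls the pelvis
    back toward the target. Gains are illustrative, not TPAD's values.
    """
    error = np.asarray(target_pos, float) - np.asarray(pelvis_pos, float)
    dist = np.linalg.norm(error)
    if dist <= tolerance:
        return np.zeros_like(error)        # inside the tunnel: transparent
    direction = error / dist
    return stiffness * (dist - tolerance) * direction

# Pelvis 3 cm off target laterally: corrective force points back toward it
f = guidance_force([0.0, -0.03], [0.0, 0.0])
print(f)  # [0, 6] N
```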

  6. Visuotactile motion congruence enhances gamma-band activity in visual and somatosensory cortices.

    PubMed

    Krebber, Martin; Harwood, James; Spitzer, Bernhard; Keil, Julian; Senkowski, Daniel

    2015-08-15

    When touching and viewing a moving surface, our visual and somatosensory systems receive congruent spatiotemporal input. Behavioral studies have shown that motion congruence facilitates interplay between visual and tactile stimuli, but the neural mechanisms underlying this interplay are not well understood. Neural oscillations play a role in motion processing and multisensory integration. They may also be crucial for visuotactile motion processing. In this electroencephalography study, we applied linear beamforming to examine the impact of visuotactile motion congruence on beta and gamma band activity (GBA) in visual and somatosensory cortices. Visual and tactile inputs comprised gratings that moved either in the same or different directions. Participants performed a target detection task that was unrelated to motion congruence. While there were no effects in the beta band (13-21 Hz), the power of GBA (50-80 Hz) in visual and somatosensory cortices was larger for congruent compared with incongruent motion stimuli. This suggests enhanced bottom-up multisensory processing when visual and tactile gratings moved in the same direction. Supporting its behavioral relevance, GBA was correlated with shorter reaction times in the target detection task. We conclude that motion congruence plays an important role in the integrative processing of visuotactile stimuli in sensory cortices, as reflected by oscillatory responses in the gamma band. Copyright © 2015 Elsevier Inc. All rights reserved.
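
    The beta (13-21 Hz) versus gamma (50-80 Hz) band-power comparison can be illustrated with a generic FFT-based band-power estimate on a synthetic signal; this is a sketch of the general idea, not the beamforming pipeline used in the study:

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean power of x in the [lo, hi] Hz band via FFT masking
    (a generic sketch, not the study's source-space analysis)."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return np.sum(np.abs(spec[mask])**2) / len(x)

fs = 500.0
t = np.arange(0, 2, 1 / fs)
# Synthetic trace with a 60 Hz "gamma" component and a 10 Hz component
sig = np.sin(2 * np.pi * 60 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)

print(band_power(sig, fs, 50, 80) > band_power(sig, fs, 13, 21))  # True
```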

  7. Choice-reaction time to visual motion with varied levels of simultaneous rotary motion

    NASA Technical Reports Server (NTRS)

    Clark, B.; Stewart, J. D.

    1974-01-01

    Twelve airline pilots were studied to determine the effects of whole-body rotation on choice-reaction time to the horizontal motion of a line on a cathode-ray tube. On each trial, one of five levels of visual acceleration and five corresponding proportions of rotary acceleration were presented simultaneously. Reaction time to the visual motion decreased with increasing levels of visual motion and increased with increasing proportions of rotary acceleration. The results conflict with general theories of facilitation during double stimulation but are consistent with a neural-clock model of sensory interaction in choice-reaction time.

  8. The effect of contextual cues on the encoding of motor memories.

    PubMed

    Howard, Ian S; Wolpert, Daniel M; Franklin, David W

    2013-05-01

    Several studies have shown that sensory contextual cues can reduce the interference observed during learning of opposing force fields. However, because each study examined a small set of cues, often in a unique paradigm, the relative efficacy of different sensory contextual cues is unclear. In the present study we quantify how seven contextual cues, some investigated previously and some novel, affect the formation and recall of motor memories. Subjects made movements in a velocity-dependent curl field, with direction varying randomly from trial to trial but always associated with a unique contextual cue. Linking field direction to the cursor or background color, or to peripheral visual motion cues, did not reduce interference. In contrast, the orientation of a visual object attached to the hand cursor significantly reduced interference, albeit by a small amount. When the fields were associated with movement in different locations in the workspace, a substantial reduction in interference was observed. We tested whether this reduction in interference was due to the different locations of the visual feedback (targets and cursor) or the movements (proprioceptive). When the fields were associated only with changes in visual display location (movements always made centrally) or only with changes in the movement location (visual feedback always displayed centrally), a substantial reduction in interference was observed. These results show that although some visual cues can lead to the formation and recall of distinct representations in motor memory, changes in spatial visual and proprioceptive states of the movement are far more effective than changes in simple visual contextual cues.
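
    The velocity-dependent curl field used in this paradigm maps hand velocity to a perpendicular force, F = Bv with an antisymmetric gain matrix, so the field's direction (CW/CCW) is the context to be learned. A minimal sketch (the gain magnitude is illustrative):

```python
import numpy as np

def curl_field_force(velocity, b=15.0, clockwise=True):
    """Velocity-dependent curl field: F = B @ v with antisymmetric B.

    The force is always perpendicular to hand velocity, so the field
    does no work along the movement direction; its rotational sense is
    the property associated with a contextual cue. b = 15 N*s/m is an
    illustrative magnitude, not necessarily the study's value.
    """
    sign = 1.0 if clockwise else -1.0
    B = sign * np.array([[0.0, b],
                         [-b, 0.0]])
    return B @ np.asarray(velocity, float)

v = np.array([0.0, 0.3])   # hand moving forward at 0.3 m/s
f = curl_field_force(v)
print(f)                   # [4.5, 0] N, perpendicular to v
print(float(np.dot(f, v))) # 0.0 — the curl field does no work along v
```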

  9. Motion perception: behavior and neural substrate.

    PubMed

    Mather, George

    2011-05-01

    Visual motion perception is vital for survival. Single-unit recordings in primate primary visual cortex (V1) have revealed the existence of specialized motion sensing neurons; perceptual effects such as the motion after-effect demonstrate their importance for motion perception. Human psychophysical data on motion detection can be explained by a computational model of cortical motion sensors. Both psychophysical and physiological data reveal at least two classes of motion sensor capable of sensing motion in luminance-defined and texture-defined patterns, respectively. Psychophysical experiments also reveal that motion can be seen independently of motion sensor output, based on attentive tracking of visual features. Sensor outputs are inherently ambiguous, due to the problem of univariance in neural responses. In order to compute stimulus direction and speed, the visual system must compare the responses of many different sensors sensitive to different directions and speeds. Physiological data show that this computation occurs in the visual middle temporal (MT) area. Recent psychophysical studies indicate that information about spatial form may also play a role in motion computations. Adaptation studies show that the human visual system is selectively sensitive to large-scale optic flow patterns, and physiological studies indicate that cells in the middle superior temporal (MST) area derive this sensitivity from the combined responses of many MT cells. Extraretinal signals used to control eye movements are an important source of signals to cancel out the retinal motion responses generated by eye movements, though visual information also plays a role. A number of issues remain to be resolved at all levels of the motion-processing hierarchy. 
WIREs Cogn Sci 2011, 2, 305-314. DOI: 10.1002/wcs.110. For further resources related to this article, please visit the WIREs website. Additional Supporting Information may be found at http://www.lifesci.sussex.ac.uk/home/George_Mather/Motion/index.html. Copyright © 2010 John Wiley & Sons, Ltd.

  10. Delayed visual maturation in infants: a disorder of figure-ground separation?

    PubMed

    Harris, C M; Kriss, A; Shawkat, F; Taylor, D; Russell-Eggitt, I

    1996-01-01

    Delayed visual maturation (DVM) is characterised by visual unresponsiveness in early infancy, which subsequently improves spontaneously to normal levels. We studied the optokinetic response and recorded pattern reversal VEPs in six infants with DVM (aged 2-4 months) when they were at the stage of complete visual unresponsiveness. Although no saccades or visual tracking with the eyes or head could be elicited to visual objects, a normal full-field rapid-buildup OKN response occurred when viewing binocularly or during monocular stimulation in the temporo-nasal direction of the viewing eye. Almost no monocular OKN could be elicited in the naso-temporal direction, which was significantly poorer than in normal age-matched infants. No OKN quick phases were missed, and there were no other signs of "ocular motor apraxia." VEPs were normal in amplitude and latency for age. It appears, therefore, that infants with DVM are delayed in orienting to local regions of the visual field, but can respond to full-field motion. The presence of normal OKN quick-phases and slow-phases suggests normal brain stem function, and the presence of normal pattern VEPs suggests a normal retino-geniculo-striate pathway. These oculomotor and electrophysiological findings suggest delayed development of extra-striate cortical structures, possibly involving an abnormality either in figure-ground segregation or in attentional pathways.

  11. A neural basis for the spatial suppression of visual motion perception

    PubMed Central

    Liu, Liu D; Haefner, Ralf M; Pack, Christopher C

    2016-01-01

    In theory, sensory perception should be more accurate when more neurons contribute to the representation of a stimulus. However, psychophysical experiments that use larger stimuli to activate larger pools of neurons sometimes report impoverished perceptual performance. To determine the neural mechanisms underlying these paradoxical findings, we trained monkeys to discriminate the direction of motion of visual stimuli that varied in size across trials, while simultaneously recording from populations of motion-sensitive neurons in cortical area MT. We used the resulting data to constrain a computational model that explained the behavioral data as an interaction of three main mechanisms: noise correlations, which prevented stimulus information from growing with stimulus size; neural surround suppression, which decreased sensitivity for large stimuli; and a read-out strategy that emphasized neurons with receptive fields near the stimulus center. These results suggest that paradoxical percepts reflect tradeoffs between sensitivity and noise in neuronal populations. DOI: http://dx.doi.org/10.7554/eLife.16167.001 PMID:27228283
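
    The role of noise correlations here follows a standard pooling result: with uniform pairwise correlation c among identically tuned neurons, pooled Fisher information saturates at 1/c times a single neuron's information instead of growing linearly with pool size. A quick numerical illustration, with arbitrary SNR and correlation values:

```python
def pooled_info(n, single_info=1.0, c=0.1):
    # Fisher information of n identically tuned neurons with uniform
    # pairwise noise correlation c; as n grows this approaches
    # single_info / c rather than growing without bound.
    return n * single_info / (1 + c * (n - 1))

for n in [1, 10, 100, 1000]:
    print(n, round(pooled_info(n), 2))
# information grows sublinearly and approaches 1/0.1 = 10
```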

  12. Complementary mechanisms create direction selectivity in the fly

    PubMed Central

    Haag, Juergen; Arenz, Alexander; Serbe, Etienne; Gabbiani, Fabrizio; Borst, Alexander

    2016-01-01

    How neurons become sensitive to the direction of visual motion represents a classic example of neural computation. Two alternative mechanisms have been discussed in the literature so far: preferred-direction enhancement, by which responses are amplified when stimuli move along the preferred direction of the cell, and null-direction suppression, whereby one signal inhibits the response to the subsequent one when stimuli move along the opposite (null) direction. Along the processing chain in the Drosophila optic lobe, directional responses first appear in T4 and T5 cells. By visually stimulating sequences of individual columns in the optic lobe with a telescope while recording from single T4 neurons, we find both mechanisms at work, implemented in different sub-regions of the receptive field. This finding explains the high degree of directional selectivity already present in the fly's primary motion-sensing neurons and marks an important step in our understanding of elementary motion detection. DOI: http://dx.doi.org/10.7554/eLife.17421.001 PMID:27502554

  13. Adaptive particle filter for robust visual tracking

    NASA Astrophysics Data System (ADS)

    Dai, Jianghua; Yu, Shengsheng; Sun, Weiping; Chen, Xiaoping; Xiang, Jinhai

    2009-10-01

    Object tracking plays a key role in the field of computer vision. The particle filter has been widely used for visual tracking under nonlinear and/or non-Gaussian conditions. In a standard particle filter, the state transition model for predicting the next location of the tracked object assumes that the object's motion is constant, which poorly approximates varying motion dynamics. In addition, the state estimate calculated as the mean of all the weighted particles is coarse or inaccurate due to various noise disturbances. Both factors can degrade tracking performance considerably. In this work, an adaptive particle filter (APF) with a velocity-updating based transition model (VTM) and an adaptive state estimate approach (ASEA) is proposed to improve object tracking. In the APF, the motion velocity embedded in the state transition model is updated continuously by a recursive equation, and the state estimate is obtained adaptively according to the state posterior distribution. The experimental results show that the APF increases tracking accuracy and efficiency in complex environments.
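As a rough illustration of the two ideas in this abstract, the sketch below pairs a velocity-updating transition step with a state estimate computed from only the highest-weighted particles. All parameter values, the observation model, and the top-fraction heuristic are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def transition(particles, velocity, noise_std=2.0):
    """Velocity-augmented transition (VTM-style sketch): predict each
    particle's next 2D position as position + velocity + Gaussian noise."""
    return particles + velocity + rng.normal(0.0, noise_std, particles.shape)

def update_velocity(velocity, new_estimate, old_estimate, alpha=0.5):
    """Recursive velocity update: blend the previous velocity with the
    displacement implied by consecutive state estimates."""
    return alpha * velocity + (1 - alpha) * (new_estimate - old_estimate)

def estimate_state(particles, weights, top_frac=0.3):
    """Adaptive state estimate (ASEA-style sketch): average only the
    highest-weighted particles rather than the full weighted mean,
    reducing the influence of low-weight noise particles."""
    k = max(1, int(top_frac * len(weights)))
    idx = np.argsort(weights)[-k:]
    return np.average(particles[idx], axis=0, weights=weights[idx])

# One tracking step on synthetic data: object previously near (10, 10).
particles = rng.normal([10.0, 10.0], 1.0, size=(500, 2))
velocity = np.array([1.0, 0.5])
particles = transition(particles, velocity)

# Likelihood weights from a stand-in observation at (11, 10.5).
dists = np.linalg.norm(particles - np.array([11.0, 10.5]), axis=1)
weights = np.exp(-0.5 * (dists / 2.0) ** 2)
weights /= weights.sum()

est = estimate_state(particles, weights)
velocity = update_velocity(velocity, est, np.array([10.0, 10.0]))
```

In a full tracker this step would repeat per frame, with resampling and a real appearance-based likelihood in place of the synthetic observation used here.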

  14. The Sensory Components of High-Capacity Iconic Memory and Visual Working Memory

    PubMed Central

    Bradley, Claire; Pearson, Joel

    2012-01-01

    Early visual memory can be split into two primary components: a high-capacity, short-lived iconic memory followed by a limited-capacity visual working memory that can last many seconds. Whereas a large number of studies have investigated visual working memory for low-level sensory features, much research on iconic memory has used more “high-level” alphanumeric stimuli such as letters or numbers. These two forms of memory are typically examined separately, despite an intrinsic overlap in their characteristics. Here, we used a purely sensory paradigm to examine visual short-term memory for 10 homogeneous items of three different visual features (color, orientation and motion) across a range of durations from 0 to 6 s. We found that the amount of information stored in iconic memory is smaller for motion than for color or orientation. Performance declined exponentially with longer storage durations and reached chance levels after ∼2 s. Further experiments showed that performance for the 10 items at 1 s was contingent on unperturbed attentional resources. In addition, for orientation stimuli, performance was contingent on the location of stimuli in the visual field, especially for short cue delays. Overall, our results suggest a smooth transition between an automatic, high-capacity, feature-specific sensory-iconic memory, and an effortful “lower-capacity” visual working memory. PMID:23055993

  15. Head-bobbing behavior in foraging Whooping Cranes

    USGS Publications Warehouse

    Cronin, T.; Kinloch, M.; Olsen, Glenn H.

    2006-01-01

    Many species of cursorial birds 'head-bob', that is, they alternately thrust the head forward, then hold it still as they walk. Such a motion stabilizes visual fields intermittently and could be critical for visual search; yet the time available for stabilization vs. forward thrust varies with walking speed. Whooping Cranes (Grus americana) are extremely tall birds that visually search the ground for seeds, berries, and small prey. We examined head movements in unrestrained Whooping Cranes using digital video subsequently analyzed with a computer graphical overlay. When foraging, the cranes walk at speeds that allow the head to be held still for at least 50% of the time. This behavior is thought to balance the two needs for covering as much ground as possible and for maximizing the time for visual fixation of the ground in the search for prey. Our results strongly suggest that in cranes, and probably many other bird species, visual fixation of the ground is required for object detection and identification. The thrust phase of the head-bobbing cycle is probably also important for vision. As the head moves forward, the movement generates visual flow and motion parallax, providing visual cues for distances and the relative locations of objects. The eyes commonly change their point of fixation when the head is moving too, suggesting that they remain visually competent throughout the entire cycle of thrust and stabilization.

  16. Stroboscopic Vision as a Treatment for Space Motion Sickness

    NASA Technical Reports Server (NTRS)

    Reschke, Millard F.; Somers, Jeffrey T.; Ford, George; Krnavek, Jody M.

    2007-01-01

    Results obtained from space flight indicate that most space crews will experience some symptoms of motion sickness, causing significant impact on the operational objectives that must be accomplished to assure mission success. Based on the initial work of Melvill Jones, we have evaluated stroboscopic vision as a method of preventing motion sickness. Given that the data presented by Professor Melvill Jones were primarily post hoc results from a study not designed to investigate motion sickness, it is unclear how the motion sickness results were actually determined. Building on these original results, we undertook a three-part study designed to investigate the effect of stroboscopic vision (either with a strobe light or LCD shutter glasses) on motion sickness using: (1) visual field reversal, (2) reading while riding in a car (with or without external vision present), and (3) making large pitch head movements during parabolic flight.

  17. A unified dynamic neural field model of goal directed eye movements

    NASA Astrophysics Data System (ADS)

    Quinton, J. C.; Goffart, L.

    2018-01-01

    Primates rely heavily on their visual system, which exploits signals of graded precision based on the eccentricity of the target in the visual field. Interactions with the environment involve actively selecting and focusing on visual targets or regions of interest, instead of contemplating an omnidirectional visual flow. Eye movements specifically allow foveating targets and tracking their motion. Once a target is brought within the central visual field, eye movements are usually classified into catch-up saccades (jumping from one orientation or fixation to another) and smooth pursuit (continuously tracking a target with low velocity). Building on existing dynamic neural field equations, we introduce a novel model that incorporates internal projections to better estimate the current target location (associated with a peak of activity). This estimate is then used to trigger an eye movement, leading to qualitatively different behaviours depending on the dynamics of the whole oculomotor system: (1) fixational eye movements due to small variations in the weights of projections when the target is stationary, (2) interceptive and catch-up saccades when peaks build and relax on the neural field, (3) smooth pursuit when the peak stabilises near the centre of the field, the system reaching a fixed-point attractor. Learning is nevertheless required for tracking a rapidly moving target, and the proposed model thus replicates recent results in the monkey, in which repeated exercise permits the maintenance of the target within the central visual field at its current (here-and-now) location, despite the delays involved in transmitting retinal signals to the oculomotor neurons.
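Dynamic neural field models of this kind build on Amari-type field equations, in which a localized peak of activity forms at a stimulated location and can be read out as the target estimate. The minimal 1-D sketch below is a generic illustration of that mechanism; the kernel shape, gains, and all parameter values are assumptions, not the authors' model.

```python
import numpy as np

# 1-D Amari-style field: tau * du/dt = -u + W @ f(u) + stimulus.
N = 101
x = np.linspace(-1.0, 1.0, N)
dx = x[1] - x[0]

def kernel(d, a_exc=1.0, s_exc=0.1, a_inh=0.5, s_inh=0.4):
    """Mexican-hat lateral interaction: local excitation, broad inhibition."""
    return (a_exc * np.exp(-d**2 / (2 * s_exc**2))
            - a_inh * np.exp(-d**2 / (2 * s_inh**2)))

W = kernel(x[:, None] - x[None, :]) * dx            # interaction matrix
f = lambda u: 1.0 / (1.0 + np.exp(-10.0 * u))       # sigmoid firing rate
stim = 1.5 * np.exp(-(x - 0.3)**2 / (2 * 0.05**2))  # target input at x = 0.3

u = np.full(N, -0.5)          # resting level below firing threshold
dt, tau = 0.05, 1.0
for _ in range(400):          # forward-Euler integration to steady state
    u += dt / tau * (-u + W @ f(u) + stim)

peak = x[np.argmax(u)]        # decoded target estimate: peak location
```

A peak of activity self-stabilizes at the stimulated location; in a model like the one described, this peak position would drive the saccadic or pursuit command.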

  18. Implied motion because of instability in Hokusai Manga activates the human motion-sensitive extrastriate visual cortex: an fMRI study of the impact of visual art.

    PubMed

    Osaka, Naoyuki; Matsuyoshi, Daisuke; Ikeda, Takashi; Osaka, Mariko

    2010-03-10

    The recent development of cognitive neuroscience has invited inference about the neurosensory events underlying the experience of visual arts involving implied motion. We report a functional magnetic resonance imaging study demonstrating activation of the human extrastriate motion-sensitive cortex by static images showing implied motion due to instability. We used static line-drawing cartoons of humans by Hokusai Katsushika (called 'Hokusai Manga'), an outstanding Japanese cartoonist and famous Ukiyo-e artist. We found that 'Hokusai Manga' implying motion through human bodies engaged in challenging tonic postures significantly activated the motion-sensitive visual cortex, including MT+, in the human extrastriate cortex, whereas an illustration that does not imply motion, for either humans or objects, did not activate these areas under the same tasks. We conclude that the motion-sensitive extrastriate cortex is a critical region for the perception of implied motion due to instability.

  19. Usage of stereoscopic visualization in the learning contents of rotational motion.

    PubMed

    Matsuura, Shu

    2013-01-01

    Rotational motion plays an essential role in physics, even at an introductory level. In addition, the stereoscopic display of three-dimensional graphics is advantageous for the presentation of rotational motion, particularly for depth recognition. However, immersive visualization of rotational motion has been known to cause dizziness and even nausea in some viewers. Therefore, the purpose of this study is to examine the onset of nausea and visual fatigue when learning rotational motion through the use of a stereoscopic display. The findings show that an instruction method with intermittent exposure to the stereoscopic display and a simplification of its visual components reduced the onset of nausea and visual fatigue for the viewers, while maintaining the overall effect of instantaneous spatial recognition.

  20. Trend-Centric Motion Visualization: Designing and Applying a New Strategy for Analyzing Scientific Motion Collections.

    PubMed

    Schroeder, David; Korsakov, Fedor; Knipe, Carissa Mai-Ping; Thorson, Lauren; Ellingson, Arin M; Nuckley, David; Carlis, John; Keefe, Daniel F

    2014-12-01

    In biomechanics studies, researchers collect, via experiments or simulations, datasets with hundreds or thousands of trials, each describing the same type of motion (e.g., a neck flexion-extension exercise) but under different conditions (e.g., different patients, different disease states, pre- and post-treatment). Analyzing similarities and differences across all of the trials in these collections is a major challenge. Visualizing a single trial at a time does not work, and the typical alternative of juxtaposing multiple trials in a single visual display leads to complex, difficult-to-interpret visualizations. We address this problem via a new strategy that organizes the analysis around motion trends rather than trials. This new strategy matches the cognitive approach that scientists would like to take when analyzing motion collections. We introduce several technical innovations making trend-centric motion visualization possible. First, an algorithm detects a motion collection's trends via time-dependent clustering. Second, a 2D graphical technique visualizes how trials leave and join trends. Third, a 3D graphical technique, using a median 3D motion plus a visual variance indicator, visualizes the biomechanics of the set of trials within each trend. These innovations are combined to create an interactive exploratory visualization tool, which we designed through an iterative process in collaboration with both domain scientists and a traditionally-trained graphic designer. We report on insights generated during this design process and demonstrate the tool's effectiveness via a validation study with synthetic data and feedback from expert musculoskeletal biomechanics researchers who used the tool to analyze the effects of disc degeneration on human spinal kinematics.
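The "time-dependent clustering" idea can be sketched as running a clustering step at each timestep of the motion collection and reading off which trials belong to which trend over time. The toy below uses hypothetical 1-D trial signals and a simple per-timestep 1-D k-means; it is a sketch of the concept, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy motion collection: 20 trials of a 1-D signal over 50 timesteps.
# Half the trials follow trend A, half follow trend B, plus noise.
t = np.linspace(0.0, 1.0, 50)
trials = np.vstack(
    [np.sin(2 * np.pi * t) + rng.normal(0, 0.05, 50) for _ in range(10)] +
    [0.5 * np.sin(2 * np.pi * t) + rng.normal(0, 0.05, 50) for _ in range(10)])

def trends_at(values, k=2, iters=20):
    """1-D k-means at a single timestep: returns a cluster label per trial.
    Linking these labels across timesteps shows trials leaving/joining trends."""
    centers = np.quantile(values, np.linspace(0.25, 0.75, k))
    labels = np.zeros(len(values), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels

# Label matrix (trials x timesteps); the per-label median signal at each
# timestep would give the trend curves that the tool visualizes.
labels_over_time = np.array([trends_at(trials[:, i])
                             for i in range(len(t))]).T
```

At timesteps where the two trends separate, the labels split the trials into two stable groups; where the trends cross, labels become ambiguous, which is exactly the kind of join/leave event a trend-centric display would highlight.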

  1. Visual Motion Processing Subserves Faster Visuomotor Reaction in Badminton Players.

    PubMed

    Hülsdünker, Thorben; Strüder, Heiko K; Mierau, Andreas

    2017-06-01

    Athletes participating in ball or racquet sports have to respond to visual stimuli under critical time pressure. Previous studies used visual contrast stimuli to determine visual perception and visuomotor reaction in athletes and nonathletes; however, ball and racquet sports are characterized by motion rather than contrast visual cues. Because visual contrast and motion signals are processed in different cortical regions, this study aimed to determine differences in perception and processing of visual motion between athletes and nonathletes. Twenty-five skilled badminton players and 28 age-matched nonathletic controls participated in this study. Using a 64-channel EEG system, we investigated visual motion perception/processing in the motion-sensitive middle temporal (MT) cortical area in response to radial motion of different velocities. In a simple visuomotor reaction task, visuomotor transformation in Brodmann area 6 (BA6) and BA4 as well as muscular activation (EMG onset) and visuomotor reaction time (VMRT) were investigated. Stimulus- and response-locked potentials were determined to differentiate between perceptual and motor-related processes. As compared with nonathletes, athletes showed earlier EMG onset times (217 vs 178 ms, P < 0.001), accompanied by a faster VMRT (274 vs 243 ms, P < 0.001). Furthermore, athletes showed an earlier stimulus-locked peak activation of MT (200 vs 182 ms, P = 0.002) and BA6 (161 vs 137 ms, P = 0.009). Response-locked peak activation in MT was later in athletes (-7 vs 26 ms, P < 0.001), whereas no group differences were observed in BA6 and BA4. Multiple regression analyses with stimulus- and response-locked cortical potentials predicted EMG onset (r = 0.83) and VMRT (r = 0.77). The athletes' superior visuomotor performance in response to visual motion is primarily related to visual perception and, to a minor degree, to motor-related processes.

  2. Integration of visual and non-visual self-motion cues during voluntary head movements in the human brain.

    PubMed

    Schindler, Andreas; Bartels, Andreas

    2018-05-15

    Our phenomenological experience of a stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head motion. We circumvented these limitations here and let participants move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire neural signals related to head motion after the observer's head was stabilized by inflatable aircushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head motion, which was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head rotation. By comparing congruent with incongruent conditions we found evidence consistent with the multi-modal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human region MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv), and a region in precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent versus incongruent visual and non-visual cues during physical head movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation. Copyright © 2018 Elsevier Inc. All rights reserved.

  3. Age and visual impairment decrease driving performance as measured on a closed-road circuit.

    PubMed

    Wood, Joanne M

    2002-01-01

    In this study the effects of visual impairment and age on driving were investigated and related to visual function. Participants were 139 licensed drivers (young, middle-aged, and older participants with normal vision, and older participants with ocular disease). Driving performance was assessed during the daytime on a closed-road driving circuit. Visual performance was assessed using a vision testing battery. Age and visual impairment had a significant detrimental effect on recognition tasks (detection and recognition of signs and hazards), time to complete driving tasks (overall course time, reversing, and maneuvering), maneuvering ability, divided attention, and an overall driving performance index. All vision measures were significantly affected by group membership. A combination of motion sensitivity, useful field of view (UFOV), Pelli-Robson letter contrast sensitivity, and dynamic acuity could predict 50% of the variance in overall driving scores. These results indicate that older drivers with either normal vision or visual impairment had poorer driving performance compared with younger or middle-aged drivers with normal vision. The inclusion of tests such as motion sensitivity and the UFOV significantly improve the predictive power of vision tests for driving performance. Although such measures may not be practical for widespread screening, their application in selected cases should be considered.

  4. Visual field information in Nap-of-the-Earth flight by teleoperated Helmet-Mounted displays

    NASA Technical Reports Server (NTRS)

    Grunwald, Arthur J.; Kohn, S.; Merhav, S. J.

    1991-01-01

    The human ability to derive Control-Oriented Visual Field Information from teleoperated Helmet-Mounted Displays in Nap-of-the-Earth flight is investigated. The visual field with these types of displays originates from a Forward Looking Infrared Radiation Camera, gimbal-mounted at the front of the aircraft and slaved to the pilot's line-of-sight, to obtain wide-angle visual coverage. Although these displays have proved effective in Apache and Cobra helicopter night operations, they demand very high pilot proficiency and impose a heavy workload. Experimental work presented in the paper has shown that part of the difficulties encountered in vehicular control by means of these displays can be attributed to the narrow viewing aperture and to phase lags in the head/camera slaving system. Both of these shortcomings will impair visuo-vestibular coordination when voluntary head rotation is present. This might result in errors in estimating the Control-Oriented Visual Field Information vital to vehicular control, such as the vehicle yaw rate or the anticipated flight path, or might even lead to visuo-vestibular conflicts (motion sickness). Since, under these conditions, the pilot will tend to minimize head rotation, the full wide-angle coverage of the Helmet-Mounted Display, provided by the line-of-sight slaving system, is not always fully utilized.

  5. Susceptibility of cat and squirrel monkey to motion sickness induced by visual stimulation: Correlation with susceptibility to vestibular stimulation

    NASA Technical Reports Server (NTRS)

    Daunton, N. G.; Fox, R. A.; Crampton, G. H.

    1984-01-01

    Experiments documenting the susceptibility of both cats and squirrel monkeys to motion sickness induced by visual stimulation are described. In addition, it is shown that in both species the individual subjects most highly susceptible to sickness induced by passive motion are also those most likely to become motion sick from visual (optokinetic) stimulation alone.

  6. The direct, not V1-mediated, functional influence between the thalamus and middle temporal complex in the human brain is modulated by the speed of visual motion.

    PubMed

    Gaglianese, A; Costagli, M; Ueno, K; Ricciardi, E; Bernardi, G; Pietrini, P; Cheng, K

    2015-01-22

    The main visual pathway that conveys motion information to the middle temporal complex (hMT+) originates from the primary visual cortex (V1), which, in turn, receives spatial and temporal features of the perceived stimuli from the lateral geniculate nucleus (LGN). In addition, visual motion information reaches hMT+ directly from the thalamus, bypassing V1, through a direct pathway. We aimed to elucidate whether this direct route between the LGN and hMT+ represents a 'fast lane' reserved for high-speed motion, as proposed previously, or whether it conveys motion information irrespective of speed. We evaluated functional magnetic resonance imaging (fMRI) responses elicited by moving visual stimuli and applied connectivity analyses to investigate the effect of motion speed on the causal influence between the LGN and hMT+, independent of V1, using Conditional Granger Causality (CGC) in the presence of slow and fast visual stimuli. Our results showed that at least part of the visual motion information from the LGN reaches hMT+, bypassing V1, in response to both slow and fast motion speeds of the perceived stimuli. We also investigated whether motion speeds have different effects on the connections between the LGN and functional subdivisions within hMT+: direct connections between the LGN and MT-proper carry mainly slow motion information, while connections between the LGN and MST carry mainly fast motion information. The existence of a parallel pathway that connects the LGN directly to hMT+ in response to both slow and fast speeds may explain why MT and MST can still respond in the presence of V1 lesions. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
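Conditional Granger Causality asks whether one signal's past improves prediction of another beyond what a conditioning signal already explains. The sketch below computes a CGC-style index for a direct LGN-to-hMT+ influence conditioned on V1, using synthetic signals with assumed dynamics and order-1 regressions; it illustrates the statistic, not the paper's fMRI analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy signals: LGN drives hMT+ both directly and through V1
# (hypothetical coefficients, for illustration only).
T = 4000
lgn = rng.normal(size=T)
v1 = np.zeros(T)
hmt = np.zeros(T)
for i in range(1, T):
    v1[i] = 0.5 * v1[i-1] + 0.8 * lgn[i-1] + 0.1 * rng.normal()
    hmt[i] = 0.5 * hmt[i-1] + 0.4 * v1[i-1] + 0.6 * lgn[i-1] + 0.1 * rng.normal()

def resid_var(y, regressors):
    """Residual variance of y regressed on the lag-1 values of regressors."""
    X = np.column_stack([r[:-1] for r in regressors])
    beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return np.var(y[1:] - X @ beta)

# CGC of LGN -> hMT+ conditioned on V1: does LGN's past improve the
# prediction of hMT+ beyond hMT+'s and V1's own pasts?
var_without = resid_var(hmt, [hmt, v1])
var_with = resid_var(hmt, [hmt, v1, lgn])
cgc = np.log(var_without / var_with)   # > 0 implies a direct LGN influence
```

A positive index indicates that the LGN signal carries information about hMT+ that V1 cannot account for, which is the logic used to argue for a direct thalamic pathway.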

  7. Perceived shifts of flashed stimuli by visible and invisible object motion.

    PubMed

    Watanabe, Katsumi; Sato, Takashi R; Shimojo, Shinsuke

    2003-01-01

    Perceived positions of flashed stimuli can be altered by motion signals in the visual field, a phenomenon termed 'position capture' (Whitney and Cavanagh, 2000 Nature Neuroscience 3 954-959). We examined whether position capture of flashed stimuli depends on the spatial relationship between moving and flashed stimuli, and whether the phenomenal permanence of a moving object behind an occluding surface (the tunnel effect; Michotte 1950 Acta Psychologica 7 293-322) can produce position capture. Observers saw two objects (circles) moving vertically in opposite directions, one in each visual hemifield. Two horizontal bars were simultaneously flashed at positions horizontally collinear with the fixation point at various timings. When the movement of the object was fully visible, the flashed bar appeared shifted in the motion direction of the circle. But this position-capture effect occurred only when the bar was presented ahead of or on the moving circle. Even when the motion trajectory was covered by an opaque surface and the bar was flashed after complete occlusion of the circle, the position-capture effect was still observed, though the positional asymmetry was less clear. These results show that movements of both visible and 'hidden' objects can modulate the perceived positions of flashed stimuli, and suggest that a high-level representation of 'objects in motion' plays an important role in the position-capture effect.

  8. Optical sensor feedback assistive technology to enable patients to play an active role in the management of their body dynamics during radiotherapy treatment

    NASA Astrophysics Data System (ADS)

    Parkhurst, J. M.; Price, G. J.; Sharrock, P. J.; Stratford, J.; Moore, C. J.

    2013-04-01

    Patient motion during treatment is well understood as a prime factor limiting radiotherapy success, with the risks most pronounced in modern safety-critical therapies promising the greatest benefit. In this paper we describe a real-time visual feedback device designed to help patients actively manage their body position, pose, and motion. In addition to technical device details, we present preliminary trial results showing that its use enables volunteers to successfully manage their respiratory motion. The device enables patients to view their live body surface measurements relative to a prior reference, operating on the concept that co-operative engagement with patients will both improve geometric conformance and remove their perception of isolation, in turn easing stress-related motion. The device is driven by a real-time wide-field optical sensor system developed at The Christie. Feedback is delivered through three intuitive visualization modes of hierarchically increasing display complexity. The device can be used with any suitable display technology; in the presented study we use both personal video glasses and a standard LCD projector. The performance characteristics of the system were measured: the frame rate, throughput, and latency of the feedback device were 22.4 fps, 47.0 Mbps, and 109.8 ms in single-channel mode, and 13.7 fps, 86.4 Mbps, and 119.1 ms in three-channel mode. The pilot study, using ten healthy volunteers over three sessions, shows that the use of visual feedback resulted in both a reduction in the participants' respiratory amplitude and a decrease in their overall body motion variability.

  9. A neural model of visual figure-ground segregation from kinetic occlusion.

    PubMed

    Barnes, Timothy; Mingolla, Ennio

    2013-01-01

    Freezing is an effective defense strategy for some prey, because their predators rely on visual motion to distinguish objects from their surroundings. An object moving over a background progressively covers (deletes) and uncovers (accretes) background texture while simultaneously producing discontinuities in the optic flow field. These events unambiguously specify kinetic occlusion and can produce a crisp edge, depth perception, and figure-ground segmentation between identically textured surfaces--percepts which all disappear without motion. Given two abutting regions of uniform random texture with different motion velocities, one region appears to be situated farther away and behind the other (i.e., the ground) if its texture is accreted or deleted at the boundary between the regions, irrespective of region and boundary velocities. Consequently, a region with moving texture appears farther away than a stationary region if the boundary is stationary, but it appears closer (i.e., the figure) if the boundary is moving coherently with the moving texture. A computational model of visual areas V1 and V2 shows how interactions between orientation- and direction-selective cells first create a motion-defined boundary and then signal kinetic occlusion at that boundary. Activation of model occlusion detectors tuned to a particular velocity results in the model assigning the adjacent surface with a matching velocity to the far depth. A weak speed-depth bias brings faster-moving texture regions forward in depth in the absence of occlusion (shearing motion). These processes together reproduce human psychophysical reports of depth ordering for key cases of kinetic occlusion displays. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Objective Analysis of Performance of Activities of Daily Living in People With Central Field Loss.

    PubMed

    Pardhan, Shahina; Latham, Keziah; Tabrett, Daryl; Timmis, Matthew A

    2015-11-01

    People with central visual field loss (CFL) adopt various strategies to complete activities of daily living (ADL). Using objective movement analysis, we compared how three ADLs were completed by people with CFL versus age-matched, visually healthy individuals. Fourteen participants with CFL (age 81 ± 10 years) and 10 age-matched, visually healthy individuals (age 75 ± 5 years) participated. Three ADLs were assessed: picking up food from a plate, pouring liquid from a bottle, and inserting a key in a lock. Participants with CFL completed each ADL habitually (as they would in their home). Data were compared with those of visually healthy participants who were asked to complete the tasks as they would normally, but under specified experimental conditions. Movement kinematics were compared using three-dimensional motion analysis (Vicon). Visual functions (distance and near acuities, contrast sensitivity, visual fields) were recorded. All CFL participants were able to complete each ADL. However, participants with CFL demonstrated significantly (P < 0.05) longer overall movement times, shorter minimum viewing distances, and, for two of the three ADL tasks, needed more online corrections in the latter part of the movement. Results indicate that, despite the adoption of various habitual strategies, participants with CFL still do not perform common daily living tasks as efficiently as visually healthy subjects. Although indices suggesting feed-forward planning are similar, they made more movement corrections and took more time for the latter portion of the action, indicating a more cautious/uncertain approach. Various kinematic indices correlated significantly with visual function parameters, including visual acuity and midperipheral visual field loss.

  11. Human rather than ape-like orbital morphology allows much greater lateral visual field expansion with eye abduction

    PubMed Central

    Denion, Eric; Hitier, Martin; Levieil, Eric; Mouriaux, Frédéric

    2015-01-01

    While convergent, the human orbit differs from that of non-human apes in that its lateral orbital margin is significantly more rearward. This rearward position does not obstruct the additional visual field gained through eye motion. This additional visual field is therefore considered to be wider in humans than in non-human apes. A mathematical model was designed to quantify this difference. The mathematical model is based on published computed tomography data in the human neuro-ocular plane (NOP) and on additional anatomical data from 100 human skulls and 120 non-human ape skulls (30 gibbons; 30 chimpanzees / bonobos; 30 orangutans; 30 gorillas). It is used to calculate temporal visual field eccentricity values in the NOP first in the primary position of gaze then for any eyeball rotation value in abduction up to 45° and any lateral orbital margin position between 85° and 115° relative to the sagittal plane. By varying the lateral orbital margin position, the human orbit can be made “non-human ape-like”. In the Pan-like orbit, the orbital margin position (98.7°) was closest to the human orbit (107.1°). This modest 8.4° difference resulted in a large 21.1° difference in maximum lateral visual field eccentricity with eyeball abduction (Pan-like: 115°; human: 136.1°). PMID:26190625

  12. Independent Deficits of Visual Word and Motion Processing in Aging and Early Alzheimer's Disease

    PubMed Central

    Velarde, Carla; Perelstein, Elizabeth; Ressmann, Wendy; Duffy, Charles J.

    2013-01-01

    We tested whether visual processing impairments in aging and Alzheimer's disease (AD) reflect uniform posterior cortical decline, or independent disorders of visual processing for reading and navigation. Young and older normal controls were compared to early AD patients using psychophysical measures of visual word and motion processing. We find elevated perceptual thresholds for letters and word discrimination from young normal controls, to older normal controls, to early AD patients. Across subject groups, visual motion processing showed a similar pattern of increasing thresholds, with the greatest impact on radial pattern motion perception. Combined analyses show that letter, word, and motion processing impairments are independent of each other. Aging and AD may be accompanied by independent impairments of visual processing for reading and navigation. This suggests separate underlying disorders and highlights the need for comprehensive evaluations to detect early deficits. PMID:22647256

  13. Effects of visual and motion simulation cueing systems on pilot performance during takeoffs with engine failures

    NASA Technical Reports Server (NTRS)

    Parris, B. L.; Cook, A. M.

    1978-01-01

    Data are presented that show the effects of visual and motion cueing on pilot performance during takeoffs with engine failures. Four groups of USAF pilots flew a simulated KC-135 using four different cueing systems. The most basic of these systems was of the instrument-only type. Visual scene simulation and/or motion simulation was added to produce the other systems. Learning curves, mean performance, and subjective data are examined. The results show that the addition of visual cueing results in significant improvement in pilot performance, and the combined use of visual and motion cueing results in far better performance.

  14. Visual stimuli induced by self-motion and object-motion modify odour-guided flight of male moths (Manduca sexta L.).

    PubMed

    Verspui, Remko; Gray, John R

    2009-10-01

    Animals rely on multimodal sensory integration for proper orientation within their environment. For example, odour-guided behaviours often require appropriate integration of concurrent visual cues. To gain a further understanding of mechanisms underlying sensory integration in odour-guided behaviour, our study examined the effects of visual stimuli induced by self-motion and object-motion on odour-guided flight in male M. sexta. By placing stationary objects (pillars) on either side of a female pheromone plume, moths produced self-induced visual motion during odour-guided flight. These flights showed a reduction in both ground and flight speeds and inter-turn interval when compared with flight tracks without stationary objects. Presentation of an approaching 20 cm disc, to simulate object-motion, resulted in interrupted odour-guided flight and changes in flight direction away from the pheromone source. Modifications of odour-guided flight behaviour in the presence of stationary objects suggest that visual information, in conjunction with olfactory cues, can be used to control the rate of counter-turning. We suggest that the behavioural responses to visual stimuli induced by object-motion indicate the presence of a neural circuit that relays visual information to initiate escape responses. These behavioural responses also suggest the presence of a sensory conflict requiring a trade-off between olfactory and visually driven behaviours. The mechanisms underlying olfactory and visual integration are discussed in the context of these behavioural responses.

  15. Motion Direction Biases and Decoding in Human Visual Cortex

    PubMed Central

    Wang, Helena X.; Merriam, Elisha P.; Freeman, Jeremy

    2014-01-01

    Functional magnetic resonance imaging (fMRI) studies have relied on multivariate analysis methods to decode visual motion direction from measurements of cortical activity. Above-chance decoding has been commonly used to infer the motion-selective response properties of the underlying neural populations. Moreover, patterns of reliable response biases across voxels that underlie decoding have been interpreted to reflect maps of functional architecture. Using fMRI, we identified a direction-selective response bias in human visual cortex that: (1) predicted motion-decoding accuracy; (2) depended on the shape of the stimulus aperture rather than the absolute direction of motion, such that response amplitudes gradually decreased with distance from the stimulus aperture edge corresponding to motion origin; and (3) was present in V1, V2, V3, but not evident in MT+, explaining the higher motion-decoding accuracies reported previously in early visual cortex. These results demonstrate that fMRI-based motion decoding has little or no dependence on the underlying functional organization of motion selectivity. PMID:25209297
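
    The multivariate decoding the abstract refers to can be illustrated with a minimal nearest-centroid sketch (all data and labels hypothetical; the study's actual classifier and preprocessing are not described here): average the training voxel patterns per motion direction, then assign a test pattern to its nearest average.

```python
# Toy nearest-centroid decoder for voxel patterns (hypothetical data).
def centroid(patterns):
    # Element-wise mean of a list of equal-length voxel vectors.
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

def decode(test, centroids):
    # Label the test pattern by its nearest centroid (squared Euclidean).
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(test, centroids[label]))

# Two training patterns per motion direction (3 voxels each).
train = {
    "left":  [[1.0, 0.1, 0.0], [0.9, 0.2, 0.1]],
    "right": [[0.1, 0.0, 1.0], [0.2, 0.1, 0.9]],
}
centroids = {label: centroid(p) for label, p in train.items()}
print(decode([0.8, 0.1, 0.2], centroids))  # prints "left"
```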

  16. The Monocular Duke of Urbino.

    PubMed

    Schwartz, Stephen G; Leffler, Christopher T; Chavis, Pamela S; Khan, Faraaz; Bermudez, Dennis; Flynn, Harry W

    2016-01-01

    Federico da Montefeltro (1422-1482), the Duke of Urbino, was a well-known historical figure during the Italian Renaissance. He is the subject of a famous painting by Piero della Francesca (1416-1492), which displays the Duke from the left and highlights his oddly shaped nose. The Duke is known to have lost his right eye due to an injury sustained during a jousting tournament, which is why the painting portrays him from the left. Some historians teach that the Duke subsequently underwent nasal surgery to remove tissue from the bridge of his nose in order to expand his visual field in an attempt to compensate for the lost eye. In theory, removal of a piece of the nose may have expanded the nasal visual field, especially the "eye motion visual field" that encompasses eye movements. In addition, removing part of the nose may have reduced some of the effects of ocular parallax. Finally, shifting of the visual egocenter may have occurred, although this seems likely unrelated to the proposed nasal surgery. Whether or not the Duke actually underwent the surgery cannot be proven, but it seems unlikely that this would have substantially improved his visual function.

  17. Visual adaptation alters the apparent speed of real-world actions.

    PubMed

    Mather, George; Sharman, Rebecca J; Parsons, Todd

    2017-07-27

    The apparent physical speed of an object in the field of view remains constant despite variations in retinal velocity due to viewing conditions (velocity constancy). For example, people and cars appear to move across the field of view at the same objective speed regardless of distance. In this study a series of experiments investigated the visual processes underpinning judgements of objective speed using an adaptation paradigm and video recordings of natural human locomotion. Viewing a video played in slow-motion for 30 seconds caused participants to perceive subsequently viewed clips played at standard speed as too fast, so playback had to be slowed down in order for it to appear natural; conversely after viewing fast-forward videos for 30 seconds, playback had to be speeded up in order to appear natural. The perceived speed of locomotion shifted towards the speed depicted in the adapting video ('re-normalisation'). Results were qualitatively different from those obtained in previously reported studies of retinal velocity adaptation. Adapting videos that were scrambled to remove recognizable human figures or coherent motion caused significant, though smaller shifts in apparent locomotion speed, indicating that both low-level and high-level visual properties of the adapting stimulus contributed to the changes in apparent speed.

  18. Stronger Neural Modulation by Visual Motion Intensity in Autism Spectrum Disorders

    PubMed Central

    Peiker, Ina; Schneider, Till R.; Milne, Elizabeth; Schöttle, Daniel; Vogeley, Kai; Münchau, Alexander; Schunke, Odette; Siegel, Markus; Engel, Andreas K.; David, Nicole

    2015-01-01

    Theories of autism spectrum disorders (ASD) have focused on altered perceptual integration of sensory features as a possible core deficit. Yet, there is little understanding of the neuronal processing of elementary sensory features in ASD. For typically developed individuals, we previously established a direct link between frequency-specific neural activity and the intensity of a specific sensory feature: Gamma-band activity in the visual cortex increased approximately linearly with the strength of visual motion. Using magnetoencephalography (MEG), we investigated whether, in individuals with ASD, neural activity reflects the coherence, and thus intensity, of visual motion in a similar fashion. Thirteen adult participants with ASD and 14 control participants performed a motion direction discrimination task with increasing levels of motion coherence. A polynomial regression analysis revealed that gamma-band power increased significantly more steeply with motion coherence in ASD than in controls, suggesting excessive visual activation with increasing stimulus intensity originating from motion-responsive visual areas V3, V6 and hMT/V5. Enhanced neural responses with increasing stimulus intensity suggest an enhanced response gain in ASD. Response gain is controlled by excitatory-inhibitory interactions, which also drive high-frequency oscillations in the gamma-band. Thus, our data suggest that a disturbed excitatory-inhibitory balance underlies enhanced neural responses to coherent motion in ASD. PMID:26147342
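
    The regression analysis itself is not reproduced here, but the core comparison, how steeply gamma-band power grows with motion coherence in each group, can be sketched with an ordinary least-squares slope (all power values hypothetical):

```python
# Minimal least-squares slope; illustrative only, not the study's analysis.
def ols_slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

coherence = [0.0, 0.25, 0.5, 0.75, 1.0]
gamma_asd = [1.0, 1.6, 2.1, 2.7, 3.2]   # hypothetical gamma-band power
gamma_ctl = [1.0, 1.2, 1.5, 1.7, 1.9]   # hypothetical gamma-band power

print(f"ASD slope = {ols_slope(coherence, gamma_asd):.2f}, "
      f"control slope = {ols_slope(coherence, gamma_ctl):.2f}")
```

A steeper slope for the ASD group would correspond to the stronger modulation by motion intensity reported in the abstract.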

  19. Motion sickness increases functional connectivity between visual motion and nausea-associated brain regions.

    PubMed

    Toschi, Nicola; Kim, Jieun; Sclocco, Roberta; Duggento, Andrea; Barbieri, Riccardo; Kuo, Braden; Napadow, Vitaly

    2017-01-01

    The brain networks supporting nausea are not yet understood. We previously found that while visual stimulation activated primary (V1) and extrastriate visual cortices (MT+/V5, coding for visual motion), increasing nausea was associated with increasing sustained activation in several brain areas, with significant co-activation for anterior insula (aIns) and mid-cingulate (MCC) cortices. Here, we hypothesized that motion sickness also alters functional connectivity between visual motion and previously identified nausea-processing brain regions. Subjects prone to motion sickness and controls completed a motion sickness provocation task during fMRI/ECG acquisition. We studied changes in connectivity between visual processing areas activated by the stimulus (MT+/V5, V1), right aIns and MCC when comparing rest (BASELINE) to peak nausea state (NAUSEA). Compared to BASELINE, NAUSEA reduced connectivity between right and left V1 and increased connectivity between right MT+/V5 and aIns and between left MT+/V5 and MCC. Additionally, the change in MT+/V5 to insula connectivity was significantly associated with a change in sympathovagal balance, assessed by heart rate variability analysis. No state-related connectivity changes were noted for the control group. Increased connectivity between a visual motion processing region and nausea/salience brain regions may reflect increased transfer of visual/vestibular mismatch information to brain regions supporting nausea perception and autonomic processing. We conclude that vection-induced nausea increases connectivity between nausea-processing regions and those activated by the nauseogenic stimulus. This enhanced low-frequency coupling may support continual, slowly evolving nausea perception and shifts toward sympathetic dominance. Disengaging this coupling may be a target for biobehavioral interventions aimed at reducing motion sickness severity. Copyright © 2016 Elsevier B.V. All rights reserved.
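
    Functional connectivity between two regions is commonly quantified as the Pearson correlation of their time series; a minimal sketch (hypothetical BOLD values; the study's actual pipeline involves far more preprocessing):

```python
# Pearson correlation between two ROI time series (hypothetical data).
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical BOLD time series for MT+/V5 and anterior insula.
mt_v5 = [0.2, 0.5, 0.1, 0.8, 0.4, 0.9, 0.3]
a_ins = [0.1, 0.4, 0.2, 0.7, 0.5, 0.8, 0.2]
print(f"MT+/V5 <-> aIns connectivity r = {pearson(mt_v5, a_ins):.2f}")
```

Comparing such an r value between BASELINE and NAUSEA windows is one simple way to express a state-related connectivity change.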

  20. The relationship of global form and motion detection to reading fluency.

    PubMed

    Englund, Julia A; Palomares, Melanie

    2012-08-15

    Visual motion processing in typical and atypical readers has suggested aspects of reading and motion processing share a common cortical network rooted in dorsal visual areas. Few studies have examined the relationship between reading performance and visual form processing, which is mediated by ventral cortical areas. We investigated whether reading fluency correlates with coherent motion detection thresholds in typically developing children using random dot kinematograms. As a comparison, we also evaluated the correlation between reading fluency and static form detection thresholds. Results show that both dorsal and ventral visual functions correlated with components of reading fluency, but that they have different developmental characteristics. Motion coherence thresholds correlated with reading rate and accuracy, which both improved with chronological age. Interestingly, when controlling for non-verbal abilities and age, reading accuracy significantly correlated with thresholds for coherent form detection but not coherent motion detection in typically developing children. Dorsal visual functions that mediate motion coherence seem to be related to the maturation of broad cognitive functions, including non-verbal abilities and reading fluency. However, ventral visual functions that mediate form coherence seem to be specifically related to accurate reading in typically developing children. Copyright © 2012 Elsevier Ltd. All rights reserved.
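
    In a random dot kinematogram, a "coherence" fraction of dots share one direction while the rest move randomly; the detection threshold is the lowest coherence at which direction is reliably judged. A toy sketch of assigning dot directions for a single frame (`rdk_directions` and its parameters are invented for this illustration):

```python
import random

def rdk_directions(n_dots, coherence, signal_dir=0.0, seed=0):
    # Each dot is coherent with probability `coherence`; otherwise its
    # direction (in degrees) is drawn uniformly at random.
    rng = random.Random(seed)
    dirs = []
    for _ in range(n_dots):
        if rng.random() < coherence:
            dirs.append(signal_dir)           # signal dot
        else:
            dirs.append(rng.uniform(0, 360))  # noise dot
    return dirs

dirs = rdk_directions(100, coherence=0.3)
print(sum(d == 0.0 for d in dirs), "of 100 dots move coherently")
```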

  1. The Continuous Wagon Wheel Illusion and the ‘When’ Pathway of the Right Parietal Lobe: A Repetitive Transcranial Magnetic Stimulation Study

    PubMed Central

    VanRullen, Rufin; Pascual-Leone, Alvaro; Battelli, Lorella

    2008-01-01

    A continuous periodic motion stimulus can sometimes be perceived moving in the wrong direction. These illusory reversals have been taken as evidence that part of the motion perception system samples its inputs as a series of discrete snapshots, although other explanations of the phenomenon have been proposed that rely on the spurious activation of low-level motion detectors in early visual areas. We have hypothesized that the right inferior parietal lobe (‘when’ pathway) plays a critical role in timing perceptual events relative to one another, and thus we examined the role of the right parietal lobe in the generation of this “continuous Wagon Wheel Illusion” (c-WWI). Consistent with our hypothesis, we found that the illusion was effectively weakened following disruption of right, but not left, parietal regions by low frequency repetitive transcranial magnetic stimulation (1 Hz, 10 min). These results were independent of whether the motion stimulus was shown in the left or the right visual field. Thus, the c-WWI appears to depend on higher-order attentional mechanisms that are supported by the ‘when’ pathway of the right parietal lobe. PMID:18682842

  2. Trend-Centric Motion Visualization: Designing and Applying a new Strategy for Analyzing Scientific Motion Collections

    PubMed Central

    Schroeder, David; Korsakov, Fedor; Knipe, Carissa Mai-Ping; Thorson, Lauren; Ellingson, Arin M.; Nuckley, David; Carlis, John; Keefe, Daniel F

    2017-01-01

    In biomechanics studies, researchers collect, via experiments or simulations, datasets with hundreds or thousands of trials, each describing the same type of motion (e.g., a neck flexion-extension exercise) but under different conditions (e.g., different patients, different disease states, pre- and post-treatment). Analyzing similarities and differences across all of the trials in these collections is a major challenge. Visualizing a single trial at a time does not work, and the typical alternative of juxtaposing multiple trials in a single visual display leads to complex, difficult-to-interpret visualizations. We address this problem via a new strategy that organizes the analysis around motion trends rather than trials. This new strategy matches the cognitive approach that scientists would like to take when analyzing motion collections. We introduce several technical innovations making trend-centric motion visualization possible. First, an algorithm detects a motion collection’s trends via time-dependent clustering. Second, a 2D graphical technique visualizes how trials leave and join trends. Third, a 3D graphical technique, using a median 3D motion plus a visual variance indicator, visualizes the biomechanics of the set of trials within each trend. These innovations are combined to create an interactive exploratory visualization tool, which we designed through an iterative process in collaboration with both domain scientists and a traditionally-trained graphic designer. We report on insights generated during this design process and demonstrate the tool’s effectiveness via a validation study with synthetic data and feedback from expert musculoskeletal biomechanics researchers who used the tool to analyze the effects of disc degeneration on human spinal kinematics. PMID:26356978
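
    The time-dependent clustering behind trend detection can be sketched as follows (a toy 1-D version with a simple gap rule; the paper's actual algorithm is not given here): cluster the trials' values at each timestep, then report when a trial's cluster membership changes, i.e. when it leaves one trend and joins another.

```python
def cluster_timestep(values, gap=1.0):
    """Group trial indices whose values are within `gap` of a neighbour."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    clusters, current = [], [order[0]]
    for prev, nxt in zip(order, order[1:]):
        if values[nxt] - values[prev] <= gap:
            current.append(nxt)
        else:
            clusters.append(current)
            current = [nxt]
    clusters.append(current)
    return clusters

def trend_transitions(trials, gap=1.0):
    """Yield (timestep, trial, old_cluster, new_cluster) membership changes."""
    n_steps = len(trials[0])
    label, changes = {}, []
    for t in range(n_steps):
        values = [trial[t] for trial in trials]
        for c, members in enumerate(cluster_timestep(values, gap)):
            for i in members:
                if label.get(i) is not None and label[i] != c:
                    changes.append((t, i, label[i], c))
                label[i] = c
    return changes

# Three trials: two stay together, one diverges halfway through.
trials = [
    [0.0, 0.1, 0.2, 0.3, 0.4],
    [0.1, 0.2, 0.3, 0.4, 0.5],
    [0.0, 0.1, 2.5, 3.0, 3.5],
]
print(trend_transitions(trials, gap=1.0))  # prints [(2, 2, 0, 1)]
```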

  3. On the Integration of Medium Wave Infrared Cameras for Vision-Based Navigation

    DTIC Science & Technology

    2015-03-01

    SWIR Short Wave Infrared; VisualSFM Visual Structure from Motion; WPAFB Wright Patterson Air Force Base ... Visual Structure from Motion (VisualSFM) is an application that performs incremental SfM using images fed into it of a scene [20] ... too drastically in between frames. When this happens, VisualSFM will begin creating a new model with images that do not fit to the old one. These new

  4. Annotated Bibliography of USAARL Technical and Letter Reports. Volume 2. October 1988 - April 1991

    DTIC Science & Technology

    1991-05-01

    G. Lilienthal, Robert S. Kennedy, Jennifer E. Fowlkes, and Dennis R. Baltzley. As technology has been developed to provide improved visual and motion...Gower, Jr., and Jennifer Fowlkes. The U.S. Army Aeromedical Research Laboratory conducted field studies of operational flight simulators to assess the...

  5. Effects of translational and rotational motions and display polarity on visual performance.

    PubMed

    Feng, Wen-Yang; Tseng, Feng-Yi; Chao, Chin-Jung; Lin, Chiuhsiang Joe

    2008-10-01

    This study investigated effects of both translational and rotational motion and display polarity on a visual identification task. Three different motion types--heave, roll, and pitch--were compared with the static (no motion) condition. The visual task was presented on two display polarities, black-on-white and white-on-black. The experiment was a 4 (motion conditions) x 2 (display polarities) within-subjects design with eight subjects (six men and two women; M age = 25.6 yr., SD = 3.2). The dependent variables used to assess the performance on the visual task were accuracy and reaction time. Motion environments, especially the roll condition, had statistically significant effects on the decrement of accuracy and reaction time. The display polarity was significant only in the static condition.

  6. Effects of Age-Related Macular Degeneration on Driving Performance

    PubMed Central

    Wood, Joanne M.; Black, Alex A.; Mallon, Kerry; Kwan, Anthony S.; Owsley, Cynthia

    2018-01-01

    Purpose To explore differences in driving performance of older adults with age-related macular degeneration (AMD) and age-matched controls, and to identify the visual determinants of driving performance in this population. Methods Participants included 33 older drivers with AMD (mean age [M] = 76.6 ± 6.1 years; better eye Age-Related Eye Disease Study grades: early [61%] and intermediate [39%]) and 50 age-matched controls (M = 74.6 ± 5.0 years). Visual tests included visual acuity, contrast sensitivity, visual fields, and motion sensitivity. On-road driving performance was assessed in a dual-brake vehicle by an occupational therapist (masked to drivers' visual status). Outcome measures included driving safety ratings (scale of 1–10, where higher values represented safer driving), types of driving behavior errors, locations at which errors were made, and number of critical errors (CE) requiring an instructor intervention. Results Drivers with AMD were rated as less safe than controls (4.8 vs. 6.2; P = 0.012); safety ratings were associated with AMD severity (early: 5.5 versus intermediate: 3.7), even after adjusting for age. Drivers with AMD had higher CE rates than controls (1.42 vs. 0.36, respectively; rate ratio 3.05, 95% confidence interval 1.47–6.36, P = 0.003) and exhibited more observation, lane keeping, and gap selection errors and made more errors at traffic light–controlled intersections (P < 0.05). Only motion sensitivity was significantly associated with driving safety in the AMD drivers (P = 0.005). Conclusions Drivers with early and intermediate AMD can exhibit impairments in their driving performance, particularly during complex driving situations; motion sensitivity was most strongly associated with driving performance. These findings have important implications for assessing the driving ability of older drivers with visual impairment. PMID:29340641

  7. Effects of simulator motion and visual characteristics on rotorcraft handling qualities evaluations

    NASA Technical Reports Server (NTRS)

    Mitchell, David G.; Hart, Daniel C.

    1993-01-01

    The pilot's perceptions of aircraft handling qualities are influenced by a combination of the aircraft dynamics, the task, and the environment under which the evaluation is performed. When the evaluation is performed in a ground-based simulator, the characteristics of the simulation facility also come into play. Two studies were conducted on NASA Ames Research Center's Vertical Motion Simulator to determine the effects of simulator characteristics on perceived handling qualities. Most evaluations were conducted with a baseline set of rotorcraft dynamics, using a simple transfer-function model of an uncoupled helicopter, under different conditions of visual time delays and motion command washout filters. Differences in pilot opinion were found as the visual and motion parameters were changed, reflecting a change in the pilots' perceptions of handling qualities, rather than changes in the aircraft model itself. The results indicate a need for tailoring the motion washout dynamics to suit the task. Visual-delay data are inconclusive but suggest that it may be better to allow some time delay in the visual path to minimize the mismatch between visual and motion cues, rather than eliminate the visual delay entirely through lead compensation.

  8. Oscillating Droplets and Incompressible Liquids: Slow-Motion Visualization of Experiments with Fluids

    ERIC Educational Resources Information Center

    Vollmer, Michael; Mollmann, Klaus-Peter

    2012-01-01

    We present fascinating simple demonstration experiments recorded with high-speed cameras in the field of fluid dynamics. Examples include oscillations of falling droplets, effects happening upon impact of a liquid droplet into a liquid, the disintegration of extremely large droplets in free fall and the consequences of incompressibility.

  9. Elementary Visual Hallucinations and Their Relationships to Neural Pattern-Forming Mechanisms

    ERIC Educational Resources Information Center

    Billock, Vincent A.; Tsou, Brian H.

    2012-01-01

    An extraordinary variety of experimental (e.g., flicker, magnetic fields) and clinical (epilepsy, migraine) conditions give rise to a surprisingly common set of elementary hallucinations, including spots, geometric patterns, and jagged lines, some of which also have color, depth, motion, and texture. Many of these simple hallucinations fall into a…

  10. MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data.

    PubMed

    Jang, Sujin; Elmqvist, Niklas; Ramani, Karthik

    2016-01-01

    Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge.
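
    The aggregation MotionFlow performs, collapsing many pose sequences into one tree of transitions, can be sketched with a prefix tree over discrete pose labels (poses here are hypothetical; this is not MotionFlow's code): shared prefixes merge into one branch, and each node's count shows how many trials follow that transition path.

```python
def build_pose_tree(sequences):
    # Aggregate pose-label sequences into a prefix tree with visit counts.
    root = {"count": len(sequences), "children": {}}
    for seq in sequences:
        node = root
        for pose in seq:
            child = node["children"].setdefault(
                pose, {"count": 0, "children": {}})
            child["count"] += 1
            node = child
    return root

def render(node, label="ROOT", depth=0):
    # Indented text rendering of the tree, one node per line.
    lines = ["  " * depth + f"{label} ({node['count']})"]
    for pose, child in node["children"].items():
        lines.extend(render(child, pose, depth + 1))
    return lines

sequences = [
    ["stand", "crouch", "jump", "land"],
    ["stand", "crouch", "jump", "stumble"],
    ["stand", "walk", "stop"],
]
print("\n".join(render(build_pose_tree(sequences))))
```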

  11. Integrated evaluation of visually induced motion sickness in terms of autonomic nervous regulation.

    PubMed

    Kiryu, Tohru; Tada, Gen; Toyama, Hiroshi; Iijima, Atsuhiko

    2008-01-01

    To evaluate visually-induced motion sickness, we integrated subjective and objective responses in terms of autonomic nervous regulation. Twenty-seven subjects viewed a 2-min-long first-person-view video section five times (total 10 min) continuously. Measured biosignals, the RR interval, respiration, and blood pressure, were used to estimate the indices related to autonomic nervous activity (ANA). Then we determined the trigger points and some sensation sections based on the time-varying behavior of ANA-related indices. We found that there was a suitable combination of biosignals to present the symptoms of visually-induced motion sickness. Based on the suitable combination, integrating trigger points and subjective scores allowed us to represent the time-distribution of subjective responses during visual exposure, and helps us to understand what types of camera motions will cause visually-induced motion sickness.
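
    Indices related to autonomic nervous activity are typically derived from biosignals such as the RR interval. As a minimal illustration (not the authors' method), two standard time-domain heart-rate-variability indices can be computed from an RR series in milliseconds: SDNN (overall variability) and RMSSD (beat-to-beat variability, commonly linked to parasympathetic activity).

```python
from math import sqrt

def sdnn(rr):
    # Standard deviation of the RR intervals.
    mean = sum(rr) / len(rr)
    return sqrt(sum((x - mean) ** 2 for x in rr) / len(rr))

def rmssd(rr):
    # Root mean square of successive RR differences.
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return sqrt(sum(d * d for d in diffs) / len(diffs))

rr_ms = [812, 790, 825, 805, 795, 830, 810]  # hypothetical RR series
print(f"SDNN  = {sdnn(rr_ms):.1f} ms")
print(f"RMSSD = {rmssd(rr_ms):.1f} ms")
```

Tracking such indices over the repeated video sections is one simple way to expose the time-varying behavior the study describes.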

  12. Amplifying the helicopter drift in a conformal HMD

    NASA Astrophysics Data System (ADS)

    Schmerwitz, Sven; Knabl, Patrizia M.; Lueken, Thomas; Doehler, Hans-Ullrich

    2016-05-01

    Helicopter operations require a well-controlled and minimal lateral drift shortly before ground contact. Any lateral speed exceeding this small threshold can cause a dangerous momentum around the roll axis, which may cause a total roll over of the helicopter. As long as pilots can observe visual cues from the ground, they are able to easily control the helicopter drift. But whenever natural vision is reduced or even obscured, e.g. due to night, fog, or dust, this controllability diminishes. Therefore helicopter operators could benefit from some type of "drift indication" that mitigates the influence of a degraded visual environment. Generally, humans derive ego motion from the perceived environmental object flow. The visual cues perceived are located close to the helicopter; therefore, even small movements can be recognized. This fact was used to investigate a modified drift indication. To enhance the perception of ego motion in a conformal HMD symbol set, the measured movement was used to generate pattern motion in the forward field of view close to or on the landing pad. The paper will discuss the method of amplified ego motion drift indication. Aspects concerning impact factors like visualization type, location, gain and more will be addressed. Further conclusions from previous studies, a high fidelity experiment and a part task experiment, will be provided. A part task study will be presented that compared different amplified drift indications against a predictor. 24 participants, 15 holding a fixed-wing license and 4 helicopter pilots, had to perform a dual task on a virtual reality headset. A simplified control model was used to steer a "helicopter" down to a landing pad while acknowledging randomly placed characters.

  13. Synaptic Correlates of Low-Level Perception in V1.

    PubMed

    Gerard-Mercier, Florian; Carelli, Pedro V; Pananceau, Marc; Troncoso, Xoana G; Frégnac, Yves

    2016-04-06

    The computational role of primary visual cortex (V1) in low-level perception remains largely debated. A dominant view assumes the prevalence of higher cortical areas and top-down processes in binding information across the visual field. Here, we investigated the role of long-distance intracortical connections in form and motion processing by measuring, with intracellular recordings, their synaptic impact on neurons in area 17 (V1) of the anesthetized cat. By systematically mapping synaptic responses to stimuli presented in the nonspiking surround of V1 receptive fields, we provide the first quantitative characterization of the lateral functional connectivity kernel of V1 neurons. Our results revealed at the population level two structural-functional biases in the synaptic integration and dynamic association properties of V1 neurons. First, subthreshold responses to oriented stimuli flashed in isolation in the nonspiking surround exhibited a geometric organization around the preferred orientation axis mirroring the psychophysical "association field" for collinear contour perception. Second, apparent motion stimuli, for which horizontal and feedforward synaptic inputs summed in-phase, evoked dominantly facilitatory nonlinear interactions, specifically during centripetal collinear activation along the preferred orientation axis, at saccadic-like speeds. This spatiotemporal integration property, which could constitute the neural correlate of a human perceptual bias in speed detection, suggests that local (orientation) and global (motion) information is already linked within V1. We propose the existence of a "dynamic association field" in V1 neurons, whose spatial extent and anisotropy are transiently updated and reshaped as a function of changes in the retinal flow statistics imposed during natural oculomotor exploration. The computational role of primary visual cortex in low-level perception remains debated. 
The expression of this "pop-out" perception is often assumed to require attention-related processes, such as top-down feedback from higher cortical areas. Using intracellular techniques in the anesthetized cat and novel analysis methods, we reveal unexpected structural-functional biases in the synaptic integration and dynamic association properties of V1 neurons. These structural-functional biases provide a substrate, within V1, for contour detection and, more unexpectedly, global motion flow sensitivity at saccadic speed, even in the absence of attentional processes. We argue for the concept of a "dynamic association field" in V1 neurons, whose spatial extent and anisotropy changes with retinal flow statistics, and more generally for a renewed focus on intracortical computation. Copyright © 2016 the authors.

  14. Extracting heading and temporal range from optic flow: Human performance issues

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K.; Perrone, John A.; Stone, Leland; Banks, Martin S.; Crowell, James A.

    1993-01-01

    Pilots are able to extract information about their vehicle motion and environmental structure from dynamic transformations in the out-the-window scene. In this presentation, we focus on the information in the optic flow which specifies vehicle heading and distance to objects in the environment, scaled to a temporal metric. In particular, we are concerned with modeling how human operators extract the necessary information, and what factors impact their ability to utilize the critical information. In general, the psychophysical data suggest that the human visual system is fairly robust to degradations in the visual display, e.g., reduced contrast and resolution or a restricted field of view. However, extraneous motion flow, e.g., that introduced by sensor rotation, greatly compromises human performance. The implications of these models and data for enhanced/synthetic vision systems are discussed.

  15. Defective motion processing in children with cerebral visual impairment due to periventricular white matter damage.

    PubMed

    Weinstein, Joel M; Gilmore, Rick O; Shaikh, Sumera M; Kunselman, Allen R; Trescher, William V; Tashima, Lauren M; Boltz, Marianne E; McAuliffe, Matthew B; Cheung, Albert; Fesi, Jeremy D

    2012-07-01

    We sought to characterize visual motion processing in children with cerebral visual impairment (CVI) due to periventricular white matter damage caused by either hydrocephalus (eight individuals) or periventricular leukomalacia (PVL) associated with prematurity (11 individuals). Using steady-state visually evoked potentials (ssVEP), we measured cortical activity related to motion processing for two distinct types of visual stimuli: 'local' motion patterns thought to activate mainly primary visual cortex (V1), and 'global' or coherent patterns thought to activate higher cortical visual association areas (V3, V5, etc.). We studied three groups of children: (1) 19 children with CVI (mean age 9y 6mo [SD 3y 8mo]; 9 male; 10 female); (2) 40 neurologically and visually normal comparison children (mean age 9y 6mo [SD 3y 1mo]; 18 male; 22 female); and (3) because strabismus and amblyopia are common in children with CVI, a group of 41 children without neurological problems who had visual deficits due to amblyopia and/or strabismus (mean age 7y 8mo [SD 2y 8mo]; 28 male; 13 female). We found that the processing of global as opposed to local motion was preferentially impaired in individuals with CVI, especially for slower target velocities (p=0.028). Motion processing is impaired in children with CVI. ssVEP may provide useful and objective information about the development of higher visual function in children at risk for CVI. © The Authors. Journal compilation © Mac Keith Press 2011.

  16. A retinal code for motion along the gravitational and body axes

    PubMed Central

    Sabbah, Shai; Gemmer, John A.; Bhatia-Lin, Ananya; Manoff, Gabrielle; Castro, Gabriel; Siegel, Jesse K.; Jeffery, Nathan; Berson, David M.

    2017-01-01

    Summary Self-motion triggers complementary visual and vestibular reflexes supporting image-stabilization and balance. Translation through space produces one global pattern of retinal image motion (optic flow), rotation another. We show that each subtype of direction-selective ganglion cell (DSGC) adjusts its direction preference topographically to align with specific translatory optic flow fields, creating a neural ensemble tuned for a specific direction of motion through space. Four cardinal translatory directions are represented, aligned with two axes of high adaptive relevance: the body and gravitational axes. One subtype maximizes its output when the mouse advances, others when it retreats, rises, or falls. ON-DSGCs and ON-OFF-DSGCs share the same spatial geometry but weight the four channels differently. Each subtype ensemble is also tuned for rotation. The relative activation of DSGC channels uniquely encodes every translation and rotation. Though retinal and vestibular systems both encode translatory and rotatory self-motion, their coordinate systems differ. PMID:28607486

  17. Flow visualization in superfluid helium-4 using He2 molecular tracers

    NASA Astrophysics Data System (ADS)

    Guo, Wei

    Flow visualization in superfluid helium is challenging, yet crucial for attaining a detailed understanding of quantum turbulence. Two problems have impeded progress: finding and introducing suitable tracers that are small yet visible; and unambiguous interpretation of the tracer motion. We show that metastable He2 triplet molecules are outstanding tracers compared with other particles used in helium. These molecular tracers have small size and relatively simple behavior in superfluid helium: they follow the normal fluid motion above 1 K and will bind to quantized vortex lines below about 0.6 K. A laser-induced fluorescence technique has been developed for imaging the He2 tracers. We will present our recent experimental work on studying the normal-fluid motion by tracking thin lines of He2 tracers created via femtosecond laser-field ionization in helium. We will also discuss a newly launched experiment on visualizing vortex lines in a magnetically levitated superfluid helium drop by imaging the He2 tracers trapped on the vortex cores. This experiment will enable unprecedented insight into the behavior of a rotating superfluid drop and will untangle several key issues in quantum turbulence research. We acknowledge the support from the National Science Foundation under Grant No. DMR-1507386 and the US Department of Energy under Grant No. DE-FG02 96ER40952.

  18. Building large mosaics of confocal endomicroscopic images using visual servoing.

    PubMed

    Rosa, Benoît; Erden, Mustafa Suphi; Vercauteren, Tom; Herman, Benoît; Szewczyk, Jérôme; Morel, Guillaume

    2013-04-01

    Probe-based confocal laser endomicroscopy provides real-time microscopic images of tissues contacted by a small probe that can be inserted in vivo through a minimally invasive access. Mosaicking consists in sweeping the probe in contact with a tissue to be imaged while collecting the video stream, and processing the images to assemble them into a large mosaic. While most of the literature in this field has focused on image processing, little attention has been paid so far to the way the probe motion can be controlled. This is a crucial issue since the precision of the probe trajectory control drastically influences the quality of the final mosaic. Robotically controlled motion has the potential of providing enough precision to perform mosaicking. In this paper, we emphasize the difficulties of implementing such an approach. First, probe-tissue contacts generate deformations that prevent proper control of the image trajectory. Second, in the context of minimally invasive procedures targeted by our research, robotic devices are likely to exhibit limited quality of the distal probe motion control at the microscopic scale. To cope with these problems, visual servoing from real-time endomicroscopic images is proposed in this paper. It is implemented on two different devices (a high-accuracy industrial robot and a prototype minimally invasive device). Experiments on different kinds of environments (printed paper and ex vivo tissues) show that the quality of the visually servoed probe motion is sufficient to build mosaics with minimal distortion in spite of disturbances.
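
    The control strategy described, driving probe motion from real-time endomicroscopic images, can be reduced to its simplest element: a proportional visual-servoing step that maps the image-space error of a tracked feature into a probe velocity command. A minimal sketch, assuming a planar probe, a fixed pixel-scale calibration, and hypothetical function and parameter names (none taken from the paper):

    ```python
    import numpy as np

    def visual_servo_step(feature_px, target_px, pixels_per_mm, gain=0.5):
        """One proportional visual-servoing step (illustrative sketch).

        Computes the image-space error between the tracked feature and its
        desired position, then converts it into a planar probe velocity
        command (mm/s) that drives the error toward zero.
        """
        error_px = np.asarray(feature_px, float) - np.asarray(target_px, float)
        # Proportional control law: velocity opposes the image error.
        velocity_mm_s = -gain * error_px / pixels_per_mm
        return velocity_mm_s
    ```

    In a real mosaicking loop this step would run at frame rate, with the target position swept along the desired mosaic trajectory; the actual controller in the paper also has to contend with tissue deformation, which this sketch ignores.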

  19. Computer-animated stimuli to measure motion sensitivity: constraints on signal design in the Jacky dragon.

    PubMed

    Woo, Kevin L; Rieucau, Guillaume; Burke, Darren

    2017-02-01

    Identifying perceptual thresholds is critical for understanding the mechanisms that underlie signal evolution. Using computer-animated stimuli, we examined visual speed sensitivity in the Jacky dragon Amphibolurus muricatus, a species that makes extensive use of rapid motor patterns in social communication. First, focal lizards were tested in discrimination trials using random-dot kinematograms displaying combinations of speed, coherence, and direction. Second, we measured subject lizards' ability to predict the appearance of a secondary reinforcer (1 of 3 different computer-generated animations of invertebrates: cricket, spider, and mite) based on the direction of movement of a field of drifting dots by following a set of behavioural responses (e.g., orienting response, latency to respond) to our virtual stimuli. We found an effect of both speed and coherence, as well as an interaction between these 2 factors on the perception of moving stimuli. Overall, our results showed that Jacky dragons have acute sensitivity to high speeds. We then employed an optic flow analysis to match the performance to ecologically relevant motion. Our results suggest that the Jacky dragon visual system may have been shaped to detect fast motion. This pre-existing sensitivity may have constrained the evolution of conspecific displays. In contrast, Jacky dragons may have difficulty in detecting the movement of ambush predators, such as snakes and of some invertebrate prey. Our study also demonstrates the potential of the computer-animated stimuli technique for conducting nonintrusive tests to explore motion range and sensitivity in a visually mediated species.

  20. Hawk Eyes I: Diurnal Raptors Differ in Visual Fields and Degree of Eye Movement

    PubMed Central

    O'Rourke, Colleen T.; Hall, Margaret I.; Pitlik, Todd; Fernández-Juricic, Esteban

    2010-01-01

    Background Different strategies to search and detect prey may place specific demands on sensory modalities. We studied visual field configuration, degree of eye movement, and orbit orientation in three diurnal raptors belonging to the Accipitridae and Falconidae families. Methodology/Principal Findings We used an ophthalmoscopic reflex technique and an integrated 3D digitizer system. We found inter-specific variation in visual field configuration and degree of eye movement, but not in orbit orientation. Red-tailed Hawks have relatively small binocular areas (∼33°) and wide blind areas (∼82°), but intermediate degree of eye movement (∼5°), which underscores the importance of lateral vision rather than binocular vision to scan for distant prey in open areas. Cooper's Hawks have relatively wide binocular fields (∼36°), small blind areas (∼60°), and high degree of eye movement (∼8°), which may increase visual coverage and enhance prey detection in closed habitats. Additionally, we found that Cooper's Hawks can visually inspect the items held in the tip of the bill, which may facilitate food handling. American Kestrels have intermediate-sized binocular and lateral areas that may be used in prey detection at different distances through stereopsis and motion parallax; whereas the low degree of eye movement (∼1°) may help stabilize the image when hovering above prey before an attack. Conclusions We conclude that: (a) there are between-species differences in visual field configuration in these diurnal raptors; (b) these differences are consistent with prey searching strategies and degree of visual obstruction in the environment (e.g., open and closed habitats); (c) variations in the degree of eye movement between species appear associated with foraging strategies; and (d) the size of the binocular and blind areas in hawks can vary substantially due to eye movements. 
Inter-specific variation in visual fields and eye movements can influence behavioral strategies to visually search for and track prey while perching. PMID:20877645

  2. Primary Visual Cortex as a Saliency Map: A Parameter-Free Prediction and Its Test by Behavioral Data

    PubMed Central

    Zhaoping, Li; Zhe, Li

    2015-01-01

    It has been hypothesized that neural activities in the primary visual cortex (V1) represent a saliency map of the visual field to exogenously guide attention. This hypothesis has so far provided only qualitative predictions and their confirmations. We report this hypothesis’ first quantitative prediction, derived without free parameters, and its confirmation by human behavioral data. The hypothesis provides a direct link between V1 neural responses to a visual location and the saliency of that location to guide attention exogenously. In a visual input containing many bars, one of them saliently different from all the other bars which are identical to each other, saliency at the singleton’s location can be measured by the shortness of the reaction time in a visual search for singletons. The hypothesis predicts quantitatively the whole distribution of the reaction times to find a singleton unique in color, orientation, and motion direction from the reaction times to find other types of singletons. The prediction matches human reaction time data. A requirement for this successful prediction is a data-motivated assumption that V1 lacks neurons tuned simultaneously to color, orientation, and motion direction of visual inputs. Since evidence suggests that extrastriate cortices do have such neurons, we discuss the possibility that the extrastriate cortices play no role in guiding exogenous attention so that they can be devoted to other functions like visual decoding and endogenous attention. PMID:26441341
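
    One way to read the parameter-free prediction is as a race: if no V1 neuron is conjunctively tuned to color, orientation, and motion direction, the reaction-time distribution for a triple-feature singleton should follow the minimum of the reaction times evoked by each single-feature singleton. A Monte-Carlo sketch under that assumption (an illustration of the style of prediction, not the paper's exact procedure):

    ```python
    import random

    def simulate_race_rts(rt_samples_by_feature, n=10000, seed=0):
        """Monte-Carlo sketch of a race-model prediction (assumed reading).

        Given empirical RT samples for each single-feature singleton (e.g.
        color, orientation, motion), draw one RT per feature and keep the
        minimum, as if independent feature channels raced to trigger the
        saccade. Returns the list of predicted triple-singleton RTs.
        """
        rng = random.Random(seed)
        predictions = []
        for _ in range(n):
            # Each feature channel contributes one sampled RT; the fastest wins.
            predictions.append(min(rng.choice(samples)
                                   for samples in rt_samples_by_feature))
        return predictions
    ```

    The predicted distribution can then be compared against the measured triple-singleton RTs, with no free parameters to fit.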

  3. 2D/3D Visual Tracker for Rover Mast

    NASA Technical Reports Server (NTRS)

    Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria

    2006-01-01

    A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points, the 3D motions of which are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field-of-view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of kinematics of the mast. If the motion between the consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window and the best correlation results indicate the appropriate match. 
The program could be a core for building application programs for systems that require coordination of vision and robotic motion.
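
    The depth-scaled correlation matching described above can be sketched as follows. The function names, the nearest-neighbour resize, and the exhaustive scan over the search window are illustrative assumptions, not the flight code:

    ```python
    import numpy as np

    def ncc(patch, template):
        """Zero-mean normalized cross-correlation of two equal-size patches."""
        a = patch - patch.mean()
        b = template - template.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0

    def match_template(image, template, scale):
        """Scale the template by the old/new depth ratio, then slide it over
        the search image and return the best-correlating location and score."""
        h = max(1, int(round(template.shape[0] * scale)))
        w = max(1, int(round(template.shape[1] * scale)))
        # Nearest-neighbour resize (stand-in for a proper interpolator).
        rows = (np.arange(h) * template.shape[0] / h).astype(int)
        cols = (np.arange(w) * template.shape[1] / w).astype(int)
        scaled = template[np.ix_(rows, cols)]
        best_score, best_rc = -2.0, (0, 0)
        for r in range(image.shape[0] - h + 1):
            for c in range(image.shape[1] - w + 1):
                score = ncc(image[r:r + h, c:c + w], scaled)
                if score > best_score:
                    best_score, best_rc = score, (r, c)
        return best_rc, best_score
    ```

    In practice the scale would come from the stereo depth estimates at the old and new views, and the search would be restricted to a window predicted from the pose change rather than the whole image.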

  4. Visual Acuity Using Head-fixed Displays During Passive Self and Surround Motion

    NASA Technical Reports Server (NTRS)

    Wood, Scott J.; Black, F. Owen; Stallings, Valerie; Peters, Brian

    2007-01-01

    The ability to read head-fixed displays on various motion platforms requires the suppression of vestibulo-ocular reflexes. This study examined dynamic visual acuity while viewing a head-fixed display during different self and surround rotation conditions. Twelve healthy subjects were asked to report the orientation of Landolt C optotypes presented on a micro-display fixed to a rotating chair at 50 cm distance. Acuity thresholds were determined by the lowest size at which the subjects correctly identified 3 of 5 optotype orientations at peak velocity. Visual acuity was compared across four different conditions, each tested at 0.05 and 0.4 Hz (peak amplitude of 57 deg/s). The four conditions included: subject rotated in semi-darkness (i.e., limited to background illumination of the display), subject stationary while visual scene rotated, subject rotated around a stationary visual background, and both subject and visual scene rotated together. Visual acuity performance was greatest when the subject rotated around a stationary visual background; i.e., when both vestibular and visual inputs provided concordant information about the motion. Visual acuity performance was most reduced when the subject and visual scene rotated together; i.e., when the visual scene provided discordant information about the motion. Ranges of 4-5 logMAR step sizes across the conditions indicated the acuity task was sufficient to discriminate visual performance levels. The background visual scene can influence the ability to read head-fixed displays during passive motion disturbances. Dynamic visual acuity using head-fixed displays can provide an operationally relevant screening tool for visual performance during exposure to novel acceleration environments.
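
    The threshold rule described (the lowest optotype size at which the subject correctly identified 3 of 5 orientations) can be expressed directly. The dictionary-based interface below is a hypothetical simplification of the actual scoring:

    ```python
    def acuity_threshold(trials):
        """Return the smallest optotype size passing the 3-of-5 criterion.

        `trials` maps optotype size -> list of 5 booleans (correct/incorrect
        orientation reports at peak velocity). Returns None if no size passes.
        """
        passed = [size for size, responses in trials.items()
                  if sum(responses) >= 3]
        return min(passed) if passed else None
    ```

    Comparing the thresholds obtained under the four viewing conditions then gives the acuity differences the study reports.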

  5. Estimation of bio-signal based on human motion for integrated visualization of daily-life.

    PubMed

    Umetani, Tomohiro; Matsukawa, Tsuyoshi; Yokoyama, Kiyoko

    2007-01-01

    This paper describes a method for the estimation of bio-signals based on human motion in daily life for an integrated visualization system. The recent advancement of computers and measurement technology has facilitated the integrated visualization of bio-signals and human motion data. It is desirable to obtain a method to understand the activities of muscles based on human motion data and evaluate the change in physiological parameters according to human motion for visualization applications. We assume that human motion is generated by muscle activity, which is reflected in bio-signals such as electromyograms. This paper introduces a method for the estimation of bio-signals based on neural networks. The same procedure can be used to estimate other physiological parameters. The experimental results show the feasibility of the proposed method.

  6. Disappearance of the inversion effect during memory-guided tracking of scrambled biological motion.

    PubMed

    Jiang, Changhao; Yue, Guang H; Chen, Tingting; Ding, Jinhong

    2016-08-01

    The human visual system is highly sensitive to biological motion. Even when a point-light walker is temporarily occluded from view by other objects, our eyes are still able to maintain tracking continuity. To investigate how the visual system establishes a correspondence between the biological-motion stimuli visible before and after the disruption, we used the occlusion paradigm with biological-motion stimuli that were intact or scrambled. The results showed that during visually guided tracking, both the observers' predicted times and predictive smooth pursuit were more accurate for upright biological motion (intact and scrambled) than for inverted biological motion. During memory-guided tracking, however, the processing advantage for upright as compared with inverted biological motion was not found in the scrambled condition, but in the intact condition only. This suggests that spatial location information alone is not sufficient to build and maintain the representational continuity of the biological motion across the occlusion, and that the object identity may act as an important information source in visual tracking. The inversion effect disappeared when the scrambled biological motion was occluded, which indicates that when biological motion is temporarily occluded and there is a complete absence of visual feedback signals, an oculomotor prediction is executed to maintain the tracking continuity, which is established not only by updating the target's spatial location, but also by the retrieval of identity information stored in long-term memory.

  7. Visualizing the ground motions of the 1906 San Francisco earthquake

    USGS Publications Warehouse

    Chourasia, A.; Cutchin, S.; Aagaard, Brad T.

    2008-01-01

    With advances in computational capabilities and refinement of seismic wave-propagation models in the past decade, large three-dimensional simulations of earthquake ground motion have become possible. The resulting datasets from these simulations are multivariate, temporal and multi-terabyte in size. Past visual representations of results from seismic studies have been largely confined to static two-dimensional maps. New visual representations provide scientists with alternate ways of viewing and interacting with these results, potentially leading to new and significant insight into the physical phenomena. Visualizations can also be used for pedagogic and general dissemination purposes. We present a workflow for visual representation of the data from a ground motion simulation of the great 1906 San Francisco earthquake. We have employed state-of-the-art animation tools for visualization of the ground motions with a high degree of accuracy and visual realism. © 2008 Elsevier Ltd.

  8. Recovery of biological motion perception and network plasticity after cerebellar tumor removal.

    PubMed

    Sokolov, Arseny A; Erb, Michael; Grodd, Wolfgang; Tatagiba, Marcos S; Frackowiak, Richard S J; Pavlova, Marina A

    2014-10-01

    Visual perception of body motion is vital for everyday activities such as social interaction, motor learning or car driving. Tumors to the left lateral cerebellum impair visual perception of body motion. However, compensatory potential after cerebellar damage and underlying neural mechanisms remain unknown. In the present study, visual sensitivity to point-light body motion was psychophysically assessed in patient SL with dysplastic gangliocytoma (Lhermitte-Duclos disease) to the left cerebellum before and after neurosurgery, and in a group of healthy matched controls. Brain activity during processing of body motion was assessed by functional magnetic resonance imaging (fMRI). Alterations in underlying cerebro-cerebellar circuitry were studied by psychophysiological interaction (PPI) analysis. Visual sensitivity to body motion in patient SL before neurosurgery was substantially lower than in controls, with significant improvement after neurosurgery. Functional MRI in patient SL revealed a similar pattern of cerebellar activation during biological motion processing as in healthy participants, but located more medially, in the left cerebellar lobules III and IX. As in normalcy, PPI analysis showed cerebellar communication with a region in the superior temporal sulcus, but located more anteriorly. The findings demonstrate a potential for recovery of visual body motion processing after cerebellar damage, likely mediated by topographic shifts within the corresponding cerebro-cerebellar circuitry induced by cerebellar reorganization. The outcome is of importance for further understanding of cerebellar plasticity and neural circuits underpinning visual social cognition.

  9. Empirical comparison of a fixed-base and a moving-base simulation of a helicopter engaged in visually conducted slalom runs

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Houck, J. A.; Martin, D. J., Jr.

    1977-01-01

    Combined visual, motion, and aural cues for a helicopter engaged in visually conducted slalom runs at low altitude were studied. The evaluation of the visual and aural cues was subjective, whereas the motion cues were evaluated both subjectively and objectively. Subjective and objective results coincided in the area of control activity. Generally, less control activity is present under motion conditions than under fixed-base conditions, an effect the pilots attributed to the realistic sense of the machine's (helicopter's) limitations conveyed by the added motion cues. The objective data also revealed that the slalom runs were conducted at significantly higher altitudes under motion conditions than under fixed-base conditions.

  10. Implied motion language can influence visual spatial memory.

    PubMed

    Vinson, David W; Engelen, Jan; Zwaan, Rolf A; Matlock, Teenie; Dale, Rick

    2017-07-01

    How do language and vision interact? Specifically, what impact can language have on visual processing, especially related to spatial memory? What are typically considered errors in visual processing, such as remembering the location of an object to be farther along its motion trajectory than it actually is, can be explained as perceptual achievements that are driven by our ability to anticipate future events. In two experiments, we tested whether the prior presentation of motion language influences visual spatial memory in ways that afford greater perceptual prediction. Experiment 1 showed that motion language influenced judgments for the spatial memory of an object beyond the known effects of implied motion present in the image itself. Experiment 2 replicated this finding. Our findings support a theory of perception as prediction.

  11. Dynamic Visual Acuity and Landing Sickness in Crewmembers Returning from Long-Duration Spaceflight

    NASA Technical Reports Server (NTRS)

    Rosenberg, M.J.F; Peters, B.T.; Reschke, M. F.

    2016-01-01

    Long-term exposure to microgravity causes sensorimotor adaptations that result in functional deficits upon returning to a gravitational environment. At landing the vestibular system and the central nervous system, responsible for coordinating head and eye movements, are adapted to microgravity and must re-adapt to the gravitational environment. This re-adaptation causes decrements in gaze control and dynamic visual acuity, with astronauts reporting oscillopsia and blurred vision. Dynamic visual acuity (DVA) is assessed using an oscillating chair developed in the Neuroscience Laboratory at JSC. This chair is lightweight and easily portable for quick deployment in the field. The base of the chair is spring-loaded and allows for manual oscillation of the subject. Using a metronome, the chair is vertically oscillated ±2 cm at 2 Hz by an operator, to simulate walking. While the subject is being oscillated, they are asked to discern the direction of Landolt-C optotypes of varying sizes and record their direction using a gamepad. The visual acuity thresholds are determined using an algorithm that alters the size of the optotype based on the previous response of the subject using a forced-choice best parameter estimation that is able to rapidly converge on the threshold value. Visual acuity thresholds were determined both for static (seated) and dynamic (oscillating) conditions. Dynamic visual acuity is defined as the difference between the dynamic and static conditions. Dynamic visual acuity measures will be taken prior to flight (typically L-180, L-90, and L-60) and up to eight times after landing, including up to three times on R+0. Follow-up measurements will be taken at R+1 (approximately 36 hours after landing). Long-duration International Space Station crewmembers will be tested once at the refueling stop in Europe and once again upon return to Johnson Space Center. 
In addition to DVA, subjective ratings of motion sickness will be recorded throughout the testing. Using the chair as a portable and reliable way to test DVA, we aim to test returning astronauts to assess the amount of retinal slip that they experience. By comparing these measurements to their motion sickness scores (using a scale of 1 to 20 where 20 is vomiting), we will correlate the amount of retinal slip to the level of motion sickness experienced. In addition to testing this in returning astronauts, we will perform ground-based studies to determine the effectiveness of stroboscopic goggles in reducing retinal slip and improving DVA. Finally, we will employ stroboscopic goggles in the field to astronauts experiencing high levels of motion sickness to minimize retinal slip and reduce their symptoms.
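
    The adaptive procedure is described only as a forced-choice best-parameter estimator. As a stand-in, a basic up-down staircase with step halving on reversals illustrates how the optotype size can converge on a threshold from trial-by-trial responses (an assumed simplification, not the laboratory's actual algorithm):

    ```python
    def staircase(respond, start_size, min_step=0.05, step=0.4):
        """Up-down staircase sketch: shrink the optotype after a correct
        response, enlarge it after an error, and halve the step at each
        reversal until the step falls below `min_step`. `respond(size)`
        returns True if the subject identified the optotype correctly.
        """
        size, last_correct = start_size, None
        while step >= min_step:
            correct = respond(size)
            if last_correct is not None and correct != last_correct:
                step /= 2.0  # reversal: refine the step size
            size += -step if correct else step
            last_correct = correct
        return size
    ```

    The estimator described in the abstract converges faster by maintaining an explicit estimate of the psychometric function, but the convergence behaviour is qualitatively similar.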

  12. Perceived state of self during motion can differentially modulate numerical magnitude allocation.

    PubMed

    Arshad, Q; Nigmatullina, Y; Roberts, R E; Goga, U; Pikovsky, M; Khan, S; Lobo, R; Flury, A-S; Pettorossi, V E; Cohen-Kadosh, R; Malhotra, P A; Bronstein, A M

    2016-09-01

    Although a direct relationship between numerical allocation and spatial attention has been proposed, recent research suggests that these processes are not directly coupled. In keeping with this, spatial attention shifts induced either via visual or vestibular motion can modulate numerical allocation in some circumstances but not in others. In addition to shifting spatial attention, visual or vestibular motion paradigms also (i) elicit compensatory eye movements which themselves can influence numerical processing and (ii) alter the perceptual state of 'self', inducing changes in bodily self-consciousness impacting upon cognitive mechanisms. Thus, the precise mechanism by which motion modulates numerical allocation remains unknown. We sought to investigate the influence that different perceptual experiences of motion have upon numerical magnitude allocation while controlling for both eye movements and task-related effects. We first used optokinetic visual motion stimulation (OKS) to elicit the perceptual experience of either 'visual world' or 'self'-motion during which eye movements were identical. In a second experiment, we used a vestibular protocol examining the effects of perceived and subliminal angular rotations in darkness, which also provoked identical eye movements. We observed that during the perceptual experience of 'visual world' motion, rightward OKS biased judgments towards smaller numbers, whereas leftward OKS biased judgments towards larger numbers. During the perceptual experience of 'self-motion', judgments were biased towards larger numbers irrespective of the OKS direction. Contrastingly, vestibular motion perception was found not to modulate numerical magnitude allocation, nor was there any differential modulation when comparing 'perceived' vs. 'subliminal' rotations. 
We provide a novel demonstration that numerical magnitude allocation can be differentially modulated by the perceptual state of self during visually mediated, but not vestibularly mediated, motion. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  13. Visual Features Involving Motion Seen from Airport Control Towers

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.; Liston, Dorion

    2010-01-01

Visual motion cues are used by tower controllers to support both visual and anticipated separation. Some of these cues are tabulated as part of the overall set of visual features used in towers to separate aircraft. An initial analysis of one motion cue, landing deceleration, is provided as a basis for evaluating how controllers detect and use it for spacing aircraft on or near the surface. Understanding such cues will help determine whether they can be safely used in a remote/virtual tower, in which their presentation may be visually degraded.

  14. Slow and fast visual motion channels have independent binocular-rivalry stages.

    PubMed Central

    van de Grind, W. A.; van Hof, P.; van der Smagt, M. J.; Verstraten, F. A.

    2001-01-01

    We have previously reported a transparent motion after-effect indicating that the human visual system comprises separate slow and fast motion channels. Here, we report that the presentation of a fast motion in one eye and a slow motion in the other eye does not result in binocular rivalry but in a clear percept of transparent motion. We call this new visual phenomenon 'dichoptic motion transparency' (DMT). So far only the DMT phenomenon and the two motion after-effects (the 'classical' motion after-effect, seen after motion adaptation on a static test pattern, and the dynamic motion after-effect, seen on a dynamic-noise test pattern) appear to isolate the channels completely. The speed ranges of the slow and fast channels overlap strongly and are observer dependent. A model is presented that links after-effect durations of an observer to the probability of rivalry or DMT as a function of dichoptic velocity combinations. Model results support the assumption of two highly independent channels showing only within-channel rivalry, and no rivalry or after-effect interactions between the channels. The finding of two independent motion vision channels, each with a separate rivalry stage and a private line to conscious perception, might be helpful in visualizing or analysing pathways to consciousness. PMID:11270442

  15. The interaction of Bayesian priors and sensory data and its neural circuit implementation in visually-guided movement

    PubMed Central

    Yang, Jin; Lee, Joonyeol; Lisberger, Stephen G.

    2012-01-01

    Sensory-motor behavior results from a complex interaction of noisy sensory data with priors based on recent experience. By varying the stimulus form and contrast for the initiation of smooth pursuit eye movements in monkeys, we show that visual motion inputs compete with two independent priors: one prior biases eye speed toward zero; the other prior attracts eye direction according to the past several days’ history of target directions. The priors bias the speed and direction of the initiation of pursuit for the weak sensory data provided by the motion of a low-contrast sine wave grating. However, the priors have relatively little effect on pursuit speed and direction when the visual stimulus arises from the coherent motion of a high-contrast patch of dots. For any given stimulus form, the mean and variance of eye speed co-vary in the initiation of pursuit, as expected for signal-dependent noise. This relationship suggests that pursuit implements a trade-off between movement accuracy and variation, reducing both when the sensory signals are noisy. The tradeoff is implemented as a competition of sensory data and priors that follows the rules of Bayesian estimation. Computer simulations show that the priors can be understood as direction specific control of the strength of visual-motor transmission, and can be implemented in a neural-network model that makes testable predictions about the population response in the smooth eye movement region of the frontal eye fields. PMID:23223286
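The competition between priors and noisy sensory data described in this abstract follows the standard rules of Bayesian estimation: with Gaussian prior and likelihood, the posterior mean is a precision-weighted average, so unreliable sensory data (e.g. a low-contrast grating) is pulled toward the prior while reliable data (a high-contrast dot patch) dominates. A minimal sketch of that computation, with all numerical values illustrative rather than taken from the study:

```python
def posterior_speed(sensory_speed, sensory_var, prior_mean=0.0, prior_var=4.0):
    """Combine a Gaussian speed prior with a Gaussian sensory likelihood.

    The posterior mean is a precision-weighted average: the noisier the
    sensory measurement (larger sensory_var), the more the estimate is
    biased toward the prior mean (here, zero speed).
    """
    w_sensory = 1.0 / sensory_var
    w_prior = 1.0 / prior_var
    mean = (w_sensory * sensory_speed + w_prior * prior_mean) / (w_sensory + w_prior)
    var = 1.0 / (w_sensory + w_prior)
    return mean, var

# Weak stimulus (noisy likelihood): estimate strongly biased toward zero.
low_contrast, _ = posterior_speed(10.0, sensory_var=16.0)   # -> 2.0
# Strong stimulus (reliable likelihood): estimate close to the true speed.
high_contrast, _ = posterior_speed(10.0, sensory_var=0.25)  # -> ~9.41
```

The same weighted-average form applies per dimension for the direction prior: a prior built from the past days' target directions attracts the direction estimate most when the sensory evidence is weakest.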

  16. Visualization of Heart Sounds and Motion Using Multichannel Sensor

    NASA Astrophysics Data System (ADS)

    Nogata, Fumio; Yokota, Yasunari; Kawamura, Yoko

    2010-06-01

As there are various difficulties associated with auscultation techniques, we have devised a technique for visualizing heart motion in order to assist both doctors and patients in understanding the heartbeat. Auscultatory sounds were first visualized using FFT and wavelet analysis. Next, to show global and simultaneous heart motions, a new visualization technique was established. The visualization system consists of a 64-channel unit (63 acceleration sensors and one ECG sensor) and a signal/image analysis unit. The acceleration sensors were arranged in a square array (8×8) with a 20-mm pitch interval and adhered to the chest surface. One cycle of heart motion was visualized at a sampling frequency of 3 kHz and a quantization of 12 bits. The visualized results showed, in sequence, the typical waveform of the strong pressure shock due to closure of the tricuspid and mitral valves at the cardiac apex (first sound) and closure of the aortic and pulmonic valves (second sound). To overcome difficulties in auscultation, the system can be applied to the detection of heart disease and to digital database management of auscultation examinations in medical settings.

  17. Visual gravitational motion and the vestibular system in humans

    PubMed Central

    Lacquaniti, Francesco; Bosco, Gianfranco; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Moscatelli, Alessandro; Zago, Myrka

    2013-01-01

    The visual system is poorly sensitive to arbitrary accelerations, but accurately detects the effects of gravity on a target motion. Here we review behavioral and neuroimaging data about the neural mechanisms for dealing with object motion and egomotion under gravity. The results from several experiments show that the visual estimates of a target motion under gravity depend on the combination of a prior of gravity effects with on-line visual signals on target position and velocity. These estimates are affected by vestibular inputs, and are encoded in a visual-vestibular network whose core regions lie within or around the Sylvian fissure, and are represented by the posterior insula/retroinsula/temporo-parietal junction. This network responds both to target motions coherent with gravity and to vestibular caloric stimulation in human fMRI studies. Transient inactivation of the temporo-parietal junction selectively disrupts the interception of targets accelerated by gravity. PMID:24421761

  18. Visual gravitational motion and the vestibular system in humans.

    PubMed

    Lacquaniti, Francesco; Bosco, Gianfranco; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Moscatelli, Alessandro; Zago, Myrka

    2013-12-26

    The visual system is poorly sensitive to arbitrary accelerations, but accurately detects the effects of gravity on a target motion. Here we review behavioral and neuroimaging data about the neural mechanisms for dealing with object motion and egomotion under gravity. The results from several experiments show that the visual estimates of a target motion under gravity depend on the combination of a prior of gravity effects with on-line visual signals on target position and velocity. These estimates are affected by vestibular inputs, and are encoded in a visual-vestibular network whose core regions lie within or around the Sylvian fissure, and are represented by the posterior insula/retroinsula/temporo-parietal junction. This network responds both to target motions coherent with gravity and to vestibular caloric stimulation in human fMRI studies. Transient inactivation of the temporo-parietal junction selectively disrupts the interception of targets accelerated by gravity.

  19. Normal form from biological motion despite impaired ventral stream function.

    PubMed

    Gilaie-Dotan, S; Bentin, S; Harel, M; Rees, G; Saygin, A P

    2011-04-01

We explored the extent to which biological motion perception depends on ventral stream integration by studying LG, an unusual case of developmental visual agnosia. LG has significant ventral stream processing deficits but no discernible structural cortical abnormality. LG's intermediate visual areas and object-sensitive regions exhibit abnormal activation during visual object perception, in contrast to area V5/MT+ which responds normally to visual motion (Gilaie-Dotan, Perry, Bonneh, Malach, & Bentin, 2009). Here, in three studies we used point light displays, which require visual integration, in adaptive threshold experiments to examine LG's ability to detect form from biological and non-biological motion cues. LG's ability to detect and discriminate form from biological motion was similar to healthy controls. In contrast, he was significantly deficient in processing form from non-biological motion. Thus, LG can rely on biological motion cues to perceive human forms, but is considerably impaired in extracting form from non-biological motion. Finally, we found that while LG viewed biological motion, activity in a network of brain regions associated with processing biological motion was functionally correlated with his V5/MT+ activity, indicating that normal inputs from V5/MT+ might suffice to activate his action perception system. These results indicate that processing of biologically moving form can dissociate from other form processing in the ventral pathway. Furthermore, the present results indicate that integrative ventral stream processing is necessary for uncompromised processing of non-biological form from motion. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Time-evolving of very large-scale motions in a turbulent channel flow

    NASA Astrophysics Data System (ADS)

    Hwang, Jinyul; Lee, Jin; Sung, Hyung Jin; Zaki, Tamer A.

    2014-11-01

    Direct numerical simulation (DNS) data of a turbulent channel flow at Reτ = 930 was scrutinized to investigate the formation of very large-scale motions (VLSMs) by merging of two large-scale motions (LSMs), aligned in the streamwise direction. We mainly focused on the supportive motions by the near-wall streaks during the merging of the outer LSMs. From visualization of the instantaneous flow fields, several low-speed streaks in the near-wall region were collected in the spanwise direction, when LSMs were concatenated in the outer region. The magnitude of the streamwise velocity fluctuations in the streaks was intensified during the spanwise merging of the near-wall streaks. Conditionally-averaged velocity fields around the merging of the outer LSMs showed that the intensified near-wall motions were induced by the outer LSMs and extended over the near-wall regions. The intense near-wall motions influence the formation of the outer low-speed regions as well as the reduction of the convection velocity of the downstream LSMs. The interaction between the near-wall and the outer motions is the essential origin of the different convection velocities of the upstream and downstream LSMs for the formation process of VLSMs by merging. This work was supported by the Creative Research Initiatives (No. 2014-001493) program of the National Research Foundation of Korea (MSIP) and partially supported by KISTI under the Strategic Supercomputing Support Program.

  1. Experimental and Analytic Evaluation of the Effects of Visual and Motion Simulation in SH-3 Helicopter Training. Technical Report 85-002.

    ERIC Educational Resources Information Center

    Pfeiffer, Mark G.; Scott, Paul G.

    A fly-only group (N=16) of Navy replacement pilots undergoing fleet readiness training in the SH-3 helicopter was compared with groups pre-trained on Device 2F64C with: (1) visual only (N=13); (2) no visual/no motion (N=14); and (3) one visual plus motion group (N=19). Groups were compared for their SH-3 helicopter performance in the transition…

  2. Vestibular Activation Differentially Modulates Human Early Visual Cortex and V5/MT Excitability and Response Entropy

    PubMed Central

    Guzman-Lopez, Jessica; Arshad, Qadeer; Schultz, Simon R; Walsh, Vincent; Yousif, Nada

    2013-01-01

    Head movement imposes the additional burdens on the visual system of maintaining visual acuity and determining the origin of retinal image motion (i.e., self-motion vs. object-motion). Although maintaining visual acuity during self-motion is effected by minimizing retinal slip via the brainstem vestibular-ocular reflex, higher order visuovestibular mechanisms also contribute. Disambiguating self-motion versus object-motion also invokes higher order mechanisms, and a cortical visuovestibular reciprocal antagonism is propounded. Hence, one prediction is of a vestibular modulation of visual cortical excitability and indirect measures have variously suggested none, focal or global effects of activation or suppression in human visual cortex. Using transcranial magnetic stimulation-induced phosphenes to probe cortical excitability, we observed decreased V5/MT excitability versus increased early visual cortex (EVC) excitability, during vestibular activation. In order to exclude nonspecific effects (e.g., arousal) on cortical excitability, response specificity was assessed using information theory, specifically response entropy. Vestibular activation significantly modulated phosphene response entropy for V5/MT but not EVC, implying a specific vestibular effect on V5/MT responses. This is the first demonstration that vestibular activation modulates human visual cortex excitability. Furthermore, using information theory, not previously used in phosphene response analysis, we could distinguish between a specific vestibular modulation of V5/MT excitability from a nonspecific effect at EVC. PMID:22291031

  3. Analysis of motion in speed skating

    NASA Astrophysics Data System (ADS)

    Koga, Yuzo; Nishimura, Tetsu; Watanabe, Naoki; Okamoto, Kousuke; Wada, Yuhei

    1997-03-01

Motion in sports has been studied by many researchers from medical, psychological and mechanical perspectives. Here, we analyze the speed skating motion dynamically with the aim of achieving the best possible record. As official speed skating competitions are held on an oval rink, the skating motion must be studied in three phases: the starting phase, and the straight-course and curved-course skating phases. Visual data of the skating motion are indispensable for kinematic analysis, so we recorded several subjects' skating motions with 8 mm video cameras to obtain three-dimensional data. Because the skating motion is very complicated, as a first step this paper discusses the movement of the skater's center of gravity (abbreviated C.G.). The movement of the C.G. gives information about the reaction force on the skate blade from the ice surface. We discuss the differences among the skating motions of the subjects studied. Our final goal is to suggest the best skating form for achieving the finest record.

  4. Human heart rate variability relation is unchanged during motion sickness

    NASA Technical Reports Server (NTRS)

    Mullen, T. J.; Berger, R. D.; Oman, C. M.; Cohen, R. J.

    1998-01-01

    In a study of 18 human subjects, we applied a new technique, estimation of the transfer function between instantaneous lung volume (ILV) and instantaneous heart rate (HR), to assess autonomic activity during motion sickness. Two control recordings of ILV and electrocardiogram (ECG) were made prior to the development of motion sickness. During the first, subjects were seated motionless, and during the second they were seated rotating sinusoidally about an earth vertical axis. Subjects then wore prism goggles that reverse the left-right visual field and performed manual tasks until they developed moderate motion sickness. Finally, ILV and ECG were recorded while subjects maintained a relatively constant level of sickness by intermittent eye closure during rotation with the goggles. Based on analyses of ILV to HR transfer functions from the three conditions, we were unable to demonstrate a change in autonomic control of heart rate due to rotation alone or due to motion sickness. These findings do not support the notion that moderate motion sickness is manifested as a generalized autonomic response.

  5. Ventral and dorsal streams processing visual motion perception (FDG-PET study)

    PubMed Central

    2012-01-01

    Background Earlier functional imaging studies on visually induced self-motion perception (vection) disclosed a bilateral network of activations within primary and secondary visual cortex areas which was combined with signal decreases, i.e., deactivations, in multisensory vestibular cortex areas. This finding led to the concept of a reciprocal inhibitory interaction between the visual and vestibular systems. In order to define areas involved in special aspects of self-motion perception such as intensity and duration of the perceived circular vection (CV) or the amount of head tilt, correlation analyses of the regional cerebral glucose metabolism, rCGM (measured by fluorodeoxyglucose positron-emission tomography, FDG-PET) and these perceptual covariates were performed in 14 healthy volunteers. For analyses of the visual-vestibular interaction, the CV data were compared to a random dot motion stimulation condition (not inducing vection) and a control group at rest (no stimulation at all). Results Group subtraction analyses showed that the visual-vestibular interaction was modified during CV, i.e., the activations within the cerebellar vermis and parieto-occipital areas were enhanced. The correlation analysis between the rCGM and the intensity of visually induced vection, experienced as body tilt, showed a relationship for areas of the multisensory vestibular cortical network (inferior parietal lobule bilaterally, anterior cingulate gyrus), the medial parieto-occipital cortex, the frontal eye fields and the cerebellar vermis. The “earlier” multisensory vestibular areas like the parieto-insular vestibular cortex and the superior temporal gyrus did not appear in the latter analysis. The duration of perceived vection after stimulus stop was positively correlated with rCGM in medial temporal lobe areas bilaterally, which included the (para-)hippocampus, known to be involved in various aspects of memory processing. 
The amount of head tilt was found to be positively correlated with the rCGM of bilateral basal ganglia regions responsible for the control of motor function of the head. Conclusions Our data gave further insights into subfunctions within the complex cortical network involved in the processing of visual-vestibular interaction during CV. Specific areas of this cortical network could be attributed to the ventral stream (“what” pathway) responsible for the duration after stimulus stop and to the dorsal stream (“where/how” pathway) responsible for intensity aspects. PMID:22800430

  6. Global motion perception is related to motor function in 4.5-year-old children born at risk of abnormal development

    PubMed Central

    Chakraborty, Arijit; Anstice, Nicola S.; Jacobs, Robert J.; Paudel, Nabin; LaGasse, Linda L.; Lester, Barry M.; McKinlay, Christopher J. D.; Harding, Jane E.; Wouldes, Trecia A.; Thompson, Benjamin

    2017-01-01

    Global motion perception is often used as an index of dorsal visual stream function in neurodevelopmental studies. However, the relationship between global motion perception and visuomotor control, a primary function of the dorsal stream, is unclear. We measured global motion perception (motion coherence threshold; MCT) and performance on standardized measures of motor function in 606 4.5-year-old children born at risk of abnormal neurodevelopment. Visual acuity, stereoacuity and verbal IQ were also assessed. After adjustment for verbal IQ or both visual acuity and stereoacuity, MCT was modestly, but significantly, associated with all components of motor function with the exception of gross motor scores. In a separate analysis, stereoacuity, but not visual acuity, was significantly associated with both gross and fine motor scores. These results indicate that the development of motion perception and stereoacuity are associated with motor function in pre-school children. PMID:28435122

  7. Adaptive Registration of Varying Contrast-Weighted Images for Improved Tissue Characterization (ARCTIC): Application to T1 Mapping

    PubMed Central

    Roujol, Sébastien; Foppa, Murilo; Weingartner, Sebastian; Manning, Warren J.; Nezafat, Reza

    2014-01-01

Purpose To propose and evaluate a novel non-rigid image registration approach for improved myocardial T1 mapping. Methods Myocardial motion is estimated as global affine motion refined by a novel local non-rigid motion estimation algorithm. A variational framework is proposed, which simultaneously estimates motion field and intensity variations, and uses an additional regularization term to constrain the deformation field using automatic feature tracking. The method was evaluated in 29 patients by measuring the DICE similarity coefficient (DSC) and the myocardial boundary error (MBE) in short axis and four chamber data. Each image series was visually assessed as “no motion” or “with motion”. Overall T1 map quality and motion artifacts were assessed in the 85 T1 maps acquired in short axis view using a 4-point scale (1 = non-diagnostic/severe motion artifact, 4 = excellent/no motion artifact). Results Increased DSC (0.78±0.14 to 0.87±0.03, p<0.001), reduced MBE (1.29±0.72mm to 0.84±0.20mm, p<0.001), improved overall T1 map quality (2.86±1.04 to 3.49±0.77, p<0.001), and reduced T1 map motion artifacts (2.51±0.84 to 3.61±0.64, p<0.001) were obtained after motion correction of “with motion” data (~56% of data). Conclusion The proposed non-rigid registration approach reduces the respiratory-induced motion that occurs during breath-hold T1 mapping, and significantly improves T1 map quality. PMID:24798588
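The DICE similarity coefficient used to evaluate the registration above measures overlap between two segmentations: DSC = 2|A∩B| / (|A| + |B|). A minimal sketch with binary masks represented as sets of pixel coordinates (illustrative only, not the paper's implementation):

```python
def dice_coefficient(mask_a, mask_b):
    """DICE similarity coefficient between two binary masks.

    Masks are given as sets of foreground pixel coordinates.
    Returns 1.0 for identical masks and 0.0 for disjoint ones;
    two empty masks are defined here as perfectly overlapping.
    """
    if not mask_a and not mask_b:
        return 1.0
    return 2.0 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

# Two toy 4-pixel 'myocardium' masks that share 3 pixels:
a = {(0, 0), (0, 1), (1, 0), (1, 1)}
b = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(dice_coefficient(a, b))  # -> 0.75
```

A DSC improvement from 0.78 to 0.87, as reported, therefore corresponds to substantially better agreement between the motion-corrected myocardial contours and the reference.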

  8. Experience affects the use of ego-motion signals during 3D shape perception.

    PubMed

    Jain, Anshul; Backus, Benjamin T

    2010-12-29

    Experience has long-term effects on perceptual appearance (Q. Haijiang, J. A. Saunders, R. W. Stone, & B. T. Backus, 2006). We asked whether experience affects the appearance of structure-from-motion stimuli when the optic flow is caused by observer ego-motion. Optic flow is an ambiguous depth cue: a rotating object and its oppositely rotating, depth-inverted dual generate similar flow. However, the visual system exploits ego-motion signals to prefer the percept of an object that is stationary over one that rotates (M. Wexler, F. Panerai, I. Lamouret, & J. Droulez, 2001). We replicated this finding and asked whether this preference for stationarity, the "stationarity prior," is modulated by experience. During training, two groups of observers were exposed to objects with identical flow, but that were either stationary or moving as determined by other cues. The training caused identical test stimuli to be seen preferentially as stationary or moving by the two groups, respectively. We then asked whether different priors can exist independently at different locations in the visual field. Observers were trained to see objects either as stationary or as moving at two different locations. Observers' stationarity bias at the two respective locations was modulated in the directions consistent with training. Thus, the utilization of extraretinal ego-motion signals for disambiguating optic flow signals can be updated as the result of experience, consistent with the updating of a Bayesian prior for stationarity.

  9. Low field domain wall dynamics in artificial spin-ice basis structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwon, J.; School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798; Goolaup, S.

    2015-10-28

Artificial magnetic spin-ice nanostructures provide an ideal platform for the observation of magnetic monopoles. The formation of a magnetic monopole is governed by the motion of a magnetic charge carrier via the propagation of domain walls (DWs) in a lattice. To date, most experiments have been on the static visualization of DW propagation in the lattice. In this paper, we report on the low field dynamics of DW in a unit spin-ice structure measured by magnetoresistance changes. Our results show that reversible DW propagation can be initiated within the spin-ice basis. The initial magnetization configuration of the unit structure strongly influences the direction of DW motion in the branches. Single or multiple domain wall nucleation can be induced in the respective branches of the unit spin ice by the direction of the applied field.

  10. Dynamic registration of an optical see-through HMD into a wide field-of-view rotorcraft flight simulation environment

    NASA Astrophysics Data System (ADS)

    Viertler, Franz; Hajek, Manfred

    2015-05-01

To overcome the challenge of helicopter flight in degraded visual environments, current research considers head-mounted displays with 3D-conformal (scene-linked) visual cues as the most promising display technology. For pilot-in-the-loop simulations with HMDs, a highly accurate registration of the augmented visual system is required. In rotorcraft flight simulators the outside visual cues are usually provided by a dome projection system, since a wide field-of-view (e.g. horizontally > 200° and vertically > 80°) is required, which can hardly be achieved with collimated viewing systems. But optical see-through HMDs mostly do not have a focus equivalent to the distance from the pilot's eye-point position to the curved screen, which also depends on head motion. Hence, a dynamic vergence correction has been implemented to avoid binocular disparity. In addition, the parallax error induced by even small translational head motions is corrected with a head-tracking system so that the imagery is adjusted onto the projected screen. For this purpose, two options are presented. The correction can be achieved by rendering the view with yaw and pitch offset angles dependent on the deviation of the head position from the design eye-point of the spherical projection system. Alternatively, it can be solved by implementing a dynamic eye-point in the multi-channel projection system for the outside visual cues. Both options have been investigated for the integration of a binocular HMD into the Rotorcraft Simulation Environment (ROSIE) at the Technische Universitaet Muenchen. Pros and cons of both possibilities with regard to integration issues and usability in flight simulations are discussed.

  11. Vapour Pressure and Adiabatic Cooling from Champagne: Slow-Motion Visualization of Gas Thermodynamics

    ERIC Educational Resources Information Center

    Vollmer, Michael; Mollmann, Klaus-Peter

    2012-01-01

    The recent introduction of inexpensive high-speed cameras offers a new experimental approach to many simple but fast-occurring events in physics. In this paper, the authors present two simple demonstration experiments recorded with high-speed cameras in the fields of gas dynamics and thermal physics. The experiments feature vapour pressure effects…

  12. The Efficacy of Dextroamphetamine as a Motion Sickness Countermeasure for the Use in Military Operational Environments

    DTIC Science & Technology

    2008-07-09

movement and to ensure head-centered movement during rotation. The subject’s gaze was directed to a black visual field inside the device to provide a...vertical nystagmus. (NAMI-1079 NASA Order No. R-93). Pensacola, FL: Naval Aerospace Medical Institute. Homick, J. L., Kohl, R. L., Reschke, M. F

  13. Audio–visual interactions for motion perception in depth modulate activity in visual area V3A

    PubMed Central

    Ogawa, Akitoshi; Macaluso, Emiliano

    2013-01-01

    Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414

  14. fMRI response during visual motion stimulation in patients with late whiplash syndrome.

    PubMed

    Freitag, P; Greenlee, M W; Wachter, K; Ettlin, T M; Radue, E W

    2001-01-01

After whiplash trauma, up to one fourth of patients develop chronic symptoms including head and neck pain and cognitive disturbances. Resting perfusion single-photon-emission computed tomography (SPECT) found decreased temporoparietooccipital tracer uptake among these long-term symptomatic patients with late whiplash syndrome. As MT/MST (V5/V5a) are located in that area, this study addressed the question of whether these patients show impairments in visual motion perception. We examined five symptomatic patients with late whiplash syndrome, five asymptomatic patients after whiplash trauma, and a control group of seven volunteers without a history of trauma. Tests for visual motion perception and functional magnetic resonance imaging (fMRI) measurements during visual motion stimulation were performed. Symptomatic patients showed a significant reduction in their ability to perceive coherent visual motion compared with controls, whereas the asymptomatic patients did not show this effect. fMRI activation was similar during random dot motion in all three groups, but was significantly decreased during coherent dot motion in the symptomatic patients compared with the other two groups. Reduced psychophysical motion performance and reduced fMRI responses in symptomatic patients with late whiplash syndrome both point to a functional impairment in cortical areas sensitive to coherent motion. Larger studies are needed to confirm these clinical and functional imaging results to provide a possible additional diagnostic criterion for the evaluation of patients with late whiplash syndrome.

  15. Seeing Circles and Drawing Ellipses: When Sound Biases Reproduction of Visual Motion

    PubMed Central

    Aramaki, Mitsuko; Bringoux, Lionel; Ystad, Sølvi; Kronland-Martinet, Richard

    2016-01-01

    The perception and production of biological movements is characterized by the 1/3 power law, a relation linking the curvature and the velocity of an intended action. In particular, motions are perceived and reproduced as distorted when their kinematics deviate from this biological law. Whereas most studies of this perceptual-motor relation have focused on the visual or kinaesthetic modality in a unimodal context, in this paper we show that auditory dynamics strikingly bias visuomotor processes. Biologically consistent or inconsistent circular visual motions were used in combination with circular or elliptical auditory motions. The auditory motions were synthesized friction sounds mimicking those produced by the friction of a pen on paper when someone is drawing. Sounds were presented diotically, and the auditory motion velocity was evoked through timbre variations of the friction sound without any spatial cues. Remarkably, when subjects were asked to reproduce a circular visual motion while listening to sounds that evoked elliptical kinematics, without seeing their hand, they drew elliptical shapes. Moreover, distortions induced by inconsistent elliptical kinematics in the visual and auditory modalities added up linearly. These results bring to light the substantial role of auditory dynamics in visuomotor coupling in a multisensory context. PMID:27119411
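    As a rough illustration of the law this abstract builds on (not the authors' code), the one-third power law predicts tangential speed v = k * C**(-1/3) from curvature C, so speed is constant along a circle but modulates along an ellipse. A minimal Python sketch; the gain k and the sampling density are arbitrary choices:

```python
import numpy as np

def power_law_speed(curvature, k=1.0):
    """Tangential speed predicted by the one-third power law: v = k * C**(-1/3)."""
    return k * curvature ** (-1.0 / 3.0)

def ellipse_curvature(t, a, b):
    """Curvature of the ellipse x = a*cos(t), y = b*sin(t)."""
    return a * b / (a**2 * np.sin(t)**2 + b**2 * np.cos(t)**2) ** 1.5

t = np.linspace(0, 2 * np.pi, 360, endpoint=False)
v_circle = power_law_speed(ellipse_curvature(t, 1.0, 1.0))   # constant curvature
v_ellipse = power_law_speed(ellipse_curvature(t, 2.0, 1.0))  # varying curvature

# On a circle the law predicts uniform speed; on an ellipse the speed
# modulates with curvature (slower in the tighter bends).
print("circle speed range:", np.ptp(v_circle))
print("ellipse speed range:", np.ptp(v_ellipse))
```

    This is why an elliptical velocity profile, heard or seen, implies an elliptical shape to the motor system even when the drawn path is nominally circular.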

  16. Dynamic visual attention: motion direction versus motion magnitude

    NASA Astrophysics Data System (ADS)

    Bur, A.; Wurtz, P.; Müri, R. M.; Hügli, H.

    2008-02-01

    Defined as an attentive process in the context of visual sequences, dynamic visual attention refers to the selection of the most informative parts of a video sequence. This paper investigates the contribution of motion to dynamic visual attention, and specifically compares computer models in which the motion component is expressed either as the speed magnitude or as the speed vector. Several computer models, including static features (color, intensity, and orientation) and motion features (magnitude and vector), are considered. Qualitative and quantitative evaluations are performed by comparing the computer model output with human saliency maps obtained experimentally from eye movement recordings. Model suitability is evaluated in various situations (synthetic and real sequences, acquired with fixed and moving camera perspective), showing the advantages and drawbacks of each method as well as its preferred domain of application.
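    A minimal sketch (not the authors' model) of the distinction the paper compares: a magnitude feature discards direction, while direction-tuned channels preserve it, so a boundary between two regions moving at equal speed in opposite directions is invisible to the former. The array size, the four-channel decomposition, and the crude global center-surround operator are illustrative assumptions:

```python
import numpy as np

def magnitude_map(vx, vy):
    """Motion feature as speed magnitude: direction is discarded."""
    return np.hypot(vx, vy)

def direction_channels(vx, vy, n_dirs=4):
    """Motion feature as speed vector: half-wave-rectified projections
    onto n_dirs preferred directions (direction is preserved)."""
    angles = np.arange(n_dirs) * 2 * np.pi / n_dirs
    return [np.maximum(vx * np.cos(a) + vy * np.sin(a), 0) for a in angles]

def conspicuity(feat):
    """Crude conspicuity: deviation of each location from the global mean."""
    return np.abs(feat - feat.mean())

# Two regions moving at the same speed in opposite directions:
vx = np.ones((8, 8)); vx[:, 4:] = -1.0
vy = np.zeros((8, 8))

mag_sal = conspicuity(magnitude_map(vx, vy))
vec_sal = sum(conspicuity(ch) for ch in direction_channels(vx, vy))

print("magnitude-based saliency peak:", mag_sal.max())  # no contrast detected
print("vector-based saliency peak:", vec_sal.max())     # direction boundary detected
```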

  17. Visualization of Kepler's Laws of Planetary Motion

    ERIC Educational Resources Information Center

    Lu, Meishu; Su, Jun; Wang, Weiguo; Lu, Jianlong

    2017-01-01

    For this article, we use a 3D printer to print a surface that mimics universal gravitation for demonstrating and investigating Kepler's laws of planetary motion through the motion of a small ball on the surface. This novel experimental method allows Kepler's laws of planetary motion to be visualized and will contribute to improving the…
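    Kepler's third law, one of the laws such a demonstration visualizes, can also be checked numerically: T²/a³ is the same constant for every orbit (equal to 1 when a is in astronomical units and T in years). A small sketch using well-known orbital values:

```python
# Semi-major axis a (AU) and orbital period T (years) for four planets.
planets = {
    "Mercury": (0.387, 0.241),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
}

# Kepler's third law: T**2 / a**3 is the same constant for every orbit.
ratios = {name: T**2 / a**3 for name, (a, T) in planets.items()}
for name, r in ratios.items():
    print(f"{name}: T^2/a^3 = {r:.3f}")
```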

  18. Visual fatigue modeling for stereoscopic video shot based on camera motion

    NASA Astrophysics Data System (ADS)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3D display technology. The causes of visual discomfort from stereoscopic video include conflicts between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale, and comfortable zone are analyzed. According to the human visual system (HVS), viewers only need to converge their eyes on specific objects when cameras and background are static. Relative motion should be considered for different camera conditions, which determine different factor coefficients and weights. Compared with the traditional visual fatigue prediction model, a novel visual fatigue prediction model is presented. The visual fatigue degree is predicted using a multiple linear regression method combined with subjective evaluation. Consequently, each factor can reflect the characteristics of the scene, and the total visual fatigue score can be computed according to the proposed algorithm. Compared with conventional algorithms that ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.
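    A hedged sketch of the kind of multiple-linear-regression fit the abstract describes; the paper's actual factor definitions, coefficients, and rating data are not reproduced here, so the factor columns, true weights, and "subjective" scores below are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-shot factor scores (columns): spatial structure,
# motion scale, comfort-zone violation. Values are synthetic.
X = rng.uniform(0, 1, size=(40, 3))

# Synthetic "subjective" fatigue ratings generated from assumed true
# weights plus noise, standing in for viewer evaluations.
true_w = np.array([0.5, 1.2, 2.0])
y = X @ true_w + 0.3 + rng.normal(0, 0.05, size=40)

# Fit the multiple-linear-regression fatigue model y ≈ X·w + b.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
w_hat, b_hat = coef[:3], coef[3]

print("estimated factor weights:", np.round(w_hat, 2))
print("estimated intercept:", round(float(b_hat), 2))
```

    Fitting weights against subjective ratings is what lets each factor's contribution to the total fatigue score reflect the scene and camera condition.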

  19. fMRI evidence for sensorimotor transformations in human cortex during smooth pursuit eye movements.

    PubMed

    Kimmig, H; Ohlendorf, S; Speck, O; Sprenger, A; Rutschmann, R M; Haller, S; Greenlee, M W

    2008-01-01

    Smooth pursuit eye movements (SP) are driven by moving objects. The pursuit system processes the visual input signals and transforms this information into an oculomotor output signal. Despite the object's movement on the retina and the eyes' movement in the head, we are able to locate the object in space implying coordinate transformations from retinal to head and space coordinates. To test for the visual and oculomotor components of SP and the possible transformation sites, we investigated three experimental conditions: (I) fixation of a stationary target with a second target moving across the retina (visual), (II) pursuit of the moving target with the second target moving in phase (oculomotor), (III) pursuit of the moving target with the second target remaining stationary (visuo-oculomotor). Precise eye movement data were simultaneously measured with the fMRI data. Visual components of activation during SP were located in the motion-sensitive, temporo-parieto-occipital region MT+ and the right posterior parietal cortex (PPC). Motor components comprised more widespread activation in these regions and additional activations in the frontal and supplementary eye fields (FEF, SEF), the cingulate gyrus and precuneus. The combined visuo-oculomotor stimulus revealed additional activation in the putamen. Possible transformation sites were found in MT+ and PPC. The MT+ activation evoked by the motion of a single visual dot was very localized, while the activation of the same single dot motion driving the eye was rather extended across MT+. The eye movement information appeared to be dispersed across the visual map of MT+. This could be interpreted as a transfer of the one-dimensional eye movement information into the two-dimensional visual map. Potentially, the dispersed information could be used to remap MT+ to space coordinates rather than retinal coordinates and to provide the basis for a motor output control. A similar interpretation holds for our results in the PPC region.

  20. Impact of stride-coupled gaze shifts of walking blowflies on the neuronal representation of visual targets

    PubMed Central

    Kress, Daniel; Egelhaaf, Martin

    2014-01-01

    During locomotion animals rely heavily on visual cues gained from the environment to guide their behavior. Examples are basic behaviors like collision avoidance or the approach to a goal. The saccadic gaze strategy of flying flies, which separates translational from rotational phases of locomotion, has been suggested to facilitate the extraction of environmental information, because only image flow evoked by translational self-motion contains relevant distance information about the surrounding world. In contrast to the translational phases of flight during which gaze direction is kept largely constant, walking flies experience continuous rotational image flow that is coupled to their stride-cycle. The consequences of these self-produced image shifts for the extraction of environmental information are still unclear. To assess the impact of stride-coupled image shifts on visual information processing, we performed electrophysiological recordings from the HSE cell, a motion sensitive wide-field neuron in the blowfly visual system. This cell is thought to play a key role in mediating optomotor behavior, self-motion estimation and spatial information processing. We used visual stimuli that were based on the visual input experienced by walking blowflies while approaching a black vertical bar. The response of HSE to these stimuli was dominated by periodic membrane potential fluctuations evoked by stride-coupled image shifts. Nevertheless, during the approach the cell’s response contained information about the bar and its background. The response components evoked by the bar were larger than the responses to its background, especially during the last phase of the approach. However, as revealed by targeted modifications of the visual input during walking, the extraction of distance information on the basis of HSE responses is much impaired by stride-coupled retinal image shifts. Possible mechanisms that may cope with these stride-coupled responses are discussed. PMID:25309362

  1. From Computational Photobiology to the Design of Vibrationally Coherent Molecular Devices and Motors

    NASA Astrophysics Data System (ADS)

    Olivucci, Massimo

    2014-03-01

    In the past, multi-configurational quantum chemical computations coupled with molecular mechanics force fields have been employed to investigate spectroscopic, thermal, and photochemical properties of visual pigments. Here we show how the same computational technology can nowadays be used to design, characterize, and ultimately prepare light-driven molecular switches that mimic the photophysics of the visual pigment bovine rhodopsin (Rh). When embedded in the protein cavity, the chromophore of Rh undergoes an ultrafast and coherent photoisomerization. In order to design a synthetic chromophore displaying similar properties in common solvents, we recently focused on indanylidene-pyrroline (NAIP) systems. We found that these systems display light-induced ground-state coherent vibrational motion similar to the one detected in Rh. Semi-classical trajectories provide a mechanistic description of the structural changes associated with the observed coherent motion, which is shown to be ultimately due to periodic changes in the π-conjugation.

  2. The perception of surface layout during low level flight

    NASA Technical Reports Server (NTRS)

    Perrone, John A.

    1991-01-01

    Although it is fairly well established that information about surface layout can be gained from motion cues, it is less clear what information humans can actually use and what specific information they should be provided. Theoretical analyses tell us that the information is in the stimulus; further experiments are needed to verify that humans can use this information to extract surface layout from the 2D velocity flow field. The visual motion factors that can affect the pilot's ability to control an aircraft and to infer the layout of the terrain ahead are discussed.

  3. Tilt and Translation Motion Perception during Pitch Tilt with Visual Surround Translation

    NASA Technical Reports Server (NTRS)

    O'Sullivan, Brita M.; Harm, Deborah L.; Reschke, Millard F.; Wood, Scott J.

    2006-01-01

    The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive an accurate representation of spatial orientation. Previous studies suggest that multisensory integration is critical for discriminating linear accelerations arising from tilt and translation head motion. Visual input is especially important at low frequencies where canal input is declining. The NASA Tilt Translation Device (TTD) was designed to recreate postflight orientation disturbances by exposing subjects to matching tilt self-motion with conflicting visual surround translation. Previous studies have demonstrated that brief exposures to pitch tilt with fore-aft visual surround translation produced changes in compensatory vertical eye movement responses, postural equilibrium, and motion sickness symptoms. Adaptation appeared greatest with visual scene motion leading (versus lagging) the tilt motion, and the adaptation time constant appeared to be approximately 30 min. The purpose of this study was to compare motion perception when the visual surround translation was in-phase versus out-of-phase with pitch tilt. The in-phase stimulus presented the visual surround motion one would experience if the linear acceleration were due to fore-aft self-translation within a stationary surround, while the out-of-phase stimulus had the visual scene motion leading the tilt by 90 deg as previously used. The tilt stimuli in these conditions were asymmetrical, ranging from an upright orientation to 10 deg pitch back. Another objective of the study was to compare motion perception with the in-phase stimulus when the tilts were asymmetrical relative to upright (0 to 10 deg back) versus symmetrical (10 deg forward to 10 deg back). Twelve subjects (6M, 6F, 22-55 yrs) were tested during 3 sessions separated by at least one week. During each of the three sessions (out-of-phase asymmetrical, in-phase asymmetrical, in-phase symmetrical), subjects were exposed to visual surround translation synchronized with pitch tilt at 0.1 Hz for a total of 30 min. Tilt and translation motion perception was obtained from verbal reports and a joystick mounted on a linear stage. Horizontal vergence and vertical eye movements were obtained with a binocular video system. Responses were also obtained during darkness before and following 15 min and 30 min of visual surround translation. Each of the three stimulus conditions involving visual surround translation elicited a significantly reduced sense of perceived tilt and strong linear vection (perceived translation) compared to pre-exposure tilt stimuli in darkness. This increase in perceived translation with reduction in tilt perception was also present in darkness following 15 and 30 min exposures, provided the tilt stimuli were not interrupted. Although not significant, there was a trend for the in-phase asymmetrical stimulus to elicit a stronger sense of both translation and tilt than the out-of-phase asymmetrical stimulus. Surprisingly, the in-phase asymmetrical stimulus also tended to elicit a stronger sense of peak-to-peak translation than the in-phase symmetrical stimulus, even though the range of linear acceleration during the symmetrical stimulus was twice that of the asymmetrical stimulus. These results are consistent with the hypothesis that the central nervous system resolves the ambiguity of inertial motion sensory cues by integrating inputs from visual, vestibular, and somatosensory systems.

  4. Functional architecture of an optic flow-responsive area that drives horizontal eye movements in zebrafish.

    PubMed

    Kubo, Fumi; Hablitzel, Bastian; Dal Maschio, Marco; Driever, Wolfgang; Baier, Herwig; Arrenberg, Aristides B

    2014-03-19

    Animals respond to whole-field visual motion with compensatory eye and body movements in order to stabilize both their gaze and position with respect to their surroundings. In zebrafish, rotational stimuli need to be distinguished from translational stimuli to drive the optokinetic and the optomotor responses, respectively. Here, we systematically characterize the neural circuits responsible for these operations using a combination of optogenetic manipulation and in vivo calcium imaging during optic flow stimulation. By recording the activity of thousands of neurons within the area pretectalis (APT), we find four bilateral pairs of clusters that process horizontal whole-field motion and functionally classify eleven prominent neuron types with highly selective response profiles. APT neurons are prevalently direction selective, either monocularly or binocularly driven, and hierarchically organized to distinguish between rotational and translational optic flow. Our data predict a wiring diagram of a neural circuit tailored to drive behavior that compensates for self-motion. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Neural network architecture for form and motion perception (Abstract Only)

    NASA Astrophysics Data System (ADS)

    Grossberg, Stephen

    1991-08-01

    Evidence is given for a new neural network theory of biological motion perception, a motion boundary contour system. This theory clarifies why parallel streams V1 → V2 and V1 → MT exist for static form and motion form processing among the areas V1, V2, and MT of visual cortex. The motion boundary contour system consists of several parallel copies, such that each copy is activated by a different range of receptive field sizes. Each copy is further subdivided into two hierarchically organized subsystems: a motion oriented contrast (MOC) filter, for preprocessing moving images; and a cooperative-competitive feedback (CC) loop, for generating emergent boundary segmentations of the filtered signals. The present work uses the MOC filter to explain a variety of classical and recent data about short-range and long-range apparent motion percepts that have not yet been explained by alternative models. These data include split motion; reverse-contrast gamma motion; delta motion; visual inertia; group motion in response to a reverse-contrast Ternus display at short interstimulus intervals; speed-up of motion velocity as interflash distance increases or flash duration decreases; dependence of the transition from element motion to group motion on stimulus duration and size; various classical dependencies between flash duration, spatial separation, interstimulus interval, and motion threshold known as Korte's laws; and dependence of motion strength on stimulus orientation and spatial frequency. These results supplement earlier explanations by the model of apparent motion data that other models have not explained; a recently proposed solution of the global aperture problem, including explanations of motion capture and induced motion; an explanation of how parallel cortical systems for static form perception and motion form perception may develop, including a demonstration that these parallel systems are variations on a common cortical design; an explanation of why the geometries of static form and motion form differ, in particular why opposite orientations differ by 90 degrees whereas opposite directions differ by 180 degrees, and why a cortical stream V1 → V2 → MT is needed; and a summary of how the main properties of other motion perception models can be assimilated into different parts of the motion boundary contour system design.

  6. Audio-visual biofeedback for respiratory-gated radiotherapy: Impact of audio instruction and audio-visual biofeedback on respiratory-gated radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George, Rohini; Chung, Theodore D.

    2006-07-01

    Purpose: Respiratory gating is a commercially available technology for reducing the deleterious effects of motion during imaging and treatment. The efficacy of gating is dependent on the reproducibility within and between respiratory cycles during imaging and treatment. The aim of this study was to determine whether audio-visual biofeedback can improve respiratory reproducibility by decreasing residual motion and therefore increasing the accuracy of gated radiotherapy. Methods and Materials: A total of 331 respiratory traces were collected from 24 lung cancer patients. The protocol consisted of five breathing training sessions spaced about a week apart. Within each session the patients initially breathed without any instruction (free breathing), then with audio instructions, and then with audio-visual biofeedback. Residual motion was quantified by the standard deviation of the respiratory signal within the gating window. Results: Audio-visual biofeedback significantly reduced residual motion compared with free breathing and audio instruction. Displacement-based gating has lower residual motion than phase-based gating. Little reduction in residual motion was found for duty cycles less than 30%; for duty cycles above 50% there was a sharp increase in residual motion. Conclusions: The efficiency and reproducibility of gating can be improved by incorporating audio-visual biofeedback, using a 30-50% duty cycle, gating during exhalation, and using displacement-based gating.
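    The residual-motion metric and the two gating strategies can be sketched on a toy trace (not the study's data; the breathing pattern, amplitude window, and phase window below are illustrative assumptions). A single unusually deep breath inflates residual motion under phase gating, whereas displacement gating bounds the admitted amplitude directly:

```python
import numpy as np

t = np.arange(0, 32, 0.01)                        # eight 4-s breathing cycles
amp = np.where((t >= 16) & (t < 20), 2.0, 1.0)    # one unusually deep breath
resp = amp * np.cos(2 * np.pi * t / 4)            # respiratory displacement trace

def residual_motion(signal, in_gate):
    """Residual motion = std of the respiratory signal inside the gate."""
    return signal[in_gate].std()

# Displacement-based gating: beam on only within an amplitude window
# near end-exhale, regardless of breathing phase.
disp_gate = (resp >= -1.05) & (resp <= -0.6)

# Phase-based gating: beam on during a fixed fraction of each cycle
# centered on exhale, regardless of the actual displacement.
phase = (t % 4) / 4
phase_gate = np.abs(phase - 0.5) <= 0.1

print("displacement-gated residual:", residual_motion(resp, disp_gate))
print("phase-gated residual:", residual_motion(resp, phase_gate))
# The deep breath inflates residual motion under phase gating only.
```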

  7. From elements to perception: local and global processing in visual neurons.

    PubMed

    Spillmann, L

    1999-01-01

    Gestalt psychologists in the early part of the century challenged psychophysical notions that perceptual phenomena can be understood from a punctate (atomistic) analysis of the elements present in the stimulus. Their ideas slowed later attempts to explain vision in terms of single-cell recordings from individual neurons. A rapprochement between Gestalt phenomenology and neurophysiology seemed unlikely when the first ECVP was held in Marburg, Germany, in 1978. Since that time, response properties of neurons have been discovered that invite an interpretation of visual phenomena (including illusions) in terms of neuronal processing by long-range interactions, as first proposed by Mach and Hering in the last century. This article traces a personal journey into the early days of neurophysiological vision research to illustrate the progress that has taken place from the first attempts to correlate single-cell responses with visual perceptions. Whereas initially the receptive-field properties of individual classes of cells (e.g., contrast, wavelength, orientation, motion, disparity, and spatial-frequency detectors) were used to account for relatively simple visual phenomena, nowadays complex perceptions are interpreted in terms of long-range interactions involving many neurons. This change in paradigm from local to global processing was made possible by recent findings, in the cortex, on horizontal interactions and backward propagation (feedback loops) in addition to classical feedforward processing. These mechanisms are exemplified by studies of the tilt effect and tilt aftereffect, direction-specific motion adaptation, illusory contours, filling-in and fading, figure-ground segregation by orientation and motion contrast, and pop-out in dynamic visual-noise patterns. Major questions for future research and a discussion of their epistemological implications conclude the article.

  8. Vection and visually induced motion sickness: how are they related?

    PubMed Central

    Keshavarz, Behrang; Riecke, Bernhard E.; Hettinger, Lawrence J.; Campos, Jennifer L.

    2015-01-01

    The occurrence of visually induced motion sickness has been frequently linked to the sensation of illusory self-motion (vection), however, the precise nature of this relationship is still not fully understood. To date, it is still a matter of debate as to whether vection is a necessary prerequisite for visually induced motion sickness (VIMS). That is, can there be VIMS without any sensation of self-motion? In this paper, we will describe the possible nature of this relationship, review the literature that addresses this relationship (including theoretical accounts of vection and VIMS), and offer suggestions with respect to operationally defining and reporting these phenomena in future. PMID:25941509

  9. Comparative Study on Interaction of Form and Motion Processing Streams by Applying Two Different Classifiers in Mechanism for Recognition of Biological Movement

    PubMed Central

    2014-01-01

    Research in psychophysics, neurophysiology, and functional imaging points to a specialized representation of biological movement involving two pathways. The visual perception of biological movement is formed through two visual processing streams, the dorsal and the ventral. The ventral processing stream is associated with the extraction of form information; the dorsal processing stream, on the other hand, provides motion information. The active basic model (ABM), a hierarchical representation of the human figure, introduced a novelty in the form pathway by applying a Gabor-based supervised object recognition method, creating greater biological plausibility along with similarity to the original model. A fuzzy inference system is used for motion-pattern information in the motion pathway, making the recognition process more robust. The interaction of these pathways is intriguing and has been considered in many studies across fields. Here, the interaction of the pathways is investigated to obtain more appropriate results. An extreme learning machine (ELM) is employed as the classification unit of this model because it retains the main properties of artificial neural networks while substantially reducing training time. Two different configurations, interaction using a synergetic neural network and interaction using ELM, are compared in terms of accuracy and compatibility. PMID:25276860

  10. Respiratory motion estimation in x-ray angiography for improved guidance during coronary interventions

    NASA Astrophysics Data System (ADS)

    Baka, N.; Lelieveldt, B. P. F.; Schultz, C.; Niessen, W.; van Walsum, T.

    2015-05-01

    During percutaneous coronary interventions (PCI), catheters and arteries are visualized by x-ray angiography (XA) sequences, using brief contrast injections to show the coronary arteries. If we could continue visualizing the coronary arteries after the contrast agent has passed (thus in non-contrast XA frames), we could potentially lower contrast use, which is advantageous given the toxicity of the contrast agent. This paper explores the possibility of such visualization in mono-plane XA acquisitions, with a special focus on respiratory-based coronary artery motion estimation. We use the patient-specific coronary artery centerlines from pre-interventional 3D CTA images to project on the XA sequence for artery visualization. To achieve this, a framework for registering the 3D centerlines with the mono-plane 2D + time XA sequences is presented. During the registration the patient-specific cardiac and respiratory motion is learned. We investigate several respiratory motion estimation strategies with respect to accuracy, plausibility, and ease of use for motion prediction in XA frames with and without contrast. The investigated strategies include diaphragm-motion-based prediction and respiratory motion extraction from the guiding-catheter tip motion. We furthermore compare translational and rigid models of respiratory-based heart motion. We validated the accuracy of the 2D/3D registration and of the respiratory and cardiac motion estimations on XA sequences of 12 interventions. The diaphragm-based motion model and the catheter-tip-derived motion achieved 1.58 mm and 1.83 mm median 2D accuracy, respectively. On a subset of four interventions we evaluated the artery visualization accuracy for non-contrast cases. Both diaphragm- and catheter-tip-based prediction performed similarly, with about half of the cases providing satisfactory accuracy (median error < 2 mm).
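    A minimal sketch of the simplest variant described, a catheter-tip-derived translational respiratory model; the coordinates are hypothetical and the paper's full 2D/3D registration and learned motion models are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)

# Projected 3D-CTA centerline in the reference frame (hypothetical points).
centerline_ref = rng.uniform(0, 100, size=(50, 2))

# Tracked guiding-catheter tip per frame; its displacement from the
# reference frame serves as a surrogate for respiratory heart motion.
tip_ref = np.array([40.0, 60.0])
tip_frames = tip_ref + np.array([[0.0, 0.0], [0.5, 2.0], [1.0, 4.1], [0.4, 1.8]])

def predict_centerline(centerline_ref, tip_ref, tip_now):
    """Translational respiratory model: shift the whole centerline by
    the catheter-tip displacement observed in the current frame."""
    return centerline_ref + (tip_now - tip_ref)

for i, tip_now in enumerate(tip_frames):
    shift = np.linalg.norm(tip_now - tip_ref)
    print(f"frame {i}: applied respiratory shift {shift:.2f} px")
```

    A rigid variant would additionally estimate a rotation; the trade-off the paper evaluates is accuracy versus the ease of extracting the surrogate signal.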

  11. Contrast and assimilation in motion perception and smooth pursuit eye movements.

    PubMed

    Spering, Miriam; Gegenfurtner, Karl R

    2007-09-01

    The analysis of visual motion serves many different functions ranging from object motion perception to the control of self-motion. The perception of visual motion and the oculomotor tracking of a moving object are known to be closely related and are assumed to be controlled by shared brain areas. We compared perceived velocity and the velocity of smooth pursuit eye movements in human observers in a paradigm that required the segmentation of target object motion from context motion. In each trial, a pursuit target and a visual context were independently perturbed simultaneously to briefly increase or decrease in speed. Observers had to accurately track the target and estimate target speed during the perturbation interval. Here we show that the same motion signals are processed in fundamentally different ways for perception and steady-state smooth pursuit eye movements. For the computation of perceived velocity, motion of the context was subtracted from target motion (motion contrast), whereas pursuit velocity was determined by the motion average (motion assimilation). We conclude that the human motion system uses these computations to optimally accomplish different functions: image segmentation for object motion perception and velocity estimation for the control of smooth pursuit eye movements.
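    The two computations the abstract contrasts can be written down directly. This is a schematic sketch with unit gains; the study's fitted weightings of target and context motion are not reproduced:

```python
def perceived_velocity(target, context):
    """Perception: motion contrast — context motion is subtracted from target motion."""
    return target - context

def pursuit_velocity(target, context):
    """Steady-state pursuit: motion assimilation — target and context velocities average."""
    return (target + context) / 2.0

# Target at 10 deg/s while the context briefly moves at 2 deg/s:
target, context = 10.0, 2.0
print("perceived:", perceived_velocity(target, context))  # context pushes percept down
print("pursuit:", pursuit_velocity(target, context))      # context pulls pursuit along
```

    The same context perturbation thus drives perception and pursuit in opposite directions, which is the dissociation the study reports.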

  12. Representation of visual gravitational motion in the human vestibular cortex.

    PubMed

    Indovina, Iole; Maffei, Vincenzo; Bosco, Gianfranco; Zago, Myrka; Macaluso, Emiliano; Lacquaniti, Francesco

    2005-04-15

    How do we perceive the visual motion of objects that are accelerated by gravity? We propose that, because vision is poorly sensitive to accelerations, an internal model that calculates the effects of gravity is derived from graviceptive information, is stored in the vestibular cortex, and is activated by visual motion that appears to be coherent with natural gravity. The acceleration of visual targets was manipulated while brain activity was measured using functional magnetic resonance imaging. In agreement with the internal model hypothesis, we found that the vestibular network was selectively engaged when acceleration was consistent with natural gravity. These findings demonstrate that predictive mechanisms of physical laws of motion are represented in the human brain.

  13. Multiplexing in the primate motion pathway.

    PubMed

    Huk, Alexander C

    2012-06-01

    This article begins by reviewing recent work on 3D motion processing in the primate visual system. Some of these results suggest that 3D motion signals may be processed in the same circuitry already known to compute 2D motion signals. Such "multiplexing" has implications for the study of visual cortical circuits and neural signals. A more explicit appreciation of multiplexing, and of the computations required for demultiplexing, may enrich the study of the visual system by emphasizing the importance of a structured and balanced "encoding/decoding" framework. In addition to providing a fresh perspective on how successive stages of visual processing might be approached, multiplexing also raises caveats about the value of "neural correlates" for understanding neural computation.

  14. A model for the pilot's use of motion cues in roll-axis tracking tasks

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Junker, A. M.

    1977-01-01

    Simulated target-following and disturbance-regulation tasks were explored with subjects using visual-only and combined visual and motion cues. The effects of motion cues on task performance and pilot response behavior were appreciably different for the two task configurations and were consistent with data reported in earlier studies for similar task configurations. The optimal-control model for pilot/vehicle systems provided a task-independent framework for accounting for the pilot's use of motion cues. Specifically, the availability of motion cues was modeled by augmenting the set of perceptual variables to include position, rate, acceleration, and acceleration-rate of the motion simulator, and results were consistent with the hypothesis of attention-sharing between visual and motion variables. This straightforward informational model allowed accurate model predictions of the effects of motion cues on a variety of response measures for both the target-following and disturbance-regulation tasks.

  15. Vection in patients with glaucoma.

    PubMed

    Tarita-Nistor, Luminita; Hadavi, Shahriar; Steinbach, Martin J; Markowitz, Samuel N; González, Esther G

    2014-05-01

    Large moving scenes can induce a sensation of self-motion in stationary observers. This illusion is called "vection." Glaucoma progressively affects the functioning of peripheral vision, which plays an important role in inducing vection. It is still not known whether vection can be induced in these patients and, if it can, whether the interaction between visual and vestibular inputs is resolved appropriately. The aim of this study was to investigate vection responses in patients with mild to moderate open-angle glaucoma. Fifteen patients with mild to moderate glaucoma and 15 age-matched controls were exposed to a random-dot pattern at a short viewing distance in a dark room. The pattern was projected on a large screen and rotated clockwise at an angular speed of 45 degrees per second to induce a sensation of self-rotation. Vection latency, vection duration, and objective and subjective measures of tilt were obtained in three viewing conditions (binocular, and monocular with each eye). Each condition lasted 2 minutes. Patients with glaucoma had longer vection latencies than age-matched controls (p = 0.005) but the same vection duration. Viewing condition did not affect vection responses for either group. The control group estimated the tilt angle as significantly larger than the actual maximum tilt angle measured with the tilt sensor (p = 0.038). There was no relationship between vection measures and visual field sensitivity for the glaucoma group. These findings suggest that, despite an altered visual input that delays vection, the neural responses involved in canceling the illusion of self-motion remain intact in patients with mild peripheral visual field loss.

  16. Decoding information about dynamically occluded objects in visual cortex

    PubMed Central

    Erlikhman, Gennady; Caplovitz, Gideon P.

    2016-01-01

    During dynamic occlusion, an object passes behind an occluding surface and then later reappears. Even when completely occluded from view, such objects are experienced as continuing to exist or persist behind the occluder, even though they are no longer visible. The contents and neural basis of this persistent representation remain poorly understood. Questions remain as to whether information is maintained about the object itself (i.e., its shape or identity) or only about non-object-specific properties such as its position or velocity as it is tracked behind an occluder, and as to which areas of visual cortex represent such information. Recent studies have found that early visual cortex is activated by “invisible” objects during visual imagery and by unstimulated regions along the path of apparent motion, suggesting that some properties of dynamically occluded objects may also be neurally represented in early visual cortex. We applied functional magnetic resonance imaging in human subjects to examine the representation of information within visual cortex during dynamic occlusion. For gradually occluded, but not for instantly disappearing objects, there was an increase in activity in early visual cortex (V1, V2, and V3). This activity was spatially specific, corresponding to the occluded location in the visual field. However, the activity did not encode enough information about object identity to discriminate between different kinds of occluded objects (circles vs. stars) using MVPA. In contrast, object identity could be decoded in spatially specific subregions of higher-order, topographically organized areas such as ventral, lateral, and temporal occipital areas (VO, LO, and TO) as well as the functionally defined LOC and hMT+. These results suggest that early visual cortex may represent the dynamically occluded object’s position or motion path, while later visual areas represent object-specific information. PMID:27663987

  17. Computer-animated stimuli to measure motion sensitivity: constraints on signal design in the Jacky dragon

    PubMed Central

    Rieucau, Guillaume; Burke, Darren

    2017-01-01

    Identifying perceptual thresholds is critical for understanding the mechanisms that underlie signal evolution. Using computer-animated stimuli, we examined visual speed sensitivity in the Jacky dragon Amphibolurus muricatus, a species that makes extensive use of rapid motor patterns in social communication. First, focal lizards were tested in discrimination trials using random-dot kinematograms displaying combinations of speed, coherence, and direction. Second, we measured subject lizards’ ability to predict the appearance of a secondary reinforcer (1 of 3 different computer-generated animations of invertebrates: cricket, spider, and mite) based on the direction of movement of a field of drifting dots, by scoring a set of behavioural responses (e.g., orienting response, latency to respond) to our virtual stimuli. We found an effect of both speed and coherence, as well as an interaction between these 2 factors, on the perception of moving stimuli. Overall, our results showed that Jacky dragons have acute sensitivity to high speeds. We then employed an optic flow analysis to match the performance to ecologically relevant motion. Our results suggest that the Jacky dragon visual system may have been shaped to detect fast motion. This pre-existing sensitivity may have constrained the evolution of conspecific displays. In contrast, Jacky dragons may have difficulty in detecting the movement of ambush predators, such as snakes, and of some invertebrate prey. Our study also demonstrates the potential of the computer-animated stimuli technique for conducting nonintrusive tests to explore motion range and sensitivity in a visually mediated species. PMID:29491965

  18. Mental Rotation Meets the Motion Aftereffect: The Role of hV5/MT+ in Visual Mental Imagery

    ERIC Educational Resources Information Center

    Seurinck, Ruth; de Lange, Floris P.; Achten, Erik; Vingerhoets, Guy

    2011-01-01

    A growing number of studies show that visual mental imagery recruits the same brain areas as visual perception. Although the necessity of hV5/MT+ for motion perception has been revealed by means of TMS, its relevance for motion imagery remains unclear. We induced a direction-selective adaptation in hV5/MT+ by means of an MAE while subjects…

  19. Intermittently-visual Tracking Experiments Reveal the Roles of Error-correction and Predictive Mechanisms in the Human Visual-motor Control System

    NASA Astrophysics Data System (ADS)

    Hayashi, Yoshikatsu; Tamura, Yurie; Sase, Kazuya; Sugawara, Ken; Sawada, Yasuji

    A prediction mechanism is necessary in human visual-motor control to compensate for the delay of the sensory-motor system. In a previous study, “proactive control” was discussed as one example of this predictive function, in which hand motion preceded the virtual moving target in visual tracking experiments. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently-visual tracking experiment in which a circular orbit was segmented into target-visible and target-invisible regions. The main results were as follows. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm in the brain, obtained from environmental stimuli, was shortened by more than 10%. This shortening of the rhythm's period accelerates the hand motion as soon as the visual information is cut off, causing the hand motion to precede the target motion. Although this precedence of the hand in the blind region is reset by environmental information when the target re-enters the visible region, the hand motion precedes the target on average when the predictive mechanism dominates the error-corrective mechanism.

  20. Integrative cortical dysfunction and pervasive motion perception deficit in fragile X syndrome.

    PubMed

    Kogan, C S; Bertone, A; Cornish, K; Boutet, I; Der Kaloustian, V M; Andermann, E; Faubert, J; Chaudhuri, A

    2004-11-09

    Fragile X syndrome (FXS) is associated with neurologic deficits recently attributed to the magnocellular pathway of the lateral geniculate nucleus. Our aim was to test the hypotheses that FXS individuals 1) have a pervasive visual motion perception impairment affecting neocortical circuits in the parietal lobe and 2) have deficits in the integrative neocortical mechanisms necessary for perception of complex stimuli. Psychophysical tests of visual motion and form perception defined by either first-order (luminance) or second-order (texture) attributes were used to probe early and later occipito-temporal and occipito-parietal functioning. When compared to developmental- and age-matched controls, FXS individuals displayed severe impairments in first- and second-order motion perception. This deficit was accompanied by near-normal perception for first-order form stimuli but not second-order form stimuli. Impaired visual motion processing for first- and second-order stimuli suggests that both early- and later-level neurologic function of the parietal lobe are affected in FXS. Furthermore, this deficit likely stems from abnormal input from the magnocellular compartment of the lateral geniculate nucleus. Impaired visual form and motion processing for complex visual stimuli, with normal processing for simple (i.e., first-order) form stimuli, suggests that FXS individuals have normal early form processing accompanied by a generalized impairment in the neurologic mechanisms necessary for integrating all early visual input.

  1. Decoding conjunctions of direction-of-motion and binocular disparity from human visual cortex.

    PubMed

    Seymour, Kiley J; Clifford, Colin W G

    2012-05-01

    Motion and binocular disparity are two features in our environment that share a common correspondence problem. Decades of psychophysical research dedicated to understanding stereopsis suggest that these features interact early in human visual processing to disambiguate depth. Single-unit recordings in the monkey also provide evidence for the joint encoding of motion and disparity across much of the dorsal visual stream. Here, we used functional MRI and multivariate pattern analysis to examine where in the human brain conjunctions of motion and disparity are encoded. Subjects sequentially viewed two stimuli that could be distinguished only by their conjunctions of motion and disparity. Specifically, each stimulus contained the same feature information (leftward and rightward motion and crossed and uncrossed disparity) but differed exclusively in the way these features were paired. Our results revealed that a linear classifier could accurately decode which stimulus a subject was viewing based on voxel activation patterns throughout the dorsal visual areas and as early as V2. This decoding success was conditional on some voxels being individually sensitive to the unique conjunctions comprising each stimulus, thus a classifier could not rely on independent information about motion and binocular disparity to distinguish these conjunctions. This study expands on evidence that disparity and motion interact at many levels of human visual processing, particularly within the dorsal stream. It also lends support to the idea that stereopsis is subserved by early mechanisms also tuned to direction of motion.
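
    The decoding step described above can be illustrated with a toy sketch. This is not the study's pipeline (which applied multivariate pattern analysis to fMRI voxel patterns); it simulates hypothetical voxel responses in which a small subset of "conjunction-sensitive" voxels respond differently to the two motion-disparity pairings, and shows that a simple linear read-out (nearest centroid) then decodes the stimulus well above chance. All sizes and signal levels are invented.

```python
import random

random.seed(0)

N_VOXELS = 50   # simulated voxels; the first 10 are conjunction-sensitive
N_TRIALS = 40   # trials per stimulus class

def simulate_trial(stim):
    """One trial's voxel pattern: conjunction-sensitive voxels carry a
    class-dependent signal, the rest carry only Gaussian noise."""
    return [((1.0 if stim == 0 else -1.0) if v < 10 else 0.0)
            + random.gauss(0.0, 1.0) for v in range(N_VOXELS)]

def centroid(patterns):
    return [sum(p[v] for p in patterns) / len(patterns)
            for v in range(N_VOXELS)]

def classify(pattern, c0, c1):
    """Nearest-centroid rule (a linear classifier)."""
    d0 = sum((x - y) ** 2 for x, y in zip(pattern, c0))
    d1 = sum((x - y) ** 2 for x, y in zip(pattern, c1))
    return 0 if d0 < d1 else 1

# Fit on one set of trials, score on an independent set.
c0 = centroid([simulate_trial(0) for _ in range(N_TRIALS)])
c1 = centroid([simulate_trial(1) for _ in range(N_TRIALS)])
test_set = [(simulate_trial(s), s) for s in (0, 1) for _ in range(N_TRIALS)]
accuracy = sum(classify(p, c0, c1) == s for p, s in test_set) / len(test_set)
print(f"decoding accuracy: {accuracy:.2f}")  # well above the 0.5 chance level
```

    Removing the conjunction-sensitive voxels from the simulation leaves pure noise and drops accuracy to chance, mirroring the paper's point that decoding was conditional on voxels individually sensitive to the conjunctions.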

  2. Anticipating the effects of visual gravity during simulated self-motion: estimates of time-to-passage along vertical and horizontal paths.

    PubMed

    Indovina, Iole; Maffei, Vincenzo; Lacquaniti, Francesco

    2013-09-01

    By simulating self-motion on a virtual rollercoaster, we investigated whether acceleration cued by the optic flow affected the estimate of time-to-passage (TTP) to a target. In particular, we studied the role of a visual acceleration (1 g = 9.8 m/s²) simulating the effects of gravity in the scene, by manipulating motion law (accelerated or decelerated at 1 g, constant speed) and motion orientation (vertical, horizontal). Thus, 1-g-accelerated motion in the downward direction or decelerated motion in the upward direction was congruent with the effects of visual gravity. We found that acceleration (positive or negative) is taken into account but is overestimated in magnitude in the calculation of TTP, independently of orientation. In addition, participants signaled TTP earlier when the rollercoaster accelerated downward at 1 g (as during free fall) than when the same acceleration occurred along the horizontal orientation. This time shift indicates an influence of the orientation relative to visual gravity on response timing that could be attributed to the anticipation of the effects of visual gravity on self-motion along the vertical, but not the horizontal, orientation. Finally, precision in TTP estimates was higher during vertical fall than when traveling at constant speed along the vertical orientation, consistent with higher noise in TTP estimates when the motion violates gravity constraints.
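
    Under constant acceleration, the ideal TTP computation reduces to solving a quadratic in time. A minimal sketch under assumed numbers (a 20 m stretch entered at 20 m/s; these are not the study's actual stimulus parameters):

```python
import math

def time_to_passage(distance, speed, accel):
    """Time until a target at `distance` is passed, given initial closing
    `speed` and constant `accel`: solves distance = speed*t + 0.5*accel*t**2
    for the first positive root (the target must be reachable, disc >= 0)."""
    if abs(accel) < 1e-12:
        return distance / speed
    disc = speed ** 2 + 2 * accel * distance
    return (-speed + math.sqrt(disc)) / accel

G = 9.8  # magnitude of the visual-gravity acceleration, m/s^2

ttp_const = time_to_passage(20.0, 20.0, 0.0)  # constant speed: 1.0 s
ttp_down  = time_to_passage(20.0, 20.0, +G)   # 1 g acceleration (fall)
ttp_up    = time_to_passage(20.0, 20.0, -G)   # 1 g deceleration (climb)
```

    Accelerating shortens TTP and decelerating lengthens it, so an observer who overestimates the acceleration's magnitude will respond early on accelerated trials and late on decelerated ones, in the direction the study reports.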

  3. An Evaluation of the Frequency and Severity of Motion Sickness Incidences in Personnel Within the Command and Control Vehicle (C2V)

    NASA Technical Reports Server (NTRS)

    Cowings, Patricia S.; Toscano, William B.; DeRoshia, Charles

    1998-01-01

    The purpose of this study was to assess the frequency and severity of motion sickness in personnel during a field exercise in the Command and Control Vehicle (C2V). This vehicle contains four workstations where military personnel are expected to perform command decisions in the field during combat conditions. Eight active duty military men (U.S. Army) at the Yuma Proving Grounds in Arizona participated in this study. All subjects were given baseline performance tests while their physiological responses were monitored on the first day. On the second day of their participation, subjects rode in the C2V while their physiological responses and performance measures were recorded. Self-reports of motion sickness were also recorded. Results showed that only one subject experienced two incidences of emesis. However, seven out of the eight subjects reported other motion sickness symptoms; most predominant was the report of drowsiness, which occurred a total of 19 times. Changes in physiological responses were observed relative to motion sickness symptoms reported and the different environmental conditions (i.e., level, hills, gravel) during the field exercise. Performance data showed an overall decrement during the C2V exercise. These findings suggest that malaise and severe drowsiness can potentially impact the operational efficiency of the C2V crew. It was concluded that conflicting sensory information from the subject's visual displays and movements of the vehicle during the field exercise significantly contributed to motion sickness symptoms. It was recommended that a second study be conducted to further evaluate the impact of seat position or orientation and C2V experience on motion sickness susceptibility. Further, it was recommended that an investigation be performed on behavioral methods for improving crew alertness, motivation, and performance and for reducing malaise.

  4. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction

    PubMed Central

    Watanabe, Eiji; Kitaoka, Akiyoshi; Sakamoto, Kiwako; Yasugi, Masaki; Tanaka, Kenta

    2018-01-01

    The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported for a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by the predictive coding theory is one of the bases of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research. PMID:29599739
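
    The predictive-coding loop the abstract describes (predict, compare with input, refine the internal model from the error) can be reduced to a minimal scalar sketch. This is vastly simpler than PredNet, which is a deep network; it only illustrates the error-driven update rule, with an invented learning rate.

```python
def predictive_coding_step(prediction, observed, lr=0.2):
    """One predictive-coding update: the error between the internal model's
    prediction and the actual input is used to refine the model."""
    error = observed - prediction
    return prediction + lr * error, error

# A static input: prediction errors shrink as the internal model adapts.
prediction = 0.0
errors = []
for _ in range(30):
    prediction, e = predictive_coding_step(prediction, 5.0)
    errors.append(abs(e))
```

    Each step leaves a fraction (1 - lr) of the previous error, so the errors decay geometrically toward zero as the model converges on the input.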

  5. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction.

    PubMed

    Watanabe, Eiji; Kitaoka, Akiyoshi; Sakamoto, Kiwako; Yasugi, Masaki; Tanaka, Kenta

    2018-01-01

    The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported for a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by the predictive coding theory is one of the bases of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research.

  6. Coherent modulation of stimulus colour can affect visually induced self-motion perception.

    PubMed

    Nakamura, Shinji; Seno, Takeharu; Ito, Hiroyuki; Sunaga, Shoji

    2010-01-01

    The effects of dynamic colour modulation on vection were investigated to examine whether perceived variation of illumination affects self-motion perception. Participants observed expanding optic flow which simulated their forward self-motion. Onset latency, accumulated duration, and estimated magnitude of the self-motion were measured as indices of vection strength. Colour of the dots in the visual stimulus was modulated between white and red (experiment 1), white and grey (experiment 2), and grey and red (experiment 3). The results indicated that coherent colour oscillation in the visual stimulus significantly suppressed the strength of vection, whereas incoherent or static colour modulation did not affect vection. There was no effect of the type of colour modulation; both achromatic and chromatic modulations turned out to be effective in inhibiting self-motion perception. Moreover, in a situation where the simulated direction of a spotlight was manipulated dynamically, vection strength was also suppressed (experiment 4). These results suggest that the observer's perception of illumination is critical for self-motion perception, and that rapid variation of perceived illumination impairs the reliability of visual information in determining self-motion.

  7. Global motion perception is related to motor function in 4.5-year-old children born at risk of abnormal development.

    PubMed

    Chakraborty, Arijit; Anstice, Nicola S; Jacobs, Robert J; Paudel, Nabin; LaGasse, Linda L; Lester, Barry M; McKinlay, Christopher J D; Harding, Jane E; Wouldes, Trecia A; Thompson, Benjamin

    2017-06-01

    Global motion perception is often used as an index of dorsal visual stream function in neurodevelopmental studies. However, the relationship between global motion perception and visuomotor control, a primary function of the dorsal stream, is unclear. We measured global motion perception (motion coherence threshold; MCT) and performance on standardized measures of motor function in 606 4.5-year-old children born at risk of abnormal neurodevelopment. Visual acuity, stereoacuity and verbal IQ were also assessed. After adjustment for verbal IQ or both visual acuity and stereoacuity, MCT was modestly, but significantly, associated with all components of motor function with the exception of fine motor scores. In a separate analysis, stereoacuity, but not visual acuity, was significantly associated with both gross and fine motor scores. These results indicate that the development of motion perception and stereoacuity are associated with motor function in pre-school children. Copyright © 2017 Elsevier Ltd. All rights reserved.
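
    The abstract does not state how motion coherence thresholds (MCTs) were measured; adaptive staircases are a common choice in such psychophysics. A sketch of one such procedure (2-down-1-up) run against a simulated observer, with an invented psychometric rule and a hypothetical true threshold:

```python
import random

random.seed(1)

TRUE_MCT = 0.25  # hypothetical observer's true coherence threshold

def observer_correct(coherence):
    """Crude psychometric rule: probability of a correct direction judgment
    rises from chance (0.5) toward 1.0 as coherence climbs past threshold."""
    p = 0.5 + 0.5 * min(1.0, max(0.0, (coherence - TRUE_MCT) / 0.3 + 0.5))
    return random.random() < p

def staircase(start=0.8, step=0.04, n_trials=200):
    """2-down-1-up staircase: coherence drops after two consecutive correct
    responses and rises after any error, converging near the 70.7% point."""
    coherence, streak, track = start, 0, []
    for _ in range(n_trials):
        if observer_correct(coherence):
            streak += 1
            if streak == 2:
                coherence, streak = max(0.0, coherence - step), 0
        else:
            coherence, streak = min(1.0, coherence + step), 0
        track.append(coherence)
    return sum(track[-50:]) / 50  # late-trial average as threshold estimate

estimate = staircase()
```

    Lower estimates indicate better global motion perception; in the study, these thresholds were then related to the standardized motor scores.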

  8. Effects of Vibrotactile Feedback on Human Learning of Arm Motions

    PubMed Central

    Bark, Karlin; Hyman, Emily; Tan, Frank; Cha, Elizabeth; Jax, Steven A.; Buxbaum, Laurel J.; Kuchenbecker, Katherine J.

    2015-01-01

    Tactile cues generated from lightweight, wearable actuators can help users learn new motions by providing immediate feedback on when and how to correct their movements. We present a vibrotactile motion guidance system that measures arm motions and provides vibration feedback when the user deviates from a desired trajectory. A study was conducted to test the effects of vibrotactile guidance on a subject’s ability to learn arm motions. Twenty-six subjects learned motions of varying difficulty with both visual (V), and visual and vibrotactile (VVT) feedback over the course of four days of training. After four days of rest, subjects returned to perform the motions from memory with no feedback. We found that augmenting visual feedback with vibrotactile feedback helped subjects reduce the root mean square (rms) angle error of their limb significantly while they were learning the motions, particularly for 1DOF motions. Analysis of the retention data showed no significant difference in rms angle errors between feedback conditions. PMID:25486644
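
    The rms angle error used as the study's learning measure is straightforward to compute. A minimal sketch on made-up joint-angle samples in degrees (the actual system measured full arm trajectories):

```python
import math

def rms_angle_error(actual, desired):
    """Root-mean-square error between a recorded joint-angle trajectory and
    the desired trajectory, sampled at matching time points."""
    assert len(actual) == len(desired)
    return math.sqrt(sum((a - d) ** 2
                         for a, d in zip(actual, desired)) / len(actual))

desired = [10.0, 20.0, 30.0, 40.0]
actual  = [12.0, 19.0, 33.0, 38.0]
error = rms_angle_error(actual, desired)  # about 2.12 degrees
```

    A vibrotactile guidance system would trigger feedback whenever the instantaneous deviation (a single term of this sum) exceeds a tolerance, and report the rms value per trial to track learning.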

  9. A characterization of Parkinson's disease by describing the visual field motion during gait

    NASA Astrophysics Data System (ADS)

    Trujillo, David; Martínez, Fabio; Atehortúa, Angélica; Alvarez, Charlens; Romero, Eduardo

    2015-12-01

    An early diagnosis of Parkinson's disease (PD) is crucial for devising successful rehabilitation programs. Typically, PD is diagnosed by characterizing typical symptoms, namely bradykinesia, rigidity, tremor, postural instability, or freezing of gait. However, traditional examination tests are usually incapable of detecting slight motor changes, especially at early stages of the pathology. Recently, eye movement abnormalities have been correlated with the early onset of some neurodegenerative disorders. This work introduces a new characterization of Parkinson's disease by describing ocular motion during a common daily activity, gait. This paper proposes a fully automatic eye motion analysis that uses dense optical flow to track the ocular direction. The eye motion is then summarized using orientation histograms constructed over a whole gait cycle. The proposed approach was evaluated by measuring the χ2 distance between the orientation histograms, showing substantial differences between control subjects and PD patients.
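
    The comparison step described above (orientation histograms of eye-motion directions over a gait cycle, compared with the χ2 distance) can be sketched as follows. The bin count and angle samples are invented; the real pipeline derives the angles from dense optical flow on eye video.

```python
def orientation_histogram(angles_deg, n_bins=8):
    """Bin motion directions (degrees) into a normalized orientation
    histogram covering 0-360 degrees."""
    hist = [0] * n_bins
    for a in angles_deg:
        hist[int(a % 360) * n_bins // 360] += 1
    total = sum(hist) or 1
    return [c / total for c in hist]

def chi2_distance(h, g, eps=1e-10):
    """Chi-squared distance between two normalized histograms: zero for
    identical histograms, larger for more dissimilar motion signatures."""
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h, g))

# Hypothetical eye-motion directions over one gait cycle:
control_angles = [0, 5, 10, 175, 180, 185, 350, 355]
patient_angles = [40, 50, 90, 130, 220, 270, 300, 310]
h_control = orientation_histogram(control_angles)
h_patient = orientation_histogram(patient_angles)
d = chi2_distance(h_control, h_patient)
```

    A consistently large distance between a subject's histogram and a control reference is what the paper reports as the PD/control separation.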

  10. Decoding Reveals Plasticity in V3A as a Result of Motion Perceptual Learning

    PubMed Central

    Shibata, Kazuhisa; Chang, Li-Hung; Kim, Dongho; Náñez, José E.; Kamitani, Yukiyasu; Watanabe, Takeo; Sasaki, Yuka

    2012-01-01

    Visual perceptual learning (VPL) is defined as visual performance improvement after visual experiences. VPL is often highly specific for a visual feature presented during training. Such specificity is observed in behavioral tuning function changes with the highest improvement centered on the trained feature and was originally thought to be evidence for changes in the early visual system associated with VPL. However, results of neurophysiological studies have been highly controversial concerning whether the plasticity underlying VPL occurs within the visual cortex. The controversy may be partially due to the lack of observation of neural tuning function changes in multiple visual areas in association with VPL. Here using human subjects we systematically compared behavioral tuning function changes after global motion detection training with decoded tuning function changes for 8 visual areas using pattern classification analysis on functional magnetic resonance imaging (fMRI) signals. We found that the behavioral tuning function changes were extremely highly correlated to decoded tuning function changes only in V3A, which is known to be highly responsive to global motion with human subjects. We conclude that VPL of a global motion detection task involves plasticity in a specific visual cortical area. PMID:22952849

  11. A neural model of border-ownership from kinetic occlusion.

    PubMed

    Layton, Oliver W; Yazdanbakhsh, Arash

    2015-01-01

    Camouflaged animals that have very similar textures to their surroundings are difficult to detect when stationary. However, when an animal moves, humans readily see a figure at a different depth than the background. How do humans perceive a figure breaking camouflage, even though the texture of the figure and its background may be statistically identical in luminance? We present a model that demonstrates how the primate visual system performs figure-ground segregation in extreme cases of breaking camouflage based on motion alone. Border-ownership signals develop as an emergent property in model V2 units whose receptive fields are nearby kinetically defined borders that separate the figure and background. Model simulations support border-ownership as a general mechanism by which the visual system performs figure-ground segregation, regardless of whether figure-ground boundaries are defined by luminance or motion contrast. The gradient of motion- and luminance-related border-ownership signals explains the perceived depth ordering of the foreground and background surfaces. Our model predicts that V2 neurons, which are sensitive to kinetic edges, are selective to border-ownership (magnocellular B cells). A distinct population of model V2 neurons is selective to border-ownership in figures defined by luminance contrast (parvocellular B cells). B cells in model V2 receive feedback from neurons in V4 and MT with larger receptive fields to bias border-ownership signals toward the figure. We predict that neurons in V4 and MT sensitive to kinetically defined figures play a crucial role in determining whether the foreground surface accretes, deletes, or produces a shearing motion with respect to the background. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Visual event-related potentials to biological motion stimuli in autism spectrum disorders

    PubMed Central

    Bletsch, Anke; Krick, Christoph; Siniatchkin, Michael; Jarczok, Tomasz A.; Freitag, Christine M.; Bender, Stephan

    2014-01-01

    Atypical visual processing of biological motion contributes to social impairments in autism spectrum disorders (ASD). However, the exact temporal sequence of deficits of cortical biological motion processing in ASD has not been studied to date. We used 64-channel electroencephalography to study event-related potentials associated with human motion perception in 17 children and adolescents with ASD and 21 typical controls. A spatio-temporal source analysis was performed to assess the brain structures involved in these processes. We expected altered activity already during early stimulus processing and reduced activity during subsequent biological motion specific processes in ASD. In response to both random and biological motion, the P100 amplitude was decreased, suggesting unspecific deficits in visual processing, and the occipito-temporal N200 showed atypical lateralization in ASD, suggesting altered hemispheric specialization. A slow positive deflection after 400 ms, reflecting top-down processes, and human motion-specific dipole activation differed slightly between groups, with reduced and more diffuse activation in the ASD group. The latter could be an indicator of a disrupted neuronal network for biological motion processing in ASD. Furthermore, early visual processing (P100) seems to be correlated with biological motion-specific activation. This emphasizes the relevance of early sensory processing for higher-order processing deficits in ASD. PMID:23887808

  13. Spatial limitations of fast temporal segmentation are best modeled by V1 receptive fields.

    PubMed

    Goodbourn, Patrick T; Forte, Jason D

    2013-11-22

    The fine temporal structure of events influences the spatial grouping and segmentation of visual-scene elements. Although adjacent regions flickering asynchronously at high temporal frequencies appear identical, the visual system signals a boundary between them. These "phantom contours" disappear when the gap between regions exceeds a critical value (g(max)). We used g(max) as an index of neuronal receptive-field size to compare with known receptive-field data from along the visual pathway and thus infer the location of the mechanism responsible for fast temporal segmentation. Observers viewed a circular stimulus reversing in luminance contrast at 20 Hz for 500 ms. A gap of constant retinal eccentricity segmented each stimulus quadrant; on each trial, participants identified a target quadrant containing counterphasing inner and outer segments. By varying the gap width, g(max) was determined at a range of retinal eccentricities. We found that g(max) increased from 0.3° to 0.8° for eccentricities from 2° to 12°. These values correspond to receptive-field diameters of neurons in primary visual cortex that have been reported in single-cell and fMRI studies and are consistent with the spatial limitations of motion detection. In a further experiment, we found that modulation sensitivity depended critically on the length of the contour and could be predicted by a simple model of spatial summation in early cortical neurons. The results suggest that temporal segmentation is achieved by neurons at the earliest cortical stages of visual processing, most likely in primary visual cortex.

  14. Orientation-selective effects of body tilt on visually induced perception of self-motion.

    PubMed

    Nakamura, S; Shimojo, S

    1998-10-01

    We examined the effect of body posture upon visually induced perception of self-motion (vection) at various angles of observer tilt. The experiment indicated that a tilted body posture enhanced the perceived strength of vertical vection, whereas body tilt had no effect on horizontal vection. This result suggests that there is an interaction between the effects of visual and vestibular information on the perception of self-motion.

  15. Motion visualization and estimation for flapping wing systems

    NASA Astrophysics Data System (ADS)

    Hsu, Tzu-Sheng Shane; Fitzgerald, Timothy; Nguyen, Vincent Phuc; Patel, Trisha; Balachandran, Balakumar

    2017-04-01

    Studies of fluid-structure interactions associated with flexible structures such as flapping wings require the capture and quantification of large motions of bodies that may be opaque. As a case study, motion capture of a free-flying Manduca sexta, also known as the hawkmoth, is considered by using three synchronized high-speed cameras. A solid finite element (FE) representation is used as a reference body, and successive snapshots in time of the displacement fields are reconstructed via an optimization procedure. One of the original aspects of this work is the formulation of an objective function that uses shadow matching and strain-energy regularization. With this objective function, the authors penalize the projection differences between silhouettes of the captured images and the FE representation of the deformed body. The process and procedures undertaken to go from high-speed videography to motion estimation are discussed, and snapshots of representative results are presented. Finally, the captured free-flight motion is also characterized and quantified.

  16. Slushy weightings for the optimal pilot model. [considering visual tracking task]

    NASA Technical Reports Server (NTRS)

    Dillow, J. D.; Picha, D. G.; Anderson, R. O.

    1975-01-01

    A pilot model is described which accounts for the effect of motion cues in a well-defined visual tracking task. The effects of visual and motion cues are accounted for in the model in two ways. First, the observation matrix in the pilot model is structured to account for the visual and motion inputs presented to the pilot. Second, the weightings in the quadratic cost function associated with the pilot model are modified to account for the pilot's perception of the variables he considers important in the task. Analytic results obtained using the pilot model are compared to experimental results, and in general good agreement is demonstrated. The analytic model yields small improvements in tracking performance with the addition of motion cues for easily controlled task dynamics, and large improvements for difficult task dynamics.

  17. Curvilinear approach to an intersection and visual detection of a collision.

    PubMed

    Berthelon, C; Mestre, D

    1993-09-01

    Visual motion perception plays a fundamental role in vehicle control. Recent studies have shown that the pattern of optical flow resulting from the observer's self-motion through a stable environment is used by the observer to accurately control his or her movements. However, little is known about the perception of another vehicle during self-motion--for instance, when a car driver approaches an intersection with traffic. In a series of experiments using visual simulations of car driving, we show that observers are able to detect the presence of a moving object during self-motion. However, the perception of the other car's trajectory appears to be strongly dependent on environmental factors, such as the presence of a road sign near the intersection or the shape of the road. These results suggest that local and global visual factors determine the perception of a car's trajectory during self-motion.

  18. Perceptual Training Strongly Improves Visual Motion Perception in Schizophrenia

    ERIC Educational Resources Information Center

    Norton, Daniel J.; McBain, Ryan K.; Ongur, Dost; Chen, Yue

    2011-01-01

    Schizophrenia patients exhibit perceptual and cognitive deficits, including in visual motion processing. Given that cognitive systems depend upon perceptual inputs, improving patients' perceptual abilities may be an effective means of cognitive intervention. In healthy people, motion perception can be enhanced through perceptual learning, but it…

  19. Adaptive Kalman filtering for real-time mapping of the visual field

    PubMed Central

    Ward, B. Douglas; Janik, John; Mazaheri, Yousef; Ma, Yan; DeYoe, Edgar A.

    2013-01-01

    This paper demonstrates the feasibility of real-time mapping of the visual field for clinical applications. Specifically, three aspects of this problem were considered: (1) experimental design, (2) statistical analysis, and (3) display of results. Proper experimental design is essential to achieving a successful outcome, particularly for real-time applications. A random-block experimental design was shown to have less sensitivity to measurement noise, as well as greater robustness to error in modeling of the hemodynamic impulse response function (IRF) and greater flexibility than common alternatives. In addition, random encoding of the visual field allows for the detection of voxels that are responsive to multiple, not necessarily contiguous, regions of the visual field. Due to its recursive nature, the Kalman filter is ideally suited for real-time statistical analysis of visual field mapping data. An important feature of the Kalman filter is that it can be used for nonstationary time series analysis. The capability of the Kalman filter to adapt, in real time, to abrupt changes in the baseline arising from subject motion inside the scanner and other external system disturbances is important for the success of clinical applications. The clinician needs real-time information to evaluate the success or failure of the imaging run and to decide whether to extend, modify, or terminate the run. Accordingly, the analytical software provides real-time displays of (1) brain activation maps for each stimulus segment, (2) voxel-wise spatial tuning profiles, (3) time plots of the variability of response parameters, and (4) time plots of activated volume. PMID:22100663
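
    The recursive structure that makes the Kalman filter suitable for real-time analysis can be illustrated with a minimal scalar example: a random-walk state model lets the estimate adapt to abrupt baseline shifts of the kind described above. The noise parameters q and r below are illustrative assumptions, not values from the study.

```python
# Minimal scalar Kalman filter tracking a drifting baseline.
# State model:  x_k = x_{k-1} + w_k,  w_k ~ N(0, q)   (random walk)
# Observation:  z_k = x_k + v_k,      v_k ~ N(0, r)

def kalman_track(measurements, q=0.01, r=1.0):
    x, p = 0.0, 1.0          # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p += q               # predict: variance grows by process noise
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # update with the innovation z - x
        p *= (1.0 - k)       # posterior variance
        estimates.append(x)
    return estimates
```

    Because each update uses only the latest measurement, the estimate re-converges within tens of samples after a step change in the baseline, which is the adaptation property the abstract highlights for subject motion and scanner disturbances.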

  20. Semantic bifurcated importance field visualization

    NASA Astrophysics Data System (ADS)

    Lindahl, Eric; Petrov, Plamen

    2007-04-01

    While there are many good ways to map sensed reality onto two-dimensional displays, mapping non-physical and possibilistic information can be challenging. The advent of faster-than-real-time systems allows the predictive and possibilistic exploration of important factors that can affect the decision maker. Visualizing a compressed picture of the past and of possible factors can assist the decision maker by summarizing information in a cognition-based model, thereby reducing clutter and perhaps decision times. Our proposed semantic bifurcated importance field visualization (SBIFV) uses saccadic eye-motion models to partition the display into possibilistic and sensed data vertically, and spatial and semantic data horizontally. Saccadic eye movement precedes and prepares decision makers for nearly every directed action. Cognitive models of saccadic eye movement show that people prefer lateral to vertical saccades. Studies have suggested that saccades may be coupled to momentary problem-solving strategies. Also, the central 1.5 degrees of the visual field has roughly 100 times greater resolution than the peripheral field, so concentrating factors there can reduce unnecessary saccades. By packing information according to saccadic models, we can relate important decision factors, reduce factor dimensionality, and present dense summaries along the dimensions of semantics and importance. Inter- and intra-saccadic ballistics of the SBIFV provide important clues on how semantic packing assists decision making. Future directions for the SBIFV are to make the visualization reactive and conformal to saccades, specializing targets to ballistics, such as dynamically filtering and highlighting verbal targets for left saccades and spatial targets for right saccades.

  1. Visualization of 3D elbow kinematics using reconstructed bony surfaces

    NASA Astrophysics Data System (ADS)

    Lalone, Emily A.; McDonald, Colin P.; Ferreira, Louis M.; Peters, Terry M.; King, Graham J. W.; Johnson, James A.

    2010-02-01

    An approach for direct visualization of continuous three-dimensional elbow kinematics using reconstructed surfaces has been developed. Simulation of valgus motion was achieved in five cadaveric specimens using an upper arm simulator. Direct visualization of the motion of the ulna and humerus at the ulnohumeral joint was obtained using a contact-based registration technique. Employing fiducial markers, the rendered humerus and ulna were positioned according to the simulated motion. The specific aim of this study was to investigate the effect of radial head arthroplasty on restoring elbow joint stability after radial head excision. The position of the ulna and humerus was visualized for the intact elbow and following radial head excision and replacement. Visualization of the registered humerus/ulna indicated an increase in valgus angulation of the ulna with respect to the humerus after radial head excision. This increase in valgus angulation was restored to that of an elbow with a native radial head following radial head arthroplasty. These findings were consistent with previous studies investigating elbow joint stability following radial head excision and arthroplasty. The current technique was able to visualize a change in ulnar position in a single degree of freedom. Using this approach, the coupled motion of the ulna in all six degrees of freedom can also be visualized.

  2. Examining the Effect of Age on Visual-Vestibular Self-Motion Perception Using a Driving Paradigm.

    PubMed

    Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L

    2017-05-01

    Previous psychophysical research has examined how younger adults and non-human primates integrate visual and vestibular cues to perceive self-motion. However, there is much to be learned about how multisensory self-motion perception changes with age, and how these changes affect performance on everyday tasks involving self-motion. Evidence suggests that older adults display heightened multisensory integration compared with younger adults; however, few previous studies have examined this for visual-vestibular integration. To explore age differences in the way that visual and vestibular cues contribute to self-motion perception, we had younger and older participants complete a basic driving task containing visual and vestibular cues. We compared their performance against a previously established control group that experienced visual cues alone. Performance measures included speed, speed variability, and lateral position. Vestibular inputs resulted in more precise speed control among older adults, but not younger adults, when traversing curves. Older adults demonstrated more variability in lateral position when vestibular inputs were available versus when they were absent. These observations align with previous evidence of age-related differences in multisensory integration and demonstrate that they may extend to visual-vestibular integration. These findings may have implications for vehicle and simulator design when considering older users.

  3. The contribution of visual and proprioceptive information to the perception of leaning in a dynamic motorcycle simulator.

    PubMed

    Lobjois, Régis; Dagonneau, Virginie; Isableu, Brice

    2016-11-01

    Compared with driving or flight simulation, little is known about self-motion perception in riding simulation. The goal of this study was to examine whether or not continuous roll motion supports the sensation of leaning into bends in dynamic motorcycle simulation. To this end, riders were able to freely tune the visual scene and/or motorcycle simulator roll angle to find a pattern that matched their prior knowledge. Our results revealed idiosyncrasy in the combination of visual and proprioceptive information. Some subjects relied more on the visual dimension, but reported increased sickness symptoms with the visual roll angle. Others relied more on proprioceptive information, tuning the direction of the visual scenery to match three possible patterns. Our findings also showed that these two subgroups tuned the motorcycle simulator roll angle in a similar way. This suggests that sustained, inertially specified roll motion contributed to the sensation of leaning despite the occurrence of unexpected gravito-inertial stimulation during the tilt. Several hypotheses are discussed. Practitioner Summary: Self-motion perception in motorcycle simulation is a relatively new research area. We examined how participants combined visual and proprioceptive information. Findings revealed individual differences in the visual dimension. However, participants tuned the simulator roll angle similarly, supporting the hypothesis that sustained, inertially specified roll motion contributes to a leaning sensation.

  4. Contextual modulation revealed by optical imaging exhibits figural asymmetry in macaque V1 and V2.

    PubMed

    Zarella, Mark D; Ts'o, Daniel Y

    2017-01-01

    Neurons in early visual cortical areas are influenced by stimuli presented well beyond the confines of their classical receptive fields, endowing them with the ability to encode fine-scale features while also having access to the global context of the visual scene. This property can potentially define a role for the early visual cortex to contribute to a number of important visual functions, such as surface segmentation and figure-ground segregation. It is unknown how extraclassical response properties conform to the functional architecture of the visual cortex, given the high degree of functional specialization in areas V1 and V2. We examined the spatial relationships of contextual activations in macaque V1 and V2 with intrinsic signal optical imaging. Using figure-ground stimulus configurations defined by orientation or motion, we found that extraclassical modulation is restricted to the cortical representations of the figural component of the stimulus. These modulations were positive in sign, suggesting a relative enhancement in neuronal activity that may reflect an excitatory influence. Orientation and motion cues produced similar patterns of activation that traversed the functional subdivisions of V2. The asymmetrical nature of the enhancement demonstrated the capacity for visual cortical areas as early as V1 to contribute to figure-ground segregation, and the results suggest that this information can be extracted from the population activity constrained only by retinotopy, and not the underlying functional organization.

  5. Contextual modulation revealed by optical imaging exhibits figural asymmetry in macaque V1 and V2

    PubMed Central

    Zarella, Mark D; Ts’o, Daniel Y

    2017-01-01

    Neurons in early visual cortical areas are influenced by stimuli presented well beyond the confines of their classical receptive fields, endowing them with the ability to encode fine-scale features while also having access to the global context of the visual scene. This property can potentially define a role for the early visual cortex to contribute to a number of important visual functions, such as surface segmentation and figure–ground segregation. It is unknown how extraclassical response properties conform to the functional architecture of the visual cortex, given the high degree of functional specialization in areas V1 and V2. We examined the spatial relationships of contextual activations in macaque V1 and V2 with intrinsic signal optical imaging. Using figure–ground stimulus configurations defined by orientation or motion, we found that extraclassical modulation is restricted to the cortical representations of the figural component of the stimulus. These modulations were positive in sign, suggesting a relative enhancement in neuronal activity that may reflect an excitatory influence. Orientation and motion cues produced similar patterns of activation that traversed the functional subdivisions of V2. The asymmetrical nature of the enhancement demonstrated the capacity for visual cortical areas as early as V1 to contribute to figure–ground segregation, and the results suggest that this information can be extracted from the population activity constrained only by retinotopy, and not the underlying functional organization. PMID:28761385

  6. High-speed imaging of submerged jet: visualization analysis using proper orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Liu, Yingzheng; He, Chuangxin

    2016-11-01

    In the present study, a submerged jet at low Reynolds numbers was visualized using laser-induced fluorescence and high-speed imaging in a water tank. A well-controlled calibration was performed to determine the region of linear dependence of the fluorescence intensity on its concentration. Subsequently, the jet fluid issuing from a circular pipe was visualized using a high-speed camera. The animation sequence of the visualized jet flow field was supplied to a snapshot proper orthogonal decomposition (POD) analysis. Spatio-temporally varying structures superimposed on the unsteady fluid flow, e.g., the axisymmetric mode and the helical mode, were identified from the dominant POD modes. The coefficients of the POD modes give a strong indication of the temporal and spectral features of the corresponding unsteady events. A reconstruction using the time-mean visualization and selected POD modes was conducted to reveal the convective motion of the buried vortical structures. National Natural Science Foundation of China.
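
    The snapshot POD described above can be sketched in a few lines: the (small) snapshot correlation matrix is diagonalized, and its dominant eigenvector weights the snapshots into the leading spatial mode. The pure-Python power iteration below is an illustrative toy, not the authors' analysis code; real flow-field data would typically use an SVD routine.

```python
# Snapshot POD sketch: the dominant eigenvector of the m-by-m snapshot
# correlation matrix yields the leading spatial mode of the flow.
import math

def dominant_pod_mode(snapshots, iters=200):
    """snapshots: list of m flattened fields (lists of floats).
    Returns (mode, coeffs): leading spatial mode and its temporal coefficients."""
    m = len(snapshots)
    # Snapshot correlation matrix C[i][j] = <s_i, s_j> / m
    C = [[sum(a * b for a, b in zip(snapshots[i], snapshots[j])) / m
          for j in range(m)] for i in range(m)]
    # Power iteration for the dominant eigenvector of C
    v = [1.0] * m
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(m)) for i in range(m)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # Spatial mode as the v-weighted sum of snapshots, normalized
    n = len(snapshots[0])
    mode = [sum(v[i] * snapshots[i][k] for i in range(m)) for k in range(n)]
    norm = math.sqrt(sum(x * x for x in mode))
    mode = [x / norm for x in mode]
    # Temporal coefficients: projection of each snapshot onto the mode
    coeffs = [sum(mode[k] * s[k] for k in range(n)) for s in snapshots]
    return mode, coeffs
```

    For rank-one data the product of the mode and its coefficients reconstructs each snapshot exactly, mirroring the reconstruction step mentioned in the abstract.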

  7. Differential Responses to a Visual Self-Motion Signal in Human Medial Cortical Regions Revealed by Wide-View Stimulation

    PubMed Central

    Wada, Atsushi; Sakano, Yuichi; Ando, Hiroshi

    2016-01-01

    Vision is important for estimating self-motion, which is thought to involve optic-flow processing. Here, we investigated the fMRI response profiles in visual area V6, the precuneus motion area (PcM), and the cingulate sulcus visual area (CSv)—three medial brain regions recently shown to be sensitive to optic-flow. We used wide-view stereoscopic stimulation to induce robust self-motion processing. Stimuli included static, randomly moving, and coherently moving dots (simulating forward self-motion). We varied the stimulus size and the presence of stereoscopic information. A combination of univariate and multi-voxel pattern analyses (MVPA) revealed that fMRI responses in the three regions differed from each other. The univariate analysis identified optic-flow selectivity and an effect of stimulus size in V6, PcM, and CSv, among which only CSv showed a significantly lower response to random motion stimuli compared with static conditions. Furthermore, MVPA revealed an optic-flow specific multi-voxel pattern in the PcM and CSv, where the discrimination of coherent motion from both random motion and static conditions showed above-chance prediction accuracy, but that of random motion from static conditions did not. Additionally, while area V6 successfully classified different stimulus sizes regardless of motion pattern, this classification was only partial in PcM and was absent in CSv. This may reflect the known retinotopic representation in V6 and the absence of such clear visuospatial representation in CSv. We also found significant correlations between the strength of subjective self-motion and univariate activation in all examined regions except for primary visual cortex (V1). This neuro-perceptual correlation was significantly higher for V6, PcM, and CSv when compared with V1, and higher for CSv when compared with the visual motion area hMT+. 
Our convergent results suggest the significant involvement of CSv in self-motion processing, which may give rise to its percept. PMID:26973588

  8. Visualizing individual microtubules by bright field microscopy

    NASA Astrophysics Data System (ADS)

    Gutiérrez-Medina, Braulio; Block, Steven M.

    2010-11-01

    Microtubules are slender (~25 nm diameter), filamentous polymers involved in cellular structure and organization. Individual microtubules have been visualized via fluorescence imaging of dye-labeled tubulin subunits and by video-enhanced, differential interference-contrast microscopy of unlabeled polymers using sensitive CCD cameras. We demonstrate the imaging of unstained microtubules using a microscope with conventional bright field optics in conjunction with a webcam-type camera and a light-emitting diode illuminator. The light scattered by microtubules is image-processed to remove the background, reduce noise, and enhance contrast. The setup is based on a commercial microscope with a minimal set of inexpensive components, suitable for implementation in a student laboratory. We show how this approach can be used in a demonstration motility assay, tracking the gliding motions of microtubules driven by the motor protein kinesin.
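
    The background-removal and contrast-enhancement steps described above can be sketched on a toy grayscale stack. The per-pixel median background and linear contrast stretch below are illustrative assumptions, not the authors' exact pipeline.

```python
# Background removal sketch: estimate a static background as the per-pixel
# median over a stack of frames, subtract it, and linearly stretch the
# residuals to [0, 1] to enhance contrast.
import statistics

def remove_background(frames):
    """frames: list of 2-D grayscale images (nested lists of floats).
    Returns the background-subtracted, contrast-stretched stack."""
    h, w = len(frames[0]), len(frames[0][0])
    bg = [[statistics.median(f[i][j] for f in frames) for j in range(w)]
          for i in range(h)]
    residuals = [[[f[i][j] - bg[i][j] for j in range(w)] for i in range(h)]
                 for f in frames]
    lo = min(v for f in residuals for row in f for v in row)
    hi = max(v for f in residuals for row in f for v in row)
    span = (hi - lo) or 1.0          # guard against a flat stack
    return [[[(v - lo) / span for v in row] for row in f] for f in residuals]
```

    A moving filament then stands out as the only structure surviving subtraction of the stationary background.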

  9. High-level, but not low-level, motion perception is impaired in patients with schizophrenia.

    PubMed

    Kandil, Farid I; Pedersen, Anya; Wehnes, Jana; Ohrmann, Patricia

    2013-01-01

    Smooth pursuit eye movements are compromised in patients with schizophrenia and their first-degree relatives. Although research has demonstrated that the motor components of smooth pursuit eye movements are intact, motion perception has been shown to be impaired. In particular, studies have consistently revealed deficits in performance on tasks specific to the high-order motion area V5 (middle temporal area, MT) in patients with schizophrenia. In contrast, data from low-level motion detectors in the primary visual cortex (V1) have been inconsistent. To differentiate between low-level and high-level visual motion processing, we applied a temporal-order judgment task for motion events and a motion-defined figure-ground segregation task to patients with schizophrenia and healthy controls. Successful judgments in both tasks rely on the same low-level motion detectors in V1; however, the first task is further processed in the higher-order motion area MT in the magnocellular (dorsal) pathway, whereas the second task requires subsequent computations in the parvocellular (ventral) pathway in visual area V4 and the inferotemporal cortex (IT). These latter structures are thought to be intact in schizophrenia. Patients with schizophrenia revealed a significantly impaired temporal resolution on the motion-based temporal-order judgment task but only mild impairment in the motion-based segregation task. These results imply that low-level motion detection in V1 is not, or is only slightly, compromised; furthermore, our data restrict the locus of the well-known deficit in motion detection to areas beyond the primary visual cortex.

  10. Analysis of free breathing motion using artifact reduced 4D CT image data

    NASA Astrophysics Data System (ADS)

    Ehrhardt, Jan; Werner, Rene; Frenzel, Thorsten; Lu, Wei; Low, Daniel; Handels, Heinz

    2007-03-01

    The mobility of lung tumors during the respiratory cycle is a source of error in radiotherapy treatment planning. Spatiotemporal CT data sets can be used for studying the motion of lung tumors and inner organs during the breathing cycle. We present methods for the analysis of respiratory motion using 4D CT data in high temporal resolution. An optical flow based reconstruction method was used to generate artifact-reduced 4D CT data sets of lung cancer patients. The reconstructed 4D CT data sets were segmented and the respiratory motion of tumors and inner organs was analyzed. A non-linear registration algorithm is used to calculate the velocity field between consecutive time frames of the 4D data. The resulting velocity field is used to analyze trajectories of landmarks and surface points. By this technique, the maximum displacement of any surface point is calculated, and regions with large respiratory motion are marked. To describe the tumor mobility the motion of the lung tumor center in three orthogonal directions is displayed. Estimated 3D appearance probabilities visualize the movement of the tumor during the respiratory cycle in one static image. Furthermore, correlations between trajectories of the skin surface and the trajectory of the tumor center are determined and skin regions are identified which are suitable for prediction of the internal tumor motion. The results of the motion analysis indicate that the described methods are suitable to gain insight into the spatiotemporal behavior of anatomical and pathological structures during the respiratory cycle.
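
    The trajectory analysis described above can be reduced to a simple sketch: a landmark is propagated through the velocity fields linking consecutive respiratory phases, and its maximum excursion from the start position is recorded. The constant per-phase displacements below are a toy stand-in for the registration output, not patient data.

```python
# Landmark propagation through per-phase displacement fields, plus the
# maximum-displacement summary used to mark regions of large motion.
import math

def propagate(point, velocity_fields):
    """point: (x, y, z); velocity_fields: list of functions mapping a position
    to its displacement toward the next time frame. Returns the trajectory."""
    traj = [point]
    for v in velocity_fields:
        dx, dy, dz = v(traj[-1])
        x, y, z = traj[-1]
        traj.append((x + dx, y + dy, z + dz))
    return traj

def max_displacement(traj):
    """Largest Euclidean distance of any trajectory point from the start."""
    start = traj[0]
    return max(math.dist(p, start) for p in traj)
```

    Applying the same propagation to every surface point yields the per-point maximum-displacement map the abstract describes.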

  11. Visual cortical activity reflects faster accumulation of information from cortically blind fields

    PubMed Central

    Martin, Tim; Das, Anasuya; Huxlin, Krystel R.

    2012-01-01

    Brain responses (from functional magnetic resonance imaging) and components of information processing were investigated in nine cortically blind observers performing a global direction discrimination task. Three of these subjects had responses in perilesional cortex in response to blind field stimulation, whereas the others did not. We used the EZ-diffusion model of decision making to understand how cortically blind subjects make a perceptual decision on stimuli presented within their blind field. We found that these subjects had slower accumulation of information in their blind fields as compared with their good fields and to intact controls. Within cortically blind subjects, activity in perilesional tissue, V3A and hMT+ was associated with a faster accumulation of information for deciding direction of motion of stimuli presented in the blind field. This result suggests that the rate of information accumulation is a critical factor in the degree of impairment in cortical blindness and varies greatly among affected individuals. Retraining paradigms that seek to restore visual functions might benefit from focusing on increasing the rate of information accumulation. PMID:23169923
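
    The EZ-diffusion model named above has a closed-form solution: accuracy and the mean and variance of correct response times determine drift rate (the rate of information accumulation), boundary separation, and non-decision time. The sketch below follows the commonly stated formulation (Wagenmakers and colleagues); the input values are illustrative, not the study's data, and the edge corrections for accuracy of exactly 0.5 or 1.0 are omitted.

```python
# Closed-form EZ-diffusion equations: recover drift rate v, boundary
# separation a, and non-decision time ter from behavioral summaries.
import math

def ez_diffusion(pc, vrt, mrt, s=0.1):
    """pc: proportion correct (0.5 < pc < 1); vrt: variance of correct RTs;
    mrt: mean of correct RTs (seconds). Returns (v, a, ter)."""
    L = math.log(pc / (1.0 - pc))                 # logit of accuracy
    x = L * (L * pc**2 - L * pc + pc - 0.5) / vrt
    v = math.copysign(s * x**0.25, pc - 0.5)      # drift rate
    a = s**2 * L / v                              # boundary separation
    y = -v * a / s**2
    mdt = (a / (2.0 * v)) * (1.0 - math.exp(y)) / (1.0 + math.exp(y))
    return v, a, mrt - mdt                        # ter = mrt - mean decision time
```

    Higher accuracy at the same RT variance yields a larger drift rate, which is the sense in which the blind-field stimuli above correspond to slower accumulation of information.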

  12. Use of a Computer Simulation To Develop Mental Simulations for Understanding Relative Motion Concepts.

    ERIC Educational Resources Information Center

    Monaghan, James M.; Clement, John

    1999-01-01

    Presents evidence for students' qualitative and quantitative difficulties with apparently simple one-dimensional relative-motion problems, students' spontaneous visualization of relative-motion problems, the visualizations facilitating solution of these problems, and students' memories of the online computer simulation used as a framework for…

  13. Experience affects the use of ego-motion signals during 3D shape perception

    PubMed Central

    Jain, Anshul; Backus, Benjamin T.

    2011-01-01

    Experience has long-term effects on perceptual appearance (Q. Haijiang, J. A. Saunders, R. W. Stone, & B. T. Backus, 2006). We asked whether experience affects the appearance of structure-from-motion stimuli when the optic flow is caused by observer ego-motion. Optic flow is an ambiguous depth cue: a rotating object and its oppositely rotating, depth-inverted dual generate similar flow. However, the visual system exploits ego-motion signals to prefer the percept of an object that is stationary over one that rotates (M. Wexler, F. Panerai, I. Lamouret, & J. Droulez, 2001). We replicated this finding and asked whether this preference for stationarity, the “stationarity prior,” is modulated by experience. During training, two groups of observers were exposed to objects with identical flow, but that were either stationary or moving as determined by other cues. The training caused identical test stimuli to be seen preferentially as stationary or moving by the two groups, respectively. We then asked whether different priors can exist independently at different locations in the visual field. Observers were trained to see objects either as stationary or as moving at two different locations. Observers’ stationarity bias at the two respective locations was modulated in the directions consistent with training. Thus, the utilization of extraretinal ego-motion signals for disambiguating optic flow signals can be updated as the result of experience, consistent with the updating of a Bayesian prior for stationarity. PMID:21191132

  14. The search for instantaneous vection: An oscillating visual prime reduces vection onset latency.

    PubMed

    Palmisano, Stephen; Riecke, Bernhard E

    2018-01-01

    Typically it takes up to 10 seconds or more to induce a visual illusion of self-motion ("vection"). However, for this vection to be most useful in virtual reality and vehicle simulation, it needs to be induced quickly, if not immediately. This study examined whether vection onset latency could be reduced towards zero using visual display manipulations alone. In the main experiments, visual self-motion simulations were presented to observers via either a large external display or a head-mounted display (HMD). Priming observers with visually simulated viewpoint oscillation for just ten seconds before the main self-motion display was found to markedly reduce vection onset latencies (and also increase ratings of vection strength) in both experiments. As in earlier studies, incorporating this simulated viewpoint oscillation into the self-motion displays themselves was also found to improve vection. Average onset latencies were reduced from 8–9 s in the no-oscillation control condition to as little as 4.6 s (for external displays) or 1.7 s (for HMDs) in the combined oscillation condition (when both the visual prime and the main self-motion display were oscillating). As these display manipulations did not appear to increase the likelihood or severity of motion sickness in the current study, they could possibly be used to enhance computer-generated simulation experiences and training in the future, at no additional cost.

  15. The search for instantaneous vection: An oscillating visual prime reduces vection onset latency

    PubMed Central

    Palmisano, Stephen; Riecke, Bernhard E.

    2018-01-01

    Typically it takes up to 10 seconds or more to induce a visual illusion of self-motion (“vection”). However, for this vection to be most useful in virtual reality and vehicle simulation, it needs to be induced quickly, if not immediately. This study examined whether vection onset latency could be reduced towards zero using visual display manipulations alone. In the main experiments, visual self-motion simulations were presented to observers via either a large external display or a head-mounted display (HMD). Priming observers with visually simulated viewpoint oscillation for just ten seconds before the main self-motion display was found to markedly reduce vection onset latencies (and also increase ratings of vection strength) in both experiments. As in earlier studies, incorporating this simulated viewpoint oscillation into the self-motion displays themselves was also found to improve vection. Average onset latencies were reduced from 8–9 s in the no-oscillation control condition to as little as 4.6 s (for external displays) or 1.7 s (for HMDs) in the combined oscillation condition (when both the visual prime and the main self-motion display were oscillating). As these display manipulations did not appear to increase the likelihood or severity of motion sickness in the current study, they could possibly be used to enhance computer-generated simulation experiences and training in the future, at no additional cost. PMID:29791445

  16. Applications of Phase-Based Motion Processing

    NASA Technical Reports Server (NTRS)

    Branch, Nicholas A.; Stewart, Eric C.

    2018-01-01

    Image pyramids provide useful information for determining structural response at low cost using commercially available cameras. The current effort applies previous work on the complex steerable pyramid to analyze and identify imperceptible linear motions in video. Instead of implicitly computing motion spectra through phase analysis of the complex steerable pyramid and magnifying the associated motions, we present a visual technique and the necessary software to display the phase changes of high-frequency signals within video. The present technique quickly identifies the regions of largest motion within a video with a single phase visualization and without the artifacts of motion magnification, but requires the computationally intensive Fourier transform. While Riesz pyramids present an alternative to the computationally intensive complex steerable pyramid for motion magnification, the Riesz formulation contains significant noise, and motion magnification still produces large amounts of data that cannot be quickly assessed by the human eye. Thus, user-friendly software is presented for quickly identifying structural response through optical flow and phase visualization, in both Python and MATLAB.
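
    The core idea behind phase-based motion analysis is the Fourier shift theorem: a spatial shift appears as a linear phase difference between frames, so motion can be read off the phase of the cross-power spectrum. The 1-D pure-Python phase correlation below is a toy sketch of that principle, not the paper's pyramid software.

```python
# Phase correlation: recover the circular shift between two 1-D signals
# from the phase of their normalized cross-power spectrum.
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform (fine for small toy signals)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def phase_correlate(a, b):
    """Return the circular shift (in samples) that best maps a onto b."""
    fa, fb = dft(a), dft(b)
    # Normalizing the cross-power spectrum keeps only the phase difference
    r = [(x * y.conjugate()) / (abs(x * y.conjugate()) or 1.0)
         for x, y in zip(fb, fa)]
    n = len(a)
    # The inverse DFT of the phase spectrum peaks at the shift
    corr = [abs(sum(r[k] * cmath.exp(2j * cmath.pi * k * t / n)
                    for k in range(n))) for t in range(n)]
    return max(range(n), key=corr.__getitem__)
```

    In the pyramid-based methods above, the same phase comparison is applied per scale and orientation, which is what allows imperceptibly small motions to be localized.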

  17. Ageing vision and falls: a review.

    PubMed

    Saftari, Liana Nafisa; Kwon, Oh-Sang

    2018-04-23

    Falls are the leading cause of accidental injury and death among older adults. One in three adults over the age of 65 years falls annually. As the size of the elderly population increases, falls become a major concern for public health and there is a pressing need to understand the causes of falls thoroughly. While it is well documented that visual functions such as visual acuity, contrast sensitivity, and stereo acuity are correlated with fall risks, little attention has been paid to the relationship between falls and the ability of the visual system to perceive motion in the environment. The omission of visual motion perception in the literature is a critical gap because it is an essential function in maintaining balance. In the present article, we first review existing studies regarding visual risk factors for falls and the effect of ageing vision on falls. We then present a group of phenomena such as vection and sensory reweighting that provide information on how visual motion signals are used to maintain balance. We suggest that the current list of visual risk factors for falls should be elaborated by taking into account the relationship between visual motion perception and balance control.

  18. Effects of feature-based attention on the motion aftereffect at remote locations.

    PubMed

    Boynton, Geoffrey M; Ciaramitaro, Vivian M; Arman, A Cyrus

    2006-09-01

    Previous studies have shown that attention to a particular stimulus feature, such as direction of motion or color, enhances neuronal responses to unattended stimuli sharing that feature. We studied this effect psychophysically by measuring the strength of the motion aftereffect (MAE) induced by an unattended stimulus when attention was directed to one of two overlapping fields of moving dots in a different spatial location. When attention was directed to the same direction of motion as the unattended stimulus, the unattended stimulus induced a stronger MAE than when attention was directed to the opposite direction. Also, when the unattended location contained uncorrelated motion or no stimulus at all, an MAE was induced in the direction opposite to the attended direction of motion. The strength of the MAE was similar regardless of whether subjects attended to the speed or luminance of the attended dots. These results provide further support for a global feature-based mechanism of attention, and show that the effect spreads across all features of an attended object, and to all locations of visual space.

  19. New insights into the role of motion and form vision in neurodevelopmental disorders.

    PubMed

    Johnston, Richard; Pitchford, Nicola J; Roach, Neil W; Ledgeway, Timothy

    2017-12-01

    A selective deficit in processing the global (overall) motion, but not form, of spatially extensive objects in the visual scene is frequently associated with several neurodevelopmental disorders, including preterm birth. Existing theories proposed to explain the origin of this visual impairment are, however, challenged by recent research. In this review, we explore alternative hypotheses for why deficits in the processing of global motion, relative to global form, might arise. We describe recent evidence that has utilised novel tasks of global motion and global form to elucidate the underlying nature of the visual deficit reported in different neurodevelopmental disorders. We also examine the role of IQ and how the sex of an individual can influence performance on these tasks, as these are factors that are associated with performance on global motion tasks, but have not been systematically controlled for in previous studies exploring visual processing in clinical populations. Finally, we suggest that a new theoretical framework is needed for visual processing in neurodevelopmental disorders and present recommendations for future research. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. Real-world applications of artificial neural networks to cardiac monitoring using radar and recent theoretical developments

    NASA Astrophysics Data System (ADS)

    Padgett, Mary Lou; Johnson, John L.; Vemuri, V. Rao

    1997-04-01

    This paper focuses on the use of a new image filtering technique, Pulse-Coupled Neural Network (PCNN) factoring, to enhance both the analysis and visual interpretation of noisy sinusoidal time signals, such as those produced by LLNL's Micropower Impulse Radar motion sensor. Separation of a slower carrier wave from faster, finer-detailed signals and from scattered noise is illustrated. The resulting images clearly illustrate the changes over time of simulated heart motion patterns. Such images can potentially assist a field medic in interpreting the extent of combat injuries. These images can also be transmitted or stored and retrieved for later analysis.

  1. Helicopter flight simulation motion platform requirements

    NASA Astrophysics Data System (ADS)

    Schroeder, Jeffery Allyn

    Flight simulators attempt to reproduce in-flight pilot-vehicle behavior on the ground. This reproduction is challenging for helicopter simulators, as the pilot is often inextricably dependent on external cues for pilot-vehicle stabilization. One important simulator cue is platform motion; however, its required fidelity is unknown. To determine the required motion fidelity, several unique experiments were performed. A large displacement motion platform was used that allowed pilots to fly tasks with matched motion and visual cues. Then, the platform motion was modified to give cues varying from full motion to no motion. Several key results were found. First, lateral and vertical translational platform cues had significant effects on fidelity. Their presence improved performance and reduced pilot workload. Second, yaw and roll rotational platform cues were not as important as the translational platform cues. In particular, the yaw rotational motion platform cue did not appear at all useful in improving performance or reducing workload. Third, when the lateral translational platform cue was combined with visual yaw rotational cues, pilots believed the platform was rotating when it was not. Thus, simulator systems can be made more efficient by proper combination of platform and visual cues. Fourth, motion fidelity specifications were revised that now provide simulator users with a better prediction of motion fidelity based upon the frequency responses of their motion control laws. Fifth, vertical platform motion affected pilot estimates of steady-state altitude during altitude repositionings. This refutes the view that pilots estimate altitude and altitude rate in simulation solely from visual cues. Finally, the combined results led to a general method for configuring helicopter motion systems and for developing simulator tasks that more likely represent actual flight. The overall results can serve as a guide to future simulator designers and to today's operators.

  2. The notion of the motion: the neurocognition of motion lines in visual narratives.

    PubMed

    Cohn, Neil; Maher, Stephen

    2015-03-19

    Motion lines appear ubiquitously in graphic representation to depict the path of a moving object, most popularly in comics. Some researchers have argued that these graphic signs directly tie to the "streaks" appearing in the visual system when a viewer tracks an object (Burr, 2000), despite the fact that previous studies have been limited to offline measurements. Here, we directly examine the cognition of motion lines by comparing images in comic strips that depicted normal motion lines with those that either had no lines or anomalous, reversed lines. In Experiment 1, viewing times were shorter for images with normal lines than for those with no lines, which in turn were shorter than for those with anomalous lines. In Experiment 2, measurements of event-related potentials (ERPs) showed that, compared to normal lines, panels with no lines elicited a posterior positivity that was distinct from the frontal positivity evoked by anomalous lines. These results suggested that motion lines aid in the comprehension of depicted events. LORETA source localization implicated greater activation of visual and language areas when understanding was made more difficult by anomalous lines. Furthermore, in both experiments, participants' experience reading comics modulated these effects, suggesting motion lines are not tied to aspects of the visual system, but rather are conventionalized parts of the "vocabulary" of the visual language of comics. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. The notion of the motion: The neurocognition of motion lines in visual narratives

    PubMed Central

    Cohn, Neil; Maher, Stephen

    2015-01-01

    Motion lines appear ubiquitously in graphic representation to depict the path of a moving object, most popularly in comics. Some researchers have argued that these graphic signs directly tie to the “streaks” appearing in the visual system when a viewer tracks an object (Burr, 2000), despite the fact that previous studies have been limited to offline measurements. Here, we directly examine the cognition of motion lines by comparing images in comic strips that depicted normal motion lines with those that either had no lines or anomalous, reversed lines. In Experiment 1, viewing times were shorter for images with normal lines than for those with no lines, which in turn were shorter than for those with anomalous lines. In Experiment 2, measurements of event-related potentials (ERPs) showed that, compared to normal lines, panels with no lines elicited a posterior positivity that was distinct from the frontal positivity evoked by anomalous lines. These results suggested that motion lines aid in the comprehension of depicted events. LORETA source localization implicated greater activation of visual and language areas when understanding was made more difficult by anomalous lines. Furthermore, in both experiments, participants' experience reading comics modulated these effects, suggesting motion lines are not tied to aspects of the visual system, but rather are conventionalized parts of the “vocabulary” of the visual language of comics. PMID:25601006

  4. Oculomotor Reflexes as a Test of Visual Dysfunctions in Cognitively Impaired Observers

    DTIC Science & Technology

    2013-09-01

    right. Gaze horizontal position is plotted along the y-axis. The red bar indicates a visual nystagmus event detected by the filter. (d) A mild curse word...experimental conditions were chosen to simulate testing cognitively impaired observers. Reflex Stimulus Functions Visual Nystagmus luminance grating low-level...developed a new stimulus for visual nystagmus to 8 test visual motion processing in the presence of incoherent motion noise. The drifting equiluminant

  5. Development of Visual Motion Perception for Prospective Control: Brain and Behavioral Studies in Infants

    PubMed Central

    Agyei, Seth B.; van der Weel, F. R. (Ruud); van der Meer, Audrey L. H.

    2016-01-01

    During infancy, smart perceptual mechanisms develop, allowing infants to judge time-space motion dynamics more efficiently with age and locomotor experience. This emerging capacity may be vital to enable preparedness for upcoming events and to be able to navigate in a changing environment. Little is known about brain changes that support the development of prospective control and about processes, such as preterm birth, that may compromise it. Focusing on the perception of visual motion, this paper describes behavioral and brain studies with young infants investigating the development of visual perception for prospective control. By means of the three visual motion paradigms of occlusion, looming, and optic flow, our research shows the importance of including behavioral data when studying the neural correlates of prospective control. PMID:26903908

  6. Effects of Spatio-Temporal Aliasing on Out-the-Window Visual Systems

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara T.; Stone, Leland S.; Liston, Dorion B.; Hebert, Tim M.

    2014-01-01

    Designers of out-the-window visual systems face a challenge when attempting to simulate the outside world as viewed from a cockpit. Many methodologies have been developed and adopted to aid in the depiction of particular scene features, or levels of static image detail. However, because aircraft move, it is necessary to also consider the quality of the motion in the simulated visual scene. When motion is introduced in the simulated visual scene, perceptual artifacts can become apparent. A particular artifact related to image motion, spatio-temporal aliasing, will be addressed. The causes of spatio-temporal aliasing will be discussed, and current knowledge regarding the impact of these artifacts on both motion perception and simulator task performance will be reviewed. Methods of reducing the impact of this artifact are also addressed.

  7. The Mechanism for Processing Random-Dot Motion at Various Speeds in Early Visual Cortices

    PubMed Central

    An, Xu; Gong, Hongliang; McLoughlin, Niall; Yang, Yupeng; Wang, Wei

    2014-01-01

    All moving objects generate sequential retinotopic activations representing a series of discrete locations in space and time (motion trajectory). How direction-selective neurons in mammalian early visual cortices process motion trajectory remains to be clarified. Using single-cell recording and optical imaging of intrinsic signals along with mathematical simulation, we studied response properties of cat visual areas 17 and 18 to random dots moving at various speeds. We found that the motion trajectory at low speed was encoded primarily as a direction signal by groups of neurons preferring that motion direction. Above certain transition speeds, the motion trajectory is perceived as a spatial orientation representing the motion axis of the moving dots. In both areas studied, above these speeds, other groups of direction-selective neurons with perpendicular direction preferences were activated to encode the motion trajectory as motion-axis information. This applied to both simple and complex neurons. The average transition speed for switching between encoding motion direction and axis was about 31°/s in area 18 and 15°/s in area 17. A spatio-temporal energy model predicted the transition speeds accurately in both areas, but not the direction-selective indexes to random-dot stimuli in area 18. In addition, above transition speeds, the change of direction preferences of population responses recorded by optical imaging could be revealed using the vector-maximum but not the vector-summation method. Together, this combined processing of motion direction and axis by neurons with orthogonal direction preferences associated with speed may serve as a common principle of early visual motion processing. PMID:24682033
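
    The spatio-temporal energy model invoked in this record can be sketched in one spatial dimension plus time, using a quadrature pair of oriented space-time filters in the style of standard motion-energy models. This is an illustrative sketch, not the authors' simulation; the grid size and frequencies are assumed values.

```python
import numpy as np

N, T = 64, 64                 # spatial and temporal samples
x, t = np.arange(N), np.arange(T)
X, Ti = np.meshgrid(x, t, indexing="ij")

fx, ft = 4 / N, 4 / T         # cycles per sample in space and time
# A grating drifting rightward: phase decreases with time at each position
stim = np.cos(2 * np.pi * (fx * X - ft * Ti))

def motion_energy(s, direction):
    # Quadrature (even/odd) pair of space-time filters oriented for one
    # motion direction; energy = sum of squared responses, phase-invariant.
    sign = -1 if direction == "right" else +1
    phase = 2 * np.pi * (fx * X + sign * ft * Ti)
    even, odd = np.cos(phase), np.sin(phase)
    return np.sum(s * even) ** 2 + np.sum(s * odd) ** 2

E_right = motion_energy(stim, "right")
E_left = motion_energy(stim, "left")
```

    The rightward-oriented pair responds strongly to the rightward-drifting grating while the leftward pair is nearly silent; opponent energy (E_right - E_left) gives a direction signal of the kind such models use to predict direction selectivity.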

  8. Visual Neuroscience: Unique Neural System for Flight Stabilization in Hummingbirds.

    PubMed

    Ibbotson, M R

    2017-01-23

    The pretectal visual motion processing area in the hummingbird brain is unlike that in other birds: instead of emphasizing detection of horizontal movements, it codes for motion in all directions through 360°, possibly offering precise visual stability control during hovering. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Multisensory Self-Motion Compensation During Object Trajectory Judgments

    PubMed Central

    Dokka, Kalpana; MacNeilage, Paul R.; DeAngelis, Gregory C.; Angelaki, Dora E.

    2015-01-01

    Judging object trajectory during self-motion is a fundamental ability for mobile organisms interacting with their environment. This fundamental ability requires the nervous system to compensate for the visual consequences of self-motion in order to make accurate judgments, but the mechanisms of this compensation are poorly understood. We comprehensively examined both the accuracy and precision of observers' ability to judge object trajectory in the world when self-motion was defined by vestibular, visual, or combined visual–vestibular cues. Without decision feedback, subjects demonstrated no compensation for self-motion that was defined solely by vestibular cues, partial compensation (47%) for visually defined self-motion, and significantly greater compensation (58%) during combined visual–vestibular self-motion. With decision feedback, subjects learned to accurately judge object trajectory in the world, and this generalized to novel self-motion speeds. Across conditions, greater compensation for self-motion was associated with decreased precision of object trajectory judgments, indicating that self-motion compensation comes at the cost of reduced discriminability. Our findings suggest that the brain can flexibly represent object trajectory relative to either the observer or the world, but a world-centered representation comes at the cost of decreased precision due to the inclusion of noisy self-motion signals. PMID:24062317

  10. Pure visual imagery as a potential approach to achieve three classes of control for implementation of BCI in non-motor disorders

    NASA Astrophysics Data System (ADS)

    Sousa, Teresa; Amaral, Carlos; Andrade, João; Pires, Gabriel; Nunes, Urbano J.; Castelo-Branco, Miguel

    2017-08-01

    Objective. The achievement of multiple instances of control with the same type of mental strategy represents a way to improve flexibility of brain-computer interface (BCI) systems. Here we test the hypothesis that pure visual motion imagery of an external actuator can be used as a tool to achieve three classes of electroencephalographic (EEG) based control, which might be useful in attention disorders. Approach. We hypothesize that different numbers of imagined motion alternations lead to distinctive signals, as predicted by distinct motion patterns. Accordingly, a distinct number of alternating sensory/perceptual signals would lead to distinct neural responses as previously demonstrated using functional magnetic resonance imaging (fMRI). We anticipate that differential modulations should also be observed in the EEG domain. EEG recordings were obtained from twelve participants using three imagery tasks: imagery of a static dot, imagery of a dot with two opposing motions in the vertical axis (two motion directions) and imagery of a dot with four opposing motions in vertical or horizontal axes (four directions). The data were analysed offline. Main results. An increase of alpha-band power was found in frontal and central channels as a result of visual motion imagery tasks when compared with static dot imagery, in contrast with the expected posterior alpha decreases found during simple visual stimulation. The successful classification and discrimination between the three imagery tasks confirmed that three different classes of control based on visual motion imagery can be achieved. The classification approach was based on a support vector machine (SVM) and on the alpha-band relative spectral power of a small group of six frontal and central channels. Patterns of alpha activity, as captured by single-trial SVM closely reflected imagery properties, in particular the number of imagined motion alternations. Significance. 
We found a new mental task based on visual motion imagery with potential for the implementation of multiclass (3) BCIs. Our results are consistent with the notion that frontal alpha synchronization is related to high internal processing demands, changing with the number of alternation levels during imagery. Together, these findings suggest the feasibility of pure visual motion imagery tasks as a strategy to achieve multiclass control systems with potential for BCI and in particular, neurofeedback applications in non-motor (attentional) disorders.
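
    The classification idea in this record, separating imagery conditions by alpha-band relative spectral power, can be sketched with synthetic data. For self-containment, a simple threshold midway between the class means stands in for the paper's SVM, and the sampling rate, amplitudes, and trial counts are illustrative assumptions.

```python
import numpy as np

fs = 250                # sampling rate in Hz (assumed)
dur = 2.0               # trial length in seconds
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(0)

def trial(alpha_amp):
    # Synthetic single-channel EEG: 10 Hz alpha rhythm plus broadband noise
    return alpha_amp * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)

def rel_alpha_power(sig):
    # Relative spectral power in the 8-12 Hz alpha band
    spec = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(sig.size, 1 / fs)
    band = (freqs >= 8) & (freqs <= 12)
    return spec[band].sum() / spec.sum()

# Two imagery "classes": stronger vs weaker frontal alpha synchronization
high = np.array([rel_alpha_power(trial(2.0)) for _ in range(20)])
low = np.array([rel_alpha_power(trial(0.5)) for _ in range(20)])

# Threshold classifier standing in for the SVM decision boundary
thresh = (high.mean() + low.mean()) / 2
acc = (np.sum(high > thresh) + np.sum(low <= thresh)) / 40
```

    In the actual study the feature vector would span several frontal and central channels and feed a trained SVM; the sketch only shows why alpha-band relative power is a separable feature.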

  11. Visualization of Kepler’s laws of planetary motion

    NASA Astrophysics Data System (ADS)

    Lu, Meishu; Su, Jun; Wang, Weiguo; Lu, Jianlong

    2017-03-01

    For this article, we use a 3D printer to print a surface analogous to a gravitational potential for demonstrating and investigating Kepler’s laws of planetary motion by observing the motion of a small ball on the surface. This novel experimental method allows Kepler’s laws of planetary motion to be visualized and will contribute to improving the manipulative ability of middle school students and the accessibility of classroom education.

  12. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion

    PubMed Central

    Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo

    2017-01-01

    In mobile augmented/virtual reality (AR/VR), real-time 6-Degree of Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of mobile terminals today, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial based real-time motion tracking for mobile AR/VR is proposed in this paper. By means of high frequency and passive outputs from the inertial sensor, the real-time performance of arriving poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during the visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling the real-time 6-DoF motion tracking by balancing the jitter and latency. Besides, the robustness of the traditional visual-only based motion tracking is enhanced, giving rise to a better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing a smooth and robust 6-DoF motion tracking for mobile AR/VR in real-time. PMID:28475145
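
    The prediction-correction structure of such visual-inertial fusion can be sketched as a complementary filter on a single yaw axis: high-rate inertial integration supplies low-latency poses, and low-rate visual fixes bound the drift. The sensor rates, gyro bias, noise levels, and gain below are illustrative assumptions, not values from the paper.

```python
import numpy as np

fs_imu, fs_vis = 200, 20      # gyro and visual pose rates in Hz (assumed)
dt = 1 / fs_imu
t = np.arange(0, 5, dt)
true_yaw = np.sin(0.5 * t)    # ground-truth yaw angle in radians
rng = np.random.default_rng(1)

# Gyro: true angular rate plus a constant bias and white noise
gyro = np.gradient(true_yaw, dt) + 0.05 + rng.normal(0, 0.02, t.size)
vis_step = fs_imu // fs_vis   # one visual fix per this many IMU samples

alpha = 0.1                   # correction gain: higher trusts vision more
est = np.zeros_like(t)
for k in range(1, t.size):
    est[k] = est[k - 1] + gyro[k] * dt       # high-rate inertial prediction
    if k % vis_step == 0:                     # low-rate visual correction
        vis = true_yaw[k] + rng.normal(0, 0.05)  # noisy visual yaw measurement
        est[k] += alpha * (vis - est[k])

drift_only = np.cumsum(gyro) * dt             # integration with no correction
err_fused = np.sqrt(np.mean((est - true_yaw) ** 2))
err_drift = np.sqrt(np.mean((drift_only - true_yaw) ** 2))
```

    An adaptive scheme of the kind the record describes would vary the gain with the detected motion state, trading jitter (too much visual correction) against latency and drift (too little).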

  13. Use of cues in virtual reality depends on visual feedback.

    PubMed

    Fulvio, Jacqueline M; Rokers, Bas

    2017-11-22

    3D motion perception is of central importance to daily life. However, when tested in laboratory settings, sensitivity to 3D motion signals is found to be poor, leading to the view that heuristics and prior assumptions are critical for 3D motion perception. Here we explore an alternative: sensitivity to 3D motion signals is context-dependent and must be learned based on explicit visual feedback in novel environments. The need for action-contingent visual feedback is well-established in the developmental literature. For example, young kittens that are passively moved through an environment, but unable to move through it themselves, fail to develop accurate depth perception. We find that these principles also obtain in adult human perception. Observers that do not experience visual consequences of their actions fail to develop accurate 3D motion perception in a virtual reality environment, even after prolonged exposure. By contrast, observers that experience the consequences of their actions improve performance based on available sensory cues to 3D motion. Specifically, we find that observers learn to exploit the small motion parallax cues provided by head jitter. Our findings advance understanding of human 3D motion processing and form a foundation for future study of perception in virtual and natural 3D environments.

  14. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion.

    PubMed

    Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo

    2017-05-05

    In mobile augmented/virtual reality (AR/VR), real-time 6-Degree of Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of mobile terminals today, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial based real-time motion tracking for mobile AR/VR is proposed in this paper. By means of high frequency and passive outputs from the inertial sensor, the real-time performance of arriving poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during the visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling the real-time 6-DoF motion tracking by balancing the jitter and latency. Besides, the robustness of the traditional visual-only based motion tracking is enhanced, giving rise to a better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing a smooth and robust 6-DoF motion tracking for mobile AR/VR in real-time.

  15. Form and motion make independent contributions to the response to biological motion in occipitotemporal cortex.

    PubMed

    Thompson, James C; Baccus, Wendy

    2012-01-02

    Psychophysical and computational studies have provided evidence that both form and motion cues are used in the perception of biological motion. However, neuroimaging and neurophysiological studies have suggested that the neural processing of actions in temporal cortex might rely on form cues alone. Here we examined the contribution of form and motion to the spatial pattern of response to biological motion in ventral and lateral occipitotemporal cortex, using functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA). We found that selectivity to intact versus scrambled biological motion in lateral occipitotemporal cortex was correlated with selectivity for bodies and not for motion. However, this appeared to be due to the fact that subtracting scrambled from intact biological motion removes any contribution of local motion cues. Instead, we found that form and motion made independent contributions to the spatial pattern of responses to biological motion in lateral occipitotemporal regions MT, MST, and the extrastriate body area. The motion contribution was position-dependent, and consistent with the representation of contra- and ipsilateral visual fields in MT and MST. In contrast, only form contributed to the response to biological motion in the fusiform body area, with a bias towards central versus peripheral presentation. These results indicate that the pattern of response to biological motion in ventral and lateral occipitotemporal cortex reflects the linear combination of responses to form and motion. Copyright © 2011 Elsevier Inc. All rights reserved.

  16. New Perspectives on Active Tectonics: Observing Fault Motion, Mapping Earthquake Strain Fields, and Visualizing Seismic Events in Multiple Dimensions Using Satellite Imagery and Geophysical Data Base

    NASA Technical Reports Server (NTRS)

    Crippen, R.; Blom, R.

    1994-01-01

    By rapidly alternating displays of SPOT satellite images acquired on 27 July 1991 and 25 July 1992 we are able to see spatial details of terrain movements along fault breaks associated with the 28 June 1992 Landers, California earthquake that are virtually undetectable by any other means.

  17. Advances in Oculomotor and Vestibular Physiology

    DTIC Science & Technology

    1980-11-24

    and nucleus reticularis tegmenti pontis in serving as an input to the flocculus and in mediating visual-vestibular interactions. Karten showed direct...demonstrated that the region in and around the Interstitial Nucleus of Cajal is important for the generation of torsional or rolling eye movements. 3...demonstrated that activity related to full field motion reaches the flocculus of mammals over mossy fiber inputs that arise from cells in nucleus

  18. Instructional Materials for Teaching Audiovisual Courses; An Annotated List of Motion Pictures, Kinescopes, Filmstrips, Slidesets, Recordings, Tapes, and Prints

    ERIC Educational Resources Information Center

    Shockey, Carolyn, Ed.

    A catalog of audio and visual materials for teaching courses on or illustrating all aspects of audiovisual instruction was developed with a broad coverage of those areas of interest pertinent to the field of instructional communications. The listings should be of value to the college instructor in the area of instructional materials, as well as…

  19. Figure-ground modulation in awake primate thalamus.

    PubMed

    Jones, Helen E; Andolina, Ian M; Shipp, Stewart D; Adams, Daniel L; Cudeiro, Javier; Salt, Thomas E; Sillito, Adam M

    2015-06-02

    Figure-ground discrimination refers to the perception of an object, the figure, against a nondescript background. Neural mechanisms of figure-ground detection have been associated with feedback interactions between higher centers and primary visual cortex and have been held to index the effect of global analysis on local feature encoding. Here, in recordings from visual thalamus of alert primates, we demonstrate a robust enhancement of neuronal firing when the figure, as opposed to the ground, component of a motion-defined figure-ground stimulus is located over the receptive field. In this paradigm, visual stimulation of the receptive field and its near environs is identical across both conditions, suggesting the response enhancement reflects higher integrative mechanisms. It thus appears that cortical activity generating the higher-order percept of the figure is simultaneously reentered into the lowest level that is anatomically possible (the thalamus), so that the signature of the evolving representation of the figure is imprinted on the input driving it in an iterative process.

  20. Figure-ground modulation in awake primate thalamus

    PubMed Central

    Jones, Helen E.; Andolina, Ian M.; Shipp, Stewart D.; Adams, Daniel L.; Cudeiro, Javier; Salt, Thomas E.; Sillito, Adam M.

    2015-01-01

    Figure-ground discrimination refers to the perception of an object, the figure, against a nondescript background. Neural mechanisms of figure-ground detection have been associated with feedback interactions between higher centers and primary visual cortex and have been held to index the effect of global analysis on local feature encoding. Here, in recordings from visual thalamus of alert primates, we demonstrate a robust enhancement of neuronal firing when the figure, as opposed to the ground, component of a motion-defined figure-ground stimulus is located over the receptive field. In this paradigm, visual stimulation of the receptive field and its near environs is identical across both conditions, suggesting the response enhancement reflects higher integrative mechanisms. It thus appears that cortical activity generating the higher-order percept of the figure is simultaneously reentered into the lowest level that is anatomically possible (the thalamus), so that the signature of the evolving representation of the figure is imprinted on the input driving it in an iterative process. PMID:25901330

  1. Perceptual grouping across eccentricity.

    PubMed

    Tannazzo, Teresa; Kurylo, Daniel D; Bukhari, Farhan

    2014-10-01

    Across the visual field, progressive differences exist in neural processing as well as perceptual abilities. Expansion of stimulus scale across eccentricity compensates for some basic visual capacities, but not for higher-order functions. It was hypothesized that, as with many higher-order functions, perceptual grouping ability should decline across eccentricity. To test this prediction, psychophysical measurements of grouping were made across eccentricity. Participants indicated the dominant grouping of dot grids in which grouping was based upon luminance, motion, orientation, or proximity. Across trials, the organization of stimuli was systematically decreased until perceived grouping became ambiguous. For all stimulus features, grouping ability remained relatively stable until 40°, beyond which thresholds were significantly elevated. The pattern of change across eccentricity varied with stimulus feature, in which stimulus scale, dot size, or stimulus size interacted with eccentricity effects. These results demonstrate that perceptual grouping of such stimuli is not reliant upon foveal viewing, and suggest that selection of dominant grouping patterns from ambiguous displays operates similarly across much of the visual field. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. The spiral aftereffect. III, Some effects of perceived size, retinal size, and retinal speed on the duration of illusory motion.

    DOT National Transportation Integrated Search

    1971-07-01

    Many safety problems encountered in aviation have been attributed to visual illusions. One of the various types of visual illusions, that of apparent motion, includes as an aftereffect the apparent reversed motion of an object after it ceases real mo...

  3. The analysis of image motion by the rabbit retina

    PubMed Central

    Oyster, C. W.

    1968-01-01

    1. Micro-electrode recordings were made from rabbit retinal ganglion cells or their axons. Of particular interest were direction-selective units; the common on-off type represented 20.6% of the total sample (762 units), and the on-type comprised 5% of the total. 2. From the large sample of direction-selective units, it was found that on-off units were maximally sensitive to only four directions of movement; these directions, in the visual field, were, roughly, anterior, superior, posterior and inferior. The on-type units were maximally sensitive to only three directions: anterior, superior and inferior. 3. The direction-selective units' responses vary with stimulus velocity; both unit types are more sensitive to velocity change than to absolute speed. On-off units respond to movement at speeds from 6′/sec to 10°/sec; the on-type units respond from as slow as 30″/sec up to about 2°/sec. On-type units are clearly slow-movement detectors. 4. The distribution of direction-selective units depends on retinal locality. On-off units are more common outside the 'visual streak' (area centralis) than within it, while the reverse is true for the on-type units. 5. A stimulus configuration was found which would elicit responses from on-type units when the stimulus was moved in the null direction. This 'paradoxical response' was shown to be associated with the silent receptive field surround. 6. The four preferred directions of the on-off units were shown to correspond to the directions of retinal image motion produced by contractions of the four rectus eye muscles. This fact, combined with data on velocity sensitivity and retinal distribution of on-off units, suggests that the on-off units are involved in the control of reflex eye movements. 7. The on-off direction-selective units may provide error signals to a visual servo system which minimizes retinal image motion. This hypothesis agrees with the known characteristics of the rabbit's visual following reflexes, specifically, the slow phase of optokinetic nystagmus. PMID:5710424

  4. Motion processing with two eyes in three dimensions.

    PubMed

    Rokers, Bas; Czuba, Thaddeus B; Cormack, Lawrence K; Huk, Alexander C

    2011-02-11

    The movement of an object toward or away from the head is perhaps the most critical piece of information an organism can extract from its environment. Such 3D motion produces horizontally opposite motions on the two retinae. Little is known about how or where the visual system combines these two retinal motion signals, relative to the wealth of knowledge about the neural hierarchies involved in 2D motion processing and binocular vision. Canonical conceptions of primate visual processing assert that neurons early in the visual system combine monocular inputs into a single cyclopean stream (lacking eye-of-origin information) and extract 1D ("component") motions; later stages then extract 2D pattern motion from the cyclopean output of the earlier stage. Here, however, we show that 3D motion perception is in fact affected by the comparison of opposite 2D pattern motions between the two eyes. Three-dimensional motion sensitivity depends systematically on pattern motion direction when dichoptically viewing gratings and plaids-and a novel "dichoptic pseudoplaid" stimulus provides strong support for use of interocular pattern motion differences by precluding potential contributions from conventional disparity-based mechanisms. These results imply the existence of eye-of-origin information in later stages of motion processing and therefore motivate the incorporation of such eye-specific pattern-motion signals in models of motion processing and binocular integration.

  5. A coarse-to-fine kernel matching approach for mean-shift based visual tracking

    NASA Astrophysics Data System (ADS)

    Liangfu, L.; Zuren, F.; Weidong, C.; Ming, J.

    2009-03-01

    Mean shift is an efficient pattern-matching algorithm. It is widely used in visual tracking because it does not require an exhaustive search of the image space: it uses gradient optimization to reduce feature-matching time and achieve rapid object localization, with the Bhattacharyya coefficient as the similarity measure between the object template and the candidate template. This paper presents a mean shift algorithm based on a coarse-to-fine search for the best kernel match, targeting object tracking with large inter-frame motion. If the object regions in two consecutive frames are far apart and do not overlap in image space, traditional mean shift can only reach a local optimum by iterating within the old object window, so the true position is never found and tracking fails. The proposed algorithm first uses a similarity measure function to roughly localize the moving object, then applies mean shift iterations to refine this to an accurate local optimum, successfully tracking objects with large motion. Experimental results show good accuracy and speed compared with the background-weighted histogram algorithm in the literature.
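
    The core mean-shift update this abstract builds on can be sketched as follows. This is a minimal grayscale version with an unweighted intensity histogram standing in for the kernel-weighted color histograms used in practice; the function names and parameters are illustrative, not from the paper.

```python
import numpy as np

def intensity_histogram(patch, n_bins=16):
    # Normalized intensity histogram of a patch; the usual Epanechnikov
    # kernel weighting is omitted to keep the sketch short.
    h, _ = np.histogram(patch, bins=n_bins, range=(0.0, 1.0))
    h = h.astype(float)
    return h / max(h.sum(), 1.0)

def bhattacharyya(p, q):
    # Similarity between two normalized histograms (1.0 = identical).
    return float(np.sum(np.sqrt(p * q)))

def mean_shift_track(image, target_hist, start, win=4, n_iters=20, n_bins=16):
    # Shift a (2*win)x(2*win) window toward the region whose histogram
    # best matches target_hist, via histogram back-projection weights.
    y, x = start
    for _ in range(n_iters):
        patch = image[y - win:y + win, x - win:x + win]
        cand = intensity_histogram(patch, n_bins)
        bins = np.minimum((patch * n_bins).astype(int), n_bins - 1)
        # Weight each pixel by sqrt(q_u / p_u) for its histogram bin u.
        w = np.sqrt(target_hist[bins] / np.maximum(cand[bins], 1e-12))
        if w.sum() == 0:
            break  # no supporting pixels inside the window
        ys, xs = np.mgrid[-win:win, -win:win]
        ny = int(round(y + (w * ys).sum() / w.sum()))
        nx = int(round(x + (w * xs).sum() / w.sum()))
        if (ny, nx) == (y, x):
            break  # converged
        y, x = ny, nx
    return y, x
```

    As the abstract notes, these iterations only converge when the start window overlaps the object; the paper's coarse search supplies that starting point.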

  6. Relationship Between Optimal Gain and Coherence Zone in Flight Simulation

    NASA Technical Reports Server (NTRS)

    Gracio, Bruno Jorge Correia; Pais, Ana Rita Valente; vanPaassen, M. M.; Mulder, Max; Kely, Lon C.; Houck, Jacob A.

    2011-01-01

    In motion simulation, the inertial information generated by the motion platform usually differs from the visual information in the simulator displays, due to the physical limits of the motion platform. However, for small motions within those limits, one-to-one motion, i.e., visual information equal to inertial information, is possible. Previous studies have shown that one-to-one motion is often judged as too strong, causing researchers to lower the inertial amplitude. When trying to measure the optimal inertial gain for a given visual amplitude, we found a zone of optimal gains instead of a single value. This result seems related to the coherence zones that have been measured in flight simulation studies, but optimal gain results have never been directly related to coherence zones. In this study we investigated whether optimal gain measurements are the same as coherence zone measurements. We also examined whether the results of the two measurements can be used to differentiate between simulators with different configurations. An experiment was conducted at the NASA Langley Research Center using both the Cockpit Motion Facility and the Visual Motion Simulator. The results show that the inertial gains obtained with the optimal gain measurements differ from those obtained with the coherence zone measurements. The optimal gain lies within the coherence zone: the point of mean optimal gain was lower and further from the one-to-one line than the point of mean coherence. The zone width obtained for the coherence zone measurements depended on the visual amplitude and frequency, whereas for the optimal gain the zone width remained constant when visual amplitude and frequency were varied. We found no effect of simulator configuration on either the coherence zone or the optimal gain measurements.

  7. Crossmodal Statistical Binding of Temporal Information and Stimuli Properties Recalibrates Perception of Visual Apparent Motion

    PubMed Central

    Zhang, Yi; Chen, Lihan

    2016-01-01

    Recent studies of brain plasticity pertaining to time perception have shown that fast training of temporal discrimination in one modality, for example audition, can improve temporal discrimination performance in another modality, such as vision. Here we examined whether the perception of visual Ternus motion could be recalibrated through fast crossmodal statistical binding of temporal information and stimulus properties. We conducted two experiments, each composed of three sessions: pre-test, learning, and post-test. In both the pre-test and the post-test, participants classified the Ternus display as either “element motion” or “group motion.” For the training session in Experiment 1, we constructed two types of temporal structures, in which two consecutively presented sound beeps were dominantly (80%) flanked by one leading and one lagging visual Ternus frame (VAAV) or dominantly inserted between two visual Ternus frames (AVVA). Participants were required to report which interval (auditory vs. visual) was longer. In Experiment 2, we presented only a single auditory-visual pair, with temporal configurations similar to those of Experiment 1, and asked participants to perform an audio-visual temporal order judgment. The results of the two experiments show that statistical binding of temporal information and stimulus properties can quickly and selectively recalibrate the sensitivity of perceiving visual motion, according to the protocols of the specific bindings. PMID:27065910

  8. Validation of the Passenger Ride Quality Apparatus (PRQA) for simulation of aircraft motions for ride-quality research

    NASA Technical Reports Server (NTRS)

    Bigler, W. B., II

    1977-01-01

    The NASA passenger ride quality apparatus (PRQA), a ground based motion simulator, was compared to the total in flight simulator (TIFS). Tests were made on PRQA with varying stimuli: motions only; motions and noise; motions, noise, and visual; and motions and visual. Regression equations for the tests were obtained and subsequent t-testing of the slopes indicated that ground based simulator tests produced comfort change rates similar to actual flight data. It was recommended that PRQA be used in the ride quality program for aircraft and that it be validated for other transportation modes.

  9. Parallax visualization of full motion video using the Pursuer GUI

    NASA Astrophysics Data System (ADS)

    Mayhew, Christopher A.; Forgues, Mark B.

    2014-06-01

    In 2013, the authors reported to SPIE on the Phase 1 development of a Parallax Visualization (PV) plug-in toolset for Wide Area Motion Imaging (WAMI) data using the Pursuer Graphical User Interface (GUI).1 In addition to the ability to apply PV to WAMI data, the Phase 1 plug-in toolset also featured a limited ability to visualize Full Motion Video (FMV) data. The ability to visualize both WAMI and FMV data is a highly advantageous capability for an Electric Light Table (ELT) toolset. This paper reports on the Phase 2 development and addition of a full-featured FMV capability to the Pursuer WAMI PV plug-in.

  10. Usability of Three-dimensional Augmented Visual Cues Delivered by Smart Glasses on (Freezing of) Gait in Parkinson's Disease.

    PubMed

    Janssen, Sabine; Bolte, Benjamin; Nonnekes, Jorik; Bittner, Marian; Bloem, Bastiaan R; Heida, Tjitske; Zhao, Yan; van Wezel, Richard J A

    2017-01-01

    External cueing is a potentially effective strategy to reduce freezing of gait (FOG) in persons with Parkinson's disease (PD). Case reports suggest that three-dimensional (3D) cues might be more effective in reducing FOG than two-dimensional cues. We investigated the usability of 3D augmented reality visual cues delivered by smart glasses, compared with conventional 3D transverse bars on the floor and auditory cueing via a metronome, in reducing FOG and improving gait parameters. In laboratory experiments, 25 persons with PD and FOG performed walking tasks while wearing custom-made smart glasses under five conditions, at the end-of-dose. For two conditions, augmented visual cues (bars/staircase) were displayed via the smart glasses. The control conditions involved conventional 3D transverse bars on the floor, auditory cueing via a metronome, and no cueing. The number of FOG episodes and the percentage of time spent on FOG were rated from video recordings. Stride length and its variability, cycle time and its variability, cadence, and speed were calculated from motion data collected with a motion capture suit equipped with 17 inertial measurement units. A total of 300 FOG episodes occurred in 19 out of 25 participants. There were no statistically significant differences in the number of FOG episodes or the percentage of time spent on FOG across the five conditions. The conventional bars increased stride length, cycle time, and stride length variability, while decreasing cadence and speed. No effects were found for the other conditions. Participants preferred the metronome most and the augmented staircase least. They suggested improving the comfort, esthetics, usability, field of view, and stability of the smart glasses on the head, and reducing their weight and size. In their current form, augmented visual cues delivered by smart glasses are not beneficial for persons with PD and FOG. This could be attributable to distraction, blockage of visual feedback, insufficient familiarization with the smart glasses, or display of the visual cues in the central rather than peripheral visual field. Future smart glasses need to be more lightweight, comfortable, and user-friendly to avoid distraction and blockage of sensory feedback, thus increasing usability.

  11. Kinesthetic information disambiguates visual motion signals.

    PubMed

    Hu, Bo; Knill, David C

    2010-05-25

    Numerous studies have shown that extra-retinal signals can disambiguate motion information created by movements of the eye or head. We report a new form of cross-modal sensory integration in which the kinesthetic information generated by active hand movements essentially captures ambiguous visual motion information. Several previous studies have shown that active movement can bias observers' percepts of bi-stable stimuli; however, these effects seem to be best explained by attentional mechanisms. We show that kinesthetic information can change an otherwise stable perception of motion, providing evidence of genuine fusion between visual and kinesthetic information. The experiments take advantage of the aperture problem, in which the motion of a one-dimensional grating pattern behind an aperture, while geometrically ambiguous, appears to move stably in the grating normal direction. When actively moving the pattern, however, the observer sees the motion to be in the hand movement direction. Copyright 2010 Elsevier Ltd. All rights reserved.

  12. Flies and humans share a motion estimation strategy that exploits natural scene statistics

    PubMed Central

    Clark, Damon A.; Fitzgerald, James E.; Ales, Justin M.; Gohl, Daryl M.; Silies, Marion A.; Norcia, Anthony M.; Clandinin, Thomas R.

    2014-01-01

    Sighted animals extract motion information from visual scenes by processing spatiotemporal patterns of light falling on the retina. The dominant models for motion estimation exploit intensity correlations only between pairs of points in space and time. Moving natural scenes, however, contain more complex correlations. Here we show that fly and human visual systems encode the combined direction and contrast polarity of moving edges using triple correlations that enhance motion estimation in natural environments. Both species extract triple correlations with neural substrates tuned for light or dark edges, and sensitivity to specific triple correlations is retained even as light and dark edge motion signals are combined. Thus, both species separately process light and dark image contrasts to capture motion signatures that can improve estimation accuracy. This striking convergence argues that statistical structures in natural scenes have profoundly affected visual processing, driving a common computational strategy over 500 million years of evolution. PMID:24390225
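
    The pairwise-versus-triple distinction the abstract draws can be illustrated with a toy sketch (this is an illustration of the general principle, not the authors' model): a two-point opponent correlator is even in contrast polarity, so it responds identically to light and dark edges, while a three-point correlator is odd in polarity and flips sign between them. The specific correlator offsets below are assumptions chosen for the demonstration.

```python
import numpy as np

def moving_edge(polarity=1.0, n_x=40, n_t=30, speed=1):
    # Contrast signal s(t, x): an edge stepping from 0 to +/-1 that
    # advances rightward by `speed` pixels per frame; zero-mean overall.
    s = np.zeros((n_t, n_x))
    for t in range(n_t):
        s[t, : 5 + speed * t] = polarity
    return s - s.mean()

def pair_correlator(s, dt=1, dx=1):
    # Opponent two-point (Hassenstein-Reichardt-style) output.
    # Products of two signal values: even in contrast polarity.
    a = s[:-dt, :-dx] * s[dt:, dx:]   # rightward-tuned arm
    b = s[:-dt, dx:] * s[dt:, :-dx]   # leftward-tuned arm
    return float(np.mean(a - b))

def triple_correlator(s, dt=1, dx=1):
    # One example three-point correlator, s(t,x)*s(t,x+dx)*s(t+dt,x+dx).
    # Products of three signal values: odd in contrast polarity.
    return float(np.mean(s[:-dt, :-dx] * s[:-dt, dx:] * s[dt:, dx:]))
```

    Contrast reversal (light edge to dark edge) negates the signal, leaving the pairwise output unchanged but inverting the triple-correlation output, which is why a triple correlator can carry the combined direction-and-polarity signature described in the abstract.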

  13. Impaired visual recognition of biological motion in schizophrenia.

    PubMed

    Kim, Jejoong; Doop, Mikisha L; Blake, Randolph; Park, Sohee

    2005-09-15

    Motion perception deficits have been suggested to be an important feature of schizophrenia but the behavioral consequences of such deficits are unknown. Biological motion refers to the movements generated by living beings. The human visual system rapidly and effortlessly detects and extracts socially relevant information from biological motion. A deficit in biological motion perception may have significant consequences for detecting and interpreting social information. Schizophrenia patients and matched healthy controls were tested on two visual tasks: recognition of human activity portrayed in point-light animations (biological motion task) and a perceptual control task involving detection of a grouped figure against the background noise (global-form task). Both tasks required detection of a global form against background noise but only the biological motion task required the extraction of motion-related information. Schizophrenia patients performed as well as the controls in the global-form task, but were significantly impaired on the biological motion task. In addition, deficits in biological motion perception correlated with impaired social functioning as measured by the Zigler social competence scale [Zigler, E., Levine, J. (1981). Premorbid competence in schizophrenia: what is being measured? Journal of Consulting and Clinical Psychology, 49, 96-105.]. The deficit in biological motion processing, which may be related to the previously documented deficit in global motion processing, could contribute to abnormal social functioning in schizophrenia.

  14. Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition

    PubMed Central

    Shu, Na; Gao, Zhiyong; Chen, Xiangan; Liu, Haihua

    2015-01-01

    Humans can easily understand other people’s actions through visual systems, while computers cannot. Therefore, a new bio-inspired computational model is proposed in this paper aiming for automatic action recognition. The model focuses on dynamic properties of neurons and neural networks in the primary visual cortex (V1), and simulates the procedure of information processing in V1, which consists of visual perception, visual attention and representation of human action. In our model, a family of the three-dimensional spatial-temporal correlative Gabor filters is used to model the dynamic properties of the classical receptive field of V1 simple cell tuned to different speeds and orientations in time for detection of spatiotemporal information from video sequences. Based on the inhibitory effect of stimuli outside the classical receptive field caused by lateral connections of spiking neuron networks in V1, we propose surround suppressive operator to further process spatiotemporal information. Visual attention model based on perceptual grouping is integrated into our model to filter and group different regions. Moreover, in order to represent the human action, we consider the characteristic of the neural code: mean motion map based on analysis of spike trains generated by spiking neurons. The experimental evaluation on some publicly available action datasets and comparison with the state-of-the-art approaches demonstrate the superior performance of the proposed model. PMID:26132270
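
    One ingredient of such models, a spatiotemporal Gabor filter tuned to a drifting grating's orientation and speed, can be sketched as below. All parameter values are illustrative assumptions; the paper's full model adds surround suppression, attention, and spiking dynamics on top of this linear front end.

```python
import numpy as np

def spatiotemporal_gabor(size=15, n_frames=9, orientation=0.0, speed=1.0,
                         wavelength=6.0, sigma=3.0, sigma_t=2.0):
    # 3-D Gabor over (t, y, x): a Gaussian envelope times a sinusoidal
    # carrier that translates at `speed` pixels/frame along `orientation`.
    half, half_t = size // 2, n_frames // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    t = np.arange(-half_t, half_t + 1)[:, None, None]
    u = x * np.cos(orientation) + y * np.sin(orientation)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2) - t**2 / (2 * sigma_t**2))
    carrier = np.cos(2 * np.pi * (u - speed * t) / wavelength)
    g = envelope * carrier
    return g - g.mean()  # remove DC so a uniform field gives no response

def drifting_grating(speed, size=15, n_frames=9, wavelength=6.0):
    # Test stimulus: a horizontal grating drifting at `speed` pixels/frame.
    half, half_t = size // 2, n_frames // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    t = np.arange(-half_t, half_t + 1)[:, None, None]
    return np.cos(2 * np.pi * (x - speed * t) / wavelength)

def energy(stimulus, kernel):
    # Squared filter output at the stimulus center (no full convolution,
    # to keep the sketch short).
    return float(np.sum(stimulus * kernel) ** 2)
```

    A bank of such kernels at different orientations and speeds gives the direction- and speed-tuned V1 responses that the model's later stages pool.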

  15. Evaluation of angiogram visualization methods for fast and reliable aneurysm diagnosis

    NASA Astrophysics Data System (ADS)

    Lesar, Žiga; Bohak, Ciril; Marolt, Matija

    2015-03-01

    In this paper we present the results of an evaluation of different visualization methods for angiogram volumetric data: ray casting, marching cubes, and multi-level partition of unity implicits. Several options are available with ray casting: isosurface extraction, maximum intensity projection, and alpha compositing, each producing fundamentally different results. Different visualization methods suit different needs, so this choice is crucial in diagnosis and decision-making processes. We also evaluate visual effects such as ambient occlusion, screen-space ambient occlusion, and depth of field. Some visualization methods include transparency, so we address the relevance of this additional visual information. We employ transfer functions to map data values to color and transparency, allowing us to view or hide particular tissues. All the methods presented in this paper were developed using OpenCL, striving for real-time rendering and quality interaction. An evaluation was conducted to assess the suitability of the visualization methods. Results show the superiority of isosurface extraction with ambient occlusion effects. Visual effects may positively or negatively affect the perception of depth, motion, and relative positions in space.
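
    The ray-casting variants compared here differ only in how samples along a ray are combined. A schematic sketch of two of them, assuming simple orthographic rays along the volume's z axis (the actual evaluation used GPU ray casting in OpenCL):

```python
import numpy as np

def render_rays(volume, method="mip", transfer=None):
    # Cast one orthographic ray per (y, x) pixel through a (z, y, x) volume.
    if method == "mip":
        # Maximum intensity projection: brightest sample along each ray.
        return volume.max(axis=0)
    if method == "alpha":
        # Front-to-back alpha compositing; `transfer` is a transfer
        # function mapping a slice of sample values to (color, opacity).
        color = np.zeros(volume.shape[1:])
        alpha = np.zeros(volume.shape[1:])
        for z in range(volume.shape[0]):
            c, a = transfer(volume[z])
            color += (1.0 - alpha) * a * c  # add what is not yet occluded
            alpha += (1.0 - alpha) * a
        return color
    raise ValueError(method)
```

    MIP always shows the brightest tissue regardless of depth order, whereas alpha compositing lets the transfer function hide or reveal tissues, which is the transparency question the evaluation addresses.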

  16. Detection of visual events along the apparent motion trace in patients with paranoid schizophrenia.

    PubMed

    Sanders, Lia Lira Olivier; Muckli, Lars; de Millas, Walter; Lautenschlager, Marion; Heinz, Andreas; Kathmann, Norbert; Sterzer, Philipp

    2012-07-30

    Dysfunctional prediction in sensory processing has been suggested as a possible causal mechanism in the development of delusions in patients with schizophrenia. Previous studies in healthy subjects have shown that while the perception of apparent motion can mask visual events along the illusory motion trace, such motion masking is reduced when events are spatio-temporally compatible with the illusion, and, therefore, predictable. Here we tested the hypothesis that this specific detection advantage for predictable target stimuli on the apparent motion trace is reduced in patients with paranoid schizophrenia. Our data show that, although target detection along the illusory motion trace is generally impaired, both patients and healthy control participants detect predictable targets more often than unpredictable targets. Patients had a stronger motion masking effect when compared to controls. However, patients showed the same advantage in the detection of predictable targets as healthy control subjects. Our findings reveal stronger motion masking but intact prediction of visual events along the apparent motion trace in patients with paranoid schizophrenia and suggest that the sensory prediction mechanism underlying apparent motion is not impaired in paranoid schizophrenia. Copyright © 2012. Published by Elsevier Ireland Ltd.

  17. Figure-ground segregation by motion contrast and by luminance contrast.

    PubMed

    Regan, D; Beverley, K I

    1984-05-01

    Some naturally camouflaged objects are invisible unless they move; their boundaries are then defined by motion contrast between object and background. We compared the visual detection of such camouflaged objects with the detection of objects whose boundaries were defined by luminance contrast. The summation field area is 0.16 deg², and the summation time constant is 750 msec for parafoveally viewed objects whose boundaries are defined by motion contrast; these values are, respectively, about 5 and 12 times larger than the corresponding values for objects defined by luminance contrast. The log detection threshold is proportional to the eccentricity for a camouflaged object of constant area. The effect of eccentricity on threshold is less for large objects than for small objects. The log summation field diameter for detecting camouflaged objects is roughly proportional to the eccentricity, increasing to about 20 deg at 32-deg eccentricity. In contrast to the 100:1 increase of summation area for detecting camouflaged objects, the temporal summation time constant changes by only 40% between eccentricities of 0 and 16 deg.

  18. Decreased susceptibility to motion sickness during exposure to visual inversion in microgravity

    NASA Technical Reports Server (NTRS)

    Lackner, James R.; Dizio, Paul

    1991-01-01

    Head and body movements made in microgravity tend to bring on symptoms of motion sickness. Such head movements, relative to comparable ones made on earth, are accompanied by unusual combinations of semicircular canal and otolith activity owing to the unloading of the otoliths in 0G. Head movements also bring on symptoms of motion sickness during exposure to visual inversion (or reversal) on earth because the vestibulo-ocular reflex is rendered anti-compensatory. Here, evidence is presented that susceptibility to motion sickness during exposure to visual inversion is decreased in a 0G relative to 1G force background. This difference in susceptibility appears related to the alteration in otolith function in 0G. Some implications of this finding for the etiology of space motion sickness are described.

  19. The spiral aftereffect. II, Some influences of visual angle and retinal speed on the duration and intensity of illusory motion.

    DOT National Transportation Integrated Search

    1969-08-01

    Visual illusions have been a persistent problem in aviation research. The spiral aftereffect (SAE) is an example of one type of visual illusion--that which occurs following the cessation of real motion. Duration and intensity of the SAE was evaluated...

  20. Visual-Vestibular Conflict Detection Depends on Fixation.

    PubMed

    Garzorz, Isabelle T; MacNeilage, Paul R

    2017-09-25

    Visual and vestibular signals are the primary sources of sensory information for self-motion. Conflict among these signals can be seriously debilitating, resulting in vertigo [1], inappropriate postural responses [2], and motion, simulator, or cyber sickness [3-8]. Despite this significance, the mechanisms mediating conflict detection are poorly understood. Here we model conflict detection simply as crossmodal discrimination with benchmark performance limited by variabilities of the signals being compared. In a series of psychophysical experiments conducted in a virtual reality motion simulator, we measure these variabilities and assess conflict detection relative to this benchmark. We also examine the impact of eye movements on visual-vestibular conflict detection. In one condition, observers fixate a point that is stationary in the simulated visual environment by rotating the eyes opposite head rotation, thereby nulling retinal image motion. In another condition, eye movement is artificially minimized via fixation of a head-fixed fixation point, thereby maximizing retinal image motion. Visual-vestibular integration performance is also measured, similar to previous studies [9-12]. We observe that there is a tradeoff between integration and conflict detection that is mediated by eye movements. Minimizing eye movements by fixating a head-fixed target leads to optimal integration but highly impaired conflict detection. Minimizing retinal motion by fixating a scene-fixed target improves conflict detection at the cost of impaired integration performance. The common tendency to fixate scene-fixed targets during self-motion [13] may indicate that conflict detection is typically a higher priority than the increase in precision of self-motion estimation that is obtained through integration. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Visual Occlusion Decreases Motion Sickness in a Flight Simulator.

    PubMed

    Ishak, Shaziela; Bubka, Andrea; Bonato, Frederick

    2018-05-01

    Sensory conflict theories of motion sickness (MS) assert that symptoms may result when incoming sensory inputs (e.g., visual and vestibular) contradict each other. Logic suggests that attenuating input from one sense may reduce conflict and hence lessen MS symptoms. In the current study, it was hypothesized that attenuating visual input by blocking light entering the eye would reduce MS symptoms in a provocative motion environment. Participants sat inside an aircraft cockpit mounted on a motion platform that simultaneously pitched, rolled, and heaved, in two conditions. In the occluded condition, participants wore "blackout" goggles and closed their eyes to block light. In the control condition, participants opened their eyes and had a full view of the cockpit's interior. Participants completed separate Simulator Sickness Questionnaires before and after each condition. The posttreatment total Simulator Sickness Questionnaire scores and subscores for nausea, oculomotor, and disorientation in the control condition were significantly higher than those in the occluded condition. These results suggest that under some conditions attenuating visual input may delay the onset of MS or weaken the severity of symptoms. Eliminating visual input may reduce visual/nonvisual sensory conflict by weakening the influence of the visual channel, which is consistent with the sensory conflict theory of MS.

  2. Visual Target Tracking in the Presence of Unknown Observer Motion

    NASA Technical Reports Server (NTRS)

    Williams, Stephen; Lu, Thomas

    2009-01-01

    Much attention has been given to the visual tracking problem due to its obvious uses in military surveillance. However, visual tracking is complicated by the presence of motion of the observer in addition to the target motion, especially when the image changes caused by the observer motion are large compared to those caused by the target motion. Techniques for estimating the motion of the observer based on image registration techniques and Kalman filtering are presented and simulated. With the effects of the observer motion removed, an additional phase is implemented to track individual targets. This tracking method is demonstrated on an image stream from a buoy-mounted or periscope-mounted camera, where large inter-frame displacements are present due to the wave action on the camera. This system has been shown to be effective at tracking and predicting the global position of a planar vehicle (boat) being observed from a single, out-of-plane camera. Finally, the tracking system has been extended to a multi-target scenario.
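
    The two stages described, registration-based estimation of observer motion followed by Kalman filtering of the compensated target position, can be sketched in one dimension. This is a toy stand-in assuming integer global shifts and a constant-velocity target model, not the authors' implementation; all names and noise parameters are illustrative.

```python
import numpy as np

def estimate_shift(prev, curr, max_shift=5):
    # Estimate global observer motion as the integer shift that maximizes
    # the correlation between consecutive 1-D frames (a stand-in for full
    # image registration).
    n = len(prev)
    best, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        a = prev[max(0, -s): n - max(0, s)]
        b = curr[max(0, s): n - max(0, -s)]
        score = float(np.dot(a, b))
        if score > best_score:
            best, best_score = s, score
    return best

def kalman_track(measurements, q=0.01, r=0.25):
    # Constant-velocity Kalman filter over observer-compensated target
    # positions; measurement matrix H = [1, 0] (position observed only).
    x = np.array([measurements[0], 0.0])    # state: [position, velocity]
    P = np.eye(2)
    F = np.array([[1.0, 1.0], [0.0, 1.0]])  # constant-velocity dynamics
    Q = q * np.eye(2)                       # process noise
    estimates = []
    for z in measurements:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        y = z - x[0]                        # innovation
        S = P[0, 0] + r                     # innovation variance
        K = P[:, 0] / S                     # Kalman gain
        x = x + K * y                       # update
        P = P - np.outer(K, P[0, :])
        estimates.append(x[0])
    return estimates
```

    Subtracting the estimated global shift from each detection before filtering is what removes the wave-induced camera motion, so the filter only has to model the target's own dynamics.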

  3. Rhesus Monkeys Behave As If They Perceive the Duncker Illusion

    PubMed Central

    Zivotofsky, A. Z.; Goldberg, M. E.; Powell, K. D.

    2008-01-01

    The visual system uses the pattern of motion on the retina to analyze the motion of objects in the world, and the motion of the observer him/herself. Distinguishing between retinal motion evoked by movement of the retina in space and retinal motion evoked by movement of objects in the environment is computationally difficult, and the human visual system frequently misinterprets the meaning of retinal motion. In this study, we demonstrate that the visual system of the Rhesus monkey also misinterprets retinal motion. We show that monkeys erroneously report the trajectories of pursuit targets or their own pursuit eye movements during an epoch of smooth pursuit across an orthogonally moving background. Furthermore, when they make saccades to the spatial location of stimuli that flashed early in an epoch of smooth pursuit or fixation, they make large errors that appear to take into account the erroneous smooth eye movement that they report in the first experiment, and not the eye movement that they actually make. PMID:16102233

  4. Attention to the Color of a Moving Stimulus Modulates Motion-Signal Processing in Macaque Area MT: Evidence for a Unified Attentional System.

    PubMed

    Katzner, Steffen; Busse, Laura; Treue, Stefan

    2009-01-01

    Directing visual attention to spatial locations or to non-spatial stimulus features can strongly modulate responses of individual cortical sensory neurons. Effects of attention typically vary in magnitude, not only between visual cortical areas but also between individual neurons from the same area. Here, we investigate whether the size of attentional effects depends on the match between the tuning properties of the recorded neuron and the perceptual task at hand. We recorded extracellular responses from individual direction-selective neurons in the middle temporal area (MT) of rhesus monkeys trained to attend either to the color or the motion signal of a moving stimulus. We found that effects of spatial and feature-based attention in MT, which are typically observed in tasks allocating attention to motion, were very similar even when attention was directed to the color of the stimulus. We conclude that attentional modulation can occur in extrastriate cortex, even under conditions without a match between the tuning properties of the recorded neuron and the perceptual task at hand. Our data are consistent with theories of object-based attention describing a transfer of attention from relevant to irrelevant features, within the attended object and across the visual field. These results argue for a unified attentional system that modulates responses to a stimulus across cortical areas, even if a given area is specialized for processing task-irrelevant aspects of that stimulus.

  5. Visual guidance of forward flight in hummingbirds reveals control based on image features instead of pattern velocity.

    PubMed

    Dakin, Roslyn; Fellows, Tyee K; Altshuler, Douglas L

    2016-08-02

    Information about self-motion and obstacles in the environment is encoded by optic flow, the movement of images on the eye. Decades of research have revealed that flying insects control speed, altitude, and trajectory by a simple strategy of maintaining or balancing the translational velocity of images on the eyes, known as pattern velocity. It has been proposed that birds may use a similar algorithm, but this hypothesis has not been tested directly. We examined the influence of pattern velocity on avian flight by manipulating the motion of patterns on the walls of a tunnel traversed by Anna's hummingbirds. Contrary to prediction, we found that lateral course control is not based on regulating nasal-to-temporal pattern velocity. Instead, birds closely monitored feature height in the vertical axis, and steered away from taller features even in the absence of nasal-to-temporal pattern velocity cues. For vertical course control, we observed that birds adjusted their flight altitude in response to upward motion of the horizontal plane, which simulates vertical descent. Collectively, our results suggest that birds avoid collisions using visual cues in the vertical axis. Specifically, we propose that birds monitor the vertical extent of features in the lateral visual field to assess distances to the side, and vertical pattern velocity to avoid collisions with the ground. These distinct strategies may derive from a greater need to avoid collisions in birds compared with small insects.

  6. Intuitive representation of surface properties of biomolecules using BioBlender.

    PubMed

    Andrei, Raluca Mihaela; Callieri, Marco; Zini, Maria Francesca; Loni, Tiziana; Maraziti, Giuseppe; Pan, Mike Chen; Zoppè, Monica

    2012-03-28

    In living cells, proteins are in continuous motion and interaction with the surrounding medium and/or other proteins and ligands. These interactions are mediated by protein features such as electrostatic and lipophilic potentials. The availability of protein structures enables the study of their surfaces and surface characteristics, based on atomic contributions. Traditionally, these properties are calculated by physico-chemical programs and visualized as a range of colors that varies with the tool used, requiring a legend to decode it. Because color is used to encode both characteristics, simultaneous visualization is almost impossible, and the two features must be shown in separate images. In this work, we describe a novel and intuitive code for the simultaneous visualization of these properties. Recent advances in 3D animation and rendering software have not yet been exploited for the representation of biomolecules in an intuitive, animated form. For our purpose we use Blender, an open-source, free, cross-platform application used professionally for 3D work. On the basis of Blender, we developed BioBlender, dedicated to biological work: elaboration of protein motion with simultaneous visualization of chemical and physical features. Electrostatic and lipophilic potentials are calculated using physico-chemical software and scripts, organized and accessed through the BioBlender interface. A new visual code is introduced for molecular lipophilic potential: a range of optical features going from smooth-shiny for hydrophobic regions to rough-dull for hydrophilic ones. Electrostatic potential is represented as animated line particles that flow along field lines, in proportion to the total charge of the protein. Our system permits visualization of molecular features and, for moving proteins, their continuous perception, calculated for each conformation during motion. 
By appealing to real-world tactile and visual experience, this representation makes the nanoscale world of proteins more understandable and familiar, easing the introduction of "unseen" phenomena (concepts) such as hydropathy or charge. It also helps the viewer gain insight into molecular function by drawing attention to the most active regions of the protein. The program, available for Windows, Linux, and MacOS, can be downloaded freely from the dedicated website http://www.bioblender.eu.

  7. Intuitive representation of surface properties of biomolecules using BioBlender

    PubMed Central

    2012-01-01

    Background In living cells, proteins are in continuous motion and interaction with the surrounding medium and/or other proteins and ligands. These interactions are mediated by protein features such as electrostatic and lipophilic potentials. The availability of protein structures enables the study of their surfaces and surface characteristics, based on atomic contributions. Traditionally, these properties are calculated by physico-chemical programs and visualized as a range of colors that varies with the tool used, requiring a legend to decode it. Because color is used to encode both characteristics, simultaneous visualization is almost impossible, and the two features must be shown in separate images. In this work, we describe a novel and intuitive code for the simultaneous visualization of these properties. Methods Recent advances in 3D animation and rendering software have not yet been exploited for the representation of biomolecules in an intuitive, animated form. For our purpose we use Blender, an open-source, free, cross-platform application used professionally for 3D work. On the basis of Blender, we developed BioBlender, dedicated to biological work: elaboration of protein motion with simultaneous visualization of chemical and physical features. Electrostatic and lipophilic potentials are calculated using physico-chemical software and scripts, organized and accessed through the BioBlender interface. Results A new visual code is introduced for molecular lipophilic potential: a range of optical features going from smooth-shiny for hydrophobic regions to rough-dull for hydrophilic ones. Electrostatic potential is represented as animated line particles that flow along field lines, in proportion to the total charge of the protein. Conclusions Our system permits visualization of molecular features and, for moving proteins, their continuous perception, calculated for each conformation during motion. 
By appealing to real-world tactile and visual experience, this representation makes the nanoscale world of proteins more understandable and familiar, easing the introduction of "unseen" phenomena (concepts) such as hydropathy or charge. It also helps the viewer gain insight into molecular function by drawing attention to the most active regions of the protein. The program, available for Windows, Linux, and MacOS, can be downloaded freely from the dedicated website http://www.bioblender.eu PMID:22536962

  8. Rapid sequence magnetic resonance imaging in the assessment of children with hydrocephalus.

    PubMed

    O'Neill, Brent R; Pruthi, Sumit; Bains, Harmanjeet; Robison, Ryan; Weir, Keiko; Ojemann, Jeff; Ellenbogen, Richard; Avellino, Anthony; Browd, Samuel R

    2013-12-01

    Recent reports have shown the utility of rapid-acquisition magnetic resonance imaging (MRI) in the evaluation of children with hydrocephalus. Rapid sequence MRI (RS-MRI) acquires clinically useful images in seconds without exposing children to the risks of ionizing radiation or sedation. We review our experience with RS-MRI in children with shunts. Overall image quality, cost, catheter visualization, motion artifact, and ventricular size were reviewed for all RS-MRI studies obtained at Seattle Children's Hospital during a 2-year period. Image acquisition time was 12-19 seconds, with sessions usually lasting less than 3 minutes. Image quality was very good or excellent in 94% of studies, whereas only one was graded as poor. Significant motion artifact was noted in 7%, whereas 77% had little or no motion artifact. Catheter visualization was good or excellent in 57%, poor in 36%, and misleading in 7%. Small ventricular size was correlated with poor catheter visualization (Spearman's ρ = 0.586; P < 0.00001). RS-MRI cost ∼$650 more than conventional computed tomography (CT). Our study supports RS-MRI as an adequate substitute, allowing reduced use of CT imaging and the associated exposure to ionizing radiation. Catheter position visualization remains suboptimal when ventricles are small, but shunt malfunction can be adequately determined in most cases. The cost is significantly higher than that of CT, but the potential for a lifetime reduction in radiation exposure may justify this expense in children. Limitations include the risk of valve malfunction after repeated exposure to high magnetic fields and the need for reprogramming with many types of adjustable valves. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Event Processing in the Visual World: Projected Motion Paths during Spoken Sentence Comprehension

    ERIC Educational Resources Information Center

    Kamide, Yuki; Lindsay, Shane; Scheepers, Christoph; Kukona, Anuenue

    2016-01-01

    Motion events in language describe the movement of an entity to another location along a path. In 2 eye-tracking experiments, we found that comprehension of motion events involves the online construction of a spatial mental model that integrates language with the visual world. In Experiment 1, participants listened to sentences describing the…

  10. The Psychophysics of Visual Motion and Global Form Processing in Autism

    ERIC Educational Resources Information Center

    Koldewyn, Kami; Whitney, David; Rivera, Susan M.

    2010-01-01

    Several groups have recently reported that people with autism may suffer from a deficit in visual motion processing and proposed that these deficits may be related to a general dorsal stream dysfunction. In order to test the dorsal stream deficit hypothesis, we investigated coherent and biological motion perception as well as coherent form…

  11. Contrast Sensitivity for Motion Detection and Direction Discrimination in Adolescents with Autism Spectrum Disorders and Their Siblings

    ERIC Educational Resources Information Center

    Koh, Hwan Cui; Milne, Elizabeth; Dobkins, Karen

    2010-01-01

    The magnocellular (M) pathway hypothesis proposes that impaired visual motion perception observed in individuals with Autism Spectrum Disorders (ASD) might be mediated by atypical functioning of the subcortical M pathway, as this pathway provides the bulk of visual input to cortical motion detectors. To test this hypothesis, we measured luminance…

  12. Integration of visual and motion cues for simulator requirements and ride quality investigation

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1976-01-01

    Practical tools are developed to extend the state of the art of moving-base flight simulation for research and training. The main approaches of this research effort are: (1) application of the vestibular model for perception of orientation based on motion cues, with optimum simulator motion controls; and (2) visual cues in landing.

  13. A Bio-Inspired, Motion-Based Analysis of Crowd Behavior Attributes Relevance to Motion Transparency, Velocity Gradients, and Motion Patterns

    PubMed Central

    Raudies, Florian; Neumann, Heiko

    2012-01-01

    The analysis of motion crowds is concerned with the detection of potential hazards for individuals of the crowd. Existing methods analyze the statistics of pixel motion to classify non-dangerous or dangerous behavior, to detect outlier motions, or to estimate the mean throughput of people for an image region. We suggest a biologically inspired model for the analysis of motion crowds that extracts motion features indicative of potential dangers in crowd behavior. Our model consists of stages for motion detection, integration, and pattern detection that model functions of the primate primary visual cortex (V1), the middle temporal area (MT), and the medial superior temporal area (MST), respectively. This model allows for the processing of motion transparency, the appearance of multiple motions in the same visual region, in addition to processing opaque motion. We suggest that motion transparency helps to identify “danger zones” in motion crowds. For instance, motion transparency occurs in small exit passages during evacuation. However, motion transparency also occurs for non-dangerous crowd behavior when people move in opposite directions organized into separate lanes. Our analysis suggests that the combination of motion transparency and slow motion speed can be used to label candidate regions that contain dangerous behavior. In addition, locally detected decelerations or negative speed gradients of motions are a precursor of danger in crowd behavior, as are globally detected motion patterns that show a contraction toward a single point. In sum, motion transparency, image speeds, motion patterns, and speed gradients extracted from visual motion in videos are important features for describing the behavioral state of a motion crowd. PMID:23300930
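
    The labeling rule the abstract proposes (motion transparency combined with slow speed, or a locally negative speed gradient) can be sketched on a grid of motion vectors. This is a toy illustration: the two-layer flow representation and all thresholds are assumptions of the example, not the authors' V1/MT/MST model.

    ```python
    import numpy as np

    # Toy sketch of the labeling rule suggested above (not the authors'
    # V1/MT/MST model): flag cells showing motion transparency at slow speed,
    # or a locally negative speed gradient (deceleration). The two-layer flow
    # representation and all thresholds are assumptions of this illustration.

    def danger_zones(flow_a, flow_b, slow_thresh=0.5, grad_thresh=-0.2):
        speed_a = np.linalg.norm(flow_a, axis=-1)
        speed_b = np.linalg.norm(flow_b, axis=-1)
        # transparency: both layers active, directions clearly different
        cos = np.sum(flow_a * flow_b, axis=-1) / np.maximum(speed_a * speed_b, 1e-9)
        transparent = (speed_a > 0.1) & (speed_b > 0.1) & (cos < 0.5)
        slow = np.minimum(speed_a, speed_b) < slow_thresh
        # deceleration cue: negative speed gradient of the dominant layer
        grad_x = np.gradient(np.maximum(speed_a, speed_b), axis=1)
        return (transparent & slow) | (grad_x < grad_thresh)

    # Counterflow at low speed (opposite lanes squeezing through a passage)
    flow_a = np.zeros((4, 4, 2)); flow_a[..., 0] = 0.3   # slow rightward layer
    flow_b = np.zeros((4, 4, 2)); flow_b[..., 0] = -0.3  # slow leftward layer
    print(danger_zones(flow_a, flow_b).all())
    ```

    A single opaque layer at the same speed, by contrast, triggers neither cue.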

  14. Visual Motion Perception and Visual Attentive Processes.

    DTIC Science & Technology

    1988-04-01

    Visual Motion Perception and Visual Attentive Processes. George Sperling, New York University. AFOSR grant 85-0364. Sperling, G. HIPS: A Unix-based image processing system. Computer Vision, Graphics, and Image Processing, 1984, 25, 331-347. (HIPS is the Human Information Processing Laboratory's Image Processing System.) van Santen, Jan P. H., and George Sperling. Elaborated Reichardt detectors. Journal of the Optical Society of America A, 1985.

  15. Encodings of implied motion for animate and inanimate object categories in the two visual pathways.

    PubMed

    Lu, Zhengang; Li, Xueting; Meng, Ming

    2016-01-15

    Previous research has proposed two separate pathways for visual processing: the dorsal pathway for "where" information vs. the ventral pathway for "what" information. Interestingly, the middle temporal cortex (MT) in the dorsal pathway is involved in representing implied motion from still pictures, suggesting an interaction between motion and object related processing. However, the relationship between how the brain encodes implied motion and how the brain encodes object/scene categories is unclear. To address this question, fMRI was used to measure activity along the two pathways corresponding to different animate and inanimate categories of still pictures with different levels of implied motion speed. In the visual areas of both pathways, activity induced by pictures of humans and animals was hardly modulated by the implied motion speed. By contrast, activity in these areas correlated with the implied motion speed for pictures of inanimate objects and scenes. The interaction between implied motion speed and stimuli category was significant, suggesting different encoding mechanisms of implied motion for animate-inanimate distinction. Further multivariate pattern analysis of activity in the dorsal pathway revealed significant effects of stimulus category that are comparable to the ventral pathway. Moreover, still pictures of inanimate objects/scenes with higher implied motion speed evoked activation patterns that were difficult to differentiate from those evoked by pictures of humans and animals, indicating a functional role of implied motion in the representation of object categories. These results provide novel evidence to support integrated encoding of motion and object categories, suggesting a rethink of the relationship between the two visual pathways. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Ultrafast electron microscopy integrated with a direct electron detection camera.

    PubMed

    Lee, Young Min; Kim, Young Jae; Kim, Ye-Jin; Kwon, Oh-Hoon

    2017-07-01

    In the past decade, we have witnessed the rapid growth of the field of ultrafast electron microscopy (UEM), which provides intuitive means to watch atomic and molecular motions of matter. Yet, because of the limited current of the pulsed electron beam resulting from space-charge effects, observations have mainly been limited to periodic motions of crystalline structures hundreds of nanometers or larger, imaged stroboscopically at high repetition rates. Here, we develop an advanced UEM that circumvents these limitations by integrating, for the first time, a direct electron detection camera, which allows for imaging at low repetition rates. This approach is expected to promote UEM to a more powerful platform to visualize molecular and collective motions and dissect fundamental physical, chemical, and materials phenomena in space and time.

  17. Modeling respiratory motion for reducing motion artifacts in 4D CT images.

    PubMed

    Zhang, Yongbin; Yang, Jinzhong; Zhang, Lifei; Court, Laurence E; Balter, Peter A; Dong, Lei

    2013-04-01

    Four-dimensional computed tomography (4D CT) images have been recently adopted in radiation treatment planning for thoracic and abdominal cancers to explicitly define respiratory motion and anatomy deformation. However, significant image distortions (artifacts) exist in 4D CT images that may affect accurate tumor delineation and the shape representation of normal anatomy. In this study, the authors present a patient-specific respiratory motion model, based on principal component analysis (PCA) of motion vectors obtained from deformable image registration, with the main goal of reducing image artifacts caused by irregular motion during 4D CT acquisition. For a 4D CT image set of a specific patient, the authors calculated displacement vector fields relative to a reference phase, using an in-house deformable image registration method. The authors then used PCA to decompose each of the displacement vector fields into linear combinations of principal motion bases. The authors have demonstrated that the regular respiratory motion of a patient can be accurately represented by a subspace spanned by three principal motion bases and their projections. These projections were parameterized using a spline model to allow the reconstruction of the displacement vector fields at any given phase in a respiratory cycle. Finally, the displacement vector fields were used to deform the reference CT image to synthesize CT images at the selected phase with much reduced image artifacts. The authors evaluated the performance of the in-house deformable image registration method using benchmark datasets consisting of ten 4D CT sets annotated with 300 landmark pairs that were approved by physicians. The initial large discrepancies across the landmark pairs were significantly reduced after deformable registration, and the accuracy was similar to or better than that reported by state-of-the-art methods. 
The proposed motion model was quantitatively validated on 4D CT images of a phantom and a lung cancer patient by comparing the synthesized images and the original images at different phases. The synthesized images matched well with the original images. The motion model was used to reduce irregular motion artifacts in the 4D CT images of three lung cancer patients. Visual assessment indicated that the proposed approach could reduce severe image artifacts. The shape distortions around the diaphragm and tumor regions were mitigated in the synthesized 4D CT images. The authors have derived a mathematical model to represent the regular respiratory motion from a patient-specific 4D CT set and have demonstrated its application in reducing irregular motion artifacts in 4D CT images. The authors' approach can mitigate shape distortions of anatomy caused by irregular breathing motion during 4D CT acquisition.
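
    The core PCA step of the motion model can be sketched as follows. The synthetic displacement vector fields and the direct per-phase reconstruction are simplifications of this illustration; the paper additionally fits a spline model to the projections so that fields can be reconstructed at arbitrary phases.

    ```python
    import numpy as np

    # Sketch of the core PCA step: displacement vector fields (one per phase,
    # relative to a reference) are flattened, centered, and projected onto
    # three principal motion bases. The synthetic fields and the direct
    # reconstruction are simplifications; the paper further fits a spline
    # model to the projections to interpolate arbitrary phases.

    def pca_motion_model(dvfs, n_components=3):
        X = dvfs.reshape(dvfs.shape[0], -1)            # phases x (voxels * 3)
        mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        bases = Vt[:n_components]                      # principal motion bases
        proj = (X - mean) @ bases.T                    # per-phase projections
        return mean, bases, proj

    def reconstruct(mean, bases, coeffs, field_shape):
        return (mean + coeffs @ bases).reshape(field_shape)

    # Synthetic breathing-like DVFs: 10 phases over a 5x5x5 grid of 3-vectors
    phases = np.linspace(0.0, 2.0 * np.pi, 10, endpoint=False)
    base_field = np.random.default_rng(1).normal(size=(5, 5, 5, 3))
    dvfs = np.stack([np.sin(p) * base_field for p in phases])
    mean, bases, proj = pca_motion_model(dvfs)
    recon = reconstruct(mean, bases, proj[3], dvfs.shape[1:])
    print("max reconstruction error:", np.abs(recon - dvfs[3]).max())
    ```

    The reconstructed field can then be used to deform the reference CT image to the selected phase, as the abstract describes.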

  18. Margins of stability in young adults with traumatic transtibial amputation walking in destabilizing environments

    PubMed Central

    Beltran, Eduardo J.; Dingwell, Jonathan B.; Wilken, Jason M.

    2014-01-01

    Understanding how lower-limb amputation affects walking stability, specifically in destabilizing environments, is essential for developing effective interventions to prevent falls. This study quantified mediolateral margins of stability (MOS) and MOS sub-components in young individuals with traumatic unilateral transtibial amputation (TTA) and young able-bodied individuals (AB). Thirteen AB and nine TTA completed five 3-minute walking trials in a Computer Assisted Rehabilitation ENvironment (CAREN) system under each of three test conditions: no perturbations, pseudo-random mediolateral translations of the platform, and pseudo-random mediolateral translations of the visual field. Compared to the unperturbed trials, TTA exhibited increased mean MOS and MOS variability during platform and visual field perturbations (p < 0.010). Also, AB exhibited increased mean MOS during visual field perturbations and increased MOS variability during both platform and visual field perturbations (p < 0.050). During platform perturbations, TTA exhibited significantly greater values than AB for mean MOS (p < 0.050) and MOS variability (p < 0.050); variability of the lateral distance between the center of mass (COM) and base of support at initial contact (p < 0.005); mean and variability of the range of COM motion (p < 0.010); and variability of COM peak velocity (p < 0.050). As determined by mean MOS and MOS variability, young and otherwise healthy individuals with transtibial amputation achieved stability similar to that of their able-bodied counterparts during unperturbed and visually-perturbed walking. However, based on mean and variability of MOS, unilateral transtibial amputation was shown to have affected walking stability during platform perturbations. PMID:24444777
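
    For context, the mediolateral MOS that such studies compute typically follows Hof's extrapolated-center-of-mass formulation: XcoM = CoM position + CoM velocity / omega0, with omega0 = sqrt(g / l) for pendulum length l. The sketch below states that computation with invented numbers; it is not drawn from this paper's methods.

    ```python
    import math

    # Hedged illustration: the mediolateral margin of stability in the Hof
    # sense commonly used in such studies -- the lateral distance between the
    # base-of-support boundary and the extrapolated center of mass,
    # XcoM = com_pos + com_vel / omega0, omega0 = sqrt(g / leg_length).
    # All numbers below are invented; nothing here is from this paper.

    def margin_of_stability(com_pos, com_vel, bos_boundary, leg_length, g=9.81):
        omega0 = math.sqrt(g / leg_length)
        xcom = com_pos + com_vel / omega0     # extrapolated center of mass
        return bos_boundary - xcom            # positive => laterally stable

    # CoM 5 cm medial of the lateral BoS edge, drifting laterally at 0.10 m/s
    mos = margin_of_stability(com_pos=0.0, com_vel=0.10,
                              bos_boundary=0.05, leg_length=0.9)
    print(round(mos, 4))  # 0.0197
    ```

    Mean MOS and MOS variability in the study are then simple statistics of this quantity over steps.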

  19. Perception of biological motion from size-invariant body representations.

    PubMed

    Lappe, Markus; Wittinghofer, Karin; de Lussanet, Marc H E

    2015-01-01

    The visual recognition of action is one of the socially most important and computationally demanding capacities of the human visual system. It combines visual shape recognition with complex non-rigid motion perception. Action presented as a point-light animation is a striking visual experience for anyone who sees it for the first time. Information about the shape and posture of the human body is sparse in point-light animations, but it is essential for action recognition. In the posturo-temporal filter model of biological motion perception, posture information is picked up by visual neurons tuned to the form of the human body before body motion is calculated. We tested whether point-light stimuli are processed through posture recognition of the human body form by using a typical feature of form recognition, namely size invariance. We constructed a point-light stimulus that can only be perceived through a size-invariant mechanism. This stimulus changes rapidly in size from one image to the next. It thus disrupts continuity of early visuo-spatial properties but maintains continuity of the body posture representation. Despite this massive manipulation at the visuo-spatial level, size-changing point-light figures are spontaneously recognized by naive observers, and support discrimination of human body motion.

  20. Modulation of neuronal responses during covert search for visual feature conjunctions

    PubMed Central

    Buracas, Giedrius T.; Albright, Thomas D.

    2009-01-01

    While searching for an object in a visual scene, an observer's attentional focus and eye movements are often guided by information about object features and spatial locations. Both spatial and feature-specific attention are known to modulate neuronal responses in visual cortex, but little is known of the dynamics and interplay of these mechanisms as visual search progresses. To address this issue, we recorded from directionally selective cells in visual area MT of monkeys trained to covertly search for targets defined by a unique conjunction of color and motion features and to signal target detection with an eye movement to the putative target. Two patterns of response modulation were observed. One pattern consisted of enhanced responses to targets presented in the receptive field (RF). These modulations occurred at the end-stage of search and were more potent during correct target identification than during erroneous saccades to a distractor in the RF, thus suggesting that this modulation is not a mere presaccadic enhancement. A second pattern of modulation was observed when RF stimuli were nontargets that shared a feature with the target. The latter effect was observed during early stages of search and is consistent with a global feature-specific mechanism. This effect often terminated before target identification, thus suggesting that it interacts with spatial attention. This modulation was exhibited not only for motion but also for the color cue, although MT neurons are known to be insensitive to color. Such cue-invariant attentional effects may contribute to a feature binding mechanism acting across visual dimensions. PMID:19805385

  1. Modulation of neuronal responses during covert search for visual feature conjunctions.

    PubMed

    Buracas, Giedrius T; Albright, Thomas D

    2009-09-29

    While searching for an object in a visual scene, an observer's attentional focus and eye movements are often guided by information about object features and spatial locations. Both spatial and feature-specific attention are known to modulate neuronal responses in visual cortex, but little is known of the dynamics and interplay of these mechanisms as visual search progresses. To address this issue, we recorded from directionally selective cells in visual area MT of monkeys trained to covertly search for targets defined by a unique conjunction of color and motion features and to signal target detection with an eye movement to the putative target. Two patterns of response modulation were observed. One pattern consisted of enhanced responses to targets presented in the receptive field (RF). These modulations occurred at the end-stage of search and were more potent during correct target identification than during erroneous saccades to a distractor in the RF, thus suggesting that this modulation is not a mere presaccadic enhancement. A second pattern of modulation was observed when RF stimuli were nontargets that shared a feature with the target. The latter effect was observed during early stages of search and is consistent with a global feature-specific mechanism. This effect often terminated before target identification, thus suggesting that it interacts with spatial attention. This modulation was exhibited not only for motion but also for the color cue, although MT neurons are known to be insensitive to color. Such cue-invariant attentional effects may contribute to a feature binding mechanism acting across visual dimensions.

  2. Steady induction effects in geomagnetism. Part 1A: Steady motional induction of geomagnetic chaos

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.

    1992-01-01

    Geomagnetic effects of magnetic induction by hypothetically steady fluid motion and steady magnetic flux diffusion near the top of Earth's core are investigated using electromagnetic theory, simple magnetic earth models, and numerical experiments with geomagnetic field models. The problem of estimating a steady fluid velocity field near the top of Earth's core which induces the secular variation indicated by broad-scale models of the observed geomagnetic field is examined and solved. In Part 1, the steady surficial core flow estimation problem is solved in the context of the source-free mantle/frozen-flux core model. In the first paper (IA), the theory underlying such estimates is reviewed and some consequences of various kinematic and dynamic flow hypotheses are derived. For a frozen-flux core, fluid downwelling is required to change the mean square normal magnetic flux density averaged over the core-mantle boundary. For surficially geostrophic flow, downwelling implies poleward flow. The solution of the forward steady motional induction problem at the surface of a frozen-flux core is derived and found to be a fine, easily visualized example of deterministic chaos. Geomagnetic effects of statistically steady core surface flow may well dominate secular variation over several decades. Indeed, effects of persistent, if not steady, surficially geostrophic core flow are described which may help explain certain features of the present broad-scale geomagnetic field and perhaps paleomagnetic secular variation.
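
    For orientation, the frozen-flux relations underlying this analysis are standard in the geomagnetism literature (stated here for context, not quoted from the paper). At the core-mantle boundary, with no radial flow, the radial induction equation and the resulting flux-change identity read

    ```latex
    % Frozen-flux radial induction at the core-mantle boundary (u_r = 0):
    \frac{\partial B_r}{\partial t} + \nabla_h \cdot \left( \mathbf{u}_h\, B_r \right) = 0 ,
    % integrating B_r^2 over the boundary S, the divergence term vanishes:
    \frac{d}{dt} \oint_S B_r^2 \, dS = - \oint_S B_r^2 \left( \nabla_h \cdot \mathbf{u}_h \right) dS ,
    ```

    which is why nonzero horizontal divergence (upwelling or downwelling) is required to change the mean-square normal flux, as the abstract notes.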

  3. Influence of a visual display and frequency of whole-body angular oscillation on incidence of motion sickness.

    PubMed

    Guedry, F E; Benson, A J; Moore, H J

    1982-06-01

    Visual search within a head-fixed display consisting of a 12 X 12 digit matrix is degraded by whole-body angular oscillation at 0.02 Hz (+/- 155 degrees/s peak velocity), and signs and symptoms of motion sickness are prominent in a number of individuals within a 5-min exposure. Exposure to 2.5 Hz (+/- 20 degrees/s peak velocity) produces equivalent degradation of the visual search task, but does not produce signs and symptoms of motion sickness within a 5-min exposure.

  4. Man-systems evaluation of moving base vehicle simulation motion cues. [human acceleration perception involving visual feedback

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, M.; Brye, R. G.

    1974-01-01

    A motion cue investigation program is reported that deals with human factors aspects of high-fidelity vehicle simulation. General data on non-visual motion thresholds and specific threshold values are established for use as washout parameters in vehicle simulation. A general-purpose simulator is used to test the contradictory-cue hypothesis that acceleration sensitivity is reduced during a vehicle control task involving visual feedback. The simulator provides varying acceleration levels, and a forced-choice method based on the theory of signal detectability is used.

  5. A computational theory of visual receptive fields.

    PubMed

    Lindeberg, Tony

    2013-12-01

    A receptive field constitutes a region in the visual field where a visual cell or a visual operator responds to visual stimuli. This paper presents a theory for what types of receptive field profiles can be regarded as natural for an idealized vision system, given a set of structural requirements on the first stages of visual processing that reflect symmetry properties of the surrounding world. These symmetry properties include (i) covariance properties under scale changes, affine image deformations, and Galilean transformations of space-time as occur for real-world image data as well as specific requirements of (ii) temporal causality implying that the future cannot be accessed and (iii) a time-recursive updating mechanism of a limited temporal buffer of the past as is necessary for a genuine real-time system. Fundamental structural requirements are also imposed to ensure (iv) mutual consistency and a proper handling of internal representations at different spatial and temporal scales. It is shown how a set of families of idealized receptive field profiles can be derived by necessity regarding spatial, spatio-chromatic, and spatio-temporal receptive fields in terms of Gaussian kernels, Gaussian derivatives, or closely related operators. Such image filters have been successfully used as a basis for expressing a large number of visual operations in computer vision, regarding feature detection, feature classification, motion estimation, object recognition, spatio-temporal recognition, and shape estimation. Hence, the associated so-called scale-space theory constitutes a theoretically well-founded and general framework for expressing visual operations. There are very close similarities between receptive field profiles predicted from this scale-space theory and receptive field profiles found by cell recordings in biological vision.
Among the family of receptive field profiles derived by necessity from the assumptions, idealized models with very good qualitative agreement are obtained for (i) spatial on-center/off-surround and off-center/on-surround receptive fields in the fovea and the LGN, (ii) simple cells with spatial directional preference in V1, (iii) spatio-chromatic double-opponent neurons in V1, (iv) space-time separable spatio-temporal receptive fields in the LGN and V1, and (v) non-separable space-time tilted receptive fields in V1, all within the same unified theory. In addition, the paper presents a more general framework for relating and interpreting these receptive fields conceptually and possibly predicting new receptive field profiles as well as for pre-wiring covariance under scaling, affine, and Galilean transformations into the representations of visual stimuli. This paper describes the basic structure of the necessity results concerning receptive field profiles regarding the mathematical foundation of the theory and outlines how the proposed theory could be used in further studies and modelling of biological vision. It is also shown how receptive field responses can be interpreted physically, as the superposition of relative variations of surface structure and illumination variations, given a logarithmic brightness scale, and how receptive field measurements will be invariant under multiplicative illumination variations and exposure control mechanisms.
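The Gaussian-derivative families the abstract refers to are easy to sketch in one dimension. The following is a minimal illustration (not the paper's full spatio-temporal machinery): the first derivative gives an odd, edge-selective profile reminiscent of a simple cell, and the negated second derivative gives a center-surround profile reminiscent of on-center/off-surround cells:

```python
import math

def gaussian_derivative(x, sigma, order=0):
    """Value at x of a 1-D Gaussian (order 0) or its 1st/2nd derivative,
    used here as idealized spatial receptive-field profiles."""
    g = math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))
    if order == 0:
        return g                                   # smoothing kernel
    if order == 1:
        return -x / sigma**2 * g                   # odd, edge-selective profile
    if order == 2:
        return (x * x / sigma**4 - 1 / sigma**2) * g  # second derivative
    raise ValueError("order must be 0, 1 or 2")

# Sampled profile of -G''(x): positive centre, negative surround,
# qualitatively like an on-center/off-surround receptive field.
profile = [-gaussian_derivative(i * 0.5, 1.0, order=2) for i in range(-6, 7)]
```

The sampling step and sigma here are arbitrary choices for illustration; the theory's point is that this family follows by necessity from the stated structural requirements.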

  6. A Rotational Motion Perception Neural Network Based on Asymmetric Spatiotemporal Visual Information Processing.

    PubMed

    Hu, Bin; Yue, Shigang; Zhang, Zhuhong

    All complex motion patterns can be decomposed into several elements, including translation, expansion/contraction, and rotational motion. In biological vision systems, scientists have found that specific types of visual neurons have specific preferences to each of the three motion elements. There are computational models on translation and expansion/contraction perceptions; however, little has been done in the past to create computational models for rotational motion perception. To fill this gap, we proposed a neural network that utilizes a specific spatiotemporal arrangement of asymmetric lateral inhibited direction selective neural networks (DSNNs) for rotational motion perception. The proposed neural network consists of two parts: presynaptic and postsynaptic. In the presynaptic part, there are a number of lateral inhibited DSNNs to extract directional visual cues. In the postsynaptic part, similar to the arrangement of the directional columns in the cerebral cortex, these direction selective neurons are arranged in a cyclic order to perceive rotational motion cues. In the postsynaptic network, the delayed excitation from each direction selective neuron is multiplied by the gathered excitation from this neuron and its unilateral counterparts depending on which rotation, clockwise (cw) or counter-cw (ccw), to perceive. Systematic experiments under various conditions and settings have been carried out and validated the robustness and reliability of the proposed neural network in detecting cw or ccw rotational motion. This research is a critical step further toward dynamic visual information processing.
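The cyclic arrangement described in the abstract can be caricatured with a toy correlator (loosely inspired by the description, not the authors' model): each unit's delayed signal is multiplied by its neighbour's current signal around a ring, so a pattern drifting clockwise drives the cw-tuned sum more strongly than the ccw-tuned one:

```python
def ring_rotation_response(frames, direction="cw"):
    """Toy cyclic correlator: sum, over successive frames, of each unit's
    delayed activity times its ring-neighbour's current activity."""
    step = 1 if direction == "cw" else -1
    total = 0.0
    for prev, curr in zip(frames, frames[1:]):
        n = len(curr)
        total += sum(prev[i] * curr[(i + step) % n] for i in range(n))
    return total

# A bright blob stepping clockwise around an 8-unit ring:
n = 8
frames = [[1.0 if i == t % n else 0.0 for i in range(n)] for t in range(6)]
cw = ring_rotation_response(frames, "cw")
ccw = ring_rotation_response(frames, "ccw")
print(cw > ccw)  # True: the cw detector responds more strongly
```

The real DSNN model adds lateral inhibition and asymmetric spatiotemporal weighting; this sketch only captures the ring-of-direction-selective-units idea.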

  7. Event processing in the visual world: Projected motion paths during spoken sentence comprehension.

    PubMed

    Kamide, Yuki; Lindsay, Shane; Scheepers, Christoph; Kukona, Anuenue

    2016-05-01

    Motion events in language describe the movement of an entity to another location along a path. In 2 eye-tracking experiments, we found that comprehension of motion events involves the online construction of a spatial mental model that integrates language with the visual world. In Experiment 1, participants listened to sentences describing the movement of an agent to a goal while viewing visual scenes depicting the agent, goal, and empty space in between. Crucially, verbs suggested either upward (e.g., jump) or downward (e.g., crawl) paths. We found that in the rare event of fixating the empty space between the agent and goal, visual attention was biased upward or downward in line with the verb. In Experiment 2, visual scenes depicted a central obstruction, which imposed further constraints on the paths and increased the likelihood of fixating the empty space between the agent and goal. The results from this experiment corroborated and refined the previous findings. Specifically, eye-movement effects started immediately after hearing the verb and were in line with data from an additional mouse-tracking task that encouraged a more explicit spatial reenactment of the motion event. In revealing how event comprehension operates in the visual world, these findings suggest a mental simulation process whereby spatial details of motion events are mapped onto the world through visual attention. The strength and detectability of such effects in overt eye-movements is constrained by the visual world and the fact that perceivers rarely fixate regions of empty space. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  8. Integration of visual and motion cues for flight simulator requirements and ride quality investigation

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1976-01-01

    Investigations for the improvement of flight simulators are reported. Topics include: visual cues in landing, comparison of linear and nonlinear washout filters using a model of the vestibular system, and visual vestibular interactions (yaw axis). An abstract is given for a thesis on the applications of human dynamic orientation models to motion simulation.

  9. Window of visibility - A psychophysical theory of fidelity in time-sampled visual motion displays

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Ahumada, A. J., Jr.; Farrell, J. E.

    1986-01-01

    A film of an object in motion presents on the screen a sequence of static views, while the human observer sees the object moving smoothly across the screen. Questions related to the perceptual identity of continuous and stroboscopic displays are examined. Time-sampled moving images are considered along with the contrast distribution of continuous motion, the contrast distribution of stroboscopic motion, the frequency spectrum of continuous motion, the frequency spectrum of stroboscopic motion, the approximation of the limits of human visual sensitivity to spatial and temporal frequencies by a window of visibility, the critical sampling frequency, the contrast distribution of staircase motion and the frequency spectrum of this motion, and the spatial dependence of the critical sampling frequency. Attention is given to apparent motion, models of motion, image recording, and computer-generated imagery.
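In the window-of-visibility framework, a time-sampled moving stimulus is indistinguishable from continuous motion once its spectral replicas fall outside the visible region of spatiotemporal frequency; the critical sample rate then grows linearly with target speed. A minimal sketch, with illustrative placeholder limits rather than the paper's fitted values:

```python
def critical_sample_rate(speed, temporal_limit_hz=30.0, spatial_limit_cpd=60.0):
    """Sampling rate (Hz) above which stroboscopic and continuous motion are
    predicted to look identical: replicas of the sampled stimulus spectrum
    fall outside the 'window of visibility'.
    speed: target speed in deg/s; limits: temporal (Hz) and spatial (c/deg)."""
    return temporal_limit_hz + speed * spatial_limit_cpd

# Under these limits a target moving at 2 deg/s needs roughly a 150 Hz update rate.
print(critical_sample_rate(2.0))
```

A practical consequence, visible in the formula, is that slow motion tolerates low frame rates while fast motion demands high ones, which is why judder is most visible in fast pans.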

  10. Alterations to global but not local motion processing in long-term ecstasy (MDMA) users.

    PubMed

    White, Claire; Brown, John; Edwards, Mark

    2014-07-01

    Growing evidence indicates that the main psychoactive ingredient in the illegal drug "ecstasy" (methylenedioxymethamphetamine) causes reduced activity in the serotonin and gamma-aminobutyric acid (GABA) systems in humans. On the basis of substantial serotonin input to the occipital lobe, recent research investigated visual processing in long-term users and found a larger magnitude of the tilt aftereffect, interpreted to reflect broadened orientation tuning bandwidths. Further research found higher orientation discrimination thresholds and reduced long-range interactions in the primary visual area of ecstasy users. The aim of the present research was to investigate whether serotonin-mediated V1 visual processing deficits in ecstasy users extend to motion processing mechanisms. Forty-five participants (21 controls, 24 drug users) completed two psychophysical studies: A direction discrimination study directly measured local motion processing in V1, while a motion coherence task tested global motion processing in area V5/MT. "Primary" ecstasy users (n = 18), those without substantial polydrug use, had significantly lower global motion thresholds than controls [p = 0.027, Cohen's d = 0.78 (large)], indicating increased sensitivity to global motion stimuli, but no difference in local motion processing (p = 0.365). These results extend previous research investigating the long-term effects of illicit drugs on visual processing. Two possible explanations are explored: diffuse attentional processes may be facilitating spatial pooling of motion signals in users. Alternatively, it may be that a GABA-mediated disruption to V5/MT processing is reducing spatial suppression and therefore improving global motion perception in ecstasy users.

  11. Visual Depth from Motion Parallax and Eye Pursuit

    PubMed Central

    Stroyan, Keith; Nawrot, Mark

    2012-01-01

    A translating observer viewing a rigid environment experiences “motion parallax,” the relative movement upon the observer’s retina of variously positioned objects in the scene. This retinal movement of images provides a cue to the relative depth of objects in the environment, however retinal motion alone cannot mathematically determine relative depth of the objects. Visual perception of depth from lateral observer translation uses both retinal image motion and eye movement. In (Nawrot & Stroyan, 2009, Vision Res. 49, p.1969) we showed mathematically that the ratio of the rate of retinal motion over the rate of smooth eye pursuit mathematically determines depth relative to the fixation point in central vision. We also reported on psychophysical experiments indicating that this ratio is the important quantity for perception. Here we analyze the motion/pursuit cue for the more general, and more complicated, case when objects are distributed across the horizontal viewing plane beyond central vision. We show how the mathematical motion/pursuit cue varies with different points across the plane and with time as an observer translates. If the time varying retinal motion and smooth eye pursuit are the only signals used for this visual process, it is important to know what is mathematically possible to derive about depth and structure. Our analysis shows that the motion/pursuit ratio determines an excellent description of depth and structure in these broader stimulus conditions, provides a detailed quantitative hypothesis of these visual processes for the perception of depth and structure from motion parallax, and provides a computational foundation to analyze the dynamic geometry of future experiments. PMID:21695531
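The motion/pursuit cue analyzed in this record can be reproduced numerically in its simplest case. The sketch below (simplified geometry and sign conventions, not the paper's full derivation) simulates a laterally translating observer fixating a point at distance f, with a second point directly beyond it at distance f + d; the ratio of retinal motion rate to pursuit rate then recovers the relative depth d / (f + d):

```python
import math

def motion_pursuit_ratio(f, d, v=1.0, dt=1e-6):
    """Finite-difference estimate of the motion/pursuit cue for a laterally
    translating observer (speed v) fixating at distance f, with a second
    point at distance f + d straight ahead."""
    def gaze(x, dist):
        # Visual direction of a point at depth `dist` for observer offset x
        # (sign conventions simplified; only magnitudes are used below).
        return math.atan2(x, dist)
    x0, x1 = 0.0, v * dt                                 # observer before/after one step
    pursuit = (gaze(x1, f) - gaze(x0, f)) / dt           # smooth-pursuit rate
    retinal = ((gaze(x1, f + d) - gaze(x1, f))
               - (gaze(x0, f + d) - gaze(x0, f))) / dt   # motion on the retina
    return abs(retinal) / abs(pursuit)

# In this geometry the ratio recovers relative depth d / (f + d):
ratio = motion_pursuit_ratio(f=100.0, d=50.0)
print(round(ratio, 3))  # 0.333
```

This is the central-vision special case; the paper's contribution is extending the analysis to points distributed across the horizontal viewing plane and over time.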

  12. Accounting for direction and speed of eye motion in planning visually guided manual tracking.

    PubMed

    Leclercq, Guillaume; Blohm, Gunnar; Lefèvre, Philippe

    2013-10-01

    Accurate motor planning in a dynamic environment is a critical skill for humans because we are often required to react quickly and adequately to the visual motion of objects. Moreover, we are often in motion ourselves, and this complicates motor planning. Indeed, the retinal and spatial motions of an object are different because of the retinal motion component induced by self-motion. Many studies have investigated motion perception during smooth pursuit and concluded that eye velocity is partially taken into account by the brain. Here we investigate whether the eye velocity during ongoing smooth pursuit is taken into account for the planning of visually guided manual tracking. We had 10 human participants manually track a target while in steady-state smooth pursuit toward another target such that the difference between the retinal and spatial target motion directions could be large, depending on both the direction and the speed of the eye. We used a measure of initial arm movement direction to quantify whether motor planning occurred in retinal coordinates (not accounting for eye motion) or was spatially correct (incorporating eye velocity). Results showed that the eye velocity was nearly fully taken into account by the neuronal areas involved in the visuomotor velocity transformation (between 75% and 102%). In particular, these neuronal pathways accounted for the nonlinear effects due to the relative velocity between the target and the eye. In conclusion, the brain network transforming visual motion into a motor plan for manual tracking adequately uses extraretinal signals about eye velocity.
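The core bookkeeping in this study is that spatial target motion equals retinal motion plus the motion induced by the eye. A hypothetical linear sketch (the study's point is precisely that full compensation is not guaranteed, and that nonlinear relative-velocity effects also matter, which this toy sum ignores):

```python
def spatial_from_retinal(retinal_vel, eye_vel, gain=1.0):
    """Spatial target velocity reconstructed from 2-D retinal motion plus an
    eye-velocity signal; gain < 1 models partial compensation (the abstract
    reports roughly 75-102% across conditions)."""
    return tuple(r + gain * e for r, e in zip(retinal_vel, eye_vel))

# Rightward pursuit at 10 deg/s with 5 deg/s upward retinal slip:
print(spatial_from_retinal((0.0, 5.0), (10.0, 0.0)))        # full compensation
print(spatial_from_retinal((0.0, 5.0), (10.0, 0.0), 0.75))  # partial compensation
```

With partial gain the recovered spatial direction rotates toward the retinal direction, which is the signature the authors measured in initial arm-movement directions.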

  13. Evaluation of an organic light-emitting diode display for precise visual stimulation.

    PubMed

    Ito, Hiroyuki; Ogawa, Masaki; Sunaga, Shoji

    2013-06-11

    A new type of visual display for presentation of a visual stimulus with high quality was assessed. The characteristics of an organic light-emitting diode (OLED) display (Sony PVM-2541, 24.5 in.; Sony Corporation, Tokyo, Japan) were measured in detail from the viewpoint of its applicability to visual psychophysics. We found the new display to be superior to other display types in terms of spatial uniformity, color gamut, and contrast ratio. Changes in the intensity of luminance were sharper on the OLED display than those on a liquid crystal display. Therefore, such OLED displays could replace conventional cathode ray tube displays in vision research for high quality stimulus presentation. Benefits of using OLED displays in vision research were especially apparent in the fields of low-level vision, where precise control and description of the stimulus are needed, e.g., in mesopic or scotopic vision, color vision, and motion perception.

  14. Heading Tuning in Macaque Area V6.

    PubMed

    Fan, Reuben H; Liu, Sheng; DeAngelis, Gregory C; Angelaki, Dora E

    2015-12-16

    Cortical areas, such as the dorsal subdivision of the medial superior temporal area (MSTd) and the ventral intraparietal area (VIP), have been shown to integrate visual and vestibular self-motion signals. Area V6 is interconnected with areas MSTd and VIP, allowing for the possibility that V6 also integrates visual and vestibular self-motion cues. An alternative hypothesis in the literature is that V6 does not use these sensory signals to compute heading but instead discounts self-motion signals to represent object motion. However, the responses of V6 neurons to visual and vestibular self-motion cues have never been studied, thus leaving the functional roles of V6 unclear. We used a virtual reality system to examine the 3D heading tuning of macaque V6 neurons in response to optic flow and inertial motion stimuli. We found that the majority of V6 neurons are selective for heading defined by optic flow. However, unlike areas MSTd and VIP, V6 neurons are almost universally unresponsive to inertial motion in the absence of optic flow. We also explored the spatial reference frames of heading signals in V6 by measuring heading tuning for different eye positions, and we found that the visual heading tuning of most V6 cells was eye-centered. Similar to areas MSTd and VIP, the population of V6 neurons was best able to discriminate small variations in heading around forward and backward headings. Our findings support the idea that V6 is involved primarily in processing visual motion signals and does not appear to play a role in visual-vestibular integration for self-motion perception. To understand how we successfully navigate our world, it is important to understand which parts of the brain process cues used to perceive our direction of self-motion (i.e., heading). Cortical area V6 has been implicated in heading computations based on human neuroimaging data, but direct measurements of heading selectivity in individual V6 neurons have been lacking. 
We provide the first demonstration that V6 neurons carry 3D visual heading signals, which are represented in an eye-centered reference frame. In contrast, we found almost no evidence for vestibular heading signals in V6, indicating that V6 is unlikely to contribute to multisensory integration of heading signals, unlike other cortical areas. These findings provide important constraints on the roles of V6 in self-motion perception. Copyright © 2015 the authors.

  15. Posture-based processing in visual short-term memory for actions.

    PubMed

    Vicary, Staci A; Stevens, Catherine J

    2014-01-01

    Visual perception of human action involves both form and motion processing, which may rely on partially dissociable neural networks. If form and motion are dissociable during visual perception, then they may also be dissociable during their retention in visual short-term memory (VSTM). To elicit form-plus-motion and form-only processing of dance-like actions, individual action frames can be presented in the correct or incorrect order. The former appears coherent and should elicit action perception, engaging both form and motion pathways, whereas the latter appears incoherent and should elicit posture perception, engaging form pathways alone. It was hypothesized that, if form and motion are dissociable in VSTM, then recognition of static body posture should be better after viewing incoherent than after viewing coherent actions. However, as VSTM is capacity limited, posture-based encoding of actions may be ineffective with increased number of items or frames. Using a behavioural change detection task, recognition of a single test posture was significantly more likely after studying incoherent than after studying coherent stimuli. However, this effect only occurred for spans of two (but not three) items and for stimuli with five (but not nine) frames. As in perception, posture and motion are dissociable in VSTM.

  16. Adaptation of velocity encoding in synaptically coupled neurons in the fly visual system.

    PubMed

    Kalb, Julia; Egelhaaf, Martin; Kurtz, Rafael

    2008-09-10

    Although many adaptation-induced effects on neuronal response properties have been described, it is often unknown at what processing stages in the nervous system they are generated. We focused on fly visual motion-sensitive neurons to identify changes in response characteristics during prolonged visual motion stimulation. By simultaneous recordings of synaptically coupled neurons, we were able to directly compare adaptation-induced effects at two consecutive processing stages in the fly visual motion pathway. This allowed us to narrow the potential sites of adaptation effects within the visual system and to relate them to the properties of signal transfer between neurons. Motion adaptation was accompanied by a response reduction, which was somewhat stronger in postsynaptic than in presynaptic cells. We found that the linear representation of motion velocity degrades during adaptation to a white-noise velocity-modulated stimulus. This effect is caused by an increasingly nonlinear velocity representation rather than by an increase of noise and is similarly strong in presynaptic and postsynaptic neurons. In accordance with this similarity, the dynamics and the reliability of interneuronal signal transfer remained nearly constant. Thus, adaptation is mainly based on processes located in the presynaptic neuron or in more peripheral processing stages. In contrast, changes of transfer properties at the analyzed synapse or in postsynaptic spike generation contribute little to changes in velocity coding during motion adaptation.

  17. Dynamic and predictive links between touch and vision.

    PubMed

    Gray, Rob; Tan, Hong Z

    2002-07-01

    We investigated crossmodal links between vision and touch for moving objects. In experiment 1, observers discriminated visual targets presented randomly at one of five locations on their forearm. Tactile pulses simulating motion along the forearm preceded visual targets. At short tactile-visual ISIs, discriminations were more rapid when the final tactile pulse and visual target were at the same location. At longer ISIs, discriminations were more rapid when the visual target was offset in the motion direction and were slower for offsets opposite to the motion direction. In experiment 2, speeded tactile discriminations at one of three random locations on the forearm were preceded by a visually simulated approaching object. Discriminations were more rapid when the object approached the location of the tactile stimulation and discrimination performance was dependent on the approaching object's time to contact. These results demonstrate dynamic links in the spatial mapping between vision and touch.

  18. Global motion perception is associated with motor function in 2-year-old children.

    PubMed

    Thompson, Benjamin; McKinlay, Christopher J D; Chakraborty, Arijit; Anstice, Nicola S; Jacobs, Robert J; Paudel, Nabin; Yu, Tzu-Ying; Ansell, Judith M; Wouldes, Trecia A; Harding, Jane E

    2017-09-29

    The dorsal visual processing stream, which includes V1, motion-sensitive area V5 and the posterior parietal lobe, supports visually guided motor function. Two recent studies have reported associations between global motion perception, a behavioural measure of processing in V5, and motor function in pre-school and school-aged children. This indicates a relationship between visual and motor development and also supports the use of global motion perception to assess overall dorsal stream function in studies of human neurodevelopment. We investigated whether associations between vision and motor function were present at 2 years of age, a substantially earlier stage of development. The Bayley III test of Infant and Toddler Development and measures of vision including visual acuity (Cardiff Acuity Cards), stereopsis (Lang stereotest) and global motion perception were attempted in 404 children aged 2 years (±4 weeks). Global motion perception (quantified as a motion coherence threshold) was assessed by observing optokinetic nystagmus in response to random dot kinematograms of varying coherence. Linear regression revealed that global motion perception was modestly, but statistically significantly, associated with Bayley III composite motor (r² = 0.06, p < 0.001, n = 375) and gross motor scores (r² = 0.06, p < 0.001, n = 375). The associations remained significant when language score was included in the regression model. In addition, when language score was included in the model, stereopsis was significantly associated with composite motor and fine motor scores, but unaided visual acuity was not statistically significantly associated with any of the motor scores. These results demonstrate that global motion perception and binocular vision are associated with motor function at an early stage of development. Global motion perception can be used as a partial measure of dorsal stream function from early childhood. Copyright © 2017 Elsevier B.V. All rights reserved.
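A random-dot kinematogram of the kind used to measure motion coherence thresholds can be sketched in a few lines. This is a schematic illustration of the stimulus class, not the study's actual stimulus code: a `coherence` fraction of dots steps in the signal direction on each update while the remainder step in random directions:

```python
import math
import random

def rdk_step(dots, coherence, direction=0.0, speed=1.0, rng=random):
    """One update of a random-dot kinematogram: each dot steps in the signal
    `direction` (radians) with probability `coherence`, else in a random
    direction."""
    new = []
    for (x, y) in dots:
        theta = direction if rng.random() < coherence else rng.uniform(0, 2 * math.pi)
        new.append((x + speed * math.cos(theta), y + speed * math.sin(theta)))
    return new

dots = [(0.0, 0.0)] * 1000
moved = rdk_step(dots, coherence=0.5, direction=0.0)
mean_dx = sum(x for x, _ in moved) / len(moved)
# The coherent rightward component dominates the average displacement.
```

Lowering `coherence` toward a participant's threshold weakens this net drift, which is what the optokinetic-nystagmus measure in the study tracks.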

  19. The effects of stereo disparity on the behavioural and electrophysiological correlates of perception of audio-visual motion in depth.

    PubMed

    Harrison, Neil R; Witheridge, Sian; Makin, Alexis; Wuerger, Sophie M; Pegna, Alan J; Meyer, Georg F

    2015-11-01

    Motion is represented by low-level signals, such as size-expansion in vision or loudness changes in the auditory modality. The visual and auditory signals from the same object or event may be integrated and facilitate detection. We explored behavioural and electrophysiological correlates of congruent and incongruent audio-visual depth motion in conditions where auditory level changes, visual expansion, and visual disparity cues were manipulated. In Experiment 1 participants discriminated auditory motion direction whilst viewing looming or receding, 2D or 3D, visual stimuli. Responses were faster and more accurate for congruent than for incongruent audio-visual cues, and the congruency effect (i.e., difference between incongruent and congruent conditions) was larger for visual 3D cues compared to 2D cues. In Experiment 2, event-related potentials (ERPs) were collected during presentation of the 2D and 3D, looming and receding, audio-visual stimuli, while participants detected an infrequent deviant sound. Our main finding was that audio-visual congruity was affected by retinal disparity at an early processing stage (135-160ms) over occipito-parietal scalp. Topographic analyses suggested that similar brain networks were activated for the 2D and 3D congruity effects, but that cortical responses were stronger in the 3D condition. Differences between congruent and incongruent conditions were observed between 140-200ms, 220-280ms, and 350-500ms after stimulus onset. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Bio-inspired motion detection in an FPGA-based smart camera module.

    PubMed

    Köhler, T; Röchter, F; Lindemann, J P; Möller, R

    2009-03-01

    Flying insects, despite their relatively coarse vision and tiny nervous system, are capable of carrying out elegant and fast aerial manoeuvres. Studies of the fly visual system have shown that this is accomplished by the integration of signals from a large number of elementary motion detectors (EMDs) in just a few global flow detector cells. We developed an FPGA-based smart camera module with more than 10,000 single EMDs, which is closely modelled after insect motion-detection circuits with respect to overall architecture, resolution and inter-receptor spacing. Input to the EMD array is provided by a CMOS camera with a high frame rate. Designed as an adaptable solution for different engineering applications and as a testbed for biological models, the EMD detector type and parameters such as the EMD time constants, the motion-detection directions and the angle between correlated receptors are reconfigurable online. This allows a flexible and simultaneous detection of complex motion fields such as translation, rotation and looming, such that various tasks, e.g., obstacle avoidance, height/distance control or speed regulation can be performed by the same compact device.
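The elementary motion detectors tiled across the FPGA in this record follow the classic correlation scheme. A Reichardt-style sketch (not the authors' implementation): the delayed signal from one photoreceptor is multiplied by the direct signal of its neighbour, and the mirror-symmetric product is subtracted, giving a signed, direction-selective output:

```python
def emd_response(left, right, delay=1):
    """Correlation-type elementary motion detector on two photoreceptor
    time series: positive output for left-to-right motion, negative for
    right-to-left."""
    n = min(len(left), len(right)) - delay
    return sum(left[t] * right[t + delay] - right[t] * left[t + delay]
               for t in range(n))

# A pulse passing left -> right yields a positive (preferred-direction) response:
left = [0, 1, 0, 0, 0]
right = [0, 0, 1, 0, 0]
print(emd_response(left, right))  # 1
```

Summing many such detectors over a receptor array, with different pairing directions, yields the translation, rotation and looming flow-field responses the module reconfigures between.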
