Postural time-to-contact as a precursor of visually induced motion sickness.
Li, Ruixuan; Walter, Hannah; Curry, Christopher; Rath, Ruth; Peterson, Nicolette; Stoffregen, Thomas A
2018-06-01
The postural instability theory of motion sickness predicts that subjective symptoms of motion sickness will be preceded by unstable control of posture. In previous studies, this prediction has been confirmed with measures of the spatial magnitude and the temporal dynamics of postural activity. In the present study, we examine whether precursors of visually induced motion sickness might exist in postural time-to-contact, a measure of postural activity that is related to the risk of falling. Standing participants were exposed to oscillating visual motion stimuli in a standard laboratory protocol. Both before and during exposure to visual motion stimuli, we monitored the kinematics of the body's center of pressure. We predicted that postural activity would differ between participants who reported motion sickness and those who did not, and that these differences would exist before participants experienced subjective symptoms of motion sickness. During exposure to visual motion stimuli, the multifractality of sway differed between the Well and Sick groups. Postural time-to-contact differed between the Well and Sick groups during exposure to visual motion stimuli, but also before exposure to any motion stimuli. The results provide a qualitatively new type of support for the postural instability theory of motion sickness.
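The time-to-contact measure used in this entry can be made concrete: from center-of-pressure (COP) kinematics, one extrapolates the COP along its instantaneous velocity and asks how long until it would reach a stability boundary. Below is a minimal velocity-based sketch, assuming a circular boundary centered on the mean COP; the study's exact boundary model and any acceleration terms are not specified here, so this is illustrative only.

```python
import numpy as np

def time_to_contact(cop_xy, dt, boundary_radius):
    """Velocity-based postural time-to-contact (TtC).

    For each sample, extrapolate the center of pressure (COP) along its
    instantaneous velocity and return the time until it would reach a
    circular stability boundary centered on the mean COP. Samples whose
    COP trajectory never reaches the boundary get TtC = inf.
    """
    cop = np.asarray(cop_xy, dtype=float)
    cop = cop - cop.mean(axis=0)            # boundary centered on mean COP
    vel = np.gradient(cop, dt, axis=0)      # finite-difference velocity

    ttc = np.full(len(cop), np.inf)
    for i, (p, v) in enumerate(zip(cop, vel)):
        speed = np.linalg.norm(v)
        if speed == 0:
            continue
        # Solve |p + t*v| = boundary_radius for the smallest positive t
        a = speed ** 2
        b = 2 * p.dot(v)
        c = p.dot(p) - boundary_radius ** 2
        disc = b * b - 4 * a * c
        if disc < 0:
            continue
        roots = (-b + np.array([-1.0, 1.0]) * np.sqrt(disc)) / (2 * a)
        positive = roots[roots > 0]
        if positive.size:
            ttc[i] = positive.min()
    return ttc
```

Lower minimum or mean TtC indicates sway that would reach the stability boundary sooner, i.e., less stable postural control.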
Integrative cortical dysfunction and pervasive motion perception deficit in fragile X syndrome.
Kogan, C S; Bertone, A; Cornish, K; Boutet, I; Der Kaloustian, V M; Andermann, E; Faubert, J; Chaudhuri, A
2004-11-09
Fragile X syndrome (FXS) is associated with neurologic deficits recently attributed to the magnocellular pathway of the lateral geniculate nucleus. We tested the hypotheses that FXS individuals 1) have a pervasive visual motion perception impairment affecting neocortical circuits in the parietal lobe and 2) have deficits in integrative neocortical mechanisms necessary for perception of complex stimuli. Psychophysical tests of visual motion and form perception defined by either first-order (luminance) or second-order (texture) attributes were used to probe early and later occipito-temporal and occipito-parietal functioning. When compared with developmental- and age-matched controls, FXS individuals displayed severe impairments in first- and second-order motion perception. This deficit was accompanied by near-normal perception of first-order form stimuli but not second-order form stimuli. Impaired visual motion processing for first- and second-order stimuli suggests that both early- and later-level neurologic function of the parietal lobe is affected in FXS. Furthermore, this deficit likely stems from abnormal input from the magnocellular compartment of the lateral geniculate nucleus. Impaired visual form and motion processing for complex visual stimuli, with normal processing for simple (i.e., first-order) form stimuli, suggests that FXS individuals have normal early form processing accompanied by a generalized impairment in the neurologic mechanisms necessary for integrating all early visual input.
Illusory visual motion stimulus elicits postural sway in migraine patients
Imaizumi, Shu; Honma, Motoyasu; Hibino, Haruo; Koyama, Shinichi
2015-01-01
Although the perception of visual motion modulates postural control, it is unknown whether illusory visual motion elicits postural sway. The present study examined the effect of illusory motion on postural sway in patients with migraine, who tend to be sensitive to such stimuli. We measured postural sway for both migraine patients and controls while they viewed static visual stimuli with and without illusory motion. The participants' postural sway was measured when they closed their eyes either immediately after (Experiment 1) or 30 s after (Experiment 2) viewing the stimuli. The patients swayed more than the controls when they closed their eyes immediately after viewing the illusory motion (Experiment 1), and they swayed less than the controls when they closed their eyes 30 s after viewing it (Experiment 2). These results suggest that static visual stimuli with illusory motion can induce postural sway that may last for at least 30 s in patients with migraine. PMID:25972832
Gaglianese, A; Costagli, M; Ueno, K; Ricciardi, E; Bernardi, G; Pietrini, P; Cheng, K
2015-01-22
The main visual pathway that conveys motion information to the middle temporal complex (hMT+) originates from the primary visual cortex (V1), which, in turn, receives spatial and temporal features of the perceived stimuli from the lateral geniculate nucleus (LGN). In addition, visual motion information reaches hMT+ directly from the thalamus, bypassing V1. We aimed to elucidate whether this direct route between LGN and hMT+ represents a 'fast lane' reserved for high-speed motion, as proposed previously, or whether it is merely involved in processing motion information irrespective of speed. We evaluated functional magnetic resonance imaging (fMRI) responses elicited by moving visual stimuli and applied connectivity analyses to investigate the effect of motion speed on the causal influence between LGN and hMT+, independent of V1, using Conditional Granger Causality (CGC) in the presence of slow and fast visual stimuli. Our results showed that at least part of the visual motion information from LGN reaches hMT+, bypassing V1, in response to both slow and fast motion speeds of the perceived stimuli. We also investigated whether motion speed has different effects on the connections between LGN and functional subdivisions within hMT+: direct connections between LGN and MT-proper carry mainly slow motion information, while connections between LGN and MST carry mainly fast motion information. The existence of a parallel pathway that connects the LGN directly to hMT+ in response to both slow and fast speeds may explain why MT and MST can still respond in the presence of V1 lesions.
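The Conditional Granger Causality analysis used in this entry has a simple core: past values of one signal (here LGN) are said to causally influence another (hMT+) only if they improve its prediction beyond what the conditioning signal (V1) and the target's own past already provide. Below is a minimal lag-1 sketch of that variance-ratio logic; the study's actual fMRI pipeline, model orders, and statistics are not reproduced.

```python
import numpy as np

def conditional_gc(x, y, z):
    """Lag-1 conditional Granger causality x -> y given z.

    Compares residual variance of y[t] regressed on (y[t-1], z[t-1])
    against y[t] regressed on (y[t-1], z[t-1], x[t-1]) and returns the
    log variance ratio; values > 0 suggest x carries predictive
    information about y beyond what z and y's own past provide.
    """
    x, y, z = (np.asarray(s, dtype=float) for s in (x, y, z))
    target = y[1:]
    ones = np.ones(len(target))
    restricted = np.column_stack([ones, y[:-1], z[:-1]])
    full = np.column_stack([ones, y[:-1], z[:-1], x[:-1]])

    def resid_var(design):
        coef, *_ = np.linalg.lstsq(design, target, rcond=None)
        resid = target - design @ coef
        return resid.var()

    return np.log(resid_var(restricted) / resid_var(full))
```

Usage: simulating y driven by x's past (with z independent) yields a clearly positive value for x -> y given z, and a value near zero for z -> y given x.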
Segalowitz, Sidney J; Sternin, Avital; Lewis, Terri L; Dywan, Jane; Maurer, Daphne
2017-04-01
We examined the role of early visual input in visual system development by testing adults who had been born with dense bilateral cataracts that blocked all patterned visual input during infancy until the cataractous lenses were removed surgically and the eyes fitted with compensatory contact lenses. Patients viewed checkerboards and textures to explore early processing regions (V1, V2), Glass patterns to examine global form processing (V4), and moving stimuli to explore global motion processing (V5). Patients' ERPs differed from those of controls in that (1) the V1 component was much smaller for all but the simplest stimuli and (2) extrastriate components did not differentiate amongst texture stimuli, Glass patterns, or motion stimuli. The results indicate that early visual deprivation contributes to permanent abnormalities at early and mid levels of visual processing, consistent with enduring behavioral deficits in the ability to process complex textures, global form, and global motion.
Visuotactile motion congruence enhances gamma-band activity in visual and somatosensory cortices.
Krebber, Martin; Harwood, James; Spitzer, Bernhard; Keil, Julian; Senkowski, Daniel
2015-08-15
When touching and viewing a moving surface, our visual and somatosensory systems receive congruent spatiotemporal input. Behavioral studies have shown that motion congruence facilitates interplay between visual and tactile stimuli, but the neural mechanisms underlying this interplay are not well understood. Neural oscillations play a role in motion processing and multisensory integration, and may also be crucial for visuotactile motion processing. In this electroencephalography study, we applied linear beamforming to examine the impact of visuotactile motion congruence on beta and gamma band activity (GBA) in visual and somatosensory cortices. Visual and tactile inputs consisted of gratings that moved either in the same or different directions. Participants performed a target detection task that was unrelated to motion congruence. While there were no effects in the beta band (13-21 Hz), the power of GBA (50-80 Hz) in visual and somatosensory cortices was larger for congruent compared with incongruent motion stimuli. This suggests enhanced bottom-up multisensory processing when visual and tactile gratings moved in the same direction. Supporting its behavioral relevance, GBA was correlated with shorter reaction times in the target detection task. We conclude that motion congruence plays an important role in the integrative processing of visuotactile stimuli in sensory cortices, as reflected by oscillatory responses in the gamma band.
Acoustic facilitation of object movement detection during self-motion
Calabro, F. J.; Soto-Faraco, S.; Vaina, L. M.
2011-01-01
In humans, as well as most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations. PMID:21307050
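The flow-parsing mechanism described in this entry can be sketched as vector arithmetic: each object's retinal motion is the sum of its scene-relative motion and the optic flow generated by self-motion, so subtracting an estimate of the self-motion flow recovers object motion. A toy sketch follows, assuming pure forward translation, which induces radial expansion away from the focus of expansion; the `expansion_rate` parameterization is a simplification for illustration, not the study's stimulus model.

```python
import numpy as np

def self_motion_flow(points, expansion_rate):
    """Radial optic flow induced by forward translation: each image
    point drifts away from the focus of expansion (here the origin)
    at a speed proportional to its eccentricity."""
    return expansion_rate * np.asarray(points, dtype=float)

def parse_object_motion(points, retinal_flow, expansion_rate):
    """Flow parsing: subtract the predicted self-motion flow from the
    measured retinal flow, leaving scene-relative object motion."""
    return np.asarray(retinal_flow, dtype=float) - self_motion_flow(points, expansion_rate)
```

For example, a static scene object at (2, 0) under an expansion rate of 0.5 has retinal flow (1, 0); parsing returns (0, 0), while any residual after subtraction flags a genuinely moving object.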
Demonstrating the Potential for Dynamic Auditory Stimulation to Contribute to Motion Sickness
Keshavarz, Behrang; Hettinger, Lawrence J.; Kennedy, Robert S.; Campos, Jennifer L.
2014-01-01
Auditory cues can create the illusion of self-motion (vection) in the absence of visual or physical stimulation. The present study aimed to determine whether auditory cues alone can also elicit motion sickness and how auditory cues contribute to motion sickness when added to visual motion stimuli. Twenty participants were seated in front of a curved projection display and were exposed to a virtual scene that constantly rotated around the participant's vertical axis. The virtual scene contained either visual-only, auditory-only, or a combination of corresponding visual and auditory cues. All participants performed all three conditions in a counterbalanced order. Participants tilted their heads alternately towards the right or left shoulder in all conditions during stimulus exposure in order to create pseudo-Coriolis effects and to maximize the likelihood for motion sickness. Measurements of motion sickness (onset, severity), vection (latency, strength, duration), and postural steadiness (center of pressure) were recorded. Results showed that adding auditory cues to the visual stimuli did not, on average, affect motion sickness and postural steadiness, but it did reduce vection onset times and increased vection strength compared to pure visual or pure auditory stimulation. Eighteen of the 20 participants reported at least slight motion sickness in the two conditions including visual stimuli. More interestingly, six participants also reported slight motion sickness during pure auditory stimulation and two of the six participants stopped the pure auditory test session due to motion sickness. The present study is the first to demonstrate that motion sickness may be caused by pure auditory stimulation, which we refer to as “auditorily induced motion sickness”. PMID:24983752
Rosenblatt, Steven David; Crane, Benjamin Thomas
2015-01-01
A moving visual field can induce the feeling of self-motion, or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear whether such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing a set of visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory motion controls were altered versions of the experimental image that removed the illusory motion effect. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8 s viewing interval, with the inertial stimulus occurring over the final 1 s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1 s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, we measured the effect of each visual stimulus on the self-motion stimulus velocity (cm/s) at which subjects were equally likely to report motion in either direction. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p < 0.001), and for arrows (p = 0.02).
For the visual motion stimuli, inertial motion perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion perception driven by a minority of subjects. There was no significant effect of illusory motion on self-motion perception for either translation or rotation (p>0.1 for both). Thus, although a true moving visual field can induce self-motion, results of this study show that illusory motion does not.
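The shift measure described in this entry is a point of subjective equality (PSE): the stimulus velocity at which the two response directions are equally likely. Below is a minimal sketch of estimating a PSE from response proportions, assuming a logistic psychometric function and using a linearized (log-odds) fit; `fit_pse` is an illustration, not the study's actual analysis.

```python
import numpy as np

def fit_pse(velocities, p_rightward):
    """Estimate the point of subjective equality (PSE): the stimulus
    velocity at which 'rightward' responses reach 50%.

    Transforms the response proportions to log-odds (a simple
    logistic-regression stand-in), fits a line, and solves for the
    velocity at which the log-odds are zero (i.e., p = 0.5).
    """
    v = np.asarray(velocities, dtype=float)
    p = np.clip(np.asarray(p_rightward, dtype=float), 1e-3, 1 - 1e-3)
    logit = np.log(p / (1 - p))
    slope, intercept = np.polyfit(v, logit, 1)
    return -intercept / slope   # logit = 0  <=>  p = 0.5
```

A visual stimulus that shifts self-motion perception shows up as a PSE displaced away from zero velocity in the direction consistent with the visual motion.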
Verspui, Remko; Gray, John R
2009-10-01
Animals rely on multimodal sensory integration for proper orientation within their environment. For example, odour-guided behaviours often require appropriate integration of concurrent visual cues. To gain a further understanding of the mechanisms underlying sensory integration in odour-guided behaviour, our study examined the effects of visual stimuli induced by self-motion and object-motion on odour-guided flight in male Manduca sexta. By placing stationary objects (pillars) on either side of a female pheromone plume, moths produced self-induced visual motion during odour-guided flight. These flights showed a reduction in both ground and flight speeds and in inter-turn interval when compared with flight tracks without stationary objects. Presentation of an approaching 20 cm disc, to simulate object-motion, resulted in interrupted odour-guided flight and changes in flight direction away from the pheromone source. Modifications of odour-guided flight behaviour in the presence of stationary objects suggest that visual information, in conjunction with olfactory cues, can be used to control the rate of counter-turning. We suggest that the behavioural responses to visual stimuli induced by object-motion indicate the presence of a neural circuit that relays visual information to initiate escape responses. These behavioural responses also suggest the presence of a sensory conflict requiring a trade-off between olfactory and visually driven behaviours. The mechanisms underlying olfactory and visual integration are discussed in the context of these behavioural responses.
Spatio-Temporal Brain Mapping of Motion-Onset VEPs Combined with fMRI and Retinotopic Maps
Pitzalis, Sabrina; Strappini, Francesca; De Gasperis, Marco; Bultrini, Alessandro; Di Russo, Francesco
2012-01-01
Neuroimaging studies have identified several motion-sensitive visual areas in the human brain, but the time course of their activation cannot be measured with these techniques. In the present study, we combined electrophysiological and neuroimaging methods (including retinotopic brain mapping) to determine the spatio-temporal profile of motion-onset visual evoked potentials for slow and fast motion stimuli and to localize their neural generators. We found that cortical activity initiates in the primary visual area (V1) for slow stimuli, peaking 100 ms after the onset of motion. Subsequently, activity in the mid-temporal motion-sensitive areas, MT+, peaked at 120 ms, followed by peaks in activity in the more dorsal area, V3A, at 160 ms and the lateral occipital complex at 180 ms. Approximately 250 ms after stimulus onset, activity for fast motion stimuli was predominant in area V6 along the parieto-occipital sulcus. Finally, at 350 ms (100 ms after the motion offset) brain activity was visible again in area V1. For fast motion stimuli, the spatio-temporal brain pattern was similar, except that the first activity was detected at 70 ms in area MT+. Comparing functional magnetic resonance data for slow vs. fast motion, we found signs of slow-fast motion stimulus topography along the posterior brain in at least three cortical regions (MT+, V3A and LOR). PMID:22558222
Full-wave and half-wave rectification in second-order motion perception
NASA Technical Reports Server (NTRS)
Solomon, J. A.; Sperling, G.
1994-01-01
Microbalanced stimuli are dynamic displays which do not stimulate motion mechanisms that apply standard (Fourier-energy or autocorrelational) motion analysis directly to the visual signal. In order to extract motion information from microbalanced stimuli, Chubb and Sperling [(1988) Journal of the Optical Society of America, 5, 1986-2006] proposed that the human visual system performs a rectifying transformation on the visual signal prior to standard motion analysis. The current research employs two novel types of microbalanced stimuli: half-wave stimuli preserve motion information following half-wave rectification (with a threshold) but lose motion information following full-wave rectification; full-wave stimuli preserve motion information following full-wave rectification but lose motion information following half-wave rectification. Additionally, Fourier stimuli, ordinary square-wave gratings, were used to stimulate standard motion mechanisms. Psychometric functions (direction discrimination vs stimulus contrast) were obtained for each type of stimulus when presented alone, and when masked by each of the other stimuli (presented as moving masks and also as nonmoving, counterphase-flickering masks). RESULTS: given sufficient contrast, all three types of stimulus convey motion. However, only one-third of the population can perceive the motion of the half-wave stimulus. Observers are able to process the motion information contained in the Fourier stimulus slightly more efficiently than the information in the full-wave stimulus but are much less efficient in processing half-wave motion information. Moving masks are more effective than counterphase masks at hampering direction discrimination, indicating that some of the masking effect is interference between motion mechanisms, and some occurs at earlier stages. 
When either full-wave and Fourier or half-wave and Fourier gratings are presented simultaneously, there is a wide range of relative contrasts within which the motion directions of both gratings are easily determinable. Conversely, when half-wave and full-wave gratings are combined, the direction of only one of these gratings can be determined with high accuracy. CONCLUSIONS: the results indicate that three motion computations are carried out, any two in parallel: one standard ("first order") and two non-Fourier ("second-order") computations that employ full-wave and half-wave rectification.
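The two nonlinearities contrasted in this entry are easy to state: full-wave rectification keeps the magnitude of a signal's deviation from its mean, while half-wave rectification (with a threshold) passes only one-sided deviations. Below is a minimal sketch of both transforms as they might precede standard motion-energy analysis; it is illustrative and not the authors' exact stimulus construction.

```python
import numpy as np

def full_wave(signal):
    """Full-wave rectification: absolute deviation from the mean."""
    s = np.asarray(signal, dtype=float)
    return np.abs(s - s.mean())

def half_wave(signal, threshold=0.0):
    """Half-wave rectification with a threshold: positive deviations
    above the threshold pass (shifted down by the threshold);
    everything else maps to zero."""
    s = np.asarray(signal, dtype=float)
    d = s - s.mean()
    return np.where(d > threshold, d - threshold, 0.0)
```

In these terms, a "full-wave stimulus" is one whose motion signal survives `full_wave` but is lost after `half_wave`, and a "half-wave stimulus" is the reverse.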
Ahrens, Merle-Marie; Veniero, Domenica; Gross, Joachim; Harvey, Monika; Thut, Gregor
2015-01-01
Many behaviourally relevant sensory events such as motion stimuli and speech have an intrinsic spatio-temporal structure. This will engage intentional and most likely unintentional (automatic) prediction mechanisms enhancing the perception of upcoming stimuli in the event stream. Here we sought to probe the anticipatory processes that are automatically driven by rhythmic input streams in terms of their spatial and temporal components. To this end, we employed an apparent visual motion paradigm testing the effects of pre-target motion on lateralized visual target discrimination. The motion stimuli either moved towards or away from peripheral target positions (valid vs. invalid spatial motion cueing) at a rhythmic or arrhythmic pace (valid vs. invalid temporal motion cueing). Crucially, we emphasized automatic motion-induced anticipatory processes by rendering the motion stimuli non-predictive of upcoming target position (by design) and task-irrelevant (by instruction), and by creating instead endogenous (orthogonal) expectations using symbolic cueing. Our data revealed that the apparent motion cues automatically engaged both spatial and temporal anticipatory processes, but that these processes were dissociated. We further found evidence for lateralisation of anticipatory temporal but not spatial processes. This indicates that distinct mechanisms may drive automatic spatial and temporal extrapolation of upcoming events from rhythmic event streams. This contrasts with previous findings that instead suggest an interaction between spatial and temporal attention processes when endogenously driven. Our results further highlight the need for isolating intentional from unintentional processes for better understanding the various anticipatory mechanisms engaged in processing behaviourally relevant stimuli with predictable spatio-temporal structure such as motion and speech. PMID:26623650
Neural mechanisms underlying sound-induced visual motion perception: An fMRI study.
Hidaka, Souta; Higuchi, Satomi; Teramoto, Wataru; Sugita, Yoichi
2017-07-01
Studies of crossmodal interactions in motion perception have reported activation in several brain areas, including those related to motion processing and/or sensory association, in response to multimodal (e.g., visual and auditory) stimuli that were both in motion. Recent studies have demonstrated that sounds can trigger illusory visual apparent motion to static visual stimuli (sound-induced visual motion: SIVM): a visual stimulus blinking at a fixed location is perceived to be moving laterally when an alternating left-right sound is also present. Here, we investigated brain activity related to the perception of SIVM using a 7T functional magnetic resonance imaging technique. Specifically, we focused on the patterns of neural activity in SIVM and visually induced visual apparent motion (VIVM). We observed shared activations in the middle occipital area (V5/hMT), which is thought to be involved in visual motion processing, for SIVM and VIVM. Moreover, as compared to VIVM, SIVM resulted in greater activation in the superior temporal area and dominant functional connectivity between the V5/hMT area and areas related to auditory and crossmodal motion processing. These findings indicate that similar but partially different neural mechanisms could be involved in auditory-induced and visually induced motion perception, and that neural signals in auditory, visual, and crossmodal motion processing areas closely and directly interact in the perception of SIVM.
Raymond, J L; Lisberger, S G
1996-12-01
We characterized the dependence of motor learning in the monkey vestibulo-ocular reflex (VOR) on the duration, frequency, and relative timing of the visual and vestibular stimuli used to induce learning. The amplitude of the VOR was decreased or increased through training with paired head and visual stimulus motion in the same or opposite directions, respectively. For training stimuli that consisted of simultaneous pulses of head and target velocity 80-1000 msec in duration, brief stimuli caused small changes in the amplitude of the VOR, whereas long stimuli caused larger changes in amplitude as well as changes in the dynamics of the reflex. When the relative timing of the visual and vestibular stimuli was varied, brief image motion paired with the beginning of a longer vestibular stimulus caused changes in the amplitude of the reflex alone, but the same image motion paired with a later time in the vestibular stimulus caused changes in the dynamics as well as the amplitude of the VOR. For training stimuli that consisted of sinusoidal head and visual stimulus motion, low-frequency training stimuli induced frequency-selective changes in the VOR, as reported previously, whereas high-frequency training stimuli induced changes in the amplitude of the VOR that were more similar across test frequency. The results suggest that there are at least two distinguishable components of motor learning in the VOR. One component is induced by short-duration or high-frequency stimuli and involves changes in only the amplitude of the reflex. A second component is induced by long-duration or low-frequency stimuli and involves changes in the amplitude and dynamics of the VOR.
Shibai, Atsushi; Arimoto, Tsunehiro; Yoshinaga, Tsukasa; Tsuchizawa, Yuta; Khureltulga, Dashdavaa; Brown, Zuben P; Kakizuka, Taishi; Hosoda, Kazufumi
2018-06-05
Visual recognition of conspecifics is necessary for a wide range of social behaviours in many animals. Medaka (Japanese rice fish), a commonly used model organism, are known to be attracted by the biological motion of conspecifics. However, biological motion is a composite of both body-shape motion and the entire-field motion trajectory (i.e., posture and motion-trajectory elements, respectively), and it had not been revealed which element mediates the attractiveness. Here, we show that either the posture or the motion-trajectory element alone can attract medaka. We decomposed the biological motion of the medaka into the two elements and synthesized visual stimuli that contained both, either, or neither of the two elements. We found that medaka were attracted by visual stimuli that contained at least one of the two elements. Together with previously known static visual cues, these results add to the multiplicity of information that may underlie conspecific recognition in medaka. Our strategy of decomposing biological motion into these partial elements is applicable to other animals, and further studies using this technique will enhance the basic understanding of visual recognition of conspecifics.
Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu
2015-01-01
Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828
Neurons Forming Optic Glomeruli Compute Figure–Ground Discriminations in Drosophila
Aptekar, Jacob W.; Keleş, Mehmet F.; Lu, Patrick M.; Zolotova, Nadezhda M.; Frye, Mark A.
2015-01-01
Many animals rely on visual figure–ground discrimination to aid in navigation, and to draw attention to salient features like conspecifics or predators. Even figures that are similar in pattern and luminance to the visual surroundings can be distinguished by the optical disparity generated by their relative motion against the ground, and yet the neural mechanisms underlying these visual discriminations are not well understood. We show in flies that a diverse array of figure–ground stimuli containing a motion-defined edge elicit statistically similar behavioral responses to one another, and statistically distinct behavioral responses from ground motion alone. From studies in larger flies and other insect species, we hypothesized that the circuitry of the lobula—one of the four primary neuropiles of the fly optic lobe—performs this visual discrimination. Using calcium imaging of input dendrites, we then show that information encoded in cells projecting from the lobula to discrete optic glomeruli in the central brain group these sets of figure–ground stimuli in a homologous manner to the behavior; “figure-like” stimuli are coded similarly to one another and “ground-like” stimuli are encoded differently. One cell class responds to the leading edge of a figure and is suppressed by ground motion. Two other classes cluster any figure-like stimuli, including a figure moving opposite the ground, distinctly from ground alone. This evidence demonstrates that lobula outputs provide a diverse basis set encoding visual features necessary for figure detection. PMID:25972183
Premotor cortex is sensitive to auditory-visual congruence for biological motion.
Wuerger, Sophie M; Parkes, Laura; Lewis, Penelope A; Crocker-Buque, Alex; Rutschmann, Roland; Meyer, Georg F
2012-03-01
The auditory and visual perception systems have developed special processing strategies for ecologically valid motion stimuli, utilizing some of the statistical properties of the real world. A well-known example is the perception of biological motion, for example, the perception of a human walker. The aim of the current study was to identify the cortical network involved in the integration of auditory and visual biological motion signals. We first determined the cortical regions of auditory and visual coactivation (Experiment 1); a conjunction analysis based on unimodal brain activations identified four regions: middle temporal area, inferior parietal lobule, ventral premotor cortex, and cerebellum. The brain activations arising from bimodal motion stimuli (Experiment 2) were then analyzed within these regions of coactivation. Auditory footsteps were presented concurrently with either an intact visual point-light walker (biological motion) or a scrambled point-light walker; auditory and visual motion in depth (walking direction) could either be congruent or incongruent. Our main finding is that motion incongruency (across modalities) increases the activity in the ventral premotor cortex, but only if the visual point-light walker is intact. Our results extend our current knowledge by providing new evidence consistent with the idea that the premotor area assimilates information across the auditory and visual modalities by comparing the incoming sensory input with an internal representation.
Compatibility of Motion Facilitates Visuomotor Synchronization
ERIC Educational Resources Information Center
Hove, Michael J.; Spivey, Michael J.; Krumhansl, Carol L.
2010-01-01
Prior research indicates that synchronized tapping performance is very poor with flashing visual stimuli compared with auditory stimuli. Three finger-tapping experiments compared flashing visual metronomes with visual metronomes containing a spatial component, either compatible, incompatible, or orthogonal to the tapping action. In Experiment 1,…
Functional specialization and generalization for grouping of stimuli based on colour and motion
Zeki, Semir; Stutters, Jonathan
2013-01-01
This study was undertaken to learn whether the principle of functional specialization that is evident at the level of the prestriate visual cortex extends to areas that are involved in grouping visual stimuli according to attribute, and specifically according to colour and motion. Subjects viewed, in an fMRI scanner, visual stimuli composed of moving dots, which could be either coloured or achromatic; in some stimuli the moving coloured dots were randomly distributed or moved in random directions; in others, some of the moving dots were grouped together according to colour or to direction of motion, with the number of groupings varying from 1 to 3. Increased activation was observed in area V4 in response to colour grouping and in V5 in response to motion grouping, while both groupings led to activity in separate though contiguous compartments within the intraparietal cortex. The activity in all the above areas was parametrically related to the number of groupings, as was the prominent activity in Crus I of the cerebellum, where the activity resulting from the two types of grouping overlapped. This suggests (a) that the specialized visual areas of the prestriate cortex have functions beyond the processing of visual signals according to attribute, namely that of grouping signals according to colour (V4) or motion (V5); (b) that the functional separation evident in visual cortical areas devoted to motion and colour, respectively, is maintained at the level of parietal cortex, at least as far as grouping according to attribute is concerned; and (c) that, by contrast, this grouping-related functional segregation is not maintained at the level of the cerebellum. PMID:23415950
Zhang, Yi; Chen, Lihan
2016-01-01
Recent studies of brain plasticity that pertain to time perception have shown that fast training of temporal discrimination in one modality, for example, the auditory modality, can improve performance of temporal discrimination in another modality, such as the visual modality. We here examined whether the perception of visual Ternus motion could be recalibrated through fast crossmodal statistical binding of temporal information and stimulus properties. We conducted two experiments, composed of three sessions each: pre-test, learning, and post-test. In both the pre-test and the post-test, participants classified the Ternus display as either “element motion” or “group motion.” For the training session in Experiment 1, we constructed two types of temporal structures, in which two consecutively presented sound beeps were dominantly (80%) flanked by one leading visual Ternus frame and by one lagging visual Ternus frame (VAAV) or dominantly inserted by two Ternus visual frames (AVVA). Participants were required to report which interval (auditory vs. visual) was longer. In Experiment 2, we presented only a single auditory–visual pair but with similar temporal configurations as in Experiment 1, and asked participants to perform an audio–visual temporal order judgment. The results of these two experiments support that statistical binding of temporal information and stimulus properties can quickly and selectively recalibrate the sensitivity of perceiving visual motion, according to the protocols of the specific bindings. PMID:27065910
Effect of eye position during human visual-vestibular integration of heading perception.
Crane, Benjamin T
2017-09-01
Visual and inertial stimuli provide heading discrimination cues. Integration of these multisensory stimuli has been demonstrated to depend on their relative reliability. However, the reference frame of visual stimuli is eye centered while inertia is head centered, and it remains unclear how these are reconciled with combined stimuli. Seven human subjects completed a heading discrimination task consisting of a 2-s translation with a peak velocity of 16 cm/s. Eye position was varied between 0° and ±25° left/right. Experiments were done with inertial motion, visual motion, or a combined visual-inertial motion. Visual motion coherence varied between 35% and 100%. Subjects reported whether their perceived heading was left or right of the midline in a forced-choice task. With the inertial stimulus the eye position had an effect such that the point of subjective equality (PSE) shifted 4.6 ± 2.4° in the gaze direction. With the visual stimulus the PSE shift was 10.2 ± 2.2° opposite the gaze direction, consistent with retinotopic coordinates. Thus with eccentric eye positions the perceived inertial and visual headings were offset ~15°. During the visual-inertial conditions the PSE varied consistently with the relative reliability of these stimuli such that at low visual coherence the PSE was similar to that of the inertial stimulus and at high coherence it was closer to the visual stimulus. On average, the inertial stimulus was weighted near Bayesian ideal predictions, but there was significant deviation from ideal in individual subjects. These findings support visual and inertial cue integration occurring in independent coordinate systems. NEW & NOTEWORTHY In multiple cortical areas visual heading is represented in retinotopic coordinates while inertial heading is in body coordinates. It remains unclear whether multisensory integration occurs in a common coordinate system. 
The experiments address this using a multisensory integration task with eccentric gaze positions, making the effect of coordinate systems clear. The results indicate that the coordinate systems remain separate up to the perceptual level and that during the multisensory task the perception depends on relative stimulus reliability.
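The reliability-weighted ("Bayesian ideal") prediction against which the study compares its subjects can be sketched in a few lines. This is an illustrative sketch, not the study's analysis code; the heading values and noise levels below are hypothetical placeholders:

```python
# Illustrative sketch of reliability-weighted (Bayesian ideal) cue combination.
# The headings (deg) and noise SDs below are hypothetical, not measured data.
def combine_cues(mu_vis, sigma_vis, mu_inr, sigma_inr):
    """Inverse-variance-weighted combination of visual and inertial heading cues."""
    w_vis = (1 / sigma_vis**2) / (1 / sigma_vis**2 + 1 / sigma_inr**2)
    mu = w_vis * mu_vis + (1 - w_vis) * mu_inr
    # The combined estimate is always more reliable than either cue alone.
    sigma = (1 / (1 / sigma_vis**2 + 1 / sigma_inr**2)) ** 0.5
    return mu, sigma

# Low visual coherence -> noisy visual cue -> percept dominated by inertia;
# high coherence -> percept pulled toward the visual heading.
mu_low, sd_low = combine_cues(mu_vis=-10.0, sigma_vis=12.0, mu_inr=5.0, sigma_inr=3.0)
mu_high, sd_high = combine_cues(mu_vis=-10.0, sigma_vis=2.0, mu_inr=5.0, sigma_inr=3.0)
```

This reproduces the qualitative pattern the abstract reports: at low coherence the combined point of subjective equality sits near the inertial estimate, and at high coherence it moves toward the visual one.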
Multisensory Motion Perception in 3–4 Month-Old Infants
Nava, Elena; Grassi, Massimo; Brenna, Viola; Croci, Emanuela; Turati, Chiara
2017-01-01
Human infants begin very early in life to take advantage of multisensory information by extracting the invariant amodal information that is conveyed redundantly by multiple senses. Here we addressed the question as to whether infants can bind multisensory moving stimuli, and whether this occurs even if the motion produced by the stimuli is only illusory. Three- to 4-month-old infants were presented with two bimodal pairings: visuo-tactile and audio-visual. Visuo-tactile pairings consisted of apparently vertically moving bars (the Barber Pole illusion) moving in either the same or opposite direction with a concurrent tactile stimulus consisting of strokes given on the infant’s back. Audio-visual pairings consisted of the Barber Pole illusion in its visual and auditory version, the latter giving the impression of a continuously rising or falling pitch. We found that infants were able to discriminate congruently (same direction) vs. incongruently moving (opposite direction) pairs irrespective of modality (Experiment 1). Importantly, we also found that congruently moving visuo-tactile and audio-visual stimuli were preferred over incongruently moving bimodal stimuli (Experiment 2). Our findings suggest that very young infants are able to extract motion as an amodal component and use it to match stimuli that only apparently move in the same direction. PMID:29187829
Perceptual learning modifies the functional specializations of visual cortical areas.
Chen, Nihong; Cai, Peng; Zhou, Tiangang; Thompson, Benjamin; Fang, Fang
2016-05-17
Training can improve performance of perceptual tasks. This phenomenon, known as perceptual learning, is strongest for the trained task and stimulus, leading to a widely accepted assumption that the associated neuronal plasticity is restricted to brain circuits that mediate performance of the trained task. Nevertheless, learning does transfer to other tasks and stimuli, implying the presence of more widespread plasticity. Here, we trained human subjects to discriminate the direction of coherent motion stimuli. The behavioral learning effect substantially transferred to noisy motion stimuli. We used transcranial magnetic stimulation (TMS) and functional magnetic resonance imaging (fMRI) to investigate the neural mechanisms underlying the transfer of learning. The TMS experiment revealed dissociable, causal contributions of V3A (one of the visual areas in the extrastriate visual cortex) and MT+ (middle temporal/medial superior temporal cortex) to coherent and noisy motion processing. Surprisingly, the contribution of MT+ to noisy motion processing was replaced by V3A after perceptual training. The fMRI experiment complemented and corroborated the TMS finding. Multivariate pattern analysis showed that, before training, among visual cortical areas, coherent and noisy motion was decoded most accurately in V3A and MT+, respectively. After training, both kinds of motion were decoded most accurately in V3A. Our findings demonstrate that the effects of perceptual learning extend far beyond the retuning of specific neural populations for the trained stimuli. Learning could dramatically modify the inherent functional specializations of visual cortical areas and dynamically reweight their contributions to perceptual decisions based on their representational qualities. These neural changes might serve as the neural substrate for the transfer of perceptual learning.
Mishra, Jyoti; Zanto, Theodore; Nilakantan, Aneesha; Gazzaley, Adam
2013-01-01
Intrasensory interference during visual working memory (WM) maintenance by object stimuli (such as faces and scenes) has been shown to negatively impact WM performance, with greater detrimental impacts of interference observed in aging. Here we assessed age-related impacts of intrasensory WM interference from lower-level stimulus features such as visual and auditory motion stimuli. We consistently found that interference in the form of ignored distractions and secondary task interruptions presented during a WM maintenance period degraded memory accuracy in both the visual and auditory domain. However, in contrast to prior studies assessing WM for visual object stimuli, feature-based interference effects were not observed to be significantly greater in older adults. Analyses of neural oscillations in the alpha frequency band further revealed preserved mechanisms of interference processing in terms of post-stimulus alpha suppression, which was observed maximally for secondary task interruptions in visual and auditory modalities in both younger and older adults. These results suggest that age-related sensitivity of WM to interference may be limited to complex object stimuli, at least at low WM loads. PMID:23791629
Senkowski, Daniel; Saint-Amour, Dave; Kelly, Simon P; Foxe, John J
2007-07-01
In everyday life, we continuously and effortlessly integrate the multiple sensory inputs from objects in motion. For instance, the sound and the visual percept of vehicles in traffic provide us with complementary information about the location and motion of vehicles. Here, we used high-density electrical mapping and local auto-regressive average (LAURA) source estimation to study the integration of multisensory objects in motion as reflected in event-related potentials (ERPs). A randomized stream of naturalistic multisensory-audiovisual (AV), unisensory-auditory (A), and unisensory-visual (V) "splash" clips (i.e., a drop falling and hitting a water surface) was presented among non-naturalistic abstract motion stimuli. The visual clip onset preceded the "splash" onset by 100 ms for multisensory stimuli. For naturalistic objects early multisensory integration effects beginning 120-140 ms after sound onset were observed over posterior scalp, with distributed sources localized to occipital cortex, temporal lobule, insular, and medial frontal gyrus (MFG). These effects, together with longer latency interactions (210-250 and 300-350 ms) found in a widespread network of occipital, temporal, and frontal areas, suggest that naturalistic objects in motion are processed at multiple stages of multisensory integration. The pattern of integration effects differed considerably for non-naturalistic stimuli. Unlike naturalistic objects, no early interactions were found for non-naturalistic objects. The earliest integration effects for non-naturalistic stimuli were observed 210-250 ms after sound onset including large portions of the inferior parietal cortex (IPC). As such, there were clear differences in the cortical networks activated by multisensory motion stimuli as a consequence of the semantic relatedness (or lack thereof) of the constituent sensory elements.
Neural mechanism for sensing fast motion in dim light.
Li, Ran; Wang, Yi
2013-11-07
Luminance is a fundamental property of visual scenes. A population of neurons in primary visual cortex (V1) is sensitive to uniform luminance. In natural vision, however, the retinal image often changes rapidly. Consequently the luminance signals visual cells receive are transiently varying. How V1 neurons respond to such luminance changes is unknown. By applying large static uniform stimuli or grating stimuli alternating at 25 Hz that resemble the rapid luminance changes in the environment, we show that approximately 40% of V1 cells responded to rapid luminance changes of uniform stimuli. Most of them strongly preferred luminance decrements. Importantly, when tested with drifting gratings, the preferred speeds of these cells were significantly higher than those of cells responsive to static grating stimuli but not to uniform stimuli. This responsiveness can be accounted for by the preferences for low spatial frequencies and high temporal frequencies. These luminance-sensitive cells subserve the detection of fast motion under the conditions of dim illumination.
Visual processing in the central bee brain.
Paulk, Angelique C; Dacks, Andrew M; Phillips-Portillo, James; Fellous, Jean-Marc; Gronenberg, Wulfila
2009-08-12
Visual scenes comprise enormous amounts of information from which nervous systems extract behaviorally relevant cues. In most model systems, little is known about the transformation of visual information as it occurs along visual pathways. We examined how visual information is transformed physiologically as it is communicated from the eye to higher-order brain centers using bumblebees, which are known for their visual capabilities. We recorded intracellularly in vivo from 30 neurons in the central bumblebee brain (the lateral protocerebrum) and compared these neurons to 132 neurons from more distal areas along the visual pathway, namely the medulla and the lobula. In these three brain regions (medulla, lobula, and central brain), we examined correlations between the neurons' branching patterns and their responses primarily to color, but also to motion stimuli. Visual neurons projecting to the anterior central brain were generally color sensitive, while neurons projecting to the posterior central brain were predominantly motion sensitive. The temporal response properties differed significantly between these areas, with an increase in spike time precision across trials and a decrease in average reliable spiking as visual information processing progressed from the periphery to the central brain. These data suggest that neurons along the visual pathway to the central brain not only are segregated with regard to the physical features of the stimuli (e.g., color and motion), but also differ in the way they encode stimuli, possibly to allow for efficient parallel processing to occur.
Perceived shifts of flashed stimuli by visible and invisible object motion.
Watanabe, Katsumi; Sato, Takashi R; Shimojo, Shinsuke
2003-01-01
Perceived positions of flashed stimuli can be altered by motion signals in the visual field, a phenomenon known as position capture (Whitney and Cavanagh, 2000 Nature Neuroscience 3 954-959). We examined whether position capture of flashed stimuli depends on the spatial relationship between moving and flashed stimuli, and whether the phenomenal permanence of a moving object behind an occluding surface (tunnel effect; Michotte 1950 Acta Psychologica 7 293-322) can produce position capture. Observers saw two objects (circles) moving vertically in opposite directions, one in each visual hemifield. Two horizontal bars were simultaneously flashed at horizontally collinear positions with the fixation point at various timings. When the movement of the object was fully visible, the flashed bar appeared shifted in the motion direction of the circle. But this position-capture effect occurred only when the bar was presented ahead of or on the moving circle. Even when the motion trajectory was covered by an opaque surface and the bar was flashed after complete occlusion of the circle, the position-capture effect was still observed, though the positional asymmetry was less clear. These results show that movements of both visible and 'hidden' objects can modulate the perception of positions of flashed stimuli and suggest that a high-level representation of 'objects in motion' plays an important role in the position-capture effect.
Disappearance of the inversion effect during memory-guided tracking of scrambled biological motion.
Jiang, Changhao; Yue, Guang H; Chen, Tingting; Ding, Jinhong
2016-08-01
The human visual system is highly sensitive to biological motion. Even when a point-light walker is temporarily occluded from view by other objects, our eyes are still able to maintain tracking continuity. To investigate how the visual system establishes a correspondence between the biological-motion stimuli visible before and after the disruption, we used the occlusion paradigm with biological-motion stimuli that were intact or scrambled. The results showed that during visually guided tracking, both the observers' predicted times and predictive smooth pursuit were more accurate for upright biological motion (intact and scrambled) than for inverted biological motion. During memory-guided tracking, however, the processing advantage for upright as compared with inverted biological motion was not found in the scrambled condition, but in the intact condition only. This suggests that spatial location information alone is not sufficient to build and maintain the representational continuity of the biological motion across the occlusion, and that the object identity may act as an important information source in visual tracking. The inversion effect disappeared when the scrambled biological motion was occluded, which indicates that when biological motion is temporarily occluded and there is a complete absence of visual feedback signals, an oculomotor prediction is executed to maintain the tracking continuity, which is established not only by updating the target's spatial location, but also by the retrieval of identity information stored in long-term memory.
Separating neural activity associated with emotion and implied motion: An fMRI study.
Kolesar, Tiffany A; Kornelsen, Jennifer; Smith, Stephen D
2017-02-01
Previous research provides evidence for an emo-motoric neural network allowing emotion to modulate activity in regions of the nervous system related to movement. However, recent research suggests that these results may be due to the movement depicted in the stimuli. The purpose of the current study was to differentiate the unique neural activity of emotion and implied motion using functional MRI. Thirteen healthy participants viewed 4 sets of images: (a) negative stimuli implying movement, (b) negative stimuli not implying movement, (c) neutral stimuli implying movement, and (d) neutral stimuli not implying movement. A main effect for implied motion was found, primarily in regions associated with multimodal integration (bilateral insula and cingulate), and visual areas that process motion (bilateral middle temporal gyrus). A main effect for emotion was found primarily in occipital and parietal regions, indicating that emotion enhances visual perception. Surprisingly, emotion also activated the left precentral gyrus, a motor region. These results demonstrate that emotion elicits activity above and beyond that evoked by the perception of implied movement, but that the neural representations of these characteristics overlap.
Krug, Kristine; Cicmil, Nela; Parker, Andrew J.; Cumming, Bruce G.
2013-01-01
Judgments about the perceptual appearance of visual objects require the combination of multiple parameters, like location, direction, color, speed, and depth. Our understanding of perceptual judgments has been greatly informed by studies of ambiguous figures, which take on different appearances depending upon the brain state of the observer. Here we probe the neural mechanisms hypothesized as responsible for judging the apparent direction of rotation of ambiguous structure from motion (SFM) stimuli. Resolving the rotation direction of SFM cylinders requires the conjoint decoding of direction of motion and binocular depth signals [1, 2]. Within cortical visual area V5/MT of two macaque monkeys, we applied electrical stimulation at sites with consistent multiunit tuning to combinations of binocular depth and direction of motion, while the monkey made perceptual decisions about the rotation of SFM stimuli. For both ambiguous and unambiguous SFM figures, rotation judgments shifted as if we had added a specific conjunction of disparity and motion signals to the stimulus elements. This is the first causal demonstration that the activity of neurons in V5/MT contributes directly to the perception of SFM stimuli and by implication to decoding the specific conjunction of disparity and motion, the two different visual cues whose combination drives the perceptual judgment. PMID:23871244
Black–white asymmetry in visual perception
Lu, Zhong-Lin; Sperling, George
2012-01-01
With eleven different types of stimuli that exercise a wide gamut of spatial and temporal visual processes, negative perturbations from mean luminance are found to be typically 25% more effective visually than positive perturbations of the same magnitude (range 8–67%). In Experiment 12, the magnitude of the black–white asymmetry is shown to be a saturating function of stimulus contrast. Experiment 13 shows black–white asymmetry primarily involves a nonlinearity in the visual representation of decrements. Black–white asymmetry in early visual processing produces even-harmonic distortion frequencies in all ordinary stimuli and in illusions such as the perceived asymmetry of optically perfect sine wave gratings. In stimuli intended to stimulate exclusively second-order processing in which motion or shape are defined not by luminance differences but by differences in texture contrast, the black–white asymmetry typically generates artifactual luminance (first-order) motion and shape components. Because black–white asymmetry pervades psychophysical and neurophysiological procedures that utilize spatial or temporal variations of luminance, it frequently needs to be considered in the design and evaluation of experiments that involve visual stimuli. Simple procedures to compensate for black–white asymmetry are proposed. PMID:22984221
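A compensation along the lines the abstract proposes could look like the following sketch. The ~25% factor is the typical asymmetry reported above; the function itself is a hypothetical illustration, not the authors' actual procedure:

```python
# Decrements are reported as roughly 25% more effective visually than
# equal-magnitude increments (typical value from the abstract).
ASYMMETRY = 1.25

def compensated_perturbation(delta_l):
    """Hypothetical compensation: scale a luminance decrement down so its
    visual effectiveness roughly matches an increment of the same nominal
    magnitude. Illustrative only."""
    return delta_l / ASYMMETRY if delta_l < 0 else delta_l

# A nominal -10 decrement is displayed as -8.0, to match a +10 increment.
```

In practice the abstract notes the asymmetry saturates with contrast, so a fixed scale factor like this would only be appropriate within a limited contrast range.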
Xiao, Jianbo; Niu, Yu-Qiong; Wiesner, Steven
2014-01-01
Multiple visual stimuli are common in natural scenes, yet it remains unclear how multiple stimuli interact to influence neuronal responses. We investigated this question by manipulating relative signal strengths of two stimuli moving simultaneously within the receptive fields (RFs) of neurons in the extrastriate middle temporal (MT) cortex. Visual stimuli were overlapping random-dot patterns moving in two directions separated by 90°. We first varied the motion coherence of each random-dot pattern and characterized, across the direction tuning curve, the relationship between neuronal responses elicited by bidirectional stimuli and by the constituent motion components. The tuning curve for bidirectional stimuli showed response normalization and can be accounted for by a weighted sum of the responses to the motion components. Allowing nonlinear, multiplicative interaction between the two component responses significantly improved the data fit for some neurons, and the interaction mainly had a suppressive effect on the neuronal response. The weighting of the component responses was not fixed but dependent on relative signal strengths. When two stimulus components moved at different coherence levels, the response weight for the higher-coherence component was significantly greater than that for the lower-coherence component. We also varied relative luminance levels of two coherently moving stimuli and found that MT response weight for the higher-luminance component was also greater. These results suggest that competition between multiple stimuli within a neuron's RF depends on relative signal strengths of the stimuli and that multiplicative nonlinearity may play an important role in shaping the response tuning for multiple stimuli. PMID:24899674
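The weighted-summation account described above can be sketched as follows. The Gaussian tuning curve, firing rates, and weight values are hypothetical placeholders; only the qualitative rule, that the higher-strength component receives the larger response weight, is taken from the study:

```python
import math

def tuning(direction, preferred=45.0, amp=30.0, sigma=40.0, baseline=2.0):
    """Hypothetical Gaussian direction tuning of one MT neuron (spikes/s)."""
    d = (direction - preferred + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    return baseline + amp * math.exp(-0.5 * (d / sigma) ** 2)

def bidirectional_response(dir1, dir2, w1, w2):
    """Weighted sum of the two component responses; per the study, the
    weights depend on the components' relative signal strengths."""
    return w1 * tuning(dir1) + w2 * tuning(dir2)

# Two components 90 deg apart, as in the experiments. Raising the weight of
# the component near the neuron's preferred direction (e.g., because it moves
# at higher coherence) pulls the bidirectional response toward that component.
r_equal = bidirectional_response(45.0, 135.0, w1=0.5, w2=0.5)
r_biased = bidirectional_response(45.0, 135.0, w1=0.7, w2=0.3)
```

Because the weights sum to one, the bidirectional response lies between the two component responses, capturing the normalization seen in the data; the study's multiplicative interaction term would be an additional (typically suppressive) product of the two component responses.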
Contextual effects on motion perception and smooth pursuit eye movements.
Spering, Miriam; Gegenfurtner, Karl R
2008-08-15
Smooth pursuit eye movements are continuous, slow rotations of the eyes that allow us to follow the motion of a visual object of interest. These movements are closely related to sensory inputs from the visual motion processing system. To track a moving object in the natural environment, its motion first has to be segregated from the motion signals provided by surrounding stimuli. Here, we review experiments on the effect of the visual context on motion processing with a focus on the relationship between motion perception and smooth pursuit eye movements. While perception and pursuit are closely linked, we show that they can behave quite distinctly when required by the visual context.
Woo, Kevin L; Rieucau, Guillaume; Burke, Darren
2017-02-01
Identifying perceptual thresholds is critical for understanding the mechanisms that underlie signal evolution. Using computer-animated stimuli, we examined visual speed sensitivity in the Jacky dragon Amphibolurus muricatus, a species that makes extensive use of rapid motor patterns in social communication. First, focal lizards were tested in discrimination trials using random-dot kinematograms displaying combinations of speed, coherence, and direction. Second, we measured subject lizards' ability to predict the appearance of a secondary reinforcer (1 of 3 different computer-generated animations of invertebrates: cricket, spider, and mite) based on the direction of movement of a field of drifting dots by following a set of behavioural responses (e.g., orienting response, latency to respond) to our virtual stimuli. We found an effect of both speed and coherence, as well as an interaction between these 2 factors, on the perception of moving stimuli. Overall, our results showed that Jacky dragons have acute sensitivity to high speeds. We then employed an optic flow analysis to match the performance to ecologically relevant motion. Our results suggest that the Jacky dragon visual system may have been shaped to detect fast motion. This pre-existing sensitivity may have constrained the evolution of conspecific displays. In contrast, Jacky dragons may have difficulty in detecting the movement of ambush predators, such as snakes, and of some invertebrate prey. Our study also demonstrates the potential of the computer-animated stimuli technique for conducting nonintrusive tests to explore motion range and sensitivity in a visually mediated species.
Visual Motion Processing Subserves Faster Visuomotor Reaction in Badminton Players.
Hülsdünker, Thorben; Strüder, Heiko K; Mierau, Andreas
2017-06-01
Athletes participating in ball or racquet sports have to respond to visual stimuli under critical time pressure. Previous studies used visual contrast stimuli to determine visual perception and visuomotor reaction in athletes and nonathletes; however, ball and racquet sports are characterized by motion rather than contrast visual cues. Because visual contrast and motion signals are processed in different cortical regions, this study aimed to determine differences in perception and processing of visual motion between athletes and nonathletes. Twenty-five skilled badminton players and 28 age-matched nonathletic controls participated in this study. Using a 64-channel EEG system, we investigated visual motion perception/processing in the motion-sensitive middle temporal (MT) cortical area in response to radial motion of different velocities. In a simple visuomotor reaction task, visuomotor transformation in Brodmann area 6 (BA6) and BA4 as well as muscular activation (EMG onset) and visuomotor reaction time (VMRT) were investigated. Stimulus- and response-locked potentials were determined to differentiate between perceptual and motor-related processes. As compared with nonathletes, athletes showed earlier EMG onset times (217 vs 178 ms, P < 0.001), accompanied by a faster VMRT (274 vs 243 ms, P < 0.001). Furthermore, athletes showed an earlier stimulus-locked peak activation of MT (200 vs 182 ms, P = 0.002) and BA6 (161 vs 137 ms, P = 0.009). Response-locked peak activation in MT was later in athletes (-7 vs 26 ms, P < 0.001), whereas no group differences were observed in BA6 and BA4. Multiple regression analyses with stimulus- and response-locked cortical potentials predicted EMG onset (r = 0.83) and VMRT (r = 0.77). The athletes' superior visuomotor performance in response to visual motion is primarily related to visual perception and, to a minor degree, to motor-related processes.
Schindler, Andreas; Bartels, Andreas
2018-05-15
Our phenomenological experience of the stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head-motion. Here we circumvented these limitations and let participants move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire neural signals related to head motion after the observer's head was stabilized by inflatable aircushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head-motion that was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head-rotation. By comparing congruent with incongruent conditions, we found evidence consistent with the multi-modal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human regions MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv), and a region in precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent versus incongruent visual and non-visual cues during physical head-movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation. Copyright © 2018 Elsevier Inc. All rights reserved.
Bias to experience approaching motion in a three-dimensional virtual environment.
Lewis, Clifford F; McBeath, Michael K
2004-01-01
We used two-frame apparent motion in a three-dimensional virtual environment to test whether observers had biases to experience approaching or receding motion in depth. Observers viewed a tunnel of tiles receding in depth that moved ambiguously either toward or away from them. We found that observers exhibited biases to experience approaching motion. The strength of the bias decreased when stimuli pointed away, but the size of the display screen had no effect. Tests with diamond-shaped tiles that varied in the degree of pointing asymmetry resulted in a linear trend in which the bias was strongest for stimuli pointing toward the viewer and weakest for stimuli pointing away. We show that the overall bias to experience approaching motion is consistent with a computational strategy of matching corresponding features between adjacent foreshortened stimuli in consecutive visual frames. We conclude that there are both adaptational and geometric reasons to favor the experience of approaching motion.
Compatibility of motion facilitates visuomotor synchronization.
Hove, Michael J; Spivey, Michael J; Krumhansl, Carol L
2010-12-01
Prior research indicates that synchronized tapping performance is very poor with flashing visual stimuli compared with auditory stimuli. Three finger-tapping experiments compared flashing visual metronomes with visual metronomes containing a spatial component, either compatible, incompatible, or orthogonal to the tapping action. In Experiment 1, synchronization success rates increased dramatically for spatiotemporal sequences of both geometric and biological forms over flashing sequences. In Experiment 2, synchronization performance was best when target sequences and movements were directionally compatible (i.e., simultaneously down), followed by orthogonal stimuli, and was poorest for incompatible moving stimuli and flashing stimuli. In Experiment 3, synchronization performance was best with auditory sequences, followed by compatible moving stimuli, and was worst for flashing and fading stimuli. Results indicate that visuomotor synchronization improves dramatically with compatible spatial information. However, an auditory advantage in sensorimotor synchronization persists.
Binocular eye movement control and motion perception: what is being tracked?
van der Steen, Johannes; Dits, Joyce
2012-10-19
We investigated under what conditions humans can make independent slow phase eye movements. The ability to make independent movements of the two eyes generally is attributed to a few specialized lateral-eyed animal species, for example chameleons. In our study, we showed that humans also can move the eyes in different directions. To maintain binocular retinal correspondence, independent slow phase movements of each eye are produced. We used the scleral search coil method to measure binocular eye movements in response to dichoptically viewed visual stimuli oscillating in orthogonal directions. Correlated stimuli led to orthogonal slow eye movements, while the binocularly perceived motion was the vector sum of the motion presented to each eye. The influence of binocular fusion on the independence of the movements of the two eyes was investigated with anti-correlated stimuli. The global motion pattern of anti-correlated dichoptic stimuli was perceived as an oblique oscillatory motion and resulted in a conjugate oblique motion of the eyes. We propose that the ability to make independent slow phase eye movements in humans is used to maintain binocular retinal correspondence. Eye-of-origin and binocular information are used during the processing of binocular visual information, and it is decided at an early stage whether binocular or monocular motion information is used and whether independent slow phase eye movements of each eye are produced during binocular tracking.
Posture-based processing in visual short-term memory for actions.
Vicary, Staci A; Stevens, Catherine J
2014-01-01
Visual perception of human action involves both form and motion processing, which may rely on partially dissociable neural networks. If form and motion are dissociable during visual perception, then they may also be dissociable during their retention in visual short-term memory (VSTM). To elicit form-plus-motion and form-only processing of dance-like actions, individual action frames can be presented in the correct or incorrect order. The former appears coherent and should elicit action perception, engaging both form and motion pathways, whereas the latter appears incoherent and should elicit posture perception, engaging form pathways alone. It was hypothesized that, if form and motion are dissociable in VSTM, then recognition of static body posture should be better after viewing incoherent than after viewing coherent actions. However, as VSTM is capacity limited, posture-based encoding of actions may be ineffective with increased number of items or frames. Using a behavioural change detection task, recognition of a single test posture was significantly more likely after studying incoherent than after studying coherent stimuli. However, this effect only occurred for spans of two (but not three) items and for stimuli with five (but not nine) frames. As in perception, posture and motion are dissociable in VSTM.
Dynamic Stimuli And Active Processing In Human Visual Perception
NASA Astrophysics Data System (ADS)
Haber, Ralph N.
1990-03-01
Theories of visual perception traditionally have considered a static retinal image to be the starting point for processing, and have considered processing to be both passive and a literal translation of that frozen, two-dimensional, pictorial image. This paper considers five problem areas in the analysis of human visually guided locomotion, in which the traditional approach is contrasted with newer ones that utilize dynamic definitions of stimulation and an active perceiver: (1) differentiation between object motion and self motion, and among the various kinds of self motion (e.g., eyes only, head only, whole body, and their combinations); (2) the sources and contents of visual information that guide movement; (3) the acquisition and performance of perceptual motor skills; (4) the nature of spatial representations, percepts, and the perceived layout of space; and (5) why the retinal image is a poor starting point for perceptual processing. These newer approaches argue that stimuli must be considered as dynamic: humans process the systematic changes in patterned light when objects move and when they themselves move. Furthermore, the processing of visual stimuli must be active and interactive, so that perceivers can construct panoramic and stable percepts from an interaction of stimulus information and expectancies of what is contained in the visual environment. These developments all suggest a very different approach to the computational analyses of object location and identification, and of the visual guidance of locomotion.
Sequential sensory and decision processing in posterior parietal cortex
Ibos, Guilhem; Freedman, David J
2017-01-01
Decisions about the behavioral significance of sensory stimuli often require comparing sensory inference of what we are looking at to internal models of what we are looking for. Here, we test how neuronal selectivity for visual features is transformed into decision-related signals in posterior parietal cortex (area LIP). Monkeys performed a visual matching task that required them to detect target stimuli composed of conjunctions of color and motion-direction. Neuronal recordings from area LIP revealed two main findings. First, the sequential processing of visual features and the selection of target-stimuli suggest that LIP is involved in transforming sensory information into decision-related signals. Second, the patterns of color and motion selectivity and their impact on decision-related encoding suggest that LIP plays a role in detecting target stimuli by comparing bottom-up sensory inputs (what the monkeys were looking at) and top-down cognitive encoding inputs (what the monkeys were looking for). DOI: http://dx.doi.org/10.7554/eLife.23743.001
Novel method of extracting motion from natural movies.
Suzuki, Wataru; Ichinohe, Noritaka; Tani, Toshiki; Hayami, Taku; Miyakawa, Naohisa; Watanabe, Satoshi; Takeichi, Hiroshige
2017-11-01
The visual system in primates can be segregated into motion and shape pathways. Interaction occurs at multiple stages along these pathways. Processing of shape-from-motion and biological motion is considered to be a higher-order integration process involving motion and shape information. However, relatively limited types of stimuli have been used in previous studies on these integration processes. We propose a new algorithm to extract object motion information from natural movies and to move random dots in accordance with the information. The object motion information is extracted by estimating the dynamics of local normal vectors of the image intensity projected onto the x-y plane of the movie. An electrophysiological experiment on two adult common marmoset monkeys (Callithrix jacchus) showed that the natural and random dot movies generated with this new algorithm yielded comparable neural responses in the middle temporal visual area. In principle, this algorithm provided random dot motion stimuli containing shape information for arbitrary natural movies. This new method is expected to expand the neurophysiological and psychophysical experimental protocols to elucidate the integration processing of motion and shape information in biological systems. The novel algorithm proposed here was effective in extracting object motion information from natural movies and provided new motion stimuli to investigate higher-order motion information processing. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
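The extraction step above lends itself to a compact sketch. The paper does not spell out its implementation beyond the description of local normal-vector dynamics, so the following is a hedged NumPy illustration of the closely related "normal flow" computation (the motion component along the local intensity gradient), plus the dot-displacement step; the function names and parameters are invented for the example, not taken from the paper.

```python
import numpy as np

def normal_flow(frame0, frame1, eps=1e-6):
    """Estimate normal flow between two frames:
    v_n = -I_t * grad(I) / |grad(I)|^2,
    the motion component along the local intensity gradient.
    Illustrative stand-in for the paper's normal-vector dynamics."""
    gy, gx = np.gradient(frame0.astype(float))   # spatial gradients (rows, cols)
    it = frame1.astype(float) - frame0           # temporal derivative
    mag2 = gx**2 + gy**2 + eps                   # gradient magnitude squared
    return -it * gx / mag2, -it * gy / mag2      # (vx, vy) fields

def move_dots(dots, vx, vy):
    """Displace random dots (N x 2 array of x, y) by the flow
    sampled at each dot's nearest pixel."""
    h, w = vx.shape
    xs = np.clip(dots[:, 0].astype(int), 0, w - 1)
    ys = np.clip(dots[:, 1].astype(int), 0, h - 1)
    return dots + np.stack([vx[ys, xs], vy[ys, xs]], axis=1)
```

As a sanity check, an intensity ramp shifted rightward by one pixel yields a uniform flow of roughly +1 in x, and dots placed on the image move with it; this is the kind of verification one would run before applying the idea to natural movies.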
Krug, Kristine; Cicmil, Nela; Parker, Andrew J; Cumming, Bruce G
2013-08-05
Judgments about the perceptual appearance of visual objects require the combination of multiple parameters, like location, direction, color, speed, and depth. Our understanding of perceptual judgments has been greatly informed by studies of ambiguous figures, which take on different appearances depending upon the brain state of the observer. Here we probe the neural mechanisms hypothesized as responsible for judging the apparent direction of rotation of ambiguous structure from motion (SFM) stimuli. Resolving the rotation direction of SFM cylinders requires the conjoint decoding of direction of motion and binocular depth signals [1, 2]. Within cortical visual area V5/MT of two macaque monkeys, we applied electrical stimulation at sites with consistent multiunit tuning to combinations of binocular depth and direction of motion, while the monkey made perceptual decisions about the rotation of SFM stimuli. For both ambiguous and unambiguous SFM figures, rotation judgments shifted as if we had added a specific conjunction of disparity and motion signals to the stimulus elements. This is the first causal demonstration that the activity of neurons in V5/MT contributes directly to the perception of SFM stimuli and by implication to decoding the specific conjunction of disparity and motion, the two different visual cues whose combination drives the perceptual judgment. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
What visual information is used for stereoscopic depth displacement discrimination?
Nefs, Harold T; Harris, Julie M
2010-01-01
There are two ways to detect a displacement in stereoscopic depth, namely by monitoring the change in disparity over time (CDOT) or by monitoring the interocular velocity difference (IOVD). Though previous studies have attempted to understand which cue is most significant for the visual system, none has designed stimuli that provide a comparison in terms of relative efficiency between them. Here we used two-frame motion and random-dot noise to deliver equivalent strengths of CDOT and IOVD information to the visual system. Using three kinds of random-dot stimuli, we were able to isolate CDOT or IOVD or deliver both simultaneously. The proportion of dots delivering CDOT or IOVD signals could be varied, and we defined the discrimination threshold as the proportion needed to detect the direction of displacement (towards or away). Thresholds were similar for stimuli containing CDOT only, and containing both CDOT and IOVD, but only one participant was able to consistently perceive the displacement for stimuli containing only IOVD. We also investigated the effect of disparity pedestals on discrimination. Performance was best when the displacement crossed the reference plane, but was not significantly different for stimuli containing CDOT only and those containing both CDOT and IOVD. When stimuli are specifically designed to provide equivalent two-frame motion or disparity-change, few participants can reliably detect displacement when IOVD is the only cue. This challenges the notion that IOVD is involved in the discrimination of direction of displacement in two-frame motion displays.
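For clarity, the two cues can be written side by side (a standard formulation in this literature, not notation from the paper itself). With $x_L$ and $x_R$ the retinal positions of a matched feature in the left and right eyes, and $\delta = x_L - x_R$ the binocular disparity:

```latex
\text{CDOT:}\quad \frac{d\delta}{dt} \;=\; \frac{d}{dt}\,(x_L - x_R),
\qquad
\text{IOVD:}\quad v_L - v_R \;=\; \frac{dx_L}{dt} - \frac{dx_R}{dt}.
```

For a single feature tracked in both eyes the two expressions are algebraically identical; they differ in the order of operations (binocular matching before versus after temporal differentiation), which is why random-dot stimuli that selectively disrupt the binocular or the temporal correlation of the dots can isolate one cue from the other.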
1st- and 2nd-order motion and texture resolution in central and peripheral vision
NASA Technical Reports Server (NTRS)
Solomon, J. A.; Sperling, G.
1995-01-01
STIMULI. The 1st-order stimuli are moving sine gratings. The 2nd-order stimuli are fields of static visual texture, whose contrasts are modulated by moving sine gratings. Neither the spatial slant (orientation) nor the direction of motion of these 2nd-order (microbalanced) stimuli can be detected by a Fourier analysis; they are invisible to Reichardt and motion-energy detectors. METHOD. For these dynamic stimuli, when presented both centrally and in an annular window extending from 8 to 10 deg in eccentricity, we measured the highest spatial frequency for which discrimination between +/- 45 deg texture slants and discrimination between opposite directions of motion were each possible. RESULTS. For sufficiently low spatial frequencies, slant and direction can be discriminated in both central and peripheral vision, for both 1st- and for 2nd-order stimuli. For both 1st- and 2nd-order stimuli, at both retinal locations, slant discrimination is possible at higher spatial frequencies than direction discrimination. For both 1st- and 2nd-order stimuli, motion resolution decreases 2-3 times more rapidly with eccentricity than does texture resolution. CONCLUSIONS. (1) 1st- and 2nd-order motion scale similarly with eccentricity. (2) 1st- and 2nd-order texture scale similarly with eccentricity. (3) The central/peripheral resolution fall-off is 2-3 times greater for motion than for texture.
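The 2nd-order construction described under STIMULI can be sketched directly: a fixed noise carrier whose contrast is multiplied by a drifting sinusoidal envelope. This is only an illustration of the contrast-modulation idea (the study's actual stimuli were additionally microbalanced and carefully calibrated); all names and parameter values here are invented for the example.

```python
import numpy as np

def second_order_frame(noise, freq, phase, depth=1.0):
    """One frame of a 2nd-order stimulus: a static texture in [-1, 1]
    whose contrast is modulated by a sine grating. Drift is produced
    by advancing `phase` from frame to frame."""
    h, w = noise.shape
    x = np.arange(w) / w
    envelope = 1.0 + depth * np.sin(2 * np.pi * freq * x + phase)  # in [0, 2]
    return noise * envelope[None, :] * 0.5  # contrast-modulated texture

rng = np.random.default_rng(0)
texture = rng.choice([-1.0, 1.0], size=(64, 64))  # fixed binary carrier
frames = [second_order_frame(texture, freq=4, phase=p)
          for p in np.linspace(0, np.pi, 8)]
```

Because the carrier is static and zero-mean in expectation, only the contrast envelope drifts across frames; a Fourier-based motion-energy detector applied to the luminance signal sees no consistent direction, which is the defining property of these 2nd-order stimuli.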
Visual motion integration for perception and pursuit
NASA Technical Reports Server (NTRS)
Stone, L. S.; Beutter, B. R.; Lorenceau, J.
2000-01-01
To examine the relationship between visual motion processing for perception and pursuit, we measured the pursuit eye-movement and perceptual responses to the same complex-motion stimuli. We show that humans can both perceive and pursue the motion of line-figure objects, even when partial occlusion makes the resulting image motion vastly different from the underlying object motion. Our results show that both perception and pursuit can perform largely accurate motion integration, i.e. the selective combination of local motion signals across the visual field to derive global object motion. Furthermore, because we manipulated perceived motion while keeping image motion identical, the observed parallel changes in perception and pursuit show that the motion signals driving steady-state pursuit and perception are linked. These findings disprove current pursuit models whose control strategy is to minimize retinal image motion, and suggest a new framework for the interplay between visual cortex and cerebellum in visuomotor control.
Rieucau, Guillaume; Burke, Darren
2017-01-01
Identifying perceptual thresholds is critical for understanding the mechanisms that underlie signal evolution. Using computer-animated stimuli, we examined visual speed sensitivity in the Jacky dragon Amphibolurus muricatus, a species that makes extensive use of rapid motor patterns in social communication. First, focal lizards were tested in discrimination trials using random-dot kinematograms displaying combinations of speed, coherence, and direction. Second, we measured subject lizards’ ability to predict the appearance of a secondary reinforcer (1 of 3 different computer-generated animations of invertebrates: cricket, spider, and mite) based on the direction of movement of a field of drifting dots by following a set of behavioural responses (e.g., orienting response, latency to respond) to our virtual stimuli. We found an effect of both speed and coherence, as well as an interaction between these 2 factors on the perception of moving stimuli. Overall, our results showed that Jacky dragons have acute sensitivity to high speeds. We then employed an optic flow analysis to match the performance to ecologically relevant motion. Our results suggest that the Jacky dragon visual system may have been shaped to detect fast motion. This pre-existing sensitivity may have constrained the evolution of conspecific displays. In contrast, Jacky dragons may have difficulty in detecting the movement of ambush predators, such as snakes and of some invertebrate prey. Our study also demonstrates the potential of the computer-animated stimuli technique for conducting nonintrusive tests to explore motion range and sensitivity in a visually mediated species.
A Pursuit Theory Account for the Perception of Common Motion in Motion Parallax.
Ratzlaff, Michael; Nawrot, Mark
2016-09-01
The visual system uses an extraretinal pursuit eye movement signal to disambiguate the perception of depth from motion parallax. Visual motion in the same direction as the pursuit is perceived nearer in depth, while visual motion in the direction opposite to pursuit is perceived farther in depth. This explanation of depth sign applies to either an allocentric frame of reference centered on the fixation point or an egocentric frame of reference centered on the observer. A related problem is that of depth order when two stimuli have a common direction of motion. The first psychophysical study determined whether perception of egocentric depth order is adequately explained by a model employing an allocentric framework, especially when the motion parallax stimuli have common rather than divergent motion. A second study determined whether a reversal in perceived depth order, produced by a reduction in pursuit velocity, is also explained by this model employing this allocentric framework. The results show that an allocentric model can explain both the egocentric perception of depth order with common motion and the perceptual depth order reversal created by a reduction in pursuit velocity. We conclude that an egocentric model is not the only explanation for perceived depth order in these common motion conditions. © The Author(s) 2016.
3D graphics, virtual reality, and motion-onset visual evoked potentials in neurogaming.
Beveridge, R; Wilson, S; Coyle, D
2016-01-01
A brain-computer interface (BCI) offers movement-free control of a computer application and is achieved by reading and translating the cortical activity of the brain into semantic control signals. Motion-onset visual evoked potentials (mVEP) are neural potentials employed in BCIs and occur when motion-related stimuli are attended visually. mVEP dynamics are correlated with the position and timing of the moving stimuli. To investigate the feasibility of utilizing the mVEP paradigm with video games of various graphical complexities, including those of commercial quality, we conducted three studies over four separate sessions comparing the performance of classifying five mVEP responses with variations in graphical complexity and style, in-game distractions, and display parameters surrounding mVEP stimuli. To investigate the feasibility of utilizing contemporary presentation modalities in neurogaming, one of the studies compared mVEP classification performance when stimuli were presented using the Oculus Rift virtual reality headset. Results from 31 independent subjects were analyzed offline. The results show classification performances ranging up to 90%, with variations in graphical complexity having limited effect on mVEP performance, thus demonstrating the feasibility of using the mVEP paradigm within BCI-based neurogaming. © 2016 Elsevier B.V. All rights reserved.
Moving in a Moving World: A Review on Vestibular Motion Sickness
Bertolini, Giovanni; Straumann, Dominik
2016-01-01
Motion sickness is a common disturbance occurring in healthy people as a physiological response to exposure to motion stimuli that are unexpected on the basis of previous experience. The motion can be either real, and therefore perceived by the vestibular system, or illusory, as in the case of visual illusion. A multitude of studies has been performed in the last decades, identifying different nauseogenic stimuli, studying their specific characteristics, proposing unifying theories, and testing possible countermeasures. Several reviews focused on one of these aspects; however, the link between specific nauseogenic stimuli and the unifying theories and models is often not clearly detailed. Readers unfamiliar with the topic, but studying a condition that may involve motion sickness, may therefore find it difficult to understand why a specific stimulus will induce motion sickness. As a result, this general audience struggles to take advantage of the solid basis provided by existing theories and models. This review focuses on vestibular-only motion sickness, listing the relevant motion stimuli, clarifying the sensory signals involved, and framing them in the context of the current theories.
Critical role of foreground stimuli in perceiving visually induced self-motion (vection).
Nakamura, S; Shimojo, S
1999-01-01
The effects of a foreground stimulus on vection (illusory perception of self-motion induced by a moving background stimulus) were examined in two experiments. The experiments reveal that the presentation of a foreground pattern with a moving background stimulus may affect vection. The foreground stimulus facilitated vection strength when it remained stationary or moved slowly in the opposite direction to that of the background stimulus. On the other hand, there was a strong inhibition of vection when the foreground stimulus moved slowly with, or quickly against, the background. These results suggest that foreground stimuli, as well as background stimuli, play an important role in perceiving self-motion.
Modality-dependent effect of motion information in sensory-motor synchronised tapping.
Ono, Kentaro
2018-05-14
Synchronised action is important for everyday life. Generally, the auditory domain is more sensitive for coding temporal information, and previous studies have shown that auditory-motor synchronisation is much more precise than visuo-motor synchronisation. Interestingly, adding motion information improves synchronisation with visual stimuli, and the advantage of the auditory modality seems to diminish. However, whether adding motion information also improves auditory-motor synchronisation remains unknown. This study compared tapping accuracy with a stationary or moving stimulus in both auditory and visual modalities. Participants were instructed to tap in synchrony with the onset of a sound or flash in the stationary condition, while these stimuli were perceived as moving from side to side in the motion condition. The results demonstrated that synchronised tapping with a moving visual stimulus was significantly more accurate than tapping with a stationary visual stimulus, as previous studies have shown. However, tapping with a moving auditory stimulus was significantly poorer than tapping with a stationary auditory stimulus. Although motion information impaired audio-motor synchronisation, an advantage of the auditory modality over the visual modality remained. These findings likely reflect the higher temporal resolution of the auditory domain, which may stem from physiological and structural differences between the auditory and visual pathways in the brain. Copyright © 2018 Elsevier B.V. All rights reserved.
Xiao, Jianbo
2015-01-01
Segmenting visual scenes into distinct objects and surfaces is a fundamental visual function. To better understand the underlying neural mechanism, we investigated how neurons in the middle temporal cortex (MT) of macaque monkeys represent overlapping random-dot stimuli moving transparently in slightly different directions. It has been shown that the neuronal response elicited by two stimuli approximately follows the average of the responses elicited by the constituent stimulus components presented alone. In this scheme of response pooling, the ability to segment two simultaneously presented motion directions is limited by the width of the tuning curve to motion in a single direction. We found that, although the population-averaged neuronal tuning showed response averaging, subgroups of neurons showed distinct patterns of response tuning and were capable of representing component directions that were separated by a small angle—less than the tuning width to unidirectional stimuli. One group of neurons preferentially represented the component direction at a specific side of the bidirectional stimuli, weighting one stimulus component more strongly than the other. Another group of neurons pooled the component responses nonlinearly and showed two separate peaks in their tuning curves even when the average of the component responses was unimodal. We also show for the first time that the direction tuning of MT neurons evolved from initially representing the vector-averaged direction of slightly different stimuli to gradually representing the component directions. Our results reveal important neural processes underlying image segmentation and suggest that information about slightly different stimulus components is computed dynamically and distributed across neurons. SIGNIFICANCE STATEMENT Natural scenes often contain multiple entities. 
The ability to segment visual scenes into distinct objects and surfaces is fundamental to sensory processing and is crucial for generating the perception of our environment. Because cortical neurons are broadly tuned to a given visual feature, segmenting two stimuli that differ only slightly is a challenge for the visual system. In this study, we discovered that many neurons in the visual cortex are capable of representing individual components of slightly different stimuli by selectively and nonlinearly pooling the responses elicited by the stimulus components. We also show for the first time that the neural representation of individual stimulus components developed over a period of ∼70–100 ms, revealing a dynamic process of image segmentation.
The sensory components of high-capacity iconic memory and visual working memory.
Bradley, Claire; Pearson, Joel
2012-01-01
Early visual memory can be split into two primary components: a high-capacity, short-lived iconic memory followed by a limited-capacity visual working memory that can last many seconds. Whereas a large number of studies have investigated visual working memory for low-level sensory features, much research on iconic memory has used more "high-level" alphanumeric stimuli such as letters or numbers. These two forms of memory are typically examined separately, despite an intrinsic overlap in their characteristics. Here, we used a purely sensory paradigm to examine visual short-term memory for 10 homogeneous items of three different visual features (color, orientation and motion) across a range of durations from 0 to 6 s. We found that the amount of information stored in iconic memory is smaller for motion than for color or orientation. Performance declined exponentially with longer storage durations and reached chance levels after ∼2 s. Further experiments showed that performance for the 10 items at 1 s was contingent on unperturbed attentional resources. In addition, for orientation stimuli, performance was contingent on the location of stimuli in the visual field, especially for short cue delays. Overall, our results suggest a smooth transition between an automatic, high-capacity, feature-specific sensory-iconic memory, and an effortful "lower-capacity" visual working memory.
Harrison, Neil R; Witheridge, Sian; Makin, Alexis; Wuerger, Sophie M; Pegna, Alan J; Meyer, Georg F
2015-11-01
Motion is represented by low-level signals, such as size-expansion in vision or loudness changes in the auditory modality. The visual and auditory signals from the same object or event may be integrated and facilitate detection. We explored behavioural and electrophysiological correlates of congruent and incongruent audio-visual depth motion in conditions where auditory level changes, visual expansion, and visual disparity cues were manipulated. In Experiment 1 participants discriminated auditory motion direction whilst viewing looming or receding, 2D or 3D, visual stimuli. Responses were faster and more accurate for congruent than for incongruent audio-visual cues, and the congruency effect (i.e., difference between incongruent and congruent conditions) was larger for visual 3D cues compared to 2D cues. In Experiment 2, event-related potentials (ERPs) were collected during presentation of the 2D and 3D, looming and receding, audio-visual stimuli, while participants detected an infrequent deviant sound. Our main finding was that audio-visual congruity was affected by retinal disparity at an early processing stage (135-160ms) over occipito-parietal scalp. Topographic analyses suggested that similar brain networks were activated for the 2D and 3D congruity effects, but that cortical responses were stronger in the 3D condition. Differences between congruent and incongruent conditions were observed between 140-200ms, 220-280ms, and 350-500ms after stimulus onset. Copyright © 2015 Elsevier Ltd. All rights reserved.
Modeling depth from motion parallax with the motion/pursuit ratio
Nawrot, Mark; Ratzlaff, Michael; Leonard, Zachary; Stroyan, Keith
2014-01-01
The perception of unambiguous scaled depth from motion parallax relies on both retinal image motion and an extra-retinal pursuit eye movement signal. The motion/pursuit ratio represents a dynamic geometric model linking these two proximal cues to the ratio of depth to viewing distance. An important step in understanding the visual mechanisms serving the perception of depth from motion parallax is to determine the relationship between these stimulus parameters and empirically determined perceived depth magnitude. Observers compared perceived depth magnitude of dynamic motion parallax stimuli to static binocular disparity comparison stimuli at three different viewing distances, in both head-moving and head-stationary conditions. A stereo-viewing system provided ocular separation for stereo stimuli and monocular viewing of parallax stimuli. For each motion parallax stimulus, a point of subjective equality (PSE) was estimated for the amount of binocular disparity that generates the equivalent magnitude of perceived depth from motion parallax. Similar to previous results, perceived depth from motion parallax had significant foreshortening. Head-moving conditions produced even greater foreshortening due to the differences in the compensatory eye movement signal. An empirical version of the motion/pursuit law, termed the empirical motion/pursuit ratio, which models perceived depth magnitude from these stimulus parameters, is proposed. PMID:25339926
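The geometric motion/pursuit model described above can be sketched in code. This assumes the small-angle form of the motion/pursuit law, depth/viewing-distance ≈ M/P, where M is the retinal image motion rate and P is the pursuit eye-movement rate; the function name and units are ours, and this is the idealized geometric model, not the empirical version the abstract proposes:

```python
def depth_from_motion_pursuit(motion_rate_deg_s, pursuit_rate_deg_s,
                              viewing_distance_m):
    """Estimate relative depth from the small-angle motion/pursuit ratio.

    Implements d ~= f * (M / P): the ratio of retinal image motion
    rate (M) to pursuit rate (P), scaled by viewing distance (f),
    gives the depth of a point relative to the fixated plane.
    """
    if pursuit_rate_deg_s == 0:
        raise ValueError("pursuit rate must be nonzero")
    return viewing_distance_m * (motion_rate_deg_s / pursuit_rate_deg_s)

# A point whose image moves at 1 deg/s during a 4 deg/s pursuit,
# viewed from 2 m, is estimated to lie 0.5 m from the fixated plane.
print(depth_from_motion_pursuit(1.0, 4.0, 2.0))  # → 0.5
```

The foreshortening reported in the abstract corresponds to observers perceiving less depth than this geometric ratio predicts, which is what motivates the empirical version of the law.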
NASA Technical Reports Server (NTRS)
Bigler, W. B., II
1977-01-01
The NASA passenger ride quality apparatus (PRQA), a ground-based motion simulator, was compared to the Total In-Flight Simulator (TIFS). Tests were made on PRQA with varying stimuli: motions only; motions and noise; motions, noise, and visual; and motions and visual. Regression equations for the tests were obtained, and subsequent t-testing of the slopes indicated that ground-based simulator tests produced comfort change rates similar to actual flight data. It was recommended that PRQA be used in the ride quality program for aircraft and that it be validated for other transportation modes.
A neural basis for the spatial suppression of visual motion perception
Liu, Liu D; Haefner, Ralf M; Pack, Christopher C
2016-01-01
In theory, sensory perception should be more accurate when more neurons contribute to the representation of a stimulus. However, psychophysical experiments that use larger stimuli to activate larger pools of neurons sometimes report impoverished perceptual performance. To determine the neural mechanisms underlying these paradoxical findings, we trained monkeys to discriminate the direction of motion of visual stimuli that varied in size across trials, while simultaneously recording from populations of motion-sensitive neurons in cortical area MT. We used the resulting data to constrain a computational model that explained the behavioral data as an interaction of three main mechanisms: noise correlations, which prevented stimulus information from growing with stimulus size; neural surround suppression, which decreased sensitivity for large stimuli; and a read-out strategy that emphasized neurons with receptive fields near the stimulus center. These results suggest that paradoxical percepts reflect tradeoffs between sensitivity and noise in neuronal populations. DOI: http://dx.doi.org/10.7554/eLife.16167.001 PMID:27228283
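The role of noise correlations described above can be illustrated with a standard pooling calculation. This is a textbook sketch of why correlated noise caps the information available from a neural population, not the specific model fitted in the study; all names and numbers are ours:

```python
import numpy as np

def pooled_snr(n_neurons, signal=1.0, noise_var=1.0, noise_corr=0.0):
    """Signal-to-noise ratio of the average of n identically tuned
    neurons with uniform pairwise noise correlation.

    Var(mean) = noise_var * (1 + (n-1)*noise_corr) / n, so with
    noise_corr > 0 the SNR saturates at 1/sqrt(noise_var*noise_corr)
    instead of growing as sqrt(n) — pooling more neurons stops
    helping once the shared noise dominates.
    """
    var_mean = noise_var * (1 + (n_neurons - 1) * noise_corr) / n_neurons
    return signal / np.sqrt(var_mean)

for n in (1, 10, 100, 10000):
    print(n, round(pooled_snr(n, noise_corr=0.1), 3),
             round(pooled_snr(n, noise_corr=0.0), 3))
```

With a pairwise correlation of 0.1, the pooled SNR levels off near 1/√0.1 ≈ 3.16 however many neurons are added, whereas with independent noise it keeps growing as √n. This is the sense in which noise correlations "prevented stimulus information from growing with stimulus size."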
Decoding complex flow-field patterns in visual working memory.
Christophel, Thomas B; Haynes, John-Dylan
2014-05-01
There has been a long history of research on visual working memory. Whereas early studies have focused on the role of lateral prefrontal cortex in the storage of sensory information, this has been challenged by research in humans that has directly assessed the encoding of perceptual contents, pointing towards a role of visual and parietal regions during storage. In a previous study we used pattern classification to investigate the storage of complex visual color patterns across delay periods. This revealed coding of such contents in early visual and parietal brain regions. Here we aim to investigate whether the involvement of visual and parietal cortex is also observable for other types of complex, visuo-spatial pattern stimuli. Specifically, we used a combination of fMRI and multivariate classification to investigate the retention of complex flow-field stimuli defined by the spatial patterning of motion trajectories of random dots. Subjects were trained to memorize the precise spatial layout of these stimuli and to retain this information during an extended delay. We used a multivariate decoding approach to identify brain regions where spatial patterns of activity encoded the memorized stimuli. Content-specific memory signals were observable in motion sensitive visual area MT+ and in posterior parietal cortex that might encode spatial information in a modality independent manner. Interestingly, we also found information about the memorized visual stimulus in somatosensory cortex, suggesting a potential crossmodal contribution to memory. Our findings thus indicate that working memory storage of visual percepts might be distributed across unimodal, multimodal and even crossmodal brain regions. Copyright © 2014 Elsevier Inc. All rights reserved.
Tilt and Translation Motion Perception during Pitch Tilt with Visual Surround Translation
NASA Technical Reports Server (NTRS)
O'Sullivan, Brita M.; Harm, Deborah L.; Reschke, Millard F.; Wood, Scott J.
2006-01-01
The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive an accurate representation of spatial orientation. Previous studies suggest that multisensory integration is critical for discriminating linear accelerations arising from tilt and translation head motion. Visual input is especially important at low frequencies where canal input is declining. The NASA Tilt Translation Device (TTD) was designed to recreate postflight orientation disturbances by exposing subjects to matching tilt self-motion with conflicting visual surround translation. Previous studies have demonstrated that brief exposures to pitch tilt with fore-aft visual surround translation produced changes in compensatory vertical eye movement responses, postural equilibrium, and motion sickness symptoms. Adaptation appeared greatest with visual scene motion leading (versus lagging) the tilt motion, and the adaptation time constant appeared to be approximately 30 min. The purpose of this study was to compare motion perception when the visual surround translation was in-phase versus out-of-phase with pitch tilt. The in-phase stimulus presented visual surround motion one would experience if the linear acceleration was due to fore-aft self-translation within a stationary surround, while the out-of-phase stimulus had the visual scene motion leading the tilt by 90 deg as previously used. The tilt stimuli in these conditions were asymmetrical, ranging from an upright orientation to 10 deg pitch back. Another objective of the study was to compare motion perception with the in-phase stimulus when the tilts were asymmetrical relative to upright (0 to 10 deg back) versus symmetrical (10 deg forward to 10 deg back). Twelve subjects (6M, 6F, 22-55 yrs) were tested during 3 sessions separated by at least one week.
During each of the three sessions (out-of-phase asymmetrical, in-phase asymmetrical, in-phase symmetrical), subjects were exposed to visual surround translation synchronized with pitch tilt at 0.1 Hz for a total of 30 min. Tilt and translation motion perception was obtained from verbal reports and a joystick mounted on a linear stage. Horizontal vergence and vertical eye movements were obtained with a binocular video system. Responses were also obtained during darkness before and following 15 min and 30 min of visual surround translation. Each of the three stimulus conditions involving visual surround translation elicited a significantly reduced sense of perceived tilt and strong linear vection (perceived translation) compared to pre-exposure tilt stimuli in darkness. This increase in perceived translation with reduction in tilt perception was also present in darkness following 15 and 30 min exposures, provided the tilt stimuli were not interrupted. Although not significant, there was a trend for the in-phase asymmetrical stimulus to elicit a stronger sense of both translation and tilt than the out-of-phase asymmetrical stimulus. Surprisingly, the in-phase asymmetrical stimulus also tended to elicit a stronger sense of peak-to-peak translation than the in-phase symmetrical stimulus, even though the range of linear acceleration during the symmetrical stimulus was twice that of the asymmetrical stimulus. These results are consistent with the hypothesis that the central nervous system resolves the ambiguity of inertial motion sensory cues by integrating inputs from visual, vestibular, and somatosensory systems.
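The tilt/translation ambiguity at the heart of this study can be made concrete with a one-line calculation. The otolith organs sense gravito-inertial acceleration, so a static pitch tilt of θ produces the same fore-aft shear component as an upright linear acceleration of g·sin θ; the function below is an illustrative sketch of that equivalence, not anything computed in the study:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def equivalent_translation(tilt_deg):
    """Fore-aft linear acceleration that is indistinguishable, to the
    otoliths alone, from a static pitch tilt of `tilt_deg` degrees."""
    return G * math.sin(math.radians(tilt_deg))

# The 10-deg pitch-back tilt used in the protocol is equivalent to
# roughly 1.7 m/s^2 of fore-aft acceleration.
print(f"{equivalent_translation(10):.2f} m/s^2")
```

Because the otoliths cannot tell these two situations apart, visual and canal cues (and their relative phase, manipulated here) are what allow the nervous system to attribute the shear to tilt, to translation, or to some mixture of the two.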
Zebrafish response to a robotic replica in three dimensions
Ruberto, Tommaso; Mwaffo, Violet; Singh, Sukhgewanpreet; Neri, Daniele
2016-01-01
As zebrafish emerge as a species of choice for the investigation of biological processes, a number of experimental protocols are being developed to study their social behaviour. While live stimuli may elicit varying response in focal subjects owing to idiosyncrasies, tiredness and circadian rhythms, video stimuli suffer from the absence of physical input and rely only on two-dimensional projections. Robotics has been recently proposed as an alternative approach to generate physical, customizable, effective and consistent stimuli for behavioural phenotyping. Here, we contribute to this field of investigation through a novel four-degree-of-freedom robotics-based platform to manoeuvre a biologically inspired three-dimensionally printed replica. The platform enables three-dimensional motions as well as body oscillations to mimic zebrafish locomotion. In a series of experiments, we demonstrate the differential role of the visual stimuli associated with the biologically inspired replica and its three-dimensional motion. Three-dimensional tracking and information-theoretic tools are complemented to quantify the interaction between zebrafish and the robotic stimulus. Live subjects displayed a robust attraction towards the moving replica, and such attraction was lost when controlling for its visual appearance or motion. This effort is expected to aid zebrafish behavioural phenotyping, by offering a novel approach to generate physical stimuli moving in three dimensions. PMID:27853566
Postural Instability Induced by Visual Motion Stimuli in Patients With Vestibular Migraine
Lim, Yong-Hyun; Kim, Ji-Soo; Lee, Ho-Won; Kim, Sung-Hee
2018-01-01
Patients with vestibular migraine are susceptible to motion sickness. This study aimed to determine whether the severity of posture instability is related to the susceptibility to motion sickness. We used a visual motion paradigm with two conditions of the stimulated retinal field and the head posture to quantify postural stability while maintaining a static stance in 18 patients with vestibular migraine (VM) and in 13 age-matched healthy subjects. Three parameters of postural stability showed differences between VM patients and controls: RMS velocity (0.34 ± 0.02 cm/s vs. 0.28 ± 0.02 cm/s), RMS acceleration (8.94 ± 0.74 cm/s² vs. 6.69 ± 0.87 cm/s²), and sway area (1.77 ± 0.22 cm² vs. 1.04 ± 0.25 cm²). Patients with vestibular migraine showed marked postural instability of the head and neck when visual stimuli were presented in the retinal periphery. The pseudo-Coriolis effect induced by head roll tilt was not responsible for the main differences in postural instability between patients and controls. Patients with vestibular migraine showed a higher visual dependency and low stability of the postural control system when maintaining quiet standing, which may be related to susceptibility to motion sickness. PMID:29930534
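The three sway parameters reported above can be computed from a center-of-pressure (COP) trajectory as follows. These are common posturography estimators (finite-difference derivatives and a 95% prediction-ellipse area); the abstract does not specify the exact estimators used, so treat this as an illustrative sketch:

```python
import numpy as np

def sway_metrics(cop_xy, fs):
    """RMS velocity, RMS acceleration, and 95% prediction-ellipse sway
    area from a center-of-pressure trajectory.

    cop_xy : (N, 2) array of medio-lateral / antero-posterior COP (cm)
    fs     : sampling rate (Hz)
    """
    dt = 1.0 / fs
    vel = np.gradient(cop_xy, dt, axis=0)            # cm/s
    acc = np.gradient(vel, dt, axis=0)               # cm/s^2
    rms_vel = np.sqrt(np.mean(np.sum(vel ** 2, axis=1)))
    rms_acc = np.sqrt(np.mean(np.sum(acc ** 2, axis=1)))
    # Ellipse enclosing ~95% of samples, from the COP covariance;
    # 5.991 is the chi-square critical value for 2 df at p = .95.
    cov = np.cov(cop_xy, rowvar=False)
    area = np.pi * 5.991 * np.sqrt(np.prod(np.linalg.eigvalsh(cov)))
    return rms_vel, rms_acc, area

# Synthetic circular sway: 0.5 cm radius at 0.2 Hz, sampled at 100 Hz.
t = np.arange(0, 30, 0.01)
cop = 0.5 * np.column_stack([np.cos(2 * np.pi * 0.2 * t),
                             np.sin(2 * np.pi * 0.2 * t)])
v, a, s = sway_metrics(cop, fs=100)
print(f"RMS vel {v:.2f} cm/s, RMS acc {a:.2f} cm/s^2, area {s:.2f} cm^2")
```

For the synthetic circle the analytic values are rω ≈ 0.63 cm/s, rω² ≈ 0.79 cm/s², and π·5.991·r²/2 ≈ 2.35 cm², which the finite-difference estimates reproduce closely.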
Seno, Takeharu; Fukuda, Haruaki
2012-01-01
Over the last 100 years, numerous studies have examined the effective visual stimulus properties for inducing illusory self-motion (known as vection). This vection is often experienced more strongly in daily life than under controlled experimental conditions. One well-known example of vection in real life is the so-called 'train illusion'. In the present study, we showed that this train illusion can also be generated in the laboratory using virtual computer graphics-based motion stimuli. We also demonstrated that this vection can be modified by altering the meaning of the visual stimuli (i.e., top down effects). Importantly, we show that the semantic meaning of a stimulus can inhibit or facilitate vection, even when there is no physical change to the stimulus.
Nakamura, S; Shimojo, S
1998-10-01
The effects of the size and eccentricity of the visual stimulus upon visually induced perception of self-motion (vection) were examined with various sizes of central and peripheral visual stimulation. Analysis indicated the strength of vection increased linearly with the size of the area in which the moving pattern was presented, but there was no difference in vection strength between central and peripheral stimuli when stimulus sizes were the same. Thus, the effect of stimulus size is homogeneous across eccentricities in the visual field.
Visual and vestibular components of motion sickness.
Eyeson-Annan, M; Peterken, C; Brown, B; Atchison, D
1996-10-01
The relative importance of visual and vestibular information in the etiology of motion sickness (MS) is not well understood, but these factors can be manipulated by inducing Coriolis and pseudo-Coriolis effects in experimental subjects. We hypothesized that visual and vestibular information are equivalent in producing MS. The experiments reported here aim, in part, to examine the relative influence of Coriolis and pseudo-Coriolis effects in inducing MS. We induced MS symptoms by combinations of whole body rotation and tilt, and environment rotation and tilt, in 22 volunteer subjects. Subjects participated in all of the experiments with at least 2 d between each experiment to dissipate after-effects. We recorded MS signs and symptoms when only visual stimulation was applied, when only vestibular stimulation was applied, and when both visual and vestibular stimulation were applied under specific conditions of whole body and environmental tilt. Visual stimuli produced more symptoms of MS than vestibular stimuli when only visual or vestibular stimuli were used (ANOVA: F = 7.94, df = 1,21, p = 0.01), but there was no significant difference in MS production when combined visual and vestibular stimulation were used to produce the Coriolis effect or pseudo-Coriolis effect (ANOVA: F = 0.40, df = 1,21, p = 0.53). This was further confirmed by examination of the order in which the symptoms occurred and the lack of a correlation between previous experience and visually induced MS. Visual information is more important than vestibular input in causing MS when these stimuli are presented in isolation. In conditions where both visual and vestibular information are present, cross-coupling appears to occur between the pseudo-Coriolis effect and the Coriolis effect, as these two conditions are not significantly different in producing MS symptoms.
Pavan, Andrea; Boyce, Matthew; Ghin, Filippo
2016-10-01
Playing action video games enhances visual motion perception. However, there is psychophysical evidence that action video games do not improve motion sensitivity for translational global moving patterns presented in fovea. This study investigates global motion perception in action video game players and compares their performance to that of non-action video game players and non-video game players. Stimuli were random dot kinematograms presented in the parafovea. Observers discriminated the motion direction of a target random dot kinematogram presented in one of the four visual quadrants. Action video game players showed lower motion coherence thresholds than the other groups. However, when the task was performed at threshold, we did not find differences between groups in terms of distributions of reaction times. These results suggest that action video games improve visual motion sensitivity in the near periphery of the visual field, rather than speed response. © The Author(s) 2016.
Lee, Hannah; Kim, Jejoong
2017-06-01
It has been reported that visual perception can be influenced not only by the physical features of a stimulus but also by the emotional valence of the stimulus, even without explicit emotion recognition. Some previous studies reported an anger superiority effect while others found a happiness superiority effect during visual perception. It thus remains unclear as to which emotion is more influential. In the present study, we conducted two experiments using biological motion (BM) stimuli to examine whether emotional valence of the stimuli would affect BM perception; and if so, whether a specific type of emotion is associated with a superiority effect. Point-light walkers with three emotion types (anger, happiness, and neutral) were used, and the threshold to detect BM within noise was measured in Experiment 1. Participants showed higher performance in detecting happy walkers compared with the angry and neutral walkers. Follow-up motion velocity analysis revealed that physical difference among the stimuli was not the main factor causing the effect. The results of the emotion recognition task in Experiment 2 also showed a happiness superiority effect, as in Experiment 1. These results show that emotional valence (happiness) of the stimuli can facilitate the processing of BM.
Wada, Atsushi; Sakano, Yuichi; Ando, Hiroshi
2016-01-01
Vision is important for estimating self-motion, which is thought to involve optic-flow processing. Here, we investigated the fMRI response profiles in visual area V6, the precuneus motion area (PcM), and the cingulate sulcus visual area (CSv)—three medial brain regions recently shown to be sensitive to optic-flow. We used wide-view stereoscopic stimulation to induce robust self-motion processing. Stimuli included static, randomly moving, and coherently moving dots (simulating forward self-motion). We varied the stimulus size and the presence of stereoscopic information. A combination of univariate and multi-voxel pattern analyses (MVPA) revealed that fMRI responses in the three regions differed from each other. The univariate analysis identified optic-flow selectivity and an effect of stimulus size in V6, PcM, and CSv, among which only CSv showed a significantly lower response to random motion stimuli compared with static conditions. Furthermore, MVPA revealed an optic-flow specific multi-voxel pattern in the PcM and CSv, where the discrimination of coherent motion from both random motion and static conditions showed above-chance prediction accuracy, but that of random motion from static conditions did not. Additionally, while area V6 successfully classified different stimulus sizes regardless of motion pattern, this classification was only partial in PcM and was absent in CSv. This may reflect the known retinotopic representation in V6 and the absence of such clear visuospatial representation in CSv. We also found significant correlations between the strength of subjective self-motion and univariate activation in all examined regions except for primary visual cortex (V1). This neuro-perceptual correlation was significantly higher for V6, PcM, and CSv when compared with V1, and higher for CSv when compared with the visual motion area hMT+. 
Our convergent results suggest the significant involvement of CSv in self-motion processing, which may give rise to its percept. PMID:26973588
Wall, Michael; Woodward, Kimberly R; Doyle, Carrie K; Artes, Paul H
2009-02-01
Standard automated perimetry (SAP) shows a marked increase in variability in damaged areas of the visual field. This study was conducted to test the hypothesis that larger stimuli are associated with more uniform variability, by investigating the retest variability of four perimetry tests: standard automated perimetry size III (SAP III), with the SITA standard strategy; SAP size V (SAP V), with the full-threshold strategy; Matrix (FDT II), and Motion perimetry. One eye each of 120 patients with glaucoma was examined on the same day with these four perimetric tests and retested 1 to 8 weeks later. The decibel scales were adjusted to make the test's scales numerically similar. Retest variability was examined by establishing the distributions of retest threshold estimates, for each threshold level observed at the first test. The 5th and 95th percentiles of the retest distribution were used as point-wise limits of retest variability. Regression analyses were performed to quantify the relationship between visual field sensitivity and variability. With SAP III, the retest variability increased substantially with reducing sensitivity. Corresponding increases with SAP V, Matrix, and Motion perimetry were considerably smaller or absent. With SAP III, sensitivity explained 22% of the retest variability (r(2)), whereas corresponding data for SAP V, Matrix, and Motion perimetry were 12%, 2%, and 2%, respectively. Variability of Matrix and Motion perimetry does not increase as substantially as that of SAP III in damaged areas of the visual field. Increased sampling with the larger stimuli of these techniques is the likely explanation for this finding. These properties may make these stimuli excellent candidates for early detection of visual field progression.
Visual motion detection and habitat preference in Anolis lizards.
Steinberg, David S; Leal, Manuel
2016-11-01
The perception of visual stimuli has been a major area of inquiry in sensory ecology, and much of this work has focused on coloration. However, for visually oriented organisms, the process of visual motion detection is often equally crucial to survival and reproduction. Despite the importance of motion detection to many organisms' daily activities, the degree of interspecific variation in the perception of visual motion remains largely unexplored. Furthermore, the factors driving this potential variation (e.g., ecology or evolutionary history) along with the effects of such variation on behavior are unknown. We used a behavioral assay under laboratory conditions to quantify the visual motion detection systems of three species of Puerto Rican Anolis lizard that prefer distinct structural habitat types. We then compared our results to data previously collected for anoles from Cuba, Puerto Rico, and Central America. Our findings indicate that general visual motion detection parameters are similar across species, regardless of habitat preference or evolutionary history. We argue that these conserved sensory properties may drive the evolution of visual communication behavior in this clade.
Like a rolling stone: naturalistic visual kinematics facilitate tracking eye movements.
Souto, David; Kerzel, Dirk
2013-02-06
Newtonian physics constrains object kinematics in the real world. We asked whether eye movements towards tracked objects depend on their compliance with those constraints. In particular, the force of gravity constrains round objects to roll on the ground with a particular rotational and translational motion. We measured tracking eye movements towards rolling objects. We found that objects whose rotational and translational motion was congruent with an object rolling on the ground elicited faster tracking eye movements during pursuit initiation than incongruent stimuli. Relative to a condition without a rotational component, we essentially obtained benefits of congruence and, to a lesser extent, costs of incongruence. Anticipatory pursuit responses showed no congruence effect, suggesting that the effect is based on visually-driven predictions, not on velocity storage. We suggest that the eye movement system incorporates information about object kinematics acquired by a lifetime of experience with visual stimuli obeying the laws of Newtonian physics.
Higher order visual input to the mushroom bodies in the bee, Bombus impatiens.
Paulk, Angelique C; Gronenberg, Wulfila
2008-11-01
To produce appropriate behaviors based on biologically relevant associations, sensory pathways conveying different modalities are integrated by higher-order central brain structures, such as insect mushroom bodies. To address this function of sensory integration, we characterized the structure and response of optic lobe (OL) neurons projecting to the calyces of the mushroom bodies in bees. Bees are well known for their visual learning and memory capabilities and their brains possess major direct visual input from the optic lobes to the mushroom bodies. To functionally characterize these visual inputs to the mushroom bodies, we recorded intracellularly from neurons in bumblebees (Apidae: Bombus impatiens) and a single neuron in a honeybee (Apidae: Apis mellifera) while presenting color and motion stimuli. All of the mushroom body input neurons were color sensitive while a subset was motion sensitive. Additionally, most of the mushroom body input neurons would respond to the first, but not to subsequent, presentations of repeated stimuli. In general, the medulla or lobula neurons projecting to the calyx signaled specific chromatic, temporal, and motion features of the visual world to the mushroom bodies, which included sensory information required for the biologically relevant associations bees form during foraging tasks.
Ip, Ifan Betina; Bridge, Holly; Parker, Andrew J.
2014-01-01
An important advance in the study of visual attention has been the identification of a non-spatial component of attention that enhances the response to similar features or objects across the visual field. Here we test whether this non-spatial component can co-select individual features that are perceptually bound into a coherent object. We combined human psychophysics and functional magnetic resonance imaging (fMRI) to demonstrate the ability to co-select individual features from perceptually coherent objects. Our study used binocular disparity and visual motion to define disparity structure-from-motion (dSFM) stimuli. Although the spatial attention system induced strong modulations of the fMRI response in visual regions, the non-spatial system’s ability to co-select features of the dSFM stimulus was less pronounced and variable across subjects. Our results demonstrate that feature and global feature attention effects are variable across participants, suggesting that the feature attention system may be limited in its ability to automatically select features within the attended object. Careful comparison of the task design suggests that even minor differences in the perceptual task may be critical in revealing the presence of global feature attention. PMID:24936974
Effects of spatial cues on color-change detection in humans
Herman, James P.; Bogadhi, Amarender R.; Krauzlis, Richard J.
2015-01-01
Studies of covert spatial attention have largely used motion, orientation, and contrast stimuli as these features are fundamental components of vision. The feature dimension of color is also fundamental to visual perception, particularly for catarrhine primates, and yet very little is known about the effects of spatial attention on color perception. Here we present results using novel dynamic color stimuli in both discrimination and color-change detection tasks. We find that our stimuli yield comparable discrimination thresholds to those obtained with static stimuli. Further, we find that an informative spatial cue improves performance and speeds response time in a color-change detection task compared with an uncued condition, similar to what has been demonstrated for motion, orientation, and contrast stimuli. Our results demonstrate the use of dynamic color stimuli for an established psychophysical task and show that color stimuli are well suited to the study of spatial attention. PMID:26047359
Cross-modal prediction changes the timing of conscious access during the motion-induced blindness.
Chang, Acer Y C; Kanai, Ryota; Seth, Anil K
2015-01-01
Despite accumulating evidence that perceptual predictions influence perceptual content, the relations between these predictions and conscious contents remain unclear, especially for cross-modal predictions. We examined whether predictions of visual events by auditory cues can facilitate conscious access to the visual stimuli. We trained participants to learn associations between auditory cues and colour changes. We then asked whether congruency between auditory cues and target colours would speed access to consciousness. We did this by rendering a visual target subjectively invisible using motion-induced blindness and then gradually changing its colour while presenting congruent or incongruent auditory cues. Results showed that the visual target gained access to consciousness faster in congruent than in incongruent trials; control experiments excluded potentially confounding effects of attention and motor response. The expectation effect was gradually established over blocks, suggesting a role for extensive training. Overall, our findings show that predictions learned through cross-modal training can facilitate conscious access to visual stimuli. Copyright © 2014 Elsevier Inc. All rights reserved.
Emotional and movement-related body postures modulate visual processing
Borhani, Khatereh; Làdavas, Elisabetta; Maier, Martin E.; Avenanti, Alessio
2015-01-01
Human body postures convey useful information for understanding others’ emotions and intentions. To investigate at which stage of visual processing emotional and movement-related information conveyed by bodies is discriminated, we examined event-related potentials elicited by laterally presented images of bodies with static postures and implied-motion body images with neutral, fearful or happy expressions. At the early stage of visual structural encoding (N190), we found a difference in the sensitivity of the two hemispheres to observed body postures. Specifically, the right hemisphere showed a N190 modulation both for the motion content (i.e. all the observed postures implying body movements elicited greater N190 amplitudes compared with static postures) and for the emotional content (i.e. fearful postures elicited the largest N190 amplitude), while the left hemisphere showed a modulation only for the motion content. In contrast, at a later stage of perceptual representation, reflecting selective attention to salient stimuli, an increased early posterior negativity was observed for fearful stimuli in both hemispheres, suggesting an enhanced processing of motivationally relevant stimuli. The observed modulations, both at the early stage of structural encoding and at the later processing stage, suggest the existence of a specialized perceptual mechanism tuned to emotion- and action-related information conveyed by human body postures. PMID:25556213
Default perception of high-speed motion
Wexler, Mark; Glennerster, Andrew; Cavanagh, Patrick; Ito, Hiroyuki; Seno, Takeharu
2013-01-01
When human observers are exposed to even slight motion signals followed by brief visual transients—stimuli containing no detectable coherent motion signals—they perceive large and salient illusory jumps. This visually striking effect, which we call “high phi,” challenges well-entrenched assumptions about the perception of motion, namely the minimal-motion principle and the breakdown of coherent motion perception with steps above an upper limit called dmax. Our experiments with transients, such as texture randomization or contrast reversal, show that the magnitude of the jump depends on spatial frequency and transient duration—but not on the speed of the inducing motion signals—and the direction of the jump depends on the duration of the inducer. Jump magnitude is robust across jump directions and different types of transient. In addition, when a texture is actually displaced by a large step beyond the upper step size limit of dmax, a breakdown of coherent motion perception is expected; however, in the presence of an inducer, observers again perceive coherent displacements at or just above dmax. In summary, across a large variety of stimuli, we find that when incoherent motion noise is preceded by a small bias, instead of perceiving little or no motion—as suggested by the minimal-motion principle—observers perceive jumps whose amplitude closely follows their own dmax limits. PMID:23572578
Coordinates of Human Visual and Inertial Heading Perception.
Crane, Benjamin Thomas
2015-01-01
Heading estimation involves both inertial and visual cues. Inertial motion is sensed by the labyrinth, somatic sensation by the body, and optic flow by the retina. Because the eye and head are mobile, these stimuli are sensed relative to different reference frames, and it remains unclear whether perception occurs in a common reference frame. Recent neurophysiologic evidence has suggested that the reference frames remain separate even at higher levels of processing but has not addressed the resulting perception. Seven human subjects experienced a 2-s, 16-cm/s translation and/or a visual stimulus corresponding with this translation. For each condition, 72 stimuli (360° in 5° increments) were delivered in random order. After each stimulus the subject identified the perceived heading using a mechanical dial. Some trial blocks included interleaved conditions in which the influence of ±28° of gaze and/or head position was examined. The observations were fit using a two-degree-of-freedom population vector decoder (PVD) model that considered the relative sensitivity to lateral motion and the coordinate-system offset. For visual stimuli, gaze shifts caused shifts in perceived heading estimates in the direction opposite the gaze shift in all subjects. These perceptual shifts averaged 13 ± 2° for eye-only gaze shifts and 17 ± 2° for eye-head gaze shifts. This finding indicates that visual headings are biased towards retinal coordinates. Similar gaze and head direction shifts prior to inertial headings had no significant influence on heading direction. Thus inertial headings are perceived in body-centered coordinates. Combined visual and inertial stimuli yielded intermediate results. PMID:26267865
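The two-degree-of-freedom decoder described in the Crane abstract can be caricatured in a few lines. This is an illustrative reconstruction under stated assumptions (a sine/cosine readout of lateral versus fore-aft motion components, a lateral-sensitivity gain, and a reference-frame offset such as a gaze shift), not the paper's actual model; the function name and parameter values are hypothetical:

```python
import numpy as np

def pvd_heading(true_heading_deg, lateral_gain, offset_deg):
    """Toy population-vector-style decoder: the perceived heading is
    read out from the lateral (sine) and fore-aft (cosine) components
    of the stimulus direction, with the lateral component scaled by a
    sensitivity gain and the whole frame rotated by an offset."""
    theta = np.radians(true_heading_deg + offset_deg)
    # Over-weighting lateral motion skews the decoded angle sideways.
    perceived = np.degrees(np.arctan2(lateral_gain * np.sin(theta),
                                      np.cos(theta)))
    return perceived % 360.0

# Unit gain and no offset: the decoder is veridical.
print(round(pvd_heading(90.0, 1.0, 0.0), 6))   # 90.0
# A 17° frame offset shifts the percept by the same amount.
print(round(pvd_heading(90.0, 1.0, 17.0), 6))  # 107.0
```

With `lateral_gain` above 1, oblique headings are pulled toward the lateral axis, which is the kind of systematic bias such a model can capture.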
Lahnakoski, Juha M; Salmi, Juha; Jääskeläinen, Iiro P; Lampinen, Jouko; Glerean, Enrico; Tikka, Pia; Sams, Mikko
2012-01-01
Understanding how the brain processes stimuli in a rich natural environment is a fundamental goal of neuroscience. Here, we showed a feature film to 10 healthy volunteers during functional magnetic resonance imaging (fMRI) of hemodynamic brain activity. We then annotated auditory and visual features of the motion picture to inform analysis of the hemodynamic data. The annotations were fitted to both voxel-wise data and brain network time courses extracted by independent component analysis (ICA). Auditory annotations correlated with two independent components (ICs) disclosing two functional networks, one responding to a variety of auditory stimulation and another responding preferentially to speech, with parts of the network also responding to non-verbal communication. Visual feature annotations correlated with four ICs delineating visual areas according to their sensitivity to different visual stimulus features. In comparison, a separate voxel-wise general linear model based analysis disclosed brain areas preferentially responding to sound energy, speech, music, visual contrast edges, body motion and hand motion, which largely overlapped the results revealed by ICA. Differences between the results of IC- and voxel-based analyses demonstrate that thorough analysis of voxel time courses is important for understanding the activity of specific sub-areas of the functional networks, while ICA is a valuable tool for revealing novel information about functional connectivity that need not be explained by the predefined model. Our results encourage the use of naturalistic stimuli and tasks in cognitive neuroimaging to study how the brain processes stimuli in rich natural environments. PMID:22496909
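At its simplest, the annotation-fitting approach described above reduces to correlating a stimulus-feature annotation with each recovered component (or voxel) time course. The following is a toy sketch with synthetic data; all variable names and numbers are illustrative, not from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic annotation time course (e.g. moment-to-moment speech
# salience) and two component time courses: one driven by the
# annotation plus noise, one unrelated.
speech_annotation = rng.random(200)
speech_network_tc = 0.8 * speech_annotation + 0.2 * rng.random(200)
unrelated_tc = rng.random(200)

def annotation_fit(annotation, timecourse):
    """Pearson correlation between a stimulus annotation and a
    component (or voxel) time course."""
    return np.corrcoef(annotation, timecourse)[0, 1]

# The annotation should pick out the matching functional network.
print(annotation_fit(speech_annotation, speech_network_tc) >
      annotation_fit(speech_annotation, unrelated_tc))  # True
```

In a real analysis the annotation would first be convolved with a hemodynamic response function before fitting, a step omitted here for brevity.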
Visual cortex in dementia with Lewy bodies: magnetic resonance imaging study
Taylor, John-Paul; Firbank, Michael J.; He, Jiabao; Barnett, Nicola; Pearce, Sarah; Livingstone, Anthea; Vuong, Quoc; McKeith, Ian G.; O’Brien, John T.
2012-01-01
Background: Visual hallucinations and visuoperceptual deficits are common in dementia with Lewy bodies, suggesting that cortical visual function may be abnormal. Aims: To investigate (1) cortical visual function using functional magnetic resonance imaging (fMRI); and (2) the nature and severity of perfusion deficits in visual areas using arterial spin labelling (ASL)-MRI. Method: In total, 17 participants with dementia with Lewy bodies (DLB group) and 19 similarly aged controls were presented with simple visual stimuli (checkerboard, moving dots, and objects) during fMRI and subsequently underwent ASL-MRI (DLB group n = 15, control group n = 19). Results: Functional activations were evident in visual areas in both the DLB and control groups in response to checkerboard and objects stimuli but reduced visual area V5/MT (middle temporal) activation occurred in the DLB group in response to motion stimuli. Posterior cortical perfusion deficits occurred in the DLB group, particularly in higher visual areas. Conclusions: Higher visual areas, particularly occipito-parietal, appear abnormal in dementia with Lewy bodies, while there is a preservation of function in lower visual areas (V1 and V2/3). PMID:22500014
Audio-Visual, Visuo-Tactile and Audio-Tactile Correspondences in Preschoolers.
Nava, Elena; Grassi, Massimo; Turati, Chiara
2016-01-01
Interest in crossmodal correspondences has recently seen a renaissance thanks to numerous studies in human adults. Yet, still very little is known about crossmodal correspondences in children, particularly in sensory pairings other than audition and vision. In the current study, we investigated whether 4-5-year-old children match auditory pitch to the spatial motion of visual objects (audio-visual condition). In addition, we investigated whether this correspondence extends to touch, i.e., whether children also match auditory pitch to the spatial motion of touch (audio-tactile condition) and the spatial motion of visual objects to touch (visuo-tactile condition). In two experiments, two different groups of children were asked to indicate which of two stimuli fitted best with a centrally located third stimulus (Experiment 1), or to report whether two presented stimuli fitted together well (Experiment 2). We found sensitivity to the congruency of all of the sensory pairings only in Experiment 2, suggesting that only under specific circumstances can these correspondences be observed. Our results suggest that pitch-height correspondences for audio-visual and audio-tactile combinations may still be weak in preschool children, and speculate that this could be due to immature linguistic and auditory cues that are still developing at age five.
Peripheral Vision of Youths with Low Vision: Motion Perception, Crowding, and Visual Search
Tadin, Duje; Nyquist, Jeffrey B.; Lusk, Kelly E.; Corn, Anne L.; Lappin, Joseph S.
2012-01-01
Purpose. Effects of low vision on peripheral visual function are poorly understood, especially in children whose visual skills are still developing. The aim of this study was to measure both central and peripheral visual functions in youths with typical and low vision. Of specific interest was the extent to which measures of foveal function predict performance of peripheral tasks. Methods. We assessed central and peripheral visual functions in youths with typical vision (n = 7, ages 10–17) and low vision (n = 24, ages 9–18). Experimental measures used both static and moving stimuli and included visual crowding, visual search, motion acuity, motion direction discrimination, and multitarget motion comparison. Results. In most tasks, visual function was impaired in youths with low vision. Substantial differences, however, were found both between participant groups and, importantly, across different tasks within participant groups. Foveal visual acuity was a modest predictor of peripheral form vision and motion sensitivity in either the central or peripheral field. Despite exhibiting normal motion discriminations in fovea, motion sensitivity of youths with low vision deteriorated in the periphery. This contrasted with typically sighted participants, who showed improved motion sensitivity with increasing eccentricity. Visual search was greatly impaired in youths with low vision. Conclusions. Our results reveal a complex pattern of visual deficits in peripheral vision and indicate a significant role of attentional mechanisms in observed impairments. These deficits were not adequately captured by measures of foveal function, arguing for the importance of independently assessing peripheral visual function. PMID:22836766
Spatial Attention and Audiovisual Interactions in Apparent Motion
ERIC Educational Resources Information Center
Sanabria, Daniel; Soto-Faraco, Salvador; Spence, Charles
2007-01-01
In this study, the authors combined the cross-modal dynamic capture task (involving the horizontal apparent movement of visual and auditory stimuli) with spatial cuing in the vertical dimension to investigate the role of spatial attention in cross-modal interactions during motion perception. Spatial attention was manipulated endogenously, either…
Residual attention guidance in blindsight monkeys watching complex natural scenes.
Yoshida, Masatoshi; Itti, Laurent; Berg, David J; Ikeda, Takuro; Kato, Rikako; Takaura, Kana; White, Brian J; Munoz, Douglas P; Isa, Tadashi
2012-08-07
Patients with damage to primary visual cortex (V1) demonstrate residual performance on laboratory visual tasks despite denial of conscious seeing (blindsight) [1]. After a period of recovery, which suggests a role for plasticity [2], visual sensitivity higher than chance is observed in humans and monkeys for simple luminance-defined stimuli, grating stimuli, moving gratings, and other stimuli [3-7]. Some residual cognitive processes including bottom-up attention and spatial memory have also been demonstrated [8-10]. To date, little is known about blindsight with natural stimuli and spontaneous visual behavior. In particular, is orienting attention toward salient stimuli during free viewing still possible? We used a computational saliency map model to analyze spontaneous eye movements of monkeys with blindsight from unilateral ablation of V1. Despite general deficits in gaze allocation, monkeys were significantly attracted to salient stimuli. The contribution of orientation features to salience was nearly abolished, whereas contributions of motion, intensity, and color features were preserved. Control experiments employing laboratory stimuli confirmed the free-viewing finding that lesioned monkeys retained color sensitivity. Our results show that attention guidance over complex natural scenes is preserved in the absence of V1, thereby directly challenging theories and models that crucially depend on V1 to compute the low-level visual features that guide attention. Copyright © 2012 Elsevier Ltd. All rights reserved.
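The saliency-map analysis mentioned above builds on center-surround feature contrast, as in Itti-Koch-style models. Below is a minimal single-feature sketch of that core idea, assuming simple box-blurred "center" and "surround" scales; it is not the model used in the study, and the scales and test image are illustrative:

```python
import numpy as np

def center_surround_saliency(image, c=1, s=4):
    """Intensity-channel sketch of center-surround salience: the
    absolute difference between a fine ('center') and a coarse
    ('surround') box-blurred copy of the image. Full saliency models
    combine many feature channels (color, orientation, motion) across
    multiple scales; this shows only one channel at one scale pair."""
    def box_blur(img, r):
        if r == 0:
            return img.astype(float)
        k = 2 * r + 1
        padded = np.pad(img.astype(float), r, mode="edge")
        out = np.zeros_like(img, dtype=float)
        for dy in range(k):            # accumulate the k*k window sum
            for dx in range(k):
                out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)
    return np.abs(box_blur(image, c) - box_blur(image, s))

# A single bright dot on a dark field: salience peaks at the dot.
img = np.zeros((32, 32))
img[16, 16] = 1.0
sal = center_surround_saliency(img)
print(sal[16, 16] == sal.max())  # True
```

In the study's free-viewing analysis, maps like this (summed over feature channels) are compared against measured gaze positions to ask which features still attract the lesioned monkeys' attention.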
People can understand descriptions of motion without activating visual motion brain regions
Dravida, Swethasri; Saxe, Rebecca; Bedny, Marina
2013-01-01
What is the relationship between our perceptual and linguistic neural representations of the same event? We approached this question by asking whether visual perception of motion and understanding linguistic depictions of motion rely on the same neural architecture. The same group of participants took part in two language tasks and one visual task. In task 1, participants made semantic similarity judgments with high motion (e.g., “to bounce”) and low motion (e.g., “to look”) words. In task 2, participants made plausibility judgments for passages describing movement (“A centaur hurled a spear … ”) or cognitive events (“A gentleman loved cheese …”). Task 3 was a visual motion localizer in which participants viewed animations of point-light walkers, randomly moving dots, and stationary dots changing in luminance. Based on the visual motion localizer, we identified classic visual motion areas of the temporal (MT/MST and STS) and parietal cortex (inferior and superior parietal lobules). We find that these visual cortical areas are largely distinct from neural responses to linguistic depictions of motion. Motion words did not activate any part of the visual motion system. Motion passages produced a small response in the right superior parietal lobule, but none of the temporal motion regions. These results suggest that (1) as compared to words, rich language stimuli such as passages are more likely to evoke mental imagery and more likely to affect perceptual circuits and (2) effects of language on the visual system are more likely in secondary perceptual areas as compared to early sensory areas. We conclude that language and visual perception constitute distinct but interacting systems. PMID:24009592
Kinesthetic information disambiguates visual motion signals.
Hu, Bo; Knill, David C
2010-05-25
Numerous studies have shown that extra-retinal signals can disambiguate motion information created by movements of the eye or head. We report a new form of cross-modal sensory integration in which the kinesthetic information generated by active hand movements essentially captures ambiguous visual motion information. Several previous studies have shown that active movement can bias observers' percepts of bi-stable stimuli; however, these effects seem to be best explained by attentional mechanisms. We show that kinesthetic information can change an otherwise stable perception of motion, providing evidence of genuine fusion between visual and kinesthetic information. The experiments take advantage of the aperture problem, in which the motion of a one-dimensional grating pattern behind an aperture, while geometrically ambiguous, appears to move stably in the grating normal direction. When actively moving the pattern, however, the observer sees the motion to be in the hand movement direction. Copyright 2010 Elsevier Ltd. All rights reserved.
Is improved contrast sensitivity a natural consequence of visual training?
Levi, Aaron; Shaked, Danielle; Tadin, Duje; Huxlin, Krystel R.
2015-01-01
Many studies have shown that training and testing conditions modulate specificity of visual learning to trained stimuli and tasks. In visually impaired populations, generalizability of visual learning to untrained stimuli/tasks is almost always reported, with contrast sensitivity (CS) featuring prominently among these collaterally-improved functions. To understand factors underlying this difference, we measured CS for direction and orientation discrimination in the visual periphery of three groups of visually-intact subjects. Group 1 trained on an orientation discrimination task with static Gabors whose luminance contrast was decreased as performance improved. Group 2 trained on a global direction discrimination task using high-contrast random dot stimuli previously used to recover motion perception in cortically blind patients. Group 3 underwent no training. Both forms of training improved CS with some degree of specificity for basic attributes of the trained stimulus/task. Group 1's largest enhancement was in CS around the trained spatial/temporal frequencies; similarly, Group 2's largest improvements occurred in CS for discriminating moving and flickering stimuli. Group 3 saw no significant CS changes. These results indicate that CS improvements may be a natural consequence of multiple forms of visual training in visually intact humans, albeit with some specificity to the trained visual domain(s). PMID:26305736
Characterizing the effects of feature salience and top-down attention in the early visual system.
Poltoratski, Sonia; Ling, Sam; McCormack, Devin; Tong, Frank
2017-07-01
The visual system employs a sophisticated balance of attentional mechanisms: salient stimuli are prioritized for visual processing, yet observers can also ignore such stimuli when their goals require directing attention elsewhere. A powerful determinant of visual salience is local feature contrast: if a local region differs from its immediate surround along one or more feature dimensions, it will appear more salient. We used high-resolution functional MRI (fMRI) at 7T to characterize the modulatory effects of bottom-up salience and top-down voluntary attention within multiple sites along the early visual pathway, including visual areas V1-V4 and the lateral geniculate nucleus (LGN). Observers viewed arrays of spatially distributed gratings, where one of the gratings immediately to the left or right of fixation differed from all other items in orientation or motion direction, making it salient. To investigate the effects of directed attention, observers were cued to attend to the grating to the left or right of fixation, which was either salient or nonsalient. Results revealed reliable additive effects of top-down attention and stimulus-driven salience throughout visual areas V1-hV4. In comparison, the LGN exhibited significant attentional enhancement but was not reliably modulated by orientation- or motion-defined salience. Our findings indicate that top-down effects of spatial attention can influence visual processing at the earliest possible site along the visual pathway, including the LGN, whereas the processing of orientation- and motion-driven salience primarily involves feature-selective interactions that take place in early cortical visual areas. NEW & NOTEWORTHY While spatial attention allows for specific, goal-driven enhancement of stimuli, salient items outside of the current focus of attention must also be prioritized. We used 7T fMRI to compare salience and spatial attentional enhancement along the early visual hierarchy. We report additive effects of attention and bottom-up salience in early visual areas, suggesting that salience enhancement is not contingent on the observer's attentional state. Copyright © 2017 the American Physiological Society.
Alpha oscillations correlate with the successful inhibition of unattended stimuli.
Händel, Barbara F; Haarmeier, Thomas; Jensen, Ole
2011-09-01
Because the human visual system is continually being bombarded with inputs, it is necessary to have effective mechanisms for filtering out irrelevant information. This is partly achieved by the allocation of attention, allowing the visual system to process relevant input while blocking out irrelevant input. What is the physiological substrate of attentional allocation? It has been proposed that alpha activity reflects functional inhibition. Here we asked if inhibition by alpha oscillations has behavioral consequences for suppressing the perception of unattended input. To this end, we investigated the influence of alpha activity on motion processing in two attentional conditions using magneto-encephalography. The visual stimuli used consisted of two random-dot kinematograms presented simultaneously to the left and right visual hemifields. Subjects were cued to covertly attend the left or right kinematogram. After 1.5 sec, a second cue tested whether subjects could report the direction of coherent motion in the attended (80%) or unattended hemifield (20%). Occipital alpha power was higher contralateral to the unattended side than to the attended side, thus suggesting inhibition of the unattended hemifield. Our key finding is that this alpha lateralization in the 20% invalidly cued trials did correlate with the perception of motion direction: Subjects with pronounced alpha lateralization were worse at detecting motion direction in the unattended hemifield. In contrast, lateralization did not correlate with visual discrimination in the attended visual hemifield. Our findings emphasize the suppressive nature of alpha oscillations and suggest that processing of inputs outside the field of attention is weakened by means of increased alpha activity.
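Hemifield-specific alpha lateralization of the kind reported here is commonly quantified with a normalized contralateral-minus-ipsilateral power index. The sketch below makes that assumption (the paper's exact index may differ), and the power values are arbitrary illustrative numbers:

```python
def lateralization_index(p_contra, p_ipsi):
    """Normalized alpha-power asymmetry. Positive values indicate more
    alpha power contralateral to the unattended (suppressed) hemifield,
    i.e. stronger inhibition of the unattended side."""
    return (p_contra - p_ipsi) / (p_contra + p_ipsi)

# E.g. 1.2 vs 0.8 (arbitrary units) gives a modest positive index.
print(round(lateralization_index(1.2, 0.8), 3))  # 0.2
```

Normalizing by the total power makes the index comparable across subjects with different overall alpha levels, which matters when, as here, the index is correlated with behavior across individuals.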
Campbell, Julia; Sharma, Anu
2016-01-01
Measures of visual cortical development in children demonstrate high variability and inconsistency throughout the literature. This is partly due to the specificity of the visual system in processing certain features. It may then be advantageous to activate multiple cortical pathways in order to observe maturation of coinciding networks. Visual stimuli eliciting the percept of apparent motion and shape change are designed to simultaneously activate both dorsal and ventral visual streams. However, research has shown that such stimuli also elicit variable visual evoked potential (VEP) morphology in children. The aim of this study was to describe developmental changes in VEPs, including morphological patterns, and underlying visual cortical generators, elicited by apparent motion and shape change in school-aged children. Forty-one typically developing children underwent high-density EEG recordings in response to a continuously morphing, radially modulated, circle-star grating. VEPs were then compared across the age groups of 5-7, 8-10, and 11-15 years according to latency and amplitude. Current density reconstructions (CDR) were performed on VEP data in order to observe activated cortical regions. It was found that two distinct VEP morphological patterns occurred in each age group. However, there were no major developmental differences between the age groups according to each pattern. CDR further demonstrated consistent visual generators across age and pattern. These results describe two novel VEP morphological patterns in typically developing children, but with similar underlying cortical sources. The importance of these morphological patterns is discussed in terms of future studies and the investigation of a relationship to visual cognitive performance. PMID:27445738
Comparative Effects of Seven Verbal-Visual Presentation Modes Upon Learning Tasks.
ERIC Educational Resources Information Center
Russell, Josiah Johnson, IV
A study was made of the comparative media effects upon teaching the component learning tasks of concept learning: classification, generalization, and application. The seven selected methods of presenting stimuli to the learners were: motion pictures with spoken verbal; motion pictures, silent; still pictures with spoken verbal; still pictures,…
Lasers' spectral and temporal profile can affect visual glare disability.
Beer, Jeremy M A; Freeman, David A
2012-12-01
Experiments measured the effects of laser glare on visual orientation and motion perception. Laser stimuli were varied according to spectral composition and temporal presentation as subjects identified targets' tilt (Experiment 1) and movement (Experiment 2). The objective was to determine whether the glare parameters would alter visual disruption. Three spectral profiles (monochromatic Green vs. polychromatic White vs. alternating Red-Green) were used to produce a ring of laser glare surrounding a target. Two experiments were performed to measure the minimum contrast required to report target orientation or motion direction. The temporal glare profile was also varied: the ring was illuminated either continuously or discontinuously. Time-averaged luminance of the glare stimuli was matched across all conditions. In both experiments, threshold (ΔL) values were approximately 0.15 log units higher in monochromatic Green than in polychromatic White conditions. In Experiment 2 (motion identification), thresholds were approximately 0.17 log units higher in rapidly flashing (6, 10, or 14 Hz) than in continuous exposure conditions. Monochromatic extended-source laser glare disrupted orientation and motion identification more than polychromatic glare. In the motion task, pulse trains faster than 6 Hz (but below flicker fusion) elevated thresholds more than continuous glare with the same time-averaged luminance. Under these conditions, alternating the wavelength of monochromatic glare over time did not aggravate disability relative to green-only glare. Repetitively flashing monochromatic laser glare induced occasional episodes of impaired motion identification, perhaps resulting from cognitive interference. Interference speckle might play a role in aggravating monochromatic glare effects.
A neural model of motion processing and visual navigation by cortical area MST.
Grossberg, S; Mingolla, E; Pack, C
1999-12-01
Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, and when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates.
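The heading-from-optic-flow computation described in this entry can be illustrated with a toy calculation (not the authors' neural model): under pure observer translation, flow vectors radiate from a focus of expansion (FOE) whose retinal position marks the heading, and the FOE can be recovered by least squares. The function name `estimate_foe` and the synthetic flow field below are illustrative assumptions.

```python
import numpy as np

def estimate_foe(points, flows):
    """Estimate the focus of expansion (FOE) of a radial flow field.

    Each flow vector points away from the FOE, so the FOE lies on the
    line through each point along its flow direction; the FOE is the
    least-squares intersection of those lines.
    """
    # Normal to each flow vector; the constraint is n . (foe - p) = 0
    normals = np.stack([-flows[:, 1], flows[:, 0]], axis=1)
    b = np.sum(normals * points, axis=1)
    foe, *_ = np.linalg.lstsq(normals, b, rcond=None)
    return foe

# Synthetic expanding flow centred on a heading point at (2.0, -1.0)
rng = np.random.default_rng(0)
pts = rng.uniform(-10, 10, size=(200, 2))
true_foe = np.array([2.0, -1.0])
flow = 0.1 * (pts - true_foe)  # pure expansion away from the FOE

print(np.round(estimate_foe(pts, flow), 3))  # ≈ [ 2. -1.]
```

With noiseless expansion the recovery is exact; adding rotational flow or noise would require the kind of extraretinal compensation the model addresses.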
Rhesus Monkeys Behave As If They Perceive the Duncker Illusion
Zivotofsky, A. Z.; Goldberg, M. E.; Powell, K. D.
2008-01-01
The visual system uses the pattern of motion on the retina to analyze the motion of objects in the world, and the motion of the observer him/herself. Distinguishing between retinal motion evoked by movement of the retina in space and retinal motion evoked by movement of objects in the environment is computationally difficult, and the human visual system frequently misinterprets the meaning of retinal motion. In this study, we demonstrate that the visual system of the Rhesus monkey also misinterprets retinal motion. We show that monkeys erroneously report the trajectories of pursuit targets or their own pursuit eye movements during an epoch of smooth pursuit across an orthogonally moving background. Furthermore, when they make saccades to the spatial location of stimuli that flashed early in an epoch of smooth pursuit or fixation, they make large errors that appear to take into account the erroneous smooth eye movement that they report in the first experiment, and not the eye movement that they actually make. PMID:16102233
Bosworth, Rain G.; Petrich, Jennifer A.; Dobkins, Karen R.
2012-01-01
In order to investigate differences in the effects of spatial attention between the left visual field (LVF) and the right visual field (RVF), we employed a full/poor attention paradigm using stimuli presented in the LVF vs. RVF. In addition, to investigate differences in the effects of spatial attention between the Dorsal and Ventral processing streams, we obtained motion thresholds (motion coherence thresholds and fine direction discrimination thresholds) and orientation thresholds, respectively. The results of this study showed negligible effects of attention on the orientation task, in either the LVF or RVF. In contrast, for both motion tasks, there was a significant effect of attention in the LVF, but not in the RVF. These data provide psychophysical evidence for greater effects of spatial attention in the LVF/right hemisphere, specifically, for motion processing in the Dorsal stream. PMID:22051893
Rokem, Ariel; Silver, Michael A.
2010-01-01
Learning through experience underlies the ability to adapt to novel tasks and unfamiliar environments. However, learning must be regulated so that relevant aspects of the environment are selectively encoded. Acetylcholine (ACh) has been suggested to regulate learning by enhancing the responses of sensory cortical neurons to behaviorally-relevant stimuli [1]. In this study, we increased synaptic levels of ACh in the brains of healthy human subjects with the cholinesterase inhibitor donepezil (trade name: Aricept) and measured the effects of this cholinergic enhancement on visual perceptual learning. Each subject completed two five-day courses of training on a motion direction discrimination task [2], once while ingesting 5 mg of donepezil before every training session and once while placebo was administered. We found that cholinergic enhancement augmented perceptual learning for stimuli having the same direction of motion and visual field location used during training. In addition, perceptual learning under donepezil was more selective to the trained direction of motion and visual field location. These results, combined with previous studies demonstrating an increase in neuronal selectivity following cholinergic enhancement [3–5], suggest a possible mechanism by which ACh augments neural plasticity by directing activity to populations of neurons that encode behaviorally-relevant stimulus features. PMID:20850321
Spiegel, Daniel P; Reynaud, Alexandre; Ruiz, Tatiana; Laguë-Beauvais, Maude; Hess, Robert; Farivar, Reza
2016-05-01
Vision is disrupted by traumatic brain injury (TBI), with vision-related complaints being amongst the most common in this population. Based on the neural responses of early visual cortical areas, injury to the visual cortex would be predicted to affect both 1st-order and 2nd-order contrast sensitivity functions (CSFs): the height and/or the cut-off of the CSF are expected to be affected by TBI. Previous studies have reported disruptions only in 2nd-order contrast sensitivity, but using a narrow range of parameters and divergent methodologies; no study has characterized the effect of TBI on the full CSF for both 1st- and 2nd-order stimuli. Such information is needed to properly understand the effect of TBI on contrast perception, which underlies all visual processing. Using a unified framework based on the quick contrast sensitivity function, we measured full CSFs for static and dynamic 1st- and 2nd-order stimuli. Our results provide a unique dataset showing alterations in sensitivity for both 1st- and 2nd-order visual stimuli. In particular, we show that TBI patients have increased sensitivity for 1st-order motion stimuli and decreased sensitivity to orientation-defined and contrast-defined 2nd-order stimuli. In addition, our data suggest that TBI patients' sensitivity for both 1st-order stimuli and 2nd-order contrast-defined stimuli is shifted towards higher spatial frequencies. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Complementary mechanisms create direction selectivity in the fly
Haag, Juergen; Arenz, Alexander; Serbe, Etienne; Gabbiani, Fabrizio; Borst, Alexander
2016-01-01
How neurons become sensitive to the direction of visual motion represents a classic example of neural computation. Two alternative mechanisms have been discussed in the literature so far: preferred direction enhancement, by which responses are amplified when stimuli move along the preferred direction of the cell, and null direction suppression, where one signal inhibits the response to the subsequent one when stimuli move along the opposite, i.e. null direction. Along the processing chain in the Drosophila optic lobe, directional responses first appear in T4 and T5 cells. Visually stimulating sequences of individual columns in the optic lobe with a telescope while recording from single T4 neurons, we find both mechanisms at work implemented in different sub-regions of the receptive field. This finding explains the high degree of directional selectivity found already in the fly’s primary motion-sensing neurons and marks an important step in our understanding of elementary motion detection. DOI: http://dx.doi.org/10.7554/eLife.17421.001 PMID:27502554
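The two mechanisms named in this entry can be sketched with the textbook full (Hassenstein-Reichardt) correlator, whose mirror-symmetric half-detectors play the roles of preferred-direction enhancement and null-direction suppression. This is a standard simplification for illustration only, not the sub-receptive-field arrangement the study reports for T4 cells, and all names below are illustrative.

```python
import numpy as np

def lowpass(x, tau):
    """Discrete first-order low-pass filter, acting as a delay/smear line."""
    y = np.zeros_like(x)
    for t in range(1, len(x)):
        y[t] = y[t - 1] + (x[t - 1] - y[t - 1]) / tau
    return y

def correlator(a, b, tau=5.0):
    """Full Reichardt correlator over two neighbouring photoreceptor signals.

    The first half-detector (delayed a, correlated with b) enhances
    responses to preferred-direction motion; subtracting the mirror
    half-detector suppresses responses to null-direction motion.
    """
    return lowpass(a, tau) * b - a * lowpass(b, tau)

# A luminance pulse passing photoreceptor A, then B (rightward motion)
n = 200
t = np.arange(n)
a = np.exp(-0.5 * ((t - 80) / 5.0) ** 2)
b = np.exp(-0.5 * ((t - 90) / 5.0) ** 2)  # arrives 10 steps later

right = correlator(a, b).sum()  # preferred direction: positive response
left = correlator(b, a).sum()   # null direction: negative response
print(right > 0, left < 0)
```

The study's point is that the fly implements enhancement and suppression in distinct sub-regions of the T4 receptive field rather than in one opponent stage as sketched here.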
NASA Technical Reports Server (NTRS)
Oman, C. M.; Lichtenberg, B. K.; Mccoy, R. K.; Money, K. E.
1986-01-01
Three cases of motion sickness that occurred on Spacelab-1 are described. The relation between head movements and symptom intensity is examined. The effects of visual, tactile, and proprioceptive orientation cues on motion sickness are studied. The effectiveness of the drugs used is evaluated and it is observed that the drugs reduce the frequency of vomiting and overall discomfort. Preflight and postflight motion sickness susceptibility data are presented.
The Responsiveness of Biological Motion Processing Areas to Selective Attention Towards Goals
Herrington, John; Nymberg, Charlotte; Faja, Susan; Price, Elinora; Schultz, Robert
2012-01-01
A growing literature indicates that visual cortex areas viewed as primarily responsive to exogenous stimuli are susceptible to top-down modulation by selective attention. The present study examines whether brain areas involved in biological motion perception are among these areas – particularly with respect to selective attention towards human movement goals. Fifteen participants completed a point-light biological motion study following a two-by-two factorial design, with one factor representing an exogenous manipulation of human movement goals (goal-directed versus random movement), and the other an endogenous manipulation (a goal identification task versus an ancillary color-change task). Both manipulations yielded increased activation in the human homologue of motion-sensitive area MT+ (hMT+) as well as the extrastriate body area (EBA). The endogenous manipulation was associated with increased right posterior superior temporal sulcus (STS) activation, whereas the exogenous manipulation was associated with increased activation in left posterior STS. Selective attention towards goals activated a portion of left hMT+/EBA only during the perception of purposeful movement, consistent with emerging theories associating this area with the matching of visual motion input to known goal-directed actions. The overall pattern of results indicates that attention towards the goals of human movement activates biological motion areas. Ultimately, selective attention may explain why some studies examining biological motion show activation in hMT+ and EBA, even when using control stimuli with comparable motion properties. PMID:22796987
Contributions of the 12 neuron classes in the fly lamina to motion vision
Tuthill, John C.; Nern, Aljoscha; Holtz, Stephen L.; Rubin, Gerald M.; Reiser, Michael B.
2013-01-01
Motion detection is a fundamental neural computation performed by many sensory systems. In the fly, local motion computation is thought to occur within the first two layers of the visual system, the lamina and medulla. We constructed specific genetic driver lines for each of the 12 neuron classes in the lamina. We then depolarized and hyperpolarized each neuron type, and quantified fly behavioral responses to a diverse set of motion stimuli. We found that only a small number of lamina output neurons are essential for motion detection, while most neurons serve to sculpt and enhance these feedforward pathways. Two classes of feedback neurons (C2 and C3), and lamina output neurons (L2 and L4), are required for normal detection of directional motion stimuli. Our results reveal a prominent role for feedback and lateral interactions in motion processing, and demonstrate that motion-dependent behaviors rely on contributions from nearly all lamina neuron classes. PMID:23849200
Residual perception of biological motion in cortical blindness.
Ruffieux, Nicolas; Ramon, Meike; Lao, Junpeng; Colombo, Françoise; Stacchi, Lisa; Borruat, François-Xavier; Accolla, Ettore; Annoni, Jean-Marie; Caldara, Roberto
2016-12-01
From birth, the human visual system shows a remarkable sensitivity for perceiving biological motion. This visual ability relies on a distributed network of brain regions and can be preserved even after damage of high-level ventral visual areas. However, it remains unknown whether this critical biological skill can withstand the loss of vision following bilateral striate damage. To address this question, we tested the categorization of human and animal biological motion in BC, a rare case of cortical blindness after anoxia-induced bilateral striate damage. The severity of his impairment, encompassing various aspects of vision (i.e., color, shape, face, and object recognition) and causing blind-like behavior, contrasts with a residual ability to process motion. We presented BC with static or dynamic point-light displays (PLDs) of human or animal walkers. These stimuli were presented either individually, or in pairs in two alternative forced choice (2AFC) tasks. When confronted with individual PLDs, the patient was unable to categorize the stimuli, irrespective of whether they were static or dynamic. In the 2AFC task, BC exhibited appropriate eye movements towards diagnostic information, but performed at chance level with static PLDs, in stark contrast to his ability to efficiently categorize dynamic biological agents. This striking ability to categorize biological motion when provided with top-down information is important for at least two reasons. Firstly, it emphasizes the importance of assessing patients' (visual) abilities across a range of task constraints, which can reveal potential residual abilities that may in turn represent a key feature for patient rehabilitation. Secondly, our findings reinforce the view that the neural network processing biological motion can efficiently operate despite severely impaired low-level vision, positing our natural predisposition for processing dynamicity in biological agents as a robust feature of human vision.
Copyright © 2016 Elsevier Ltd. All rights reserved.
Aging effect in pattern, motion and cognitive visual evoked potentials.
Kuba, Miroslav; Kremláček, Jan; Langrová, Jana; Kubová, Zuzana; Szanyi, Jana; Vít, František
2012-06-01
An electrophysiological study on the effect of aging on the visual pathway and various levels of visual information processing (primary cortex, associate visual motion processing cortex and cognitive cortical areas) was performed. We examined visual evoked potentials (VEPs) to pattern-reversal, motion-onset (translation and radial motion) and visual stimuli with a cognitive task (cognitive VEPs - P300 wave) at a luminance of 17 cd/m². The most significant age-related change in a group of 150 healthy volunteers (15-85 years of age) was the increase in the P300 wave latency (2 ms per year of age). Delays of the motion-onset VEPs (0.47 ms/year in translation and 0.46 ms/year in radial motion) and the pattern-reversal VEPs (0.26 ms/year) and the reductions of their amplitudes with increasing subject age (primarily in P300) were also found to be significant. The amplitude of the motion-onset VEPs to radial motion remained the most constant parameter with increasing age. Age-related changes were stronger in males. Our results indicate that cognitive VEPs, despite larger variability of their parameters, could be a useful criterion for an objective evaluation of the aging processes within the CNS. Possible differences in aging between the motion-processing system and the form-processing system within the visual pathway might be indicated by the more pronounced delay in the motion-onset VEPs and by their preserved size for radial motion (a biologically significant variant of motion) compared to the changes in pattern-reversal VEPs. Copyright © 2012 Elsevier Ltd. All rights reserved.
The contribution of dynamic visual cues to audiovisual speech perception.
Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador
2015-08-01
Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas, some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise using unimodal auditory stimuli, and with audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.
Bernstein, Lynne E.; Jiang, Jintao; Pantazis, Dimitrios; Lu, Zhong-Lin; Joshi, Anand
2011-01-01
The talking face affords multiple types of information. To isolate cortical sites with responsibility for integrating linguistically relevant visual speech cues, speech and non-speech face gestures were presented in natural video and point-light displays during fMRI scanning at 3.0T. Participants with normal hearing viewed the stimuli and also viewed localizers for the fusiform face area (FFA), the lateral occipital complex (LOC), and the visual motion (V5/MT) regions of interest (ROIs). The FFA, the LOC, and V5/MT were significantly less activated for speech relative to non-speech and control stimuli. Distinct activation of the posterior superior temporal sulcus and the adjacent middle temporal gyrus to speech, independent of media, was obtained in group analyses. Individual analyses showed that speech and non-speech stimuli were associated with adjacent but different activations, with the speech activations more anterior. We suggest that the speech activation area is the temporal visual speech area (TVSA), and that it can be localized with the combination of stimuli used in this study. PMID:20853377
NASA Technical Reports Server (NTRS)
Clark, B.; Stewart, J. D.
1974-01-01
This experiment was concerned with the effects of rotary acceleration on choice reaction time (RTc) to the motion of a luminous line on a cathode-ray tube. Specifically, it compared the RTc to rotary acceleration alone, visual acceleration alone, and simultaneous, double stimulation by both rotary and visual acceleration. Thirteen airline pilots were rotated about an earth-vertical axis in a precision rotation device while they observed a vertical line. The stimuli were 7 rotary and visual accelerations which were matched for rise time. The pilots responded as quickly as possible by displacing a vertical controller to the right or left. The results showed a decreasing RTc with increasing acceleration for all conditions, while the RTc to rotary motion alone was substantially longer than for all other conditions. The RTc to the double stimulation was significantly longer than that for visual acceleration alone.
Graewe, Britta; De Weerd, Peter; Farivar, Reza; Castelo-Branco, Miguel
2012-01-01
Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200–250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components. PMID:22363479
Receptive fields for smooth pursuit eye movements and motion perception.
Debono, Kurt; Schütz, Alexander C; Spering, Miriam; Gegenfurtner, Karl R
2010-12-01
Humans use smooth pursuit eye movements to track moving objects of interest. In order to track an object accurately, motion signals from the target have to be integrated and segmented from motion signals in the visual context. Most studies on pursuit eye movements used small visual targets against a featureless background, disregarding the requirements of our natural visual environment. Here, we tested the ability of the pursuit and the perceptual system to integrate motion signals across larger areas of the visual field. Stimuli were random-dot kinematograms containing a horizontal motion signal, which was perturbed by a spatially localized, peripheral motion signal. Perturbations appeared in a gaze-contingent coordinate system and had a different direction than the main motion including a vertical component. We measured pursuit and perceptual direction discrimination decisions and found that both steady-state pursuit and perception were influenced most by perturbation angles close to that of the main motion signal and only in regions close to the center of gaze. The narrow direction bandwidth (26 angular degrees full width at half height) and small spatial extent (8 degrees of visual angle standard deviation) correspond closely to tuning parameters of neurons in the middle temporal area (MT). Copyright © 2010 Elsevier Ltd. All rights reserved.
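The tuning parameters reported in this entry (a direction bandwidth of 26 angular degrees full width at half height and a spatial extent of 8 degrees standard deviation) can be read as a toy Gaussian weighting function for how strongly a peripheral perturbation should influence pursuit. This is an illustrative reading of the reported numbers only, not the authors' model; `perturbation_weight` is a hypothetical helper.

```python
import math

def perturbation_weight(direction_offset_deg, eccentricity_deg):
    """Illustrative weight of a motion perturbation on steady-state pursuit.

    Direction tuning: Gaussian with 26 deg full width at half height
    (sigma = FWHH / (2 * sqrt(2 * ln 2))). Spatial tuning: Gaussian with
    8 deg of visual angle standard deviation, centred on the gaze.
    """
    sigma_dir = 26.0 / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    sigma_space = 8.0
    w_dir = math.exp(-direction_offset_deg ** 2 / (2.0 * sigma_dir ** 2))
    w_space = math.exp(-eccentricity_deg ** 2 / (2.0 * sigma_space ** 2))
    return w_dir * w_space

# At the half-height offset of 13 deg (half of the 26 deg FWHH), the
# direction weight is exactly 0.5 at zero eccentricity.
print(round(perturbation_weight(13.0, 0.0), 3))  # 0.5
```

The narrowness of both Gaussians is the abstract's main point: only perturbations close in direction and close to the centre of gaze move either pursuit or perception.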
NASA Astrophysics Data System (ADS)
Lu, Zhong-Lin; Sperling, George
2002-10-01
Two theories are considered to account for the perception of motion of depth-defined objects in random-dot stereograms (stereomotion). In the Lu-Sperling three-motion-systems theory [J. Opt. Soc. Am. A 18, 2331 (2001)], stereomotion is perceived by the third-order motion system, which detects the motion of areas defined as figure (versus ground) in a salience map. Alternatively, in his comment [J. Opt. Soc. Am. A 19, 2142 (2002)], Patterson proposes a low-level motion-energy system dedicated to stereo depth. The critical difference between these theories is the preprocessing (figure-ground based on depth and other cues versus simply stereo depth) rather than the motion-detection algorithm itself (because the motion-extraction algorithm for third-order motion is undetermined). Furthermore, the ability of observers to perceive motion in alternating-feature displays in which stereo depth alternates with other features such as texture orientation indicates that the third-order motion system can perceive stereomotion. This reduces the stereomotion question to: "Is it third-order alone, or third-order plus dedicated depth-motion processing?" Two new experiments intended to support the dedicated depth-motion processing theory are shown here to be perfectly accounted for by third-order motion, as are many older experiments that have previously been shown to be consistent with third-order motion. Cyclopean and rivalry images are shown to be a likely confound in stereomotion studies, rivalry motion being as strong as stereomotion. The phase dependence of superimposed same-direction stereomotion stimuli, rivalry stimuli, and isoluminant color stimuli indicates that these stimuli are processed in the same (third-order) motion system. The phase-dependence paradigm [Lu and Sperling, Vision Res. 35, 2697 (1995)] ultimately can resolve the question of which types of signals share a single motion detector.
All the evidence accumulated so far is consistent with the three-motion-systems theory. © 2002 Optical Society of America
Stekelenburg, Jeroen J; Vroomen, Jean
2012-01-01
In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV - V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that this N1 suppression was greater for the spatially congruent stimuli. A very early audiovisual interaction was also found at 40-60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.
NASA Astrophysics Data System (ADS)
Hayashi, Yoshikatsu; Tamura, Yurie; Sase, Kazuya; Sugawara, Ken; Sawada, Yasuji
A prediction mechanism is necessary in human visuomotor control to compensate for the delay of the sensory-motor system. In a previous study, "proactive control" was discussed as one example of a predictive function of human beings, in which the motion of the hand preceded the virtual moving target in visual tracking experiments. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently-visual tracking experiment in which a circular orbit was segmented into target-visible regions and target-invisible regions. The main results were as follows. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm in the brain obtained from environmental stimuli was shortened by more than 10%. This shortening of the period of the rhythm in the brain accelerates the hand motion as soon as the visual information is cut off, and causes the hand motion to precede the target motion. Although the precedence of the hand in the blind region is reset by environmental information when the target enters the visible region, the hand motion precedes the target on average when the predictive mechanism dominates the error-corrective mechanism.
Visual Categorization of Natural Movies by Rats
Vinken, Kasper; Vermaercke, Ben; Op de Beeck, Hans P
2014-01-01
Visual categorization of complex, natural stimuli has been studied for some time in human and nonhuman primates. Recent interest in the rodent as a model for visual perception, including higher-level functional specialization, leads to the question of how rodents would perform on a categorization task using natural stimuli. To answer this question, rats were trained in a two-alternative forced choice task to discriminate movies containing rats from movies containing other objects and from scrambled movies (ordinate-level categorization). Subsequently, transfer to novel, previously unseen stimuli was tested, followed by a series of control probes. The results show that the animals are capable of acquiring a decision rule by abstracting common features from natural movies to generalize categorization to new stimuli. Control probes demonstrate that they did not use single low-level features, such as motion energy or (local) luminance. Significant generalization was even present with stationary snapshots from untrained movies. The variability within and between training and test stimuli, the complexity of natural movies, and the control experiments and analyses all suggest that a more high-level rule based on more complex stimulus features than local luminance-based cues was used to classify the novel stimuli. In conclusion, natural stimuli can be used to probe ordinate-level categorization in rats. PMID:25100598
Weinstein, Joel M; Gilmore, Rick O; Shaikh, Sumera M; Kunselman, Allen R; Trescher, William V; Tashima, Lauren M; Boltz, Marianne E; McAuliffe, Matthew B; Cheung, Albert; Fesi, Jeremy D
2012-07-01
We sought to characterize visual motion processing in children with cerebral visual impairment (CVI) due to periventricular white matter damage caused by either hydrocephalus (eight individuals) or periventricular leukomalacia (PVL) associated with prematurity (11 individuals). Using steady-state visually evoked potentials (ssVEP), we measured cortical activity related to motion processing for two distinct types of visual stimuli: 'local' motion patterns thought to activate mainly primary visual cortex (V1), and 'global' or coherent patterns thought to activate higher cortical visual association areas (V3, V5, etc.). We studied three groups of children: (1) 19 children with CVI (mean age 9y 6mo [SD 3y 8mo]; 9 male; 10 female); (2) 40 neurologically and visually normal comparison children (mean age 9y 6mo [SD 3y 1mo]; 18 male; 22 female); and (3) because strabismus and amblyopia are common in children with CVI, a group of 41 children without neurological problems who had visual deficits due to amblyopia and/or strabismus (mean age 7y 8mo [SD 2y 8mo]; 28 male; 13 female). We found that the processing of global as opposed to local motion was preferentially impaired in individuals with CVI, especially for slower target velocities (p=0.028). Motion processing is impaired in children with CVI. ssVEP may provide useful and objective information about the development of higher visual function in children at risk for CVI. © The Authors. Journal compilation © Mac Keith Press 2011.
The influence of motion quality on responses towards video playback stimuli.
Ware, Emma; Saunders, Daniel R; Troje, Nikolaus F
2015-05-11
Visual motion, a critical cue in communication, can be manipulated and studied using video playback methods. A primary concern for the video playback researcher is the degree to which objects presented on video appear natural to the non-human subject. Here we argue that the quality of motion cues on video, as determined by the video's image presentation rate (IPR), is of particular importance in determining a subject's social response behaviour. We present an experiment testing the effect of variations in IPR on pigeon (Columba livia) response behaviour towards video images of courting opposite-sex partners. Male and female pigeons were presented with three video playback stimuli, each containing a different social partner. Each stimulus was then modified to appear at one of three IPRs: 15, 30 or 60 progressive (p) frames per second. The results showed that courtship behaviour became significantly longer in duration as IPR increased. This finding implies that IPR significantly affects the perceived quality of motion cues impacting social behaviour. In males we found that the duration of courtship also depended on the social partner viewed and that this effect interacted with the effects of IPR on behaviour. Specifically, the effect of social partner reached statistical significance only when the stimuli were displayed at 60 p, demonstrating the potential for erroneous results when insufficient IPRs are used. In addition to demonstrating the importance of IPR in video playback experiments, these findings help to highlight and describe the role of visual motion processing in communication behaviour. © 2015. Published by The Company of Biologists Ltd.
Visual event-related potentials to biological motion stimuli in autism spectrum disorders
Bletsch, Anke; Krick, Christoph; Siniatchkin, Michael; Jarczok, Tomasz A.; Freitag, Christine M.; Bender, Stephan
2014-01-01
Atypical visual processing of biological motion contributes to social impairments in autism spectrum disorders (ASD). However, the exact temporal sequence of deficits in cortical biological motion processing in ASD has not been studied to date. We used 64-channel electroencephalography to study event-related potentials associated with human motion perception in 17 children and adolescents with ASD and 21 typical controls. A spatio-temporal source analysis was performed to assess the brain structures involved in these processes. We expected altered activity already during early stimulus processing and reduced activity during subsequent biological motion-specific processes in ASD. In response to both random and biological motion, the P100 amplitude was decreased, suggesting unspecific deficits in visual processing, and the occipito-temporal N200 showed atypical lateralization in ASD, suggesting altered hemispheric specialization. A slow positive deflection after 400 ms, reflecting top-down processes, and human motion-specific dipole activation differed slightly between groups, with reduced and more diffuse activation in the ASD group. The latter could be an indicator of a disrupted neuronal network for biological motion processing in ASD. Furthermore, early visual processing (P100) seems to be correlated with biological motion-specific activation. This emphasizes the relevance of early sensory processing for higher-order processing deficits in ASD. PMID:23887808
Spatiotemporal Filter for Visual Motion Integration from Pursuit Eye Movements in Humans and Monkeys
Liu, Bing
2017-01-01
Despite the enduring interest in motion integration, a direct measure of the space–time filter that the brain imposes on a visual scene has been elusive. This is perhaps because of the challenge of estimating a 3D function from perceptual reports in psychophysical tasks. We take a different approach. We exploit the close connection between visual motion estimates and smooth pursuit eye movements to measure stimulus–response correlations across space and time, computing the linear space–time filter for global motion direction in humans and monkeys. Although derived from eye movements, we find that the filter predicts perceptual motion estimates quite well. To distinguish visual from motor contributions to the temporal duration of the pursuit motion filter, we recorded single-unit responses in the monkey middle temporal cortical area (MT). We find that pursuit response delays are consistent with the distribution of cortical neuron latencies and that temporal motion integration for pursuit is consistent with a short integration MT subpopulation. Remarkably, the visual system appears to preferentially weight motion signals across a narrow range of foveal eccentricities rather than uniformly over the whole visual field, with a transiently enhanced contribution from locations along the direction of motion. We find that the visual system is most sensitive to motion falling at approximately one-third the radius of the stimulus aperture. Hypothesizing that the visual drive for pursuit is related to the filtered motion energy in a motion stimulus, we compare measured and predicted eye acceleration across several other target forms. SIGNIFICANCE STATEMENT A compact model of the spatial and temporal processing underlying global motion perception has been elusive. We used visually driven smooth eye movements to find the 3D space–time function that best predicts both eye movements and perception of translating dot patterns. 
We found that the visual system does not appear to use all available motion signals uniformly, but rather weights motion preferentially in a narrow band at approximately one-third the radius of the stimulus. Although not universal, the filter predicts responses to other types of stimuli, demonstrating a remarkable degree of generalization that may lead to a deeper understanding of visual motion processing. PMID:28003348
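The reverse-correlation logic behind such a filter estimate can be sketched numerically. The snippet below is an illustrative reconstruction, not the study's actual analysis: a hypothetical biphasic temporal filter drives a simulated eye-velocity response to white-noise motion perturbations, and the stimulus-response cross-correlation recovers the filter up to a scale factor.

```python
import numpy as np

rng = np.random.default_rng(0)
n, taps = 20000, 40

# White-noise "motion perturbation" stimulus (hypothetical units: deg/s).
s = rng.standard_normal(n)

# Hypothetical ground-truth temporal filter: a biphasic impulse response.
t = np.arange(taps)
h_true = np.exp(-t / 8.0) * np.sin(2 * np.pi * t / 25.0)

# Simulated eye-velocity response = filtered stimulus + measurement noise.
r = np.convolve(s, h_true)[:n] + 0.2 * rng.standard_normal(n)

# For a white-noise stimulus, the stimulus-response cross-correlation at
# each lag recovers the corresponding filter tap (up to stimulus variance).
h_est = np.array([np.dot(s[: n - k], r[k:]) / (n - k) for k in range(taps)])

corr = np.corrcoef(h_true, h_est)[0, 1]
```

For white-noise input this estimator is unbiased, which is why stimulus-response correlations can stand in for the full linear space-time filter; the abstract's 3D estimate extends the same idea across spatial locations.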
A Role for Mouse Primary Visual Cortex in Motion Perception.
Marques, Tiago; Summers, Mathew T; Fioreze, Gabriela; Fridman, Marina; Dias, Rodrigo F; Feller, Marla B; Petreanu, Leopoldo
2018-06-04
Visual motion is an ethologically important stimulus throughout the animal kingdom. In primates, motion perception relies on specific higher-order cortical regions. Although mouse primary visual cortex (V1) and higher-order visual areas show direction-selective (DS) responses, their role in motion perception remains unknown. Here, we tested whether V1 is involved in motion perception in mice. We developed a head-fixed discrimination task in which mice must report their perceived direction of motion from random dot kinematograms (RDKs). After training, mice made around 90% correct choices for stimuli with high coherence and performed significantly above chance for 16% coherent RDKs. Accuracy increased with both stimulus duration and visual field coverage of the stimulus, suggesting that mice in this task integrate motion information in time and space. Retinal recordings showed that thalamically projecting On-Off DS ganglion cells display DS responses when stimulated with RDKs. Two-photon calcium imaging revealed that neurons in layer (L) 2/3 of V1 display strong DS tuning in response to this stimulus. Thus, RDKs engage motion-sensitive retinal circuits as well as downstream visual cortical areas. Contralateral V1 activity played a key role in this motion direction discrimination task because its reversible inactivation with muscimol led to a significant reduction in performance. Neurometric-psychometric comparisons showed that an ideal observer could solve the task with the information encoded in DS L2/3 neurons. Motion discrimination of RDKs presents a powerful behavioral tool for dissecting the role of retino-forebrain circuits in motion processing. Copyright © 2018 Elsevier Ltd. All rights reserved.
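Random dot kinematograms of the kind used in this task are straightforward to generate. The sketch below is a minimal illustration, assuming a unit-square aperture with wrap-around and per-frame random reassignment of signal dots; it is not the authors' stimulus code, and the 16% coherence value simply mirrors the threshold condition mentioned in the abstract.

```python
import numpy as np

def rdk_step(xy, coherence, direction_deg, speed, rng, size=1.0):
    """Advance one frame of a random dot kinematogram (illustrative sketch).

    A `coherence` fraction of dots steps in the signal direction; the
    remaining dots step in independent random directions. Dots wrap at
    the aperture edge.
    """
    n = len(xy)
    signal = rng.random(n) < coherence
    theta = np.full(n, np.deg2rad(direction_deg))
    theta[~signal] = rng.uniform(0.0, 2.0 * np.pi, (~signal).sum())
    step = speed * np.c_[np.cos(theta), np.sin(theta)]
    return (xy + step) % size

rng = np.random.default_rng(1)
dots = rng.random((500, 2))
moved = rdk_step(dots, coherence=0.16, direction_deg=0.0, speed=0.01, rng=rng)
```

At coherence 1.0 every dot translates with the signal; at 0.16 only the small coherent subset carries direction information, which is what makes spatial and temporal integration necessary for the discrimination.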
Detection of visual events along the apparent motion trace in patients with paranoid schizophrenia.
Sanders, Lia Lira Olivier; Muckli, Lars; de Millas, Walter; Lautenschlager, Marion; Heinz, Andreas; Kathmann, Norbert; Sterzer, Philipp
2012-07-30
Dysfunctional prediction in sensory processing has been suggested as a possible causal mechanism in the development of delusions in patients with schizophrenia. Previous studies in healthy subjects have shown that while the perception of apparent motion can mask visual events along the illusory motion trace, such motion masking is reduced when events are spatio-temporally compatible with the illusion, and, therefore, predictable. Here we tested the hypothesis that this specific detection advantage for predictable target stimuli on the apparent motion trace is reduced in patients with paranoid schizophrenia. Our data show that, although target detection along the illusory motion trace is generally impaired, both patients and healthy control participants detect predictable targets more often than unpredictable targets. Patients had a stronger motion masking effect when compared to controls. However, patients showed the same advantage in the detection of predictable targets as healthy control subjects. Our findings reveal stronger motion masking but intact prediction of visual events along the apparent motion trace in patients with paranoid schizophrenia and suggest that the sensory prediction mechanism underlying apparent motion is not impaired in paranoid schizophrenia. Copyright © 2012. Published by Elsevier Ireland Ltd.
The responsiveness of biological motion processing areas to selective attention towards goals.
Herrington, John; Nymberg, Charlotte; Faja, Susan; Price, Elinora; Schultz, Robert
2012-10-15
A growing literature indicates that visual cortex areas viewed as primarily responsive to exogenous stimuli are susceptible to top-down modulation by selective attention. The present study examines whether brain areas involved in biological motion perception are among these areas-particularly with respect to selective attention towards human movement goals. Fifteen participants completed a point-light biological motion study following a two-by-two factorial design, with one factor representing an exogenous manipulation of human movement goals (goal-directed versus random movement), and the other an endogenous manipulation (a goal identification task versus an ancillary color-change task). Both manipulations yielded increased activation in the human homologue of motion-sensitive area MT+ (hMT+) as well as the extrastriate body area (EBA). The endogenous manipulation was associated with increased right posterior superior temporal sulcus (STS) activation, whereas the exogenous manipulation was associated with increased activation in left posterior STS. Selective attention towards goals activated a portion of left hMT+/EBA only during the perception of purposeful movement-consistent with emerging theories associating this area with the matching of visual motion input to known goal-directed actions. The overall pattern of results indicates that attention towards the goals of human movement activates biological motion areas. Ultimately, selective attention may explain why some studies examining biological motion show activation in hMT+ and EBA, even when using control stimuli with comparable motion properties. Copyright © 2012 Elsevier Inc. All rights reserved.
Dynamic Integration of Task-Relevant Visual Features in Posterior Parietal Cortex
Freedman, David J.
2014-01-01
The primate visual system consists of multiple hierarchically organized cortical areas, each specialized for processing distinct aspects of the visual scene. For example, color and form are encoded in ventral pathway areas such as V4 and inferior temporal cortex, while motion is preferentially processed in dorsal pathway areas such as the middle temporal area. Such representations often need to be integrated perceptually to solve tasks which depend on multiple features. We tested the hypothesis that the lateral intraparietal area (LIP) integrates disparate task-relevant visual features by recording from LIP neurons in monkeys trained to identify target stimuli composed of conjunctions of color and motion features. We show that LIP neurons exhibit integrative representations of both color and motion features when they are task relevant, and task-dependent shifts of both direction and color tuning. This suggests that LIP plays a role in flexibly integrating task-relevant sensory signals. PMID:25199703
Arcaro, Michael J; Thaler, Lore; Quinlan, Derek J; Monaco, Simona; Khan, Sarah; Valyear, Kenneth F; Goebel, Rainer; Dutton, Gordon N; Goodale, Melvyn A; Kastner, Sabine; Culham, Jody C
2018-05-09
Patients with injury to early visual cortex or its inputs can display the Riddoch phenomenon: preserved awareness for moving but not stationary stimuli. We provide a detailed case report of a patient with the Riddoch phenomenon, MC. MC has extensive bilateral lesions to occipitotemporal cortex that include most early visual cortex and complete blindness in visual field perimetry testing with static targets. Nevertheless, she shows a remarkably robust preserved ability to perceive motion, enabling her to navigate through cluttered environments and perform actions like catching moving balls. Comparisons of MC's structural magnetic resonance imaging (MRI) data to a probabilistic atlas based on controls reveals that MC's lesions encompass the posterior, lateral, and ventral early visual cortex bilaterally (V1, V2, V3A/B, LO1/2, TO1/2, hV4 and VO1 in both hemispheres) as well as more extensive damage to right parietal (inferior parietal lobule) and left ventral occipitotemporal cortex (VO1, PHC1/2). She shows some sparing of anterior occipital cortex, which may account for her ability to see moving targets beyond ~15 degrees eccentricity during perimetry. Most strikingly, functional and structural MRI revealed robust and reliable spared functionality of the middle temporal motion complex (MT+) bilaterally. Moreover, consistent with her preserved ability to discriminate motion direction in psychophysical testing, MC also shows direction-selective adaptation in MT+. A variety of tests did not enable us to discern whether input to MT+ was driven by her spared anterior occipital cortex or subcortical inputs. Nevertheless, MC shows rich motion perception despite profoundly impaired static and form vision, combined with clear preservation of activation in MT+, thus supporting the role of MT+ in the Riddoch phenomenon. Copyright © 2018 Elsevier Ltd. All rights reserved.
Perception of biological motion from size-invariant body representations.
Lappe, Markus; Wittinghofer, Karin; de Lussanet, Marc H E
2015-01-01
The visual recognition of action is one of the socially most important and computationally demanding capacities of the human visual system. It combines visual shape recognition with complex non-rigid motion perception. Action presented as a point-light animation is a striking visual experience for anyone who sees it for the first time. Information about the shape and posture of the human body is sparse in point-light animations, but it is essential for action recognition. In the posturo-temporal filter model of biological motion perception posture information is picked up by visual neurons tuned to the form of the human body before body motion is calculated. We tested whether point-light stimuli are processed through posture recognition of the human body form by using a typical feature of form recognition, namely size invariance. We constructed a point-light stimulus that can only be perceived through a size-invariant mechanism. This stimulus changes rapidly in size from one image to the next. It thus disrupts continuity of early visuo-spatial properties but maintains continuity of the body posture representation. Despite this massive manipulation at the visuo-spatial level, size-changing point-light figures are spontaneously recognized by naive observers, and support discrimination of human body motion.
Schnabel, Ulf H; Hegenloh, Michael; Müller, Hermann J; Zehetleitner, Michael
2013-09-01
Electromagnetic motion-tracking systems have the advantage of capturing the spatiotemporal kinematics of movements independently of the visibility of the sensors. However, they are limited in that they cannot be used in the proximity of electromagnetic field sources, such as computer monitors. This prevents exploiting the tracking potential of the sensor system together with that of computer-generated visual stimulation. Here we present a solution for presenting computer-generated visual stimulation that does not distort the electromagnetic field required for precise motion tracking, by means of a back-projection medium. In one experiment, we verify that cathode ray tube monitors, as well as thin-film-transistor monitors, distort electromagnetic sensor signals even at a distance of 18 cm. Our back-projection medium, by contrast, leads to no distortion of the motion-tracking signals even when the sensor is touching the medium. This novel solution permits combining the advantages of electromagnetic motion tracking with computer-generated visual stimulation.
Representation of vestibular and visual cues to self-motion in ventral intraparietal (VIP) cortex
Chen, Aihua; Deangelis, Gregory C.; Angelaki, Dora E.
2011-01-01
Convergence of vestibular and visual motion information is important for self-motion perception. One cortical area that combines vestibular and optic flow signals is the ventral intraparietal area (VIP). We characterized unisensory and multisensory responses of macaque VIP neurons to translations and rotations in three dimensions. Approximately half of VIP cells show significant directional selectivity in response to optic flow, half show tuning to vestibular stimuli, and one-third show multisensory responses. Visual and vestibular direction preferences of multisensory VIP neurons could be congruent or opposite. When visual and vestibular stimuli were combined, VIP responses could be dominated by either input, unlike medial superior temporal area (MSTd) where optic flow tuning typically dominates or the visual posterior sylvian area (VPS) where vestibular tuning dominates. Optic flow selectivity in VIP was weaker than in MSTd but stronger than in VPS. In contrast, vestibular tuning for translation was strongest in VPS, intermediate in VIP, and weakest in MSTd. To characterize response dynamics, direction-time data were fit with a spatiotemporal model in which temporal responses were modeled as weighted sums of velocity, acceleration, and position components. Vestibular responses in VIP reflected balanced contributions of velocity and acceleration, whereas visual responses were dominated by velocity. Timing of vestibular responses in VIP was significantly faster than in MSTd, whereas timing of optic flow responses did not differ significantly among areas. These findings suggest that VIP may be proximal to MSTd in terms of vestibular processing but hierarchically similar to MSTd in terms of optic flow processing. PMID:21849564
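The spatiotemporal model described here treats a neuron's temporal response as a weighted sum of velocity, acceleration, and position components. A minimal sketch of that decomposition, using a synthetic Gaussian velocity profile and illustrative weights rather than values fitted in the study:

```python
import numpy as np

# Hypothetical Gaussian velocity profile of a translation stimulus.
t = np.linspace(0.0, 2.0, 400)
vel = np.exp(-((t - 1.0) ** 2) / (2.0 * 0.25 ** 2))
acc = np.gradient(vel, t)                 # acceleration = d(velocity)/dt
pos = np.cumsum(vel) * (t[1] - t[0])      # position = integral of velocity

# Synthetic "neural response": mostly velocity plus some acceleration,
# with noise. The 0.6/0.4 weights are illustrative assumptions only.
rng = np.random.default_rng(2)
resp = 0.6 * vel + 0.4 * acc + 0.05 * rng.standard_normal(t.size)

# Recover the component weights by linear least squares.
X = np.column_stack([vel, acc, pos])
w, *_ = np.linalg.lstsq(X, resp, rcond=None)
```

Fitting weights of this kind is what allows statements such as "vestibular responses reflected balanced contributions of velocity and acceleration, whereas visual responses were dominated by velocity" to be made quantitatively.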
A Bayesian model of stereopsis depth and motion direction discrimination.
Read, J C A
2002-02-01
The extraction of stereoscopic depth from retinal disparity, and motion direction from two-frame kinematograms, requires the solution of a correspondence problem. In previous psychophysical work [Read and Eagle (2000) Vision Res 40: 3345-3358], we compared the performance of the human stereopsis and motion systems with correlated and anti-correlated stimuli. We found that, although the two systems performed similarly for narrow-band stimuli, broadband anti-correlated kinematograms produced a strong perception of reversed motion, whereas the stereograms appeared merely rivalrous. I now model these psychophysical data with a computational model of the correspondence problem based on the known properties of visual cortical cells. Noisy retinal images are filtered through a set of Fourier channels tuned to different spatial frequencies and orientations. Within each channel, a Bayesian analysis incorporating a prior preference for small disparities is used to assess the probability of each possible match. Finally, information from the different channels is combined to arrive at a judgement of stimulus disparity. Each model system--stereopsis and motion--has two free parameters: the amount of noise they are subject to, and the strength of their preference for small disparities. By adjusting these parameters independently for each system, qualitative matches are produced to psychophysical data, for both correlated and anti-correlated stimuli, across a range of spatial frequency and orientation bandwidths. The motion model is found to require much higher noise levels and a weaker preference for small disparities. This makes the motion model more tolerant of poor-quality reverse-direction false matches encountered with anti-correlated stimuli, matching the strong perception of reversed motion that humans experience with these stimuli. 
In contrast, the lower noise level and tighter prior preference used with the stereopsis model means that it performs close to chance with anti-correlated stimuli, in accordance with human psychophysics. Thus, the key features of the experimental data can be reproduced assuming that the motion system experiences more effective noise than the stereoscopy system and imposes a less stringent preference for small disparities.
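The core Bayesian step, weighting candidate matches by a prior preference for small disparities, can be sketched as follows. The disparity grid, likelihood values, and prior widths below are illustrative stand-ins, not the paper's fitted parameters.

```python
import numpy as np

def posterior_disparity(match_likelihood, disparities, prior_sigma):
    """Combine match likelihoods with a zero-centered Gaussian prior over
    disparity (a sketch of the model's preference for small disparities)."""
    prior = np.exp(-disparities ** 2 / (2.0 * prior_sigma ** 2))
    post = match_likelihood * prior
    return post / post.sum()

disparities = np.arange(-8, 9, dtype=float)
# Two equally good candidate matches, at -6 and +1: e.g., a distant false
# match and a nearby true match from a broadband anti-correlated stimulus.
like = np.where((disparities == -6) | (disparities == 1), 1.0, 0.05)

tight = posterior_disparity(like, disparities, prior_sigma=2.0)   # stereo-like
loose = posterior_disparity(like, disparities, prior_sigma=20.0)  # motion-like
```

With the tight prior the nearby match dominates the posterior, while under the loose prior the distant false match survives, mirroring the paper's account of why anti-correlated kinematograms yield reversed motion while the corresponding stereograms appear merely rivalrous.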
Binocular Perception of 2D Lateral Motion and Guidance of Coordinated Motor Behavior.
Fath, Aaron J; Snapp-Childs, Winona; Kountouriotis, Georgios K; Bingham, Geoffrey P
2016-04-01
Zannoli, Cass, Alais, and Mamassian (2012) found greater audiovisual lag between a tone and disparity-defined stimuli moving laterally (90-170 ms) than for disparity-defined stimuli moving in depth or luminance-defined stimuli moving laterally or in depth (50-60 ms). We tested if this increased lag presents an impediment to visually guided coordination with laterally moving objects. Participants used a joystick to move a virtual object in several constant relative phases with a laterally oscillating stimulus. Both the participant-controlled object and the target object were presented using a disparity-defined display that yielded information through changes in disparity over time (CDOT) or using a luminance-defined display that additionally provided information through monocular motion and interocular velocity differences (IOVD). Performance was comparable for both disparity-defined and luminance-defined displays in all relative phases. This suggests that, despite lag, perception of lateral motion through CDOT is generally sufficient to guide coordinated motor behavior.
How visual timing and form information affect speech and non-speech processing.
Kim, Jeesun; Davis, Chris
2014-10-01
Auditory speech processing is facilitated when the talker's face/head movements are seen. This effect is typically explained in terms of visual speech providing form and/or timing information. We determined the effect of both types of information on a speech/non-speech task (non-speech stimuli were spectrally rotated speech). All stimuli were presented paired with the talker's static or moving face. Two types of moving face stimuli were used: full-face versions (both spoken form and timing information available) and modified face versions (only timing information provided by peri-oral motion available). The results showed that the peri-oral timing information facilitated response time for speech and non-speech stimuli compared to a static face. An additional facilitatory effect was found for full-face versions compared to the timing condition; this effect only occurred for speech stimuli. We propose the timing effect was due to cross-modal phase resetting; the form effect to cross-modal priming. Copyright © 2014 Elsevier Inc. All rights reserved.
Yang, Yun; Liu, Sheng; Chowdhury, Syed A.; DeAngelis, Gregory C.; Angelaki, Dora E.
2012-01-01
Many neurons in the dorsal medial superior temporal (MSTd) and ventral intraparietal (VIP) areas of the macaque brain are multisensory, responding to both optic flow and vestibular cues to self-motion. The heading tuning of visual and vestibular responses can be either congruent or opposite, but only congruent cells have been implicated in cue integration for heading perception. Because of the geometric properties of motion parallax, however, both congruent and opposite cells could be involved in coding self-motion when observers fixate a world-fixed target during translation, if congruent cells prefer near disparities and opposite cells prefer far disparities. We characterized the binocular disparity selectivity and heading tuning of MSTd and VIP cells using random-dot stimuli. Most (70%) MSTd neurons were disparity-selective with monotonic tuning, and there was no consistent relationship between depth preference and congruency of visual and vestibular heading tuning. One-third of disparity-selective MSTd cells reversed their depth preference for opposite directions of motion (direction-dependent disparity tuning, DDD), but most of these cells were unisensory with no tuning for vestibular stimuli. Inconsistent with previous reports, the direction preferences of most DDD neurons do not reverse with disparity. By comparison to MSTd, VIP contains fewer disparity-selective neurons (41%) and very few DDD cells. On average, VIP neurons also preferred higher speeds and nearer disparities than MSTd cells. Our findings are inconsistent with the hypothesis that visual/vestibular congruency is linked to depth preference, and also suggest that DDD cells are not involved in multisensory integration for heading perception. PMID:22159105
Defining the computational structure of the motion detector in Drosophila
Clark, Damon A.; Bursztyn, Limor; Horowitz, Mark; Schnitzer, Mark J.; Clandinin, Thomas R.
2011-01-01
Many animals rely on visual motion detection for survival. Motion information is extracted from spatiotemporal intensity patterns on the retina, a paradigmatic neural computation. A phenomenological model, the Hassenstein-Reichardt Correlator (HRC), relates visual inputs to neural and behavioral responses to motion, but the circuits that implement this computation remain unknown. Using cell-type specific genetic silencing, minimal motion stimuli, and in vivo calcium imaging, we examine two critical HRC inputs. These two pathways respond preferentially to light and dark moving edges. We demonstrate that these pathways perform overlapping but complementary subsets of the computations underlying the HRC. A numerical model implementing differential weighting of these operations displays the observed edge preferences. Intriguingly, these pathways are distinguished by their sensitivities to a stimulus correlation that corresponds to an illusory percept, “reverse phi”, that affects many species. Thus, this computational architecture may be widely used to achieve edge selectivity in motion detection. PMID:21689602
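The HRC itself has a compact textbook form: each half-detector multiplies a delayed copy of one input with the undelayed neighboring input, and the two halves are subtracted so the sign of the output encodes direction. A minimal sketch follows (the pathway-specific differential weighting examined in the paper is not modeled here):

```python
import numpy as np

def hrc_response(left, right, delay):
    """Opponent Hassenstein-Reichardt correlator for signals sampled at two
    adjacent retinal locations; positive output = rightward motion."""
    l_d = np.roll(left, delay)    # delayed copy of the left input
    r_d = np.roll(right, delay)   # delayed copy of the right input
    l_d[:delay] = 0.0             # discard wrap-around samples
    r_d[:delay] = 0.0
    # Correlate each delayed input with the opposite undelayed input,
    # then subtract the two half-detectors.
    return np.sum(l_d * right - r_d * left)

# A drifting sinusoidal luminance pattern sampled at two nearby points:
# for rightward motion the right-hand point lags the left in phase.
t = np.linspace(0.0, 10 * 2 * np.pi, 2000)
left = np.sin(t)
right_moving_right = np.sin(t - 0.5)  # right input phase-lagged: rightward
right_moving_left = np.sin(t + 0.5)   # right input phase-advanced: leftward

r_right = hrc_response(left, right_moving_right, delay=10)
r_left = hrc_response(left, right_moving_left, delay=10)
```

The opponent subtraction is what makes the model direction-selective, and it is also why reverse-phi stimuli, which invert contrast between frames, flip the sign of the correlation and produce the illusory reversed percept.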
Eyes Matched to the Prize: The State of Matched Filters in Insect Visual Circuits.
Kohn, Jessica R; Heath, Sarah L; Behnia, Rudy
2018-01-01
Confronted with an ever-changing visual landscape, animals must be able to detect relevant stimuli and translate this information into behavioral output. A visual scene contains an abundance of information: to interpret the entirety of it would be uneconomical. To optimally perform this task, neural mechanisms exist to enhance the detection of important features of the sensory environment while simultaneously filtering out irrelevant information. This can be accomplished by using a circuit design that implements specific "matched filters" that are tuned to relevant stimuli. Following this rule, the well-characterized visual systems of insects have evolved to streamline feature extraction on both a structural and functional level. Here, we review examples of specialized visual microcircuits for vital behaviors across insect species, including feature detection, escape, and estimation of self-motion. Additionally, we discuss how these microcircuits are modulated to weigh relevant input with respect to different internal and behavioral states.
Decoding conjunctions of direction-of-motion and binocular disparity from human visual cortex.
Seymour, Kiley J; Clifford, Colin W G
2012-05-01
Motion and binocular disparity are two features in our environment that share a common correspondence problem. Decades of psychophysical research dedicated to understanding stereopsis suggest that these features interact early in human visual processing to disambiguate depth. Single-unit recordings in the monkey also provide evidence for the joint encoding of motion and disparity across much of the dorsal visual stream. Here, we used functional MRI and multivariate pattern analysis to examine where in the human brain conjunctions of motion and disparity are encoded. Subjects sequentially viewed two stimuli that could be distinguished only by their conjunctions of motion and disparity. Specifically, each stimulus contained the same feature information (leftward and rightward motion and crossed and uncrossed disparity) but differed exclusively in the way these features were paired. Our results revealed that a linear classifier could accurately decode which stimulus a subject was viewing based on voxel activation patterns throughout the dorsal visual areas and as early as V2. This decoding success was conditional on some voxels being individually sensitive to the unique conjunctions comprising each stimulus, thus a classifier could not rely on independent information about motion and binocular disparity to distinguish these conjunctions. This study expands on evidence that disparity and motion interact at many levels of human visual processing, particularly within the dorsal stream. It also lends support to the idea that stereopsis is subserved by early mechanisms also tuned to direction of motion.
Visual categorization of natural movies by rats.
Vinken, Kasper; Vermaercke, Ben; Op de Beeck, Hans P
2014-08-06
Visual categorization of complex, natural stimuli has been studied for some time in human and nonhuman primates. Recent interest in the rodent as a model for visual perception, including higher-level functional specialization, leads to the question of how rodents would perform on a categorization task using natural stimuli. To answer this question, rats were trained in a two-alternative forced choice task to discriminate movies containing rats from movies containing other objects and from scrambled movies (ordinate-level categorization). Subsequently, transfer to novel, previously unseen stimuli was tested, followed by a series of control probes. The results show that the animals are capable of acquiring a decision rule by abstracting common features from natural movies to generalize categorization to new stimuli. Control probes demonstrate that they did not use single low-level features, such as motion energy or (local) luminance. Significant generalization was even present with stationary snapshots from untrained movies. The variability within and between training and test stimuli, the complexity of natural movies, and the control experiments and analyses all suggest that a more high-level rule based on more complex stimulus features than local luminance-based cues was used to classify the novel stimuli. In conclusion, natural stimuli can be used to probe ordinate-level categorization in rats.
An Adaptive Neural Mechanism for Acoustic Motion Perception with Varying Sparsity
Shaikh, Danish; Manoonpong, Poramate
2017-01-01
Biological motion-sensitive neural circuits are quite adept in perceiving the relative motion of a relevant stimulus. Motion perception is a fundamental ability in neural sensory processing and crucial in target tracking tasks. Tracking a stimulus entails the ability to perceive its motion, i.e., extracting information about its direction and velocity. Here we focus on auditory motion perception of sound stimuli, which is poorly understood as compared to its visual counterpart. In earlier work we developed a bio-inspired neural learning mechanism for acoustic motion perception. The mechanism extracts directional information via a model of the peripheral auditory system of lizards. The mechanism uses only this directional information obtained via specific motor behaviour to learn the angular velocity of unoccluded sound stimuli in motion. In nature, however, the stimulus being tracked may be occluded by artefacts in the environment, such as an escaping prey momentarily disappearing behind a cover of trees. This article extends the earlier work by presenting a comparative investigation of auditory motion perception for unoccluded and occluded tonal sound stimuli with a frequency of 2.2 kHz in both simulation and practice. Three instances of each stimulus are employed, differing in their movement velocities: 0.5°/time step, 1.0°/time step, and 1.5°/time step. To validate the approach in practice, we implement the proposed neural mechanism on a wheeled mobile robot and evaluate its performance in auditory tracking.
Neural theory for the perception of causal actions.
Fleischer, Falk; Christensen, Andrea; Caggiano, Vittorio; Thier, Peter; Giese, Martin A
2012-07-01
The efficient prediction of the behavior of others requires the recognition of their actions and an understanding of their action goals. In humans, this process is fast and extremely robust, as demonstrated by classical experiments showing that human observers reliably judge causal relationships and attribute interactive social behavior to strongly simplified stimuli consisting of simple moving geometrical shapes. While psychophysical experiments have identified critical visual features that determine the perception of causality and agency from such stimuli, the underlying detailed neural mechanisms remain largely unclear, and it is an open question why humans developed this advanced visual capability at all. We created pairs of naturalistic and abstract stimuli of hand actions that were exactly matched in terms of their motion parameters. We show that varying critical stimulus parameters for both stimulus types leads to very similar modulations of the perception of causality. However, the additional form information about the hand shape and its relationship with the object supports more fine-grained distinctions for the naturalistic stimuli. Moreover, we show that a physiologically plausible model for the recognition of goal-directed hand actions reproduces the observed dependencies of causality perception on critical stimulus parameters. These results support the hypothesis that selectivity for abstract action stimuli might emerge from the same neural mechanisms that underlie the visual processing of natural goal-directed action stimuli. Furthermore, the model proposes specific detailed neural circuits underlying this visual function, which can be evaluated in future experiments.
Schindler, Andreas; Bartels, Andreas
2017-05-01
Superimposed on the visual feed-forward pathway, feedback connections convey higher level information to cortical areas lower in the hierarchy. A prominent framework for these connections is the theory of predictive coding, where high-level areas send stimulus interpretations to lower level areas that compare them with sensory input. Along these lines, a growing body of neuroimaging studies shows that predictable stimuli lead to reduced blood oxygen level-dependent (BOLD) responses compared with matched nonpredictable counterparts, especially in early visual cortex (EVC) including areas V1-V3. The sources of these modulatory feedback signals are largely unknown. Here, we re-examined the robust finding of relative BOLD suppression in EVC evident during processing of coherent compared with random motion. Using functional connectivity analysis, we show an optic flow-dependent increase of functional connectivity between BOLD suppressed EVC and a network of visual motion areas including MST, V3A, V6, the cingulate sulcus visual area (CSv), and precuneus (Pc). Connectivity decreased between EVC and 2 areas known to encode heading direction: entorhinal cortex (EC) and retrosplenial cortex (RSC). Our results provide the first evidence that BOLD suppression in EVC for predictable stimuli is indeed mediated by specific high-level areas, in accord with the theory of predictive coding.
Heinen, Klaartje; Feredoes, Eva; Weiskopf, Nikolaus; Ruff, Christian C; Driver, Jon
2014-11-01
Voluntary selective attention can prioritize different features in a visual scene. The frontal eye-fields (FEF) are one potential source of such feature-specific top-down signals, but causal evidence for influences on visual cortex (as was shown for "spatial" attention) has remained elusive. Here, we show that transcranial magnetic stimulation (TMS) applied to right FEF increased the blood oxygen level-dependent (BOLD) signals in visual areas processing "target feature" but not in "distracter feature"-processing regions. TMS increased BOLD signals in motion-responsive visual cortex (MT+) when motion was attended in a display of moving dots superimposed on face stimuli, but in the face-responsive fusiform area (FFA) when faces were attended. These TMS effects on BOLD signal in both regions were negatively related to performance (on the motion task), supporting the behavioral relevance of this pathway. Our findings provide new causal evidence for the human FEF in the control of nonspatial "feature"-based attention, mediated by dynamic influences on feature-specific visual cortex that vary with the currently attended property.
Experience affects the use of ego-motion signals during 3D shape perception.
Jain, Anshul; Backus, Benjamin T
2010-12-29
Experience has long-term effects on perceptual appearance (Q. Haijiang, J. A. Saunders, R. W. Stone, & B. T. Backus, 2006). We asked whether experience affects the appearance of structure-from-motion stimuli when the optic flow is caused by observer ego-motion. Optic flow is an ambiguous depth cue: a rotating object and its oppositely rotating, depth-inverted dual generate similar flow. However, the visual system exploits ego-motion signals to prefer the percept of an object that is stationary over one that rotates (M. Wexler, F. Panerai, I. Lamouret, & J. Droulez, 2001). We replicated this finding and asked whether this preference for stationarity, the "stationarity prior," is modulated by experience. During training, two groups of observers were exposed to objects with identical flow, but that were either stationary or moving as determined by other cues. The training caused identical test stimuli to be seen preferentially as stationary or moving by the two groups, respectively. We then asked whether different priors can exist independently at different locations in the visual field. Observers were trained to see objects either as stationary or as moving at two different locations. Observers' stationarity bias at the two respective locations was modulated in the directions consistent with training. Thus, the utilization of extraretinal ego-motion signals for disambiguating optic flow signals can be updated as the result of experience, consistent with the updating of a Bayesian prior for stationarity.
NASA Technical Reports Server (NTRS)
Carpenter-Smith, Theodore R.; Futamura, Robert G.; Parker, Donald E.
1995-01-01
The present study focused on the development of a procedure to assess perceived self-motion induced by visual surround motion (vection). Using an apparatus that permitted independent control of visual and inertial stimuli, prone observers were translated along their head x-axis (fore/aft). The observers' task was to report the direction of self-motion during passive forward and backward translations of their bodies coupled with exposure to various visual surround conditions. The proportion of 'forward' responses was used to calculate each observer's point of subjective equality (PSE) for each surround condition. The results showed that the moving visual stimulus produced a significant shift in the PSE when data from the moving surround condition were compared with the stationary surround and no-vision conditions. Further, the results indicated that vection increased monotonically with surround velocities between 4 and 40°/s. It was concluded that linear vection can be measured in terms of changes in the amplitude of whole-body inertial acceleration required to elicit equivalent numbers of 'forward' and 'backward' self-motion reports.
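The PSE computation described here can be sketched with a probit fit: the proportion of 'forward' responses at each inertial acceleration level is linearized through the inverse normal CDF, and the PSE is the level at which the fitted line predicts 50% 'forward' reports. The stimulus levels and proportions below are invented for illustration, not taken from the study.

```python
from statistics import NormalDist

# Hypothetical data: signed inertial acceleration amplitude (forward positive)
# versus the proportion of 'forward' self-motion reports at that level.
accel = [-0.4, -0.2, 0.0, 0.2, 0.4]
p_forward = [0.05, 0.20, 0.45, 0.80, 0.95]

# Probit transform: a cumulative-Gaussian psychometric function becomes a
# straight line in z-score coordinates.
nd = NormalDist()
z = [nd.inv_cdf(p) for p in p_forward]

# Ordinary least-squares line z = a * accel + b, fitted by hand.
n = len(accel)
mx = sum(accel) / n
mz = sum(z) / n
a = sum((x - mx) * (zi - mz) for x, zi in zip(accel, z)) / \
    sum((x - mx) ** 2 for x in accel)
b = mz - a * mx

# The PSE is where z = 0, i.e., p('forward') = 0.5.
pse = -b / a
print(f"PSE = {pse:+.3f}")
```

For these symmetric illustrative numbers the PSE lands near zero; a moving surround that induces vection would shift the fitted PSE away from zero, which is the effect the study measured.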
Aurally aided visual search performance in a dynamic environment
NASA Astrophysics Data System (ADS)
McIntire, John P.; Havig, Paul R.; Watamaniuk, Scott N. J.; Gilkey, Robert H.
2008-04-01
Previous research has repeatedly shown that people can find a visual target significantly faster if spatial (3D) auditory displays direct attention to the corresponding spatial location. However, previous research has only examined searches for static (non-moving) targets in static visual environments. Since motion has been shown to affect visual acuity, auditory acuity, and visual search performance, it is important to characterize aurally-aided search performance in environments that contain dynamic (moving) stimuli. In the present study, visual search performance in both static and dynamic environments is investigated with and without 3D auditory cues. Eight participants searched for a single visual target hidden among 15 distracting stimuli. In the baseline audio condition, no auditory cues were provided. In the 3D audio condition, a virtual 3D sound cue originated from the same spatial location as the target. In the static search condition, the target and distractors did not move. In the dynamic search condition, all stimuli moved on various trajectories at 10 deg/s. The results showed a clear benefit of 3D audio that was present in both static and dynamic environments, suggesting that spatial auditory displays continue to be an attractive option for a variety of aircraft, motor vehicle, and command & control applications.
Pellicano, Elizabeth; Gibson, Lisa; Maybery, Murray; Durkin, Kevin; Badcock, David R
2005-01-01
Frith and Happé (Frith, U., & Happé, F. (1994). Autism: Beyond theory of mind. Cognition, 50, 115-132) argue that individuals with autism exhibit 'weak central coherence': an inability to integrate elements of information into coherent wholes. Some authors have speculated that a high-level impairment might be present in the dorsal visual pathway in autism, and furthermore, that this might account for weak central coherence, at least at the visuospatial level. We assessed the integrity of the dorsal visual pathway in children diagnosed with an autism spectrum disorder (ASD), and in typically developing children, using two visual tasks, one examining functioning at higher levels of the dorsal cortical stream (Global Dot Motion (GDM)), and the other assessing lower-level dorsal stream functioning (Flicker Contrast Sensitivity (FCS)). Central coherence was tested using the Children's Embedded Figures Test (CEFT). Relative to the typically developing children, the children with ASD had shorter CEFT latencies and higher GDM thresholds but equivalent FCS thresholds. Additionally, CEFT latencies were inversely related to GDM thresholds in the ASD group. These outcomes indicate that the elevated global motion thresholds in autism are the result of high-level impairments in dorsal cortical regions. Weak visuospatial coherence in autism may be in the form of abnormal cooperative mechanisms in extra-striate cortical areas, which might contribute to differential performance when processing stimuli as Gestalts, including both dynamic (i.e., global motion perception) and static (i.e., disembedding performance) stimuli.
Music, thinking, perceived motion: the emergence of Gestalt theory.
Wertheimer, Michael
2014-05-01
Histories of psychology typically assert that Gestalt theory began with the publication of Max Wertheimer's 1912b paper on the phi phenomenon, the compelling visual apparent motion of actually stationary stimuli. The current author discusses the origin of Gestalt theory, as told by the historical record starting with M. Wertheimer's upbringing and ending with his most recent Gestalt theories.
Galashan, Daniela; Fehr, Thorsten; Kreiter, Andreas K; Herrmann, Manfred
2014-07-11
Initially, human area MT+ was considered a visual area solely processing motion information, but further research has shown that it is also involved in various different cognitive operations, such as working memory tasks requiring motion-related information to be maintained or cognitive tasks with implied or expected motion. In the present fMRI study in humans, we focused on MT+ modulation during working memory maintenance using a dynamic shape-tracking working memory task with no motion-related working memory content. Working memory load was systematically varied using complex and simple stimulus material and parametrically increasing retention periods. Activation patterns for the difference between retention of complex and simple memorized stimuli were examined in order to preclude that the reported effects are caused by differences in retrieval. Conjunction analysis over all delay durations for the maintenance of complex versus simple stimuli demonstrated a widespread activation pattern. Percent signal change (PSC) in area MT+ revealed a pattern with higher values for the maintenance of complex shapes compared to the retention of a simple circle and with higher values for increasing delay durations. The present data extend previous knowledge by demonstrating that visual area MT+ presents a brain activity pattern usually found in brain regions that are actively involved in working memory maintenance.
Insect Detection of Small Targets Moving in Visual Clutter
Barnett, Paul D; O'Carroll, David C
2006-01-01
Detection of targets that move within visual clutter is a common task for animals searching for prey or conspecifics, a task made even more difficult when a moving pursuer needs to analyze targets against the motion of background texture (clutter). Despite the limited optical acuity of the compound eye of insects, this challenging task seems to have been solved by their tiny visual system. Here we describe neurons found in the male hoverfly, Eristalis tenax, that respond selectively to small moving targets. Although many of these target neurons are inhibited by the motion of a background pattern, others respond to target motion within the receptive field under a surprisingly large range of background motion stimuli. Some neurons respond whether or not there is a speed differential between target and background. Analysis of responses to very small targets (smaller than the size of the visual field of single photoreceptors) or those targets with reduced contrast shows that these neurons have extraordinarily high contrast sensitivity. Our data suggest that rejection of background motion may result from extreme selectivity for small targets contrasting against local patches of the background, combined with this high sensitivity, such that background patterns rarely contain features that satisfactorily drive the neuron.
Nakayasu, Tomohiro; Yasugi, Masaki; Shiraishi, Soma; Uchida, Seiichi; Watanabe, Eiji
2017-01-01
We studied social approach behaviour in medaka fish using three-dimensional computer graphic (3DCG) animations based on the morphological features and motion characteristics obtained from real fish. This is the first study which used 3DCG animations and examined the relative effects of morphological and motion cues on social approach behaviour in medaka. Various visual stimuli, e.g., lack of motion, lack of colour, alternation in shape, lack of locomotion, lack of body motion, and normal virtual fish in which all four features (colour, shape, locomotion, and body motion) were reconstructed, were created and presented to fish using a computer display. Medaka fish presented with normal virtual fish spent a long time in proximity to the display, whereas time spent near the display was decreased in other groups when compared with normal virtual medaka group. The results suggested that the naturalness of visual cues contributes to the induction of social approach behaviour. Differential effects between body motion and locomotion were also detected. 3DCG animations can be a useful tool to study the mechanisms of visual processing and social behaviour in medaka.
Time perception of visual motion is tuned by the motor representation of human actions
Gavazzi, Gioele; Bisio, Ambra; Pozzo, Thierry
2013-01-01
Several studies have shown that the observation of a rapidly moving stimulus dilates our perception of time. However, this effect appears to be at odds with the fact that our interactions both with environment and with each other are temporally accurate. This work exploits this paradox to investigate whether the temporal accuracy of visual motion uses motor representations of actions. To this aim, the stimuli were a dot moving with kinematics belonging or not to the human motor repertoire and displayed at different velocities. Participants had to replicate its duration with two tasks differing in the underlying motor plan. Results show that independently of the task's motor plan, the temporal accuracy and precision depend on the correspondence between the stimulus' kinematics and the observer's motor competencies. Our data suggest that the temporal mechanism of visual motion exploits a temporal visuomotor representation tuned by the motor knowledge of human actions.
NASA Technical Reports Server (NTRS)
Young, L. R.
1975-01-01
Preliminary tests and evaluations of pilot performance during landing (flight paths) using computer-generated images (video tapes) are presented. Psychophysiological factors affecting pilot visual perception were measured. A turning flight maneuver (pitch and roll) was specifically studied using a training device, and the scaling laws involved were determined. Also presented are medical studies (abstracts) on human response to gravity variations without visual cues, effects of acceleration stimuli on the semicircular canals, neurons affecting eye movements, and vestibular tests.
The Mechanism for Processing Random-Dot Motion at Various Speeds in Early Visual Cortices
An, Xu; Gong, Hongliang; McLoughlin, Niall; Yang, Yupeng; Wang, Wei
2014-01-01
All moving objects generate sequential retinotopic activations representing a series of discrete locations in space and time (motion trajectory). How direction-selective neurons in mammalian early visual cortices process motion trajectory remains to be clarified. Using single-cell recording and optical imaging of intrinsic signals along with mathematical simulation, we studied response properties of cat visual areas 17 and 18 to random dots moving at various speeds. We found that the motion trajectory at low speed was encoded primarily as a direction signal by groups of neurons preferring that motion direction. Above certain transition speeds, the motion trajectory is perceived as a spatial orientation representing the motion axis of the moving dots. In both areas studied, above these speeds, other groups of direction-selective neurons with perpendicular direction preferences were activated to encode the motion trajectory as motion-axis information. This applied to both simple and complex neurons. The average transition speed for switching between encoding motion direction and axis was about 31°/s in area 18 and 15°/s in area 17. A spatio-temporal energy model predicted the transition speeds accurately in both areas, but not the direction-selective indexes to random-dot stimuli in area 18. In addition, above transition speeds, the change of direction preferences of population responses recorded by optical imaging can be revealed using the vector-maximum but not the vector-summation method. Together, this combined processing of motion direction and axis by neurons with orthogonal direction preferences associated with speed may serve as a common principle of early visual motion processing.
Defining the computational structure of the motion detector in Drosophila.
Clark, Damon A; Bursztyn, Limor; Horowitz, Mark A; Schnitzer, Mark J; Clandinin, Thomas R
2011-06-23
Many animals rely on visual motion detection for survival. Motion information is extracted from spatiotemporal intensity patterns on the retina, a paradigmatic neural computation. A phenomenological model, the Hassenstein-Reichardt correlator (HRC), relates visual inputs to neural activity and behavioral responses to motion, but the circuits that implement this computation remain unknown. By using cell-type specific genetic silencing, minimal motion stimuli, and in vivo calcium imaging, we examine two critical HRC inputs. These two pathways respond preferentially to light and dark moving edges. We demonstrate that these pathways perform overlapping but complementary subsets of the computations underlying the HRC. A numerical model implementing differential weighting of these operations displays the observed edge preferences. Intriguingly, these pathways are distinguished by their sensitivities to a stimulus correlation that corresponds to an illusory percept, "reverse phi," that affects many species. Thus, this computational architecture may be widely used to achieve edge selectivity in motion detection.
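The HRC itself is compact enough to sketch. The following is a minimal implementation of the correlator's delay-and-multiply structure on synthetic photoreceptor signals; the delay, grating period, and lag values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def hrc_response(left, right, tau=1):
    """Hassenstein-Reichardt correlator: delay each input, multiply it with
    the neighbouring undelayed input, and subtract the mirror-symmetric arm.
    `left`/`right` are luminance time series from two adjacent photoreceptors;
    `tau` is the delay in samples. Positive output signals left-to-right
    motion, negative output the reverse."""
    d_left = np.roll(left, tau)    # delayed left-arm signal
    d_right = np.roll(right, tau)  # delayed right-arm signal
    return np.mean(d_left * right - left * d_right)

# A grating drifting rightward reaches the right receptor `lag` samples late.
t = np.arange(200)
signal = np.sin(2 * np.pi * t / 20)
lag = 3
rightward = hrc_response(signal, np.roll(signal, lag))
leftward = hrc_response(np.roll(signal, lag), signal)
print(rightward, leftward)
```

The opponent (subtractive) stage makes the output antisymmetric: reversing the motion direction flips the sign of the response, which is the property that lets the HRC report direction rather than mere temporal change.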
Chuen, Lorraine; Schutz, Michael
2016-07-01
An observer's inference that multimodal signals originate from a common underlying source facilitates cross-modal binding. This 'unity assumption' causes asynchronous auditory and visual speech streams to seem simultaneous (Vatakis & Spence, Perception & Psychophysics, 69(5), 744-756, 2007). Subsequent tests of non-speech stimuli such as musical and impact events found no evidence for the unity assumption, suggesting the effect is speech-specific (Vatakis & Spence, Acta Psychologica, 127(1), 12-23, 2008). However, the role of amplitude envelope (the changes in energy of a sound over time) was not previously appreciated within this paradigm. Here, we explore whether previous findings suggesting speech-specificity of the unity assumption were confounded by similarities in the amplitude envelopes of the contrasted auditory stimuli. Experiment 1 used natural events with clearly differentiated envelopes: single notes played on either a cello (bowing motion) or marimba (striking motion). Participants performed an unspeeded temporal order judgment task, viewing audio-visually matched (e.g., marimba auditory with marimba video) and mismatched (e.g., cello auditory with marimba video) versions of stimuli at various stimulus onset asynchronies, and indicating which modality was presented first. As predicted, participants were less sensitive to temporal order in matched conditions, demonstrating that the unity assumption can facilitate the perception of synchrony outside of speech stimuli. Results from Experiments 2 and 3 revealed that when spectral information was removed from the original auditory stimuli, amplitude envelope alone could not facilitate the influence of audiovisual unity. We propose that both amplitude envelope and spectral acoustic cues affect the percept of audiovisual unity, working in concert to help an observer determine when to integrate across modalities.
Steady-state VEP responses to uncomfortable stimuli.
O'Hare, Louise
2017-02-01
Periodic stimuli, such as op-art, can evoke a range of aversive sensations included in the term visual discomfort. Illusory motion effects are elicited by fixational eye movements, but the cortex might also contribute to effects of discomfort. To investigate this possibility, steady-state visually evoked responses (SSVEPs) to contrast-matched op-art-based stimuli were measured at the same time as discomfort judgements. On average, discomfort reduced with increasing spatial frequency of the pattern. In contrast, the peak amplitude of the SSVEP response was around the midrange spatial frequencies. Like the discomfort judgements, SSVEP responses to the highest spatial frequencies were lowest in amplitude, but the relationship between discomfort and SSVEP breaks down for the lower spatial frequency stimuli. This was not explicable by gross eye movements as measured using the facial electrodes. There was a weak relationship between the peak SSVEP responses and discomfort judgements for some stimuli, suggesting that discomfort can be explained in part by electrophysiological responses measured at the level of the cortex. However, there is a breakdown of this relationship in the case of lower spatial frequency stimuli, which remains unexplained.
Independent and additive repetition priming of motion direction and color in visual search.
Kristjánsson, Arni
2009-03-01
Priming of visual search for Gabor patch stimuli, varying in color and local drift direction, was investigated. The task relevance of each feature varied between the different experimental conditions compared. When the target defining dimension was color, a large effect of color repetition was seen as well as a smaller effect of the repetition of motion direction. The opposite priming pattern was seen when motion direction defined the target: the effect of motion direction repetition was this time larger than for color repetition. Finally, when neither was task relevant, and the target defining dimension was the spatial frequency of the Gabor patch, priming was seen for repetition of both color and motion direction, but the effects were smaller than in the previous two conditions. These results show that features do not necessarily have to be task relevant for priming to occur. There is little interaction between priming following repetition of color and motion: these two features show independent and additive priming effects, most likely reflecting that the two features are processed at separate processing sites in the nervous system, consistent with previous findings from neuropsychology and neurophysiology. The implications of the findings for theoretical accounts of priming in visual search are discussed.
Emotion unfolded by motion: a role for parietal lobe in decoding dynamic facial expressions.
Sarkheil, Pegah; Goebel, Rainer; Schneider, Frank; Mathiak, Klaus
2013-12-01
Facial expressions convey important emotional and social information and are frequently applied in investigations of human affective processing. Dynamic faces may provide higher ecological validity to examine perceptual and cognitive processing of facial expressions. Higher order processing of emotional faces was addressed by varying the task and virtual face models systematically. Blood oxygenation level-dependent activation was assessed using functional magnetic resonance imaging in 20 healthy volunteers while viewing and evaluating either emotion or gender intensity of dynamic face stimuli. A general linear model analysis revealed that high valence activated a network of motion-responsive areas, indicating that visual motion areas support perceptual coding for the motion-based intensity of facial expressions. Comparing the emotion with the gender discrimination task revealed increased activation of the inferior parietal lobule, which highlights the involvement of parietal areas in processing high-level features of faces. Dynamic emotional stimuli may help to emphasize functions of the hypothesized 'extended' over the 'core' system for face processing.
Sugita, Norihiro; Yoshizawa, Makoto; Abe, Makoto; Tanaka, Akira; Watanabe, Takashi; Chiba, Shigeru; Yambe, Tomoyuki; Nitta, Shin-ichi
2007-09-28
Computer graphics and virtual reality techniques are useful for developing automatic and effective rehabilitation systems. However, virtual environments that present unstable visual images on a wide-field screen or a head-mounted display tend to induce motion sickness. Motion sickness induced while using a rehabilitation system not only inhibits effective training but may also harm patients' health. Few studies have objectively evaluated the effects of repetitive exposure to these stimuli on humans. The purpose of this study is to investigate adaptation to visually induced motion sickness using physiological data. An experiment was carried out in which the same video image was presented to human subjects three times. We evaluated changes in the intensity of motion sickness the subjects experienced using a subjective score and the physiological index rho(max), which is defined as the maximum cross-correlation coefficient between heart rate and pulse wave transmission time and is considered to reflect autonomic nervous activity. The results showed adaptation to visually induced motion sickness through repetitive presentation of the same image in both the subjective and the objective indices. However, there were some subjects whose intensity of sickness increased. It was also possible to identify the parts of the video image related to motion sickness by analyzing changes in rho(max) over time. The physiological index rho(max) will be a good index for assessing the adaptation process to visually induced motion sickness and may be useful in checking the safety of rehabilitation systems with new image technologies.
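An index of this kind (the maximum cross-correlation coefficient between heart rate and pulse wave transmission time, scanned over a range of lags) can be sketched on synthetic signals. The lag range, sampling, and data below are illustrative assumptions; the authors' exact preprocessing is not specified in the abstract.

```python
import numpy as np

def rho_max(hr, pwtt, max_lag=10):
    """Maximum absolute cross-correlation coefficient between a heart-rate
    series and a pulse-wave-transmission-time series, searched over lags of
    up to +/- max_lag samples."""
    hr = (hr - hr.mean()) / hr.std()
    pwtt = (pwtt - pwtt.mean()) / pwtt.std()
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = hr[lag:], pwtt[:len(pwtt) - lag]
        else:
            a, b = hr[:lag], pwtt[-lag:]
        r = np.corrcoef(a, b)[0, 1]
        best = max(best, abs(r))
    return best

# Synthetic series in which PWTT closely follows HR with a 3-sample lag.
rng = np.random.default_rng(1)
hr = rng.normal(60.0, 5.0, 300)
pwtt = np.roll(hr, 3) + rng.normal(0.0, 1.0, 300)
print(rho_max(hr, pwtt))
```

Tightly coupled series yield rho(max) near 1, while weak autonomic coupling drives it toward 0, which is why a drop or change in the index over repeated exposures can track adaptation to the sickness-inducing stimulus.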
Auditory perception of a human walker.
Cottrell, David; Campbell, Megan E J
2014-01-01
When one hears footsteps in the hall, one instantly recognises them as a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity to three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment, participants made discriminations between pairs of the same stimuli in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.
Differential responses in dorsal visual cortex to motion and disparity depth cues
Arnoldussen, David M.; Goossens, Jeroen; van den Berg, Albert V.
2013-01-01
We investigated how interactions between monocular motion parallax and binocular cues to depth vary in human motion areas for wide-field visual motion stimuli (110 × 100°). We used fMRI with an extensive 2 × 3 × 2 factorial blocked design in which we combined two types of self-motion (translational motion and translational + rotational motion) with three categories of motion defined by the degree of noise (self-motion, distorted self-motion, and multiple object-motion), and two different view modes of the flow patterns (stereo and synoptic viewing). Interactions between disparity and motion category revealed distinct contributions to self- and object-motion processing in 3D. For cortical areas V6 and CSv, but not the anterior part of MT+ with bilateral visual responsiveness (MT+/b), we found a disparity-dependent effect of rotational flow and noise: When self-motion perception was degraded by adding rotational flow and moderate levels of noise, the BOLD responses were reduced compared with translational self-motion alone, but this reduction was cancelled by adding stereo information, which also rescued the subject's self-motion percept. At high noise levels, when the self-motion percept gave way to a swarm of moving objects, the BOLD signal strongly increased compared to self-motion in areas MT+/b and V6, but only for stereo in the latter. The BOLD response did not increase for either view mode in CSv. These different response patterns indicate different contributions of areas V6, MT+/b, and CSv to the processing of self-motion perception and the processing of multiple independent motions. PMID:24339808
Experience affects the use of ego-motion signals during 3D shape perception
Jain, Anshul; Backus, Benjamin T.
2011-01-01
Experience has long-term effects on perceptual appearance (Q. Haijiang, J. A. Saunders, R. W. Stone, & B. T. Backus, 2006). We asked whether experience affects the appearance of structure-from-motion stimuli when the optic flow is caused by observer ego-motion. Optic flow is an ambiguous depth cue: a rotating object and its oppositely rotating, depth-inverted dual generate similar flow. However, the visual system exploits ego-motion signals to prefer the percept of an object that is stationary over one that rotates (M. Wexler, F. Panerai, I. Lamouret, & J. Droulez, 2001). We replicated this finding and asked whether this preference for stationarity, the “stationarity prior,” is modulated by experience. During training, two groups of observers were exposed to objects with identical flow, but that were either stationary or moving as determined by other cues. The training caused identical test stimuli to be seen preferentially as stationary or moving by the two groups, respectively. We then asked whether different priors can exist independently at different locations in the visual field. Observers were trained to see objects either as stationary or as moving at two different locations. Observers’ stationarity bias at the two respective locations was modulated in the directions consistent with training. Thus, the utilization of extraretinal ego-motion signals for disambiguating optic flow signals can be updated as the result of experience, consistent with the updating of a Bayesian prior for stationarity. PMID:21191132
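The idea of a stationarity prior updated by experience can be sketched with a toy Bayesian model. All numbers below (initial prior, likelihoods, decay rate) are illustrative assumptions, not values from the study.

```python
# When optic flow is fully ambiguous, the stationary and moving
# interpretations are equally likely, so the percept is decided by the
# prior; training is modeled here as a simple multiplicative shift of
# that prior toward the experienced (moving) interpretation.
def posterior_stationary(prior_stationary, like_stat=0.5, like_move=0.5):
    p = prior_stationary * like_stat
    q = (1 - prior_stationary) * like_move
    return p / (p + q)

prior = 0.8                      # initial bias to see objects as stationary
for _ in range(10):              # training trials with objects seen moving
    prior = 0.9 * prior          # illustrative prior-update rule
print(round(posterior_stationary(prior), 3))
```

With equal likelihoods the posterior simply tracks the prior, which captures why identical test stimuli end up seen differently by the two training groups.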
Feature integration across space, time, and orientation
Otto, Thomas U.; Öğmen, Haluk; Herzog, Michael H.
2012-01-01
The perception of a visual target can be strongly influenced by flanking stimuli. In static displays, performance on the target improves when the distance to the flanking elements increases, presumably because feature pooling and integration vanish with distance. Here, we studied feature integration with dynamic stimuli. We show that features of single elements presented within a continuous motion stream are integrated largely independently of spatial distance (and orientation). Hence, space-based models of feature integration cannot be extended to dynamic stimuli. We suggest that feature integration is guided by perceptual grouping operations that maintain the identity of perceptual objects over space and time. PMID:19968428
Alterations to global but not local motion processing in long-term ecstasy (MDMA) users.
White, Claire; Brown, John; Edwards, Mark
2014-07-01
Growing evidence indicates that the main psychoactive ingredient in the illegal drug "ecstasy" (methylenedioxymethamphetamine) causes reduced activity in the serotonin and gamma-aminobutyric acid (GABA) systems in humans. On the basis of the substantial serotonin input to the occipital lobe, recent research investigated visual processing in long-term users and found a larger magnitude of the tilt aftereffect, interpreted to reflect broadened orientation tuning bandwidths. Further research found higher orientation discrimination thresholds and reduced long-range interactions in the primary visual area of ecstasy users. The aim of the present research was to investigate whether serotonin-mediated V1 visual processing deficits in ecstasy users extend to motion processing mechanisms. Forty-five participants (21 controls, 24 drug users) completed two psychophysical studies: a direction discrimination study directly measured local motion processing in V1, while a motion coherence task tested global motion processing in area V5/MT. "Primary" ecstasy users (n = 18), those without substantial polydrug use, had significantly lower global motion thresholds than controls [p = 0.027, Cohen's d = 0.78 (large)], indicating increased sensitivity to global motion stimuli, but no difference in local motion processing (p = 0.365). These results extend previous research investigating the long-term effects of illicit drugs on visual processing. Two possible explanations are explored: diffuse attentional processes may be facilitating spatial pooling of motion signals in users. Alternatively, a GABA-mediated disruption to V5/MT processing may be reducing spatial suppression and thereby improving global motion perception in ecstasy users.
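The effect size reported above, Cohen's d, is the group mean difference scaled by the pooled standard deviation. A minimal sketch follows; the two groups are made-up numbers for illustration, not the study's thresholds.

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d using the pooled (n-1 weighted) standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) \
        / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Hypothetical motion-coherence thresholds for two groups.
users = [1, 2, 3, 4, 5]
controls = [2, 3, 4, 5, 6]
print(round(cohens_d(users, controls), 3))
```

By the usual convention, |d| around 0.2 is small, 0.5 medium, and 0.8 large, which is why the reported d = 0.78 is labeled a large effect.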
Comparison of visual sensitivity to human and object motion in autism spectrum disorder.
Kaiser, Martha D; Delmolino, Lara; Tanaka, James W; Shiffrar, Maggie
2010-08-01
Successful social behavior requires the accurate detection of other people's movements. Consistent with this, typical observers demonstrate enhanced visual sensitivity to human movement relative to equally complex, nonhuman movement [e.g., Pinto & Shiffrar, 2009]. A psychophysical study investigated visual sensitivity to human motion relative to object motion in observers with autism spectrum disorder (ASD). Participants viewed point-light depictions of a moving person and, for comparison, a moving tractor and discriminated between coherent and scrambled versions of these stimuli in unmasked and masked displays. There were three groups of participants: young adults with ASD, typically developing young adults, and typically developing children. Across masking conditions, typical observers showed enhanced visual sensitivity to human movement while observers in the ASD group did not. Because the human body is an inherently social stimulus, this result is consistent with social brain theories [e.g., Pelphrey & Carter, 2008; Schultz, 2005] and suggests that the visual systems of individuals with ASD may not be tuned for the detection of socially relevant information such as the presence of another person. Reduced visual sensitivity to human movements could compromise important social behaviors including, for example, gesture comprehension.
Visual speech discrimination and identification of natural and synthetic consonant stimuli
Files, Benjamin T.; Tjan, Bosco S.; Jiang, Jintao; Bernstein, Lynne E.
2015-01-01
From phonetic features to connected discourse, every level of psycholinguistic structure including prosody can be perceived through viewing the talking face. Yet a longstanding notion in the literature is that visual speech perceptual categories comprise groups of phonemes (referred to as visemes), such as /p, b, m/ and /f, v/, whose internal structure is not informative to the visual speech perceiver. This conclusion has not to our knowledge been evaluated using a psychophysical discrimination paradigm. We hypothesized that perceivers can discriminate the phonemes within typical viseme groups, and that discrimination measured with d-prime (d’) and response latency is related to visual stimulus dissimilarities between consonant segments. In Experiment 1, participants performed speeded discrimination for pairs of consonant-vowel spoken nonsense syllables that were predicted to be same, near, or far in their perceptual distances, and that were presented as natural or synthesized video. Near pairs were within-viseme consonants. Natural within-viseme stimulus pairs were discriminated significantly above chance (except for /k/-/h/). Sensitivity (d’) increased and response times decreased with distance. Discrimination and identification were superior with natural stimuli, which comprised more phonetic information. We suggest that the notion of the viseme as a unitary perceptual category is incorrect. Experiment 2 probed the perceptual basis for visual speech discrimination by inverting the stimuli. Overall reductions in d’ with inverted stimuli but a persistent pattern of larger d’ for far than for near stimulus pairs are interpreted as evidence that visual speech is represented by both its motion and configural attributes. 
The methods and results of this investigation open up avenues for understanding the neural and perceptual bases for visual and audiovisual speech perception and for development of practical applications such as visual lipreading/speechreading speech synthesis. PMID:26217249
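Sensitivity d' as measured above is the difference between the z-transformed hit and false-alarm rates. The counts and the log-linear correction in this sketch are illustrative assumptions, not the experiment's data.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate). A log-linear correction
    (adding 0.5 to each cell) guards against rates of exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts from a same/different discrimination block.
print(round(d_prime(45, 5, 10, 40), 2))
```

Chance performance (hit rate equal to false-alarm rate) gives d' = 0, so "discriminated significantly above chance" corresponds to d' reliably greater than zero.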
Three Stages and Two Systems of Visual Processing
1989-01-01
…as squaring do not, in and of themselves, imply second-order processing. For example, Adelson and Bergen's (1985) detector of directional motion… rectification; half-wave rectification is a second-order processing scheme. Figure 8. Stimuli for analyzing second-order processing. (a) An x,y,t representation of…
Speed Biases With Real-Life Video Clips
Rossi, Federica; Montanaro, Elisa; de’Sperati, Claudio
2018-01-01
We live almost literally immersed in an artificial visual world, especially motion pictures. In this exploratory study, we asked whether the best speed for reproducing a video is its original, shooting speed. By using adjustment and double staircase methods, we examined speed biases in viewing real-life video clips in three experiments, and assessed their robustness by manipulating visual and auditory factors. With the tested stimuli (short clips of human motion, mixed human-physical motion, physical motion and ego-motion), speed underestimation was the rule rather than the exception, although it depended largely on clip content, ranging on average from 2% (ego-motion) to 32% (physical motion). Manipulating display size or adding arbitrary soundtracks did not modify these speed biases. Estimated speed was not correlated with estimated duration of these same video clips. These results indicate that the sense of speed for real-life video clips can be systematically biased, independently of the impression of elapsed time. Measuring subjective visual tempo may integrate traditional methods that assess time perception: speed biases may be exploited to develop a simple, objective test of reality flow, to be used for example in clinical and developmental contexts. From the perspective of video media, measuring speed biases may help to optimize video reproduction speed and validate “natural” video compression techniques based on sub-threshold temporal squeezing. PMID:29615875
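The staircase methods used above can be sketched with a simple 1-up/1-down staircase that converges on the point of subjective equality (PSE). The simulated observer here is deterministic with a hidden PSE at 0.9× original speed; it is a made-up illustration, not the study's procedure.

```python
def run_staircase(pse=0.9, start=1.2, step=0.05, n_reversals=8):
    """1-up/1-down staircase on playback speed. The simulated observer
    reports 'too fast' whenever speed exceeds its hidden PSE; the PSE
    estimate is the mean of the speeds at the reversal points."""
    speed, last_dir = start, None
    reversals = []
    while len(reversals) < n_reversals:
        direction = -1 if speed > pse else +1   # observer's response
        if last_dir is not None and direction != last_dir:
            reversals.append(speed)             # track direction reversals
        last_dir = direction
        speed += direction * step
    return sum(reversals) / len(reversals)

print(run_staircase())
```

Running two such staircases from opposite starting speeds (a double staircase) and averaging their estimates helps cancel starting-point bias.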
Kress, Daniel; Egelhaaf, Martin
2014-01-01
During locomotion animals rely heavily on visual cues gained from the environment to guide their behavior. Examples are basic behaviors like collision avoidance or the approach to a goal. The saccadic gaze strategy of flying flies, which separates translational from rotational phases of locomotion, has been suggested to facilitate the extraction of environmental information, because only image flow evoked by translational self-motion contains relevant distance information about the surrounding world. In contrast to the translational phases of flight, during which gaze direction is kept largely constant, walking flies experience continuous rotational image flow that is coupled to their stride-cycle. The consequences of these self-produced image shifts for the extraction of environmental information are still unclear. To assess the impact of stride-coupled image shifts on visual information processing, we performed electrophysiological recordings from the HSE cell, a motion-sensitive wide-field neuron in the blowfly visual system. This cell is thought to play a key role in mediating optomotor behavior, self-motion estimation and spatial information processing. We used visual stimuli that were based on the visual input experienced by walking blowflies while approaching a black vertical bar. The response of HSE to these stimuli was dominated by periodic membrane potential fluctuations evoked by stride-coupled image shifts. Nevertheless, during the approach the cell's response contained information about the bar and its background. The response components evoked by the bar were larger than the responses to its background, especially during the last phase of the approach. However, as revealed by targeted modifications of the visual input during walking, the extraction of distance information on the basis of HSE responses is strongly impaired by stride-coupled retinal image shifts. Possible mechanisms that may cope with these stride-coupled responses are discussed.
PMID:25309362
Effects of feature-selective and spatial attention at different stages of visual processing.
Andersen, Søren K; Fuchs, Sandra; Müller, Matthias M
2011-01-01
We investigated mechanisms of concurrent attentional selection of location and color using electrophysiological measures in human subjects. Two completely overlapping random dot kinematograms (RDKs) of two different colors were presented on either side of a central fixation cross. On each trial, participants attended one of these four RDKs, defined by its specific combination of color and location, in order to detect coherent motion targets. Sustained attentional selection while monitoring for targets was measured by means of steady-state visual evoked potentials (SSVEPs) elicited by the frequency-tagged RDKs. Attentional selection of transient targets and distractors was assessed by behavioral responses and by recording event-related potentials to these stimuli. Spatial attention and attention to color had independent and largely additive effects on the amplitudes of SSVEPs elicited in early visual areas. In contrast, behavioral false alarms and feature-selective modulation of P3 amplitudes to targets and distractors were limited to the attended location. These results suggest that feature-selective attention produces an early, global facilitation of stimuli having the attended feature throughout the visual field, whereas the discrimination of target events takes place at a later stage of processing that is only applied to stimuli at the attended position.
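Frequency tagging, as used above, separates the responses to superimposed stimuli by flickering each at its own rate and reading the EEG spectrum at the tag frequencies. The rates, sampling parameters, and synthetic "EEG" below are illustrative assumptions, not the study's settings.

```python
import numpy as np

fs = 500.0                       # sampling rate (Hz), assumed
t = np.arange(0, 2.0, 1 / fs)    # 2 s of signal (1000 samples)

# Synthetic recording: two overlapping stimuli tagged at 10 and 12 Hz,
# with the 10 Hz stimulus attended (larger response amplitude).
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

spectrum = np.abs(np.fft.rfft(eeg)) * 2 / len(t)   # single-sided amplitude
freqs = np.fft.rfftfreq(len(t), 1 / fs)

amp_10 = spectrum[np.argmin(np.abs(freqs - 10))]   # SSVEP to stimulus A
amp_12 = spectrum[np.argmin(np.abs(freqs - 12))]   # SSVEP to stimulus B
print(round(float(amp_10), 2), round(float(amp_12), 2))
```

Because each RDK drives a response only at its own tag frequency, attention effects on each stimulus can be read out independently even though the stimuli fully overlap in space.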
Feature-selective attention: evidence for a decline in old age.
Quigley, Cliodhna; Andersen, Søren K; Schulze, Lars; Grunwald, Martin; Müller, Matthias M
2010-04-19
Although attention in older adults is an active research area, feature-selective aspects have not yet been explicitly studied. Here we report the results of an exploratory study involving directed changes in feature-selective attention. The stimuli used were two random dot kinematograms (RDKs) of different colours, superimposed and centrally presented. A colour cue with random onset after the beginning of each trial instructed young and older subjects to attend to one of the RDKs and detect short intervals of coherent motion while ignoring analogous motion events in the non-cued RDK. Behavioural data show that older adults could detect motion, but discriminated target from distracter motion less reliably than young adults. The method of frequency tagging allowed us to separate the EEG responses to the attended and ignored stimuli and directly compare steady-state visual evoked potential (SSVEP) amplitudes elicited by each stimulus before and after cue onset. We found that younger adults show a clear attentional enhancement of SSVEP amplitude in the post-cue interval, while older adults' SSVEP responses to attended and ignored stimuli do not differ. Thus, in situations where attentional selection cannot be spatially resolved, older adults show a deficit in selection that is not shared by young adults.
Nakamura, S; Shimojo, S
2000-01-01
We investigated interactions between foreground and background stimuli during visually induced perception of self-motion (vection) by using a stimulus composed of orthogonally moving random-dot patterns. The results indicated that, when the foreground moves with a slower speed, a self-motion sensation with a component in the same direction as the foreground is induced. We named this novel component of self-motion perception 'inverted vection'. The robustness of inverted vection was confirmed using various measures of self-motion sensation and under different stimulus conditions. The mechanism underlying inverted vection is discussed with regard to potentially relevant factors, such as relative motion between the foreground and background, and the interaction between the mis-registration of eye-movement information and self-motion perception.
When eyes drive hand: Influence of non-biological motion on visuo-motor coupling.
Thoret, Etienne; Aramaki, Mitsuko; Bringoux, Lionel; Ystad, Sølvi; Kronland-Martinet, Richard
2016-01-26
Many studies have stressed that both the execution of human movement and the perception of motion are constrained by specific kinematics. For instance, it has been shown that visuo-manual tracking of a spotlight is optimal when the spotlight motion complies with biological rules such as the so-called 1/3 power law, which establishes the co-variation between the velocity and the trajectory curvature of the movement. The visual or kinesthetic perception of a geometry induced by motion has also been shown to be constrained by such biological rules. In the present study, we investigated whether the geometry induced by the visuo-motor coupling of biological movements is also constrained by the 1/3 power law under visual open-loop control, i.e. without visual feedback of arm displacement. We showed that when someone was asked to synchronize a drawing movement with a visual spotlight following a circular shape, the geometry of the reproduced shape was distorted by visual kinematics that did not respect the 1/3 power law. In particular, elliptical shapes were reproduced when the circle was traced with kinematics corresponding to an ellipse. Moreover, the distortions observed here were larger than in perceptual tasks, underscoring the role of motor attractors in such visuo-motor coupling. Finally, by investigating the direct influence of visual kinematics on motor reproduction, our results reconcile previous knowledge on sensorimotor coupling of biological motions with external stimuli and provide evidence for the amodal encoding of biological motion.
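The 1/3 power law above links tangential speed v and curvature κ as v = K·κ^(-1/3). An ellipse traced with a sinusoidal parameterization satisfies this relation exactly, which the following sketch checks numerically; the semi-axes a and b are arbitrary illustrative values.

```python
import numpy as np

a, b = 2.0, 1.0                                   # illustrative semi-axes
t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)

# Ellipse traced as (a*cos t, b*sin t):
g = np.sqrt(a**2 * np.sin(t)**2 + b**2 * np.cos(t)**2)
speed = g                      # |d/dt (a cos t, b sin t)|
curvature = a * b / g**3       # curvature of the ellipse at parameter t

K = speed * curvature ** (1 / 3)   # should be the constant (a*b)**(1/3)
print(round(float(K.std()), 10), round(float(K.mean()), 4))
```

Because K is constant along the trajectory, slowing down in the flatter parts of the path and speeding up in the curved parts in exactly this proportion is what makes a motion look "biological."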
Encodings of implied motion for animate and inanimate object categories in the two visual pathways.
Lu, Zhengang; Li, Xueting; Meng, Ming
2016-01-15
Previous research has proposed two separate pathways for visual processing: the dorsal pathway for "where" information vs. the ventral pathway for "what" information. Interestingly, the middle temporal cortex (MT) in the dorsal pathway is involved in representing implied motion from still pictures, suggesting an interaction between motion and object related processing. However, the relationship between how the brain encodes implied motion and how the brain encodes object/scene categories is unclear. To address this question, fMRI was used to measure activity along the two pathways corresponding to different animate and inanimate categories of still pictures with different levels of implied motion speed. In the visual areas of both pathways, activity induced by pictures of humans and animals was hardly modulated by the implied motion speed. By contrast, activity in these areas correlated with the implied motion speed for pictures of inanimate objects and scenes. The interaction between implied motion speed and stimuli category was significant, suggesting different encoding mechanisms of implied motion for animate-inanimate distinction. Further multivariate pattern analysis of activity in the dorsal pathway revealed significant effects of stimulus category that are comparable to the ventral pathway. Moreover, still pictures of inanimate objects/scenes with higher implied motion speed evoked activation patterns that were difficult to differentiate from those evoked by pictures of humans and animals, indicating a functional role of implied motion in the representation of object categories. These results provide novel evidence to support integrated encoding of motion and object categories, suggesting a rethink of the relationship between the two visual pathways.
Interactions between motion and form processing in the human visual system.
Mather, George; Pavan, Andrea; Bellacosa Marotti, Rosilari; Campana, Gianluca; Casco, Clara
2013-01-01
The predominant view of motion and form processing in the human visual system assumes that these two attributes are handled by separate and independent modules. Motion processing involves filtering by direction-selective sensors, followed by integration to solve the aperture problem. Form processing involves filtering by orientation-selective and size-selective receptive fields, followed by integration to encode object shape. It has long been known that motion signals can influence form processing in the well-known Gestalt principle of common fate; texture elements which share a common motion property are grouped into a single contour or texture region. However, recent research in psychophysics and neuroscience indicates that the influence of form signals on motion processing is more extensive than previously thought. First, the salience and apparent direction of moving lines depends on how the local orientation and direction of motion combine to match the receptive field properties of motion-selective neurons. Second, orientation signals generated by "motion-streaks" influence motion processing; motion sensitivity, apparent direction and adaptation are affected by simultaneously present orientation signals. Third, form signals generated by human body shape influence biological motion processing, as revealed by studies using point-light motion stimuli. Thus, form-motion integration seems to occur at several different levels of cortical processing, from V1 to STS.
Xie, Jun; Xu, Guanghua; Wang, Jing; Li, Min; Han, Chengcheng; Jia, Yaguang
The steady-state visual evoked potential (SSVEP) paradigm is a conventional brain-computer interface (BCI) method with the advantages of a high information transfer rate, high tolerance to artifacts, and robust performance across users. However, the mental load and fatigue that occur when users stare at flickering stimuli are a critical problem in implementing SSVEP-based BCIs. Based on the electroencephalography (EEG) power indices α, θ, θ + α, the ratio index θ/α, and the response properties of amplitude and SNR, this study quantitatively evaluated mental load and fatigue in both the conventional flickering and the novel motion-reversal visual attention tasks. Results over nine subjects revealed significantly greater alleviation of mental load in the motion-reversal task than in the flickering task. The interaction between the factors "stimulation type" and "fatigue level" also indicated that motion-reversal stimulation is a superior anti-fatigue solution for long-term BCI operation. Taken together, our work provides an objective method favorable for the design of more practically applicable steady-state evoked potential based BCIs.
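A band-power ratio such as θ/α can be read directly from the EEG spectrum. The band edges below (θ 4–8 Hz, α 8–13 Hz) are conventional choices and the signal is synthetic; neither is taken from the paper.

```python
import numpy as np

fs = 250.0                       # sampling rate (Hz), assumed
t = np.arange(0, 4.0, 1 / fs)    # 4 s of signal (1000 samples)

# Synthetic EEG with a 6 Hz (theta-band) and a 10 Hz (alpha-band) component.
eeg = np.sin(2 * np.pi * 6 * t) + 2.0 * np.sin(2 * np.pi * 10 * t)

freqs = np.fft.rfftfreq(len(t), 1 / fs)
power = np.abs(np.fft.rfft(eeg)) ** 2

theta = power[(freqs >= 4) & (freqs < 8)].sum()    # theta band power
alpha = power[(freqs >= 8) & (freqs < 13)].sum()   # alpha band power
print(round(float(theta / alpha), 3))
```

A rising θ/α ratio over the course of a session is the kind of change such fatigue indices are designed to capture.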
Selectivity to Translational Egomotion in Human Brain Motion Areas
Pitzalis, Sabrina; Sdoia, Stefano; Bultrini, Alessandro; Committeri, Giorgia; Di Russo, Francesco; Fattori, Patrizia; Galletti, Claudio; Galati, Gaspare
2013-01-01
The optic flow generated when a person moves through the environment can be locally decomposed into several basic components, including radial, circular, translational and spiral motion. Since their analysis plays an important part in the visual perception and control of locomotion and posture, it is likely that some brain regions in the primate dorsal visual pathway are specialized to distinguish among them. The aim of this study is to explore the sensitivity to different types of egomotion-compatible visual stimulation in the human motion-sensitive regions of the brain. Event-related fMRI experiments, 3D motion and wide-field stimulation, functional localizers and brain mapping methods were used to study the sensitivity of six distinct motion areas (V6, MT, MST+, V3A, CSv and an Intra-Parietal Sulcus motion [IPSmot] region) to different types of optic flow stimuli. Results show that only areas V6, MST+ and IPSmot are specialized in distinguishing among the various types of flow patterns, with a high response to translational flow that was maximal in V6 and IPSmot and less marked in MST+. Given that during egomotion the translational optic flow conveys differential information about near and far external objects, areas V6 and IPSmot likely process visual egomotion signals to extract information about the relative distance of objects with respect to the observer. Since area V6 is also involved in distinguishing object-motion from self-motion, it could provide information about the location in space of moving and static objects during self-motion, particularly in a dynamically unstable environment. PMID:23577096
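The basic flow components named above (radial, circular, translational, spiral) can be written down directly as 2-D vector fields on an image grid. This construction is a generic illustration, not the study's stimuli.

```python
import numpy as np

# Image-plane coordinates on a 21 x 21 grid spanning [-1, 1].
y, x = np.mgrid[-1:1:21j, -1:1:21j]

translational = (np.ones_like(x), np.zeros_like(y))   # uniform rightward flow
radial = (x, y)                                       # expansion from center
circular = (-y, x)                                    # rotation about center
spiral = (x - y, y + x)                               # radial + circular mix

# Radial and circular flow are orthogonal at every pixel.
dot = radial[0] * circular[0] + radial[1] * circular[1]
print(float(np.abs(dot).max()))
```

Because the spiral field is literally the sum of the radial and circular fields, selectivity to one component over another cannot be inferred from local motion alone; it requires comparing responses across the whole field, which is what the wide-field stimulation above probes.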
A Unifying Motif for Spatial and Directional Surround Suppression.
Liu, Liu D; Miller, Kenneth D; Pack, Christopher C
2018-01-24
In the visual system, the response to a stimulus in a neuron's receptive field can be modulated by stimulus context, and the strength of these contextual influences varies with stimulus intensity. Recent work has shown how a theoretical model, the stabilized supralinear network (SSN), can account for such modulatory influences using a small set of computational mechanisms. Although the predictions of the SSN have been confirmed in primary visual cortex (V1), its computational principles apply with equal validity to any cortical structure. We have therefore tested the generality of the SSN by examining modulatory influences in the middle temporal area (MT) of the macaque visual cortex, using electrophysiological recordings and pharmacological manipulations. We developed a novel stimulus that can be adjusted parametrically to be larger or smaller in the space of all possible motion directions. We found, as predicted by the SSN, that MT neurons integrate across motion directions for low-contrast stimuli, but that they exhibit suppression by the same stimuli when they are high in contrast. These results are analogous to those found in visual cortex when stimulus size is varied in the space domain. We further tested the mechanisms of inhibition using pharmacological manipulations of inhibitory efficacy. As predicted by the SSN, local manipulation of inhibitory strength altered firing rates, but did not change the strength of surround suppression. These results are consistent with the idea that the SSN can account for modulatory influences along different stimulus dimensions and in different cortical areas. SIGNIFICANCE STATEMENT Visual neurons are selective for specific stimulus features in a region of visual space known as the receptive field, but can be modulated by stimuli outside of the receptive field. The SSN model has been proposed to account for these and other modulatory influences, and tested in V1.
As this model is not specific to any particular stimulus feature or brain region, we wondered whether similar modulatory influences might be observed for other stimulus dimensions and other regions. We tested for specific patterns of modulatory influences in the domain of motion direction, using electrophysiological recordings from MT. Our data confirm the predictions of the SSN in MT, suggesting that the SSN computations might be a generic feature of sensory cortex. Copyright © 2018 the authors.
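The contrast-dependent switch from integration to suppression follows from the SSN's supralinear transfer function. A minimal sketch of such a network's rate dynamics is below; the two-population structure and every parameter value are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

# Minimal two-population (E, I) stabilized supralinear network (SSN) sketch.
# Rate dynamics: tau_i * dr_i/dt = -r_i + k * [sum_j W_ij r_j + input]_+ ** n
# All parameter values are assumed for illustration, not fitted to MT data.
k, n = 0.04, 2.0                        # supralinear power-law gain and exponent
W = np.array([[1.5, -1.3],              # weights onto E: from E, from I
              [3.0, -2.2]])             # weights onto I: from E, from I
tau = np.array([0.020, 0.010])          # E and I time constants (s)

def steady_rates(drive, steps=50000, dt=5e-5):
    """Euler-integrate the rate equations toward steady state for a given
    feedforward drive (a stand-in for stimulus contrast)."""
    r = np.zeros(2)
    ext = np.array([drive, drive])      # equal feedforward drive to E and I
    for _ in range(steps):
        u = np.maximum(W @ r + ext, 0.0)            # rectified net input
        r = np.clip(r + dt / tau * (-r + k * u**n), 0.0, 1e6)
    return r

# At weak drive the effective gain is low and the network behaves near-linearly;
# at strong drive, recurrent inhibition grows supralinearly and dominates.
r_low, r_high = steady_rates(2.0), steady_rates(50.0)
```

The key design point is the power-law input/output function (`k * u**n` with `n > 1`): the same circuit is weakly coupled at low input strength and strongly, inhibition-dominated at high input strength, which is how the SSN produces contrast-dependent surround (or direction-domain) suppression without any parameter change.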
Haltere mechanosensory influence on tethered flight behavior in Drosophila.
Mureli, Shwetha; Fox, Jessica L
2015-08-01
In flies, mechanosensory information from modified hindwings known as halteres is combined with visual information for wing-steering behavior. Haltere input is necessary for free flight, making it difficult to study the effects of haltere ablation under natural flight conditions. We thus used tethered Drosophila melanogaster flies to examine the relationship between halteres and the visual system, using wide-field motion or moving figures as visual stimuli. Haltere input was altered by surgically decreasing its mass, or by removing it entirely. Haltere removal does not affect the flies' ability to flap or steer their wings, but it does increase the temporal frequency at which they modify their wingbeat amplitude. Reducing the haltere mass decreases the optomotor reflex response to wide-field motion, and removing the haltere entirely does not further decrease the response. Decreasing the mass does not attenuate the response to figure motion, but removing the entire haltere does attenuate the response. When flies are allowed to control a visual stimulus in closed-loop conditions, haltereless flies fixate figures with the same acuity as intact flies, but cannot stabilize a wide-field stimulus as accurately as intact flies can. These manipulations suggest that the haltere mass is influential in wide-field stabilization, but less so in figure tracking. In both figure and wide-field experiments, we observe responses to visual motion with and without halteres, indicating that during tethered flight, intact halteres are not strictly necessary for visually guided wing-steering responses. However, the haltere feedback loop may operate in a context-dependent way to modulate responses to visual motion. © 2015. Published by The Company of Biologists Ltd.
Attention and apparent motion.
Horowitz, T; Treisman, A
1994-01-01
Two dissociations between short- and long-range motion in visual search are reported. Previous research has shown parallel processing for short-range motion and apparently serial processing for long-range motion. This finding has been replicated, and it has also been found that search for short-range targets can be impaired both by using bicontrast stimuli and by prior adaptation to the target direction of motion. Neither factor impaired search in long-range motion displays. Adaptation actually facilitated search with long-range displays, which is attributed to response-level effects. A feature-integration account of apparent motion is proposed. In this theory, short-range motion depends on specialized motion feature detectors operating in parallel across the display, but subject to selective adaptation, whereas attention is needed to link successive elements when they appear at greater separations, or across opposite contrasts.
Extracting Depth From Motion Parallax in Real-World and Synthetic Displays
NASA Technical Reports Server (NTRS)
Hecht, Heiko; Kaiser, Mary K.; Aiken, William; Null, Cynthia H. (Technical Monitor)
1994-01-01
In psychophysical studies on human sensitivity to visual motion parallax (MP), the use of computer displays is pervasive. However, a number of potential problems are associated with such displays: cue conflicts arise when observers accommodate to the screen surface, and observer head and body movements are often not reflected in the displays. We investigated observers' sensitivity to depth information in MP (slant, depth order, relative depth) using various real-world displays and their computer-generated analogs. Angle judgments of real-world stimuli were consistently superior to judgments that were based on computer-generated stimuli. Similar results were found for perceived depth order and relative depth. Perceptual competence of observers tends to be underestimated in research that is based on computer generated displays. Such findings cannot be generalized to more realistic viewing situations.
1992-01-01
results in stimulation of spatial-motion-location visual processes, which are known to take precedence over any other sensory or cognitive stimuli. In...or version he is flying. This was initially an observation that stimulated the birth of the human-factors engineering discipline during World War II...collisions with the surface, the pilot needs inputs to sensory channels other than the focal visual system. Properly designed auditory and
Kiryu, Tohru; So, Richard H Y
2007-09-25
Around three years ago, in the special issue on augmented and virtual reality in rehabilitation, the topic of simulator sickness was briefly discussed in relation to vestibular rehabilitation. Simulator sickness in virtual reality applications has also been referred to as visually induced motion sickness or cybersickness. Recently, studies on cybersickness have been reported in entertainment, training, gaming, and medical environments in several journals. Virtual stimuli can enlarge the sensation of presence, but they sometimes also evoke unpleasant sensations. In order to safely apply augmented and virtual reality to long-term rehabilitation treatment, the sensation of presence and cybersickness should be appropriately controlled. This issue presents the results of five studies conducted to evaluate visually induced effects and to speculate on their influence on virtual rehabilitation. In particular, the influence of visual and vestibular stimuli on cardiovascular responses is reported in terms of academic contribution.
Minimization of Retinal Slip Cannot Explain Human Smooth-Pursuit Eye Movements
NASA Technical Reports Server (NTRS)
Stone, Leland S.; Beutter, Brent R.; Null, Cynthia H. (Technical Monitor)
1998-01-01
Existing models assume that pursuit attempts a direct minimization of retinal image motion or "slip" (e.g. Robinson et al., 1986; Krauzlis & Lisberger, 1989). Using occluded line-figure stimuli, we have previously shown that humans can accurately pursue stimuli for which perfect tracking does not zero retinal slip. These findings are inconsistent with the standard control strategy of matching eye motion to a target-motion signal reconstructed by adding retinal slip and eye motion, but consistent with a visual front-end that estimates target motion via a global spatio-temporal integration for pursuit and perception. Another possible explanation is that pursuit simply attempts to minimize slip perpendicular to the segments (and neglects parallel "sliding" motion). To resolve this, 4 observers (3 naive) were asked to pursue the center of 2 types of stimuli with identical velocity-space descriptions and matched motion energy. The line-figure "diamond" stimulus was viewed through 2 invisible 3 deg-wide vertical apertures (38 cd/m2, equal to background) such that only the sinusoidal motion of 4 oblique line segments (44 cd/m2) was visible. The "cross" was identical except that the segments exchanged positions. Two trajectories (8's and infinity's) with 4 possible initial directions were randomly interleaved (1.25 cycles, 2.5 s period, Ax = Ay = 1.4 deg). In 91% of trials, the diamond appeared rigid. Correspondingly, pursuit was vigorous (mean H gain: 0.74) with a V/H aspect ratio of approximately 1 (mean: 0.9). Despite a valid rigid solution, the cross appeared rigid in only 8% of trials. Correspondingly, pursuit was weaker (mean H gain: 0.38) with an incorrect aspect ratio (mean: 1.5). If pursuit were just minimizing perpendicular slip, performance would be the same in both conditions.
The fate of task-irrelevant visual motion: perceptual load versus feature-based attention.
Taya, Shuichiro; Adams, Wendy J; Graf, Erich W; Lavie, Nilli
2009-11-18
We tested contrasting predictions derived from perceptual load theory and from recent feature-based selection accounts. Observers viewed moving, colored stimuli and performed low or high load tasks associated with one stimulus feature, either color or motion. The resultant motion aftereffect (MAE) was used to evaluate attentional allocation. We found that task-irrelevant visual features received less attention than co-localized task-relevant features of the same objects. Moreover, when color and motion features were co-localized yet perceived to belong to two distinct surfaces, feature-based selection was further increased at the expense of object-based co-selection. Load theory predicts that the MAE for task-irrelevant motion would be reduced with a higher load color task. However, this was not seen for co-localized features; perceptual load only modulated the MAE for task-irrelevant motion when this was spatially separated from the attended color location. Our results suggest that perceptual load effects are mediated by spatial selection and do not generalize to the feature domain. Feature-based selection operates to suppress processing of task-irrelevant, co-localized features, irrespective of perceptual load.
Neural dynamics of motion processing and speed discrimination.
Chey, J; Grossberg, S; Mingolla, E
1998-09-01
A neural network model of visual motion perception and speed discrimination is presented. The model shows how a distributed population code of speed tuning, which realizes a size-speed correlation, can be derived from the simplest mechanisms whereby activations of multiple spatially short-range filters of different size are transformed into speed-tuned cell responses. These mechanisms use transient cell responses to moving stimuli, output thresholds that covary with filter size, and competition. These mechanisms are proposed to occur in the V1-->MT cortical processing stream. The model reproduces empirically derived speed discrimination curves and simulates data showing how visual speed perception and discrimination can be affected by stimulus contrast, duration, dot density and spatial frequency. Model motion mechanisms are analogous to mechanisms that have been used to model 3-D form and figure-ground perception. The model forms the front end of a larger motion processing system that has been used to simulate how global motion capture occurs, and how spatial attention is drawn to moving forms. It provides a computational foundation for an emerging neural theory of 3-D form and motion perception.
Interaction of Perceptual Grouping and Crossmodal Temporal Capture in Tactile Apparent-Motion
Chen, Lihan; Shi, Zhuanghua; Müller, Hermann J.
2011-01-01
Previous studies have shown that in tasks requiring participants to report the direction of apparent motion, task-irrelevant mono-beeps can “capture” visual motion perception when the beeps occur temporally close to the visual stimuli. However, the contributions of the relative timing of multimodal events and the event structure, modulating uni- and/or crossmodal perceptual grouping, remain unclear. To examine this question and extend the investigation to the tactile modality, the current experiments presented tactile two-tap apparent-motion streams, with an SOA of 400 ms between successive, left-/right-hand middle-finger taps, accompanied by task-irrelevant, non-spatial auditory stimuli. The streams were shown for 90 seconds, and participants' task was to continuously report the perceived (left- or rightward) direction of tactile motion. In Experiment 1, each tactile stimulus was paired with an auditory beep, though odd-numbered taps were paired with an asynchronous beep, with audiotactile SOAs ranging from −75 ms to 75 ms. Perceived direction of tactile motion varied systematically with audiotactile SOA, indicative of a temporal-capture effect. In Experiment 2, two audiotactile SOAs—one short (75 ms), one long (325 ms)—were compared. The long-SOA condition preserved the crossmodal event structure (so the temporal-capture dynamics should have been similar to that in Experiment 1), but both beeps now occurred temporally close to the taps on one side (even-numbered taps). The two SOAs were found to produce opposite modulations of apparent motion, indicative of an influence of crossmodal grouping. In Experiment 3, only odd-numbered, but not even-numbered, taps were paired with auditory beeps. This abolished the temporal-capture effect and, instead, a dominant percept of apparent motion from the audiotactile side to the tactile-only side was observed independently of the SOA variation. 
These findings suggest that asymmetric crossmodal grouping leads to an attentional modulation of apparent motion, which inhibits crossmodal temporal-capture effects. PMID:21383834
The reference frame for encoding and retention of motion depends on stimulus set size.
Huynh, Duong; Tripathy, Srimant P; Bedell, Harold E; Öğmen, Haluk
2017-04-01
The goal of this study was to investigate the reference frames used in perceptual encoding and storage of visual motion information. In our experiments, observers viewed multiple moving objects and reported the direction of motion of a randomly selected item. Using a vector-decomposition technique, we computed performance during smooth pursuit with respect to a spatiotopic (nonretinotopic) and to a retinotopic component and compared them with performance during fixation, which served as the baseline. For the stimulus encoding stage, which precedes memory, we found that the reference frame depends on the stimulus set size. For a single moving target, the spatiotopic reference frame had the most significant contribution with some additional contribution from the retinotopic reference frame. When the number of items increased (Set Sizes 3 to 7), the spatiotopic reference frame was able to account for the performance. Finally, when the number of items became larger than 7, the distinction between reference frames vanished. We interpret this finding as a switch to a more abstract nonmetric encoding of motion direction. We found that the retinotopic reference frame was not used in memory. Taken together with other studies, our results suggest that, whereas a retinotopic reference frame may be employed for controlling eye movements, perception and memory use primarily nonretinotopic reference frames. Furthermore, the use of nonretinotopic reference frames appears to be capacity limited. In the case of complex stimuli, the visual system may use perceptual grouping in order to simplify the complexity of stimuli or resort to a nonmetric abstract coding of motion information.
Brain white matter microstructure is associated with susceptibility to motion-induced nausea.
Napadow, V; Sheehan, J; Kim, J; Dassatti, A; Thurler, A H; Surjanhata, B; Vangel, M; Makris, N; Schaechter, J D; Kuo, B
2013-05-01
Nausea is associated with significant morbidity, and there is a wide range in the propensity of individuals to experience nausea. The neural basis of the heterogeneity in nausea susceptibility is poorly understood. Our previous functional magnetic resonance imaging (fMRI) study in healthy adults showed that a visual motion stimulus caused activation in the right MT+/V5 area, and that increased sensation of nausea due to this stimulus was associated with increased activation in the right anterior insula. For the current study, we hypothesized that individual differences in visual motion-induced nausea are due to microstructural differences in the inferior fronto-occipital fasciculus (IFOF), the white matter tract connecting the right visual motion processing area (MT+/V5) and right anterior insula. To test this hypothesis, we acquired diffusion tensor imaging data from 30 healthy adults who were subsequently dichotomized into high and low nausea susceptibility groups based on the Motion Sickness Susceptibility Scale. We quantified diffusion along the IFOF for each subject based on axial diffusivity (AD), radial diffusivity (RD), mean diffusivity (MD), and fractional anisotropy (FA), and evaluated between-group differences in these diffusion metrics. Subjects with high susceptibility to nausea rated significantly (P < 0.001) higher nausea intensity to visual motion stimuli and had significantly (P < 0.05) lower AD and MD along the right IFOF compared to subjects with low susceptibility to nausea. This result suggests that differences in white matter microstructure within tracts connecting visual motion and nausea-processing brain areas may contribute to nausea susceptibility or may have resulted from an increased history of nausea episodes. © 2013 Blackwell Publishing Ltd.
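The four diffusion metrics compared between groups are standard functions of the diffusion tensor's eigenvalues. Assuming eigenvalues λ1 ≥ λ2 ≥ λ3 from a fitted tensor, they can be computed as follows; this is a generic sketch of the textbook formulas, not the study's processing pipeline.

```python
import numpy as np

def diffusion_metrics(evals):
    """Standard DTI scalar metrics from the three tensor eigenvalues."""
    l1, l2, l3 = sorted(evals, reverse=True)
    ad = l1                        # axial diffusivity: largest eigenvalue
    rd = (l2 + l3) / 2.0           # radial diffusivity: mean of the other two
    md = (l1 + l2 + l3) / 3.0      # mean diffusivity
    # Fractional anisotropy: normalized variance of the eigenvalues, in [0, 1]
    fa = np.sqrt(1.5 * ((l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2)
                 / (l1**2 + l2**2 + l3**2))
    return ad, rd, md, fa
```

An isotropic tensor (equal eigenvalues) gives FA = 0, while a strongly prolate tensor, the shape expected in a coherent white matter tract such as the IFOF, gives FA approaching 1.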
Parkinson, Rachel H; Little, Jacelyn M; Gray, John R
2017-04-20
Neonicotinoids are known to affect insect navigation and vision; however, the mechanisms of these effects are not fully understood. A visual motion sensitive neuron in the locust, the Descending Contralateral Movement Detector (DCMD), integrates visual information and is involved in eliciting escape behaviours. The DCMD receives coded input from the compound eyes and monosynaptically excites motor neurons involved in flight and jumping. We show that imidacloprid (IMD) impairs neural responses to visual stimuli at sublethal concentrations, and these effects are sustained two and twenty-four hours after treatment. Most significantly, IMD disrupted bursting, a coding property important for motion detection. Specifically, IMD reduced the DCMD peak firing rate within bursts at ecologically relevant doses of 10 ng/g (ng IMD per g locust body weight). Effects on DCMD firing translate to deficits in collision avoidance behaviours: exposure to 10 ng/g IMD attenuates escape manoeuvres while 100 ng/g IMD prevents the ability to fly and walk. We show that, at ecologically relevant doses, IMD causes significant and lasting impairment of an important pathway involved with visual sensory coding and escape behaviours. These results show, for the first time, that a neonicotinoid pesticide directly impairs an important, taxonomically conserved, motion-sensitive visual network.
Lindemann, J P; Kern, R; Michaelis, C; Meyer, P; van Hateren, J H; Egelhaaf, M
2003-03-01
A high-speed panoramic visual stimulation device is introduced which is suitable to analyse visual interneurons during stimulation with rapid image displacements as experienced by fast moving animals. The responses of an identified motion sensitive neuron in the visual system of the blowfly to behaviourally generated image sequences are very complex and hard to predict from the established input circuitry of the neuron. This finding suggests that the computational significance of visual interneurons can only be assessed if they are characterised not only by conventional stimuli as are often used for systems analysis, but also by behaviourally relevant input.
Kubo, Fumi; Hablitzel, Bastian; Dal Maschio, Marco; Driever, Wolfgang; Baier, Herwig; Arrenberg, Aristides B
2014-03-19
Animals respond to whole-field visual motion with compensatory eye and body movements in order to stabilize both their gaze and position with respect to their surroundings. In zebrafish, rotational stimuli need to be distinguished from translational stimuli to drive the optokinetic and the optomotor responses, respectively. Here, we systematically characterize the neural circuits responsible for these operations using a combination of optogenetic manipulation and in vivo calcium imaging during optic flow stimulation. By recording the activity of thousands of neurons within the area pretectalis (APT), we find four bilateral pairs of clusters that process horizontal whole-field motion and functionally classify eleven prominent neuron types with highly selective response profiles. APT neurons are prevalently direction selective, either monocularly or binocularly driven, and hierarchically organized to distinguish between rotational and translational optic flow. Our data predict a wiring diagram of a neural circuit tailored to drive behavior that compensates for self-motion. Copyright © 2014 Elsevier Inc. All rights reserved.
Serchi, V; Peruzzi, A; Cereatti, A; Della Croce, U
2016-01-01
The knowledge of the visual strategies adopted while walking in cognitively engaging environments is extremely valuable. Analyzing gaze when a treadmill and a virtual reality environment are used as motor rehabilitation tools is therefore critical. Being completely unobtrusive, remote eye-trackers are the most appropriate way to measure the point of gaze. Still, the point of gaze measurements are affected by experimental conditions such as head range of motion and visual stimuli. This study assesses the usability limits and measurement reliability of a remote eye-tracker during treadmill walking while visual stimuli are projected. During treadmill walking, the head remained within the remote eye-tracker workspace. Generally, the quality of the point of gaze measurements declined as the distance from the remote eye-tracker increased and data loss occurred for large gaze angles. The stimulus location (a dot-target) did not influence the point of gaze accuracy, precision, and trackability during both standing and walking. Similar results were obtained when the dot-target was replaced by a static or moving 2D target and "region of interest" analysis was applied. These findings foster the feasibility of the use of a remote eye-tracker for the analysis of gaze during treadmill walking in virtual reality environments.
Kerzel, Dirk
2003-05-01
Observers' judgments of the final position of a moving target are typically shifted in the direction of implied motion ("representational momentum"). The role of attention is unclear: visual attention may be necessary to maintain or halt target displacement. When attention was captured by irrelevant distractors presented during the retention interval, forward displacement after implied target motion disappeared, suggesting that attention may be necessary to maintain mental extrapolation of target motion. In a further corroborative experiment, the deployment of attention was measured after a sequence of implied motion, and faster responses were observed to stimuli appearing in the direction of motion. Thus, attention may guide the mental extrapolation of target motion. Additionally, eye movements were measured during stimulus presentation and retention interval. The results showed that forward displacement with implied motion does not depend on eye movements. Differences between implied and smooth motion are discussed with respect to recent neurophysiological findings.
Chen, Nihong; Bi, Taiyong; Zhou, Tiangang; Li, Sheng; Liu, Zili; Fang, Fang
2015-07-15
Much has been debated about whether the neural plasticity mediating perceptual learning takes place at the sensory or decision-making stage in the brain. To investigate this, we trained human subjects in a visual motion direction discrimination task. Behavioral performance and BOLD signals were measured before, immediately after, and two weeks after training. Parallel to subjects' long-lasting behavioral improvement, the neural selectivity in V3A and the effective connectivity from V3A to IPS (intraparietal sulcus, a motion decision-making area) exhibited a persistent increase for the trained direction. Moreover, the improvement was well explained by a linear combination of the selectivity and connectivity increases. These findings suggest that the long-term neural mechanisms of motion perceptual learning are implemented by sharpening cortical tuning to trained stimuli at the sensory processing stage, as well as by optimizing the connections between sensory and decision-making areas in the brain. Copyright © 2015 Elsevier Inc. All rights reserved.
Adaptation disrupts motion integration in the primate dorsal stream
Patterson, Carlyn A.; Wissig, Stephanie C.; Kohn, Adam
2014-01-01
Sensory systems adjust continuously to the environment. The effects of recent sensory experience—or adaptation—are typically assayed by recording in a relevant subcortical or cortical network. However, adaptation effects cannot be localized to a single, local network. Adjustments in one circuit or area will alter the input provided to others, with unclear consequences for computations implemented in the downstream circuit. Here we show that prolonged adaptation with drifting gratings, which alters responses in the early visual system, impedes the ability of area MT neurons to integrate motion signals in plaid stimuli. Perceptual experiments reveal a corresponding loss of plaid coherence. A simple computational model shows how the altered representation of motion signals in early cortex can derail integration in MT. Our results suggest that the effects of adaptation cascade through the visual system, derailing the downstream representation of distinct stimulus attributes. PMID:24507198
Rendering visual events as sounds: Spatial attention capture by auditory augmented reality.
Stone, Scott A; Tata, Matthew S
2017-01-01
Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help with spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting salient visual events and rendering them as localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as to determine the direction of motion of a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurate encoding of the direction of visual motion. This approach should become more practical as neuromorphic devices become faster and smaller.
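The detection principle behind such cameras is that a pixel fires an ON or OFF event when its log intensity changes by more than a contrast threshold since the pixel's last event. The sketch below applies that rule to whole frames; the threshold value and the frame-based (synchronous) formulation are simplifying assumptions, since a real DAVIS sensor operates asynchronously per pixel.

```python
import numpy as np

def detect_events(log_ref, frame, threshold=0.2):
    """Emit +1 (ON) / -1 (OFF) events where log intensity has changed by more
    than `threshold` relative to each pixel's reference level, and reset the
    reference at pixels that fired. Returns (event map, updated references)."""
    log_i = np.log(frame + 1e-6)           # small offset avoids log(0)
    diff = log_i - log_ref
    events = np.zeros_like(diff, dtype=int)
    events[diff > threshold] = 1           # ON events (brightening)
    events[diff < -threshold] = -1         # OFF events (dimming)
    fired = events != 0
    new_ref = log_ref.copy()
    new_ref[fired] = log_i[fired]          # reference resets only where events fired
    return events, new_ref
```

Because the comparison is in the log domain, the same threshold corresponds to the same *relative* contrast change at any absolute brightness, which is what makes the sensor robust across lighting conditions.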
Are visual peripheries forever young?
Burnat, Kalina
2015-01-01
The paper presents a concept of lifelong plasticity of peripheral vision. Central vision processing is accepted as critical and irreplaceable for normal perception in humans. While peripheral processing chiefly carries information about motion stimuli features and redirects foveal attention to new objects, it can also take over functions typical for central vision. Here I review the data showing the plasticity of peripheral vision found in functional, developmental, and comparative studies. Even though it is well established that afferent projections from central and peripheral retinal regions are not established simultaneously during early postnatal life, central vision is commonly used as a general model of development of the visual system. Based on clinical studies and visually deprived animal models, I describe how central and peripheral visual field representations separately rely on early visual experience. Peripheral visual processing (motion) is more affected by binocular visual deprivation than central visual processing (spatial resolution). In addition, our own experimental findings show the possible recruitment of coarse peripheral vision for fine spatial analysis. Accordingly, I hypothesize that the balance between central and peripheral visual processing, established in the course of development, is susceptible to plastic adaptations during the entire life span, with peripheral vision capable of taking over central processing.
Meyer, Georg F.; Shao, Fei; White, Mark D.; Hopkins, Carl; Robotham, Antony J.
2013-01-01
Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations we propose to use the observed effect as a robust objective test for presence and fidelity in VR. PMID:23840760
Sensitivity of neurons in the middle temporal area of marmoset monkeys to random dot motion.
Chaplin, Tristan A; Allitt, Benjamin J; Hagan, Maureen A; Price, Nicholas S C; Rajan, Ramesh; Rosa, Marcello G P; Lui, Leo L
2017-09-01
Neurons in the middle temporal area (MT) of the primate cerebral cortex respond to moving visual stimuli. The sensitivity of MT neurons to motion signals can be characterized by using random-dot stimuli, in which the strength of the motion signal is manipulated by adding different levels of noise (elements that move in random directions). In macaques, this has allowed the calculation of "neurometric" thresholds. We characterized the responses of MT neurons in sufentanil/nitrous oxide-anesthetized marmoset monkeys, a species that has attracted considerable recent interest as an animal model for vision research. We found that MT neurons show a wide range of neurometric thresholds and that the responses of the most sensitive neurons could account for the behavioral performance of macaques and humans. We also investigated factors that contributed to the wide range of observed thresholds. The difference in firing rate between responses to motion in the preferred and null directions was the most effective predictor of neurometric threshold, whereas the direction tuning bandwidth had no correlation with the threshold. We also showed that it is possible to obtain reliable estimates of neurometric thresholds using stimuli that were not highly optimized for each neuron, as is often necessary when recording from large populations of neurons with different receptive fields concurrently, as was the case in this study. These results demonstrate that marmoset MT shows an essential physiological similarity to macaque MT and suggest that its neurons are capable of representing motion signals that allow for comparable motion-in-noise judgments. NEW & NOTEWORTHY We report the activity of neurons in marmoset MT in response to random-dot motion stimuli of varying coherence. The information carried by individual MT neurons was comparable to that of the macaque, and the maximum firing rates were a strong predictor of sensitivity. 
Our study provides key information regarding the neural basis of motion perception in the marmoset, a small primate species that is becoming increasingly popular as an experimental model. Copyright © 2017 the American Physiological Society.
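The neurometric analysis this abstract refers to treats an ideal observer reading out the neuron's preferred- versus null-direction spike counts: the ROC area between the two response distributions gives the neuron's expected proportion correct in a two-alternative discrimination. A minimal sketch (the 82%-correct criterion and the toy spike counts are illustrative assumptions; the paper's fitting procedure may differ):

```python
def roc_auc(pref, null):
    """Probability that a random preferred-direction spike count exceeds a
    random null-direction count: an ideal observer's proportion correct in
    a two-alternative direction discrimination (ties count as guesses)."""
    wins = ties = 0
    for p in pref:
        for n in null:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pref) * len(null))

def neurometric_threshold(responses, criterion=0.82):
    """Lowest motion coherence whose ROC area reaches the criterion.
    `responses` maps coherence -> (preferred counts, null counts)."""
    for coh in sorted(responses):
        pref, null = responses[coh]
        if roc_auc(pref, null) >= criterion:
            return coh
    return None  # neuron never reaches criterion in the tested range
```

A large preferred/null firing-rate difference separates the two distributions at low coherence, which is why that difference predicts the threshold so well.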
Fixation not required: characterizing oculomotor attention capture for looming stimuli.
Lewis, Joanna E; Neider, Mark B
2015-10-01
A stimulus moving toward us, such as a ball thrown in our direction or a vehicle braking suddenly in front of ours, often requires a rapid response. Using a visual search task in which target and distractor items were systematically associated with a looming object, we explored whether this sort of looming motion captures attention, the nature of such capture using eye movement measures (overt/covert), and the extent to which such capture effects are more closely tied to motion onset or the motion itself. We replicated previous findings indicating that looming motion induces response time benefits and costs during visual search (Lin, Franconeri, & Enns, Psychological Science, 19(7), 686-693, 2008). These differences in response times were independent of fixation, indicating that these capture effects did not necessitate overt attentional shifts to a looming object for search benefits or costs to occur. Interestingly, we found no differences in capture benefits and costs associated with differences in looming motion type. Combined, our results suggest that capture effects associated with looming motion are more likely subserved by covert attentional mechanisms rather than overt mechanisms, and that attention capture for looming motion is likely related to the motion itself rather than the onset of motion.
Dual processing of visual rotation for bipedal stance control.
Day, Brian L; Muller, Timothy; Offord, Joanna; Di Giulio, Irene
2016-10-01
When standing, the gain of the body-movement response to a sinusoidally moving visual scene has been shown to get smaller with faster stimuli, possibly through changes in the apportioning of visual flow to self-motion or environment motion. We investigated whether visual-flow speed similarly influences the postural response to a discrete, unidirectional rotation of the visual scene in the frontal plane. Contrary to expectation, the evoked postural response consisted of two sequential components with opposite relationships to visual motion speed. With faster visual rotation the early component became smaller, not through a change in gain but by changes in its temporal structure, while the later component grew larger. We propose that the early component arises from the balance control system minimising apparent self-motion, while the later component stems from the postural system realigning the body with gravity. The source of visual motion is inherently ambiguous such that movement of objects in the environment can evoke self-motion illusions and postural adjustments. Theoretically, the brain can mitigate this problem by combining visual signals with other types of information. A Bayesian model that achieves this was previously proposed and predicts a decreasing gain of postural response with increasing visual motion speed. Here we test this prediction for discrete, unidirectional, full-field visual rotations in the frontal plane of standing subjects. The speed (0.75–48 deg s⁻¹) and direction of visual rotation was pseudo-randomly varied and mediolateral responses were measured from displacements of the trunk and horizontal ground reaction forces. The behaviour evoked by this visual rotation was more complex than has hitherto been reported, consisting broadly of two consecutive components with respective latencies of ∼190 ms and >0.7 s. Both components were sensitive to visual rotation speed, but with diametrically opposite relationships. 
Thus, the early component decreased with faster visual rotation, while the later component increased. Furthermore, the decrease in size of the early component was not achieved by a simple attenuation of gain, but by a change in its temporal structure. We conclude that the two components represent expressions of different motor functions, both pertinent to the control of bipedal stance. We propose that the early response stems from the balance control system attempting to minimise unintended body motion, while the later response arises from the postural control system attempting to align the body with gravity. © 2016 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.
Yu, Tzu-Ying; Jacobs, Robert J.; Anstice, Nicola S.; Paudel, Nabin; Harding, Jane E.; Thompson, Benjamin
2013-01-01
Purpose. We developed and validated a technique for measuring global motion perception in 2-year-old children, and assessed the relationship between global motion perception and other measures of visual function. Methods. Random dot kinematogram (RDK) stimuli were used to measure motion coherence thresholds in 366 children at risk of neurodevelopmental problems at 24 ± 1 months of age. RDKs of variable coherence were presented and eye movements were analyzed offline to grade the direction of the optokinetic reflex (OKR) for each trial. Motion coherence thresholds were calculated by fitting psychometric functions to the resulting datasets. Test–retest reliability was assessed in 15 children, and motion coherence thresholds were measured in a group of 10 adults using OKR and behavioral responses. Standard age-appropriate optometric tests also were performed. Results. Motion coherence thresholds were measured successfully in 336 (91.8%) children using the OKR technique, but only 31 (8.5%) using behavioral responses. The mean threshold was 41.7 ± 13.5% for 2-year-old children and 3.3 ± 1.2% for adults. Within-assessor reliability and test–retest reliability were high in children. Children's motion coherence thresholds were significantly correlated with stereoacuity (LANG I & II test, ρ = 0.29, P < 0.001; Frisby, ρ = 0.17, P = 0.022), but not with binocular visual acuity (ρ = 0.11, P = 0.07). In adults OKR and behavioral motion coherence thresholds were highly correlated (intraclass correlation = 0.81, P = 0.001). Conclusions. Global motion perception can be measured in 2-year-old children using the OKR. This technique is reliable and data from adults suggest that motion coherence thresholds based on the OKR are related to motion perception. Global motion perception was related to stereoacuity in children. PMID:24282224
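The threshold estimation step this abstract describes — fitting a psychometric function to the proportion of correctly graded OKR responses at each coherence level — can be sketched with a simple grid-search logistic fit. The functional form, parameter grids, and 75%-correct threshold convention are illustrative assumptions; the study's actual fitting procedure is not specified here.

```python
import math

def logistic(c, alpha, beta):
    """Two-alternative psychometric function: P(correct OKR direction
    grading) as a function of motion coherence c, rising from chance
    (0.5) to 1.0; alpha is the 75%-correct point, beta the slope."""
    return 0.5 + 0.5 / (1.0 + math.exp(-(c - alpha) / beta))

def fit_threshold(coherences, p_correct):
    """Least-squares grid-search fit of the logistic; returns alpha,
    i.e. the estimated motion coherence threshold."""
    best_err, best_alpha = float("inf"), None
    for a in [x / 100 for x in range(1, 100)]:       # candidate thresholds
        for b in [x / 100 for x in range(1, 30)]:    # candidate slopes
            err = sum((logistic(c, a, b) - p) ** 2
                      for c, p in zip(coherences, p_correct))
            if err < best_err:
                best_err, best_alpha = err, a
    return best_alpha
```

In practice a maximum-likelihood fit over binomial trial counts would be preferred; the grid search just makes the threshold-as-fitted-parameter idea concrete.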
Matsumoto, Yukiko; Takahashi, Hideyuki; Murai, Toshiya; Takahashi, Hidehiko
2015-01-01
Schizophrenia patients have impairments at several levels of cognition including visual attention (eye movements), perception, and social cognition. However, it remains unclear how lower-level cognitive deficits influence higher-level cognition. To elucidate the hierarchical path linking deficient cognitions, we focused on biological motion perception, which is involved in both the early stage of visual perception (attention) and higher social cognition, and is impaired in schizophrenia. Seventeen schizophrenia patients and 18 healthy controls participated in the study. Using point-light walker stimuli, we examined eye movements during biological motion perception in schizophrenia. We assessed relationships among eye movements, biological motion perception and empathy. In the biological motion detection task, schizophrenia patients showed lower accuracy and fixated longer than healthy controls. As opposed to controls, patients exhibiting longer fixation durations and fewer numbers of fixations demonstrated higher accuracy. Additionally, in the patient group, the correlations between accuracy and affective empathy index and between eye movement index and affective empathy index were significant. The altered gaze patterns in patients indicate that top-down attention compensates for impaired bottom-up attention. Furthermore, aberrant eye movements might lead to deficits in biological motion perception and finally link to social cognitive impairments. The current findings merit further investigation for understanding the mechanism of social cognitive training and its development. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
Encoding model of temporal processing in human visual cortex.
Stigliani, Anthony; Jeska, Brianna; Grill-Spector, Kalanit
2017-12-19
How is temporal information processed in human visual cortex? Visual input is relayed to V1 through segregated transient and sustained channels in the retina and lateral geniculate nucleus (LGN). However, there is intense debate as to how sustained and transient temporal channels contribute to visual processing beyond V1. The prevailing view associates transient processing predominately with motion-sensitive regions and sustained processing with ventral stream regions, while the opposing view suggests that both temporal channels contribute to neural processing beyond V1. Using fMRI, we measured cortical responses to time-varying stimuli and then implemented a two-temporal-channel encoding model to evaluate the contributions of each channel. Unlike the general linear model of fMRI, which predicts responses directly from the stimulus, the encoding approach first models neural responses to the stimulus, from which fMRI responses are derived. This encoding approach not only predicts cortical responses to time-varying stimuli from milliseconds to seconds but also reveals differential contributions of temporal channels across visual cortex. Consistent with the prevailing view, motion-sensitive regions and adjacent lateral occipitotemporal regions are dominated by transient responses. However, ventral occipitotemporal regions are driven by both sustained and transient channels, with transient responses exceeding the sustained. These findings propose a rethinking of temporal processing in the ventral stream and suggest that transient processing may contribute to rapid extraction of the content of the visual input. Importantly, our encoding approach has vast implications, because it can be applied with fMRI to decipher neural computations in millisecond resolution in any part of the brain. Copyright © 2017 the Author(s). Published by PNAS.
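The core idea of the two-temporal-channel model described above — a sustained channel that tracks stimulus presence plus a transient channel that responds to onsets and offsets — can be illustrated in miniature. This toy deliberately omits the paper's channel impulse-response functions, nonlinearities, and hemodynamic stage, and the channel weights are free parameters to be fit per region.

```python
def two_channel_response(stim, w_sustained, w_transient):
    """Toy two-temporal-channel neural response to a binary stimulus
    time course (one sample per time step): the sustained channel
    follows the stimulus itself, while the transient channel responds
    to its absolute change (onsets and offsets alike)."""
    sustained = list(stim)
    transient = [abs(stim[t] - stim[t - 1]) if t > 0 else abs(stim[0])
                 for t in range(len(stim))]
    return [w_sustained * s + w_transient * tr
            for s, tr in zip(sustained, transient)]
```

Fitting w_sustained and w_transient per voxel (after convolving the modeled neural response with a hemodynamic response function) is what lets the encoding approach apportion a region's fMRI signal between the two channels.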
Implied dynamics biases the visual perception of velocity.
La Scaleia, Barbara; Zago, Myrka; Moscatelli, Alessandro; Lacquaniti, Francesco; Viviani, Paolo
2014-01-01
We expand on the anecdotal report by Johansson that back-and-forth linear harmonic motions appear uniform. Six experiments explore the role of shape and spatial orientation of the trajectory of a point-light target in the perceptual judgment of uniform motion. In Experiment 1, the target oscillated back-and-forth along a circular arc around an invisible pivot. The imaginary segment from the pivot to the midpoint of the trajectory could be oriented vertically downward (consistent with an upright pendulum), horizontally leftward, or vertically upward (upside-down). In Experiments 2 to 5, the target moved uni-directionally. The effect of suppressing the alternation of movement directions was tested with curvilinear (Experiments 2 and 3) or rectilinear (Experiments 4 and 5) paths. Experiment 6 replicated the upright condition of Experiment 1, but participants were asked to hold their gaze on a fixation point. When some features of the trajectory evoked the motion of either a simple pendulum or a mass-spring system, observers identified as uniform the kinematic profiles close to harmonic motion. The bias towards harmonic motion was most consistent in the upright orientation of Experiments 1 and 6. The bias disappeared when the stimuli were incompatible with both pendulum and mass-spring models (Experiments 3 to 5). The results are compatible with the hypothesis that the perception of dynamic stimuli is biased by the laws of motion obeyed by natural events, so that only natural motions appear uniform.
NASA Technical Reports Server (NTRS)
Harm, D. L.; Parker, D. E.
1993-01-01
The research described in this paper is intended to support development and evaluation of preflight adaptation training (PAT) apparatus and procedures. Successful training depends on appropriate manipulation of visual and inertial stimuli that control perception of self-motion and self-orientation. For one part of this process, astronauts are trained to report their self-motion and self-orientation experiences. Before their space mission, they are exposed to the altered sensory environments produced by the PAT trainers. During and after the mission, they report their motion and orientation experiences. Subsequently, they are again exposed to the PAT trainers and are asked to describe relationships between their experiences in microgravity and following entry and their experiences in the trainers.
Ventral aspect of the visual form pathway is not critical for the perception of biological motion
Gilaie-Dotan, Sharon; Saygin, Ayse Pinar; Lorenzi, Lauren J.; Rees, Geraint; Behrmann, Marlene
2015-01-01
Identifying the movements of those around us is fundamental for many daily activities, such as recognizing actions, detecting predators, and interacting with others socially. A key question concerns the neurobiological substrates underlying biological motion perception. Although the ventral “form” visual cortex is standardly activated by biologically moving stimuli, whether these activations are functionally critical for biological motion perception or are epiphenomenal remains unknown. To address this question, we examined whether focal damage to regions of the ventral visual cortex, resulting in significant deficits in form perception, adversely affects biological motion perception. Six patients with damage to the ventral cortex were tested with sensitive point-light display paradigms. All patients were able to recognize unmasked point-light displays and their perceptual thresholds were not significantly different from those of three different control groups, one of which comprised brain-damaged patients with spared ventral cortex (n > 50). Importantly, these six patients performed significantly better than patients with damage to regions critical for biological motion perception. To assess the necessary contribution of different regions in the ventral pathway to biological motion perception, we complement the behavioral findings with a fine-grained comparison between the lesion location and extent, and the cortical regions standardly implicated in biological motion processing. This analysis revealed that the ventral aspects of the form pathway (e.g., fusiform regions, ventral extrastriate body area) are not critical for biological motion perception. We hypothesize that the role of these ventral regions is to provide enhanced multiview/posture representations of the moving person rather than to represent biological motion perception per se. PMID:25583504
Parallel search for conjunctions with stimuli in apparent motion.
Casco, C; Ganis, G
1999-01-01
A series of experiments was conducted to determine whether apparent motion tends to follow the similarity rule (i.e. is attribute-specific) and to investigate the underlying mechanism. Stimulus duration thresholds were measured during a two-alternative forced-choice task in which observers detected either the location or the motion direction of target groups defined by the conjunction of size and orientation. Target element positions were randomly chosen within a nominally defined rectangular subregion of the display (target region). The target region was presented either statically (followed by a 250 ms duration mask) or dynamically, displaced by a small distance (18 min of arc) from frame to frame. In the motion display, the position of both target and background elements was changed randomly from frame to frame within the respective areas to abolish spatial correspondence over time. Stimulus duration thresholds were lower in the motion than in the static task, indicating that target detection in the dynamic condition does not rely on the explicit identification of target elements in each static frame. Increasing the distractor-to-target ratio was found to reduce detectability in the static, but not in the motion task. This indicates that the perceptual segregation of the target is effortless and parallel with motion but not with static displays. The pattern of results holds regardless of the task or search paradigm employed. The detectability in the motion condition can be improved by increasing the number of frames and/or by reducing the width of the target area. Furthermore, parallel search in the dynamic condition can be conducted with both short-range and long-range motion stimuli. Finally, apparent motion of conjunctions is insufficient on its own to support location decision and is disrupted by random visual noise. 
Overall, these findings show that (i) the mechanism underlying apparent motion is attribute-specific; (ii) the motion system mediates temporal integration of feature conjunctions before they are identified by the static system; and (iii) target detectability in these stimuli relies upon a nonattentive, cooperative, directionally selective motion mechanism that responds to high-level attributes (conjunction of size and orientation).
Eye movement instructions modulate motion illusion and body sway with Op Art.
Kapoula, Zoï; Lang, Alexandre; Vernet, Marine; Locher, Paul
2015-01-01
Op Art generates illusory visual motion. It has been proposed that eye movements participate in such illusion. This study examined the effect of eye movement instructions (fixation vs. free exploration) on the sensation of motion as well as the body sway of subjects viewing Op Art paintings. Twenty-eight healthy adults in orthostatic stance were successively exposed to three visual stimuli consisting of one figure representing a cross (baseline condition) and two Op Art paintings providing a sense of motion in depth: Bridget Riley's Movements in Squares and Akiyoshi Kitaoka's Rollers. Before their exposure to the Op Art images, participants were instructed either to fixate at the center of the image (fixation condition) or to explore the artwork (free viewing condition). Posture was measured for 30 s per condition using a body-fixed sensor (accelerometer). The major finding of this study is that the two Op Art paintings induced a larger antero-posterior body sway both in terms of speed and displacement and an increased motion illusion in the free viewing condition as compared to the fixation condition. For body sway, this effect was significant for the Riley painting, while for motion illusion this effect was significant for Kitaoka's image. These results are attributed to macro-saccades presumably occurring under free viewing instructions, and most likely to the small vergence drifts during fixations following the saccades; such movements in interaction with visual properties of each image would increase either the illusory motion sensation or the antero-posterior body sway.
Perceptual grouping across eccentricity.
Tannazzo, Teresa; Kurylo, Daniel D; Bukhari, Farhan
2014-10-01
Across the visual field, progressive differences exist in neural processing as well as perceptual abilities. Expansion of stimulus scale across eccentricity compensates for some basic visual capacities, but not for high-order functions. It was hypothesized that, as with many higher-order functions, perceptual grouping ability should decline across eccentricity. To test this prediction, psychophysical measurements of grouping were made across eccentricity. Participants indicated the dominant grouping of dot grids in which grouping was based upon luminance, motion, orientation, or proximity. Across trials, the organization of stimuli was systematically decreased until perceived grouping became ambiguous. For all stimulus features, grouping ability remained relatively stable until 40°, beyond which thresholds were significantly elevated. The pattern of change across eccentricity varied across stimulus feature, in which stimulus scale, dot size, or stimulus size interacted with eccentricity effects. These results demonstrate that perceptual grouping of such stimuli is not reliant upon foveal viewing, and suggest that selection of dominant grouping patterns from ambiguous displays operates similarly across much of the visual field. Copyright © 2014 Elsevier Ltd. All rights reserved.
Task relevance predicts gaze in videos of real moving scenes.
Howard, Christina J; Gilchrist, Iain D; Troscianko, Tom; Behera, Ardhendu; Hogg, David C
2011-09-01
Low-level stimulus salience and task relevance together determine the human fixation priority assigned to scene locations (Fecteau and Munoz in Trends Cogn Sci 10(8):382-390, 2006). However, surprisingly little is known about the contribution of task relevance to eye movements during real-world visual search where stimuli are in constant motion and where the 'target' for the visual search is abstract and semantic in nature. Here, we investigate this issue when participants continuously search an array of four closed-circuit television (CCTV) screens for suspicious events. We recorded eye movements whilst participants watched real CCTV footage and moved a joystick to continuously indicate perceived suspiciousness. We find that when multiple areas of a display compete for attention, gaze is allocated according to relative levels of reported suspiciousness. Furthermore, this measure of task relevance accounted for twice the amount of variance in gaze likelihood as the amount of low-level visual changes over time in the video stimuli.
Samar, Vincent J; Parasnis, Ila
2007-12-01
Studies have reported a right visual field (RVF) advantage for coherent motion detection by deaf and hearing signers but not non-signers. Yet two studies [Bosworth R. G., & Dobkins, K. R. (2002). Visual field asymmetries for motion processing in deaf and hearing signers. Brain and Cognition, 49, 170-181; Samar, V. J., & Parasnis, I. (2005). Dorsal stream deficits suggest hidden dyslexia among deaf poor readers: Correlated evidence from reduced perceptual speed and elevated coherent motion detection thresholds. Brain and Cognition, 58, 300-311.] reported a small, non-significant RVF advantage for deaf signers when short duration motion stimuli were used (200-250 ms). Samar and Parasnis (2005) reported that this small RVF advantage became significant when non-verbal IQ was statistically controlled. This paper presents extended analyses of the correlation between non-verbal IQ and visual field asymmetries in the data set of Samar and Parasnis (2005). We speculate that this correlation might plausibly be driven by individual differences either in age of acquisition of American Sign Language (ASL) or in the degree of neurodevelopmental insult associated with various etiologies of deafness. Limited additional analyses are presented that indicate a need for further research on the cause of this apparent IQ-laterality relationship. Some potential implications of this relationship for lateralization studies of deaf signers are discussed. Controlling non-verbal IQ may improve the reliability of short duration coherent motion tasks to detect adaptive dorsal stream lateralization due to exposure to ASL in deaf research participants.
Emotion categorization of body expressions in narrative scenarios
Volkova, Ekaterina P.; Mohler, Betty J.; Dodds, Trevor J.; Tesch, Joachim; Bülthoff, Heinrich H.
2014-01-01
Humans can recognize emotions expressed through body motion with high accuracy even when the stimuli are impoverished. However, most of the research on body motion has relied on exaggerated displays of emotions. In this paper we present two experiments where we investigated whether emotional body expressions could be recognized when they were recorded during natural narration. Our actors were free to use their entire body, face, and voice to express emotions, but our resulting visual stimuli used only the upper body motion trajectories in the form of animated stick figures. Observers were asked to perform an emotion recognition task on short motion sequences using a large and balanced set of emotions (amusement, joy, pride, relief, surprise, anger, disgust, fear, sadness, shame, and neutral). Even with only upper body motion available, our results show recognition accuracy significantly above chance level and high consistency rates among observers. In our first experiment, which used a more classic emotion induction setup, all emotions were well recognized. In the second study, which employed narrations, four basic emotion categories (joy, anger, fear, and sadness), three non-basic emotion categories (amusement, pride, and shame) and the "neutral" category were recognized above chance. Interestingly, especially in the second experiment, observers showed a bias toward anger when recognizing the motion sequences for emotions. We discovered that similarities between motion sequences across the emotions along such properties as mean motion speed, number of peaks in the motion trajectory and mean motion span can explain a large proportion of the variation in observers' responses. Overall, our results show that upper body motion is informative for emotion recognition in narrative scenarios. PMID:25071623
A method for real-time visual stimulus selection in the study of cortical object perception.
Leeds, Daniel D; Tarr, Michael J
2016-06-01
The properties utilized by visual object perception in the mid- and high-level ventral visual pathway are poorly understood. To better establish and explore possible models of these properties, we adopt a data-driven approach in which we repeatedly interrogate neural units using functional Magnetic Resonance Imaging (fMRI) to establish each unit's image selectivity. This approach to imaging necessitates a search through a broad space of stimulus properties using a limited number of samples. To more quickly identify the complex visual features underlying human cortical object perception, we implemented a new functional magnetic resonance imaging protocol in which visual stimuli are selected in real-time based on BOLD responses to recently shown images. Two variations of this protocol were developed, one relying on natural object stimuli and a second based on synthetic object stimuli, both embedded in feature spaces based on the complex visual properties of the objects. During fMRI scanning, we continuously controlled stimulus selection in the context of a real-time search through these image spaces in order to maximize neural responses across pre-determined 1 cm³ brain regions. Elsewhere we have reported the patterns of cortical selectivity revealed by this approach (Leeds et al., 2014). In contrast, here our objective is to present more detailed methods and explore the technical and biological factors influencing the behavior of our real-time stimulus search. We observe that: 1) searches converged more reliably when exploring a more precisely parameterized space of synthetic objects; 2) real-time estimation of cortical responses to stimuli is reasonably consistent; 3) search behavior was acceptably robust to delays in stimulus displays and subject motion effects. Overall, our results indicate that real-time fMRI methods may provide a valuable platform for continuing study of localized neural selectivity, both for visual object representation and beyond. 
Copyright © 2016 Elsevier Inc. All rights reserved.
Xie, Jun; Xu, Guanghua; Luo, Ailing; Li, Min; Zhang, Sicong; Han, Chengcheng; Yan, Wenqiang
2017-08-14
As a spatial selective attention-based brain-computer interface (BCI) paradigm, the steady-state visual evoked potential (SSVEP) BCI has the advantages of a high information transfer rate, high tolerance to artifacts, and robust performance across users. However, its benefits come at the cost of mental load and fatigue arising from concentration on the visual stimuli. Noise, as a ubiquitous random perturbation with the power of randomness, may be exploited by the human visual system to enhance higher-level brain functions. In this study, a novel steady-state motion visual evoked potential (SSMVEP, i.e., one kind of SSVEP)-based BCI paradigm with spatiotemporal visual noise was used to investigate the influence of noise on the compensation of mental load and fatigue deterioration during prolonged attention tasks. Changes in α, θ, and θ+α powers, the θ/α ratio, and the electroencephalography (EEG) properties of amplitude, signal-to-noise ratio (SNR), and online accuracy were used to evaluate mental load and fatigue. We showed that presenting a moderate visual noise to participants could reliably alleviate the mental load and fatigue during online operation of a visual BCI that places demands on attentional processes. This demonstrated that noise could provide a superior solution for implementing visual attention-controlled BCI applications.
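The θ/α ratio used above as a fatigue index is computed from EEG band powers (θ ≈ 4-7 Hz, α ≈ 8-13 Hz). A minimal DFT-based sketch, assuming a single-channel signal; the study's actual spectral estimator (e.g., Welch averaging, exact band edges) is not specified here:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Mean squared DFT magnitude of `signal` over bins within
    [f_lo, f_hi] Hz, given sampling rate fs (a crude power estimate)."""
    n = len(signal)
    total, count = 0.0, 0
    for k in range(1, n // 2):
        f = k * fs / n                      # frequency of DFT bin k
        if f_lo <= f <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n)
                     for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n)
                      for t in range(n))
            total += (re * re + im * im) / n
            count += 1
    return total / count if count else 0.0

def theta_alpha_ratio(signal, fs):
    """theta/alpha power ratio: rises with mental fatigue in many EEG studies."""
    alpha = band_power(signal, fs, 8, 13)
    theta = band_power(signal, fs, 4, 7)
    return theta / alpha if alpha else float("inf")
```

For real recordings one would use an FFT with windowed segment averaging rather than this direct DFT loop, but the band-ratio logic is the same.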
Zaidel, Adam; Goin-Kochel, Robin P.; Angelaki, Dora E.
2015-01-01
Perceptual processing in autism spectrum disorder (ASD) is marked by superior low-level task performance and inferior complex-task performance. This observation has led to theories of defective integration in ASD of local parts into a global percept. Despite mixed experimental results, this notion maintains widespread influence and has also motivated recent theories of defective multisensory integration in ASD. Impaired ASD performance in tasks involving classic random dot visual motion stimuli, corrupted by noise as a means to manipulate task difficulty, is frequently interpreted to support this notion of global integration deficits. By manipulating task difficulty independently of visual stimulus noise, here we test the hypothesis that heightened sensitivity to noise, rather than integration deficits, may characterize ASD. We found that although perception of visual motion through a cloud of dots was unimpaired without noise, the addition of stimulus noise significantly affected adolescents with ASD, more than controls. Strikingly, individuals with ASD demonstrated intact multisensory (visual–vestibular) integration, even in the presence of noise. Additionally, when vestibular motion was paired with pure visual noise, individuals with ASD demonstrated a different strategy than controls, marked by reduced flexibility. This result could be simulated by using attenuated (less reliable) and inflexible (not experience-dependent) Bayesian priors in ASD. These findings question widespread theories of impaired global and multisensory integration in ASD. Rather, they implicate increased sensitivity to sensory noise and less use of prior knowledge in ASD, suggesting increased reliance on incoming sensory information. PMID:25941373
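The reliability-weighted cue combination at the heart of the Bayesian account above can be sketched as follows. The Gaussian-likelihood formulation and the function name are standard textbook assumptions, not the authors' simulation code; the point is that degrading one cue with noise (raising its σ) automatically shifts weight to the other cue, which is how intact multisensory integration can coexist with heightened sensitivity to stimulus noise:

```python
import numpy as np

def fuse(mu_vis, sigma_vis, mu_vest, sigma_vest):
    """Maximum-likelihood fusion of two Gaussian cue estimates.

    Each cue is weighted by its reliability (inverse variance); the fused
    estimate is never less reliable than the better single cue.
    """
    r_vis, r_vest = 1.0 / sigma_vis**2, 1.0 / sigma_vest**2
    mu = (r_vis * mu_vis + r_vest * mu_vest) / (r_vis + r_vest)
    sigma = np.sqrt(1.0 / (r_vis + r_vest))
    return mu, sigma

# Equal reliabilities: the fused estimate sits halfway between the cues
mu_eq, sig_eq = fuse(1.0, 1.0, 0.0, 1.0)      # -> mu 0.5

# Noisier visual cue (sigma doubled): the estimate shifts toward the vestibular cue
mu_noisy, _ = fuse(1.0, 2.0, 0.0, 1.0)        # -> mu 0.2
```

An attenuated or inflexible prior, as the authors propose for ASD, would enter this scheme as one more Gaussian term whose weight fails to adapt to experience.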
Creating stimuli for the study of biological-motion perception.
Dekeyser, Mathias; Verfaillie, Karl; Vanrie, Jan
2002-08-01
In the perception of biological motion, the stimulus information is confined to a small number of lights attached to the major joints of a moving person. Despite this drastic degradation of the stimulus information, the human visual apparatus organizes the swarm of moving dots into a vivid percept of a moving biological creature. Several techniques have been proposed to create point-light stimuli: placing dots at strategic locations on photographs or films, video recording a person with markers attached to the body, computer animation based on artificial synthesis, and computer animation based on motion-capture data. A description is given of the technique we are currently using in our laboratory to produce animated point-light figures. The technique is based on a combination of motion capture and three-dimensional animation software (Character Studio, Autodesk, Inc., 1998). Some of the advantages of our approach are that the same actions can be shown from any viewpoint, that point-light versions, as well as versions with a full-fleshed character, can be created of the same actions, and that point lights can indicate the center of a joint (thereby eliminating several disadvantages associated with other techniques).
Caudate clues to rewarding cues.
Platt, Michael L
2002-01-31
Behavioral studies indicate that prior experience can influence discrimination of subsequent stimuli. The mechanisms responsible for highlighting a particular aspect of the stimulus, such as motion or color, as most relevant and thus deserving further scrutiny, however, remain poorly understood. In the current issue of Neuron, the authors demonstrate that neurons in the caudate nucleus of the basal ganglia signal which dimension of a visual cue, either color or location, is associated with reward in an eye movement task. These findings raise the possibility that this structure participates in the reward-based control of visual attention.
Stojcev, Maja; Radtke, Nils; D'Amaro, Daniele; Dyer, Adrian G; Neumeyer, Christa
2011-07-01
Visual systems can undergo striking adaptations to specific visual environments during evolution, but they can also be very "conservative." This seems to be the case in motion vision, which is surprisingly similar in species as distant as honeybee and goldfish. In both visual systems, motion vision measured with the optomotor response is color blind and mediated by one photoreceptor type only. Here, we ask whether this is also the case if the moving stimulus is restricted to a small part of the visual field, and test what influence velocity may have on chromatic motion perception. Honeybees were trained to discriminate between clockwise- and counterclockwise-rotating sector disks. Six types of disk stimuli differing in green receptor contrast were tested using three different rotational velocities. When green receptor contrast was at a minimum, bees were able to discriminate rotation directions with all colored disks at slow velocities of 6 and 12 Hz contrast frequency but not with a relatively high velocity of 24 Hz. In the goldfish experiment, the animals were trained to detect a moving red or blue disk presented in a green surround. Discrimination ability between this stimulus and a homogeneous green background was poor when the M-cone type was not modulated, or only slightly modulated, at a high stimulus velocity (7 cm/s). However, discrimination was improved with slower stimulus velocities (4 and 2 cm/s). These behavioral results indicate that there is potentially an object motion system in both honeybee and goldfish, which is able to incorporate color information at relatively low velocities but is color blind at higher speeds. We thus propose that both honeybees and goldfish have multiple subsystems of object motion, which include achromatic as well as chromatic processing.
Woo, Kevin L; Rieucau, Guillaume
2008-07-01
The increasing use of the video playback technique in behavioural ecology reveals a growing need to ensure better control of the visual stimuli that focal animals experience. Technological advances now allow researchers to develop computer-generated animations instead of using video sequences of live-acting demonstrators. However, care must be taken to match the motion characteristics (speed and velocity) of the animation to the original video source. Here, we present a tool based on the use of an optic flow analysis program to measure the resemblance of motion characteristics of computer-generated animations compared to videos of live-acting animals. We examined three distinct displays (tail-flick (TF), push-up body rock (PUBR), and slow arm wave (SAW)) exhibited by animations of Jacky dragons (Amphibolurus muricatus) that were compared to the original video sequences of live lizards. We found no significant differences between the motion characteristics of videos and animations across all three displays. Our results showed that our animations match the speed and velocity features of each display. Researchers need to ensure that animation and video stimuli share similar motion characteristics; this match is a critical component of the future success of the video playback technique.
Spatio-Temporal Dynamics of Impulse Responses to Figure Motion in Optic Flow Neurons
Lee, Yu-Jen; Jönsson, H. Olof; Nordström, Karin
2015-01-01
White noise techniques have been used widely to investigate sensory systems in both vertebrates and invertebrates. White noise stimuli are powerful in their ability to rapidly generate data that help the experimenter decipher the spatio-temporal dynamics of neural and behavioral responses. One type of white noise stimuli, maximal length shift register sequences (m-sequences), have recently become particularly popular for extracting response kernels in insect motion vision. We here use such m-sequences to extract the impulse responses to figure motion in hoverfly lobula plate tangential cells (LPTCs). Figure motion is behaviorally important and many visually guided animals orient towards salient features in the surround. We show that LPTCs respond robustly to figure motion in the receptive field. The impulse response is scaled down in amplitude when the figure size is reduced, but its time course remains unaltered. However, a low contrast stimulus generates a slower response with a significantly longer time-to-peak and half-width. Impulse responses in females have a slower time-to-peak than males, but are otherwise similar. Finally we show that the shapes of the impulse response to a figure and a widefield stimulus are very similar, suggesting that the figure response could be coded by the same input as the widefield response. PMID:25955416
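The kernel-extraction logic behind m-sequence stimulation can be illustrated with a toy simulation. The LFSR taps, the band-pass kernel standing in for an LPTC impulse response, and the noise level below are illustrative assumptions, not the authors' stimulus parameters. Because a maximal-length sequence has a nearly flat circular autocorrelation (N at lag zero, -1 elsewhere), cross-correlating the response with the stimulus recovers the first-order impulse response:

```python
import numpy as np

def m_sequence(taps, n_bits, seed=1):
    """Maximal-length shift-register sequence of length 2**n_bits - 1, mapped to ±1."""
    state = [(seed >> i) & 1 for i in range(n_bits)]
    out = []
    for _ in range(2**n_bits - 1):
        out.append(state[-1])          # output the oldest register bit
        fb = 0
        for t in taps:                 # feedback = XOR of the tapped bits
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return 2 * np.array(out) - 1

# Stimulus: 127-sample m-sequence (primitive polynomial x^7 + x^6 + 1)
stim = m_sequence([7, 6], 7)

# Hypothetical damped-oscillation kernel standing in for a neural impulse response
t = np.arange(20)
true_kernel = np.exp(-t / 4.0) * np.sin(t / 2.0)

# Simulated response: circular convolution of stimulus and kernel, plus noise
rng = np.random.default_rng(0)
resp = np.real(np.fft.ifft(np.fft.fft(stim) * np.fft.fft(true_kernel, stim.size)))
resp += 0.05 * rng.standard_normal(stim.size)

# First-order kernel estimate: circular cross-correlation with the stimulus
est = np.real(np.fft.ifft(np.fft.fft(resp) * np.conj(np.fft.fft(stim)))) / stim.size
```

The estimate matches the true kernel up to a small constant bias of order 1/N, which is what makes m-sequences efficient for mapping responses such as the figure-motion kernels measured here.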
The Neural Circuit Mechanisms Underlying the Retinal Response to Motion Reversal
Chen, Eric Y.; Chou, Janice; Park, Jeongsook; Schwartz, Greg
2014-01-01
To make up for delays in visual processing, retinal circuitry effectively predicts that a moving object will continue moving in a straight line, allowing retinal ganglion cells to anticipate the object's position. However, a sudden reversal of motion triggers a synchronous burst of firing from a large group of ganglion cells, possibly signaling a violation of the retina's motion prediction. To investigate the neural circuitry underlying this response, we used a combination of multielectrode array and whole-cell patch recordings to measure the responses of individual retinal ganglion cells in the tiger salamander to reversing stimuli. We found that different populations of ganglion cells were responsible for responding to the reversal of different kinds of objects, such as bright versus dark objects. Using pharmacology and designed stimuli, we concluded that ON and OFF bipolar cells both contributed to the reversal response, but that amacrine cells had, at best, a minor role. This allowed us to formulate an adaptive cascade model (ACM), similar to the one previously used to describe ganglion cell responses to motion onset. By incorporating the ON pathway into the ACM, we were able to reproduce the time-varying firing rate of fast OFF ganglion cells for all experimentally tested stimuli. Analysis of the ACM demonstrates that bipolar cell gain control is primarily responsible for generating the synchronized retinal response, as individual bipolar cells require a constant time delay before recovering from gain control. PMID:25411485
“Global” visual training and extent of transfer in amblyopic macaque monkeys
Kiorpes, Lynne; Mangal, Paul
2015-01-01
Perceptual learning is gaining acceptance as a potential treatment for amblyopia in adults and children beyond the critical period. Many perceptual learning paradigms result in very specific improvement that does not generalize beyond the training stimulus, closely related stimuli, or visual field location. To be of use in amblyopia, a less specific effect is needed. To address this problem, we designed a more general training paradigm intended to effect improvement in visual sensitivity across tasks and domains. We used a “global” visual stimulus, random dot motion direction discrimination with 6 training conditions, and tested for posttraining improvement on a motion detection task and 3 spatial domain tasks (contrast sensitivity, Vernier acuity, Glass pattern detection). Four amblyopic macaques practiced the motion discrimination with their amblyopic eye for at least 20,000 trials. All showed improvement, defined as a change of at least a factor of 2, on the trained task. In addition, all animals showed improvements in sensitivity on at least some of the transfer test conditions, mainly the motion detection task; transfer to the spatial domain was inconsistent but best at fine spatial scales. However, the improvement on the transfer tasks was largely not retained at long-term follow-up. Our generalized training approach is promising for amblyopia treatment, but sustaining improved performance may require additional intervention. PMID:26505868
Chiszar, David; Krauss, Susan; Shipley, Bryon; Trout, Tim; Smith, Hobart M
2009-01-01
Five hatchling Komodo Dragons (Varanus komodoensis) at Denver Zoo were observed in two experiments that studied the effects of visual and chemical cues arising from prey. Rate of tongue flicking was recorded in Experiment 1, and amount of time the lizards spent interacting with stimuli was recorded in Experiment 2. Our hypothesis was that young V. komodoensis would be more dependent upon vision than chemoreception, especially when dealing with live, moving, prey. Although visual cues, including prey motion, had a significant effect, chemical cues had a far stronger effect. Implications of this falsification of our initial hypothesis are discussed.
VEP Responses to Op-Art Stimuli
O’Hare, Louise; Clarke, Alasdair D. F.; Pollux, Petra M. J.
2015-01-01
Several types of striped patterns have been reported to cause adverse sensations described as visual discomfort. Previous research using op-art-based stimuli has demonstrated that spurious eye movement signals can cause the experience of illusory motion, or shimmering effects, which might be perceived as uncomfortable. Whilst the shimmering effects are one cause of discomfort, another possible contributor to discomfort is excessive neural responses: As striped patterns do not have the statistical redundancy typical of natural images, they are perhaps unable to be encoded efficiently. If this is the case, then this should be seen in the amplitude of the EEG response. This study found that stimuli that were judged to be most comfortable were also those with the lowest EEG amplitude. This provides some support for the idea that excessive neural responses might also contribute to discomfort judgements in normal populations, in stimuli controlled for perceived contrast. PMID:26422207
Fast transfer of crossmodal time interval training.
Chen, Lihan; Zhou, Xiaolin
2014-06-01
Sub-second time perception is essential for many important sensory and perceptual tasks including speech perception, motion perception, motor coordination, and crossmodal interaction. This study investigates to what extent the ability to discriminate sub-second time intervals acquired in one sensory modality can be transferred to another modality. To this end, we used perceptual classification of visual Ternus display (Ternus in Psychol Forsch 7:81-136, 1926) to implicitly measure participants' interval perception in pre- and posttests and implemented an intra- or crossmodal sub-second interval discrimination training protocol in between the tests. The Ternus display elicited either an "element motion" or a "group motion" percept, depending on the inter-stimulus interval between the two visual frames. The training protocol required participants to explicitly compare the interval length between a pair of visual, auditory, or tactile stimuli with a standard interval or to implicitly perceive the length of visual, auditory, or tactile intervals by completing a non-temporal task (discrimination of auditory pitch or tactile intensity). Results showed that after fast explicit training of interval discrimination (about 15 min), participants improved their ability to categorize the visual apparent motion in Ternus displays, although the training benefits were mild for visual timing training. However, the benefits were absent for implicit interval training protocols. This finding suggests that the timing ability in one modality can be rapidly acquired and used to improve timing-related performance in another modality and that there may exist a central clock for sub-second temporal processing, although modality-specific perceptual properties may constrain the functioning of this clock.
Spering, Miriam; Carrasco, Marisa
2012-01-01
Feature-based attention enhances visual processing and improves perception, even for visual features that we are not aware of. Does feature-based attention also modulate motor behavior in response to visual information that does or does not reach awareness? Here we compare the effect of feature-based attention on motion perception and smooth pursuit eye movements in response to moving dichoptic plaids–stimuli composed of two orthogonally-drifting gratings, presented separately to each eye–in human observers. Monocular adaptation to one grating prior to the presentation of both gratings renders the adapted grating perceptually weaker than the unadapted grating and decreases the level of awareness. Feature-based attention was directed to either the adapted or the unadapted grating’s motion direction or to both (neutral condition). We show that observers were better in detecting a speed change in the attended than the unattended motion direction, indicating that they had successfully attended to one grating. Speed change detection was also better when the change occurred in the unadapted than the adapted grating, indicating that the adapted grating was perceptually weaker. In neutral conditions, perception and pursuit in response to plaid motion were dissociated: While perception followed one grating’s motion direction almost exclusively (component motion), the eyes tracked the average of both gratings (pattern motion). In attention conditions, perception and pursuit were shifted towards the attended component. These results suggest that attention affects perception and pursuit similarly even though only the former reflects awareness. The eyes can track an attended feature even if observers do not perceive it. PMID:22649238
Neuroimaging Evidence for 2 Types of Plasticity in Association with Visual Perceptual Learning.
Shibata, Kazuhisa; Sasaki, Yuka; Kawato, Mitsuo; Watanabe, Takeo
2016-09-01
Visual perceptual learning (VPL) is long-term performance improvement as a result of perceptual experience. It is unclear whether VPL is associated with refinement in representations of the trained feature (feature-based plasticity), improvement in processing of the trained task (task-based plasticity), or both. Here, we provide empirical evidence that VPL of motion detection is associated with both types of plasticity which occur predominantly in different brain areas. Before and after training on a motion detection task, subjects' neural responses to the trained motion stimuli were measured using functional magnetic resonance imaging. In V3A, significant response changes after training were observed specifically to the trained motion stimulus but independently of whether subjects performed the trained task. This suggests that the response changes in V3A represent feature-based plasticity in VPL of motion detection. In V1 and the intraparietal sulcus, significant response changes were found only when subjects performed the trained task on the trained motion stimulus. This suggests that the response changes in these areas reflect task-based plasticity. These results collectively suggest that VPL of motion detection is associated with the 2 types of plasticity, which occur in different areas and therefore have separate mechanisms at least to some degree.
The Verriest Lecture: Color lessons from space, time, and motion
Shevell, Steven K.
2012-01-01
The appearance of a chromatic stimulus depends on more than the wavelengths composing it. The scientific literature has countless examples showing that spatial and temporal features of light influence the colors we see. Studying chromatic stimuli that vary over space, time or direction of motion has a further benefit beyond predicting color appearance: the unveiling of otherwise concealed neural processes of color vision. Spatial or temporal stimulus variation uncovers multiple mechanisms of brightness and color perception at distinct levels of the visual pathway. Spatial variation in chromaticity and luminance can change perceived three-dimensional shape, an example of chromatic signals that affect a percept other than color. Chromatic objects in motion expose the surprisingly weak link between the chromaticity of objects and their physical direction of motion, and the role of color in inducing an illusory motion direction. Space, time and motion – color’s colleagues – reveal the richness of chromatic neural processing. PMID:22330398
NASA Technical Reports Server (NTRS)
Homick, J. L.; Reschke, M. F.; Vanderploeg, J. M.
1984-01-01
Better methods for the prediction, prevention, and treatment of the space adaptation syndrome (SAS) were developed. A systematic, long range program of operationally oriented data collection on all individuals flying space shuttle missions was initiated. Preflight activities include the use of a motion experience questionnaire, laboratory tests of susceptibility to motion sickness induced by Coriolis stimuli, and determinations of antimotion sickness drug efficacy and side effects. During flight, each crewmember is required to provide a daily report of symptom status, use of medications, and other vestibular related sensations. Additional data are obtained postflight. During the first nine shuttle missions, the reported incidence of SAS has been 48%. Self-induced head motions and unusual visual orientation attitudes appear to be the principal triggering stimuli. Antimotion sickness medication was of limited therapeutic value. Complete recovery from symptoms occurred by mission day three or four. Also of relevance is the lack of a statistically significant correlation between the ground-based Coriolis test and SAS. The episodes of SAS have resulted in no impact on shuttle mission objectives and no significant impact on mission timelines.
Weyand, T G; Gafka, A C
2001-01-01
We studied the visuomotor activity of corticotectal (CT) cells in two visual cortical areas [area 17 and the posteromedial lateral suprasylvian cortex (PMLS)] of the cat. The cats were trained in simple oculomotor tasks, and head position was fixed. Most CT cells in both cortical areas gave a vigorous discharge to a small stimulus used to control gaze when it fell within the retinotopically defined visual field. However, the vigor of the visual response did not predict latency to initiate a saccade, saccade velocity, amplitude, or even if a saccade would be made, minimizing any potential role these cells might have in premotor or attentional processes. Most CT cells in both areas were selective for direction of stimulus motion, and cells in PMLS showed a direction preference favoring motion away from points of central gaze. CT cells did not discharge with eye movements in the dark. During eye movements in the light, many CT cells in area 17 increased their activity. In contrast, cells in PMLS, including CT cells, were generally unresponsive during saccades. Paradoxically, cells in PMLS responded vigorously to stimuli moving at saccadic velocities, indicating that the oculomotor system suppresses visual activity elicited by moving the retina across an illuminated scene. Nearly all CT cells showed oscillatory activity in the frequency range of 20-90 Hz, especially in response to visual stimuli. However, this activity was capricious; strong oscillations in one trial could disappear in the next despite identical stimulus conditions. Although the CT cells in both of these regions share many characteristics, the direction anisotropy and the suppression of activity during eye movements which characterize the neurons in PMLS suggest that these two areas have different roles in facilitating perceptual/motor processes at the level of the superior colliculus.
Sysoeva, Olga V; Galuta, Ilia A; Davletshina, Maria S; Orekhova, Elena V; Stroganova, Tatiana A
2017-01-01
Excitation/inhibition (E/I) imbalance in neural networks is now considered among the core neural underpinnings of autism psychopathology. In motion perception, at least two phenomena critically depend on E/I balance in visual cortex: spatial suppression (SS) and spatial facilitation (SF), corresponding to impoverished or improved motion perception with increasing stimulus size, respectively. While SS is dominant at high contrast, SF is evident for low-contrast stimuli, owing to the prevalence of inhibitory contextual modulations in the former case and excitatory ones in the latter. Only one previous study (Foss-Feig et al., 2013) investigated SS and SF in autism spectrum disorder (ASD). Our study aimed to replicate previous findings and to explore the putative contribution of deficient inhibitory influences to an enhanced SF index in ASD, a cornerstone of the interpretation proposed by Foss-Feig et al. (2013). SS and SF were examined in 40 boys with ASD with a broad spectrum of intellectual abilities (63 < IQ < 127) and 44 typically developing (TD) boys, aged 6-15 years. Stimuli of small (1°) and large (12°) radius were presented under high (100%) and low (1%) contrast conditions. The Social Responsiveness Scale and Sensory Profile Questionnaire were used to assess autism severity and sensory processing abnormalities. We found that the SS index was atypically reduced, while the SF index was abnormally enhanced, in children with ASD. The presence of abnormally enhanced SF in children with ASD was the only consistent finding between our study and that of Foss-Feig et al. While the SS and SF indices were strongly interrelated in TD participants, this correlation was absent in their peers with ASD. In addition, the SF index, but not the SS index, correlated with the severity of autism and poor registration abilities. The pattern of results is partially consistent with the idea of hypofunctional inhibitory transmission in visual areas in ASD. Nonetheless, the absence of correlation between the SF and SS indices, paired with a strong direct link between abnormally enhanced SF and autism symptoms in our ASD sample, emphasizes the role of enhanced excitatory influences by themselves in the abnormalities in low-level visual phenomena found in ASD.
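The SS and SF indices discussed above are commonly computed as log ratios of psychophysical duration thresholds for large versus small stimuli. The convention below follows the spatial-suppression literature in spirit; the exact formula and the threshold values are illustrative assumptions, not this study's reported computation:

```python
import math

def suppression_index(thr_small, thr_large):
    """Spatial suppression (high contrast): positive when a larger stimulus
    requires a longer duration threshold, i.e. enlargement impairs perception."""
    return math.log10(thr_large / thr_small)

def facilitation_index(thr_small, thr_large):
    """Spatial facilitation (low contrast): positive when a larger stimulus
    lowers the duration threshold, i.e. enlargement improves perception."""
    return math.log10(thr_small / thr_large)

# Hypothetical duration thresholds in ms for small (1°) vs large (12°) stimuli
ss = suppression_index(30.0, 90.0)    # high contrast: threshold rises with size
sf = facilitation_index(90.0, 30.0)   # low contrast: threshold falls with size
```

On this convention, the ASD pattern reported above corresponds to a smaller-than-typical `ss` together with a larger-than-typical `sf`.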
Are face representations depth cue invariant?
Dehmoobadsharifabadi, Armita; Farivar, Reza
2016-06-01
The visual system can process three-dimensional depth cues defining surfaces of objects, but it is unclear whether such information contributes to complex object recognition, including face recognition. The processing of different depth cues involves both dorsal and ventral visual pathways. We investigated whether facial surfaces defined by individual depth cues resulted in meaningful face representations, that is, representations that maintain the relationship between the population of faces as defined in a multidimensional face space. We measured face identity aftereffects for facial surfaces defined by individual depth cues (Experiments 1 and 2) and tested whether the aftereffect transfers across depth cues (Experiments 3 and 4). Facial surfaces and their morphs to the average face were defined purely by one of shading, texture, motion, or binocular disparity. We obtained identification thresholds for matched (matched identity between adapting and test stimuli), non-matched (non-matched identity between adapting and test stimuli), and no-adaptation (showing only the test stimuli) conditions for each cue and across different depth cues. We found robust face identity aftereffects in both experiments. Our results suggest that depth cues do contribute to forming meaningful face representations that are depth cue invariant. Depth cue invariance would require integration of information across different areas and different pathways for object recognition, and this in turn has important implications for cortical models of visual object recognition.
Suppressive mechanisms in visual motion processing: from perception to intelligence
Tadin, Duje
2015-01-01
Perception operates on an immense amount of incoming information that greatly exceeds the brain's processing capacity. Because of this fundamental limitation, the ability to suppress irrelevant information is a key determinant of perceptual efficiency. Here, I will review a series of studies investigating suppressive mechanisms in visual motion processing, namely perceptual suppression of large, background-like motions. These spatial suppression mechanisms are adaptive, operating only when sensory inputs are sufficiently robust to guarantee visibility. Converging correlational and causal evidence links these behavioral results with inhibitory center-surround mechanisms, namely those in cortical area MT. Spatial suppression is abnormally weak in several special populations, including the elderly and those with schizophrenia—a deficit that is evidenced by better-than-normal direction discriminations of large moving stimuli. Theoretical work shows that this abnormal weakening of spatial suppression should result in motion segregation deficits, but direct behavioral support of this hypothesis is lacking. Finally, I will argue that the ability to suppress information is a fundamental neural process that applies not only to perception but also to cognition in general. Supporting this argument, I will discuss recent research that shows individual differences in spatial suppression of motion signals strongly predict individual variations in IQ scores. PMID:26299386
Vision and air flow combine to streamline flying honeybees
Taylor, Gavin J.; Luu, Tien; Ball, David; Srinivasan, Mandyam V.
2013-01-01
Insects face the challenge of integrating multi-sensory information to control their flight. Here we study a 'streamlining' response in honeybees, whereby honeybees raise their abdomen to reduce drag. We find that this response, which was recently reported to be mediated by optic flow, is also strongly modulated by the presence of air flow simulating a head wind. The Johnston's organs in the antennae were found to play a role in the measurement of the air speed that is used to control the streamlining response. The response to a combination of visual motion and wind is complex and can be explained by a model that incorporates a non-linear combination of the two stimuli. The use of visual and mechanosensory cues increases the strength of the streamlining response when the stimuli are present concurrently. We propose that this multisensory integration makes the response more robust to transient disturbances in either modality. PMID:24019053
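The abstract does not specify the form of the nonlinear combination; a minimal sketch of one such model, with additive unimodal drives plus a multiplicative interaction term and purely illustrative weights, would be:

```python
def streamlining_drive(optic_flow, air_speed, w_v=0.6, w_a=0.4, k=0.5):
    """Hypothetical combination of visual (optic flow) and antennal
    (air speed) inputs driving abdomen raising. The multiplicative term
    makes the joint response exceed the sum of the unimodal responses.
    All weights are assumptions for illustration, not fitted values.
    """
    return w_v * optic_flow + w_a * air_speed + k * optic_flow * air_speed

both = streamlining_drive(1.0, 1.0)
vision_only = streamlining_drive(1.0, 0.0)
wind_only = streamlining_drive(0.0, 1.0)
```

With any positive interaction weight `k`, the bimodal response is superadditive, which is one simple way a nonlinear combination can strengthen the response when both cues are present concurrently.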
Aravamuthan, Bhooma R.; Angelaki, Dora E.
2012-01-01
The pedunculopontine nucleus (PPN) and central mesencephalic reticular formation (cMRF) both send projections and receive input from areas with known vestibular responses. Noting their connections with the basal ganglia, the locomotor disturbances that occur following lesions of the PPN or cMRF, and the encouraging results of PPN deep brain stimulation in Parkinson’s disease patients, both the PPN and cMRF have been linked to motor control. In order to determine the existence of and characterize vestibular responses in the PPN and cMRF, we recorded single neurons from both structures during vertical and horizontal rotation, translation, and visual pursuit stimuli. The majority of PPN cells (72.5%) were vestibular-only cells that responded exclusively to rotation and translation stimuli but not visual pursuit. Visual pursuit responses were much more prevalent in the cMRF (57.1%) though close to half of cMRF cells were vestibular-only cells (41.1%). Directional preferences also differed between the PPN, which was preferentially modulated during nose-down pitch, and cMRF, which was preferentially modulated during ipsilateral yaw rotation. Finally, amplitude responses were similar between the PPN and cMRF during rotation and pursuit stimuli, but PPN responses to translation were of higher amplitude than cMRF responses. Taken together with their connections to the vestibular circuit, these results implicate the PPN and cMRF in the processing of vestibular stimuli and suggest important roles for both in responding to motion perturbations like falls and turns. PMID:22864184
Implied Dynamics Biases the Visual Perception of Velocity
La Scaleia, Barbara; Zago, Myrka; Moscatelli, Alessandro; Lacquaniti, Francesco; Viviani, Paolo
2014-01-01
We expand the anecdotal report by Johansson that back-and-forth linear harmonic motions appear uniform. Six experiments explore the role of shape and spatial orientation of the trajectory of a point-light target in the perceptual judgment of uniform motion. In Experiment 1, the target oscillated back-and-forth along a circular arc around an invisible pivot. The imaginary segment from the pivot to the midpoint of the trajectory could be oriented vertically downward (consistent with an upright pendulum), horizontally leftward, or vertically upward (upside-down). In Experiments 2 to 5, the target moved uni-directionally. The effect of suppressing the alternation of movement directions was tested with curvilinear (Experiments 2 and 3) or rectilinear (Experiments 4 and 5) paths. Experiment 6 replicated the upright condition of Experiment 1, but participants were asked to hold their gaze on a fixation point. When some features of the trajectory evoked the motion of either a simple pendulum or a mass-spring system, observers identified as uniform the kinematic profiles close to harmonic motion. The bias towards harmonic motion was most consistent in the upright orientation of Experiments 1 and 6. The bias disappeared when the stimuli were incompatible with both pendulum and mass-spring models (Experiments 3 to 5). The results are compatible with the hypothesis that the perception of dynamic stimuli is biased by the laws of motion obeyed by natural events, so that only natural motions appear uniform. PMID:24667578
Schwegmann, Alexander; Lindemann, Jens P.; Egelhaaf, Martin
2014-01-01
Knowing the depth structure of the environment is crucial for moving animals in many behavioral contexts, such as collision avoidance, targeting objects, or spatial navigation. An important source of depth information is motion parallax. This powerful cue is generated on the eyes during translatory self-motion with the retinal images of nearby objects moving faster than those of distant ones. To investigate how the visual motion pathway represents motion-based depth information we analyzed its responses to image sequences recorded in natural cluttered environments with a wide range of depth structures. The analysis was done on the basis of an experimentally validated model of the visual motion pathway of insects, with its core elements being correlation-type elementary motion detectors (EMDs). It is the key result of our analysis that the absolute EMD responses, i.e., the motion energy profile, represent the contrast-weighted nearness of environmental structures during translatory self-motion at a roughly constant velocity. In other words, the output of the EMD array highlights contours of nearby objects. This conclusion is largely independent of the scale over which EMDs are spatially pooled and was corroborated by scrutinizing the motion energy profile after eliminating the depth structure from the natural image sequences. Hence, the well-established dependence of correlation-type EMDs on both velocity and textural properties of motion stimuli appears to be advantageous for representing behaviorally relevant information about the environment in a computationally parsimonious way. PMID:25136314
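The correlation-type elementary motion detector at the core of the model above is the classic Hassenstein-Reichardt scheme: each arm correlates the delayed signal from one photoreceptor with the undelayed signal from its neighbor, and the two arms are subtracted. A minimal sketch (simplified to a pure delay-and-multiply, without the model's full filter cascade):

```python
import math

def reichardt_emd(a, b, tau):
    """Correlation-type elementary motion detector (Hassenstein-Reichardt).

    a, b: luminance time series at two neighboring photoreceptors;
    tau: delay in samples. The opponent output is positive for motion
    in the a-to-b direction and negative for the reverse direction.
    """
    return [a[t - tau] * b[t] - b[t - tau] * a[t] for t in range(tau, len(a))]

# A sinusoidal pattern drifting from receptor a toward receptor b:
# b sees a's signal delayed by two samples.
period = 20
a = [math.sin(2 * math.pi * t / period) for t in range(200)]
b = [math.sin(2 * math.pi * (t - 2) / period) for t in range(200)]
out_pref = reichardt_emd(a, b, tau=2)
out_null = reichardt_emd(b, a, tau=2)
preferred = sum(out_pref) / len(out_pref)
null = sum(out_null) / len(out_null)
```

Because the output depends on the product of neighboring signals, it scales with local contrast as well as velocity, which is exactly the property the abstract argues makes EMD arrays report contrast-weighted nearness rather than veridical velocity.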
A comparison of form processing involved in the perception of biological and nonbiological movements
Thurman, Steven M.; Lu, Hongjing
2016-01-01
Although there is evidence for specialization in the human brain for processing biological motion per se, few studies have directly examined the specialization of form processing in biological motion perception. The current study was designed to systematically compare form processing in perception of biological (human walkers) to nonbiological (rotating squares) stimuli. Dynamic form-based stimuli were constructed with conflicting form cues (position and orientation), such that the objects were perceived to be moving ambiguously in two directions at once. In Experiment 1, we used the classification image technique to examine how local form cues are integrated across space and time in a bottom-up manner. By comparing with a Bayesian observer model that embodies generic principles of form analysis (e.g., template matching) and integrates form information according to cue reliability, we found that human observers employ domain-general processes to recognize both human actions and nonbiological object movements. Experiments 2 and 3 found differential top-down effects of spatial context on perception of biological and nonbiological forms. When a background does not involve social information, observers are biased to perceive foreground object movements in the direction opposite to surrounding motion. However, when a background involves social cues, such as a crowd of similar objects, perception is biased toward the same direction as the crowd for biological walking stimuli, but not for rotating nonbiological stimuli. The model provided an accurate account of top-down modulations by adjusting the prior probabilities associated with the internal templates, demonstrating the power and flexibility of the Bayesian approach for visual form perception. PMID:26746875
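The reliability-weighted integration at the heart of such a Bayesian observer can be sketched in a few lines. The cue names and numbers below are illustrative, not the study's fitted model: for independent Gaussian cues, the optimal combined estimate weights each cue's estimate by its reliability (inverse variance).

```python
def combine_cues(estimates, reliabilities):
    """Bayesian combination of independent Gaussian cues: each cue's
    estimate is weighted by its reliability (inverse variance), with the
    weights normalized to sum to one."""
    total = sum(reliabilities)
    return sum(e * r for e, r in zip(estimates, reliabilities)) / total

# Hypothetical example: position cues suggest the walker faces +10 deg,
# orientation cues suggest -5 deg, and the position cue is assumed to be
# four times as reliable.
combined = combine_cues([10.0, -5.0], [4.0, 1.0])  # 7.0
```

Top-down effects of the kind reported in Experiments 2 and 3 can be accommodated in this framework by shifting the prior (adding a "context" term with its own weight) rather than changing the integration rule itself.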
Cazzoli, Dario; Hopfner, Simone; Preisig, Basil; Zito, Giuseppe; Vanbellingen, Tim; Jäger, Michael; Nef, Tobias; Mosimann, Urs; Bohlhalter, Stephan; Müri, René M; Nyffeler, Thomas
2016-11-01
An impairment of the spatial deployment of visual attention during exploration of static (i.e., motionless) stimuli is a common finding after an acute, right-hemispheric stroke. However, less is known about how these deficits: (a) are modulated through naturalistic motion (i.e., without directional, specific spatial features); and, (b) evolve in the subacute/chronic post-stroke phase. In the present study, we investigated free visual exploration in three patient groups with subacute/chronic right-hemispheric stroke and in healthy subjects. The first group included patients with left visual neglect and a left visual field defect (VFD), the second patients with a left VFD but no neglect, and the third patients without neglect or VFD. Eye movements were measured in all participants while they freely explored a traffic scene without (static condition) and with (dynamic condition) naturalistic motion, i.e., cars moving from the right or left. In the static condition, all patient groups showed similar deployment of visual exploration (i.e., as measured by the cumulative fixation duration) as compared to healthy subjects, suggesting that recovery processes took place, with normal spatial allocation of attention. However, the more demanding dynamic condition with moving cars elicited different re-distribution patterns of visual attention, quite similar to those typically observed in acute stroke. Neglect patients with VFD showed a significant decrease of visual exploration in the contralesional space, whereas patients with VFD but no neglect showed a significant increase of visual exploration in the contralesional space. No differences, as compared to healthy subjects, were found in patients without neglect or VFD. These results suggest that naturalistic motion, without directional, specific spatial features, may critically influence the spatial distribution of visual attention in subacute/chronic stroke patients. Copyright © 2016 Elsevier Ltd. All rights reserved.
Eye movement instructions modulate motion illusion and body sway with Op Art
Kapoula, Zoï; Lang, Alexandre; Vernet, Marine; Locher, Paul
2015-01-01
Op Art generates illusory visual motion. It has been proposed that eye movements participate in this illusion. This study examined the effect of eye movement instructions (fixation vs. free exploration) on the sensation of motion as well as the body sway of subjects viewing Op Art paintings. Twenty-eight healthy adults in orthostatic stance were successively exposed to three visual stimuli consisting of one figure representing a cross (baseline condition) and two Op Art paintings providing a sense of motion in depth—Bridget Riley’s Movements in Squares and Akiyoshi Kitaoka’s Rollers. Before their exposure to the Op Art images, participants were instructed either to fixate at the center of the image (fixation condition) or to explore the artwork (free viewing condition). Posture was measured for 30 s per condition using a body-fixed sensor (accelerometer). The major finding of this study is that the two Op Art paintings induced larger antero-posterior body sway, in terms of both speed and displacement, and an increased motion illusion in the free viewing condition as compared to the fixation condition. For body sway, this effect was significant for the Riley painting, while for motion illusion this effect was significant for Kitaoka’s image. These results are attributed to macro-saccades presumably occurring under free viewing instructions, and most likely to the small vergence drifts during fixations following the saccades; such movements, in interaction with the visual properties of each image, would increase either the illusory motion sensation or the antero-posterior body sway. PMID:25859197
Heading Tuning in Macaque Area V6.
Fan, Reuben H; Liu, Sheng; DeAngelis, Gregory C; Angelaki, Dora E
2015-12-16
Cortical areas, such as the dorsal subdivision of the medial superior temporal area (MSTd) and the ventral intraparietal area (VIP), have been shown to integrate visual and vestibular self-motion signals. Area V6 is interconnected with areas MSTd and VIP, allowing for the possibility that V6 also integrates visual and vestibular self-motion cues. An alternative hypothesis in the literature is that V6 does not use these sensory signals to compute heading but instead discounts self-motion signals to represent object motion. However, the responses of V6 neurons to visual and vestibular self-motion cues have never been studied, thus leaving the functional roles of V6 unclear. We used a virtual reality system to examine the 3D heading tuning of macaque V6 neurons in response to optic flow and inertial motion stimuli. We found that the majority of V6 neurons are selective for heading defined by optic flow. However, unlike areas MSTd and VIP, V6 neurons are almost universally unresponsive to inertial motion in the absence of optic flow. We also explored the spatial reference frames of heading signals in V6 by measuring heading tuning for different eye positions, and we found that the visual heading tuning of most V6 cells was eye-centered. Similar to areas MSTd and VIP, the population of V6 neurons was best able to discriminate small variations in heading around forward and backward headings. Our findings support the idea that V6 is involved primarily in processing visual motion signals and does not appear to play a role in visual-vestibular integration for self-motion perception. To understand how we successfully navigate our world, it is important to understand which parts of the brain process cues used to perceive our direction of self-motion (i.e., heading). Cortical area V6 has been implicated in heading computations based on human neuroimaging data, but direct measurements of heading selectivity in individual V6 neurons have been lacking. 
We provide the first demonstration that V6 neurons carry 3D visual heading signals, which are represented in an eye-centered reference frame. In contrast, we found almost no evidence for vestibular heading signals in V6, indicating that V6 is unlikely to contribute to multisensory integration of heading signals, unlike other cortical areas. These findings provide important constraints on the roles of V6 in self-motion perception. Copyright © 2015 the authors.
Van Ombergen, Angelique; Lubeck, Astrid J; Van Rompaey, Vincent; Maes, Leen K; Stins, John F; Van de Heyning, Paul H; Wuyts, Floris L; Bos, Jelte E
2016-01-01
Vestibular patients occasionally report aggravation or triggering of their symptoms by visual stimuli, which is called visual vestibular mismatch (VVM). These patients therefore experience discomfort, disorientation, dizziness and postural unsteadiness. Firstly, we aimed to get a better insight into the underlying mechanism of VVM by examining perceptual and postural symptoms. Secondly, we wanted to investigate whether roll-motion is a necessary trait to evoke these symptoms or whether a complex but stationary visual pattern provokes them equally. Nine VVM patients and a matched healthy control group were exposed to a stationary stimulus as well as an optokinetic stimulus rotating around the naso-occipital axis for a prolonged period of time. Subjective visual vertical (SVV) measurements, posturography and relevant questionnaires were assessed. No significant differences between the groups were found for SVV measurements. Patients always swayed more and reported more symptoms than healthy controls. Prolonged exposure to roll-motion caused an increase in postural sway and symptoms in both patients and controls. However, only VVM patients reported significantly more symptoms after prolonged exposure to the optokinetic stimulus compared to scores after exposure to a stationary stimulus. VVM patients differ from healthy controls in postural and subjective symptoms, and motion is a crucial factor in provoking these symptoms. A possible explanation could be a central visual-vestibular integration deficit, which has implications for diagnostics and clinical rehabilitation purposes. Future research should focus on the underlying central mechanism of VVM and the effectiveness of optokinetic stimulation in resolving it.
Transient cardio-respiratory responses to visually induced tilt illusions
NASA Technical Reports Server (NTRS)
Wood, S. J.; Ramsdell, C. D.; Mullen, T. J.; Oman, C. M.; Harm, D. L.; Paloski, W. H.
2000-01-01
Although the orthostatic cardio-respiratory response is primarily mediated by the baroreflex, studies have shown that vestibular cues also contribute in both humans and animals. We have demonstrated a visually mediated response to illusory tilt in some human subjects. Blood pressure, heart and respiration rate, and lung volume were monitored in 16 supine human subjects during two types of visual stimulation, and compared with responses to real passive whole body tilt from supine to head 80 degrees upright. Visual tilt stimuli consisted of either a static scene from an overhead mirror or constant velocity scene motion along different body axes generated by an ultra-wide dome projection system. Visual vertical cues were initially aligned with the longitudinal body axis. Subjective tilt and self-motion were reported verbally. Although significant changes in cardio-respiratory parameters to illusory tilts could not be demonstrated for the entire group, several subjects showed significant transient decreases in mean blood pressure resembling their initial response to passive head-up tilt. Changes in pulse pressure and a slight elevation in heart rate were noted. These transient responses are consistent with the hypothesis that visual-vestibular input contributes to the initial cardiovascular adjustment to a change in posture in humans. On average the static scene elicited perceived tilt without rotation. Dome scene pitch and yaw elicited perceived tilt and rotation, and dome roll motion elicited perceived rotation without tilt. A significant correlation between the magnitude of physiological and subjective reports could not be demonstrated.
Visual training improves perceptual grouping based on basic stimulus features.
Kurylo, Daniel D; Waxman, Richard; Kidron, Rachel; Silverstein, Steven M
2017-10-01
Training on visual tasks improves performance on basic and higher order visual capacities. Such improvement has been linked to changes in connectivity among mediating neurons. We investigated whether training effects occur for perceptual grouping. It was hypothesized that repeated engagement of integration mechanisms would enhance grouping processes. Thirty-six participants underwent 15 sessions of training on a visual discrimination task that required perceptual grouping. Participants viewed 20 × 20 arrays of dots or Gabor patches and indicated whether the array appeared grouped as vertical or horizontal lines. Across trials stimuli became progressively disorganized, contingent upon successful discrimination. Four visual dimensions were examined, in which grouping was based on similarity in luminance, color, orientation, and motion. Psychophysical thresholds of grouping were assessed before and after training. Results indicate that performance in all four dimensions improved with training. Training on a control condition, which paralleled the discrimination task but without a grouping component, produced no improvement. In addition, training on only the luminance and orientation dimensions improved performance for those conditions as well as for grouping by color, on which training had not occurred. However, improvement from partial training did not generalize to motion. Results demonstrate that a training protocol emphasizing stimulus integration enhanced perceptual grouping. Results suggest that neural mechanisms mediating grouping by common luminance and/or orientation contribute to those mediating grouping by color but do not share resources for grouping by common motion. Results are consistent with theories of perceptual learning emphasizing plasticity in early visual processing regions.
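The adaptive procedure described above, in which disorganization increases contingent on successful discrimination, resembles a standard staircase. A minimal sketch, assuming a simple 1-up/1-down rule and a threshold estimate from reversal points (the abstract does not specify the exact rule):

```python
def run_staircase(respond, start=0.0, step=1.0, max_level=20.0, n_trials=40):
    """1-up/1-down adaptive staircase (illustrative sketch).

    respond(level) returns True for a correct discrimination at the given
    disorganization level. Disorganization increases after a correct
    response and decreases after an error; the mean of the reversal levels
    estimates the grouping threshold.
    """
    level, prev_correct, reversals = start, None, []
    for _ in range(n_trials):
        correct = respond(level)
        if prev_correct is not None and correct != prev_correct:
            reversals.append(level)
        level = min(max_level, level + step) if correct else max(0.0, level - step)
        prev_correct = correct
    return sum(reversals) / len(reversals) if reversals else level

# Simulated observer who succeeds whenever disorganization is below 8 units.
threshold = run_staircase(lambda lv: lv < 8.0)
```

Such a rule converges on the level where performance flips between success and failure, so pre- and post-training thresholds of the kind reported here can be compared directly.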
Limited transfer of long-term motion perceptual learning with double training.
Liang, Ju; Zhou, Yifeng; Fahle, Manfred; Liu, Zili
2015-01-01
A significant recent development in visual perceptual learning research is the double training technique. With this technique, Xiao, Zhang, Wang, Klein, Levi, and Yu (2008) found complete transfer in tasks that had previously been shown to be stimulus specific. The significance of this finding is that this technique has since been successful in all tasks tested, including motion direction discrimination. Here, we investigated whether or not this technique could generalize to longer-term learning, using the method of constant stimuli. Our task was learning to discriminate motion directions of random dots. The second leg of training was contrast discrimination along a new average direction of the same moving dots. We found that, although exposure to moving dots along a new direction facilitated motion direction discrimination, this partial transfer was far from complete. We conclude that, although perceptual learning is transferable under certain conditions, stimulus specificity also remains an inherent characteristic of motion perceptual learning.
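The method of constant stimuli named above can be sketched as follows: a fixed set of stimulus levels is presented repeatedly in random order, proportion correct is tabulated per level, and a threshold is read off the resulting psychometric function by interpolation. The levels and responses below are made-up illustrative data, not the study's:

```python
def proportion_correct(levels, responses):
    """Tabulate proportion correct at each fixed stimulus level."""
    table = {}
    for lv, resp in zip(levels, responses):
        c, n = table.get(lv, (0, 0))
        table[lv] = (c + int(resp), n + 1)
    return {lv: c / n for lv, (c, n) in table.items()}

def threshold_at(pf, criterion=0.75):
    """Linearly interpolate the level where proportion correct first
    reaches the criterion."""
    pts = sorted(pf.items())
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if y0 < criterion <= y1:
            return x0 + (criterion - y0) * (x1 - x0) / (y1 - y0)
    return None

# Hypothetical direction-discrimination data: angular offsets (deg) and
# correct (1) / incorrect (0) responses, two presentations per level.
levels = [2, 2, 4, 4, 8, 8]
responses = [0, 1, 1, 1, 1, 1]
pf = proportion_correct(levels, responses)   # {2: 0.5, 4: 1.0, 8: 1.0}
thresh = threshold_at(pf)                    # 3.0
```

Unlike a staircase, this method samples all levels regardless of performance, which makes it well suited to the long-term training comparisons the study reports.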
Behavioural evidence for distinct mechanisms related to global and biological motion perception.
Miller, Louisa; Agnew, Hannah C; Pilz, Karin S
2018-01-01
The perception of human motion is a vital ability in our daily lives. Human movement recognition is often studied using point-light stimuli in which dots represent the joints of a moving person. Depending on task and stimulus, the local motion of the single dots and the global form of the stimulus can be used to discriminate point-light stimuli. Previous studies often measured motion coherence for global motion perception and contrasted it with performance in biological motion perception to assess whether difficulties in biological motion processing are related to more general difficulties with motion processing. However, it is so far unknown how performance in global motion tasks relates to the ability to use local motion or global form to discriminate point-light stimuli. Here, we investigated this relationship in more detail. In Experiment 1, we measured participants' ability to discriminate the facing direction of point-light stimuli that contained primarily local motion, global form, or both. In Experiment 2, we embedded point-light stimuli in noise to assess whether previously found relationships in task performance are related to the ability to detect signal in noise. In both experiments, we also assessed motion coherence thresholds from random-dot kinematograms. We found relationships between performances for the different biological motion stimuli, but performance for global and biological motion perception was unrelated. These results are in accordance with previous neuroimaging studies that highlighted distinct areas for global and biological motion perception in the dorsal pathway, and indicate that results regarding the relationship between global motion perception and biological motion perception need to be interpreted with caution. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
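The random-dot kinematograms used to measure motion coherence thresholds have a standard construction: a coherence fraction of dots moves in the signal direction while the rest move randomly. A minimal sketch of the direction assignment (parameters are illustrative, not the study's):

```python
import random

def rdk_directions(n_dots=100, coherence=0.3, signal_dir=90.0, seed=1):
    """Per-frame dot directions for a random-dot kinematogram: a
    `coherence` fraction of dots moves in the signal direction, the rest
    in uniformly random directions. Lowering `coherence` until direction
    judgments fail yields the motion coherence threshold.
    """
    rng = random.Random(seed)
    n_signal = round(coherence * n_dots)
    dirs = [signal_dir] * n_signal
    dirs += [rng.uniform(0.0, 360.0) for _ in range(n_dots - n_signal)]
    rng.shuffle(dirs)
    return dirs

dirs = rdk_directions()
n_signal_dots = sum(d == 90.0 for d in dirs)
```

Because the global direction is only recoverable by pooling across dots, coherence thresholds index global motion integration, which is exactly what the study contrasts with point-light discrimination.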
Spacelab experiments on space motion sickness
NASA Technical Reports Server (NTRS)
Oman, C. M.
1987-01-01
Recent research results from ground and flight experiments on motion sickness and space sickness conducted by the Man Vehicle Laboratory are reviewed. New tools developed include a mathematical model for motion sickness, a method for quantitative measurements of skin pallor and blush in ambulatory subjects, and a magnitude estimation technique for ratio scaling of nausea or discomfort. These have been used to experimentally study the time course of skin pallor and subjective symptoms in laboratory motion sickness. In prolonged sickness, subjects become hypersensitive to nauseogenic stimuli. Results of a Spacelab-1 flight experiment are described in which four observers documented the stimulus factors for and the symptoms/signs of space sickness. The clinical character of space sickness differs somewhat from acute laboratory motion sickness. However SL-1 findings support the view that space sickness is fundamentally a motion sickness. Symptoms were subjectively alleviated by head movement restriction, maintenance of a familiar orientation with respect to the visual environment, and wedging between or strapping onto surfaces which provided broad contact cues confirming the absence of body motion.
Spacelab experiments on space motion sickness
NASA Technical Reports Server (NTRS)
Oman, C. M.
1985-01-01
Recent research results from ground and flight experiments on motion sickness and space sickness conducted by the Man Vehicle Laboratory are reviewed. New tools developed include a mathematical model for motion sickness, a method for quantitative measurement of skin pallor and blush in ambulatory subjects, and a magnitude estimation technique for ratio scaling of nausea or discomfort. These have been used to experimentally study the time course of skin pallor and subjective symptoms in laboratory motion sickness. In prolonged sickness, subjects become hypersensitive to nauseogenic stimuli. Results of a Spacelab-1 flight experiment are described in which 4 observers documented the stimulus factors for and the symptoms/signs of space sickness. The clinical character of space sickness differs somewhat from acute laboratory motion sickness. However SL-1 findings support the view that space sickness is fundamentally a motion sickness. Symptoms were subjectively alleviated by head movement restriction, maintenance of a familiar orientation with respect to the visual environment, and wedging between or strapping onto surfaces which provided broad contact cues confirming the absence of body motion.
Burnat, Kalina; Hu, Tjing-Tjing; Kossut, Małgorzata; Eysel, Ulf T; Arckens, Lutgarde
2017-09-13
Induction of a central retinal lesion in both eyes of adult mammals is a model for macular degeneration and leads to retinotopic map reorganization in the primary visual cortex (V1). Here we characterized the spatiotemporal dynamics of molecular activity levels in the central and peripheral representation of area V1/17 and five higher-order visual areas, V2/18, V3/19, V4/21a, V5/PMLS, and area 7, in adult cats with central 10° retinal lesions (both sexes), by means of real-time PCR for the neuronal activity reporter gene zif268. The lesions elicited a similar, permanent reduction in activity in the center of the lesion projection zone of areas V1/17, V2/18, V3/19, and V4/21a, but not in the motion-driven V5/PMLS, which instead displayed an increase in molecular activity at 3 months postlesion, independent of visual field coordinates. Area 7 displayed decreased activity in its lesion projection zone only in the first weeks postlesion, and increased activity in its periphery from 1 month onward. We therefore examined the impact of central vision loss on motion perception using random dot kinematograms to test the capacity for form-from-motion detection based on direction and velocity cues. We revealed that the central retinal lesions either do not impair motion detection or even result in better performance, specifically when motion discrimination was based on velocity discrimination. In conclusion, we propose that central retinal damage leads to enhanced peripheral vision by sensitizing the visual system for motion processing relying on feedback from V5/PMLS and area 7. SIGNIFICANCE STATEMENT Central retinal lesions, a model for macular degeneration, result in functional reorganization of the primary visual cortex. Examining the level of cortical reactivation with the molecular activity marker zif268 revealed reorganization in visual areas outside V1.
Retinotopic lesion projection zones typically display an initial depression in zif268 expression, followed by partial recovery with postlesion time. Only the motion-sensitive area V5/PMLS shows no decrease, and even a significant activity increase at 3 months post-retinal lesion. Behavioral tests of motion perception found no impairment and even better sensitivity to higher random dot stimulus velocities. We demonstrate that the loss of central vision induces functional mobilization of motion-sensitive visual cortex, resulting in enhanced perception of moving stimuli. Copyright © 2017 the authors.
Visual cognition in disorders of consciousness: from V1 to top-down attention.
Monti, Martin M; Pickard, John D; Owen, Adrian M
2013-06-01
What is it like to be at the lower boundaries of consciousness? Disorders of consciousness such as coma, the vegetative state, and the minimally conscious state are among the most mysterious and least understood conditions of the human brain. Particularly complicated is the assessment of residual cognitive functioning and awareness for diagnostic, rehabilitative, legal, and ethical purposes. In this article, we present a novel functional magnetic resonance imaging exploration of visual cognition in a patient with a severe disorder of consciousness. This battery of tests, first developed in healthy volunteers, assesses increasingly complex transformations of visual information along a known caudal to rostral gradient from occipital to temporal cortex. In the first five levels, the battery assesses (passive) processing of light, color, motion, coherent shapes, and object categories (i.e., faces, houses). At the final level, the battery assesses the ability to voluntarily deploy visual attention in order to focus on one of two competing stimuli. In the patient, this approach revealed appropriate brain activations, indistinguishable from those seen in healthy and aware volunteers. In addition, the ability of the patient to focus on one of two competing stimuli, and to switch between them on command, also suggests that he retained the ability to access, to some degree, his own visual representations. Copyright © 2012 Wiley Periodicals, Inc.
Maloney, Ryan T; Watson, Tamara L; Clifford, Colin W G
2014-10-15
Anisotropies in the cortical representation of various stimulus parameters can reveal the fundamental mechanisms by which sensory properties are analysed and coded by the brain. One example is the preference for motion radial to the point of fixation (i.e. centripetal or centrifugal) exhibited in mammalian visual cortex. In two experiments, this study used functional magnetic resonance imaging (fMRI) to explore the determinants of these radial biases for motion in functionally-defined areas of human early visual cortex, and in particular their dependence upon eccentricity which has been indicated in recent reports. In one experiment, the cortical response to wide-field random dot kinematograms forming 16 different complex motion patterns (including centrifugal, centripetal, rotational and spiral motion) was measured. The response was analysed according to preferred eccentricity within four different eccentricity ranges. Response anisotropies were characterised by enhanced activity for centripetal or centrifugal patterns that changed systematically with eccentricity in visual areas V1-V3 and hV4 (but not V3A/B or V5/MT+). Responses evolved from a preference for centrifugal over centripetal patterns close to the fovea, to a preference for centripetal over centrifugal at the most peripheral region stimulated, in agreement with previous work. These effects were strongest in V2 and V3. In a second experiment, the stimuli were restricted to within narrow annuli either close to the fovea (0.75-1.88°) or further in the periphery (4.82-6.28°), in a way that preserved the local motion information available in the first experiment. In this configuration a preference for radial motion (centripetal or centrifugal) persisted but the dependence upon eccentricity disappeared. Again this was clearest in V2 and V3. 
A novel interpretation of the dependence upon eccentricity of motion anisotropies in early visual cortex is offered that takes into account the spatiotemporal "predictability" of the moving pattern. Such stimulus predictability, and its relationship to models of predictive coding, has found considerable support in recent years in accounting for a number of other perceptual and neural phenomena. Copyright © 2014 Elsevier Inc. All rights reserved.
Andersen, Søren K; Müller, Matthias M; Hillyard, Steven A
2015-07-08
Experiments that study feature-based attention have often examined situations in which selection is based on a single feature (e.g., the color red). However, in more complex situations relevant stimuli may not be set apart from other stimuli by a single defining property but by a specific combination of features. Here, we examined sustained attentional selection of stimuli defined by conjunctions of color and orientation. Human observers attended to one out of four concurrently presented superimposed fields of randomly moving horizontal or vertical bars of red or blue color to detect brief intervals of coherent motion. Selective stimulus processing in early visual cortex was assessed by recordings of steady-state visual evoked potentials (SSVEPs) elicited by each of the flickering fields of stimuli. We directly contrasted attentional selection of single features and feature conjunctions and found that SSVEP amplitudes in conditions in which selection was based on a single feature only (color or orientation) exactly predicted the magnitude of attentional enhancement of SSVEPs when attending to a conjunction of both features. Furthermore, enhanced SSVEP amplitudes elicited by attended stimuli were accompanied by equivalent reductions of SSVEP amplitudes elicited by unattended stimuli in all cases. We conclude that attentional selection of a feature-conjunction stimulus is accomplished by the parallel and independent facilitation of its constituent feature dimensions in early visual cortex. The ability to perceive the world is limited by the brain's processing capacity. Attention affords adaptive behavior by selectively prioritizing processing of relevant stimuli based on their features (location, color, orientation, etc.).
We found that attentional mechanisms for selection of different features belonging to the same object operate independently and in parallel: concurrent attentional selection of two stimulus features is simply the sum of attending to each of those features separately. This result is key to understanding attentional selection in complex (natural) scenes, where relevant stimuli are likely to be defined by a combination of stimulus features. Copyright © 2015 the authors 0270-6474/15/359912-08$15.00/0.
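The additivity result above lends itself to a one-line quantitative statement. The sketch below uses invented amplitude numbers (not values from the study) to illustrate the claim that the attentional gain for a color-orientation conjunction is predicted by summing the gains measured when attending each feature alone:

```python
def predicted_conjunction_gain(single_feature_gains):
    """Predicted SSVEP attentional gain for a feature conjunction under the
    parallel-and-independent facilitation account: the sum of the gains
    observed when each constituent feature is attended alone."""
    return sum(single_feature_gains)

# Hypothetical single-feature gains (arbitrary units), for illustration only:
color_gain = 0.12        # enhancement when attending color alone
orientation_gain = 0.08  # enhancement when attending orientation alone

conjunction_gain = predicted_conjunction_gain([color_gain, orientation_gain])
```

A departure of the measured conjunction gain from this sum would argue against independent facilitation of the two feature dimensions.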
Perception of Elasticity in the Kinetic Illusory Object with Phase Differences in Inducer Motion
Masuda, Tomohiro; Sato, Kazuki; Murakoshi, Takuma; Utsumi, Ken; Kimura, Atsushi; Shirai, Nobu; Kanazawa, So; Yamaguchi, Masami K.; Wada, Yuji
2013-01-01
Background It is known that subjective contours are perceived even when a figure involves motion. However, whether this includes the perception of rigidity or deformation of an illusory surface remains unknown. In particular, since most visual stimuli used in previous studies were generated in order to induce illusory rigid objects, the potential perception of material properties such as rigidity or elasticity in these illusory surfaces has not been examined. Here, we elucidate whether the magnitude of phase difference in oscillation influences the visual impressions of an object's elasticity (Experiment 1) and identify whether such elasticity perceptions are accompanied by the shape of the subjective contours, which can be assumed to be strongly correlated with the perception of rigidity (Experiment 2). Methodology/Principal Findings In Experiment 1, the phase differences in the oscillating motion of inducers were controlled to investigate whether they influenced the visual impression of an illusory object's elasticity. The results demonstrated that the impression of the elasticity of an illusory surface with subjective contours varied systematically with the degree of phase difference. In Experiment 2, we examined whether the subjective contours of a perceived object appeared linear or curved using multi-dimensional scaling analysis. The results indicated that the contours of a moving illusory object were perceived as more curved than linear in all phase-difference conditions. Conclusions/Significance These findings suggest that the phase difference in an object's motion is a significant factor in the material perception of motion-related elasticity. PMID:24205281
The neurophysiology of biological motion perception in schizophrenia
Jahshan, Carol; Wynn, Jonathan K; Mathis, Kristopher I; Green, Michael F
2015-01-01
Introduction The ability to recognize human biological motion is a fundamental aspect of social cognition that is impaired in people with schizophrenia. However, little is known about the neural substrates of impaired biological motion perception in schizophrenia. In the current study, we assessed event-related potentials (ERPs) to human and nonhuman movement in schizophrenia. Methods Twenty-four subjects with schizophrenia and 18 healthy controls completed a biological motion task while their electroencephalography (EEG) was simultaneously recorded. Subjects watched clips of point-light animations containing 100%, 85%, or 70% biological motion, and were asked to decide whether the clip resembled human or nonhuman movement. Three ERPs were examined: P1, N1, and the late positive potential (LPP). Results Behaviorally, schizophrenia subjects identified significantly fewer stimuli as human movement compared to healthy controls in the 100% and 85% conditions. At the neural level, P1 was reduced in the schizophrenia group but did not differ among conditions in either group. There were no group differences in N1 but both groups had the largest N1 in the 70% condition. There was a condition × group interaction for the LPP: Healthy controls had a larger LPP to 100% versus 85% and 70% biological motion; there was no difference among conditions in schizophrenia subjects. Conclusions Consistent with previous findings, schizophrenia subjects were impaired in their ability to recognize biological motion. The EEG results showed that biological motion did not influence the earliest stage of visual processing (P1). Although schizophrenia subjects showed the same pattern of N1 results relative to healthy controls, they were impaired at a later stage (LPP), reflecting a dysfunction in the identification of human form in biological versus nonbiological motion stimuli. PMID:25722951
Effects of set-size and selective spatial attention on motion processing.
Dobkins, K R; Bosworth, R G
2001-05-01
In order to investigate the effects of divided attention and selective spatial attention on motion processing, we obtained direction-of-motion thresholds using a stochastic motion display under various attentional manipulations and stimulus durations (100-600 ms). To investigate divided attention, we compared motion thresholds obtained when a single motion stimulus was presented in the visual field (set-size=1) to those obtained when the motion stimulus was presented amongst three confusable noise distractors (set-size=4). The magnitude of the observed detriment in performance with an increase in set-size from 1 to 4 could be accounted for by a simple decision model based on signal detection theory, which assumes that attentional resources are not limited in capacity. To investigate selective attention, we compared motion thresholds obtained when a valid pre-cue alerted the subject to the location of the to-be-presented motion stimulus to those obtained when no pre-cue was provided. As expected, the effect of pre-cueing was large when the visual field contained noise distractors, an effect we attribute to "noise reduction" (i.e. the pre-cue allows subjects to exclude irrelevant distractors that would otherwise impair performance). In the single motion stimulus display, we found a significant benefit of pre-cueing only at short durations (≤150 ms), a result that can potentially be explained by a "time-to-orient" hypothesis (i.e. the pre-cue improves performance by eliminating the time it takes to orient attention to a peripheral stimulus at its onset, thereby increasing the time spent processing the stimulus). Thus, our results suggest that the visual motion system can analyze several stimuli simultaneously without limitations on sensory processing per se, and that spatial pre-cueing serves to reduce the effects of distractors and perhaps increase the effective processing time of the stimulus.
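The unlimited-capacity decision model invoked above can be sketched with a max-rule Monte Carlo simulation. The d' value, trial count, and seed below are illustrative assumptions, not fits to the reported thresholds:

```python
import random

def percent_correct(d_prime, set_size, n_trials=20000, seed=1):
    """Monte Carlo estimate of accuracy under a max-rule signal-detection
    model with unlimited-capacity sensory processing: one location holds
    the signal (mean d'), the remaining locations hold noise (mean 0),
    and the observer chooses the location with the largest response."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        signal = rng.gauss(d_prime, 1.0)
        noise = [rng.gauss(0.0, 1.0) for _ in range(set_size - 1)]
        # Correct whenever the signal location yields the maximum response.
        if all(signal > n for n in noise):
            correct += 1
    return correct / n_trials

# Accuracy declines from set-size 1 to set-size 4 purely through decision
# noise (more chances for a noise distractor to exceed the signal), with
# no capacity limit on sensory processing itself.
p1 = percent_correct(2.0, 1)
p4 = percent_correct(2.0, 4)
```

This is the sense in which a set-size cost can arise without limited-capacity processing: every location is analyzed in full, yet adding confusable distractors still lowers accuracy at the decision stage.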
Phillipou, Andrea; Rossell, Susan Lee; Gurvich, Caroline; Castle, David Jonathan; Troje, Nikolaus Friedrich; Abel, Larry Allen
2016-03-01
Anorexia nervosa (AN) is a psychiatric condition characterised by a distortion of body image. However, whether individuals with AN can accurately perceive the size of other individuals' bodies is unclear. In the current study, 24 women with AN and 24 healthy control participants undertook two biological motion tasks while eyetracking was performed: to identify the walkers' gender and to indicate the walkers' body size. Anorexia nervosa participants tended to 'hyperscan' stimuli but did not demonstrate differences in how visual attention was directed to different body areas, relative to controls. Groups also did not differ in their estimation of body size. The hyperscanning behaviours suggest increased anxiety to disorder-relevant stimuli in AN. The lack of a group difference in the estimation of body size suggests that the AN group was able to judge the body size of others accurately. The findings are discussed in terms of body image distortion specific to oneself in AN. Copyright © 2015 John Wiley & Sons, Ltd and Eating Disorders Association.
Feature-based attentional modulations in the absence of direct visual stimulation.
Serences, John T; Boynton, Geoffrey M
2007-07-19
When faced with a crowded visual scene, observers must selectively attend to behaviorally relevant objects to avoid sensory overload. Often this selection process is guided by prior knowledge of a target-defining feature (e.g., the color red when looking for an apple), which enhances the firing rate of visual neurons that are selective for the attended feature. Here, we used functional magnetic resonance imaging and a pattern classification algorithm to predict the attentional state of human observers as they monitored a visual feature (one of two directions of motion). We find that feature-specific attention effects spread across the visual field, even to regions of the scene that do not contain a stimulus. This spread of feature-based attention to empty regions of space may facilitate the perception of behaviorally relevant stimuli by increasing sensitivity to attended features at all locations in the visual field.
Contrast gain control in first- and second-order motion perception.
Lu, Z L; Sperling, G
1996-12-01
A novel pedestal-plus-test paradigm is used to determine the nonlinear gain-control properties of the first-order (luminance) and the second-order (texture-contrast) motion systems, that is, how these systems' responses to motion stimuli are reduced by pedestals and other masking stimuli. Motion-direction thresholds were measured for test stimuli consisting of drifting luminance and texture-contrast-modulation stimuli superimposed on pedestals of various amplitudes. (A pedestal is a static sine-wave grating of the same type and same spatial frequency as the moving test grating.) It was found that first-order motion-direction thresholds are unaffected by small pedestals, but at pedestal contrasts above 1-2% (5-10 × pedestal threshold), motion thresholds increase proportionally to pedestal amplitude (a Weber law). For first-order stimuli, pedestal masking is specific to the spatial frequency of the test. On the other hand, motion-direction thresholds for texture-contrast stimuli are independent of pedestal amplitude (no gain control whatever) throughout the accessible pedestal amplitude range (from 0 to 40%). However, when baseline carrier contrast increases (with constant pedestal modulation amplitude), motion thresholds increase, showing that gain control in second-order motion is determined not by the modulator (as in first-order motion) but by the carrier. Note that baseline contrast of the carrier is inherently independent of the spatial frequency of the modulator. The drastically different gain-control properties of the two motion systems and prior observations of motion masking and motion saturation are all encompassed in a functional theory. The stimulus inputs to both the first- and second-order motion processes are normalized by feedforward, shunting gain control. The different properties arise because the modulator is used to control the first-order gain and the carrier is used to control the second-order gain.
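The two qualitatively different masking regimes described above can be sketched as simple threshold functions. The knee location, baseline, and proportionality constants below are illustrative assumptions, not the paper's fitted values:

```python
def first_order_threshold(pedestal, baseline=0.003, knee=0.015):
    """Motion-direction threshold for the first-order (luminance) system:
    flat below a pedestal-contrast 'knee' (~1-2% contrast), then rising in
    proportion to pedestal amplitude (Weber-law masking).
    All parameter values are illustrative."""
    if pedestal <= knee:
        return baseline
    return baseline * pedestal / knee  # proportional to pedestal amplitude

def second_order_threshold(pedestal, carrier_contrast, k=0.05):
    """Second-order (texture-contrast) system: threshold is independent of
    the pedestal modulation amplitude (the 'pedestal' argument is unused
    by design) but rises with baseline carrier contrast."""
    return k * carrier_contrast
```

The contrast between the two functions captures the abstract's central claim: the modulator drives the first-order gain control, while the carrier drives the second-order gain control.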
Harston, George W. J.; Kilburn-Toppin, Fleur; Matheson, Thomas; Burrows, Malcolm; Gabbiani, Fabrizio; Krapp, Holger G.
2010-01-01
Desert locusts (Schistocerca gregaria) can transform reversibly between the swarming gregarious phase and a solitarious phase, which avoids other locusts. This transformation entails dramatic changes in morphology, physiology, and behavior. We have used the lobula giant movement detector (LGMD) and its postsynaptic target, the descending contralateral movement detector (DCMD), which are visual interneurons that detect looming objects, to analyze how differences in the visual ecology of the two phases are served by altered neuronal function. Solitarious locusts had larger eyes and a greater degree of binocular overlap than gregarious locusts. The receptive field to looming stimuli had a large central region of nearly equal response spanning 120° × 60° in both phases. The DCMDs of gregarious locusts responded more strongly than those of solitarious locusts and had a small caudolateral focus of even greater sensitivity. More peripherally, the response was reduced in both phases, particularly ventrally, with gregarious locusts showing a greater proportional decrease. Gregarious locusts showed less habituation to repeated looming stimuli along the eye equator than did solitarious locusts. By contrast, in other parts of the receptive field the degree of habituation was similar in both phases. The receptive field organization to looming stimuli contrasts strongly with the receptive field organization of the same neurons to nonlooming local-motion stimuli, which show much more pronounced regional variation. The DCMDs of both gregarious and solitarious locusts are able to detect approaching objects from across a wide expanse of visual space, but phase-specific changes in the spatiotemporal receptive field are linked to lifestyle changes. PMID:19955292
Aravamuthan, B R; Angelaki, D E
2012-10-25
The pedunculopontine nucleus (PPN) and central mesencephalic reticular formation (cMRF) both send projections and receive input from areas with known vestibular responses. Noting their connections with the basal ganglia, the locomotor disturbances that occur following lesions of the PPN or cMRF, and the encouraging results of PPN deep brain stimulation in Parkinson's disease patients, both the PPN and cMRF have been linked to motor control. In order to determine the existence of and characterize vestibular responses in the PPN and cMRF, we recorded single neurons from both structures during vertical and horizontal rotation, translation, and visual pursuit stimuli. The majority of PPN cells (72.5%) were vestibular-only (VO) cells that responded exclusively to rotation and translation stimuli but not visual pursuit. Visual pursuit responses were much more prevalent in the cMRF (57.1%) though close to half of cMRF cells were VO cells (41.1%). Directional preferences also differed between the PPN, which was preferentially modulated during nose-down pitch, and cMRF, which was preferentially modulated during ipsilateral yaw rotation. Finally, amplitude responses were similar between the PPN and cMRF during rotation and pursuit stimuli, but PPN responses to translation were of higher amplitude than cMRF responses. Taken together with their connections to the vestibular circuit, these results implicate the PPN and cMRF in the processing of vestibular stimuli and suggest important roles for both in responding to motion perturbations like falls and turns. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.
Perceptual learning modifies untrained pursuit eye movements.
Szpiro, Sarit F A; Spering, Miriam; Carrasco, Marisa
2014-07-07
Perceptual learning improves detection and discrimination of relevant visual information in mature humans, revealing sensory plasticity. Whether visual perceptual learning affects motor responses is unknown. Here we implemented a protocol that enabled us to address this question. We tested a perceptual response (motion direction estimation, in which observers overestimate motion direction away from a reference) and a motor response (voluntary smooth pursuit eye movements). Perceptual training led to greater overestimation and, remarkably, it modified untrained smooth pursuit. In contrast, pursuit training did not affect overestimation in either pursuit or perception, even though observers in both training groups were exposed to the same stimuli for the same time period. A second experiment revealed that estimation training also improved discrimination, indicating that overestimation may optimize perceptual sensitivity. Hence, active perceptual training is necessary to alter perceptual responses, and an acquired change in perception suffices to modify pursuit, a motor response. © 2014 ARVO.
Ida, Hirofumi; Fukuhara, Kazunobu; Kusubori, Seiji; Ishii, Motonobu
2011-09-01
Computer graphics of digital human models can be used to display human motions as visual stimuli. This study presents our technique for manipulating human motion with a forward kinematics calculation without violating anatomical constraints. A motion modulation of the upper extremity was conducted by proportionally modulating the anatomical joint angular velocity calculated by motion analysis. The effect of this manipulation was examined in a tennis situation: the receiver's performance in predicting ball direction when viewing a digital model of the server's motion derived by modulating the angular velocities of the forearm or the elbow during the forward swing. The results showed that the faster the server's forearm pronated, the more the receiver's anticipation of the ball direction tended to the left side of the serve box. In contrast, the faster the server's elbow extended, the more the receiver's anticipation of the ball direction tended to the right. This suggests that tennis players are sensitive to the motion modulation of their opponent's racket-arm.
Predictive Coding or Evidence Accumulation? False Inference and Neuronal Fluctuations
Friston, Karl J.; Kleinschmidt, Andreas
2010-01-01
Perceptual decisions can be made when sensory input affords an inference about what generated that input. Here, we report findings from two independent perceptual experiments conducted during functional magnetic resonance imaging (fMRI) with a sparse event-related design. The first experiment, in the visual modality, involved forced-choice discrimination of coherence in random dot kinematograms that contained either subliminal or periliminal motion coherence. The second experiment, in the auditory domain, involved free response detection of (non-semantic) near-threshold acoustic stimuli. We analysed fluctuations in ongoing neural activity, as indexed by fMRI, and found that neuronal activity in sensory areas (extrastriate visual and early auditory cortex) biases perceptual decisions towards correct inference and not towards a specific percept. Hits (detection of near-threshold stimuli) were preceded by significantly higher activity than both misses of identical stimuli or false alarms, in which percepts arise in the absence of appropriate sensory input. In accord with predictive coding models and the free-energy principle, this observation suggests that cortical activity in sensory brain areas reflects the precision of prediction errors and not just the sensory evidence or prediction errors per se. PMID:20369004
Synaptic Correlates of Low-Level Perception in V1.
Gerard-Mercier, Florian; Carelli, Pedro V; Pananceau, Marc; Troncoso, Xoana G; Frégnac, Yves
2016-04-06
The computational role of primary visual cortex (V1) in low-level perception remains largely debated. A dominant view assumes the prevalence of higher cortical areas and top-down processes in binding information across the visual field. Here, we investigated the role of long-distance intracortical connections in form and motion processing by measuring, with intracellular recordings, their synaptic impact on neurons in area 17 (V1) of the anesthetized cat. By systematically mapping synaptic responses to stimuli presented in the nonspiking surround of V1 receptive fields, we provide the first quantitative characterization of the lateral functional connectivity kernel of V1 neurons. Our results revealed at the population level two structural-functional biases in the synaptic integration and dynamic association properties of V1 neurons. First, subthreshold responses to oriented stimuli flashed in isolation in the nonspiking surround exhibited a geometric organization around the preferred orientation axis mirroring the psychophysical "association field" for collinear contour perception. Second, apparent motion stimuli, for which horizontal and feedforward synaptic inputs summed in-phase, evoked dominantly facilitatory nonlinear interactions, specifically during centripetal collinear activation along the preferred orientation axis, at saccadic-like speeds. This spatiotemporal integration property, which could constitute the neural correlate of a human perceptual bias in speed detection, suggests that local (orientation) and global (motion) information is already linked within V1. We propose the existence of a "dynamic association field" in V1 neurons, whose spatial extent and anisotropy are transiently updated and reshaped as a function of changes in the retinal flow statistics imposed during natural oculomotor exploration. The computational role of primary visual cortex in low-level perception remains debated. 
The expression of this "pop-out" perception is often assumed to require attention-related processes, such as top-down feedback from higher cortical areas. Using intracellular techniques in the anesthetized cat and novel analysis methods, we reveal unexpected structural-functional biases in the synaptic integration and dynamic association properties of V1 neurons. These structural-functional biases provide a substrate, within V1, for contour detection and, more unexpectedly, global motion flow sensitivity at saccadic speed, even in the absence of attentional processes. We argue for the concept of a "dynamic association field" in V1 neurons, whose spatial extent and anisotropy changes with retinal flow statistics, and more generally for a renewed focus on intracortical computation. Copyright © 2016 the authors 0270-6474/16/363925-18$15.00/0.
Suppressive mechanisms in visual motion processing: From perception to intelligence.
Tadin, Duje
2015-10-01
Perception operates on an immense amount of incoming information that greatly exceeds the brain's processing capacity. Because of this fundamental limitation, the ability to suppress irrelevant information is a key determinant of perceptual efficiency. Here, I will review a series of studies investigating suppressive mechanisms in visual motion processing, namely perceptual suppression of large, background-like motions. These spatial suppression mechanisms are adaptive, operating only when sensory inputs are sufficiently robust to guarantee visibility. Converging correlational and causal evidence links these behavioral results with inhibitory center-surround mechanisms, namely those in cortical area MT. Spatial suppression is abnormally weak in several special populations, including the elderly and individuals with schizophrenia, a deficit that is evidenced by better-than-normal direction discriminations of large moving stimuli. Theoretical work shows that this abnormal weakening of spatial suppression should result in motion segregation deficits, but direct behavioral support of this hypothesis is lacking. Finally, I will argue that the ability to suppress information is a fundamental neural process that applies not only to perception but also to cognition in general. Supporting this argument, I will discuss recent research that shows individual differences in spatial suppression of motion signals strongly predict individual variations in IQ scores. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Bottlenecks of Motion Processing during a Visual Glance: The Leaky Flask Model
Öğmen, Haluk; Ekiz, Onur; Huynh, Duong; Bedell, Harold E.; Tripathy, Srimant P.
2013-01-01
Where do the bottlenecks for information and attention lie when our visual system processes incoming stimuli? The human visual system encodes the incoming stimulus and transfers its contents into three major memory systems with increasing time scales, viz., sensory (or iconic) memory, visual short-term memory (VSTM), and long-term memory (LTM). It is commonly believed that the major bottleneck of information processing resides in VSTM. In contrast to this view, we show major bottlenecks for motion processing prior to VSTM. In the first experiment, we examined bottlenecks at the stimulus encoding stage through a partial-report technique by delivering the cue immediately at the end of the stimulus presentation. In the second experiment, we varied the cue delay to investigate sensory memory and VSTM. Performance decayed exponentially as a function of cue delay and we used the time-constant of the exponential-decay to demarcate sensory memory from VSTM. We then decomposed performance in terms of quality and quantity measures to analyze bottlenecks along these dimensions. In terms of the quality of information, two thirds to three quarters of the motion-processing bottleneck occurs in stimulus encoding rather than memory stages. In terms of the quantity of information, the motion-processing bottleneck is distributed, with the stimulus-encoding stage accounting for one third of the bottleneck. The bottleneck for the stimulus-encoding stage is dominated by the selection compared to the filtering function of attention. We also found that the filtering function of attention is operating mainly at the sensory memory stage in a specific manner, i.e., influencing only quantity and sparing quality. These results provide a novel and more complete understanding of information processing and storage bottlenecks for motion processing. PMID:24391806
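The exponential-decay demarcation used above can be sketched as follows. The asymptote, iconic span, and time constant are invented for illustration, and the fit assumes the VSTM asymptote is already known:

```python
import math

def partial_report_accuracy(delay, vstm_level=0.45, iconic_span=0.35, tau=0.3):
    """Illustrative partial-report performance as a function of cue delay
    (seconds): an iconic-memory component decays exponentially with time
    constant tau toward a sustained VSTM asymptote. All parameter values
    here are invented, not the study's estimates."""
    return vstm_level + iconic_span * math.exp(-delay / tau)

def fit_tau(delays, accuracies, vstm_level=0.45):
    """Recover the decay time constant by log-linear regression on the
    above-asymptote component; this kind of time-constant estimate is what
    demarcates sensory (iconic) memory from VSTM."""
    ys = [math.log(a - vstm_level) for a in accuracies]
    n = len(delays)
    mx = sum(delays) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(delays, ys)) / \
            sum((x - mx) ** 2 for x in delays)
    return -1.0 / slope  # model predicts slope = -1/tau

delays = [0.0, 0.1, 0.2, 0.4, 0.8]
acc = [partial_report_accuracy(d) for d in delays]
tau_hat = fit_tau(delays, acc)  # recovers tau exactly on noiseless data
```

With noisy data one would fit the full three-parameter exponential directly (e.g., by nonlinear least squares) rather than assuming the asymptote.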
Computational model for perception of objects and motions.
Yang, WenLu; Zhang, LiQing; Ma, LiBo
2008-06-01
Perception of objects and motions in the visual scene is one of the basic problems in the visual system. There exist 'What' and 'Where' pathways in the superior visual cortex, starting from the simple cells in the primary visual cortex. The former is able to perceive objects such as form, color, and texture, and the latter perceives 'where', for example, the velocity and direction of spatial movement of objects. This paper explores brain-like computational architectures of visual information processing. We propose a visual perceptual model and a computational mechanism for training the perceptual model. The computational model is a three-layer network. The first layer is the input layer, which is used to receive the stimuli from natural environments. The second layer is designed for representing the internal neural information. The connections between the first layer and the second layer, called the receptive fields of neurons, are self-adaptively learned based on the principle of sparse neural representation. To this end, we introduce the Kullback-Leibler divergence as the measure of independence between neural responses and derive the learning algorithm based on minimizing the cost function. The proposed algorithm is applied to train the basis functions, namely receptive fields, which are localized, oriented, and band-pass. The resultant receptive fields of neurons in the second layer have characteristics resembling those of simple cells in the primary visual cortex. Based on these basis functions, we further construct the third layer for perception of what and where in the superior visual cortex. The proposed model is able to perceive objects and their motions with high accuracy and strong robustness against additive noise. Computer simulation results in the final section show the feasibility of the proposed perceptual model and the high efficiency of the learning algorithm.
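As a rough illustration of using Kullback-Leibler divergence as an independence measure between (binned) neural responses, in the spirit of the model's cost function: the KL divergence between a joint response distribution and the product of its marginals is zero exactly when the responses are independent. The distributions below are made up for the example:

```python
import math

def kl_divergence(p, q):
    """KL divergence D(p || q) between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def dependence(joint):
    """KL divergence between a 2-D joint distribution (nested lists) and
    the product of its marginals; zero iff the two responses are
    independent, positive otherwise (this is the mutual information)."""
    rows = [sum(r) for r in joint]          # marginal of response 1
    cols = [sum(c) for c in zip(*joint)]    # marginal of response 2
    flat_joint = [pij for row in joint for pij in row]
    flat_prod = [ri * cj for ri in rows for cj in cols]
    return kl_divergence(flat_joint, flat_prod)

independent = [[0.25, 0.25], [0.25, 0.25]]  # joint = product of marginals
correlated = [[0.40, 0.10], [0.10, 0.40]]   # responses co-vary
```

Minimizing such a dependence measure over the receptive-field weights is one way to push a population of model neurons toward statistically independent, sparse responses.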
Mechanisms for Rapid Adaptive Control of Motion Processing in Macaque Visual Cortex.
McLelland, Douglas; Baker, Pamela M; Ahmed, Bashir; Kohn, Adam; Bair, Wyeth
2015-07-15
A key feature of neural networks is their ability to rapidly adjust their function, including signal gain and temporal dynamics, in response to changes in sensory inputs. These adjustments are thought to be important for optimizing the sensitivity of the system, yet their mechanisms remain poorly understood. We studied adaptive changes in temporal integration in direction-selective cells in macaque primary visual cortex, where specific hypotheses have been proposed to account for rapid adaptation. By independently stimulating direction-specific channels, we found that the control of temporal integration of motion at one direction was independent of motion signals driven at the orthogonal direction. We also found that individual neurons can simultaneously support two different profiles of temporal integration for motion in orthogonal directions. These findings rule out a broad range of adaptive mechanisms as being key to the control of temporal integration, including untuned normalization and nonlinearities of spike generation and somatic adaptation in the recorded direction-selective cells. Such mechanisms are too broadly tuned, or occur too far downstream, to explain the channel-specific and multiplexed temporal integration that we observe in single neurons. Instead, we are compelled to conclude that parallel processing pathways are involved, and we demonstrate one such circuit using a computer model. This solution allows processing in different direction/orientation channels to be separately optimized and is sensible given that, under typical motion conditions (e.g., translation or looming), speed on the retina is a function of the orientation of image components. Many neurons in visual cortex are understood in terms of their spatial and temporal receptive fields. It is now known that the spatiotemporal integration underlying visual responses is not fixed but depends on the visual input. 
For example, neurons that respond selectively to motion direction integrate signals over a shorter time window when visual motion is fast and a longer window when motion is slow. We investigated the mechanisms underlying this useful adaptation by recording from neurons as they responded to stimuli moving in two different directions at different speeds. Computer simulations of our results enabled us to rule out several candidate theories in favor of a model that integrates across multiple parallel channels that operate at different time scales.
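The idea of one neuron multiplexing two different integration profiles can be caricatured with leaky integrators. This is an illustrative toy, not the authors' circuit model: two hypothetical direction channels are each integrated over their own time constant (short for fast motion, long for slow motion) and then summed at the "neuron".

```python
import numpy as np

def integrate(signal, tau):
    """Leaky (exponential-window) temporal integration of one channel."""
    out, y = [], 0.0
    for x in signal:
        y += (x - y) / tau       # larger tau = longer integration window
        out.append(y)
    return np.array(out)

# Hypothetical inputs: a brief pulse (fast motion) and a sustained
# drive (slow motion), each in its own direction channel.
fast_channel = np.r_[np.ones(5), np.zeros(45)]
slow_channel = np.r_[np.ones(25), np.zeros(25)]

# The summed response carries both integration profiles at once.
response = integrate(fast_channel, tau=2.0) + integrate(slow_channel, tau=10.0)
```

The point of the sketch is only structural: because the channels are integrated before pooling, each direction can keep its own temporal window, which broadly tuned mechanisms downstream of the pooling stage could not achieve.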
Macaque Parieto-Insular Vestibular Cortex: Responses to self-motion and optic flow
Chen, Aihua; DeAngelis, Gregory C.; Angelaki, Dora E.
2011-01-01
The parieto-insular vestibular cortex (PIVC) is thought to contain an important representation of vestibular information. Here we describe responses of macaque PIVC neurons to three-dimensional (3D) vestibular and optic flow stimulation. We found robust vestibular responses to both translational and rotational stimuli in the retroinsular (Ri) and adjacent secondary somatosensory (S2) cortices. PIVC neurons did not respond to optic flow stimulation, and vestibular responses were similar in darkness and during visual fixation. Cells in the upper bank and tip of the lateral sulcus (Ri and S2) responded to sinusoidal vestibular stimuli with modulation at the first harmonic frequency, and were directionally tuned. Cells in the lower bank of the lateral sulcus (mostly Ri) often modulated at the second harmonic frequency, and showed either bimodal spatial tuning or no tuning at all. All directions of 3D motion were represented in PIVC, with direction preferences distributed roughly uniformly for translation, but showing a preference for roll rotation. Spatio-temporal profiles of responses to translation revealed that half of PIVC cells followed the linear velocity profile of the stimulus, one-quarter carried signals related to linear acceleration (in the form of two peaks of direction selectivity separated in time), and a few neurons followed the derivative of linear acceleration (jerk). In contrast, mainly velocity-coding cells were found in response to rotation. Thus, PIVC comprises a large functional region in macaque areas Ri and S2, with robust responses to 3D rotation and translation, but is unlikely to play a significant role in visual/vestibular integration for self-motion perception.
Wilaiprasitporn, Theerawit; Yagi, Tohru
2015-01-01
This research demonstrates the orientation-modulated attention effect on the visual evoked potential. We combined this finding with our previous findings on the motion-modulated attention effect and used the result to develop novel visual stimuli for a personal identification number (PIN) application based on a brain-computer interface (BCI) framework. An electroencephalography amplifier with a single electrode channel was sufficient for our application. A computationally inexpensive algorithm and small datasets were used in processing. Seven healthy volunteers participated in experiments to measure offline performance. Mean accuracy was 83.3% at 13.9 bits/min. Encouraged by these results, we plan to continue developing the BCI-based personal identification application toward real-time systems.
Visual Prediction Error Spreads Across Object Features in Human Visual Cortex
Summerfield, Christopher; Egner, Tobias
2016-01-01
Visual cognition is thought to rely heavily on contextual expectations. Accordingly, previous studies have revealed distinct neural signatures for expected versus unexpected stimuli in visual cortex. However, it is presently unknown how the brain combines multiple concurrent stimulus expectations such as those we have for different features of a familiar object. To understand how an unexpected object feature affects the simultaneous processing of other expected feature(s), we combined human fMRI with a task that independently manipulated expectations for color and motion features of moving-dot stimuli. Behavioral data and neural signals from visual cortex were then interrogated to adjudicate between three possible ways in which prediction error (surprise) in the processing of one feature might affect the concurrent processing of another, expected feature: (1) feature processing may be independent; (2) surprise might “spread” from the unexpected to the expected feature, rendering the entire object unexpected; or (3) pairing a surprising feature with an expected feature might promote the inference that the two features are not in fact part of the same object. To formalize these rival hypotheses, we implemented them in a simple computational model of multifeature expectations. Across a range of analyses, behavior and visual neural signals consistently supported a model that assumes a mixing of prediction error signals across features: surprise in one object feature spreads to its other feature(s), thus rendering the entire object unexpected. These results reveal neurocomputational principles of multifeature expectations and indicate that objects are the unit of selection for predictive vision. SIGNIFICANCE STATEMENT We address a key question in predictive visual cognition: how does the brain combine multiple concurrent expectations for different features of a single object such as its color and motion trajectory? 
By combining a behavioral protocol that independently varies expectation of (and attention to) multiple object features with computational modeling and fMRI, we demonstrate that behavior and fMRI activity patterns in visual cortex are best accounted for by a model in which prediction error in one object feature spreads to other object features. These results demonstrate how predictive vision forms object-level expectations out of multiple independent features.
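The three rival hypotheses contrasted above can be caricatured in a few lines. This is an illustrative toy, not the authors' computational model: each feature carries a surprise (prediction error) value in [0, 1], and the schemes differ only in how feature surprise combines into object-level unexpectedness.

```python
def object_surprise(s_color, s_motion, scheme):
    """Toy object-level surprise under three combination schemes.

    s_color, s_motion: per-feature prediction error ("surprise") in [0, 1].
    """
    if scheme == "independent":
        # (1) features processed independently: object surprise is
        # just the average of its features
        return (s_color + s_motion) / 2
    if scheme == "spread":
        # (2) surprise spreads: one unexpected feature renders the
        # whole object unexpected (the scheme the data supported)
        return max(s_color, s_motion)
    if scheme == "segregate":
        # (3) the surprising feature is inferred to belong to a
        # different object, so its surprise is explained away
        return min(s_color, s_motion)
    raise ValueError(scheme)

# An expected color (low surprise) paired with unexpected motion:
for scheme in ("independent", "spread", "segregate"):
    print(scheme, object_surprise(0.1, 0.9, scheme))
```

With an expected color and an unexpected motion, the spread scheme marks the whole object as unexpected while the segregation scheme leaves it expected, which is exactly the contrast the fMRI analyses adjudicated.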
Auditory motion processing after early blindness
Jiang, Fang; Stecker, G. Christopher; Fine, Ione
2014-01-01
Studies showing that occipital cortex responds to auditory and tactile stimuli after early blindness are often interpreted as demonstrating that early blind subjects “see” auditory and tactile stimuli. However, it is not clear whether these occipital responses directly mediate the perception of auditory/tactile stimuli, or simply modulate or augment responses within other sensory areas. We used fMRI pattern classification to categorize the perceived direction of motion for both coherent and ambiguous auditory motion stimuli. In sighted individuals, perceived motion direction was accurately categorized based on neural responses within the planum temporale (PT) and right lateral occipital cortex (LOC). Within early blind individuals, auditory motion decisions for both stimuli were successfully categorized from responses within the human middle temporal complex (hMT+), but not the PT or right LOC. These findings suggest that early blind responses within hMT+ are associated with the perception of auditory motion, and that these responses in hMT+ may usurp some of the functions of nondeprived PT. Thus, our results provide further evidence that blind individuals do indeed “see” auditory motion.
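fMRI pattern classification, as used above, amounts to training a classifier on voxel response patterns and testing whether held-out trials can be labeled by the perceived motion direction. A self-contained toy with simulated "voxel" patterns and a nearest-centroid decoder; all numbers are invented and this is not the authors' analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated voxel patterns: two motion directions, 20 trials each,
# 50 "voxels"; direction shifts the mean pattern slightly.
n_vox, n_trials = 50, 20
mu_left, mu_right = rng.normal(size=n_vox), rng.normal(size=n_vox)
left = mu_left + rng.normal(scale=2.0, size=(n_trials, n_vox))
right = mu_right + rng.normal(scale=2.0, size=(n_trials, n_vox))

def nearest_centroid(train_a, train_b, test):
    """Label each test pattern by the nearer class-mean pattern."""
    ca, cb = train_a.mean(axis=0), train_b.mean(axis=0)
    da = np.linalg.norm(test - ca, axis=1)
    db = np.linalg.norm(test - cb, axis=1)
    return np.where(da < db, "left", "right")

# Train on the first 10 trials of each class, test on the rest.
pred = nearest_centroid(left[:10], right[:10],
                        np.vstack([left[10:], right[10:]]))
truth = np.array(["left"] * 10 + ["right"] * 10)
accuracy = np.mean(pred == truth)
```

Above-chance accuracy on held-out trials is the evidence that a region's response patterns carry information about the perceived direction; in practice, classifiers and cross-validation schemes are more elaborate than this sketch.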
Attention is required for maintenance of feature binding in visual working memory.
Zokaei, Nahid; Heider, Maike; Husain, Masud
2014-01-01
Working memory and attention are intimately connected. However, understanding the relationship between the two is challenging. Currently, there is an important controversy about whether objects in working memory are maintained automatically or require resources that are also deployed for visual or auditory attention. Here we investigated the effects of loading attention resources on precision of visual working memory, specifically on correct maintenance of feature-bound objects, using a dual-task paradigm. Participants were presented with a memory array and were asked to remember either direction of motion of random dot kinematograms of different colour, or orientation of coloured bars. During the maintenance period, they performed a secondary visual or auditory task, with varying levels of load. Following a retention period, they adjusted a coloured probe to match either the motion direction or orientation of stimuli with the same colour in the memory array. This allowed us to examine the effects of an attention-demanding task performed during maintenance on precision of recall on the concurrent working memory task. Systematic increase in attention load during maintenance resulted in a significant decrease in overall working memory performance. Changes in overall performance were specifically accompanied by an increase in feature misbinding errors: erroneous reporting of nontarget motion or orientation. Thus in trials where attention resources were taxed, participants were more likely to respond with nontarget values rather than simply making random responses. Our findings suggest that resources used during attention-demanding visual or auditory tasks also contribute to maintaining feature-bound representations in visual working memory-but not necessarily other aspects of working memory.
Van Rompaey, Vincent; Maes, Leen K.; Stins, John F.; Van de Heyning, Paul H.
2016-01-01
Background: Vestibular patients occasionally report aggravation or triggering of their symptoms by visual stimuli, a phenomenon called visual vestibular mismatch (VVM). These patients experience discomfort, disorientation, dizziness and postural unsteadiness. Objective: First, we aimed to gain better insight into the underlying mechanism of VVM by examining perceptual and postural symptoms. Second, we wanted to investigate whether roll-motion is necessary to evoke these symptoms or whether a complex but stationary visual pattern provokes them equally. Methods: Nine VVM patients and a matched healthy control group were exposed to a stationary stimulus as well as an optokinetic stimulus rotating around the naso-occipital axis for a prolonged period. Subjective visual vertical (SVV) measurements, posturography and relevant questionnaires were assessed. Results: No significant differences between the groups were found for SVV measurements. Patients always swayed more and reported more symptoms than healthy controls. Prolonged exposure to roll-motion caused an increase in postural sway and symptoms in both patients and controls. However, only VVM patients reported significantly more symptoms after prolonged exposure to the optokinetic stimulus than after exposure to a stationary stimulus. Conclusions: VVM patients differ from healthy controls in postural and subjective symptoms, and motion is a crucial factor in provoking these symptoms. A possible explanation is a central visual-vestibular integration deficit, which has implications for diagnostics and clinical rehabilitation. Future research should focus on the underlying central mechanism of VVM and the effectiveness of optokinetic stimulation in resolving it.
Adaptability and specificity of inhibition processes in distractor-induced blindness.
Winther, Gesche N; Niedeggen, Michael
2017-12-01
In a rapid serial visual presentation task, inhibition processes cumulatively impair processing of a target possessing distractor properties. This phenomenon, known as distractor-induced blindness, has thus far only been elicited using dynamic visual features, such as motion and orientation changes. In three ERP experiments, we used a visual object feature, color, to test the adaptability and specificity of the effect. In Experiment I, participants responded to a color change (target) in the periphery whose onset was signaled by a central cue. Presentation of irrelevant color changes prior to the cue (distractors) led to reduced target detection, accompanied by a frontal ERP negativity that increased with the number of distractors, similar to the effects previously found for dynamic targets. This suggests that distractor-induced blindness is adaptable to color features. In Experiment II, the target consisted of coherent motion contrasting with the color distractors. Correlates of distractor-induced blindness were found neither in the behavioral nor in the ERP data, indicating a feature specificity of the process. Experiment III confirmed the strict distinction between congruent and incongruent distractors: a single color distractor was embedded in a stream of motion distractors, with the target consisting of coherent motion. While behavioral performance was affected by the distractors, the color distractor did not elicit a frontal negativity. The experiments show that distractor-induced blindness can also be triggered by visual stimuli predominantly processed in the ventral stream. The strict specificity of the central inhibition process also applies to these stimulus features.
Frid, Emma; Bresin, Roberto; Alborno, Paolo; Elblaus, Ludvig
2016-01-01
In this paper we present three studies focusing on the effect of different sound models in interactive sonification of bodily movement. We hypothesized that a sound model characterized by continuous smooth sounds would be associated with different movement characteristics than a model characterized by abrupt variation in amplitude, and that these associations could be reflected in spontaneous movement characteristics. Three subsequent studies were conducted to investigate the relationship between properties of bodily movement and sound: (1) a motion capture experiment involving interactive sonification of a group of children spontaneously moving in a room, (2) an experiment involving perceptual ratings of sonified movement data and (3) an experiment involving matching between sonified movements and their visualizations in the form of abstract drawings. In (1) we used a system consisting of 17 IR cameras tracking passive reflective markers. The head positions in the horizontal plane of 3–4 children were simultaneously tracked and sonified, producing 3–4 sound sources spatially displayed through an 8-channel loudspeaker system. We analyzed children's spontaneous movement in terms of energy-, smoothness- and directness-index. Despite large inter-participant variability and group-specific effects caused by interaction among children when engaging in the spontaneous movement task, we found a small but significant effect of sound model. Results from (2) indicate that different sound models can be rated differently on a set of motion-related perceptual scales (e.g., expressivity and fluidity). Also, results imply that audio-only stimuli can evoke stronger perceived properties of movement (e.g., energetic, impulsive) than stimuli involving both audio and video representations. Findings in (3) suggest that sounds portraying bodily movement can be represented using abstract drawings in a meaningful way.
We argue that the results from these studies support the existence of a cross-modal mapping of body motion qualities from bodily movement to sounds. Sound can be translated and understood from bodily motion, conveyed through sound visualizations in the shape of drawings and translated back from sound visualizations to audio. The work underlines the potential of using interactive sonification to communicate high-level features of human movement data.
Sysoeva, Olga V.; Galuta, Ilia A.; Davletshina, Maria S.; Orekhova, Elena V.; Stroganova, Tatiana A.
2017-01-01
Excitation/Inhibition (E/I) imbalance in neural networks is now considered among the core neural underpinnings of autism psychopathology. In motion perception, at least two phenomena critically depend on the E/I balance in visual cortex: spatial suppression (SS) and spatial facilitation (SF), corresponding to impoverished or improved motion perception with increasing stimulus size, respectively. While SS is dominant at high contrast, SF is evident for low-contrast stimuli, due to the prevalence of inhibitory contextual modulations in the former case and excitatory ones in the latter. Only one previous study (Foss-Feig et al., 2013) investigated SS and SF in Autism Spectrum Disorder (ASD). Our study aimed to replicate the previous findings and to explore the putative contribution of deficient inhibitory influences to an enhanced SF index in ASD, a cornerstone of the interpretation proposed by Foss-Feig et al. (2013). SS and SF were examined in 40 boys with ASD spanning a broad range of intellectual abilities (63 < IQ < 127) and in 44 typically developing (TD) boys, aged 6–15 years. The stimuli, of small (1°) and large (12°) radius, were presented under high (100%) and low (1%) contrast conditions. The Social Responsiveness Scale and Sensory Profile Questionnaire were used to assess autism severity and sensory processing abnormalities. We found that the SS index was atypically reduced, while the SF index was abnormally enhanced, in children with ASD. The presence of abnormally enhanced SF in children with ASD was the only consistent finding between our study and that of Foss-Feig et al. While the SS and SF indexes were strongly interrelated in TD participants, this correlation was absent in their peers with ASD. In addition, the SF index, but not the SS index, correlated with the severity of autism and with poor registration abilities. The pattern of results is partially consistent with the idea of hypofunctional inhibitory transmission in visual areas in ASD.
Nonetheless, the absence of correlation between the SF and SS indexes, paired with a strong direct link between abnormally enhanced SF and autism symptoms in our ASD sample, emphasizes the role of enhanced excitatory influences by themselves in the abnormalities of low-level visual phenomena found in ASD.
Perceived visual speed constrained by image segmentation
NASA Technical Reports Server (NTRS)
Verghese, P.; Stone, L. S.
1996-01-01
Little is known about how or where the visual system parses the visual scene into objects or surfaces. However, it is generally assumed that the segmentation and grouping of pieces of the image into discrete entities is due to 'later' processing stages, after the 'early' processing of the visual image by local mechanisms selective for attributes such as colour, orientation, depth, and motion. Speed perception is also thought to be mediated by early mechanisms tuned for speed. Here we show that manipulating the way in which an image is parsed changes the way in which local speed information is processed. Manipulations that cause multiple stimuli to appear as parts of a single patch degrade speed discrimination, whereas manipulations that perceptually divide a single large stimulus into parts improve discrimination. These results indicate that processes as early as speed perception may be constrained by the parsing of the visual image into discrete entities.
Kooiker, M J G; Pel, J J M; van der Steen, J
2014-06-01
Children with visual impairments are highly heterogeneous in the extent of their visual impairment and in its developmental etiology. The aim of the present study was to investigate a possible correlation between the prevalence of clinical risk factors for visual processing impairments and characteristics of viewing behavior. We tested 149 children with visual information processing impairments (90 boys, 59 girls; mean age (SD)=7.3 (3.3) years) and 127 children without visual impairments (63 boys and 64 girls; mean age (SD)=7.9 (2.8) years). Visual processing impairments were classified based on the time it took to complete orienting responses to various visual stimuli (form, contrast, motion detection, motion coherence, color and a cartoon). Within the risk group, children were divided into a fast, medium or slow group based on response times to a highly salient stimulus. The relationship between group-specific response times and clinical risk factors was assessed. The fast-responding children in the risk group were significantly slower than children in the control group. Within the risk group, the prevalence of cerebral visual impairment, brain damage and intellectual disabilities was significantly higher in slow-responding children than in faster-responding children. The presence of nystagmus, perceptual dysfunctions, mean visual acuity and mean age did not differ significantly between the subgroups. Orienting responses are related to risk factors for visual processing impairments known to be prevalent in visual rehabilitation practice. The proposed method may contribute to assessing the effectiveness of visual information processing in children.
Reversed stereo depth and motion direction with anti-correlated stimuli.
Read, J C; Eagle, R A
2000-01-01
We used anti-correlated stimuli to compare the correspondence problem in stereo and motion. Subjects performed a two-interval forced-choice disparity/motion direction discrimination task for different displacements. For anti-correlated 1D band-pass noise, we found weak reversed depth and motion. With 2D anti-correlated stimuli, stereo performance was impaired, but the perception of reversed motion was enhanced. We can explain the main features of our data in terms of channels tuned to different spatial frequencies and orientations. We suggest that a key difference between the solution of the correspondence problem by the motion and stereo systems concerns the integration of information at different orientations.
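The "reversed" percepts reported above reflect a sign flip in the underlying match: with anti-correlated stimuli, the best correspondence between the two eyes' images (or two motion frames) is a negative correlation at the true displacement. A toy illustration with 1D noise follows; it is a sketch of the correlation idea only, not the authors' channel model.

```python
import numpy as np

rng = np.random.default_rng(1)

signal = rng.normal(size=200)       # one "frame" (or one eye's image)
shift = 7                           # the true displacement
shifted = np.roll(signal, shift)    # correlated partner
anti = -shifted                     # contrast-inverted (anti-correlated)

def best_match(a, b, max_lag=20):
    """Return the lag with the largest |correlation| between a and b,
    plus the sign: +1 for a matched, -1 for a reversed correspondence."""
    lags = list(range(-max_lag, max_lag + 1))
    corrs = [np.dot(a, np.roll(b, -lag)) for lag in lags]
    i = int(np.argmax(np.abs(corrs)))
    return lags[i], np.sign(corrs[i])

print(best_match(signal, shifted))  # true shift with a positive sign
print(best_match(signal, anti))     # same shift, but negative sign
```

A correlation-based matcher thus still recovers the displacement from anti-correlated input, but with inverted sign, which is the computational counterpart of the weak reversed depth and motion percepts described in the abstract.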
Dynamics of the functional link between area MT LFPs and motion detection
Smith, Jackson E. T.; Beliveau, Vincent; Schoen, Alan; Remz, Jordana; Zhan, Chang'an A.
2015-01-01
The evolution of a visually guided perceptual decision results from multiple neural processes, and recent work suggests that signals with different neural origins are reflected in separate frequency bands of the cortical local field potential (LFP). Spike activity and LFPs in the middle temporal area (MT) have a functional link with the perception of motion stimuli (referred to as neural-behavioral correlation). To cast light on the different neural origins that underlie this functional link, we compared the temporal dynamics of the neural-behavioral correlations of MT spikes and LFPs. Wide-band activity was simultaneously recorded from two locations in MT while monkeys performed a threshold, two-stimulus, motion-pulse detection task. Shortly after the motion pulse occurred, we found that high-gamma (100–200 Hz) LFPs had a fast, positive correlation with detection performance that was similar to that of the spike response. Beta (10–30 Hz) LFPs were negatively correlated with detection performance, but their dynamics were much slower, peaked late, and did not depend on stimulus configuration or reaction time. A late change in the correlation of all LFPs across the two recording electrodes suggests that a common input arrived at both MT locations prior to the behavioral response. Our results support a framework in which early high-gamma LFPs likely reflect fast, bottom-up sensory processing that is causally linked to perception of the motion pulse. In comparison, late-arriving beta and high-gamma LFPs likely reflect slower, top-down sources of neural-behavioral correlation that originate after the perception of the motion pulse.
Effects of Vertical Direction and Aperture Size on the Perception of Visual Acceleration.
Mueller, Alexandra S; González, Esther G; McNorgan, Chris; Steinbach, Martin J; Timney, Brian
2016-02-06
It is not well understood whether the distance over which moving stimuli are visible affects our sensitivity to the presence of acceleration or our ability to track such stimuli. It is also uncertain whether our experience with gravity creates anisotropies in how we detect vertical acceleration and deceleration. To address these questions, we varied the vertical extent of the aperture through which we presented vertically accelerating and decelerating random dot arrays. We hypothesized that observers would better detect and pursue accelerating and decelerating stimuli that extend over larger than smaller distances. In Experiment 1, we tested the effects of vertical direction and aperture size on acceleration and deceleration detection accuracy. Results indicated that detection is better for downward motion and for large apertures, but there is no difference between vertical acceleration and deceleration detection. A control experiment revealed that our manipulation of vertical aperture size affects the ability to track vertical motion. Smooth pursuit is better (i.e., with higher peak velocities) for large apertures than for small apertures. Our findings suggest that the ability to detect vertical acceleration and deceleration varies as a function of the direction and vertical extent over which an observer can track the moving stimulus.
Alert Response to Motion Onset in the Retina
Chen, Eric Y.; Marre, Olivier; Fisher, Clark; Schwartz, Greg; Levy, Joshua; da Silveira, Rava Azeredo
2013-01-01
Previous studies have shown that motion onset is very effective at capturing attention and is more salient than smooth motion. Here, we find that this salience ranking is present already in the firing rate of retinal ganglion cells. By stimulating the retina with a bar that appears, stays still, and then starts moving, we demonstrate that a subset of salamander retinal ganglion cells, fast OFF cells, responds significantly more strongly to motion onset than to smooth motion. We refer to this phenomenon as an alert response to motion onset. We develop a computational model that predicts the time-varying firing rate of ganglion cells responding to the appearance, onset, and smooth motion of a bar. This model, termed the adaptive cascade model, consists of a ganglion cell that receives input from a layer of bipolar cells, represented by individual rectified subunits. Additionally, both the bipolar and ganglion cells have separate contrast gain control mechanisms. This model captured the responses to our different motion stimuli over a wide range of contrasts, speeds, and locations. The alert response to motion onset, together with its computational model, introduces a new mechanism of sophisticated motion processing that occurs early in the visual system.
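The architecture described above (rectified bipolar subunits and separate contrast gain control at the bipolar and ganglion stages) can be sketched schematically. The structure follows the abstract's description, but the gain-control form and every constant below are illustrative placeholders, not the fitted adaptive cascade model.

```python
import numpy as np

def rectify(x):
    """Half-wave rectification, as in the bipolar subunits."""
    return np.maximum(x, 0.0)

def gain_control(drive, state, tau=0.9, k=0.5):
    """Toy divisive gain control: a slow state tracks recent drive
    and divisively scales the output (illustrative form only)."""
    state = tau * state + (1 - tau) * drive
    return drive / (1.0 + k * state), state

def ganglion_response(stimulus, weights):
    """Two-stage cascade: rectified bipolar subunits with their own
    gain control feed a ganglion cell with a separate gain control."""
    g_bip = np.zeros(weights.shape[0])   # bipolar gain states
    g_gc = 0.0                           # ganglion gain state
    rates = []
    for frame in stimulus:               # frame: stimulus vector over space
        bip = rectify(weights @ frame)   # bipolar subunit outputs
        bip, g_bip = gain_control(bip, g_bip)
        pooled = bip.sum()               # ganglion cell pools the subunits
        rate, g_gc = gain_control(pooled, g_gc)
        rates.append(rate)
    return np.array(rates)

# A sustained bar: under gain control the response adapts (declines).
weights = 0.1 * np.ones((4, 8))          # 4 subunits over 8 positions
stimulus = np.ones((10, 8))              # 10 frames of a constant stimulus
rates = ganglion_response(stimulus, weights)
```

In this kind of cascade, a stimulus that suddenly starts moving re-excites subunits whose gain had relaxed during the stationary period, which is the qualitative route by which such models produce a stronger response to motion onset than to ongoing smooth motion.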
Training in Contrast Detection Improves Motion Perception of Sinewave Gratings in Amblyopia
Hou, Fang; Huang, Chang-bing; Tao, Liming; Feng, Lixia; Zhou, Yifeng; Lu, Zhong-Lin
2011-01-01
Purpose. One critical concern about using perceptual learning to treat amblyopia is whether training with one particular stimulus and task generalizes to other stimuli and tasks. In the spatial domain, it has been found that the bandwidth of contrast sensitivity improvement is much broader in amblyopes than in normals. Because previous studies suggested the local motion deficits in amblyopia are explained by the spatial vision deficits, the hypothesis for this study was that training in the spatial domain could benefit motion perception of sinewave gratings. Methods. Nine adult amblyopes (mean age, 22.1 ± 5.6 years) were trained in a contrast detection task in the amblyopic eye for 10 days. Visual acuity, spatial contrast sensitivity functions, and temporal modulation transfer functions (MTF) for sinewave motion detection and discrimination were measured for each eye before and after training. Eight adult amblyopes (mean age, 22.6 ± 6.7 years) served as control subjects. Results. In the amblyopic eye, training improved (1) contrast sensitivity by 6.6 dB (or 113.8%) across spatial frequencies, with a bandwidth of 4.4 octaves; (2) sensitivity of motion detection and discrimination by 3.2 dB (or 44.5%) and 3.7 dB (or 53.1%) across temporal frequencies, with bandwidths of 3.9 and 3.1 octaves, respectively; (3) visual acuity by 3.2 dB (or 44.5%). The fellow eye also showed a small amount of improvement in contrast sensitivities and no significant change in motion perception. Control subjects who received no training demonstrated no obvious improvement in any measure. Conclusions. The results demonstrate substantial plasticity in the amblyopic visual system, and provide additional empirical support for perceptual learning as a potential treatment for amblyopia. PMID:21693615
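The paired dB and percentage figures above follow from the standard sensitivity-ratio definition, dB = 20·log10(post/pre); a short conversion function (ours, for illustration) makes the correspondence explicit:

```python
def db_to_percent(db):
    """Convert a sensitivity change in dB (20 * log10 of the ratio) to percent improvement."""
    return (10 ** (db / 20) - 1) * 100

# Reported values: 6.6 dB -> ~113.8%, 3.2 dB -> ~44.5%, 3.7 dB -> ~53.1%
```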
Kennedy, R S; Hettinger, L J; Harm, D L; Ordy, J M; Dunlap, W P
1996-01-01
Vection (V) refers to the compelling visual illusion of self-motion experienced by stationary individuals when viewing moving visual surrounds. The phenomenon is of theoretical interest because of its relevance for understanding the neural basis of ordinary self-motion perception, and of practical importance because it is the experience that makes simulation, virtual reality displays, and entertainment devices more vicarious. This experiment was performed to address whether an optokinetically induced vection illusion exhibits monotonic and stable psychometric properties and whether individuals differ reliably in these (V) perceptions. Subjects were exposed to varying velocities of the circular vection (CV) display in an optokinetic (OKN) drum 2 meters in diameter in 5 one-hour daily sessions extending over a 1-week period. For grouped data, psychophysical scalings of velocity estimates showed that exponents in a Stevens'-type power function were essentially linear (slope = 0.95) and largely stable over sessions. Latencies were slightly longer for the slowest and fastest induction stimuli, and average latency increased over sessions, implying adaptation effects over the time course of the study. Test-retest reliabilities for individual slope and intercept measures were moderately strong (r = 0.45) and showed no evidence of superdiagonal form, implying stability of the individual circular vection (CV) sensitivities. Because the individual CV scores were stable, reliabilities were improved by averaging 4 sessions, providing a stronger retest reliability (r = 0.80). Individual latency responses were highly reliable (r = 0.80). Mean CV latency and motion sickness symptoms were greater in males than in females. These individual differences in CV could be predictive of other outcomes, such as susceptibility to disorientation or motion sickness, and for CNS localization of visual-vestibular interactions in the experience of self-motion.
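The Stevens'-type exponent reported for the grouped data is the slope of a log-log regression of perceived on physical velocity, since Psi = k·V^n implies log(Psi) = log(k) + n·log(V). A minimal sketch of the estimation (the velocities and magnitudes below are simulated for illustration, with the exponent set to the reported 0.95):

```python
import numpy as np

# Hypothetical drum velocities (deg/s) and noise-free simulated magnitude
# estimates following a power law with exponent 0.95.
drum_velocity = np.array([10.0, 20.0, 40.0, 80.0])
perceived = 1.5 * drum_velocity ** 0.95

# Degree-1 least-squares fit in log-log coordinates recovers the exponent.
slope, intercept = np.polyfit(np.log(drum_velocity), np.log(perceived), 1)
# slope ~ 0.95
```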
Crossmodal attention switching: auditory dominance in temporal discrimination tasks.
Lukas, Sarah; Philipp, Andrea M; Koch, Iring
2014-11-01
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are processed with differing effectiveness depending on the task dimension: processing of visual stimuli is favored in the dimension of space, whereas processing of auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for the visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.
Honeybees in a virtual reality environment learn unique combinations of colour and shape.
Rusch, Claire; Roth, Eatai; Vinauger, Clément; Riffell, Jeffrey A
2017-10-01
Honeybees are well-known models for the study of visual learning and memory. Whereas most of our knowledge of learned responses comes from experiments using free-flying bees, a tethered preparation would allow fine-scale control of the visual stimuli as well as accurate characterization of the learned responses. Unfortunately, conditioning procedures using visual stimuli in tethered bees have been limited in their efficacy. In this study, using a novel virtual reality environment and a differential training protocol in tethered walking bees, we show that the majority of honeybees learn visual stimuli, and need only six paired training trials to learn the stimulus. We found that bees readily learn visual stimuli that differ in both shape and colour. However, bees learn certain components over others (colour versus shape), and visual stimuli are learned in a non-additive manner with the interaction of specific colour and shape combinations being crucial for learned responses. To better understand which components of the visual stimuli the bees learned, the shape-colour association of the stimuli was reversed either during or after training. Results showed that maintaining the visual stimuli in training and testing phases was necessary to elicit visual learning, suggesting that bees learn multiple components of the visual stimuli. Together, our results demonstrate a protocol for visual learning in restrained bees that provides a powerful tool for understanding how components of a visual stimulus elicit learned responses as well as elucidating how visual information is processed in the honeybee brain. © 2017. Published by The Company of Biologists Ltd.
NASA Technical Reports Server (NTRS)
Haines, R. F.; Fitzgerald, J. W.; Rositano, S. A. (Inventor)
1976-01-01
An automated visual examination apparatus for measuring visual sensitivity and mapping blind spot location. The apparatus includes a projection system for displaying to a patient a series of visual stimuli, a response switch enabling the patient to indicate his reaction to the stimuli, and a recording system responsive to both the visual stimuli per se and the patient's response. The recording system thereby provides a correlated permanent record of both stimuli and response from which a substantive and readily apparent visual evaluation can be made.
Neural Mechanisms of Cortical Motion Computation Based on a Neuromorphic Sensory System
Abdul-Kreem, Luma Issa; Neumann, Heiko
2015-01-01
The visual cortex analyzes motion information along hierarchically arranged visual areas that interact through bidirectional interconnections. This work suggests a bio-inspired visual model focusing on the interactions of the cortical areas, in which new mechanisms of feedforward and feedback processing are introduced. The model uses a neuromorphic vision sensor (silicon retina) that simulates the spike-generation functionality of the biological retina. Our model takes into account two main model visual areas, namely V1 and MT, with different feature selectivities. The initial motion is estimated in model area V1 using spatiotemporal filters to locally detect the direction of motion. Here, we adapt the filtering scheme originally suggested by Adelson and Bergen to make it consistent with the spike representation of the DVS. The responses of area V1 are weighted and pooled by area MT cells which are selective to different velocities, i.e. direction and speed. Such feature selectivity is here derived from compositions of activities in the spatio-temporal domain and integration over larger space-time regions (receptive fields). In order to account for the bidirectional coupling of cortical areas, we match properties of the feature selectivity in both areas for feedback processing. For such linkage we integrate the responses over different speeds along a particular preferred direction. Normalization of activities is carried out over the spatial as well as the feature domains to balance the activities of individual neurons in model areas V1 and MT. Our model was tested using different stimuli that moved in different directions. The results reveal that the error margin between the estimated motion and the synthetic ground truth is decreased in area MT compared with the initial estimation in area V1. In addition, the modulated V1 cell activations show an enhancement of the initial motion estimation that is steered by feedback signals from MT cells. PMID:26554589
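A minimal numerical sketch of the Adelson and Bergen style motion-energy stage that model area V1 builds on (filter shapes, grid sizes, and the stimulus here are illustrative, and the event-based DVS adaptation described in the paper is omitted): squared outputs of a quadrature pair of space-time oriented filters are summed per direction, and the difference between opposite directions signals the direction of motion.

```python
import numpy as np

T, X = 16, 16
t = np.arange(T)[:, None]   # time axis (rows)
x = np.arange(X)[None, :]   # space axis (columns)

def st_filter(direction, phase):
    # Space-time oriented sinusoid; direction=+1 prefers rightward drift,
    # direction=-1 prefers leftward drift.
    return np.sin(2 * np.pi * (x / X - direction * t / T) + phase)

def energy(stim, direction):
    # Quadrature pair (0 and 90 degree phases): squaring and summing gives a
    # phase-invariant "motion energy" for the given direction.
    even = np.sum(stim * st_filter(direction, 0.0))
    odd = np.sum(stim * st_filter(direction, np.pi / 2))
    return even**2 + odd**2

rightward = np.sin(2 * np.pi * (x / X - t / T))       # grating drifting right
opponent = energy(rightward, +1) - energy(rightward, -1)
# opponent > 0 indicates rightward motion
```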
Mismatched summation mechanisms in older adults for the perception of small moving stimuli.
McDougall, Thomas J; Nguyen, Bao N; McKendrick, Allison M; Badcock, David R
2018-01-01
Previous studies have found evidence for reduced cortical inhibition in aging visual cortex. Reduced inhibition could plausibly increase the spatial area of excitation in receptive fields of older observers, as weaker inhibitory processes would allow the excitatory receptive field to dominate and be psychophysically measurable over larger areas. Here, we investigated aging effects on spatial summation of motion direction using the Battenberg summation method, which aims to control the influence of locally generated internal noise changes by holding overall display size constant. This method produces more accurate estimates of summation area than conventional methods that simply increase overall stimulus dimensions. Battenberg stimuli have a checkerboard arrangement in which check size (luminance-modulated drifting gratings alternating with mean-luminance areas), but not display size, is varied; performance is compared with that for a full-field stimulus to provide a measure of summation. Motion direction discrimination thresholds, with contrast as the dependent variable, were measured in 14 younger (24-34 years) and 14 older (62-76 years) adults. Older observers were less sensitive for all check sizes, but the relative sensitivity across sizes also differed between groups. In the older adults, the full-field stimulus offered smaller performance improvements than in younger adults, specifically for the small-checked Battenberg stimuli. This suggests that aging affects short-range summation mechanisms, potentially underpinned by larger summation areas for the perception of small moving stimuli. Copyright © 2017 Elsevier Ltd. All rights reserved.
Lemasson, B H; Anderson, J J; Goodwin, R A
2009-12-21
We explore mechanisms associated with collective animal motion by drawing on the neurobiological bases of sensory information processing and decision-making. The model uses simplified retinal processes to translate neighbor movement patterns into information through spatial signal integration and threshold responses. The structure provides a mechanism by which individuals can vary their sets of influential neighbors, a measure of an individual's sensory load. Sensory loads are correlated with group order and density, and we discuss their adaptive values in an ecological context. The model also provides a mechanism by which group members can identify, and rapidly respond to, novel visual stimuli.
Modulation of visually evoked movement responses in moving virtual environments.
Reed-Jones, Rebecca J; Vallis, Lori Ann
2009-01-01
Virtual-reality technology is being increasingly used to understand how humans perceive and act in the moving world around them. What is currently not clear is how virtual reality technology is perceived by human participants and what virtual scenes are effective in evoking movement responses to visual stimuli. We investigated the effect of virtual-scene context on human responses to a virtual visual perturbation. We hypothesised that exposure to a natural scene that matched the visual expectancies of the natural world would create a perceptual set towards presence, and thus visual guidance of body movement in a subsequently presented virtual scene. Results supported this hypothesis; responses to a virtual visual perturbation presented in an ambiguous virtual scene were increased when participants first viewed a scene that consisted of natural landmarks which provided 'real-world' visual motion cues. Further research in this area will provide a basis of knowledge for the effective use of this technology in the study of human movement responses.
Prusky, Glen T; Silver, Byron D; Tschetter, Wayne W; Alam, Nazia M; Douglas, Robert M
2008-09-24
Developmentally regulated plasticity of vision has generally been associated with "sensitive" or "critical" periods in juvenile life, wherein visual deprivation leads to loss of visual function. Here we report an enabling form of visual plasticity that commences in infant rats from eye opening, in which daily threshold testing of optokinetic tracking, amid otherwise normal visual experience, stimulates enduring, visual cortex-dependent enhancement (>60%) of the spatial frequency threshold for tracking. The perceptual ability to use spatial frequency in discriminating between moving visual stimuli is also improved by the testing experience. The capacity for inducing enhancement is transitory and effectively limited to infancy; however, enhanced responses are not consolidated and maintained unless in-kind testing experience continues uninterrupted into juvenile life. The data show that selective visual experience from infancy can alone enable visual function. They also indicate that plasticity associated with visual deprivation may not be the only cause of developmental visual dysfunction, because we found that experientially inducing enhancement in late infancy, without subsequent reinforcement of the experience in early juvenile life, can lead to enduring loss of function.
Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong
2013-01-01
This article aims to investigate whether auditory stimuli in the horizontal plane, particularly originating from behind the participant, affect audiovisual integration by using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli that include both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in the front or back of the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160-200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirmed that audiovisual integration was also elicited, even though auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than from either side. PMID:23799097
NASA Technical Reports Server (NTRS)
Haines, R. F.; Fitzgerald, J. W.; Rositano, S. A. (Inventor)
1973-01-01
An automated visual examination apparatus for measuring visual sensitivity and mapping blind spot location is described. The apparatus includes a projection system for displaying to a patient a series of visual stimuli, a response switch enabling him to indicate his reaction to the stimuli, and a recording system responsive to both the visual stimuli per se and the patient's response. The recording system provides a correlated permanent record of both stimuli and response from which a substantive and readily apparent visual evaluation can be made.
Wöllner, Clemens; Deconinck, Frederik J A
2013-05-01
Gender recognition in point-light displays was investigated with regard to body morphology cues and motion cues of human motion performed with different levels of technical skill. Gestures of male and female orchestral conductors were recorded with a motion capture system while they conducted excerpts from a Mendelssohn string symphony to musicians. Point-light displays of conductors were presented to observers under the following conditions: visual-only, auditory-only, audiovisual, and two non-conducting conditions (walking and static images). Observers distinguished between male and female conductors in gait and static images, but not in visual-only and auditory-only conducting conditions. Across all conductors, gender recognition for audiovisual stimuli was better than chance, yet significantly less reliable than for gait. Separate analyses for two groups of conductors indicated an expertise effect in that novice conductors' gender was perceived above chance level for visual-only and audiovisual conducting, while skilled conducting gestures of experts did not afford gender-specific cues. In these conditions, participants may have ignored the body morphology cues that led to correct judgments for static images. Results point to a response bias such that conductors were more often judged to be male. Thus judgment accuracy depended both on the conductors' level of expertise as well as on the observers' concepts, suggesting that perceivable differences between men and women may diminish for highly trained movements of experienced individuals. Copyright © 2013 Elsevier B.V. All rights reserved.
A sLORETA study for gaze-independent BCI speller.
Xingwei An; Jinwen Wei; Shuang Liu; Dong Ming
2017-07-01
EEG-based BCI (brain-computer interface) spellers, especially gaze-independent BCI spellers, have become a hot topic in recent years. They provide a direct spelling device, operated by non-muscular means, for people with severe motor impairments and limited gaze movement. In such BCI speller applications, the brain must engage in both stimulus-driven and stimulus-related attention during rapidly presented BCI paradigms. Few researchers have studied the mechanism of the brain's response to such rapidly presented BCI applications. In this study, we compared the distribution of brain activation in visual, auditory, and audio-visual combined stimulus paradigms using sLORETA (standardized low-resolution brain electromagnetic tomography). Between-group comparisons showed the importance of visual and auditory stimuli in the audio-visual combined paradigm. Both contribute to the activation of brain regions, with visual stimuli being the predominant driver. Brain regions related to visual stimuli were mainly located in the parietal and occipital lobes, whereas responses in the frontal-temporal lobes might be caused by auditory stimuli. These regions played an important role in audio-visual bimodal paradigms. These new findings are important for future studies of ERP spellers as well as of the mechanism of rapidly presented stimuli.
Auditory and visual spatial impression: Recent studies of three auditoria
NASA Astrophysics Data System (ADS)
Nguyen, Andy; Cabrera, Densil
2004-10-01
Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression-thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.
Gregori Grgič, Regina; Calore, Enrico; de'Sperati, Claudio
2016-01-01
Whereas overt visuospatial attention is customarily measured with eye tracking, covert attention is assessed by various methods. Here we exploited Steady-State Visual Evoked Potentials (SSVEPs) - the oscillatory responses of the visual cortex to incoming flickering stimuli - to record the movements of covert visuospatial attention in a way operatively similar to eye tracking (attention tracking), which allowed us to compare motion observation and motion extrapolation with and without eye movements. Observers fixated a central dot and covertly tracked a target oscillating horizontally and sinusoidally. In the background, the left and the right halves of the screen flickered at two different frequencies, generating two SSVEPs in occipital regions whose size varied reciprocally as observers attended to the moving target. The two signals were combined into a single quantity that was modulated at the target frequency in a quasi-sinusoidal way, often clearly visible in single trials. The modulation continued almost unchanged when the target was switched off and observers mentally extrapolated its motion in imagery, and also when observers pointed their finger at the moving target during covert tracking, or imagined doing so. The amplitude of modulation during covert tracking was ∼25-30% of that measured when observers followed the target with their eyes. We used 4 electrodes in parieto-occipital areas, but similar results were achieved with a single electrode in Oz. In a second experiment we tested ramp and step motion. During overt tracking, SSVEPs were remarkably accurate, showing both saccadic-like and smooth pursuit-like modulations of cortical responsiveness, although during covert tracking the modulation deteriorated. Covert tracking was better with sinusoidal motion than ramp motion, and better with moving targets than stationary ones. The clear modulation of cortical responsiveness recorded during both overt and covert tracking, identical for motion observation and motion extrapolation, suggests that covert attention movements should be included in enactive theories of mental imagery. Copyright © 2015 Elsevier Ltd. All rights reserved.
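The frequency-tagging logic of this paradigm can be sketched numerically: the amplitude of the EEG spectrum at each hemifield's flicker frequency is extracted and the two values are combined into one left/right quantity. The sampling rate, tag frequencies, and simulated trace below are hypothetical, not the study's actual parameters.

```python
import numpy as np

fs = 250.0                      # sampling rate, Hz (hypothetical)
f_left, f_right = 12.0, 15.0    # flicker tags for the two hemifields (hypothetical)
t = np.arange(0, 2.0, 1 / fs)   # 2 s of signal

# Simulated occipital trace with a stronger 12 Hz tag, as if attention were
# directed toward the left half of the screen.
eeg = 2.0 * np.sin(2 * np.pi * f_left * t) + 1.0 * np.sin(2 * np.pi * f_right * t)

# Amplitude spectrum; dividing by N gives half the sine amplitude at exact bins.
spec = np.abs(np.fft.rfft(eeg)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
a_left = spec[np.argmin(np.abs(freqs - f_left))]
a_right = spec[np.argmin(np.abs(freqs - f_right))]

# Single left/right quantity: positive when the left-field tag dominates.
index = (a_left - a_right) / (a_left + a_right)
```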
Lin, Zhicheng
2013-11-01
Visual attention can be deployed to stimuli based on our willful, top-down goal (endogenous attention) or on their intrinsic saliency against the background (exogenous attention). Flexibility is thought to be a hallmark of endogenous attention, whereas decades of research show that exogenous attention is attracted to the retinotopic locations of the salient stimuli. However, to the extent that salient stimuli in the natural environment usually form specific spatial relations with the surrounding context and are dynamic, exogenous attention, to be adaptive, should embrace these structural regularities. Here we test a non-retinotopic, object-centered mechanism in exogenous attention, in which exogenous attention is dynamically attracted to a relative, object-centered location. Using a moving frame configuration, we presented two frames in succession, forming either apparent translational motion or mirror reflection, with a completely uninformative, transient cue presented at one of the item locations in the first frame. Even though the cue is presented in a spatially separate frame, in both translation and mirror reflection, behavioral performance in visual search is enhanced when the target in the second frame appears at the same relative location as the cue, compared with other locations. These results provide unambiguous evidence for non-retinotopic exogenous attention and further reveal an object-centered mechanism supporting flexible exogenous attention. Moreover, attentional generalization across mirror reflection may constitute an attentional correlate of perceptual generalization across lateral mirror images, supporting an adaptive, functional account of mirror-image confusion. Copyright © 2013 Elsevier B.V. All rights reserved.
Near-field visual acuity of pigeons: effects of head location and stimulus luminance.
Hodos, W; Leibowitz, R W; Bonbright, J C
1976-03-01
Two pigeons were trained to discriminate a grating stimulus from a blank stimulus of equivalent luminance in a three-key chamber. The stimuli and blanks were presented behind a transparent center key. The procedure was a conditional discrimination in which pecks on the left key were reinforced if the blank had been present behind the center key and pecks on the right key were reinforced if the grating had been present behind the center key. The spatial frequency of the stimuli was varied in each session from four to 29.5 lines per millimeter in accordance with a variation of the method of constant stimuli. The number of lines per millimeter that the subjects could discriminate at threshold was determined from psychometric functions. Data were collected at five values of stimulus luminance ranging from -0.07 to 3.29 log cd/m2. The distance from the stimulus to the anterior nodal point of the eye, which was determined from measurements taken from high-speed motion-picture photographs of three additional pigeons and published intraocular measurements, was 62.0 mm. This distance and the grating detection thresholds were used to calculate the visual acuity of the birds at each level of luminance. Acuity improved with increasing luminance to a peak value of 0.52, which corresponds to a visual angle of 1.92 min, at a luminance of 2.33 log cd/m2. Further increase in luminance produced a small decline in acuity.
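The acuity computation described here is straightforward: the minimum angle of resolution (MAR) is the angle subtended by one grating line at the 62.0 mm nodal distance, and acuity is its reciprocal. In the worked sketch below, the 28.9 lines/mm threshold is back-computed by us for illustration so that the result matches the reported peak values; it is not a figure from the abstract.

```python
import math

NODAL_DISTANCE_MM = 62.0   # stimulus to anterior nodal point, from the abstract

def acuity_from_threshold(lines_per_mm):
    # Width of one grating line, then the angle (in arcmin) it subtends
    # at the nodal distance; acuity is the reciprocal of that MAR.
    line_width_mm = 1.0 / lines_per_mm
    mar_min = math.degrees(math.atan(line_width_mm / NODAL_DISTANCE_MM)) * 60.0
    return 1.0 / mar_min, mar_min

acuity, mar = acuity_from_threshold(28.9)   # illustrative threshold value
# acuity ~ 0.52, mar ~ 1.92 min of arc, matching the reported peak
```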
Effects of aging on identifying emotions conveyed by point-light walkers.
Spencer, Justine M Y; Sekuler, Allison B; Bennett, Patrick J; Giese, Martin A; Pilz, Karin S
2016-02-01
The visual system is able to recognize human motion simply from point lights attached to the major joints of an actor. Moreover, it has been shown that younger adults are able to recognize emotions from such dynamic point-light displays. Previous research has suggested that the ability to perceive emotional stimuli changes with age. For example, it has been shown that older adults are impaired in recognizing emotional expressions from static faces. In addition, it has been shown that older adults have difficulties perceiving visual motion, an ability that might help them recognize emotions from point-light displays. In the current study, 4 experiments were completed in which older and younger adults were asked to identify 3 emotions (happy, sad, and angry) displayed by 4 types of point-light walkers: upright and inverted normal walkers, which contained both local motion and global form information; upright scrambled walkers, which contained only local motion information; and upright random-position walkers, which contained only global form information. Overall, emotion discrimination accuracy was lower in older participants compared with younger participants, specifically when identifying sad and angry point-light walkers. In addition, observers in both age groups were able to recognize emotions from all types of point-light walkers, suggesting that both older and younger adults are able to recognize emotions from point-light walkers on the basis of local motion or global form. (c) 2016 APA, all rights reserved.
Brain activity correlates with emotional perception induced by dynamic avatars.
Goldberg, Hagar; Christensen, Andrea; Flash, Tamar; Giese, Martin A; Malach, Rafael
2015-11-15
An accurate judgment of the emotional state of others is a prerequisite for successful social interaction and hence survival. Thus, it is not surprising that we are highly skilled at recognizing the emotions of others. Here we aimed to examine the neuronal correlates of emotion recognition from gait. To this end we created highly controlled dynamic body-movement stimuli based on real human motion-capture data (Roether et al., 2009). These animated avatars displayed gait in four emotional (happy, angry, fearful, and sad) and speed-matched neutral styles. For each emotional gait and its equivalent neutral gait, avatars were displayed at five morphing levels between the two. Subjects underwent fMRI scanning while classifying the emotions and the emotional intensity levels expressed by the avatars. Our results revealed robust brain selectivity to emotional compared to neutral gait stimuli in brain regions which are involved in emotion and biological motion processing, such as the extrastriate body area (EBA), fusiform body area (FBA), superior temporal sulcus (STS), and the amygdala (AMG). Brain activity in the amygdala reflected emotional awareness: for visually identical stimuli it showed a stronger response when the stimulus was perceived as emotional. Notably, in avatars gradually morphed along an emotional expression axis there was a parametric correlation between amygdala activity and emotional intensity. This study extends the mapping of emotional decoding in the human brain to the domain of highly controlled dynamic biological motion. Our results highlight an extensive level of brain processing of emotional information related to body language, which relies mostly on body kinematics. Copyright © 2015. Published by Elsevier Inc.
Nalivaiko, Eugene; Davis, Simon L; Blackmore, Karen L; Vakulin, Andrew; Nesbitt, Keith V
2015-11-01
Evidence from studies of provocative motion indicates that motion sickness is tightly linked to disturbances of thermoregulation. The major aim of the current study was to determine whether provocative visual stimuli (immersion into virtual reality simulating rides on a rollercoaster) affect skin temperature, which reflects thermoregulatory cutaneous responses, and to test whether such stimuli alter cognitive functions. In 26 healthy young volunteers wearing a head-mounted display (Oculus Rift), simulated rides consistently provoked vection and nausea, with a significant difference between the two versions of simulation software (Parrot Coaster and Helix). Basal finger temperature had a bimodal distribution, with a low-temperature group (n=8) having values of 23-29 °C, and a high-temperature group (n=18) having values of 32-36 °C. Effects of cybersickness on finger temperature depended on the basal level of this variable: in subjects from the former group it rose by 3-4 °C, while in most subjects from the latter group it either did not change or transiently fell by 1.5-2 °C. There was no correlation between the magnitude of changes in finger temperature and nausea score at the end of the simulated ride. Provocative visual stimulation caused prolongation of simple reaction time by 20-50 ms; this increase closely correlated with the subjective rating of nausea. Lastly, in subjects who experienced pronounced nausea, heart rate was elevated. We conclude that cybersickness is associated with changes in cutaneous thermoregulatory vascular tone; this further supports the idea of a tight link between motion sickness and thermoregulation. Cybersickness-induced prolongation of reaction time raises obvious concerns regarding the safety of this technology. Copyright © 2015 Elsevier Inc. All rights reserved.
Contribution of color signals to ocular following responses.
Matsuura, Kiyoto; Kawano, Kenji; Inaba, Naoko; Miura, Kenichiro
2016-10-01
Ocular following responses (OFRs) are elicited at ultra-short latencies (< 60 ms) by sudden movements of the visual scene. In this study, we investigated the roles of color signals in OFRs in monkeys. To make physiologically isoluminant sinusoidal color gratings, we estimated the physiologically isoluminant points using OFRs and found that the physiologically isoluminant points were nearly independent of the spatiotemporal frequency of the gratings. We recorded OFRs induced by the motion of physiologically isoluminant color gratings and found that OFRs elicited by the motion of color gratings had different spatiotemporal frequency tuning from those elicited by the motion of luminance gratings. Additionally, OFRs to isoluminant color gratings had smaller peak responses, suggesting that color signals weakly contribute to OFRs compared with luminance signals. OFRs to the motion of stimuli composed of luminance and color signals were also examined. We found that color signals largely contributed to OFRs under low luminance signals regardless of whether color signals moved in the same or opposite direction to luminance signals. These results provide evidence of the multichannel visual computations underlying motor responses. We conclude that, in everyday situations, color information contributes cooperatively with luminance information to the generation of ocular tracking behaviors. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Cheng, Jiao; Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Bei; Wang, Xingyu; Cichocki, Andrzej
2018-02-13
Brain-computer interface (BCI) systems can allow their users to communicate with the external world by recognizing intention directly from their brain activity without the assistance of the peripheral motor nervous system. The P300-speller is one of the most widely used visual BCI applications. In previous studies, a flip stimulus (rotating the background area of the character), which was based on apparent motion, suffered less from refractory effects. However, its performance was not improved significantly. In addition, a presentation paradigm that used a "zooming" action (changing the size of the symbol) has been shown to evoke relatively higher P300 amplitudes and obtain a better BCI performance. To extend this method of stimuli presentation within a BCI and, consequently, to improve BCI performance, we present a new paradigm combining the flip stimulus with a zooming action. This new presentation modality allowed BCI users to focus their attention more easily. We investigated whether such an action could combine the advantages of both types of stimuli presentation to bring a significant improvement in performance compared to the conventional flip stimulus. The experimental results showed that the proposed paradigm could obtain significantly higher classification accuracies and bit rates than the conventional flip paradigm (p<0.01).
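The bit rates compared above are conventionally computed with the Wolpaw information-transfer formula (bits per selection from the number of classes N and accuracy P). The abstract does not state which definition was used, so this sketch assumes the Wolpaw version, with an illustrative 36-symbol speller:

```python
import math

def wolpaw_bits_per_selection(p: float, n: int) -> float:
    """Information transferred per selection for an n-class speller
    with classification accuracy p (Wolpaw et al. definition)."""
    if not 0.0 < p < 1.0:
        raise ValueError("p must be in (0, 1)")
    return (math.log2(n)
            + p * math.log2(p)
            + (1.0 - p) * math.log2((1.0 - p) / (n - 1)))

# Illustrative example: a 36-symbol P300 speller at 90% accuracy
bits = wolpaw_bits_per_selection(0.90, 36)   # ~4.19 bits per selection
```

Multiplying this value by selections per minute gives the bit rate usually reported in P300-speller studies.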
Effects of feature-based attention on the motion aftereffect at remote locations.
Boynton, Geoffrey M; Ciaramitaro, Vivian M; Arman, A Cyrus
2006-09-01
Previous studies have shown that attention to a particular stimulus feature, such as direction of motion or color, enhances neuronal responses to unattended stimuli sharing that feature. We studied this effect psychophysically by measuring the strength of the motion aftereffect (MAE) induced by an unattended stimulus when attention was directed to one of two overlapping fields of moving dots in a different spatial location. When attention was directed to the same direction of motion as the unattended stimulus, the unattended stimulus induced a stronger MAE than when attention was directed to the opposite direction. Also, when the unattended location contained either uncorrelated motion or had no stimulus at all an MAE was induced in the opposite direction to the attended direction of motion. The strength of the MAE was similar regardless of whether subjects attended to the speed or luminance of the attended dots. These results provide further support for a global feature-based mechanism of attention, and show that the effect spreads across all features of an attended object, and to all locations of visual space.
Perception of Social Interactions for Spatially Scrambled Biological Motion
Thurman, Steven M.; Lu, Hongjing
2014-01-01
It is vitally important for humans to detect living creatures in the environment and to analyze their behavior to facilitate action understanding and high-level social inference. The current study employed naturalistic point-light animations to examine the ability of human observers to spontaneously identify and discriminate socially interactive behaviors between two human agents. Specifically, we investigated the importance of global body form, intrinsic joint movements, extrinsic whole-body movements, and critically, the congruency between intrinsic and extrinsic motions. Motion congruency is hypothesized to be particularly important because of the constraint it imposes on naturalistic action due to the inherent causal relationship between limb movements and whole body motion. Using a free response paradigm in Experiment 1, we discovered that many naïve observers (55%) spontaneously attributed animate and/or social traits to spatially-scrambled displays of interpersonal interaction. Total stimulus motion energy was strongly correlated with the likelihood that an observer would attribute animate/social traits, as opposed to physical/mechanical traits, to the scrambled dot stimuli. In Experiment 2, we found that participants could identify interactions between spatially-scrambled displays of human dance as long as congruency was maintained between intrinsic/extrinsic movements. Violating the motion congruency constraint resulted in chance discrimination performance for the spatially-scrambled displays. Finally, Experiment 3 showed that scrambled point-light dancing animations violating this constraint were also rated as significantly less interactive than animations with congruent intrinsic/extrinsic motion. 
These results demonstrate the importance of intrinsic/extrinsic motion congruency for biological motion analysis, and support a theoretical framework in which early visual filters help to detect animate agents in the environment based on several fundamental constraints. Only after satisfying these basic constraints could stimuli be evaluated for high-level social content. In this way, we posit that perceptual animacy may serve as a gateway to higher-level processes that support action understanding and social inference. PMID:25406075
Reconfigurable OR and XOR logic gates based on dual responsive on-off-on micromotors
NASA Astrophysics Data System (ADS)
Dong, Yonggang; Liu, Mei; Zhang, Hui; Dong, Bin
2016-04-01
In this study, we report a hemisphere-like micromotor. Intriguingly, the micromotor exhibits controllable on-off-on motion, which can be actuated by two different external stimuli (UV and NH3). Moreover, the moving direction of the micromotor can be manipulated by the direction in which UV and NH3 are applied. As a result, the motion accelerates when both stimuli are applied in the same direction and decelerates when the application directions are opposite to each other. More interestingly, the dual stimuli responsive micromotor can be utilized as a reconfigurable logic gate with UV and NH3 as the inputs and the motion of the micromotor as the output. By controlling the direction of the external stimuli, OR and XOR dual logic functions can be realized. Electronic supplementary information (ESI) available: Fig. S1-S6 and Videos S1-S5. See DOI: 10.1039/c6nr00752j
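The reconfigurable gate described above can be summarized as Boolean truth tables. The mapping below is an illustrative assumption: same-direction stimuli add (motion if either input is on, i.e. OR), while opposite-direction stimuli cancel when both are on (motion only if exactly one input is on, i.e. XOR):

```python
def micromotor_output(uv: bool, nh3: bool, same_direction: bool) -> bool:
    """Boolean abstraction of the reported dual-stimuli gate, with
    motion (on/off) as the output. Same-direction configuration acts
    as OR; opposite-direction configuration acts as XOR."""
    if same_direction:
        return uv or nh3      # effects add: OR gate
    return uv != nh3          # effects cancel when both on: XOR gate

or_table  = [micromotor_output(u, n, True)  for u in (0, 1) for n in (0, 1)]
xor_table = [micromotor_output(u, n, False) for u in (0, 1) for n in (0, 1)]
```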
Decreased visual detection during subliminal stimulation.
Bareither, Isabelle; Villringer, Arno; Busch, Niko A
2014-10-17
What is the perceptual fate of invisible stimuli-are they processed at all and does their processing have consequences for the perception of other stimuli? As has been shown previously in the somatosensory system, even stimuli that are too weak to be consciously detected can influence our perception: Subliminal stimulation impairs perception of near-threshold stimuli and causes a functional deactivation in the somatosensory cortex. In a recent study, we showed that subliminal visual stimuli lead to similar responses, indicated by an increase in alpha-band power as measured with electroencephalography (EEG). In the current study, we investigated whether a behavioral inhibitory mechanism also exists within the visual system. We tested the detection of peripheral visual target stimuli under three different conditions: Target stimuli were presented alone or embedded in a concurrent train of subliminal stimuli either at the same location as the target or in the opposite hemifield. Subliminal stimuli were invisible due to their low contrast, not due to a masking procedure. We demonstrate that target detection was impaired by the subliminal stimuli, but only when they were presented at the same location as the target. This finding indicates that subliminal, low-intensity stimuli induce a similar inhibitory effect in the visual system as has been observed in the somatosensory system. In line with previous reports, we propose that the function underlying this effect is the inhibition of spurious noise by the visual system. © 2014 ARVO.
Eberhardt, Silvio P; Auer, Edward T; Bernstein, Lynne E
2014-01-01
In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee's primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee's lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT). PMID:25400566
Does bimodal stimulus presentation increase ERP components usable in BCIs?
NASA Astrophysics Data System (ADS)
Thurlings, Marieke E.; Brouwer, Anne-Marie; Van Erp, Jan B. F.; Blankertz, Benjamin; Werkhoven, Peter J.
2012-08-01
Event-related potential (ERP)-based brain-computer interfaces (BCIs) employ differences in brain responses to attended and ignored stimuli. Typically, visual stimuli are used. Tactile stimuli have recently been suggested as a gaze-independent alternative. Bimodal stimuli could evoke additional brain activity due to multisensory integration which may be of use in BCIs. We investigated the effect of visual-tactile stimulus presentation on the chain of ERP components, BCI performance (classification accuracies and bitrates) and participants’ task performance (counting of targets). Ten participants were instructed to navigate a visual display by attending (spatially) to targets in sequences of either visual, tactile or visual-tactile stimuli. We observe that attending to visual-tactile (compared to either visual or tactile) stimuli results in an enhanced early ERP component (N1). This bimodal N1 may enhance BCI performance, as suggested by a nonsignificant positive trend in offline classification accuracies. A late ERP component (P300) is reduced when attending to visual-tactile compared to visual stimuli, which is consistent with the nonsignificant negative trend of participants’ task performance. We discuss these findings in the light of affected spatial attention at high-level compared to low-level stimulus processing. Furthermore, we evaluate bimodal BCIs from a practical perspective and for future applications.
Hsu, Patty; Taylor, J Eric T; Pratt, Jay
2015-01-01
The Ternus effect is a robust illusion of motion that produces element motion at short interstimulus intervals (ISIs; < 50 ms) and group motion at longer ISIs (> 50 ms). Previous research has shown that the nature of the stimuli (e.g., similarity, grouping), not just ISI, can influence the likelihood of perceiving element or group motion. We examined whether semantic knowledge can also influence what type of illusory motion is perceived. In Experiment 1, we used a modified Ternus display with pictures of frogs in a jump-ready pose facing forwards or backwards relative to the direction of illusory motion. Participants perceived more element motion with the forward-facing frogs and more group motion with the backward-facing frogs. Experiment 2 tested whether this effect would still occur with line drawings of frogs, or if a more life-like image was necessary. Experiment 3 tested whether this effect was due to visual asymmetries inherent in the jumping pose. Experiment 4 tested whether frogs in a "non-jumping," sedentary pose would replicate the original effect. These experiments elucidate the role of semantic knowledge in the Ternus effect. Prior knowledge of the movement of certain animate objects, in this case frogs, can bias the perception of element or group motion.
Expansion of direction space around the cardinal axes revealed by smooth pursuit eye movements.
Krukowski, Anton E; Stone, Leland S
2005-01-20
It is well established that perceptual direction discrimination shows an oblique effect; thresholds are higher for motion along diagonal directions than for motion along cardinal directions. Here, we compare simultaneous direction judgments and pursuit responses for the same motion stimuli and find that both pursuit and perceptual thresholds show similar anisotropies. The pursuit oblique effect is robust under a wide range of experimental manipulations, being largely resistant to changes in trajectory (radial versus tangential motion), speed (10 versus 25 deg/s), directional uncertainty (blocked versus randomly interleaved), and cognitive state (tracking alone versus concurrent tracking and perceptual tasks). Our data show that the pursuit oblique effect is caused by an effective expansion of direction space surrounding the cardinal directions and the requisite compression of space for other directions. This expansion suggests that the directions around the cardinal directions are in some way overrepresented in the visual cortical pathways that drive both smooth pursuit and perception.
Sequential motion of the ossicular chain measured by laser Doppler vibrometry.
Kunimoto, Yasuomi; Hasegawa, Kensaku; Arii, Shiro; Kataoka, Hideyuki; Yazama, Hiroaki; Kuya, Junko; Fujiwara, Kazunori; Takeuchi, Hiromi
2017-12-01
In order to help a surgeon make the best decision, a more objective method of measuring ossicular motion is required. A laser Doppler vibrometer was mounted on a surgical microscope. To measure ossicular chain vibrations, eight patients with cochlear implants were investigated. To assess the motions of the ossicular chain, velocities at five points were measured with tonal stimuli of 1 and 3 kHz, which yielded reproducible results. The sequential amplitude change at each point was calculated with phase shifting from the tonal stimulus. Motion of the ossicular chain was visualized from the averaged results using a graphics application. The head of the malleus and the body of the incus showed synchronized movement as one unit. In contrast, the stapes (incudostapedial joint and posterior crus) moved synchronously in opposite phase to the malleus and incus. The amplitudes at 1 kHz were almost twice those at 3 kHz. Our results show that the malleus and incus unit and the stapes move with a phase difference.
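Extracting amplitude and phase at the stimulus frequency, as described above, resembles single-frequency (lock-in style) demodulation of the velocity trace. This is a generic sketch of that computation, not the authors' actual pipeline; the sampling parameters and synthetic tone are illustrative:

```python
import cmath
import math

def tone_amplitude_phase(samples, fs, f0):
    """Amplitude and phase of the component at frequency f0 (Hz) in a
    trace sampled at fs Hz, via the single-bin discrete Fourier
    coefficient; exact when the trace spans whole cycles of f0."""
    n = len(samples)
    coeff = (2.0 / n) * sum(
        v * cmath.exp(-2j * math.pi * f0 * k / fs)
        for k, v in enumerate(samples)
    )
    return abs(coeff), cmath.phase(coeff)

# Synthetic 1 kHz "velocity" tone, 40 kHz sampling, 10 ms (10 full cycles)
fs, f0, amp = 40_000.0, 1000.0, 2.0
trace = [amp * math.sin(2 * math.pi * f0 * k / fs) for k in range(400)]
a, ph = tone_amplitude_phase(trace, fs, f0)
```

Comparing the phases recovered at different measurement points would then reveal whether two ossicles move in phase or in opposite phase.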
The processing of social stimuli in early infancy: from faces to biological motion perception.
Simion, Francesca; Di Giorgio, Elisa; Leo, Irene; Bardi, Lara
2011-01-01
There are several lines of evidence which suggest that, from birth, the human system detects social agents on the basis of at least two properties: the presence of a face and the way they move. This chapter reviews the infant research on the origin of brain specialization for social stimuli and on the role of innate mechanisms and perceptual experience in shaping the development of the social brain. Two lines of convergent evidence, on face detection and biological motion detection, will be presented to demonstrate the innate predispositions of the human system to detect social stimuli at birth. As for face detection, experiments will be presented to demonstrate that, by virtue of nonspecific attentional biases, a very coarse template of faces becomes active at birth. As for biological motion detection, studies will be presented to demonstrate that, from birth, the human system is able to detect social stimuli on the basis of their properties, such as the presence of a semi-rigid motion named biological motion. Overall, the empirical evidence converges in supporting the notion that the human system begins life broadly tuned to detect social stimuli and that progressive specialization will narrow the system's tuning for social stimuli as a function of experience. Copyright © 2011 Elsevier B.V. All rights reserved.
Visual cortical activity reflects faster accumulation of information from cortically blind fields
Martin, Tim; Das, Anasuya; Huxlin, Krystel R.
2012-01-01
Brain responses (from functional magnetic resonance imaging) and components of information processing were investigated in nine cortically blind observers performing a global direction discrimination task. Three of these subjects had responses in perilesional cortex in response to blind field stimulation, whereas the others did not. We used the EZ-diffusion model of decision making to understand how cortically blind subjects make a perceptual decision on stimuli presented within their blind field. We found that these subjects had slower accumulation of information in their blind fields as compared with their good fields and to intact controls. Within cortically blind subjects, activity in perilesional tissue, V3A and hMT+ was associated with a faster accumulation of information for deciding direction of motion of stimuli presented in the blind field. This result suggests that the rate of information accumulation is a critical factor in the degree of impairment in cortical blindness and varies greatly among affected individuals. Retraining paradigms that seek to restore visual functions might benefit from focusing on increasing the rate of information accumulation. PMID:23169923
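The EZ-diffusion model mentioned above has a closed-form solution (Wagenmakers et al., 2007) that recovers drift rate (the rate of information accumulation), boundary separation, and non-decision time from three summary statistics: proportion correct, RT variance, and mean RT. This sketch implements those standard equations with illustrative inputs, not the study's data:

```python
import math

def ez_diffusion(pc: float, vrt: float, mrt: float, s: float = 0.1):
    """Closed-form EZ-diffusion: proportion correct pc (0.5 < pc < 1),
    RT variance vrt (s^2), mean RT mrt (s) -> drift rate v, boundary
    separation a, non-decision time ter. s is the scaling parameter."""
    logit = math.log(pc / (1.0 - pc))
    x = logit * (logit * pc**2 - logit * pc + pc - 0.5) / vrt
    v = math.copysign(1.0, pc - 0.5) * s * x ** 0.25
    a = s**2 * logit / v
    y = -v * a / s**2
    mdt = (a / (2.0 * v)) * (1.0 - math.exp(y)) / (1.0 + math.exp(y))
    return v, a, mrt - mdt

# Illustrative inputs: 80% correct, RT variance 0.112 s^2, mean RT 0.723 s
v, a, ter = ez_diffusion(0.80, 0.112, 0.723)
```

A lower drift rate v for blind-field than for good-field stimuli is the pattern of slower information accumulation the study reports.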
Elevated audiovisual temporal interaction in patients with migraine without aura
2014-01-01
Background: Photophobia and phonophobia are the most prominent symptoms in patients with migraine without aura. Hypersensitivity to visual stimuli can lead to greater hypersensitivity to auditory stimuli, which suggests that the interaction between visual and auditory stimuli may play an important role in the pathogenesis of migraine. However, audiovisual temporal interactions in migraine have not been well studied. Therefore, our aim was to examine auditory and visual interactions in migraine. Methods: In this study, visual, auditory, and audiovisual stimuli with different temporal intervals between the visual and auditory stimuli were randomly presented to the left or right hemispace. During this time, the participants were asked to respond promptly to target stimuli. We used cumulative distribution functions to analyze the response times as a measure of audiovisual integration. Results: Our results showed that audiovisual integration was significantly elevated in the migraineurs compared with the normal controls (p < 0.05); however, audiovisual suppression was weaker in the migraineurs compared with the normal controls (p < 0.05). Conclusions: Our findings further objectively support the notion that migraineurs without aura are hypersensitive to external visual and auditory stimuli. Our study offers a new quantitative and objective method to evaluate hypersensitivity to audiovisual stimuli in patients with migraine. PMID:24961903
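The cumulative-distribution-function measure of audiovisual integration referenced above is typically assessed against Miller's race-model bound, P(RT ≤ t | AV) ≤ P(RT ≤ t | A) + P(RT ≤ t | V). This generic sketch of that test is an assumption about the analysis, with synthetic response times:

```python
def ecdf(sample, t):
    """Empirical P(RT <= t) for a list of response times."""
    return sum(rt <= t for rt in sample) / len(sample)

def race_model_violation(av, a, v, t):
    """Positive values indicate multisensory facilitation beyond the
    race-model bound P(AV <= t) <= P(A <= t) + P(V <= t) (Miller, 1982)."""
    return ecdf(av, t) - min(1.0, ecdf(a, t) + ecdf(v, t))

# Synthetic RTs (ms): bimodal responses faster than either unimodal one
a_rts  = [320, 340, 360, 380, 400]
v_rts  = [330, 350, 370, 390, 410]
av_rts = [260, 280, 300, 320, 340]
violation = race_model_violation(av_rts, a_rts, v_rts, 300)
```

Evaluating the violation across a range of t values yields the curves whose elevation in migraineurs the study reports.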
Wu, Ming; Nern, Aljoscha; Williamson, W Ryan; Morimoto, Mai M; Reiser, Michael B; Card, Gwyneth M; Rubin, Gerald M
2016-01-01
Visual projection neurons (VPNs) provide an anatomical connection between early visual processing and higher brain regions. Here we characterize lobula columnar (LC) cells, a class of Drosophila VPNs that project to distinct central brain structures called optic glomeruli. We anatomically describe 22 different LC types and show that, for several types, optogenetic activation in freely moving flies evokes specific behaviors. The activation phenotypes of two LC types closely resemble natural avoidance behaviors triggered by a visual loom. In vivo two-photon calcium imaging reveals that these LC types respond to looming stimuli, while another type does not, but instead responds to the motion of a small object. Activation of LC neurons on only one side of the brain can result in attractive or aversive turning behaviors depending on the cell type. Our results indicate that LC neurons convey information on the presence and location of visual features relevant for specific behaviors. DOI: http://dx.doi.org/10.7554/eLife.21022.001 PMID:28029094
Orientation-Selective Retinal Circuits in Vertebrates
Antinucci, Paride; Hindges, Robert
2018-01-01
Visual information is already processed in the retina before it is transmitted to higher visual centers in the brain. This includes the extraction of salient features from visual scenes, such as motion directionality or contrast, through neurons belonging to distinct neural circuits. Some retinal neurons are tuned to the orientation of elongated visual stimuli. Such ‘orientation-selective’ neurons are present in the retinae of most, if not all, vertebrate species analyzed to date, with species-specific differences in frequency and degree of tuning. In some cases, orientation-selective neurons have very stereotyped functional and morphological properties suggesting that they represent distinct cell types. In this review, we describe the retinal cell types underlying orientation selectivity found in various vertebrate species, and highlight their commonalities and differences. In addition, we discuss recent studies that revealed the cellular, synaptic and circuit mechanisms at the basis of retinal orientation selectivity. Finally, we outline the significance of these findings in shaping our current understanding of how this fundamental neural computation is implemented in the visual systems of vertebrates. PMID:29467629
Auditory emotional cues enhance visual perception.
Zeelenberg, René; Bocanegra, Bruno R
2010-04-01
Recent studies show that emotional stimuli impair performance on subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by emotional cues as compared to neutral cues. When the cue was presented visually, we replicated the emotion-induced impairment found in other studies. Our results suggest that emotional stimuli have a twofold effect on perception. They impair perception by reflexively attracting attention at the expense of competing stimuli. However, emotional stimuli also induce a nonspecific perceptual enhancement that carries over onto other stimuli when competition is reduced, for example, by presenting stimuli in different modalities. Copyright 2009 Elsevier B.V. All rights reserved.
Murphy, Kathleen M; Saunders, Muriel D; Saunders, Richard R; Olswang, Lesley B
2004-01-01
The effects of different types and amounts of environmental stimuli (visual and auditory) on microswitch use and behavioral states of three individuals with profound multiple impairments were examined. Each individual's switch use and behavioral states were measured under three setting conditions: natural stimuli (typical visual and auditory stimuli in a recreational situation), reduced visual stimuli, and reduced visual and auditory stimuli. Results demonstrated differential switch use in all participants across the varying environmental setting conditions. No consistent effects of environmental condition on behavioral state were observed. Predominant behavioral state scores and switch use did not systematically covary for any participant. Results suggest the importance of considering environmental stimuli in relation to switch use when working with individuals with profound multiple impairments.
Foley, Elaine; Rippon, Gina; Thai, Ngoc Jade; Longe, Olivia; Senior, Carl
2012-02-01
Very little is known about the neural structures involved in the perception of realistic dynamic facial expressions. In the present study, a unique set of naturalistic dynamic facial emotional expressions was created. Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend Haxby et al.'s [Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. The distributed human neural system for face perception. Trends in Cognitive Science, 4, 223-233, 2000] distributed neural system for face perception. This network includes early visual regions, such as the inferior occipital gyrus, which is identified as insensitive to motion or affect but sensitive to the visual stimulus, the STS, identified as specifically sensitive to motion, and the amygdala, recruited to process affect. Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as the inferior occipital gyrus and the STS, along with coupling between the STS and the amygdala, as well as the inferior frontal gyrus. These findings support the presence of a distributed network of cortical regions that mediate the perception of different dynamic facial expressions.
Gender differences in identifying emotions from auditory and visual stimuli.
Waaramaa, Teija
2017-12-01
The present study focused on gender differences in emotion identification from auditory and visual stimuli produced by two male and two female actors. Differences in emotion identification from nonsense samples, language samples and prolonged vowels were investigated. We also examined whether auditory stimuli can convey the emotional content of speech without visual stimuli, and whether visual stimuli can do so without auditory stimuli. The aim was to gain better knowledge of vocal attributes and a more holistic understanding of the nonverbal communication of emotion. Females tended to be more accurate in emotion identification than males. Voice quality parameters played a role in emotion identification in both genders. The emotional content of the samples was best conveyed by nonsense sentences, better than by prolonged vowels or a shared native language of the speakers and participants. Thus, vocal nonverbal communication tends to affect the interpretation of emotion even in the absence of language. The emotional stimuli were better recognized from visual than from auditory stimuli by both genders. Visual information about speech may not be tied to the language; instead, it may be based on the human ability to understand the kinetic movements of speech production more readily than the characteristics of the acoustic cues.
Tyler, Mitchell E.; Danilov, Yuri P.; Kaczmarek, Kurt A.; Meyerand, Mary E.
2013-01-01
Some individuals with balance impairment have hypersensitivity of the motion-sensitive visual cortices (hMT+) compared to healthy controls. Previous work showed that electrical tongue stimulation can reduce the exaggerated postural sway induced by optic flow in this subject population and decrease the hypersensitive response of hMT+. Additionally, a region within the brainstem (BS), likely containing the vestibular and trigeminal nuclei, showed increased optic flow-induced activity after tongue stimulation. The aim of this study was to understand how the modulation induced by tongue stimulation affects the balance-processing network as a whole and how modulation of BS structures can influence cortical activity. Four volumes of interest, discovered in a general linear model analysis, constitute major contributors to the balance-processing network. These regions were entered into a dynamic causal modeling analysis to map the network and measure any connection or topology changes due to the stimulation. Balance-impaired individuals had downregulated response of the primary visual cortex (V1) to visual stimuli but upregulated modulation of the connection between V1 and hMT+ by visual motion compared to healthy controls (p ≤ 1E-5). This upregulation was decreased to near-normal levels after stimulation. Additionally, the region within the BS showed increased response to visual motion after stimulation compared to both prestimulation and controls. Stimulation to the tongue enters the central nervous system at the BS but likely propagates to the cortex through supramodal information transfer. We present a model to explain these brain responses that utilizes an anatomically present, but functionally dormant pathway of information flow within the processing network. PMID:23216162
NASA Astrophysics Data System (ADS)
Telban, Robert J.
While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. To address this, new human-centered motion cueing algorithms were developed. A revised "optimal algorithm" uses time-invariant filters developed by optimal control, incorporating human vestibular system models. The "nonlinear algorithm" is a novel approach that is also formulated by optimal control, but can also be updated in real time. It incorporates a new integrated visual-vestibular perception model that includes both visual and vestibular sensation and the interaction between the stimuli. A time-varying control law requires the matrix Riccati equation to be solved in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. As a result of unsatisfactory sensation, an augmented turbulence cue was added to the vertical mode for both the optimal and nonlinear algorithms. The relative effectiveness of the algorithms, in simulating aircraft maneuvers, was assessed with an eleven-subject piloted performance test conducted on the NASA Langley Visual Motion Simulator (VMS). Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. 
Control input analysis shows pilot-induced oscillations on a straight-in approach are less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.
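The time-varying control law above requires solving the matrix Riccati equation in real time, which the thesis does with a neurocomputing approach. As a sketch of the underlying optimal-control step, the steady-state (algebraic) case can be solved directly with SciPy; the double-integrator plant and weighting matrices below are hypothetical stand-ins, not the simulator's actual washout model:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical washout plant: states [position, velocity], control = acceleration.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])  # penalize position/velocity excursion (washout)
R = np.array([[0.1]])     # penalize control effort

P = solve_continuous_are(A, B, Q, R)   # steady-state Riccati solution
K = np.linalg.solve(R, B.T @ P)        # optimal state feedback u = -K x

# The Riccati residual A'P + PA - PB R^{-1} B'P + Q should be ~0.
residual = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T @ P) + Q
print(np.abs(residual).max())
```

In the real-time, time-varying case the same equation is integrated online rather than solved once, which is what motivates the neural-network solver described above.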
Störmer, Viola S; Winther, Gesche N; Li, Shu-Chen; Andersen, Søren K
2013-03-20
Keeping track of multiple moving objects is an essential ability of visual perception. However, the mechanisms underlying this ability are not well understood. We instructed human observers to track five or seven independently and randomly moving target objects amid identical nontargets and recorded steady-state visual evoked potentials (SSVEPs) elicited by these stimuli. Visual processing of moving targets, as assessed by SSVEP amplitudes, was continuously facilitated relative to the processing of identical but irrelevant nontargets. The cortical sources of this enhancement were localized to areas including early visual cortex V1-V3 and motion-sensitive area MT, suggesting that the sustained multifocal attentional enhancement during multiple object tracking already operates at hierarchically early stages of visual processing. Consistent with this interpretation, the magnitude of attentional facilitation during tracking in a single trial predicted the speed of target identification at the end of the trial. Together, these findings demonstrate that attention can flexibly and dynamically facilitate the processing of multiple independent object locations in early visual areas and thereby allow for tracking of these objects.
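Frequency-tagged SSVEP amplitudes of the kind measured here are typically read out from the Fourier spectrum at each stimulus's flicker frequency. A sketch with synthetic data (the tag frequencies, amplitudes, and noise level are made up for illustration):

```python
import numpy as np

fs = 500.0                         # sampling rate (Hz)
t = np.arange(0, 4.0, 1 / fs)      # one 4 s epoch
f_target, f_nontarget = 12.0, 15.0 # hypothetical frequency tags

# Synthetic EEG: tagged responses buried in broadband noise.
rng = np.random.default_rng(1)
eeg = (1.5 * np.sin(2 * np.pi * f_target * t)
       + 0.8 * np.sin(2 * np.pi * f_nontarget * t)
       + rng.normal(0, 1.0, t.size))

# Single-sided amplitude spectrum: |rfft| scaled so a unit sine reads as 1.
spectrum = np.abs(np.fft.rfft(eeg)) / t.size * 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def ssvep_amp(f):
    """Amplitude at the spectral bin nearest frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

print(ssvep_amp(f_target), ssvep_amp(f_nontarget))
```

With a 4 s epoch the frequency resolution is 0.25 Hz, so both tags fall on exact bins and their amplitudes separate cleanly from the noise floor.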
Contextual modulation revealed by optical imaging exhibits figural asymmetry in macaque V1 and V2
Zarella, Mark D; Ts’o, Daniel Y
2017-01-01
Neurons in early visual cortical areas are influenced by stimuli presented well beyond the confines of their classical receptive fields, endowing them with the ability to encode fine-scale features while also having access to the global context of the visual scene. This property can potentially define a role for the early visual cortex to contribute to a number of important visual functions, such as surface segmentation and figure–ground segregation. It is unknown how extraclassical response properties conform to the functional architecture of the visual cortex, given the high degree of functional specialization in areas V1 and V2. We examined the spatial relationships of contextual activations in macaque V1 and V2 with intrinsic signal optical imaging. Using figure–ground stimulus configurations defined by orientation or motion, we found that extraclassical modulation is restricted to the cortical representations of the figural component of the stimulus. These modulations were positive in sign, suggesting a relative enhancement in neuronal activity that may reflect an excitatory influence. Orientation and motion cues produced similar patterns of activation that traversed the functional subdivisions of V2. The asymmetrical nature of the enhancement demonstrated the capacity for visual cortical areas as early as V1 to contribute to figure–ground segregation, and the results suggest that this information can be extracted from the population activity constrained only by retinotopy, and not the underlying functional organization. PMID:28761385
What you thought you knew about motion sickness isn't necessarily so
NASA Technical Reports Server (NTRS)
Cowings, P. S.; Malmstrom, F. V.
1984-01-01
Motion sickness symptoms, stimuli, and drug therapy are discussed. Autogenic feedback training (AFT) methods of preventing motion sickness are explained. Research with AFT indicates that trained participants could withstand longer periods of Coriolis acceleration; that participants with either high or low susceptibility to motion sickness could control their symptoms with AFT; that AFT for Coriolis acceleration transfers to other motion sickness stimuli; and that most people can learn AFT, though at varying rates.
Sheliga, Boris M.; Quaia, Christian; FitzGibbon, Edmond J.; Cumming, Bruce G.
2016-01-01
White noise stimuli are frequently used to study the visual processing of broadband images in the laboratory. A common goal is to describe how responses are derived from Fourier components in the image. We investigated this issue by recording the ocular-following responses (OFRs) to white noise stimuli in human subjects. For a given speed we compared OFRs to unfiltered white noise with those to noise filtered with band-pass filters and notch filters. Removing components with low spatial frequency (SF) reduced OFR magnitudes, and the SF associated with the greatest reduction matched the SF that produced the maximal response when presented alone. This reduction declined rapidly with SF, compatible with a winner-take-all operation. Removing higher SF components increased OFR magnitudes. For higher speeds this effect became larger and propagated toward lower SFs. All of these effects were quantitatively well described by a model that combined two factors: (a) an excitatory drive that reflected the OFRs to individual Fourier components and (b) a suppression by higher SF channels where the temporal sampling of the display led to flicker. This nonlinear interaction has an important practical implication: Even with high refresh rates (150 Hz), the temporal sampling introduced by visual displays has a significant impact on visual processing. For instance, we show that this distorts speed tuning curves, shifting the peak to lower speeds. Careful attention to spectral content, in the light of this nonlinearity, is necessary to minimize the resulting artifact when using white noise patterns undergoing apparent motion. PMID:26762277
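The band-pass and notch manipulations of white noise described above amount to masking in the spatial-frequency domain. A sketch of radial SF filtering for a 2-D stimulus; the pixel scale and cutoff values are assumptions for illustration:

```python
import numpy as np

def sf_filter(img, low=None, high=None, notch=None, deg_per_px=0.05):
    """Band-pass / notch filter a square 2-D stimulus by radial spatial
    frequency. low/high keep cycles/deg in [low, high]; notch=(lo, hi)
    removes that band instead."""
    n = img.shape[0]
    f = np.fft.fftfreq(n, d=deg_per_px)   # cycles/deg along each axis
    fx, fy = np.meshgrid(f, f)
    sf = np.hypot(fx, fy)                 # radial spatial frequency
    keep = np.ones_like(sf, dtype=bool)
    if low is not None:
        keep &= sf >= low
    if high is not None:
        keep &= sf <= high
    if notch is not None:
        keep &= ~((sf >= notch[0]) & (sf <= notch[1]))
    keep[0, 0] = True                     # preserve mean luminance (DC)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * keep))

rng = np.random.default_rng(2)
noise = rng.standard_normal((256, 256))
low_sf = sf_filter(noise, high=1.0)           # keep only low SFs
notched = sf_filter(noise, notch=(0.5, 2.0))  # remove a mid-SF band
```

The same mask logic applies frame by frame when such patterns undergo apparent motion, which is where the display-sampling artifact discussed above enters.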
A Simon Effect With Stationary Moving Stimuli
ERIC Educational Resources Information Center
Bosbach, Simone; Prinz, Wolfgang; Kerzel, Dirk
2004-01-01
To clarify whether motion information per se has a separable influence on action control, the authors investigated whether irrelevant direction of motion of stimuli whose overall position was constant over time would affect manual left-right responses (i.e., reveal a motion-based Simon effect). In Experiments 1 and 2, significant Simon effects…
Ogawa, Akitoshi; Bordier, Cecile; Macaluso, Emiliano
2013-01-01
The use of naturalistic stimuli to probe sensory functions in the human brain is gaining increasing interest. Previous imaging studies examined brain activity associated with the processing of cinematographic material using both standard “condition-based” designs, as well as “computational” methods based on the extraction of time-varying features of the stimuli (e.g. motion). Here, we exploited both approaches to investigate the neural correlates of complex visual and auditory spatial signals in cinematography. In the first experiment, the participants watched a piece of a commercial movie presented in four blocked conditions: 3D vision with surround sounds (3D-Surround), 3D with monaural sound (3D-Mono), 2D-Surround, and 2D-Mono. In the second experiment, they watched two different segments of the movie both presented continuously in 3D-Surround. The blocked presentation served for standard condition-based analyses, while all datasets were submitted to computation-based analyses. The latter assessed where activity co-varied with visual disparity signals and the complexity of auditory multi-sources signals. The blocked analyses associated 3D viewing with the activation of the dorsal and lateral occipital cortex and superior parietal lobule, while the surround sounds activated the superior and middle temporal gyri (S/MTG). The computation-based analyses revealed the effects of absolute disparity in dorsal occipital and posterior parietal cortices and of disparity gradients in the posterior middle temporal gyrus plus the inferior frontal gyrus. The complexity of the surround sounds was associated with activity in specific sub-regions of S/MTG, even after accounting for changes of sound intensity. These results demonstrate that the processing of naturalistic audio-visual signals entails an extensive set of visual and auditory areas, and that computation-based analyses can track the contribution of complex spatial aspects characterizing such life-like stimuli. 
PMID:24194828
The threshold for conscious report: Signal loss and response bias in visual and frontal cortex.
van Vugt, Bram; Dagnino, Bruno; Vartak, Devavrat; Safaai, Houman; Panzeri, Stefano; Dehaene, Stanislas; Roelfsema, Pieter R
2018-05-04
Why are some visual stimuli consciously detected, whereas others remain subliminal? We investigated the fate of weak visual stimuli in the visual and frontal cortex of awake monkeys trained to report stimulus presence. Reported stimuli were associated with strong sustained activity in the frontal cortex, and frontal activity was weaker and quickly decayed for unreported stimuli. Information about weak stimuli could be lost at successive stages en route from the visual to the frontal cortex, and these propagation failures were confirmed through microstimulation of area V1. Fluctuations in response bias and sensitivity during perception of identical stimuli were traced back to prestimulus brain-state markers. A model in which stimuli become consciously reportable when they elicit a nonlinear ignition process in higher cortical areas explained our results. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
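Separating sensitivity from response bias, as in the fluctuation analysis above, is standard signal detection theory. A minimal d'/criterion sketch in Python; the trial counts are hypothetical, not the study's data:

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, fas, crs):
    """Sensitivity (d') and response bias (criterion c) for a yes/no
    detection task, with a log-linear correction so extreme proportions
    (0 or 1) stay finite."""
    h = (hits + 0.5) / (hits + misses + 1)  # corrected hit rate
    f = (fas + 0.5) / (fas + crs + 1)       # corrected false-alarm rate
    z = NormalDist().inv_cdf
    return z(h) - z(f), -0.5 * (z(h) + z(f))

d, c = dprime_criterion(hits=80, misses=20, fas=10, crs=90)
print(f"d' = {d:.2f}, c = {c:.2f}")
```

Tracking c and d' across blocks is one way to relate prestimulus brain state to trial-by-trial shifts in bias versus sensitivity.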
Visual gravity cues in the interpretation of biological movements: neural correlates in humans.
Maffei, Vincenzo; Indovina, Iole; Macaluso, Emiliano; Ivanenko, Yuri P; A Orban, Guy; Lacquaniti, Francesco
2015-01-01
Our visual system takes into account the effects of Earth gravity to interpret biological motion (BM), but the neural substrates of this process remain unclear. Here we measured functional magnetic resonance imaging (fMRI) signals while participants viewed intact or scrambled stick-figure animations of walking, running, hopping, and skipping recorded at normal or reduced gravity. We found that regions sensitive to BM configuration in the occipito-temporal cortex (OTC) were more active for reduced than normal gravity but with intact stimuli only. Effective connectivity analysis suggests that predictive coding of gravity effects underlies BM interpretation. This process might be implemented by a family of snapshot neurons involved in action monitoring. Copyright © 2014 Elsevier Inc. All rights reserved.
[The P300-based brain-computer interface: presentation of the complex "flash + movement" stimuli].
Ganin, I P; Kaplan, A Ia
2014-01-01
The P300-based brain-computer interface (BCI) relies on detecting the P300 wave of brain event-related potentials. Most users learn to control this BCI within minutes, and after a short classifier training session they can type text on a computer screen or assemble an image from separate fragments in simple BCI-based video games. Nevertheless, its limited attractiveness to users and conservative stimulus organization may restrict its integration into real-world control of information processes. At the same time, the onset of object movement (motion-onset stimuli) can itself be a factor that induces a P300 wave. In the present work we tested the hypothesis that complex "flash + movement" stimuli, together with a vivid and compact stimulus layout on the computer screen, would be more attractive to users operating the P300 BCI. In a study of 20 subjects we demonstrated the effectiveness of our interface. Both accuracy and P300 amplitude were higher for flashing stimuli and complex "flash + movement" stimuli than for motion-onset stimuli. N200 amplitude was maximal for flashing stimuli, whereas for "flash + movement" and motion-onset stimuli it was only about half as large. A similar BCI with complex stimuli could be embedded in compact control systems that require a high level of user attention under adverse external conditions that hinder BCI control.
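Classifier training in such a BCI rests on the fact that averaging epochs time-locked to target stimuli exposes the P300 against nontarget epochs. A toy sketch with synthetic data; the amplitudes, latencies, and noise level are illustrative only:

```python
import numpy as np

fs = 250                        # sampling rate (Hz)
n_ep, n_s = 30, int(0.8 * fs)   # 30 epochs of 800 ms post-stimulus
t = np.arange(n_s) / fs
rng = np.random.default_rng(4)

# Synthetic ERP epochs: targets carry a P300-like positivity near 300 ms,
# nontargets are noise only.
p300 = 5e-6 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
targets = p300 + rng.normal(0, 10e-6, (n_ep, n_s))
nontargets = rng.normal(0, 10e-6, (n_ep, n_s))

# Averaging suppresses noise by ~1/sqrt(n_ep); compare mean amplitude
# in a 250-450 ms window to separate target from nontarget epochs.
win = (t >= 0.25) & (t <= 0.45)
amp_t = targets.mean(axis=0)[win].mean()
amp_n = nontargets.mean(axis=0)[win].mean()
print(amp_t > amp_n)
```

A practical classifier replaces the fixed window with weights learned from the training epochs, but the amplitude contrast it exploits is the one shown here.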
Stephen, Ian D; Sturman, Daniel; Stevenson, Richard J; Mond, Jonathan; Brooks, Kevin R
2018-01-01
Body size misperception, the belief that one is larger or smaller than in reality, affects a large and growing segment of the population. Recently, studies have shown that exposure to extreme body stimuli results in a shift in the point of subjective normality, suggesting that visual adaptation may be a mechanism by which body size misperception occurs. Yet, despite being exposed to a similar set of bodies, some individuals within a given geographical area will develop body size misperception and others will not. The reason for these individual differences is currently unknown. One possible explanation stems from the observation that women with lower levels of body satisfaction have been found to pay more attention to images of thin bodies. However, while attention has been shown to enhance visual adaptation effects for both low-level (e.g., rotational and linear motion) and high-level stimuli (e.g., facial gender), it is not known whether this effect extends to visual adaptation to body size. Here, we test the hypothesis that there is an indirect effect of body satisfaction on the direction and magnitude of the body fat adaptation effect, mediated via visual attention (i.e., selectively attending to images of thin over fat bodies or vice versa). Significant mediation effects were found in both men and women, suggesting that observers' level of body satisfaction may influence selective visual attention to thin or fat bodies, which in turn influences the magnitude and direction of visual adaptation to body size. This may provide a potential mechanism by which some individuals develop body size misperception, a risk factor for eating disorders, compulsive exercise behaviour and steroid abuse, while others do not.
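The indirect (mediated) effect tested here is conventionally estimated as the product of regression coefficients: the path from predictor to mediator times the path from mediator to outcome controlling for the predictor. A sketch with simulated data; the variable names and path strengths are arbitrary, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
# Simulated variables (purely illustrative):
satisfaction = rng.normal(0, 1, n)                    # body satisfaction (X)
attention = 0.5 * satisfaction + rng.normal(0, 1, n)  # visual attention (M)
adaptation = 0.6 * attention + rng.normal(0, 1, n)    # adaptation effect (Y)

def slopes(y, X):
    """OLS slopes for y ~ X (with intercept); X is one or more predictors."""
    design = np.column_stack([np.ones(len(y)), *np.atleast_2d(X)])
    return np.linalg.lstsq(design, y, rcond=None)[0][1:]

a = slopes(attention, satisfaction)[0]                # path X -> M
b = slopes(adaptation, [attention, satisfaction])[0]  # path M -> Y given X
indirect = a * b   # product-of-coefficients estimate of the mediated effect
print(f"indirect effect a*b = {indirect:.3f}")
```

In practice the significance of a*b is assessed by bootstrapping its sampling distribution rather than by the point estimate alone.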
Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B
2012-07-16
Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another are less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another. Copyright © 2012 Elsevier Inc. All rights reserved.
A dendrite-autonomous mechanism for direction selectivity in retinal starburst amacrine cells.
Hausselt, Susanne E; Euler, Thomas; Detwiler, Peter B; Denk, Winfried
2007-07-01
Detection of image motion direction begins in the retina, with starburst amacrine cells (SACs) playing a major role. SACs generate larger dendritic Ca(2+) signals when motion is from their somata towards their dendritic tips than for motion in the opposite direction. To study the mechanisms underlying the computation of direction selectivity (DS) in SAC dendrites, electrical responses to expanding and contracting circular wave visual stimuli were measured via somatic whole-cell recordings and quantified using Fourier analysis. Fundamental and, especially, harmonic frequency components were larger for expanding stimuli. This DS persists in the presence of GABA and glycine receptor antagonists, suggesting that inhibitory network interactions are not essential. The presence of harmonics indicates nonlinearity, which, as the relationship between harmonic amplitudes and holding potential indicates, is likely due to the activation of voltage-gated channels. [Ca(2+)] changes in SAC dendrites evoked by voltage steps and monitored by two-photon microscopy suggest that the distal dendrite is tonically depolarized relative to the soma, due in part to resting currents mediated by tonic glutamatergic synaptic input, and that high-voltage-activated Ca(2+) channels are active at rest. Supported by compartmental modeling, we conclude that dendritic DS in SACs can be computed by the dendrites themselves, relying on voltage-gated channels and a dendritic voltage gradient, which provides the spatial asymmetry necessary for direction discrimination.
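The Fourier quantification used in this study, measuring fundamental and harmonic amplitudes of a periodic response, can be sketched as follows. The sampling rate, stimulus frequency, and synthetic response below are assumptions for illustration, not the recording parameters.

```python
import numpy as np

fs = 1000.0                      # sampling rate in Hz (assumed)
f_stim = 1.0                     # stimulus (fundamental) frequency in Hz (assumed)
t = np.arange(0, 4.0, 1.0 / fs)  # 4 s trace -> integer number of stimulus cycles

# Synthetic "response": fundamental plus a second harmonic, the signature of a
# nonlinearity such as voltage-gated channel activation
resp = 2.0 * np.sin(2 * np.pi * f_stim * t) + 0.8 * np.sin(2 * np.pi * 2 * f_stim * t)

# Single-sided amplitude spectrum (scaling recovers sine amplitudes at exact bins)
spec = np.fft.rfft(resp) / len(resp) * 2
freqs = np.fft.rfftfreq(len(resp), 1.0 / fs)

f1_amp = abs(spec[np.argmin(abs(freqs - f_stim))])      # fundamental amplitude
f2_amp = abs(spec[np.argmin(abs(freqs - 2 * f_stim))])  # second-harmonic amplitude
```

Comparing `f1_amp` and `f2_amp` across expanding and contracting stimuli is the kind of comparison the abstract describes; a larger harmonic component indicates a more strongly nonlinear response.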
Arnold, S E J; Stevenson, P C; Belmain, S R
2015-08-01
Many insects show a greater attraction to multimodal cues, e.g. odour and colour combined, than to either cue alone. Despite the potential to apply the knowledge to improve control strategies, studies of multiple stimuli have not been undertaken for stored product pest insects. We tested orientation towards a food odour (crushed white maize) in combination with a colour cue (coloured paper with different surface spectral reflectance properties) in three storage pest beetle species, using motion tracking to monitor their behaviour. While the maize weevil, Sitophilus zeamais (Motsch.), showed attraction to both odour and colour stimuli, particularly to both cues in combination, this was not observed in the bostrichid pests Rhyzopertha dominica (F.) (lesser grain borer) or Prostephanus truncatus (Horn) (larger grain borer). The yellow stimulus was particularly attractive to S. zeamais, and control experiments showed that this was neither a result of the insects moving towards darker-coloured areas of the arena, nor their being repelled by optical brighteners in white paper. Visual stimuli may play a role in location of host material by S. zeamais, and can be used to inform trap design for the control or monitoring of maize weevils. The lack of visual responses by the two grain borers is likely to relate to their different host-seeking behaviours and ecological background, which should be taken into account when devising control methods.
Neural suppression of irrelevant information underlies optimal working memory performance.
Zanto, Theodore P; Gazzaley, Adam
2009-03-11
Our ability to focus attention on task-relevant information and ignore distractions is reflected by differential enhancement and suppression of neural activity in sensory cortex (i.e., top-down modulation). Such selective, goal-directed modulation of activity may be intimately related to memory, such that the focus of attention biases the likelihood of successfully maintaining relevant information by limiting interference from irrelevant stimuli. Despite recent studies elucidating the mechanistic overlap between attention and memory, the relationship between top-down modulation of visual processing during working memory (WM) encoding and subsequent recognition performance has not yet been established. Here, we provide neurophysiological evidence in healthy, young adults that top-down modulation of early visual processing (< 200 ms from stimulus onset) is intimately related to subsequent WM performance, such that the likelihood of successfully remembering relevant information is associated with limiting interference from irrelevant stimuli. The consequences of a failure to ignore distractors for recognition performance were replicated for two types of feature-based memory, motion direction and color. Moreover, attention to irrelevant stimuli was reflected neurally during the WM maintenance period as an increased memory load. These results suggest that neural enhancement of relevant information is not the primary determinant of high-level performance; rather, optimal WM performance depends on effectively filtering irrelevant information through neural suppression to prevent overloading a limited memory capacity.
Temporal Influence on Awareness
1995-12-01
[List-of-figures fragment: test setup timing (measured vs. expected modal delays, in ms); Experiment I, visual and auditory stimuli presented simultaneously (visual-auditory delay = 0 ms, visual-visual delay = 0 ms); Experiment II, visual and auditory stimuli presented in order (visual-auditory delay = 0 ms, visual-visual delay variable).]
Correlates of figure-ground segregation in fMRI.
Skiera, G; Petersen, D; Skalej, M; Fahle, M
2000-01-01
We investigated which correlates of figure-ground segregation can be detected by means of functional magnetic resonance imaging (fMRI). Five subjects were scanned with a Siemens Vision 1.5 T system. Motion-, colour-, and luminance-defined checkerboards were presented with alternating control conditions containing one of the two features of the checkerboard. We find segregation-specific activation in V1 for all subjects and all stimuli, and conclude that neural mechanisms sensitive to figure-ground segregation exist as early as the primary visual cortex.
Optimal estimator model for human spatial orientation
NASA Technical Reports Server (NTRS)
Borah, J.; Young, L. R.; Curry, R. E.
1979-01-01
A model is being developed to predict pilot dynamic spatial orientation in response to multisensory stimuli. Motion stimuli are first processed by dynamic models of the visual, vestibular, tactile, and proprioceptive sensors. Central nervous system function is then modeled as a steady-state Kalman filter which blends information from the various sensors to form an estimate of spatial orientation. Where necessary, this linear central estimator has been augmented with nonlinear elements to reflect more accurately some highly nonlinear human response characteristics. Computer implementation of the model has shown agreement with several important qualitative characteristics of human spatial orientation, and it is felt that with further modification and additional experimental data the model can be improved and extended. Possible means are described for extending the model to better represent the active pilot with varying skill and work load levels.
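The central estimator described above is a steady-state Kalman filter. A minimal scalar sketch, assuming invented dynamics and sensor-noise values rather than the model's actual parameters, shows how the steady-state gain follows from the Riccati recursion and is used to fuse two noisy sensor channels:

```python
import numpy as np

# Toy scenario: a random-walk orientation state observed by two noisy sensors
a_dyn, q = 1.0, 0.01                   # state transition and process noise (assumed)
r_visual, r_vestibular = 0.04, 0.09    # sensor noise variances (assumed)

# Two direct scalar measurements of the same state are equivalent to one
# measurement whose variance is the harmonic combination of the two
r = 1.0 / (1.0 / r_visual + 1.0 / r_vestibular)

# Iterate the Riccati recursion to its fixed point -> steady-state gain k
p = 1.0
for _ in range(1000):
    p_pred = a_dyn * p * a_dyn + q
    k = p_pred / (p_pred + r)
    p = (1 - k) * p_pred

rng = np.random.default_rng(1)
true_tilt = 0.3                        # constant head tilt in rad (assumed scenario)
est = 0.0
for _ in range(200):
    z_vis = true_tilt + rng.normal(scale=r_visual ** 0.5)
    z_ves = true_tilt + rng.normal(scale=r_vestibular ** 0.5)
    z = (z_vis / r_visual + z_ves / r_vestibular) * r   # precision-weighted fusion
    est = est + k * (z - est)                           # steady-state Kalman update
```

Because the gain is computed once at the fixed point, the filter runs with constant coefficients, which is what makes the "steady-state" formulation attractive for a real-time pilot model.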
Motion of glossy objects does not promote separation of lighting and surface colour
2017-01-01
The surface properties of an object, such as texture, glossiness or colour, provide important cues to its identity. However, the actual visual stimulus received by the eye is determined by both the properties of the object and the illumination. We tested whether operational colour constancy for glossy objects (the ability to distinguish changes in spectral reflectance of the object, from changes in the spectrum of the illumination) was affected by rotational motion of either the object or the light source. The different chromatic and geometric properties of the specular and diffuse reflections provide the basis for this discrimination, and we systematically varied specularity to control the available information. Observers viewed animations of isolated objects undergoing either lighting or surface-based spectral transformations accompanied by motion. By varying the axis of rotation, and surface patterning or geometry, we manipulated: (i) motion-related information about the scene, (ii) relative motion between the surface patterning and the specular reflection of the lighting, and (iii) image disruption caused by this motion. Despite large individual differences in performance with static stimuli, motion manipulations neither improved nor degraded performance. As motion significantly disrupts frame-by-frame low-level image statistics, we infer that operational constancy depends on a high-level scene interpretation, which is maintained in all conditions. PMID:29291113
Adaptation to visual or auditory time intervals modulates the perception of visual apparent motion
Zhang, Huihui; Chen, Lihan; Zhou, Xiaolin
2012-01-01
It is debated whether sub-second timing is subserved by a centralized mechanism or by the intrinsic properties of task-related neural activity in specific modalities (Ivry and Schlerf, 2008). By using a temporal adaptation task, we investigated whether adapting to different time intervals conveyed through stimuli in different modalities (i.e., frames of a visual Ternus display, visual blinking discs, or auditory beeps) would affect the subsequent implicit perception of visual timing, i.e., the inter-stimulus interval (ISI) between two frames in a Ternus display. The Ternus display can induce two percepts of apparent motion (AM), depending on the ISI between the two frames: “element motion” for short ISIs, in which the endmost disc is seen as moving back and forth while the middle disc at the overlapping or central position remains stationary; “group motion” for longer ISIs, in which both discs appear to move laterally as a whole. In Experiment 1, participants adapted to either the typical “element motion” (ISI = 50 ms) or the typical “group motion” (ISI = 200 ms). In Experiments 2 and 3, participants adapted to a time interval of 50 or 200 ms through observing a series of two paired blinking discs at the center of the screen (Experiment 2) or hearing a sequence of two paired beeps with a pitch of 1000 Hz (Experiment 3). In Experiment 4, participants adapted to sequences of paired beeps with either low pitches (500 Hz) or high pitches (5000 Hz). After adaptation in each trial, participants were presented with a Ternus probe in which the ISI between the two frames was equal to the transitional threshold of the two types of motion, as determined by a pretest. Results showed that adapting to the short time interval in all the situations led to more reports of “group motion” in the subsequent Ternus probes; adapting to the long time interval, however, caused no aftereffect for visual adaptation but significantly more reports of group motion for auditory adaptation.
These findings, suggesting an amodal representation of sub-second timing across modalities, are interpreted in the framework of the temporal pacemaker model. PMID:23133408
Vision-based flight control in the hawkmoth Hyles lineata
Windsor, Shane P.; Bomphrey, Richard J.; Taylor, Graham K.
2014-01-01
Vision is a key sensory modality for flying insects, playing an important role in guidance, navigation and control. Here, we use a virtual-reality flight simulator to measure the optomotor responses of the hawkmoth Hyles lineata, and use a published linear-time invariant model of the flight dynamics to interpret the function of the measured responses in flight stabilization and control. We recorded the forces and moments produced during oscillation of the visual field in roll, pitch and yaw, varying the temporal frequency, amplitude or spatial frequency of the stimulus. The moths’ responses were strongly dependent upon contrast frequency, as expected if the optomotor system uses correlation-type motion detectors to sense self-motion. The flight dynamics model predicts that roll angle feedback is needed to stabilize the lateral dynamics, and that a combination of pitch angle and pitch rate feedback is most effective in stabilizing the longitudinal dynamics. The moths’ responses to roll and pitch stimuli coincided qualitatively with these functional predictions. The moths produced coupled roll and yaw moments in response to yaw stimuli, which could help to reduce the energetic cost of correcting heading. Our results emphasize the close relationship between physics and physiology in the stabilization of insect flight. PMID:24335557
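The stabilization argument above (open-loop dynamics rendered stable by a combination of angle and rate feedback) can be illustrated with a toy linear-time-invariant system; the matrices and gains below are invented for illustration, not the published hawkmoth model:

```python
import numpy as np

# Toy roll dynamics: state = [roll angle phi, roll rate p]
A = np.array([[0.0, 1.0],      # d(phi)/dt = p
              [0.0, 0.2]])     # d(p)/dt = 0.2 * p  (weakly divergent open loop)
B = np.array([[0.0],
              [1.0]])          # control moment enters through roll acceleration

# Open loop is unstable: eigenvalues 0 and 0.2 are not strictly negative
open_loop_stable = bool(np.all(np.linalg.eigvals(A).real < 0))

# Feedback on BOTH roll angle and roll rate (gains assumed), u = -K x
K = np.array([[2.0, 1.5]])
A_cl = A - B @ K
closed_loop_stable = bool(np.all(np.linalg.eigvals(A_cl).real < 0))
```

Checking the real parts of the closed-loop eigenvalues is the standard linear-systems test for stability, and it mirrors the paper's prediction that angle (not just rate) feedback is needed.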
Papera, Massimiliano; Richards, Anne
2016-05-01
Exogenous allocation of attentional resources allows the visual system to encode and maintain representations of stimuli in visual working memory (VWM). However, limits in the processing capacity to allocate resources can prevent unexpected visual stimuli from gaining access to VWM and thereby to consciousness. Using a novel approach to create unbiased stimuli of increasing saliency, we investigated visual processing during a visual search task in individuals who show a high or low propensity to neglect unexpected stimuli. When propensity to inattention is high, ERP recordings show a diminished amplification concomitantly with a decrease in theta band power during the N1 latency, followed by poor target enhancement during the N2 latency. Furthermore, a later modulation in the P3 latency was also found in individuals showing propensity to visual neglect, suggesting that more effort is required for conscious maintenance of visual information in VWM. Effects during early stages of processing (N80 and P1) were also observed, suggesting that sensitivity to contrasts and medium-to-high spatial frequencies may be modulated by low-level saliency (albeit no statistical group differences were found). In accordance with the Global Workspace Model, our data indicate that a lack of resources in low-level processors and visual attention may be responsible for the failure to "ignite" a state of high-level activity spread across several brain areas that is necessary for stimuli to access awareness. These findings may aid in the development of diagnostic tests and interventions to detect or reduce the propensity to visual neglect of unexpected stimuli. © 2016 Society for Psychophysiological Research.
Motion versus position in the perception of head-centred movement.
Freeman, Tom C A; Sumnall, Jane H
2002-01-01
Observers can recover motion with respect to the head during an eye movement by comparing signals encoding retinal motion and the velocity of pursuit. Evidently there is a mismatch between these signals, because perceived head-centred motion is not always veridical. One example is the Filehne illusion, in which a stationary object appears to move in the opposite direction to pursuit. Like the motion aftereffect, the phenomenal experience of the Filehne illusion is one in which the stimulus moves but does not seem to go anywhere. This raises problems when measuring the illusion by motion nulling, because the more traditional technique confounds perceived motion with changes in perceived position. We devised a new nulling technique using global-motion stimuli that degraded familiar position cues but preserved cues to motion. Stimuli consisted of random-dot patterns comprising signal and noise dots that moved at the same retinal 'base' speed. Noise moved in random directions. In an eye-stationary speed-matching experiment we found that noise slowed perceived retinal speed as 'coherence strength' (ie percentage of signal) was reduced. The effect occurred over the two-octave range of base speeds studied and well above direction threshold. When the same stimuli were combined with pursuit, observers were able to null the Filehne illusion by adjusting coherence. A power law relating coherence to retinal base speed fit the data well with a negative exponent. Eye-movement recordings showed that pursuit was quite accurate. We then tested the hypothesis that the stimuli found at the null-points appeared to move at the same retinal speed. Two observers supported the hypothesis, a third partially, and a fourth showed a small linear trend. In addition, the retinal speed found by the traditional Filehne technique was similar to the matches obtained with the global-motion stimuli.
The results provide support for the idea that speed is the critical cue in head-centred motion perception.
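The power-law fit reported above can be sketched as a linear regression in log-log coordinates; the data points below are invented for illustration, and only the negative exponent mirrors the paper's finding:

```python
import numpy as np

# Hypothetical null-point data: coherence = scale * speed ** exponent
speeds = np.array([1.0, 2.0, 4.0])        # retinal base speeds in deg/s (assumed)
coherence = 0.6 * speeds ** -0.5          # synthetic null-point coherences

# A power law is linear in log-log space: log(coherence) = exponent*log(speed) + log(scale)
slope, intercept = np.polyfit(np.log(speeds), np.log(coherence), 1)
exponent = slope                          # fitted power-law exponent (negative here)
scale = np.exp(intercept)                 # fitted multiplicative constant
```

The negative exponent expresses the abstract's result: the faster the retinal base speed, the less coherence is needed to null the illusion.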
Effective 3-D shape discrimination survives retinal blur.
Norman, J Farley; Beers, Amanda M; Holmin, Jessica S; Boswell, Alexandria M
2010-08-01
A single experiment evaluated observers' ability to visually discriminate 3-D object shape, where the 3-D structure was defined by motion, texture, Lambertian shading, and occluding contours. The observers' vision was degraded to varying degrees by blurring the experimental stimuli, using 2.0-, 2.5-, and 3.0-diopter convex lenses. The lenses reduced the observers' acuity from -0.091 LogMAR (in the no-blur conditions) to 0.924 LogMAR (in the conditions with the most blur; 3.0-diopter lenses). This visual degradation, although producing severe reductions in visual acuity, had only small (but significant) effects on the observers' ability to discriminate 3-D shape. The observers' shape discrimination performance was facilitated by the objects' rotation in depth, regardless of the presence or absence of blur. Our results indicate that accurate global shape discrimination survives a considerable amount of retinal blur.
Őze, A; Puszta, A; Buzás, P; Kóbor, P; Braunitzer, G; Nagy, A
2018-06-21
Flashing light stimulation is often used to investigate the visual system. However, the magnitude of the effect of this stimulus on the various subcortical pathways is not well investigated. The signals of conscious vision are conveyed by the magnocellular, parvocellular and koniocellular pathways. Parvocellular and koniocellular pathways (or more precisely, the L-M opponent and S-cone isolating channels) can be accessed by isoluminant red-green (L-M) and S-cone isolating stimuli, respectively. The main goal of the present study was to explore how costimulation with strong white extrafoveal light flashes alters the perception of stimuli specific to these pathways. Eleven healthy volunteers with negative neurological and ophthalmological history were enrolled for the study. Isoluminance of L-M and S-cone isolating sine-wave gratings was set individually, using the minimum motion procedure. The contrast thresholds for these stimuli as well as for achromatic gratings were determined by an adaptive staircase procedure where subjects had to indicate the orientation (horizontal, oblique or vertical) of the gratings. Thresholds were then determined again while a strong white peripheral light flash was presented 50 ms before each trial. Peripheral light flashes significantly (p < 0.05) increased the contrast thresholds of the achromatic and S-cone isolating stimuli. The threshold elevation was especially marked in case of the achromatic stimuli. However, the contrast threshold for the L-M stimuli was not significantly influenced by the light flashes. We conclude that extrafoveally applied light flashes influence predominantly the perception of achromatic stimuli. Copyright © 2018 Elsevier B.V. All rights reserved.
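The adaptive staircase procedure mentioned above can be sketched as follows. The paper does not specify its staircase rules, so this 1-up/2-down variant (which converges near 70.7% correct) and the simulated observer are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
true_threshold = 0.05              # simulated observer's contrast threshold (assumed)

def p_correct(contrast):
    """Synthetic psychometric function: 1/3 guessing floor (3 orientations),
    rising steeply around the threshold."""
    return 1 / 3 + (1 - 1 / 3) / (1 + (true_threshold / contrast) ** 4)

contrast, step = 0.2, 1.5          # start well above threshold; multiplicative steps
correct_run, reversals, last_dir = 0, [], 0
while len(reversals) < 12:
    if rng.random() < p_correct(contrast):   # simulated orientation judgement
        correct_run += 1
        if correct_run == 2:                 # 2 correct -> make task harder
            correct_run = 0
            if last_dir == +1:
                reversals.append(contrast)   # direction changed: record reversal
            contrast, last_dir = contrast / step, -1
    else:                                    # 1 error -> make task easier
        correct_run = 0
        if last_dir == -1:
            reversals.append(contrast)
        contrast, last_dir = contrast * step, +1

# Threshold estimate: geometric mean of the last reversal contrasts
threshold_estimate = float(np.exp(np.mean(np.log(reversals[-8:]))))
```

Averaging reversal points in log units is a common convention for contrast staircases, since contrast steps are multiplicative.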
First- and second-order processing in transient stereopsis.
Edwards, M; Pope, D R; Schor, C M
2000-01-01
Large-field stimuli were used to investigate the interaction of first- and second-order pathways in transient-stereo processing. Stimuli consisted of sinewave modulations in either the mean luminance (first-order stimulus) or the contrast (second-order stimulus) of a dynamic-random-dot field. The main results of the present study are that: (1) Depth could be extracted with both the first-order and second-order stimuli; (2) Depth could be extracted from dichoptically mixed first- and second-order stimuli, however, the same stimuli, when presented as a motion sequence, did not result in a motion percept. Based upon these findings we conclude that the transient-stereo system processes both first- and second-order signals, and that these two signals are pooled prior to the extraction of transient depth. This finding of interaction between first- and second-order stereoscopic processing is different from the independence that has been found with the motion system.
Dissociating emotion-induced blindness and hypervision.
Bocanegra, Bruno R; Zeelenberg, René
2009-12-01
Previous findings suggest that emotional stimuli sometimes improve (emotion-induced hypervision) and sometimes impair (emotion-induced blindness) the visual perception of subsequent neutral stimuli. We hypothesized that these differential carryover effects might be due to 2 distinct emotional influences in visual processing. On the one hand, emotional stimuli trigger a general enhancement in the efficiency of visual processing that can carry over onto other stimuli. On the other hand, emotional stimuli benefit from a stimulus-specific enhancement in later attentional processing at the expense of competing visual stimuli. We investigated whether detrimental (blindness) and beneficial (hypervision) carryover effects of emotion in perception can be dissociated within a single experimental paradigm. In 2 experiments, we manipulated the temporal competition for attention between an emotional cue word and a subsequent neutral target word by varying cue-target interstimulus interval (ISI) and cue visibility. Interestingly, emotional cues impaired target identification at short ISIs but improved target identification when competition was diminished by either increasing ISI or reducing cue visibility, suggesting that emotional significance of stimuli can improve and impair visual performance through distinct perceptual mechanisms.
Murnane, Kevin Sean; Howell, Leonard Lee
2010-08-15
Functional magnetic resonance imaging (fMRI) is a technique with significant potential to advance our understanding of multiple brain systems. However, when human subjects undergo fMRI studies they are typically conscious, whereas pre-clinical fMRI studies typically utilize anesthesia, which complicates comparisons across studies. Therefore, we have developed an apparatus suitable for imaging conscious rhesus monkeys. In order to minimize subject stress and spatial motion, each subject was acclimated to the necessary procedures over several months. The effectiveness of this process was then evaluated, in fully trained subjects, by quantifying objective physiological measures. These physiological metrics were stable both within and across sessions and did not differ from when these same subjects were immobilized using standard primate handling procedures. Subject motion and blood oxygenation level dependent (BOLD) fMRI measurements were then evaluated by scanning subjects under three different conditions: the absence of stimulation, presentation of a visual stimulus, or administration of intravenous (i.v.) cocaine (0.3 mg/kg). Spatial motion differed neither by condition nor along the three principal axes. In addition, maximum translational and rotational motion never exceeded one half of the voxel size (0.75 mm) or 1.5 degrees, respectively. Furthermore, the localization of changes in blood oxygenation closely matched those reported in previous studies using similar stimuli. These findings document the feasibility of fMRI data collection in conscious rhesus monkeys using these procedures and allow for the further study of the neural effects of psychoactive drugs. (c) 2010 Elsevier B.V. All rights reserved.
Feature-selective attention in healthy old age: a selective decline in selective attention?
Quigley, Cliodhna; Müller, Matthias M
2014-02-12
Deficient selection against irrelevant information has been proposed to underlie age-related cognitive decline. We recently reported evidence for maintained early sensory selection when older and younger adults used spatial selective attention to perform a challenging task. Here we explored age-related differences when spatial selection is not possible and feature-selective attention must be deployed. We additionally compared the integrity of feedforward processing by exploiting the well established phenomenon of suppression of visual cortical responses attributable to interstimulus competition. Electroencephalogram was measured while older and younger human adults responded to brief occurrences of coherent motion in an attended stimulus composed of randomly moving, orientation-defined, flickering bars. Attention was directed to horizontal or vertical bars by a pretrial cue, after which two orthogonally oriented, overlapping stimuli or a single stimulus were presented. Horizontal and vertical bars flickered at different frequencies and thereby elicited separable steady-state visual-evoked potentials, which were used to examine the effect of feature-based selection and the competitive influence of a second stimulus on ongoing visual processing. Age differences were found in feature-selective attentional modulation of visual responses: older adults did not show consistent modulation of magnitude or phase. In contrast, the suppressive effect of a second stimulus was robust and comparable in magnitude across age groups, suggesting that bottom-up processing of the current stimuli is essentially unchanged in healthy old age. Thus, it seems that visual processing per se is unchanged, but top-down attentional control is compromised in older adults when space cannot be used to guide selection.
Modular Representation of Luminance Polarity In the Superficial Layers Of Primary Visual Cortex
Smith, Gordon B.; Whitney, David E.; Fitzpatrick, David
2016-01-01
The spatial arrangement of luminance increments (ON) and decrements (OFF) falling on the retina provides a wealth of information used by central visual pathways to construct coherent representations of visual scenes. But how the polarity of luminance change is represented in the activity of cortical circuits remains unclear. Using wide-field epifluorescence and two-photon imaging, we demonstrate a robust modular representation of luminance polarity (ON or OFF) in the superficial layers of ferret primary visual cortex. Polarity-specific domains are found with both uniform changes in luminance and single light/dark edges, and include neurons selective for orientation and direction of motion. The integration of orientation and polarity preference is evident in the selectivity and discrimination capabilities of most layer 2/3 neurons. We conclude that polarity selectivity is an integral feature of layer 2/3 neurons, ensuring that the distinction between light and dark stimuli is available for further processing in downstream extrastriate areas. PMID:26590348
Brodeur, Mathieu B.; Dionne-Dostie, Emmanuelle; Montreuil, Tina; Lepage, Martin
2010-01-01
There are currently stimuli with published norms available to study several psychological aspects of language and visual cognitions. Norms represent valuable information that can be used as experimental variables or systematically controlled to limit their potential influence on another experimental manipulation. The present work proposes 480 photo stimuli that have been normalized for name, category, familiarity, visual complexity, object agreement, viewpoint agreement, and manipulability. Stimuli are also available in grayscale, blurred, scrambled, and line-drawn version. This set of objects, the Bank Of Standardized Stimuli (BOSS), was created specifically to meet the needs of scientists in cognition, vision and psycholinguistics who work with photo stimuli. PMID:20532245
The perception of isoluminant coloured stimuli of amblyopic eye and defocused eye
NASA Astrophysics Data System (ADS)
Krumina, Gunta; Ozolinsh, Maris; Ikaunieks, Gatis
2008-09-01
In routine eye examination, visual acuity is usually determined using standard charts with black letters on a white background; however, contrast and colour are also important characteristics of visual perception. The purpose of this research was to study the perception of isoluminant coloured stimuli in cases of true and simulated amblyopia. We estimated the difference in visual acuity with isoluminant coloured stimuli relative to high-contrast black-white stimuli for true and simulated amblyopia. Tests were generated on a computer screen. Visual acuity was measured using different charts in two ways: standard achromatic stimuli (black symbols on a white background) and isoluminant coloured stimuli (white symbols on a yellow background; grey symbols on a blue, green or red background). The isoluminant tests thus had colour contrast only and no luminance contrast. Visual acuity with the standard method and with the colour tests was studied in subjects with good visual acuity, using the best vision correction where necessary. The same was done for subjects with a defocused eye and subjects with true amblyopia. Defocus was produced with optical lenses placed in front of the normal eye. The results obtained with the isoluminant colour charts revealed a worsening of visual acuity compared with that estimated by the standard high-contrast method (black symbols on a white background).
Discrepant visual speech facilitates covert selective listening in "cocktail party" conditions.
Williams, Jason A
2012-06-01
The presence of congruent visual speech information facilitates the identification of auditory speech, while the addition of incongruent visual speech information often impairs accuracy. This latter arrangement occurs naturally when one is being directly addressed in conversation but listens to a different speaker. Under these conditions, performance may diminish since: (a) one is bereft of the facilitative effects of the corresponding lip motion and (b) one becomes subject to visual distortion by incongruent visual speech; by contrast, speech intelligibility may be improved due to (c) bimodal localization of the central unattended stimulus. Participants were exposed to centrally presented visual and auditory speech while attending to a peripheral speech stream. In some trials, the lip movements of the central visual stimulus matched the unattended speech stream; in others, the lip movements matched the attended peripheral speech. Accuracy for the peripheral stimulus was nearly one standard deviation greater with incongruent visual information, compared to the congruent condition which provided bimodal pattern recognition cues. Likely, the bimodal localization of the central stimulus further differentiated the stimuli and thus facilitated intelligibility. Results are discussed with regard to similar findings in an investigation of the ventriloquist effect, and the relative strength of localization and speech cues in covert listening.
Schwartzman, José Salomão; Velloso, Renata de Lima; D'Antino, Maria Eloísa Famá; Santos, Silvana
2015-05-01
To compare visual fixation on social stimuli in patients with Rett syndrome (RS) and autism spectrum disorder (ASD). Visual fixation on social stimuli was analyzed in 14 female RS patients (age range 4-30 years), 11 male ASD patients (age range 4-20 years), and 17 children with typical development (TD). Patients were exposed to three different pictures (two of human faces and one with social and non-social stimuli), each presented for 8 seconds on the screen of a computer attached to eye-tracking equipment. The percentage of visual fixation on social stimuli was significantly higher in the RS group than in the ASD group, and even than in the TD group. Visual fixation on social stimuli thus appears to be one more endophenotype distinguishing RS from ASD.
Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli
Störmer, Viola S.; McDonald, John J.; Hillyard, Steven A.
2009-01-01
The question of whether attention makes sensory impressions appear more intense has been a matter of debate for over a century. Recent psychophysical studies have reported that attention increases apparent contrast of visual stimuli, but the issue continues to be debated. We obtained converging neurophysiological evidence from human observers as they judged the relative contrast of visual stimuli presented to the left and right visual fields following a lateralized auditory cue. Cross-modal cueing of attention boosted the apparent contrast of the visual target in association with an enlarged neural response in the contralateral visual cortex that began within 100 ms after target onset. The magnitude of the enhanced neural response was positively correlated with perceptual reports of the cued target being higher in contrast. The results suggest that attention increases the perceived contrast of visual stimuli by boosting early sensory processing in the visual cortex. PMID:20007778
Neuronal pattern separation of motion-relevant input in LIP activity
Berberian, Nareg; MacPherson, Amanda; Giraud, Eloïse; Richardson, Lydia
2016-01-01
In various regions of the brain, neurons discriminate sensory stimuli by decreasing the similarity between ambiguous input patterns. Here, we examine whether this process of pattern separation may drive the rapid discrimination of visual motion stimuli in the lateral intraparietal area (LIP). Starting with a simple mean-rate population model that captures neuronal activity in LIP, we show that overlapping input patterns can be reformatted dynamically to give rise to separated patterns of neuronal activity. The population model predicts that a key ingredient of pattern separation is the presence of heterogeneity in the response of individual units. Furthermore, the model proposes that pattern separation relies on heterogeneity in the temporal dynamics of neural activity and not merely in the mean firing rates of individual neurons over time. We confirm these predictions in recordings of macaque LIP neurons and show that the accuracy of pattern separation is a strong predictor of behavioral performance. Overall, the results suggest that LIP relies on neuronal pattern separation to facilitate decision-relevant discrimination of sensory stimuli. NEW & NOTEWORTHY A new hypothesis is proposed on the role of the lateral intraparietal (LIP) region of cortex during rapid decision making. This hypothesis suggests that LIP alters the representation of ambiguous inputs to reduce their overlap, thus improving sensory discrimination. A combination of computational modeling, theoretical analysis, and electrophysiological data shows that the pattern separation hypothesis links neural activity to behavior and offers novel predictions on the role of LIP during sensory discrimination. PMID:27881719
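The core idea of the abstract above, that overlapping inputs become less similar after passing through a heterogeneous population, can be illustrated with a toy sketch. This is not the authors' LIP model; it is a minimal, assumed construction in which two highly similar input vectors are expanded onto a larger population of units, each with its own random weights and threshold, and the similarity of the binary population responses falls below the similarity of the inputs.

```python
import numpy as np

rng = np.random.default_rng(1)

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two overlapping input patterns: a shared component plus independent noise.
base = rng.random(50)
x1 = base + 0.3 * rng.random(50)
x2 = base + 0.3 * rng.random(50)

# Expansion onto a larger population of heterogeneous units: each unit has
# its own random weight vector and its own threshold.
W = rng.normal(size=(400, 50))
theta = rng.normal(scale=0.5, size=400)

def population_response(x):
    # A unit is active (1) when its weighted input exceeds its threshold.
    return (W @ x > theta).astype(float)

in_sim = cosine(x1, x2)
out_sim = cosine(population_response(x1), population_response(x2))
print(f"input similarity:  {in_sim:.3f}")
print(f"output similarity: {out_sim:.3f}")
```

In this sketch the separation comes entirely from unit-to-unit heterogeneity in weights and thresholds; the abstract's stronger claim, that heterogeneity in temporal dynamics (not just mean rates) matters, is not captured by a static example like this one.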
Efficient encoding of motion is mediated by gap junctions in the fly visual system.
Wang, Siwei; Borst, Alexander; Zaslavsky, Noga; Tishby, Naftali; Segev, Idan
2017-12-01
Understanding the computational implications of specific synaptic connectivity patterns is a fundamental goal in neuroscience. In particular, the computational role of ubiquitous electrical synapses operating via gap junctions remains elusive. In the fly visual system, the cells in the vertical system (VS) network, which play a key role in visual processing, primarily connect to each other via axonal gap junctions. This network therefore provides a unique opportunity to explore the functional role of gap junctions in sensory information processing. Our information-theoretical analysis of a realistic VS network model shows that within 10 ms following the onset of the visual input, the presence of axonal gap junctions enables the VS system to efficiently encode the axis of rotation, θ, of the fly's ego motion. This encoding efficiency, measured in bits, is near-optimal with respect to the physical limits of performance determined by the statistical structure of the visual input itself. The VS network is known to be connected to downstream pathways via a subset of triplets of the VS cells; we found that, because of the axonal gap junctions, the efficiency of this subpopulation in encoding θ is superior to that of the whole VS network and is robust to a wide range of signal-to-noise ratios. We further demonstrate that this efficient encoding of motion by this subpopulation is necessary for the fly's visually guided behavior, such as banked turns in evasive maneuvers. Because gap junctions are formed among the axons of the VS cells, they only impact the system's readout, while maintaining the dendritic input intact, suggesting that the computational principles implemented by neural circuitries may be much richer than previously appreciated based on point-neuron models. Our study provides new insights into how specific network connectivity leads to efficient encoding of sensory stimuli.
Perceptual interaction of local motion signals
Nitzany, Eyal I.; Loe, Maren E.; Palmer, Stephanie E.; Victor, Jonathan D.
2016-01-01
Motion signals are a rich source of information used in many everyday tasks, such as segregation of objects from background and navigation. Motion analysis by biological systems is generally considered to consist of two stages: extraction of local motion signals followed by spatial integration. Studies using synthetic stimuli show that there are many kinds and subtypes of local motion signals. When presented in isolation, these stimuli elicit behavioral and neurophysiological responses in a wide range of species, from insects to mammals. However, these mathematically-distinct varieties of local motion signals typically co-exist in natural scenes. This study focuses on interactions between two kinds of local motion signals: Fourier and glider. Fourier signals are typically associated with translation, while glider signals occur when an object approaches or recedes. Here, using a novel class of synthetic stimuli, we ask how distinct kinds of local motion signals interact and whether context influences sensitivity to Fourier motion. We report that local motion signals of different types interact at the perceptual level, and that this interaction can include subthreshold summation and, in some subjects, subtle context-dependent changes in sensitivity. We discuss the implications of these observations, and the factors that may underlie them. PMID:27902829
Modulation of neuronal responses during covert search for visual feature conjunctions
Buracas, Giedrius T.; Albright, Thomas D.
2009-01-01
While searching for an object in a visual scene, an observer's attentional focus and eye movements are often guided by information about object features and spatial locations. Both spatial and feature-specific attention are known to modulate neuronal responses in visual cortex, but little is known of the dynamics and interplay of these mechanisms as visual search progresses. To address this issue, we recorded from directionally selective cells in visual area MT of monkeys trained to covertly search for targets defined by a unique conjunction of color and motion features and to signal target detection with an eye movement to the putative target. Two patterns of response modulation were observed. One pattern consisted of enhanced responses to targets presented in the receptive field (RF). These modulations occurred at the end-stage of search and were more potent during correct target identification than during erroneous saccades to a distractor in RF, thus suggesting that this modulation is not a mere presaccadic enhancement. A second pattern of modulation was observed when RF stimuli were nontargets that shared a feature with the target. The latter effect was observed during early stages of search and is consistent with a global feature-specific mechanism. This effect often terminated before target identification, thus suggesting that it interacts with spatial attention. This modulation was exhibited not only for motion but also for color cue, although MT neurons are known to be insensitive to color. Such cue-invariant attentional effects may contribute to a feature binding mechanism acting across visual dimensions. PMID:19805385
Full body action remapping of peripersonal space: the case of walking.
Noel, Jean-Paul; Grivaz, Petr; Marmaroli, Patrick; Lissek, Herve; Blanke, Olaf; Serino, Andrea
2015-04-01
The space immediately surrounding the body, i.e. peripersonal space (PPS), is represented by populations of multisensory neurons, from a network of premotor and parietal areas, which integrate tactile stimuli from the body's surface with visual or auditory stimuli presented within a limited distance from the body. Here we show that PPS boundaries extend while walking. We used an audio-tactile interaction task to identify the location in space where looming sounds affect reaction time to tactile stimuli on the chest, taken as a proxy of the PPS boundary. The task was administered while participants either stood still or walked on a treadmill. In addition, in two separate experiments, subjects either did or did not receive additional visual input, i.e. optic flow, implying a translation congruent with the direction of their walking. Results revealed that when participants were standing still, sounds boosted tactile processing when located within 65-100 cm from the participants' body, but not at farther distances. When participants were walking, in contrast, PPS expanded, as reflected in boosted tactile processing at ~1.66 m. This was found despite the fact that the spatial relationship between the participant's body and the sound's source did not vary between the Standing and the Walking conditions. This expansion of PPS boundaries due to walking was the same with or without optic flow, suggesting that kinematic and proprioceptive cues, rather than visual cues, are critical in triggering the effect. These results are the first to demonstrate an adaptation of the chest's PPS representation due to whole-body motion and are compatible with the view that PPS constitutes a dynamic sensory-motor interface between the individual and the environment.
Reconfigurable OR and XOR logic gates based on dual responsive on-off-on micromotors.
Dong, Yonggang; Liu, Mei; Zhang, Hui; Dong, Bin
2016-04-21
In this study, we report a hemisphere-like micromotor. Intriguingly, the micromotor exhibits controllable on-off-on motion, which can be actuated by two different external stimuli (UV and NH3). Moreover, the moving direction of the micromotor can be manipulated by the direction in which UV and NH3 are applied. As a result, the motion accelerates when both stimuli are applied in the same direction and decelerates when the application directions are opposite to each other. More interestingly, the dual stimuli responsive micromotor can be utilized as a reconfigurable logic gate with UV and NH3 as the inputs and the motion of the micromotor as the output. By controlling the direction of the external stimuli, OR and XOR dual logic functions can be realized.
Measuring attention using induced motion.
Gogel, W C; Sharkey, T J
1989-01-01
Attention was measured by means of its effect upon induced motion. Perceived horizontal motion was induced in a vertically moving test spot by the physical horizontal motion of inducing objects. All stimuli were in a frontoparallel plane. The induced motion vectored with the physical motion to produce a clockwise or counterclockwise tilt in the apparent path of motion of the test spot. Either a single inducing object or two inducing objects moving in opposite directions were used. Twelve observers were instructed to attend to or to ignore the single inducing object while fixating the test object and, when the two opposing inducing objects were present, to attend to one inducing object while ignoring the other. Tracking of the test spot was visually monitored. The tilt of the path of apparent motion of the test spot was measured by tactile adjustment of a comparison rod. It was found that the measured tilt was substantially larger when the single inducing object was attended rather than ignored. For the two inducing objects, attending to one while ignoring the other clearly increased the effectiveness of the attended inducing object. The results are analyzed in terms of the distinction between voluntary and involuntary attention. The advantages of measuring attention by its effect on induced motion as compared with the use of a precueing procedure, and a hypothesis regarding the role of attention in modifying perceived spatial characteristics are discussed.
Representation of visual symbols in the visual word processing network.
Muayqil, Taim; Davies-Thompson, Jodie; Barton, Jason J S
2015-03-01
Previous studies have shown that word processing involves a predominantly left-sided occipitotemporal network. Words are a form of symbolic representation, in that they are arbitrary perceptual stimuli that represent other objects, actions or concepts. Lesions of parts of the visual word processing network can cause alexia, which can be associated with difficulty processing other types of symbols such as musical notation or road signs. We investigated whether components of the visual word processing network were also activated by other types of symbols. In 16 music-literate subjects, we defined the visual word network using fMRI and examined responses to four symbolic categories: visual words, musical notation, instructive symbols (e.g. traffic signs), and flags and logos. For each category we compared responses not only to scrambled stimuli, but also to similar stimuli that lacked symbolic meaning. The left visual word form area and a homologous right fusiform region responded similarly to all four categories, but equally to both symbolic and non-symbolic equivalents. Greater response to symbolic than non-symbolic stimuli occurred only in the left inferior frontal and middle temporal gyri, and only for words and, in the case of the left inferior frontal gyrus, for musical notation. A whole-brain analysis comparing symbolic versus non-symbolic stimuli revealed a distributed network of inferior temporo-occipital and parietal regions that differed for different symbols. The fusiform gyri are involved in processing the form of many symbolic stimuli, but not specifically for stimuli with symbolic content. Selectivity for stimuli with symbolic content emerges in the visual word network only at the level of the middle temporal and inferior frontal gyri, and is specific to words and musical notation.
Spatiotopic coding during dynamic head tilt
Turi, Marco; Burr, David C.
2016-01-01
Humans maintain a stable representation of the visual world effortlessly, despite constant movements of the eyes, head, and body, across multiple planes. Whereas visual stability in the face of saccadic eye movements has been intensely researched, fewer studies have investigated retinal image transformations induced by head movements, especially in the frontal plane. Unlike head rotations in the horizontal and sagittal planes, tilting the head in the frontal plane is only partially counteracted by torsional eye movements and consequently induces a distortion of the retinal image to which we seem to be completely oblivious. One possible mechanism aiding perceptual stability is an active reconstruction of a spatiotopic map of the visual world, anchored in allocentric coordinates. To explore this possibility, we measured the positional motion aftereffect (PMAE; the apparent change in position after adaptation to motion) with head tilts of ∼42° between adaptation and test (to dissociate retinal from allocentric coordinates). The aftereffect was shown to have both a retinotopic and spatiotopic component. When tested with unpatterned Gaussian blobs rather than sinusoidal grating stimuli, the retinotopic component was greatly reduced, whereas the spatiotopic component remained. The results suggest that perceptual stability may be maintained at least partially through mechanisms involving spatiotopic coding. NEW & NOTEWORTHY Given that spatiotopic coding could play a key role in maintaining visual stability, we look for evidence of spatiotopic coding after retinal image transformations caused by head tilt. To this end, we measure the strength of the positional motion aftereffect (PMAE; previously shown to be largely spatiotopic after saccades) after large head tilts. We find that, as with eye movements, the spatial selectivity of the PMAE has a large spatiotopic component after head rotation. PMID:27903636
Moors, Pieter; Huygelier, Hanne; Wagemans, Johan; de-Wit, Lee; van Ee, Raymond
2015-01-01
Previous studies using binocular rivalry have shown that signals in a modality other than the visual can bias dominance durations depending on their congruency with the rivaling stimuli. More recently, studies using continuous flash suppression (CFS) have reported that multisensory integration influences how long visual stimuli remain suppressed. In this study, using CFS, we examined whether the contrast thresholds for detecting visual looming stimuli are influenced by a congruent auditory stimulus. In Experiment 1, we show that a looming visual stimulus can result in lower detection thresholds compared to a static concentric grating, but that auditory tone pips congruent with the looming stimulus did not lower suppression thresholds any further. In Experiments 2, 3, and 4, we again observed no advantage for congruent multisensory stimuli. These results add to our understanding of the conditions under which multisensory integration is possible, and suggest that certain forms of multisensory integration are not evident when the visual stimulus is suppressed from awareness using CFS.
Auditory presentation and synchronization in Adobe Flash and HTML5/JavaScript Web experiments.
Reimers, Stian; Stewart, Neil
2016-09-01
Substantial recent research has examined the accuracy of presentation durations and response time measurements for visually presented stimuli in Web-based experiments, with the general conclusion that accuracy is acceptable for most kinds of experiments. However, many areas of behavioral research use auditory stimuli instead of, or in addition to, visual stimuli. Much less is known about auditory accuracy using standard Web-based testing procedures. We used a millisecond-accurate Black Box Toolkit to measure the actual durations of auditory stimuli and the synchronization of auditory and visual presentation onsets. We examined the distribution of timings for 100 presentations of auditory and visual stimuli across two computers with different specifications, three commonly used browsers, and code written in either Adobe Flash or JavaScript. We also examined different coding options for attempting to synchronize the auditory and visual onsets. Overall, we found that auditory durations were very consistent, but that the lags between visual and auditory onsets varied substantially across browsers and computer systems.
Updating visual memory across eye movements for ocular and arm motor control.
Thompson, Aidan A; Henriques, Denise Y P
2008-11-01
Remembered object locations are stored in an eye-fixed reference frame, so that every time the eyes move, spatial representations must be updated for the arm-motor system to reflect the target's new relative position. To date, studies have not investigated how the brain updates these spatial representations during other types of eye movements, such as smooth-pursuit. Further, it is unclear what information is used in spatial updating. To address these questions we investigated whether remembered locations of pointing targets are updated following smooth-pursuit eye movements, as they are following saccades, and also investigated the role of visual information in estimating eye-movement amplitude for updating spatial memory. Misestimates of eye-movement amplitude were induced when participants visually tracked stimuli presented with a background that moved in either the same or opposite direction of the eye before pointing or looking back to the remembered target location. We found that gaze-dependent pointing errors were similar following saccades and smooth-pursuit and that incongruent background motion did result in a misestimate of eye-movement amplitude. However, the background motion had no effect on spatial updating for pointing, but did when subjects made a return saccade, suggesting that the oculomotor and arm-motor systems may rely on different sources of information for spatial updating.
Ebner, Christian; Schroll, Henning; Winther, Gesche; Niedeggen, Michael; Hamker, Fred H
2015-09-01
How the brain decides which information to process 'consciously' has been debated for decades without a simple explanation at hand. While most experiments manipulate the perceptual energy of presented stimuli, the distractor-induced blindness task is a prototypical paradigm for investigating the gating of information into consciousness with little or no visual manipulation. In this paradigm, subjects are asked to report intervals of coherent dot motion in a rapid serial visual presentation (RSVP) stream whenever these are preceded by a particular color stimulus in a different RSVP stream. If distractors (i.e., intervals of coherent dot motion prior to the color stimulus) are shown, subjects' ability to perceive and report intervals of target dot motion decreases, particularly with short delays between intervals of target color and target motion. We propose a biologically plausible neuro-computational model of how the brain controls access to consciousness to explain how distractor-induced blindness originates from information processing in the cortex and basal ganglia. The model suggests that conscious perception requires reverberation of activity in cortico-subcortical loops and that basal-ganglia pathways can either allow or inhibit this reverberation. In the distractor-induced blindness paradigm, inadequate distractor-induced response tendencies are suppressed by the inhibitory 'hyperdirect' pathway of the basal ganglia. If a target follows such a distractor closely, temporal aftereffects of distractor suppression prevent target identification. The model reproduces experimental data on how delays between target color and target motion affect the probability of target detection.
Attentional load modulates responses of human primary visual cortex to invisible stimuli.
Bahrami, Bahador; Lavie, Nilli; Rees, Geraint
2007-03-20
Visual neuroscience has long sought to determine the extent to which stimulus-evoked activity in visual cortex depends on attention and awareness. Some influential theories of consciousness maintain that the allocation of attention is restricted to conscious representations [1, 2]. However, in the load theory of attention [3], competition between task-relevant and task-irrelevant stimuli for limited-capacity attention does not depend on conscious perception of the irrelevant stimuli. The critical test is whether the level of attentional load in a relevant task would determine unconscious neural processing of invisible stimuli. Human participants were scanned with high-field fMRI while they performed a foveal task of low or high attentional load. Irrelevant, invisible monocular stimuli were simultaneously presented peripherally and were continuously suppressed by a flashing mask in the other eye [4]. Attentional load in the foveal task strongly modulated retinotopic activity evoked in primary visual cortex (V1) by the invisible stimuli. Contrary to traditional views [1, 2, 5, 6], we found that availability of attentional capacity determines neural representations related to unconscious processing of continuously suppressed stimuli in human primary visual cortex. Spillover of attention to cortical representations of invisible stimuli (under low load) cannot be a sufficient condition for their awareness.
Synchronization with competing visual and auditory rhythms: bouncing ball meets metronome.
Hove, Michael J; Iversen, John R; Zhang, Allen; Repp, Bruno H
2013-07-01
Synchronization of finger taps with periodically flashing visual stimuli is known to be much more variable than synchronization with an auditory metronome. When one of these rhythms is the synchronization target and the other serves as a distracter at various temporal offsets, strong auditory dominance is observed. However, it has recently been shown that visuomotor synchronization improves substantially with moving stimuli such as a continuously bouncing ball. The present study pitted a bouncing ball against an auditory metronome in a target-distracter synchronization paradigm, with the participants being auditory experts (musicians) and visual experts (video gamers and ball players). Synchronization was still less variable with auditory than with visual target stimuli in both groups. For musicians, auditory stimuli tended to be more distracting than visual stimuli, whereas the opposite was the case for the visual experts. Overall, there was no main effect of distracter modality. Thus, a distracting spatiotemporal visual rhythm can be as effective as a distracting auditory rhythm in its capacity to perturb synchronous movement, but its effectiveness also depends on modality-specific expertise.
The Function and Organization of the Motor System Controlling Flight Maneuvers in Flies.
Lindsay, Theodore; Sustar, Anne; Dickinson, Michael
2017-02-06
Animals face the daunting task of controlling their limbs using a small set of highly constrained actuators. This problem is particularly demanding for insects such as Drosophila, which must adjust wing motion for both quick voluntary maneuvers and slow compensatory reflexes using only a dozen pairs of muscles. To identify strategies by which animals execute precise actions using sparse motor networks, we imaged the activity of a complete ensemble of wing control muscles in intact, flying flies. Our experiments uncovered a remarkably efficient logic in which each of the four skeletal elements at the base of the wing is equipped with both large, phasically active muscles capable of executing large changes and smaller, tonically active muscles specialized for continuous fine-scaled adjustments. Based on the responses to a broad panel of visual motion stimuli, we have developed a model by which the motor array regulates aerodynamically functional features of wing motion.
Johnson, Kerri L; McKay, Lawrie S; Pollick, Frank E
2011-05-01
Gender stereotypes have been implicated in sex-typed perceptions of facial emotion. Such interpretations were recently called into question because facial cues of emotion are confounded with sexually dimorphic facial cues. Here we examine the role of visual cues and gender stereotypes in perceptions of biological motion displays, thus overcoming the morphological confounding inherent in facial displays. In four studies, participants' judgments revealed gender stereotyping. Observers accurately perceived emotion from biological motion displays (Study 1), and this affected sex categorizations. Angry displays were overwhelmingly judged to be men; sad displays were judged to be women (Studies 2-4). Moreover, this pattern remained strong when stimuli were equated for velocity (Study 3). We argue that these results were obtained because perceivers applied gender stereotypes of emotion to infer sex category (Study 4). Implications for both vision sciences and social psychology are discussed. Copyright © 2011 Elsevier B.V. All rights reserved.
Galashan, Daniela; Wittfoth, Matthias; Fehr, Thorsten; Herrmann, Manfred
2008-07-01
Behavioral and electrophysiological correlates of two Simon tasks were examined using comparable stimuli but different task-irrelevant and conflict-inducing stimulus features. Whereas target shape was always the task-relevant stimulus attribute, either target location (location-based task) or motion direction within the target stimuli (motion-based task) was used as a source of conflict. Data from ten healthy participants who performed both tasks are presented. In the motion-based task the incompatible condition showed smaller P300 amplitudes at Pz than the compatible condition and the location-based task yielded a trend towards a reduced P300 amplitude in the incompatible condition. For both tasks, no P300 latency differences between the conditions were found at Pz. The results suggest that the motion-based task elicits behavioral and electrophysiological effects comparable with regular Simon tasks. As all stimuli in the motion-based Simon task were presented centrally the present data strongly argue against the attention-shifting account as an explanatory approach.
ERIC Educational Resources Information Center
Teubert, Manuel; Lohaus, Arnold; Fassbender, Ina; Vierhaus, Marc; Spangler, Sibylle; Borchert, Sonja; Freitag, Claudia; Goertz, Claudia; Graf, Frauke; Gudi, Helene; Kolling, Thorsten; Lamm, Bettina; Keller, Heidi; Knopf, Monika; Schwarzer, Gudrun
2012-01-01
This longitudinal study examined the influence of stimulus material on attention and expectation learning in the visual expectation paradigm. Female faces were used as attention-attracting stimuli, and non-meaningful visual stimuli of comparable complexity (Greebles) were used as low attention-attracting stimuli. Expectation learning performance…
NASA Technical Reports Server (NTRS)
Kayanickupuram, A. J.; Ramos, K. A.; Cordova, M. L.; Wood, S. J.
2009-01-01
The need to resolve new patterns of sensory feedback in altered gravitoinertial environments requires cognitive processes to develop appropriate reference frames for spatial orientation awareness. The purpose of this study was to examine deficits in spatial cognitive performance during adaptation to conflicting tilt-translation stimuli. Fourteen subjects were tilted within a lighted enclosure that simultaneously translated at one of 3 frequencies. Tilt and translation motion was synchronized to maintain the resultant gravitoinertial force aligned with the longitudinal body axis, resulting in a mismatch analogous to spaceflight in which the canals and vision signal tilt while the otoliths do not. Changes in performance on different spatial cognitive tasks were compared 1) without motion, 2) with tilt motion alone (pitch at 0.15, 0.3 and 0.6 Hz or roll at 0.3 Hz), and 3) with conflicting tilt-translation motion. The adaptation paradigm was continued for up to 30 min or until the onset of nausea. The order of the adaptation conditions was counter-balanced across 4 different test sessions. There was a significant effect of stimulus frequency on both motion sickness and spatial cognitive performance. Only 3 of 14 were able to complete the full 30 min protocol at 0.15 Hz, while 7 of 14 completed 0.3 Hz and 13 of 14 completed 0.6 Hz. There were no changes in simple visual-spatial cognitive tests, e.g., mental rotation or match-to-sample. There were significant deficits during 0.15 Hz adaptation in both accuracy and reaction time during a spatial reference task in which subjects were asked to identify a match of a 3D reoriented cube assemblage. Our results are consistent with anecdotal reports of cognitive impairment that are common during sensorimotor adaptation with G-transitions. We conclude that these cognitive deficits stem from the ambiguity of spatial reference frames for central processing of inertial motion cues.
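The synchronized tilt-translation stimulus described above has a simple kinematic basis: for the gravitoinertial resultant to stay aligned with the body's longitudinal axis, a tilt of θ must be accompanied by a horizontal acceleration of g·tan(θ). A minimal sketch of this relation (the 10° tilt amplitude is an assumption for illustration; the abstract does not state it):

```python
import numpy as np

def translation_for_tilt(theta_deg, g=9.81):
    """Horizontal acceleration (m/s^2) that keeps the gravitoinertial
    resultant aligned with the longitudinal body axis at tilt theta."""
    return g * np.tan(np.radians(theta_deg))

# One cycle of sinusoidal tilt at one of the study's frequencies (0.3 Hz)
f, amp_deg = 0.3, 10.0                    # amplitude is assumed, not from the study
t = np.linspace(0.0, 1.0 / f, 200)
theta = amp_deg * np.sin(2 * np.pi * f * t)
a_required = translation_for_tilt(theta)  # synchronized translation profile
```

For small tilts this reduces to a ≈ gθ, so the required acceleration scales linearly with tilt amplitude; but the corresponding displacement grows as 1/f², which is one reason low-frequency conditions are the most demanding to deliver.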
Moors, Pieter; Wagemans, Johan; de-Wit, Lee
2014-01-01
Continuous flash suppression (CFS) is a powerful interocular suppression technique, which is often described as an effective means to reliably suppress stimuli from visual awareness. Suppression through CFS has been assumed to depend upon a reduction in (retinotopically specific) neural adaptation caused by the continual updating of the contents of the visual input to one eye. In this study, we started from the observation that suppressing a moving stimulus through CFS appeared to be more effective when using a mask that was actually more prone to retinotopically specific neural adaptation, but in which the properties of the mask were more similar to those of the to-be-suppressed stimulus. In two experiments, we find that using a moving Mondrian mask (i.e., one that includes motion) is more effective in suppressing a moving stimulus than a regular CFS mask. The observed pattern of results cannot be explained by a simple simulation that computes the degree of retinotopically specific neural adaptation over time, suggesting that this kind of neural adaptation does not play a large role in predicting the differences between conditions in this context. We also find some evidence consistent with the idea that the most effective CFS mask is the one that matches the properties (speed) of the suppressed stimulus. These results question the general importance of retinotopically specific neural adaptation in CFS, and potentially help to explain an implicit trend in the literature to adapt one's CFS mask to match one's to-be-suppressed stimuli. Finally, the results should help to guide the methodological development of future research where continuous suppression of moving stimuli is desired.
NASA Astrophysics Data System (ADS)
Nasaruddin, N. H.; Yusoff, A. N.; Kaur, S.
2014-11-01
The objective of this multiple-subject functional magnetic resonance imaging (fMRI) study was to identify the common brain areas that are activated when viewing black-and-white checkerboard stimuli of various shapes, patterns and sizes, and to investigate specific brain areas that are involved in processing static and moving visual stimuli. Sixteen participants viewed the moving (expanding ring, rotating wedge, flipping hour glass and bowtie, and arc quadrant) and static (full checkerboard) stimuli during an fMRI scan. All stimuli had a black-and-white checkerboard pattern. Statistical parametric mapping (SPM) was used to generate brain activation maps. Differential analyses were implemented to separately search for areas involved in processing static and moving stimuli. In general, the stimuli of various shapes, patterns and sizes activated multiple brain areas, mostly in the left hemisphere. The activation in the right middle temporal gyrus (MTG) was found to be significantly higher in processing moving visual stimuli as compared to the static stimulus. In contrast, the activation in the left calcarine sulcus and left lingual gyrus was significantly higher for the static stimulus as compared to moving stimuli. The visual stimuli of various shapes, patterns and sizes used in this study produced left-lateralized activation. The involvement of the right MTG in processing moving visual information was evident from the differential analysis, while the left calcarine sulcus and left lingual gyrus are the areas involved in processing the static visual stimulus.
Carriot, Jérome; Jamali, Mohsen; Chacron, Maurice J; Cullen, Kathleen E
2017-04-15
In order to understand how the brain's coding strategies are adapted to the statistics of the sensory stimuli experienced during everyday life, the use of animal models is essential. Mice and non-human primates have become common models for furthering our knowledge of the neuronal coding of natural stimuli, but differences in their natural environments and behavioural repertoire may impact optimal coding strategies. Here we investigated the structure and statistics of the vestibular input experienced by mice versus non-human primates during natural behaviours, and found important differences. Our data establish that the structure and statistics of natural signals in non-human primates more closely resemble those observed previously in humans, suggesting similar coding strategies for incoming vestibular input. These results help us understand how the effects of active sensing and biomechanics will differentially shape the statistics of vestibular stimuli across species, and have important implications for sensory coding in other systems. It is widely believed that sensory systems are adapted to the statistical structure of natural stimuli, thereby optimizing coding. Recent evidence suggests that this is also the case for the vestibular system, which senses self-motion and in turn contributes to essential brain functions ranging from the most automatic reflexes to spatial perception and motor coordination. However, little is known about the statistics of self-motion stimuli actually experienced by freely moving animals in their natural environments. Accordingly, here we examined the natural self-motion signals experienced by mice and monkeys: two species commonly used to study vestibular neural coding. First, we found that probability distributions for all six dimensions of motion (three rotations, three translations) in both species deviated from normality due to long tails. 
Interestingly, the power spectra of natural rotational stimuli displayed similar structure for both species and were not well fitted by power laws. This result contrasts with reports that the natural spectra of other sensory modalities (i.e. visual, auditory and tactile) instead show a power-law relationship with frequency, which indicates scale invariance. Analysis of natural translational stimuli revealed important species differences, as power spectra deviated from scale invariance for monkeys but not for mice. By comparing our results to previously published data for humans, we found that the statistical structures of natural self-motion stimuli in monkeys and humans more closely resemble one another. Our results thus predict that, overall, neural coding strategies used by vestibular pathways to encode natural self-motion stimuli are fundamentally different in rodents and primates. © 2017 The Authors. The Journal of Physiology © 2017 The Physiological Society.
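A scale-invariant spectrum is a straight line on log-log axes, so the power-law fits mentioned above can be sketched as a linear regression of log-power on log-frequency, with the residuals indicating how well a power law holds. A minimal illustration with synthetic white noise standing in for recorded head motion (the function name and parameters are ours, not the study's):

```python
import numpy as np

def loglog_slope(x, fs=100.0):
    """Fit a power law S(f) ~ f**beta to the periodogram of x.
    Returns (beta, rms residual of the log-log fit); large residuals
    mean the spectrum is not well described by a power law."""
    n = len(x)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)[1:]          # drop the DC bin
    psd = np.abs(np.fft.rfft(x))[1:] ** 2 / (fs * n)    # one-sided periodogram
    logf, logS = np.log10(freqs), np.log10(psd)
    beta, intercept = np.polyfit(logf, logS, 1)
    resid = logS - (beta * logf + intercept)
    return beta, float(np.sqrt(np.mean(resid ** 2)))

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)
beta_w, rms_w = loglog_slope(white)   # white noise: beta near 0
```

A 1/f-type signal would instead yield beta near -1; the study's point is that natural rotational self-motion spectra are not well captured by any single beta.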
Affective and physiological correlates of the perception of unimodal and bimodal emotional stimuli.
Rosa, Pedro J; Oliveira, Jorge; Alghazzawi, Daniyal; Fardoun, Habib; Gamito, Pedro
2017-08-01
Despite the multisensory nature of perception, previous research on emotions has focused on unimodal emotional cues, mostly visual stimuli. To the best of our knowledge, there is no evidence on the extent to which incongruent emotional cues from visual and auditory sensory channels affect pupil size. Our aims were to investigate the effects of audiovisual emotional perception on physiological and affective responses, and to determine the impact of mismatched emotional cues on these physiological indexes. Pupil size, electrodermal activity and subjective affective responses were recorded while 30 participants were exposed to visual and auditory stimuli with varied emotional content in three different experimental conditions: pictures and sounds presented alone (unimodal), emotionally matched audio-visual stimuli (bimodal congruent) and emotionally mismatched audio-visual stimuli (bimodal incongruent). The data revealed no effect of emotional incongruence on physiological and affective responses. On the other hand, pupil size covaried with the skin conductance response (SCR), but the subjective experience was partially dissociated from autonomic responses. Emotional stimuli are thus able to trigger physiological responses regardless of valence, sensory modality or level of emotional congruence.
Visual and proprioceptive interaction in patients with bilateral vestibular loss☆
Cutfield, Nicholas J.; Scott, Gregory; Waldman, Adam D.; Sharp, David J.; Bronstein, Adolfo M.
2014-01-01
Following bilateral vestibular loss (BVL), patients gradually adapt to the loss of vestibular input and rely more on other sensory inputs. Here we examine changes in the way proprioceptive and visual inputs interact. We used functional magnetic resonance imaging (fMRI) to investigate visual responses in the context of varying levels of proprioceptive input in 12 BVL subjects and 15 normal controls. A novel metal-free vibrator was developed to allow vibrotactile neck proprioceptive input to be delivered in the MRI system. A high-level (100 Hz) stimulus and a low-level (30 Hz) control stimulus were applied over the left splenius capitis; only the high-frequency stimulus generates a significant proprioceptive input. The neck stimulus was applied in combination with static and moving (optokinetic) visual stimuli, in a factorial fMRI experimental design. We found that high-level neck proprioceptive input had a greater effect on cortical activity in the BVL patients. This included a reduction in visual motion responses during high levels of proprioceptive input and differential activation in the midline cerebellum. In early visual cortical areas, the effect of high proprioceptive input was present for both visual conditions, but in lateral visual areas, including V5/MT, the effect was only seen in the context of visual motion stimulation. The finding of a cortical visuo-proprioceptive interaction in BVL patients is consistent with behavioural data indicating that, in BVL patients, neck afferents partly replace vestibular input during the CNS-mediated compensatory process. An fMRI cervico-visual interaction may thus substitute for the known visuo-vestibular interaction reported in normal subject fMRI studies. The results provide evidence for a cortical mechanism of adaptation to vestibular failure, in the form of an enhanced proprioceptive influence on visual processing. 
The results may provide the basis for a cortical mechanism involved in proprioceptive substitution of vestibular function in BVL patients. PMID:25061564
Enduring stereoscopic motion aftereffects induced by prolonged adaptation.
Bowd, C; Rose, D; Phinney, R E; Patterson, R
1996-11-01
This study investigated the effects of prolonged adaptation on the recovery of the stereoscopic motion aftereffect (adaptation induced by moving binocular disparity information). The adapting and test stimuli were stereoscopic grating patterns created from disparity, embedded in dynamic random-dot stereograms. Motion aftereffects induced by luminance stimuli were included in the study for comparison. Adaptation duration was either 1, 2, 4, 8, 16, 32 or 64 min, and the duration of the ensuing aftereffect was the variable of interest. The results showed that aftereffect duration was proportional to the square root of adaptation duration for both stereoscopic and luminance stimuli; on log-log axes, the relation between aftereffect duration and adaptation duration was a power law with a slope near 0.5 in both cases. For both kinds of stimuli, there was no sign of adaptation saturation even at the longest adaptation duration.
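The square-root relation reported above is a power law D = k·T^b with b = 0.5, which appears as a straight line of slope 0.5 on log-log axes. A small sketch (the scale factor k is arbitrary, chosen only for illustration):

```python
import numpy as np

# Hypothetical aftereffect durations following D = k * sqrt(T)
adapt_min = np.array([1, 2, 4, 8, 16, 32, 64.0])  # adaptation durations (min), as in the study
k = 5.0                                           # assumed scale factor (s per sqrt(min))
aftereffect_s = k * np.sqrt(adapt_min)

# On log-log axes a power law D = k * T**b is a line with slope b
slope, log_k = np.polyfit(np.log10(adapt_min), np.log10(aftereffect_s), 1)
# slope recovers b = 0.5, the square-root exponent
```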
Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches
NASA Technical Reports Server (NTRS)
Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.
2005-01-01
While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm", and the "optimal algorithm", which incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm", is introduced that combines features from both approaches. This algorithm is formulated by optimal control and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary piloted testing led to the optimal algorithm incorporating a new otolith model, which produced improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real-time requirement without degrading the quality of the motion cues.
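The optimal-control formulation above hinges on updating the matrix Riccati equation in real time. As a hedged illustration of the kind of computation involved (a generic discrete-time LQR recursion on a toy double-integrator plant, not the study's actual vehicle or perception model; the weights are assumptions):

```python
import numpy as np

def riccati_step(P, A, B, Q, R):
    """One backward step of the discrete-time Riccati recursion
    P <- Q + A'PA - A'PB (R + B'PB)^-1 B'PA, returning (P, gain K)."""
    S = R + B.T @ P @ B
    K = np.linalg.solve(S, B.T @ P @ A)                       # feedback gain
    P_new = Q + A.T @ (P - P @ B @ np.linalg.solve(S, B.T @ P)) @ A
    return P_new, K

# Toy double-integrator "platform" state (position, velocity), dt = 0.01 s
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2)            # state penalty (assumed weights)
R = np.array([[1.0]])    # control penalty (assumed)
P = np.zeros((2, 2))
for _ in range(2000):    # iterate toward the steady-state solution
    P, K = riccati_step(P, A, B, Q, R)
```

The explicit recursion shown here is the standard offline computation; the study's neurocomputing approach is described as approximating this update cheaply enough to run in real time.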
NASA Astrophysics Data System (ADS)
Kim, Chang Soo; Ingato, Dominique; Wilder-Smith, Petra; Chen, Zhongping; Kwon, Young Jik
2018-01-01
A key design consideration in developing contrast agents is obtaining distinct, multiple signal changes in diseased tissue. Plasmonic gold nanoparticles (Au NPs) have been developed as contrast agents due to their strong surface plasmon resonance (SPR). This study aims to demonstrate that stimuli-responsive plasmonic Au nanoclusters (Au NCs) can be used as a contrast agent for optical coherence tomography (OCT) in detecting early-stage cancer. Au NPs were clustered via acid-cleavable linkers to synthesize Au NCs that disassemble under mildly acidic conditions into individual Au NPs, simultaneously diminishing the SPR effect (quantified by scattering intensity) and increasing Brownian motion (quantified by Doppler variance). The acid-triggered morphological and accompanying optico-physical property changes of the acid-disassembling Au NCs were confirmed by TEM, DLS, UV/Vis, and OCT. Stimuli-responsive Au NCs were applied in a hamster cheek pouch model carrying early-stage squamous carcinoma tissue. The tissue was visualized by OCT imaging, which showed reduced scattering intensity and increased Doppler variance in the dysplastic tissue. This study demonstrates the promise of diagnosing early-stage cancer using molecularly programmable, inorganic nanomaterial-based contrast agents that are capable of generating multiple, stimuli-triggered diagnostic signals in early-stage cancer.
Chen, Ming; Wu, Si; Lu, Haidong D.; Roe, Anna W.
2013-01-01
Interpreting population responses in the primary visual cortex (V1) remains a challenge, especially with the advent of techniques measuring activations of large cortical areas simultaneously with high precision. For successful interpretation, a quantitatively precise model prediction is of great importance. In this study, we investigate how accurately a spatiotemporal filter (STF) model predicts average response profiles to coherently drifting random dot motion obtained by optical imaging of intrinsic signals in V1 of anesthetized macaques. We establish that orientation difference maps, obtained by subtracting responses to orthogonal axes of motion, invert with increasing drift speeds, consistent with the motion streak effect. Consistent with perception, the speed at which the map inverts (the critical speed) depends on cortical eccentricity and systematically increases from foveal to parafoveal. We report that critical speeds and response maps to drifting motion are excellently reproduced by the STF model. Our study thus suggests that the STF model is quantitatively accurate enough to be used as a first model of choice for interpreting responses obtained with intrinsic imaging methods in V1. We show further that this good quantitative correspondence opens the possibility to infer otherwise not easily accessible population receptive field properties from responses to complex stimuli, such as drifting random dot motions. PMID:23197457
Zhu, Lin L; Beauchamp, Michael S
2017-03-08
Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS. SIGNIFICANCE STATEMENT Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. 
Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of the pSTS. Different pSTS regions prefer visually presented faces containing either a moving mouth or moving eyes, but only mouth-preferring regions respond strongly to voices. Copyright © 2017 the authors 0270-6474/17/372697-12$15.00/0.
Visual memories for perceived length are well preserved in older adults.
Norman, J Farley; Holmin, Jessica S; Bartholomew, Ashley N
2011-09-15
Three experiments compared younger (mean age 23.7 years) and older (mean age 72.1 years) observers' ability to visually discriminate line length using both explicit and implicit standard stimuli. In Experiment 1, the method of constant stimuli (with an explicit standard) was used to determine difference thresholds, whereas the method of single stimuli (where the knowledge of the standard length was only implicit and learned from previous test stimuli) was used in Experiments 2 and 3. The study evaluated whether increases in age affect older observers' ability to learn, retain, and utilize effective implicit visual standards. Overall, the observers' length difference thresholds were 5.85% of the standard when the method of constant stimuli was used and improved to 4.39% of the standard for the method of single stimuli (a decrease of 25%). Both age groups performed similarly in all conditions. The results demonstrate that older observers retain the ability to create, remember, and utilize effective implicit standards from a series of visual stimuli. Copyright © 2011 Elsevier Ltd. All rights reserved.
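Difference thresholds like the ~4-6% values above are typically read off a psychometric function. A minimal sketch that interpolates the 25% and 75% points of hypothetical "longer" judgments around a 100 mm standard (all numbers are invented for illustration, not taken from the study):

```python
import numpy as np

def difference_threshold(levels, p_longer, standard):
    """Difference threshold (as % of the standard): half the distance
    between the 25% and 75% points of the psychometric function,
    estimated by linear interpolation (p_longer must be increasing)."""
    x25 = np.interp(0.25, p_longer, levels)
    x75 = np.interp(0.75, p_longer, levels)
    return 100.0 * 0.5 * (x75 - x25) / standard

# Hypothetical comparison lengths (mm) and proportion "longer" responses
levels = np.array([90, 94, 98, 102, 106, 110.0])
p_longer = np.array([0.02, 0.16, 0.40, 0.60, 0.84, 0.98])
thr = difference_threshold(levels, p_longer, standard=100.0)
# thr comes out at 4.5% of the standard for these invented data
```

A fitted cumulative Gaussian would be the more usual estimator; interpolation keeps the sketch dependency-free while giving a value in the range the study reports.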
A visual horizon affects steering responses during flight in fruit flies.
Caballero, Jorge; Mazo, Chantell; Rodriguez-Pinto, Ivan; Theobald, Jamie C
2015-09-01
To navigate well through three-dimensional environments, animals must in some way gauge the distances to objects and features around them. Humans use a variety of visual cues to do this, but insects, with their small size and rigid eyes, are constrained to a more limited range of possible depth cues. For example, insects attend to relative image motion when they move, but cannot change the optical power of their eyes to estimate distance. On clear days, the horizon is one of the most salient visual features in nature, offering clues about orientation, altitude and, for humans, distance to objects. We set out to determine whether flying fruit flies treat moving features as farther off when they are near the horizon. Tethered flies respond strongly to moving images they perceive as close. We measured the strength of steering responses while independently varying the elevation of moving stimuli and the elevation of a virtual horizon. We found responses to vertical bars are increased by negative elevations of their bases relative to the horizon, closely correlated with the inverse of apparent distance. In other words, a bar that dips far below the horizon elicits a strong response, consistent with using the horizon as a depth cue. Wide-field motion also had an enhanced effect below the horizon, but this was only prevalent when flies were additionally motivated with hunger. These responses may help flies tune behaviors to nearby objects and features when they are too far off for motion parallax. © 2015. Published by The Company of Biologists Ltd.
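The inverse relation between response strength and apparent distance described above follows from simple ground-plane geometry. A minimal sketch, assuming a flat ground plane and a fixed eye height (an illustrative assumption, not the authors' model): a feature seen at angle θ below the horizon lies at distance d = h/tan(θ), so its apparent nearness (1/d) grows as it dips further below the horizon.

```python
import math

# Hedged sketch of ground-plane geometry (an assumption for illustration,
# not the paper's model): a feature seen `depression` radians below the
# horizon, from eye height `height`, lies at distance height/tan(depression).
# Apparent nearness (1/distance) therefore grows with depression angle.
def apparent_nearness(depression, height=1.0):
    if depression <= 0.0:
        return 0.0  # at or above the horizon: effectively infinitely far
    return math.tan(depression) / height

# A bar whose base dips 45 degrees below the horizon sits at a distance
# equal to eye height; a shallower dip implies a farther, weaker stimulus.
```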
Spatial Alignment and Response Hand in Geometric and Motion Illusions
Scocchia, Lisa; Paroli, Michela; Stucchi, Natale A.; Sedda, Anna
2017-01-01
Perception of visual illusions is susceptible to manipulation of their spatial properties. Further, illusions can sometimes affect visually guided actions, especially the movement planning phase. Remarkably, visual properties of objects related to actions, such as affordances, can prime more accurate perceptual judgements. In spite of the amount of knowledge available on affordances and on the influence of illusions on actions (or lack thereof), virtually nothing is known about the reverse: the influence of action-related parameters on the perception of visual illusions. Here, we tested the hypothesis that the response mode (which can be linked to action-relevant features) can affect perception of the Poggendorff (geometric) and of the Vanishing Point (motion) illusion. We explored the role of hand dominance (right dominant versus left non-dominant hand) and its interaction with stimulus spatial alignment (i.e., congruency between visual stimulus and the hand used for responses). Seventeen right-handed participants performed our tasks with their right and left hands, and the stimuli were presented in regular and mirror-reversed views. We found that the regular version of the Poggendorff display generates a stronger illusion than the mirror version, and that participants are less accurate and more variable when they use their left hand in responding to the Vanishing Point. In summary, our results show a marginal effect of hand precision in motion-related illusions, which is absent for geometrical illusions. In the latter, attentional anisometry seems to play a greater role in generating the illusory effect. Taken together, our findings suggest that changes in the response mode (here: manual action-related parameters) do not necessarily affect illusion perception.
Therefore, although one would intuitively expect at least unidirectional effects of perception on action, and possibly interactions between the two systems, this simple study suggests their relative independence, except when the less skilled (non-dominant) hand, and arguably more deliberate responses, are used. PMID:28769830
Interaction between visual and chemical cues in a Liolaemus lizard: a multimodal approach.
Vicente, Natalin S; Halloy, Monique
2017-12-01
Multimodal communication involves the use of signals and cues across two or more sensory modalities. The genus Liolaemus (Iguania: Liolaemidae) offers a great potential for studies on the ecology and evolution of multimodal communication, including visual and chemical signals. In this study, we analyzed the response of male and female Liolaemus pacha to chemical, visual and combined (multimodal) stimuli. Using cue-isolation tests, we registered the number of tongue flicks and headbob displays from exposure to signals in each modality. Number of tongue flicks was greater when a chemical stimulus was presented alone than in the presence of visual or multimodal stimuli. In contrast, headbob displays were fewer in number with visual and chemical stimuli alone, but significantly higher in number when combined. Female signallers triggered significantly more tongue flicks than male signallers, suggesting that chemical cues are involved in sexual recognition. We did not find an inhibition between chemical and visual cues. On the contrary, we observed a dominance of the chemical modality, because when presented with visual stimuli, lizards also responded with more tongue flicks than headbob displays. The total response produced by multimodal stimuli was similar to that of the chemical stimuli alone, possibly suggesting non-redundancy. We discuss whether the visual component of a multimodal signal could attract attention at a distance, increasing the effectiveness of transmission and reception of the information in chemical cues. Copyright © 2017 Elsevier GmbH. All rights reserved.
Filbrich, Lieve; Alamia, Andrea; Burns, Soline; Legrain, Valéry
2017-07-01
Despite their high relevance for defending the integrity of the body, crossmodal links between nociception, the neural system specifically coding potentially painful information, and vision are still poorly studied, especially the effects of nociception on visual perception. This study investigated if, and in which time window, a nociceptive stimulus can attract attention to its location on the body, independently of voluntary control, to facilitate the processing of visual stimuli occurring in the same side of space as the limb on which the nociceptive stimulus was applied. In a temporal order judgment task based on an adaptive procedure, participants judged which of two visual stimuli, one presented next to either hand in either side of space, had been perceived first. Each pair of visual stimuli was preceded (by 200, 400, or 600 ms) by a nociceptive stimulus applied either unilaterally, on one single hand, or bilaterally, on both hands simultaneously. Results show that, as compared to the bilateral condition, participants' judgments were biased to the advantage of the visual stimuli that occurred in the same side of space as the hand on which a unilateral nociceptive stimulus was applied. This effect was present in a time window ranging from 200 to 600 ms, but importantly, biases increased with decreasing time interval. These results suggest that nociceptive stimuli can affect the perceptual processing of spatially congruent visual inputs.
Duration estimates within a modality are integrated sub-optimally
Cai, Ming Bo; Eagleman, David M.
2015-01-01
Perceived duration can be influenced by various properties of sensory stimuli. For example, visual stimuli of higher temporal frequency are perceived to last longer than those of lower temporal frequency. How does the brain form a representation of duration when each of two simultaneously presented stimuli influences perceived duration in a different way? To answer this question, we investigated the perceived duration of a pair of dynamic visual stimuli of different temporal frequencies in comparison to that of a single visual stimulus of either low or high temporal frequency. We found that the duration representation of simultaneously occurring visual stimuli is best described by weighting the estimates of duration based on each individual stimulus. However, the weighting performance deviates from the prediction of statistically optimal integration. In addition, we provide a Bayesian account to explain a difference in the apparent sensitivity of the psychometric curves introduced by the order in which the two stimuli are displayed in a two-alternative forced-choice task. PMID:26321965
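The "statistically optimal integration" benchmark referred to above is the standard maximum-likelihood cue-combination rule, in which each estimate is weighted by its reliability (inverse variance). A minimal sketch of that benchmark (the general rule, not the authors' code, with hypothetical example values):

```python
# Hedged sketch of the standard maximum-likelihood integration benchmark
# (not the authors' code): each duration estimate is weighted by its
# reliability (inverse variance); the combined estimate is never less
# reliable than the better single estimate.
def optimal_integration(mu1, sigma1, mu2, sigma2):
    r1, r2 = 1.0 / sigma1**2, 1.0 / sigma2**2   # reliabilities
    w1 = r1 / (r1 + r2)                          # reliability-based weight
    mu = w1 * mu1 + (1.0 - w1) * mu2
    sigma = (sigma1**2 * sigma2**2 / (sigma1**2 + sigma2**2)) ** 0.5
    return mu, sigma

# Hypothetical example: a duration estimate of 1.0 s (sd 0.1) combined with
# one of 1.2 s (sd 0.2) is pulled toward the more reliable estimate, with
# a combined sd below either single-cue sd.
mu, sigma = optimal_integration(1.0, 0.1, 1.2, 0.2)
```

The paper's finding is that observed weighting deviates from this prediction, i.e., the empirical weights and combined variability do not match the values this rule would produce.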
Yang, Jinfang; Wang, Qian; He, Fenfen; Ding, Yanxia; Sun, Qingyan; Hua, Tianmiao; Xi, Minmin
2016-01-01
Previous studies have reported inconsistent effects of dietary restriction (DR) on cortical inhibition. To clarify this issue, we examined the response properties of neurons in the primary visual cortex (V1) of DR and control groups of cats using in vivo extracellular single-unit recording techniques, and assessed the synthesis of inhibitory neurotransmitter GABA in the V1 of cats from both groups using immunohistochemical and Western blot techniques. Our results showed that the response of V1 neurons to visual stimuli was significantly modified by DR, as indicated by an enhanced selectivity for stimulus orientations and motion directions, decreased visually-evoked response, lowered spontaneous activity and increased signal-to-noise ratio in DR cats relative to control cats. Further, it was shown that, accompanied with these changes of neuronal responsiveness, GABA immunoreactivity and the expression of a key GABA-synthesizing enzyme GAD67 in the V1 were significantly increased by DR. These results demonstrate that DR may retard brain aging by increasing the intracortical inhibition effect and improve the function of visual cortical neurons in visual information processing. This DR-induced elevation of cortical inhibition may favor the brain in modulating energy expenditure based on food availability. PMID:26863207
Qian, Ning; Dayan, Peter
2013-01-01
A wealth of studies has found that adapting to second-order visual stimuli has little effect on the perception of first-order stimuli. This is physiologically and psychologically troubling, since many cells show similar tuning to both classes of stimuli, and since adapting to first-order stimuli leads to aftereffects that do generalize to second-order stimuli. Focusing on high-level visual stimuli, we recently proposed the novel explanation that the lack of transfer arises partially from the characteristically different backgrounds of the two stimulus classes. Here, we consider the effect of stimulus backgrounds in the far more prevalent, lower-level, case of the orientation tilt aftereffect. Using a variety of first- and second-order oriented stimuli, we show that we could increase or decrease both within- and cross-class adaptation aftereffects by increasing or decreasing the similarity of the otherwise apparently uninteresting or irrelevant backgrounds of adapting and test patterns. Our results suggest that similarity between background statistics of the adapting and test stimuli contributes to low-level visual adaptation, and that these backgrounds are thus not discarded by visual processing but provide contextual modulation of adaptation. Null cross-adaptation aftereffects must also be interpreted cautiously. These findings reduce the apparent inconsistency between psychophysical and neurophysiological data about first- and second-order stimuli. PMID:23732217
Marini, Francesco; Marzi, Carlo A.
2016-01-01
The visual system leverages organizational regularities of perceptual elements to create meaningful representations of the world. One clear example of such function, which has been formalized in the Gestalt psychology principles, is the perceptual grouping of simple visual elements (e.g., lines and arcs) into unitary objects (e.g., forms and shapes). The present study sought to characterize automatic attentional capture and related cognitive processing of Gestalt-like visual stimuli at the psychophysiological level by using event-related potentials (ERPs). We measured ERPs during a simple visual reaction time task with bilateral presentations of physically matched elements with or without a Gestalt organization. Results showed that Gestalt (vs. non-Gestalt) stimuli are characterized by a larger N2pc together with enhanced ERP amplitudes of non-lateralized components (N1, N2, P3) starting around 150 ms post-stimulus onset. Thus, we conclude that Gestalt stimuli capture attention automatically and entail characteristic psychophysiological signatures at both early and late processing stages. PMID:27630555
Barban, Francesco; Zannino, Gian Daniele; Macaluso, Emiliano; Caltagirone, Carlo; Carlesimo, Giovanni A
2013-06-01
Iconic memory is a high-capacity low-duration visual memory store that allows the persistence of a visual stimulus after its offset. The categorical nature of this store has been extensively debated. This study provides functional magnetic resonance imaging evidence for brain regions underlying the persistence of postcategorical representations of visual stimuli. In a partial report paradigm, subjects matched a cued row of a 3 × 3 array of letters (postcategorical stimuli) or false fonts (precategorical stimuli) with a subsequent triplet of stimuli. The cued row was indicated by two visual flankers presented at the onset (physical stimulus readout) or after the offset of the array (iconic memory readout). The left planum temporale showed a greater modulation of the source of readout (iconic memory vs. physical stimulus) when letters were presented compared to false fonts. This is a multimodal brain region responsible for matching incoming acoustic and visual patterns with acoustic pattern templates. These findings suggest that letters persist after their physical offset in an abstract postcategorical representation. A targeted region of interest analysis revealed a similar pattern of activation in the Visual Word Form Area. These results suggest that multiple higher-order visual areas mediate iconic memory for postcategorical stimuli. Copyright © 2012 Wiley Periodicals, Inc.
Fujisawa, Junya; Touyama, Hideaki; Hirose, Michitaka
2008-01-01
In this paper, we focus on alpha band modulation during visual spatial attention in the absence of visual stimuli. Visual spatial attention has been expected to provide a new channel for non-invasive independent brain-computer interfaces (BCI), but little work has been done on this interfacing method. The flickering stimuli used in previous work reduce independence and are difficult to use in practice. We therefore investigated whether visual spatial attention could be detected without such stimuli. Further, common spatial patterns (CSP) were applied for the first time to brain states during visual spatial attention. The performance evaluation was based on three brain states: attention to the left, right, and center directions. The 30-channel scalp electroencephalographic (EEG) signals over occipital cortex were recorded for five subjects. Without CSP, the analyses yielded an average classification accuracy of 66.44% (range 55.42 to 72.27%) in discriminating the left and right attention classes. With CSP, the average classification accuracy was 75.39% (range 63.75 to 86.13%). These results suggest that CSP is useful in the context of visual spatial attention, and that alpha band modulation during visual spatial attention without flickering stimuli offers the possibility of a new channel for independent BCI, alongside motor imagery.
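CSP, as used in the study above, finds spatial filters that maximize the variance of one class while minimizing it for the other, via a (generalized) eigendecomposition of the two class covariance matrices. A minimal NumPy sketch of the general technique (an illustration under assumed data shapes, not the authors' implementation):

```python
import numpy as np

# Hedged sketch of Common Spatial Patterns (the general technique, not the
# authors' implementation). Filters are eigenvectors of class A's covariance
# in the space whitened by the composite covariance; rows at the two ends of
# the eigenvalue spectrum are the most discriminative.
def csp_filters(trials_a, trials_b, n_filters=2):
    """trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]  # power-normalized
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    s, U = np.linalg.eigh(ca + cb)
    P = (U / np.sqrt(s)).T                  # whitening transform for ca + cb
    _, V = np.linalg.eigh(P @ ca @ P.T)     # diagonalize class A when whitened
    W = V.T @ P                             # filter rows, ascending eigenvalue
    half = n_filters // 2
    picks = list(range(half)) + list(range(W.shape[0] - (n_filters - half),
                                           W.shape[0]))
    return W[picks]

# Typical classifier features are the log-variances of the filtered trials,
# e.g. np.log(np.var(W @ trial, axis=1)) for each trial.
```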
Visual control of flight speed in Drosophila melanogaster.
Fry, Steven N; Rohrseitz, Nicola; Straw, Andrew D; Dickinson, Michael H
2009-04-01
Flight control in insects depends on self-induced image motion (optic flow), which the visual system must process to generate appropriate corrective steering maneuvers. Classic experiments in tethered insects applied rigorous system identification techniques for the analysis of turning reactions in the presence of rotating pattern stimuli delivered in open-loop. However, the functional relevance of these measurements for visual free-flight control remains equivocal due to the largely unknown effects of the highly constrained experimental conditions. To perform a systems analysis of the visual flight speed response under free-flight conditions, we implemented a 'one-parameter open-loop' paradigm using 'TrackFly' in a wind tunnel equipped with real-time tracking and virtual reality display technology. Upwind flying flies were stimulated with sine gratings of varying temporal and spatial frequencies, and the resulting flight speed responses were measured. To control flight speed, the visual system of the fruit fly extracts linear pattern velocity robustly over a broad range of spatio-temporal frequencies. The speed signal is used for a proportional control of flight speed within locomotor limits. The extraction of pattern velocity over a broad spatio-temporal frequency range may require more sophisticated motion processing mechanisms than those identified in flies so far. In Drosophila, the neuromotor pathways underlying flight speed control may be suitably explored by applying advanced genetic techniques, for which our data can serve as a baseline. Finally, the high-level control principles identified in the fly can be meaningfully transferred into a robotic context, such as for the robust and efficient control of autonomous flying micro air vehicles.
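The proportional control law identified above can be sketched as a simple closed-loop rule: adjust flight speed in proportion to the retinal slip of the visual pattern, saturating at locomotor limits. The gain, sign convention, and speed limits below are hypothetical placeholders, not values from the paper:

```python
# Hedged sketch of proportional flight speed control (gain, sign convention,
# and speed limits are hypothetical placeholders, not values from the paper):
# speed is adjusted in proportion to retinal pattern slip and clipped at
# locomotor limits.
def update_speed(v, slip_velocity, gain=0.3, v_min=0.0, v_max=1.0):
    # positive slip (pattern moving in the flight direction) -> speed up
    v_new = v + gain * slip_velocity
    return max(v_min, min(v_max, v_new))
```

This kind of saturated proportional controller is also the form most readily transferred to the micro-air-vehicle context the authors mention.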
Visual and Non-Visual Contributions to the Perception of Object Motion during Self-Motion
Fajen, Brett R.; Matthis, Jonathan S.
2013-01-01
Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component – that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (Experiment 1) and direction (Experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects. PMID:23408983
Zold, Camila L.
2015-01-01
The primary visual cortex (V1) is widely regarded as faithfully conveying the physical properties of visual stimuli. Thus, experience-induced changes in V1 are often interpreted as improving visual perception (i.e., perceptual learning). Here we describe how, with experience, cue-evoked oscillations emerge in V1 to convey expected reward time as well as to relate experienced reward rate. We show, in chronic multisite local field potential recordings from rat V1, that repeated presentation of visual cues induces the emergence of visually evoked oscillatory activity. Early in training, the visually evoked oscillations relate to the physical parameters of the stimuli. However, with training, the oscillations evolve to relate the time in which those stimuli foretell expected reward. Moreover, the oscillation prevalence reflects the reward rate recently experienced by the animal. Thus, training induces experience-dependent changes in V1 activity that relate to what those stimuli have come to signify behaviorally: when to expect future reward and at what rate. PMID:26134643
Visual processing of moving and static self body-parts.
Frassinetti, Francesca; Pavani, Francesco; Zamagni, Elisa; Fusaroli, Giulia; Vescovi, Massimo; Benassi, Mariagrazia; Avanzi, Stefano; Farnè, Alessandro
2009-07-01
Humans' ability to recognize static images of self body-parts can be lost following a lesion of the right hemisphere [Frassinetti, F., Maini, M., Romualdi, S., Galante, E., & Avanzi, S. (2008). Is it mine? Hemispheric asymmetries in corporeal self-recognition. Journal of Cognitive Neuroscience, 20, 1507-1516]. Here we investigated whether the visual information provided by the movement of self body-parts may be separately processed by right brain-damaged (RBD) patients and constitute a valuable cue to reduce their deficit in self body-parts processing. To pursue these aims, neurological healthy subjects and RBD patients were submitted to a matching-task of a pair of subsequent visual stimuli, in two conditions. In the dynamic condition, participants were shown movies of moving body-parts (hand, foot, arm and leg); in the static condition, participants were shown still images of the same body-parts. In each condition, on half of the trials at least one stimulus in the pair was from the participant's own body ('Self' condition), whereas on the remaining half of the trials both stimuli were from another person ('Other' condition). Results showed that in healthy participants the self-advantage was present when processing both static and dynamic body-parts, but it was more important in the latter condition. In RBD patients, however, the self-advantage was absent in the static, but present in the dynamic body-parts condition. These findings suggest that visual information from self body-parts in motion may be processed independently in patients with impaired static self-processing, thus pointing to a modular organization of the mechanisms responsible for the self/other distinction.
[Sound improves distinction of low intensities of light in the visual cortex of a rabbit].
Polianskiĭ, V B; Alymkulov, D E; Evtikhin, D V; Chernyshev, B V
2011-01-01
Electrodes were implanted into the cranium above the primary visual cortex of four rabbits (Oryctolagus cuniculus). At the first stage, visual evoked potentials (VEPs) were recorded in response to substitution of threshold visual stimuli (0.28 and 0.31 cd/m2). Then a sound (2000 Hz, 84 dB, duration 40 ms) was added simultaneously to every visual stimulus. Single sounds (without visual stimuli) did not produce a VEP response. It was found that the amplitude of VEP component N1 (85-110 ms) in response to complex stimuli (visual and sound) increased 1.6 times as compared to "simple" visual stimulation. At the second stage, paired substitutions of 8 different visual stimuli (range 0.38-20.2 cd/m2) by each other were performed. Sensory spaces of intensity were reconstructed on the basis of factor analysis. Sensory spaces of complexes were reconstructed in a similar way for simultaneous visual and sound stimulation. Comparison of vectors representing the stimuli in the spaces showed that the addition of a sound led to a 1.4-fold expansion of the space occupied by smaller intensities (0.28; 1.02; 3.05; 6.35 cd/m2). Also, the addition of the sound led to an arrangement of intensities in an ascending order. At the same time, the sound narrowed the space of larger intensities (8.48; 13.7; 16.8; 20.2 cd/m2) 1.33-fold. It is suggested that the addition of a sound improves the distinction of smaller intensities and impairs the distinction of larger intensities. Sensory spaces revealed by complex stimuli were two-dimensional. This fact can be a consequence of integration of sound and light into a unified complex at simultaneous stimulation.
Differential effect of visual motion adaption upon visual cortical excitability.
Lubeck, Astrid J A; Van Ombergen, Angelique; Ahmad, Hena; Bos, Jelte E; Wuyts, Floris L; Bronstein, Adolfo M; Arshad, Qadeer
2017-03-01
The objectives of this study were 1) to probe the effects of visual motion adaptation on early visual and V5/MT cortical excitability and 2) to investigate whether changes in cortical excitability following visual motion adaptation are related to the degree of visual dependency, i.e., an overreliance on visual cues compared with vestibular or proprioceptive cues. Participants were exposed to a roll motion visual stimulus before, during, and after visual motion adaptation. At these stages, 20 transcranial magnetic stimulation (TMS) pulses at phosphene threshold values were applied over early visual and V5/MT cortical areas, from which the probability of eliciting a phosphene was calculated. Before and after adaptation, participants aligned the subjective visual vertical in front of the roll motion stimulus as a marker of visual dependency. During adaptation, early visual cortex excitability decreased whereas V5/MT excitability increased. After adaptation, both early visual and V5/MT excitability were increased. The roll motion-induced tilt of the subjective visual vertical (visual dependence) was not influenced by visual motion adaptation and did not correlate with phosphene threshold or visual cortex excitability. We conclude that early visual and V5/MT cortical excitability is differentially affected by visual motion adaptation. Furthermore, excitability in the early or late visual cortex is not associated with an increase in visual reliance during spatial orientation. Our findings complement earlier studies that have probed visual cortical excitability following motion adaptation and highlight the differential role of the early visual cortex and V5/MT in visual motion processing. NEW & NOTEWORTHY We examined the influence of visual motion adaptation on visual cortex excitability and found a differential effect in V1/V2 compared with V5/MT.
Changes in visual excitability following motion adaptation were not related to the degree of an individual's visual dependency. Copyright © 2017 the American Physiological Society.
Purely temporal figure-ground segregation.
Kandil, F I; Fahle, M
2001-05-01
Visual figure-ground segregation is achieved by exploiting differences in features such as luminance, colour, motion or presentation time between a figure and its surround. Here we determine the shortest delay times required for figure-ground segregation based on purely temporal features. Previous studies usually employed stimulus onset asynchronies between figure and ground, which contain possible artefacts based on apparent motion cues or on luminance differences. Our stimuli systematically avoid these artefacts by constantly showing 20 x 20 'colons' that flip by 90 degrees around their midpoints at constant time intervals. Colons constituting the background flip in-phase whereas those constituting the target flip with a phase delay. We tested the impact of frequency modulation and phase reduction on target detection. Younger subjects performed well above chance even at temporal delays as short as 13 ms, whilst older subjects required up to three times longer delays in some conditions. Figure-ground segregation can rely on purely temporal delays down to around 10 ms even in the absence of luminance and motion artefacts, indicating a temporal precision of cortical information processing almost an order of magnitude lower than the one required for some models of feature binding in the visual cortex [e.g. Singer, W. (1999), Curr. Opin. Neurobiol., 9, 189-194]. Hence, in our experiment, observers are unable to use temporal stimulus features with the precision required for these models.
NASA Technical Reports Server (NTRS)
Park, Brian V. (Inventor)
1997-01-01
An immersive cyberspace system is presented which provides visual, audible, and vibrational inputs to a subject remaining in neutral immersion, and also provides for subject control input. The immersive cyberspace system includes a relaxation chair and a neutral immersion display hood. The relaxation chair supports a subject positioned thereupon, and places the subject in a position that merges a neutral body position, the position a body naturally assumes in zero gravity, with a savasana yoga position. The display hood, which covers the subject's head, is configured to produce light images and sounds. An image projection subsystem provides either external or internal image projection. The display hood includes a projection screen moveably attached to an opaque shroud. A motion base supports the relaxation chair and produces vibrational inputs over a range of about 0-30 Hz. The motion base also produces limited translational and rotational movements of the relaxation chair. These limited translational and rotational movements, when properly coordinated with visual stimuli, constitute motion cues which create sensations of pitch, yaw, and roll movements. Vibration transducers produce vibrational inputs from about 20 Hz to about 150 Hz. An external computer, coupled to various components of the immersive cyberspace system, executes a software program and creates the cyberspace environment. One or more neutral hand posture controllers may be coupled to the external computer system and used to control various aspects of the cyberspace environment, or to enter data during the cyberspace experience.
Large, I.; Bridge, H.; Ahmed, B.; Clare, S.; Kolasinski, J.; Lam, W. W.; Miller, K. L.; Dyrby, T. B.; Parker, A. J.; Smith, J. E. T.; Daubney, G.; Sallet, J.; Bell, A. H.; Krug, K.
2016-01-01
Extrastriate visual area V5/MT in primates is defined both structurally by myeloarchitecture and functionally by distinct responses to visual motion. Myelination is directly identifiable from postmortem histology but also indirectly by image contrast with structural magnetic resonance imaging (sMRI). First, we compared the identification of V5/MT using both sMRI and histology in Rhesus macaques. A section-by-section comparison of histological slices with in vivo and postmortem sMRI for the same block of cortical tissue showed precise correspondence in localizing heavy myelination for V5/MT and neighboring MST. Thus, sMRI in macaques accurately locates histologically defined myelin within areas known to be motion selective. Second, we investigated the functionally homologous human motion complex (hMT+) using high-resolution in vivo imaging. Humans showed considerable intersubject variability in hMT+ location, when defined with myelin-weighted sMRI signals to reveal structure. When comparing sMRI markers to functional MRI in response to moving stimuli, a region of high myelin signal was generally located within the hMT+ complex. However, there were considerable differences in the alignment of structural and functional markers between individuals. Our results suggest that variation in area identification for hMT+ based on structural and functional markers reflects individual differences in human regional brain architecture. PMID:27371764
Studies of the Interactions Between Vestibular Function and Tactual Orientation Display Systems
NASA Technical Reports Server (NTRS)
Cholewiak, Roger W.; Reschke, Millard F.
1997-01-01
When humans experience conditions in which internal vestibular cues to movement or spatial location are challenged or contradicted by external visual information, the result can be spatial disorientation, often leading to motion sickness. Spatial disorientation can occur in any situation in which the individual is passively moved in the environment, but is most common in automotive, aircraft, or undersea travel. Significantly, the incidence of motion sickness in space travel is high: The majority of individuals in Shuttle operations suffer from the syndrome. Even after the space-sickness-producing influences of spatial disorientation dissipate, usually within several days, there are other situations in which, because of the absence of reliable or familiar vestibular cues, individuals in space still experience disorientation, resulting in a reliance on the already preoccupied sense of vision. One possible technique to minimize the deleterious effects of spatial disorientation might be to present attitude information (including orientation, direction, and motion) through another less-used sensory modality - the sense of touch. Data from experiences with deaf and blind persons indicate that this channel can provide useful communication and mobility information on a real-time basis. More recently, technologies have been developed to present effective attitude information to pilots in situations in which dangerously ambiguous and conflicting visual and vestibular sensations occur. This summer's project at NASA-Johnson Space Center will evaluate the influence of motion-based spatial disorientation on the perception of tactual stimuli representing veridical position and orientation information, presented by new dynamic vibrotactile array display technologies.
In addition, the possibility will be explored that tactile presentations of motion and direction from this alternative modality might be useful in mitigating or alleviating spatial disorientation produced by multi-axis rotatory systems, monitored by physiological recording techniques developed at JSC.
Efficient spiking neural network model of pattern motion selectivity in visual cortex.
Beyeler, Michael; Richert, Micah; Dutt, Nikil D; Krichmar, Jeffrey L
2014-07-01
Simulating large-scale models of biological motion perception is challenging, due to the required memory to store the network structure and the computational power needed to quickly solve the neuronal dynamics. A low-cost yet high-performance approach to simulating large-scale neural network models in real-time is to leverage the parallel processing capability of graphics processing units (GPUs). Based on this approach, we present a two-stage model of visual area MT that we believe to be the first large-scale spiking network to demonstrate pattern direction selectivity. In this model, component-direction-selective (CDS) cells in MT linearly combine inputs from V1 cells that have spatiotemporal receptive fields according to the motion energy model of Simoncelli and Heeger. Pattern-direction-selective (PDS) cells in MT are constructed by pooling over MT CDS cells with a wide range of preferred directions. Responses of our model neurons are comparable to electrophysiological results for grating and plaid stimuli as well as speed tuning. The behavioral response of the network in a motion discrimination task is in agreement with psychophysical data. Moreover, our implementation outperforms a previous implementation of the motion energy model by orders of magnitude in terms of computational speed and memory usage. The full network, which comprises 153,216 neurons and approximately 40 million synapses, processes 20 frames per second of a 40 × 40 input video in real-time using a single off-the-shelf GPU. To promote the use of this algorithm among neuroscientists and computer vision researchers, the source code for the simulator, the network, and analysis scripts are publicly available.
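As a toy illustration of the two-stage scheme just described, the sketch below pools cosine-tuned component-direction-selective (CDS) responses into a pattern-direction-selective (PDS) response. It is a rate-based NumPy caricature under assumed tuning curves, not the authors' spiking GPU implementation:

```python
import numpy as np

def cds_responses(stimulus_dirs, preferred_dirs):
    """Component-direction-selective (CDS) responses: each cell responds
    to any grating component near its preferred direction (rectified
    cosine tuning). `stimulus_dirs` lists component directions in radians."""
    resp = np.zeros(len(preferred_dirs))
    for d in stimulus_dirs:
        resp += np.maximum(0.0, np.cos(preferred_dirs - d))
    return resp

def pds_response(stimulus_dirs, pattern_dir, preferred_dirs):
    """Pattern-direction-selective (PDS) response: pool CDS cells over a
    wide range of preferred directions, weighted by a broad cosine centred
    on the PDS cell's preferred pattern direction (excitation near it,
    inhibition far from it)."""
    weights = np.cos(preferred_dirs - pattern_dir)
    return max(0.0, float(np.dot(weights, cds_responses(stimulus_dirs, preferred_dirs))))

prefs = np.linspace(0, 2 * np.pi, 64, endpoint=False)
# A plaid made of two gratings drifting at +60 and -60 degrees moves
# rightward (0 rad) as a pattern:
plaid = [np.deg2rad(60.0), np.deg2rad(-60.0)]
tuning = [pds_response(plaid, p, prefs) for p in prefs]
best = prefs[int(np.argmax(tuning))]
```

With this pooling, the PDS tuning curve for the plaid peaks at the pattern direction (0 rad) rather than at either component direction, which is the signature of pattern direction selectivity.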
Exogenous attention influences visual short-term memory in infants.
Ross-Sheehy, Shannon; Oakes, Lisa M; Luck, Steven J
2011-05-01
Two experiments examined the hypothesis that developing visual attentional mechanisms influence infants' Visual Short-Term Memory (VSTM) in the context of multiple items. Five- and 10-month-old infants (N = 76) received a change detection task in which arrays of three differently colored squares appeared and disappeared. On each trial one square changed color and one square was cued; sometimes the cued item was the changing item, and sometimes the changing item was not the cued item. Ten-month-old infants exhibited enhanced memory for the cued item when the cue was a spatial pre-cue (Experiment 1) and 5-month-old infants exhibited enhanced memory for the cued item when the cue was relative motion (Experiment 2). These results demonstrate for the first time that infants younger than 6 months can encode information in VSTM about individual items in multiple-object arrays, and that attention-directing cues influence both perceptual and VSTM encoding of stimuli in infants as in adults.
How bodies and voices interact in early emotion perception.
Jessen, Sarah; Obleser, Jonas; Kotz, Sonja A
2012-01-01
Successful social communication draws strongly on the correct interpretation of others' body and vocal expressions. Both can provide emotional information and often occur simultaneously. Yet their interplay has hardly been studied. Using electroencephalography, we investigated the temporal development underlying their neural interaction in auditory and visual perception. In particular, we tested whether this interaction qualifies as true integration following multisensory integration principles such as inverse effectiveness. Emotional vocalizations were embedded in either low or high levels of noise and presented with or without video clips of matching emotional body expressions. In both high and low noise conditions, a reduction in auditory N100 amplitude was observed for audiovisual stimuli. However, only under high noise did the N100 peak earlier in the audiovisual than in the auditory condition, suggesting facilitatory effects as predicted by the inverse effectiveness principle. Similarly, we observed earlier N100 peaks in response to emotional compared to neutral audiovisual stimuli. This was not the case in the unimodal auditory condition. Furthermore, suppression of beta-band oscillations (15-25 Hz), primarily reflecting biological motion perception, was modulated 200-400 ms after the vocalization. While larger differences in suppression between audiovisual and audio stimuli in high compared to low noise levels were found for emotional stimuli, no such difference was observed for neutral stimuli. This observation is in accordance with the inverse effectiveness principle and suggests a modulation of integration by emotional content. Overall, results show that ecologically valid, complex stimuli such as combined body and vocal expressions are effectively integrated very early in processing.
Intercepting a sound without vision
Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica
2017-01-01
Visual information is extremely important to generate internal spatial representations. In the auditory modality, the absence of visual cues during early infancy does not preclude the development of some spatial strategies. However, specific spatial abilities might be impaired. In the current study, we investigated the effect of early visual deprivation on the ability to localize static and moving auditory stimuli by comparing sighted and early blind individuals’ performance in different spatial tasks. We also examined perceptual stability in the two groups of participants by matching localization accuracy in a static and a dynamic head condition that involved rotational head movements. Sighted participants accurately localized static and moving sounds. Their localization ability remained unchanged after rotational movements of the head. Conversely, blind participants showed a leftward bias when localizing static sounds and only a small bias for moving sounds. Moreover, head movements induced a significant bias in the direction of head motion during the localization of moving sounds. These results suggest that internal spatial representations might be body-centered in blind individuals and that in sighted people the availability of visual cues during early infancy may affect sensory-motor interactions. PMID:28481939
Relativistic compression and expansion of experiential time in the left and right space.
Vicario, Carmelo Mario; Pecoraro, Patrizia; Turriziani, Patrizia; Koch, Giacomo; Caltagirone, Carlo; Oliveri, Massimiliano
2008-03-05
Time, space and numbers are closely linked in the physical world. However, the relativistic-like effects of spatial and magnitude factors on time perception remain poorly investigated. Here we investigated whether duration judgments of digit visual stimuli are biased depending on the side of space where the stimuli are presented and on the magnitude of the stimulus itself. Different groups of healthy subjects performed duration judgment tasks on various types of visual stimuli. In the first two experiments, visual stimuli consisted of digit pairs (1 and 9), presented in the centre of the screen or in the right and left space. In a third experiment, visual stimuli consisted of black circles. The duration of the reference stimulus was fixed at 300 ms. Subjects had to indicate the relative duration of the test stimulus compared with the reference one. The main results showed that, regardless of digit magnitude, the duration of stimuli presented in the left hemispace is underestimated and that of stimuli presented in the right hemispace is overestimated. In the midline position, by contrast, duration judgments are affected by the numerical magnitude of the presented stimulus, with time underestimation of stimuli of low magnitude and time overestimation of stimuli of high magnitude. These results argue for the presence of close interactions between space, time and magnitude representation in the human brain.
Motion coherence affects human perception and pursuit similarly.
Beutter, B R; Stone, L S
2000-01-01
Pursuit and perception both require accurate information about the motion of objects. Recovering the motion of objects by integrating the motion of their components is a difficult visual task. Successful integration produces coherent global object motion, while a failure to integrate leaves the incoherent local motions of the components unlinked. We compared the ability of perception and pursuit to perform motion integration by measuring direction judgments and the concomitant eye-movement responses to line-figure parallelograms moving behind stationary rectangular apertures. The apertures were constructed such that only the line segments corresponding to the parallelogram's sides were visible; thus, recovering global motion required the integration of the local segment motion. We investigated several potential motion-integration rules by using stimuli with different object, vector-average, and line-segment terminator-motion directions. We used an oculometric decision rule to directly compare direction discrimination for pursuit and perception. For visible apertures, the percept was a coherent object, and both the pursuit and perceptual performance were close to the object-motion prediction. For invisible apertures, the percept was incoherently moving segments, and both the pursuit and perceptual performance were close to the terminator-motion prediction. Furthermore, both psychometric and oculometric direction thresholds were much higher for invisible apertures than for visible apertures. We constructed a model in which both perception and pursuit are driven by a shared motion-processing stage, with perception having an additional input from an independent static-processing stage. Model simulations were consistent with our perceptual and oculomotor data. Based on these results, we propose the use of pursuit as an objective and continuous measure of perceptual coherence. 
Our results support the view that pursuit and perception share a common motion-integration stage, perhaps within areas MT or MST.
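The competing predictions follow from the aperture problem: each visible segment yields only the velocity component normal to its own orientation, so the vector average of segment velocities generally points away from the true object direction. A minimal sketch with illustrative segment angles and object velocity (not the study's actual stimulus parameters):

```python
import numpy as np

def component_velocity(object_v, segment_angle):
    """Under the aperture problem, a moving line segment's locally
    measurable velocity is the projection of the true object velocity
    onto the segment's normal direction."""
    n = np.array([np.cos(segment_angle + np.pi / 2),
                  np.sin(segment_angle + np.pi / 2)])
    return np.dot(object_v, n) * n

object_v = np.array([1.0, 0.0])            # object translating rightward
segment_angles = np.deg2rad([20.0, 70.0])  # orientations of the two side pairs

local_velocities = [component_velocity(object_v, a) for a in segment_angles]
vector_average = np.mean(local_velocities, axis=0)

va_dir = np.degrees(np.arctan2(vector_average[1], vector_average[0]))
obj_dir = 0.0
```

For these angles the vector-average prediction deviates from the object-motion prediction by tens of degrees, which is what allows the two integration rules to be distinguished in both the psychometric and oculometric direction judgments.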
Inattentional blindness is influenced by exposure time not motion speed.
Kreitz, Carina; Furley, Philip; Memmert, Daniel
2016-01-01
Inattentional blindness is a striking phenomenon in which a salient object within the visual field goes unnoticed because it is unexpected, and attention is focused elsewhere. Several attributes of the unexpected object, such as size and animacy, have been shown to influence the probability of inattentional blindness. At present it is unclear whether or how the speed of a moving unexpected object influences inattentional blindness. We demonstrated that inattentional blindness rates are considerably lower if the unexpected object moves more slowly, suggesting that it is the mere exposure time of the object rather than a higher saliency potentially induced by higher speed that determines the likelihood of its detection. Alternative explanations could be ruled out: The effect is not based on a pop-out effect arising from different motion speeds in relation to the primary-task stimuli (Experiment 2), nor is it based on a higher saliency of slow-moving unexpected objects (Experiment 3).
Neural oscillatory deficits in schizophrenia predict behavioral and neurocognitive impairments
Martínez, Antígona; Gaspar, Pablo A.; Hillyard, Steven A.; Bickel, Stephan; Lakatos, Peter; Dias, Elisa C.; Javitt, Daniel C.
2015-01-01
Paying attention to visual stimuli is typically accompanied by event-related desynchronizations (ERD) of ongoing alpha (7–14 Hz) activity in visual cortex. The present study used time-frequency based analyses to investigate the role of impaired alpha ERD in visual processing deficits in schizophrenia (Sz). Subjects viewed sinusoidal gratings of high (HSF) and low (LSF) spatial frequency (SF) designed to test functioning of the parvo- vs. magnocellular pathways, respectively. Patients with Sz and healthy controls paid attention selectively to either the LSF or HSF gratings which were presented in random order. Event-related brain potentials (ERPs) were recorded to all stimuli. As in our previous study, it was found that Sz patients were selectively impaired at detecting LSF target stimuli and that ERP amplitudes to LSF stimuli were diminished, both for the early sensory-evoked components and for the attend minus unattend difference component (the Selection Negativity), which is generally regarded as a specific index of feature-selective attention. In the time-frequency domain, the differential ERP deficits to LSF stimuli were echoed in a virtually absent theta-band phase locked response to both unattended and attended LSF stimuli (along with relatively intact theta-band activity for HSF stimuli). In contrast to the theta-band evoked responses which were tightly stimulus locked, stimulus-induced desynchronizations of ongoing alpha activity were not tightly stimulus locked and were apparent only in induced power analyses. Sz patients were significantly impaired in the attention-related modulation of ongoing alpha activity for both HSF and LSF stimuli. These deficits correlated with patients’ behavioral deficits in visual information processing as well as with visually based neurocognitive deficits. These findings suggest an additional, pathway-independent, mechanism by which deficits in early visual processing contribute to overall cognitive impairment in Sz. 
PMID:26190988
Sight restoration after congenital blindness does not reinstate alpha oscillatory activity in humans
Bottari, Davide; Troje, Nikolaus F.; Ley, Pia; Hense, Marlene; Kekunnaya, Ramesh; Röder, Brigitte
2016-01-01
Functional brain development is characterized by sensitive periods during which experience must be available to allow for the full development of neural circuits and associated behavior. Yet, only few neural markers of sensitive period plasticity in humans are known. Here we employed electroencephalographic recordings in a unique sample of twelve humans who had been blind from birth and regained sight through cataract surgery between four months and 16 years of age. Two additional control groups were tested: a group of visually impaired individuals without a history of total congenital blindness and a group of typically sighted individuals. The EEG was recorded while participants performed a visual discrimination task involving intact and scrambled biological motion stimuli. Posterior alpha and theta oscillations were evaluated. The three groups showed indistinguishable behavioral performance and in all groups evoked theta activity varied with biological motion processing. By contrast, alpha oscillatory activity was significantly reduced only in individuals with a history of congenital cataracts. These data document on the one hand brain mechanisms of functional recovery (related to theta oscillations) and on the other hand, for the first time, a sensitive period for the development of alpha oscillatory activity in humans. PMID:27080158
Motion-induced blindness and microsaccades: cause and effect
Bonneh, Yoram S.; Donner, Tobias H.; Sagi, Dov; Fried, Moshe; Cooperman, Alexander; Heeger, David J; Arieli, Amos
2010-01-01
It has been suggested that subjective disappearance of visual stimuli results from a spontaneous reduction of microsaccade rate causing image stabilization, enhanced adaptation and a consequent fading. In motion-induced-blindness (MIB) salient visual targets disappear intermittently when surrounded by a moving pattern. We investigated whether changes in microsaccade rate can account for MIB. We first determined that the moving mask does not affect microsaccade metrics (rate, magnitude, and temporal distribution). We then compared the dynamics of microsaccades during reported illusory disappearance (MIB) and physical disappearance (Replay) of a salient peripheral target. We found large modulations of microsaccade rate following perceptual transitions, whether illusory (MIB) or real (Replay). For MIB, the rate also decreased prior to disappearance and increased prior to reappearance. Importantly, MIB persisted in the presence of microsaccades although sustained microsaccade rate was lower during invisible than visible periods. These results suggest that the microsaccade system reacts to changes in visibility, but microsaccades also modulate MIB. The latter modulation is well described by a Poisson model of the perceptual transitions assuming that the probability for reappearance and disappearance is modulated following a microsaccade. Our results show that microsaccades counteract disappearance, but are neither necessary nor sufficient to account for MIB. PMID:21172899
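The Poisson account described in the closing sentences can be caricatured as a two-state (visible/invisible) simulation in which transition hazards are transiently rescaled after each microsaccade. All rates and gains below are invented for illustration, not fitted values from the reported data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Baseline hazard rates (per second) for illusory disappearance and
# reappearance, and multiplicative gains applied briefly after each
# microsaccade; microsaccades favor reappearance.
R_DISAPPEAR, R_REAPPEAR = 0.5, 1.0
GAIN_DISAPPEAR, GAIN_REAPPEAR = 0.3, 3.0
MS_RATE, MS_WINDOW = 1.5, 0.3  # microsaccade rate (Hz), effect window (s)

def simulate(duration=600.0, dt=0.005):
    """Two-state Poisson process for target visibility; transition
    probabilities are modulated for MS_WINDOW seconds after each
    (Poisson-generated) microsaccade. Returns fraction of time invisible."""
    t, visible, last_ms = 0.0, True, -np.inf
    invisible_time = 0.0
    while t < duration:
        if rng.random() < MS_RATE * dt:
            last_ms = t
        recent = (t - last_ms) < MS_WINDOW
        rate = R_DISAPPEAR if visible else R_REAPPEAR
        gain = (GAIN_DISAPPEAR if visible else GAIN_REAPPEAR) if recent else 1.0
        if rng.random() < rate * gain * dt:
            visible = not visible
        if not visible:
            invisible_time += dt
        t += dt
    return invisible_time / duration

frac_invisible = simulate()
```

Because reappearance is boosted and disappearance suppressed in the window after a microsaccade, simulated invisible periods shorten but never vanish, mirroring the finding that microsaccades counteract disappearance without being necessary or sufficient for MIB.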
Examining the cognitive demands of analogy instructions compared to explicit instructions.
Tse, Choi Yeung Andy; Wong, Andus; Whitehill, Tara; Ma, Estella; Masters, Rich
2016-10-01
In many learning domains, instructions are presented explicitly despite high cognitive demands associated with their processing. This study examined cognitive demands imposed on working memory by different types of instruction to speak with maximum pitch variation: visual analogy, verbal analogy and explicit verbal instruction. Forty participants were asked to memorise a set of 16 visual and verbal stimuli while reading aloud a Cantonese paragraph with maximum pitch variation. Instructions about how to achieve maximum pitch variation were presented via visual analogy, verbal analogy, explicit rules or no instruction. Pitch variation was assessed off-line, using standard deviation of fundamental frequency. Immediately after reading, participants recalled as many stimuli as possible. Analogy instructions resulted in significantly increased pitch variation compared to explicit instructions or no instructions. Explicit instructions resulted in poorest recall of stimuli. Visual analogy instructions resulted in significantly poorer recall of visual stimuli than verbal stimuli. The findings suggest that non-propositional instructions presented via analogy may be less cognitively demanding than instructions that are presented explicitly. Processing analogy instructions that are presented as a visual representation is likely to load primarily visuospatial components of working memory rather than phonological components. The findings are discussed with reference to speech therapy and human cognition.
Spatial Scaling of the Profile of Selective Attention in the Visual Field.
Gannon, Matthew A; Knapp, Ashley A; Adams, Thomas G; Long, Stephanie M; Parks, Nathan A
2016-01-01
Neural mechanisms of selective attention must be capable of adapting to variation in the absolute size of an attended stimulus in the ever-changing visual environment. To date, little is known regarding how attentional selection interacts with fluctuations in the spatial expanse of an attended object. Here, we use event-related potentials (ERPs) to investigate the scaling of attentional enhancement and suppression across the visual field. We measured ERPs while participants performed a task at fixation that varied in its attentional demands (attentional load) and visual angle (1.0° or 2.5°). Observers were presented with a stream of task-relevant stimuli while foveal, parafoveal, and peripheral visual locations were probed by irrelevant distractor stimuli. We found two important effects in the N1 component of visual ERPs. First, N1 modulations to task-relevant stimuli indexed attentional selection of stimuli during the load task and further correlated with task performance. Second, with increased task size, attentional modulation of the N1 to distractor stimuli showed a differential pattern that was consistent with a scaling of attentional selection. Together, these results demonstrate that the size of an attended stimulus scales the profile of attentional selection across the visual field and provide insights into the attentional mechanisms associated with such spatial scaling.
Visual field asymmetries in visual evoked responses
Hagler, Donald J.
2014-01-01
Behavioral responses to visual stimuli exhibit visual field asymmetries, but cortical folding and the close proximity of visual cortical areas make electrophysiological comparisons between different stimulus locations problematic. Retinotopy-constrained source estimation (RCSE) uses distributed dipole models simultaneously constrained by multiple stimulus locations to provide separation between individual visual areas that is not possible with conventional source estimation methods. Magnetoencephalography and RCSE were used to estimate time courses of activity in V1, V2, V3, and V3A. Responses to left and right hemifield stimuli were not significantly different. Peak latencies for peripheral stimuli were significantly shorter than those for perifoveal stimuli in V1, V2, and V3A, likely related to the greater proportion of magnocellular input to V1 in the periphery. Consistent with previous results, sensor magnitudes for lower field stimuli were about twice as large as for upper field, which is only partially explained by the proximity to sensors for lower field cortical sources in V1, V2, and V3. V3A exhibited both latency and amplitude differences for upper and lower field responses. There were no differences for V3, consistent with previous suggestions that dorsal and ventral V3 are two halves of a single visual area, rather than distinct areas V3 and VP. PMID:25527151
Correa-Jaraba, Kenia S.; Cid-Fernández, Susana; Lindín, Mónica; Díaz, Fernando
2016-01-01
The main aim of this study was to examine the effects of aging on event-related brain potentials (ERPs) associated with the automatic detection of unattended infrequent deviant and novel auditory stimuli (Mismatch Negativity, MMN) and with the orienting to these stimuli (P3a component), as well as the effects on ERPs associated with reorienting to relevant visual stimuli (Reorienting Negativity, RON). Participants were divided into three age groups: (1) Young: 21–29 years old; (2) Middle-aged: 51–64 years old; and (3) Old: 65–84 years old. They performed an auditory-visual distraction-attention task in which they were asked to attend to visual stimuli (Go, NoGo) and to ignore auditory stimuli (S: standard, D: deviant, N: novel). Reaction times (RTs) to Go visual stimuli were longer in old and middle-aged than in young participants. In addition, in all three age groups, longer RTs were found when Go visual stimuli were preceded by novel relative to deviant and standard auditory stimuli, indicating a distraction effect provoked by novel stimuli. ERP components were identified in the Novel minus Standard (N-S) and Deviant minus Standard (D-S) difference waveforms. In the N-S condition, MMN latency was significantly longer in middle-aged and old participants than in young participants, indicating a slowing of automatic detection of changes. 
The following results were observed in both difference waveforms: (1) the P3a component comprised two consecutive phases in all three age groups—an early-P3a (e-P3a) that may reflect the orienting response toward the irrelevant stimulation and a late-P3a (l-P3a) that may be a correlate of subsequent evaluation of the infrequent unexpected novel or deviant stimuli; (2) the e-P3a, l-P3a, and RON latencies were significantly longer in the Middle-aged and Old groups than in the Young group, indicating delay in the orienting response to and the subsequent evaluation of unattended auditory stimuli, and in the reorienting of attention to relevant (Go) visual stimuli, respectively; and (3) a significantly smaller e-P3a amplitude in Middle-aged and Old groups, indicating a deficit in the orienting response to irrelevant novel and deviant auditory stimuli. PMID:27065004
Endogenous Sequential Cortical Activity Evoked by Visual Stimuli
Miller, Jae-eun Kang; Hamm, Jordan P.; Jackson, Jesse; Yuste, Rafael
2015-01-01
Although the functional properties of individual neurons in primary visual cortex have been studied intensely, little is known about how neuronal groups could encode changing visual stimuli using temporal activity patterns. To explore this, we used in vivo two-photon calcium imaging to record the activity of neuronal populations in primary visual cortex of awake mice in the presence and absence of visual stimulation. Multidimensional analysis of the network activity allowed us to identify neuronal ensembles defined as groups of cells firing in synchrony. These synchronous groups of neurons were themselves activated in sequential temporal patterns, which repeated at much higher proportions than chance and were triggered by specific visual stimuli such as natural visual scenes. Interestingly, sequential patterns were also present in recordings of spontaneous activity without any sensory stimulation and were accompanied by precise firing sequences at the single-cell level. Moreover, intrinsic dynamics could be used to predict the occurrence of future neuronal ensembles. Our data demonstrate that visual stimuli recruit similar sequential patterns to the ones observed spontaneously, consistent with the hypothesis that already existing Hebbian cell assemblies firing in predefined temporal sequences could be the microcircuit substrate that encodes visual percepts changing in time. PMID:26063915
Binocular coordination in response to stereoscopic stimuli
NASA Astrophysics Data System (ADS)
Liversedge, Simon P.; Holliman, Nicolas S.; Blythe, Hazel I.
2009-02-01
Humans actively explore their visual environment by moving their eyes. Precise coordination of the eyes during visual scanning underlies the experience of a unified perceptual representation and is important for the perception of depth. We report data from three psychological experiments investigating human binocular coordination during visual processing of stereoscopic stimuli. In the first experiment participants were required to read sentences that contained a stereoscopically presented target word. Half of the word was presented exclusively to one eye and half exclusively to the other eye. Eye movements were recorded and showed that saccadic targeting was uninfluenced by the stereoscopic presentation, strongly suggesting that complementary retinal stimuli are perceived as a single, unified input prior to saccade initiation. In a second eye movement experiment we presented words stereoscopically to measure Panum's Fusional Area for linguistic stimuli. In the final experiment we compared binocular coordination during saccades between simple dot stimuli under 2D, stereoscopic 3D and real 3D viewing conditions. Results showed that depth-appropriate vergence movements were made during saccades and fixations to real 3D stimuli, but only during fixations on stereoscopic 3D stimuli. 2D stimuli did not induce depth vergence movements. Together, these experiments indicate that stereoscopic visual stimuli are fused when they fall within Panum's Fusional Area, and that saccade metrics are computed on the basis of a unified percept. Also, there is sensitivity to non-foveal retinal disparity in real 3D stimuli, but not in stereoscopic 3D stimuli, and the system responsible for binocular coordination responds to this during saccades as well as fixations.
Innes-Brown, Hamish; Barutchu, Ayla; Crewther, David P.
2013-01-01
The effect of multi-modal vs uni-modal prior stimuli on the subsequent processing of a simple flash stimulus was studied in the context of the audio-visual ‘flash-beep’ illusion, in which the number of flashes a person sees is influenced by accompanying beep stimuli. EEG recordings were made while combinations of simple visual and audio-visual stimuli were presented. The experiments found that the electric field strength related to a flash stimulus was stronger when it was preceded by a multi-modal flash/beep stimulus, compared to when it was preceded by another uni-modal flash stimulus. This difference was found to be significant in two distinct timeframes – an early timeframe, from 130–160 ms, and a late timeframe, from 300–320 ms. Source localisation analysis found that the increased activity in the early interval was localised to an area centred on the inferior and superior parietal lobes, whereas the later increase was associated with stronger activity in an area centred on primary and secondary visual cortex, in the occipital lobe. The results suggest that processing of a visual stimulus can be affected by the presence of an immediately prior multisensory event. Relatively long-lasting interactions generated by the initial auditory and visual stimuli altered the processing of a subsequent visual stimulus. PMID:24391939
Neural correlates of the perception of dynamic versus static facial expressions of emotion.
Kessler, Henrik; Doyen-Waldecker, Cornelia; Hofer, Christian; Hoffmann, Holger; Traue, Harald C; Abler, Birgit
2011-04-20
This study investigated brain areas involved in the perception of dynamic facial expressions of emotion. A group of 30 healthy subjects was measured with fMRI when passively viewing prototypical facial expressions of fear, disgust, sadness and happiness. Using morphing techniques, all faces were displayed as still images and also dynamically as a film clip with the expressions evolving from neutral to emotional. Irrespective of a specific emotion, dynamic stimuli selectively activated bilateral superior temporal sulcus, visual area V5, fusiform gyrus, thalamus and other frontal and parietal areas. Interaction effects of emotion and mode of presentation (static/dynamic) were only found for the expression of happiness, where static faces evoked greater activity in the medial prefrontal cortex. Our results confirm previous findings on neural correlates of the perception of dynamic facial expressions and are in line with studies showing the importance of the superior temporal sulcus and V5 in the perception of biological motion. Differential activation in the fusiform gyrus for dynamic stimuli stands in contrast to classical models of face perception but is coherent with new findings arguing for a more general role of the fusiform gyrus in the processing of socially relevant stimuli.
Paranormal believers are more prone to illusory agency detection than skeptics.
van Elk, Michiel
2013-09-01
It has been hypothesized that illusory agency detection is at the basis of belief in supernatural agents and paranormal beliefs. In the present study a biological motion perception task was used to study illusory agency detection in a group of skeptics and a group of paranormal believers. Participants were required to detect the presence or absence of a human agent in a point-light display. It was found that paranormal believers had a lower perceptual sensitivity than skeptics, which was due to a response bias to 'yes' for stimuli in which no agent was present. The relation between paranormal beliefs and illusory agency detection held only for stimuli with low to intermediate ambiguity, but for stimuli with a high number of visual distractors responses of believers and skeptics were at the same level. Furthermore, it was found that illusory agency detection was unrelated to traditional religious belief and belief in witchcraft, whereas paranormal beliefs (i.e. Psi, spiritualism, precognition, superstition) were strongly related to illusory agency detection. These findings qualify the relation between illusory pattern perception and supernatural and paranormal beliefs and suggest that paranormal beliefs are strongly related to agency detection biases. Copyright © 2013 Elsevier Inc. All rights reserved.
Encoding of Spatial Attention by Primate Prefrontal Cortex Neuronal Ensembles
Treue, Stefan
2018-01-01
Single neurons in the primate lateral prefrontal cortex (LPFC) encode information about the allocation of visual attention and the features of visual stimuli. However, how this compares to the performance of neuronal ensembles at encoding the same information is poorly understood. Here, we recorded the responses of neuronal ensembles in the LPFC of two macaque monkeys while they performed a task that required attending to one of two moving random dot patterns positioned in different hemifields and ignoring the other pattern. We found single units selective for the location of the attended stimulus as well as for its motion direction. To determine the coding of both variables in the population of recorded units, we used a linear classifier and progressively built neuronal ensembles by iteratively adding units according to their individual performance (best single units), or by iteratively adding units based on their contribution to the ensemble performance (best ensemble). For both methods, ensembles of relatively small sizes (n < 60) yielded substantially higher decoding performance relative to individual single units. However, the decoder reached similar performance using fewer neurons with the best ensemble building method compared with the best single units method. Our results indicate that neuronal ensembles within the LPFC encode more information about the attended spatial and nonspatial features of visual stimuli than individual neurons. They further suggest that efficient coding of attention can be achieved by relatively small neuronal ensembles characterized by a certain relationship between signal and noise correlation structures. PMID:29568798
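The two ensemble-building procedures described above (top-k best single units vs. greedy best-ensemble selection) can be sketched with synthetic data. Everything below is illustrative: the firing rates, labels, and the nearest-class-mean linear classifier are assumptions, not the study's actual recordings or decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "recording": trials x units firing rates, with a binary label
# (e.g. which hemifield was attended). All values are invented.
n_trials, n_units = 200, 30
labels = rng.integers(0, 2, n_trials)
selectivity = rng.uniform(0, 1.5, n_units)      # each unit's signal strength
rates = rng.normal(0, 1, (n_trials, n_units)) + np.outer(labels, selectivity)

def accuracy(X, y):
    """Training accuracy of a minimal nearest-class-mean linear classifier."""
    mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
    w = mu1 - mu0
    b = (mu0 + mu1) @ w / 2
    return float(np.mean((X @ w > b) == y))

def best_single_units(X, y, k):
    """Rank units by individual decoding accuracy and keep the top k."""
    scores = [accuracy(X[:, [j]], y) for j in range(X.shape[1])]
    return list(np.argsort(scores)[::-1][:k])

def best_ensemble(X, y, k):
    """Greedily add whichever unit most improves ensemble accuracy."""
    chosen = []
    for _ in range(k):
        rest = [j for j in range(X.shape[1]) if j not in chosen]
        gains = [accuracy(X[:, chosen + [j]], y) for j in rest]
        chosen.append(rest[int(np.argmax(gains))])
    return chosen

k = 10
acc_single = accuracy(rates[:, best_single_units(rates, labels, k)], labels)
acc_greedy = accuracy(rates[:, best_ensemble(rates, labels, k)], labels)
```

The greedy method evaluates units by their marginal contribution given the units already chosen, so it can exploit noise-correlation structure that a purely individual ranking ignores.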
Sex differences in adults' relative visual interest in female and male faces, toys, and play styles.
Alexander, Gerianne M; Charles, Nora
2009-06-01
An individual's reproductive potential appears to influence response to attractive faces of the opposite sex. Otherwise, relatively little is known about the characteristics of the adult observer that may influence his or her affective evaluation of male and female faces. An untested hypothesis (based on the proposed role of attractive faces in mate selection) is that most women would show greater interest in male faces whereas most men would show greater interest in female faces. Further, evidence from individuals with preferences for same-sex sexual partners suggests that response to attractive male and female faces may be influenced by gender-linked play preferences. To test these hypotheses, visual attention directed to sex-linked stimuli (faces, toys, play styles) was measured in 39 men and 44 women using eye tracking technology. Consistent with our predictions, men directed greater visual attention to all male-typical stimuli, and visual attention to male and female faces was associated with visual attention to gender-conforming or gender-nonconforming stimuli in a manner consistent with previous research on sexual orientation. In contrast, women showed a visual preference for female-typical toys, but no visual preference for male faces or female-typical play styles. These findings indicate that sex differences in visual processing extend beyond stimuli associated with adult sexual behavior. We speculate that sex differences in visual processing are a component of the expression of gender phenotypes across the lifespan that may reflect sex differences in the motivational properties of gender-linked stimuli.
A noninvasive brain computer interface using visually-induced near-infrared spectroscopy responses.
Chen, Cheng-Hsuan; Ho, Ming-Shan; Shyu, Kuo-Kai; Hsu, Kou-Cheng; Wang, Kuo-Wei; Lee, Po-Lei
2014-09-19
Visually-induced near-infrared spectroscopy (NIRS) responses were utilized to design a brain computer interface (BCI) system. Four circular checkerboards driven by distinct flickering sequences were displayed on an LCD screen as visual stimuli to induce subjects' NIRS responses. Each flickering sequence was a concatenation of alternating flickering segments and resting segments. The flickering segment had a fixed duration of 3 s, whereas the resting segment was chosen randomly within 15–20 s to create mutual independence among the different flickering sequences. Six subjects were recruited in this study, and subjects were requested to gaze at the four visual stimuli one after another in a random order. Since visual responses in the human brain are time-locked to the onsets of visual stimuli, and the flickering sequences of distinct visual stimuli were designed to be mutually independent, the NIRS responses induced by the user's gazed target can be discerned from non-gazed targets by applying a simple averaging process. The accuracies for the six subjects were higher than 90% after 10 or more epochs were averaged. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
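The averaging step described above can be sketched as follows: segments of the recorded signal are averaged time-locked to each candidate stimulus's onsets, so only the gazed stimulus's response adds coherently while responses to the other (mutually independent) sequences average out. The sampling rate, onset times, noise level, and hemodynamic response shape below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 10                                    # assumed sample rate (Hz)
n_samples = 600 * fs                       # a 10-minute recording

# Hypothetical onset times (s) of four mutually independent flicker sequences.
onsets = {k: np.sort(rng.uniform(0, 570, 30)) for k in range(4)}

# A slow hemodynamic bump evoked only by the gazed stimulus (stimulus 2 here).
gazed = 2
epoch_len = 20 * fs
tau = np.arange(epoch_len) / fs
response = np.exp(-((tau - 6.0) ** 2) / 8.0)

signal = rng.normal(0, 0.5, n_samples)
for on in onsets[gazed]:
    i = int(on * fs)
    seg = signal[i:i + epoch_len]
    seg += response[:seg.size]             # in-place: add the evoked response

def averaged_epoch(sig, onset_times):
    """Average signal segments time-locked to one stimulus's onsets."""
    segs = [sig[int(on * fs):int(on * fs) + epoch_len]
            for on in onset_times
            if int(on * fs) + epoch_len <= sig.size]
    return np.mean(segs, axis=0)

# Only the gazed stimulus's response survives averaging with full amplitude,
# so its time-locked average shows the largest peak.
peaks = {k: averaged_epoch(signal, v).max() for k, v in onsets.items()}
detected = max(peaks, key=peaks.get)
```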
2011-01-01
Background Anecdotal reports and a few scientific publications suggest that flyovers of helicopters at low altitude may elicit fear- or anxiety-related behavioral reactions in grazing feral and farm animals. We investigated the behavioral and physiological stress reactions of five individually housed dairy goats to different acoustic and visual stimuli from helicopters and to combinations of these stimuli under controlled environmental (indoor) conditions. The visual stimuli were helicopter animations projected on a large screen in front of the enclosures of the goats. Acoustic and visual stimuli of a tractor were also presented. On the final day of the study the goats were exposed to two flyovers (altitude 50 m and 75 m) of a Chinook helicopter while grazing in a pasture. Salivary cortisol, behavior, and heart rate of the goats were recorded before, during and after stimulus presentations. Results The goats reacted alertly to the visual and/or acoustic stimuli that were presented in their room. They raised their heads and turned their ears forward in the direction of the stimuli. There was no statistically reliable rise in the goats' average movement velocity in their enclosure and no increase in the duration of movement during presentation of the stimuli. There was also no increase in heart rate or salivary cortisol concentration during the indoor test sessions. Surprisingly, no physiological or behavioral stress responses were observed during the flyover of a Chinook at 50 m, which produced a peak noise of 110 dB. Conclusions We conclude that the behavior and physiology of goats are unaffected by brief episodes of intense, adverse visual and acoustic stimulation such as the sight and noise of overflying helicopters. The absence of a physiological stress response and of elevated emotional reactivity in goats subjected to helicopter stimuli is discussed in relation to the design and testing schedule of this study. PMID:21496239
Auditory Emotional Cues Enhance Visual Perception
ERIC Educational Resources Information Center
Zeelenberg, Rene; Bocanegra, Bruno R.
2010-01-01
Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…
The role of human ventral visual cortex in motion perception
Saygin, Ayse P.; Lorenzi, Lauren J.; Egan, Ryan; Rees, Geraint; Behrmann, Marlene
2013-01-01
Visual motion perception is fundamental to many aspects of visual perception. Visual motion perception has long been associated with the dorsal (parietal) pathway and the involvement of the ventral ‘form’ (temporal) visual pathway has not been considered critical for normal motion perception. Here, we evaluated this view by examining whether circumscribed damage to ventral visual cortex impaired motion perception. The perception of motion in basic, non-form tasks (motion coherence and motion detection) and complex structure-from-motion, for a wide range of motion speeds, all centrally displayed, was assessed in five patients with a circumscribed lesion to either the right or left ventral visual pathway. Patients with a right, but not with a left, ventral visual lesion displayed widespread impairments in central motion perception even for non-form motion, for both slow and for fast speeds, and this held true independent of the integrity of areas MT/V5, V3A or parietal regions. In contrast with the traditional view in which only the dorsal visual stream is critical for motion perception, these novel findings implicate a more distributed circuit in which the integrity of the right ventral visual pathway is also necessary even for the perception of non-form motion. PMID:23983030
Cortical Integration of Audio-Visual Information
Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A.
2013-01-01
We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multimodal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked. PMID:20709442
Contextual effects on preattentive processing of sound motion as revealed by spatial MMN.
Shestopalova, L B; Petropavlovskaia, E A; Vaitulevich, S Ph; Nikitin, N I
2015-04-01
The magnitude of spatial distance between sound stimuli is critically important for their preattentive discrimination, yet the effect of stimulus context on auditory motion processing is not clear. This study investigated the effects of acoustical change and stimulus context on preattentive spatial change detection. Auditory event-related potentials (ERPs) were recorded for stationary midline noises and two patterns of sound motion produced by linear or abrupt changes of interaural time differences. Each of the three types of stimuli was used as standard or deviant in different blocks. Context effects on mismatch negativity (MMN) elicited by stationary and moving sound stimuli were investigated by reversing the role of standard and deviant stimuli, while the acoustical stimulus parameters were kept the same. That is, MMN amplitudes were calculated by subtracting ERPs to identical stimuli presented as standard in one block and deviant in another block. In contrast, effects of acoustical change on MMN amplitudes were calculated by subtracting ERPs of standards and deviants presented within the same block. Preattentive discrimination of moving and stationary sounds indexed by MMN was strongly dependent on the stimulus context. Higher MMNs were produced in oddball configurations where deviance represented increments of the sound velocity, as compared to configurations with velocity decrements. The effect of standard-deviant reversal was more pronounced with the abrupt sound displacement than with gradual sound motion. Copyright © 2015 Elsevier B.V. All rights reserved.
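The two subtraction schemes described above — across-block reversal (identical stimulus as standard in one block, deviant in another) versus within-block deviant-minus-standard — can be illustrated with synthetic ERPs. The waveforms, noise level, and MMN latency below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
n_time = 100                                   # samples per averaged ERP

# Hypothetical ERP to stimulus X when it serves as the frequent standard.
erp_std_blockA = rng.normal(0, 0.1, n_time)

# ERP to the physically identical X when it serves as a rare deviant:
# the same waveform plus an added negative deflection (the MMN).
mmn_wave = np.concatenate([np.zeros(40), -0.8 * np.hanning(30), np.zeros(30)])
erp_dev_blockB = erp_std_blockA + mmn_wave

# Context effect: identical stimuli, different roles, different blocks.
mmn_context = erp_dev_blockB - erp_std_blockA

# Acoustical-change effect: deviant minus standard within the same block
# (the standard in block B is a physically different stimulus).
erp_std_blockB = rng.normal(0, 0.1, n_time)
mmn_within = erp_dev_blockB - erp_std_blockB

peak_negativity = mmn_context.min()            # MMN amplitude estimate
```

Because the across-block difference compares the same physical stimulus in two roles, it isolates the context-dependent component; the within-block difference additionally contains any purely acoustical response difference between standard and deviant.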
Motion Prediction and the Velocity Effect in Children
ERIC Educational Resources Information Center
Benguigui, Nicolas; Broderick, Michael P.; Baures, Robin; Amorim, Michel-Ange
2008-01-01
In coincidence-timing studies, children have been shown to respond too early to slower stimuli and too late to faster stimuli. To examine this velocity effect, children aged 6, 7.5, 9, 10.5, and adults were tested with two different velocities in a prediction-motion task which consisted of judging, after the occlusion of the final part of its…
The Time Is Up: Compression of Visual Time Interval Estimations of Bimodal Aperiodic Patterns
Duarte, Fabiola; Lemus, Luis
2017-01-01
The ability to estimate time intervals subserves many of our behaviors and perceptual experiences. However, it is not clear how aperiodic (AP) stimuli affect our perception of time intervals across sensory modalities. To address this question, we evaluated the human capacity to discriminate between two acoustic (A), visual (V) or audiovisual (AV) time intervals of trains of scattered pulses. We first measured the periodicity of those stimuli and then sought correlations with the accuracy and reaction times (RTs) of the subjects. We found that, for all time intervals tested in our experiment, the visual system consistently perceived AP stimuli as being shorter than the periodic (P) ones. In contrast, such a compression phenomenon was not apparent during auditory trials. Our conclusions are: first, subjects exposed to P stimuli are more likely to measure their durations accurately. Second, perceptual time compression occurs for AP visual stimuli. Lastly, AV discriminations are determined by A dominance rather than by AV enhancement. PMID:28848406
Sequential Ideal-Observer Analysis of Visual Discriminations.
ERIC Educational Resources Information Center
Geisler, Wilson S.
1989-01-01
A new analysis, based on the concept of the ideal observer in signal detection theory, is described. It allows tracing the flow of discrimination information through the initial physiological stages of visual processing for arbitrary spatio-chromatic stimuli, and measuring the information content of those visual stimuli. (TJH)
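A minimal sketch of one standard ideal-observer quantity: the discriminability index d' for two stimuli whose photon catches are Poisson-distributed, using the common approximation of mean difference over the root of the average variance. The photon-count means are hypothetical, and this is only one ingredient of a full sequential ideal-observer analysis, not the analysis itself.

```python
import math

def ideal_dprime(mean_a, mean_b):
    """d' of an ideal observer discriminating two stimuli whose photon
    catches are Poisson with the given means. Standard approximation:
    difference of means divided by the root of the average variance
    (for Poisson counts, the variance equals the mean)."""
    return abs(mean_b - mean_a) / math.sqrt((mean_a + mean_b) / 2.0)

# Two hypothetical stimuli absorbing on average 100 vs. 121 photons
# per receptor pool: d' is about 2, i.e. reliably discriminable.
d = ideal_dprime(100.0, 121.0)
```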
ERIC Educational Resources Information Center
Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn
2011-01-01
Several studies, using eye tracking methodology, suggest that different visual strategies in persons with autism spectrum conditions, compared with controls, are applied when viewing facial stimuli. Most eye tracking studies are, however, made in laboratory settings with either static (photos) or non-interactive dynamic stimuli, such as video…
Aging and the discrimination of 3-D shape from motion and binocular disparity.
Norman, J Farley; Holmin, Jessica S; Beers, Amanda M; Cheeseman, Jacob R; Ronning, Cecilia; Stethen, Angela G; Frost, Adam L
2012-10-01
Two experiments evaluated the ability of younger and older adults to visually discriminate 3-D shape as a function of surface coherence. The coherence was manipulated by embedding the 3-D surfaces in volumetric noise (e.g., for a 55 % coherent surface, 55 % of the stimulus points fell on a 3-D surface, while 45 % of the points occupied random locations within the same volume of space). The 3-D surfaces were defined by static binocular disparity, dynamic binocular disparity, and motion. The results of both experiments demonstrated significant effects of age: Older adults required more coherence (tolerated volumetric noise less) for reliable shape discrimination than did younger adults. Motion-defined and static-binocular-disparity-defined surfaces resulted in similar coherence thresholds. However, performance for dynamic-binocular-disparity-defined surfaces was superior (i.e., the observers' surface coherence thresholds were lowest for these stimuli). The results of both experiments showed that younger and older adults possess considerable tolerance to the disrupting effects of volumetric noise; the observers could reliably discriminate 3-D surface shape even when 45 % of the stimulus points (or more) constituted noise.
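The coherence manipulation described above can be sketched directly: a given fraction of dots is placed on a 3-D surface and the remainder is scattered through the surrounding volume. The unit-cube volume and sinusoidal depth surface below are illustrative choices, not the study's actual stimuli.

```python
import numpy as np

rng = np.random.default_rng(3)

def make_stimulus(n_points=500, coherence=0.55):
    """Return an (n_points, 3) dot cloud in which a `coherence` fraction of
    dots lies on a smooth 3-D surface and the remainder is volumetric noise."""
    n_surface = int(round(coherence * n_points))
    x, y = rng.uniform(-1, 1, (2, n_surface))
    z = 0.3 * np.sin(np.pi * x)                    # the depth surface
    surface = np.column_stack([x, y, z])
    # Remaining dots occupy random locations within the same volume.
    noise = rng.uniform(-1, 1, (n_points - n_surface, 3))
    return np.vstack([surface, noise])

stim = make_stimulus(1000, 0.55)                   # a 55% coherent display
```

Lowering `coherence` raises the proportion of volumetric noise, which is exactly the variable thresholded in the experiments.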
Sex Differences in Response to Visual Sexual Stimuli: A Review
Rupp, Heather A.; Wallen, Kim
2009-01-01
This article reviews what is currently known about how men and women respond to the presentation of visual sexual stimuli. While the assumption that men respond more to visual sexual stimuli is generally empirically supported, previous reports of sex differences are confounded by the variable content of the stimuli presented and measurement techniques. We propose that the cognitive processing stage of responding to sexual stimuli is the first stage in which sex differences occur. The divergence between men and women is proposed to occur at this time, reflected in differences in neural activation, and contribute to previously reported sex differences in downstream peripheral physiological responses and subjective reports of sexual arousal. Additionally, this review discusses factors that may contribute to the variability in sex differences observed in response to visual sexual stimuli. Factors include participant variables, such as hormonal state and socialized sexual attitudes, as well as variables specific to the content presented in the stimuli. Based on the literature reviewed, we conclude that content characteristics may differentially produce higher levels of sexual arousal in men and women. Specifically, men appear more influenced by the sex of the actors depicted in the stimuli while women’s response may differ with the context presented. Sexual motivation, perceived gender role expectations, and sexual attitudes are possible influences. These differences are of practical importance to future research on sexual arousal that aims to use experimental stimuli comparably appealing to men and women and also for general understanding of cognitive sex differences. PMID:17668311
Physical Features of Visual Images Affect Macaque Monkey’s Preference for These Images
Funahashi, Shintaro
2016-01-01
Animals exhibit different degrees of preference toward various visual stimuli. In addition, it has been shown that strongly preferred stimuli can often act as a reward. The aim of the present study was to determine what features determine the strength of the preference for visual stimuli in order to examine the neural mechanisms of preference judgment. We used 50 color photographs obtained from the Flickr Material Database (FMD) as original stimuli. Four macaque monkeys performed a simple choice task, in which two stimuli selected randomly from among the 50 stimuli were simultaneously presented on a monitor and the monkeys were required to choose either stimulus by eye movements. We considered that a monkey preferred the chosen stimulus if it continued to look at that stimulus for an additional 6 s, and we calculated a choice ratio for each stimulus. Each monkey exhibited a different choice ratio for each of the original 50 stimuli. They tended to select clear, colorful and in-focus stimuli. Complexity and clarity were stronger determinants of preference than colorfulness. Images that included greater amounts of spatial frequency components were selected more frequently. These results indicate that particular physical features of the stimulus can affect the strength of a monkey's preference and that the complexity, clarity and colorfulness of the stimulus are important determinants of this preference. Neurophysiological studies would be needed to examine whether these features of visual stimuli produce more activation in neurons that participate in this preference judgment. PMID:27853424
Spectral fingerprints of large-scale cortical dynamics during ambiguous motion perception.
Helfrich, Randolph F; Knepper, Hannah; Nolte, Guido; Sengelmann, Malte; König, Peter; Schneider, Till R; Engel, Andreas K
2016-11-01
Ambiguous stimuli have been widely used to study the neuronal correlates of consciousness. Recently, it has been suggested that conscious perception might arise from the dynamic interplay of functionally specialized but widely distributed cortical areas. While previous research mainly focused on phase coupling as a correlate of cortical communication, more recent findings indicated that additional coupling modes might coexist and possibly subserve distinct cortical functions. Here, we studied two coupling modes, namely phase and envelope coupling, which might differ in their origins, putative functions and dynamics. Therefore, we recorded 128-channel EEG while participants performed a bistable motion task and utilized state-of-the-art source-space connectivity analysis techniques to study the functional relevance of different coupling modes for cortical communication. Our results indicate that gamma-band phase coupling in extrastriate visual cortex might mediate the integration of visual tokens into a moving stimulus during ambiguous visual stimulation. Furthermore, our results suggest that long-range fronto-occipital gamma-band envelope coupling sustains the horizontal percept during ambiguous motion perception. Additionally, our results support the idea that local parieto-occipital alpha-band phase coupling controls the inter-hemispheric information transfer. These findings provide correlative evidence for the notion that synchronized oscillatory brain activity reflects the processing of sensory input as well as the information integration across several spatiotemporal scales. The results indicate that distinct coupling modes are involved in different cortical computations and that the rich spatiotemporal correlation structure of the brain might constitute the functional architecture for cortical processing and specific multi-site communication. Hum Brain Mapp 37:4099-4111, 2016. © 2016 Wiley Periodicals, Inc.
Zhang, Lina; Zhang, Hui; Liu, Mei; Dong, Bin
2016-06-22
In this paper, we report a polymer-based raspberry-like micromotor. Interestingly, the resulting micromotor exhibits multistimuli-responsive motion behavior. Its on-off-on motion can be regulated by the application of stimuli such as H2O2, near-infrared light, NH3, or their combinations. Because of this versatility in motion control, the micromotor has great potential for logic-gate and logic-circuit applications. Using different stimuli as inputs and the micromotor's motion as the output, reprogrammable OR and INHIBIT logic gates, or a logic circuit consisting of OR, NOT, and AND gates, can be achieved.
The specificity of cortical region KO to depth structure.
Tyler, Christopher W; Likova, Lora T; Kontsevich, Leonid L; Wade, Alex R
2006-03-01
Functional MRI studies have identified a cortical region designated as KO between retinotopic areas V3A/B and motion area V5 in human cortex as particularly responsive to motion-defined or kinetic borders. To determine the response of the KO region to more general aspects of structure, we used stereoscopic depth borders and disparate planes with no borders, together with three stimulus types that evoked no depth percept: luminance borders, line contours and illusory phase borders. Responses to these stimuli in the KO region were compared with the responses in retinotopically defined areas that have been variously associated with disparity processing in neurophysiological and fMRI studies. The strongest responses in the KO region were to stimuli evoking perceived depth structure from either disparity or motion cues, but it showed negligible responses either to luminance-based contour stimuli or to edgeless disparity stimuli. We conclude that the region designated as KO is best regarded as a primary center for the generic representation of depth structure rather than any kind of contour specificity.
Differences in apparent straightness of dot and line stimuli.
NASA Technical Reports Server (NTRS)
Parlee, M. B.
1972-01-01
An investigation has been made of anisotropic responses to contoured and noncontoured stimuli to obtain an insight into the way these stimuli are processed. For this purpose, eight subjects judged the alignment of minimally contoured (3 dot) and contoured (line) stimuli. Stimuli, presented to each eye separately, vertically subtended either 8 or 32 deg visual angle and were located 10 deg left, center, or 10 deg right in the visual field. Location-dependent deviations from physical straightness were larger for dot stimuli than for lines. The results were the same for the two eyes. In a second experiment, subjects judged the alignment of stimuli composed of different densities of dots. Apparent straightness for these stimuli was the same as for lines. The results are discussed in terms of alternative mechanisms for analysis of contoured and minimally contoured stimuli.
Evolutionary relevance facilitates visual information processing.
Jackson, Russell E; Calvillo, Dusti P
2013-11-03
Visual search of the environment is a fundamental human behavior that is powerfully affected by perceptual load. Previously investigated means for overcoming the impairments imposed by high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and that evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component of the evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that pose significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the highest impact of perceptual load. Evolutionary relevance may importantly affect everyday visual information processing.
Marschall-Lévesque, Shawn; Rouleau, Joanne-Lucine; Renaud, Patrice
2018-02-01
Penile plethysmography (PPG) is a measure of sexual interests that relies heavily on the stimuli it uses to generate valid results. Ethical considerations surrounding the use of real images in PPG have further limited the content admissible for these stimuli. To mitigate this limitation, the current study aimed to combine audio and visual stimuli by incorporating computer-generated characters to create new stimuli capable of accurately classifying sex offenders with child victims, while also increasing the number of valid profiles. Three modalities (audio, visual, and audiovisual) were compared using two groups (15 sex offenders with child victims and 15 non-offenders). Both the new visual and audiovisual stimuli resulted in a 13% increase in the number of valid profiles at 2.5 mm, when compared to the standard audio stimuli. Furthermore, the new audiovisual stimuli generated a 34% increase in penile responses. All three modalities were able to discriminate between the two groups by their responses to the adult and child stimuli. Lastly, sexual interest indices for all three modalities could accurately classify participants into their appropriate groups, as demonstrated by ROC curve analysis (i.e., audio AUC = .81, 95% CI [.60, 1.00]; visual AUC = .84, 95% CI [.66, 1.00], and audiovisual AUC = .83, 95% CI [.63, 1.00]). Results suggest that computer-generated characters allow accurate discrimination of sex offenders with child victims and can be added to already validated stimuli to increase the number of valid profiles. The implications of audiovisual stimuli using computer-generated characters and their possible use in PPG evaluations are also discussed.
Achilles' ear? Inferior human short-term and recognition memory in the auditory modality.
Bigelow, James; Poremba, Amy
2014-01-01
Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects' retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1-4 s). However, at longer retention intervals (8-32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. In the interest of extending the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli that would likely be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required; however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices.
Fixating at far distance shortens reaction time to peripheral visual stimuli at specific locations.
Kokubu, Masahiro; Ando, Soichi; Oda, Shingo
2018-01-18
The purpose of the present study was to examine whether fixation distance in real three-dimensional space affects manual reaction time to peripheral visual stimuli. Light-emitting diodes were used to present a fixation point and four peripheral visual stimuli. The visual stimuli were located at a distance of 45 cm and at 25° in the left, right, upper, and lower directions from the sagittal axis containing the fixation point. Near (30 cm), Middle (45 cm), Far (90 cm), and Very Far (300 cm) fixation distance conditions were used. When one of the four visual stimuli was randomly illuminated, the participants released a button as quickly as possible. Results showed that overall peripheral reaction time decreased as fixation distance increased. A significant interaction between fixation distance and stimulus location indicated that the effect of fixation distance on reaction time was observed at the left, right, and upper locations but not at the lower location. These results suggest that fixating at a far distance contributes to faster reactions and that the effect is specific to locations in the peripheral visual field. The present findings are discussed in terms of viewer-centered representation, the focus of attention in depth, and visual field asymmetry related to neurological and psychological aspects.
Adaptation in human visual cortex as a mechanism for rapid discrimination of aversive stimuli.
Keil, Andreas; Stolarova, Margarita; Moratti, Stephan; Ray, William J
2007-06-01
The ability to react rapidly and efficiently to adverse stimuli is crucial for survival. Neuroscience and behavioral studies have converged to show that visual information associated with aversive content is processed quickly and accurately and is associated with rapid amplification of the neural responses. In particular, unpleasant visual information has repeatedly been shown to evoke increased cortical activity during early visual processing between 60 and 120 ms following the onset of a stimulus. However, the nature of these early responses is not well understood. Using neutral versus unpleasant colored pictures, the current report examines the time course of short-term changes in the human visual cortex when a subject is repeatedly exposed to simple grating stimuli in a classical conditioning paradigm. We analyzed changes in amplitude and synchrony of large-scale oscillatory activity across 2 days of testing, which included baseline measurements, 2 conditioning sessions, and a final extinction session. We found a gradual increase in amplitude and synchrony of very early cortical oscillations in the 20-35 Hz range across conditioning sessions, specifically for conditioned stimuli predicting aversive visual events. This increase for conditioned stimuli affected stimulus-locked cortical oscillations at a latency of around 60-90 ms and disappeared during extinction. Our findings suggest that reorganization of neural connectivity on the level of the visual cortex acts to optimize early perception of specific features indicative of emotional relevance.
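The band-limited oscillatory amplitude analyzed above (20-35 Hz) can be illustrated with a minimal sketch: mask the signal's FFT to the band of interest, build the analytic signal, and average the resulting envelope. This is a generic signal-processing illustration, not the authors' EEG/MEG pipeline; the sampling rate and band edges below are placeholders:

```python
import numpy as np

def band_amplitude(signal, fs, lo=20.0, hi=35.0):
    """Mean amplitude envelope in [lo, hi] Hz via an FFT band-pass and
    the analytic signal (Hilbert transform built from the FFT mask)."""
    n = len(signal)
    freqs = np.fft.fftfreq(n, d=1.0 / fs)
    spec = np.fft.fft(signal)
    # zero everything outside the band (positive and negative frequencies)
    spec[(np.abs(freqs) < lo) | (np.abs(freqs) > hi)] = 0.0
    # analytic signal: drop negative frequencies, double positive ones
    spec[freqs < 0] = 0.0
    spec[freqs > 0] *= 2.0
    envelope = np.abs(np.fft.ifft(spec))
    return envelope.mean()
```

A unit-amplitude 25 Hz sine yields a mean envelope near 1, while a 5 Hz sine (outside the band) yields roughly 0, which is the sense in which the measure isolates amplitude changes in the 20-35 Hz range.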
Dores, A R; Almeida, I; Barbosa, F; Castelo-Branco, M; Monteiro, L; Reis, M; de Sousa, L; Caldas, A Castro
2013-01-01
Examining changes in brain activation linked with emotion-inducing stimuli is essential to the study of emotions. Given the ecological potential of techniques such as virtual reality (VR), it is important to examine whether brain activation in response to emotional stimuli can be modulated by the three-dimensional (3D) properties of the images. The current study sought to test whether the activation of brain areas involved in the emotional processing of scenarios of different valences can be modulated by 3D. The focus was therefore on the interaction effect between emotion-inducing stimuli of different emotional valences (pleasant, unpleasant, and neutral) and visualization types (2D, 3D); main effects were also analyzed. The effects of emotional valence and visualization type and their interaction were analyzed through a 3 × 2 repeated measures ANOVA. Post-hoc t-tests were performed under a ROI-analysis approach. The results show increased brain activation for the 3D affective-inducing stimuli in comparison with the same stimuli in 2D scenarios, mostly in cortical and subcortical regions related to emotional processing, in addition to visual processing regions. This study has the potential to clarify the brain mechanisms involved in the processing of emotional stimuli (scenarios' valence) and their interaction with three-dimensionality.
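The interaction term in a fully within-subject 3 × 2 design like the one above can be tested with an F-statistic computed from a subjects × valence × visualization array. The sketch below is a generic repeated-measures ANOVA interaction test on made-up shapes, not the authors' fMRI analysis:

```python
import numpy as np

def rm_anova_interaction(y):
    """Interaction F-test for a fully within-subject two-way design.
    y: array of shape (subjects, A, B), e.g. (n, 3 valences, 2 view types)."""
    n, A, B = y.shape
    g = y.mean()
    m_a = y.mean(axis=(0, 2))          # factor-A level means
    m_b = y.mean(axis=(0, 1))          # factor-B level means
    m_s = y.mean(axis=(1, 2))          # subject means
    m_ab = y.mean(axis=0)              # cell means, shape (A, B)
    m_sa = y.mean(axis=2)              # subject x A means
    m_sb = y.mean(axis=1)              # subject x B means

    # interaction sum of squares and its subject-interaction error term
    ss_ab = n * ((m_ab - m_a[:, None] - m_b[None, :] + g) ** 2).sum()
    resid = (y - m_ab[None] - m_sa[:, :, None] - m_sb[:, None, :]
             + m_a[None, :, None] + m_b[None, None, :]
             + m_s[:, None, None] - g)
    ss_err = (resid ** 2).sum()

    df_ab = (A - 1) * (B - 1)
    df_err = df_ab * (n - 1)
    F = (ss_ab / df_ab) / (ss_err / df_err)
    return F, df_ab, df_err
```

In practice one would reach for an established implementation (e.g., statsmodels' `AnovaRM`); the hand-rolled version just makes the sums of squares behind the interaction test explicit. For 10 subjects in a 3 × 2 design the interaction has (3−1)(2−1) = 2 numerator and 2 × 9 = 18 denominator degrees of freedom.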
Space-time wiring specificity supports direction selectivity in the retina.
Kim, Jinseop S; Greene, Matthew J; Zlateski, Aleksandar; Lee, Kisuk; Richardson, Mark; Turaga, Srinivas C; Purcaro, Michael; Balkam, Matthew; Robinson, Amy; Behabadi, Bardia F; Campos, Michael; Denk, Winfried; Seung, H Sebastian
2014-05-15
How does the mammalian retina detect motion? This classic problem in visual neuroscience has remained unsolved for 50 years. In search of clues, here we reconstruct Off-type starburst amacrine cells (SACs) and bipolar cells (BCs) in serial electron microscopic images with help from EyeWire, an online community of 'citizen neuroscientists'. On the basis of quantitative analyses of contact area and branch depth in the retina, we find evidence that one BC type prefers to wire with a SAC dendrite near the SAC soma, whereas another BC type prefers to wire far from the soma. The near type is known to lag the far type in time of visual response. A mathematical model shows how such 'space-time wiring specificity' could endow SAC dendrites with receptive fields that are oriented in space-time and therefore respond selectively to stimuli that move in the outward direction from the soma.
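The 'space-time wiring specificity' mechanism can be caricatured in a few lines of code: a temporally sluggish input near the soma and a fast input far from it act as a delay-and-coincidence detector, so a stimulus sweeping outward drives the two inputs to peak together. All positions and time constants below are invented for illustration; this is a toy caricature, not the paper's anatomically constrained model:

```python
import numpy as np

def sac_dendrite_response(direction, n_steps=40, dt=1.0,
                          pos_near=2, pos_far=8,
                          tau_near=6.0, tau_far=1.5):
    """Peak coincidence of a slow 'near' input and a fast 'far' input
    as a bar sweeps across 10 positions along a model dendrite."""
    n_pos = 10
    path = range(n_pos) if direction == 'outward' else range(n_pos - 1, -1, -1)
    stim = np.zeros((n_steps, n_pos))
    for i, p in enumerate(path):      # bar occupies one position per time step
        stim[i, p] = 1.0

    def lowpass(x, tau):              # first-order low-pass filter
        y = np.zeros_like(x)
        for i in range(1, len(x)):
            y[i] = y[i - 1] + dt / tau * (x[i - 1] - y[i - 1])
        return y

    near = lowpass(stim[:, pos_near], tau_near)   # sluggish: lags and persists
    far = lowpass(stim[:, pos_far], tau_far)      # fast: brief and prompt
    return float((near * far).max())              # peak multiplicative coincidence
```

Outward motion hits the sluggish near input first, so its delayed, persistent signal still overlaps the fast far input's brief response; inward motion activates the inputs in the order that prevents this overlap, so `sac_dendrite_response('outward')` exceeds `sac_dendrite_response('inward')` in this toy, mirroring the outward preference described above.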