Jenkin, Michael R; Dyde, Richard T; Jenkin, Heather L; Zacher, James E; Harris, Laurence R
2011-01-01
The perceived direction of up depends on both gravity and visual cues to orientation. Static visual cues to orientation have been shown to be less effective in influencing the perception of upright (PU) under microgravity conditions than they are on Earth (Dyde et al., 2009). Here we introduce dynamic orientation cues into the visual background to ascertain whether they might increase the effectiveness of visual cues in defining the PU under different gravity conditions. Brief periods of microgravity and hypergravity were created using parabolic flight. Observers viewed a polarized, natural scene presented at various orientations on a laptop, viewed through a hood that occluded all other visual cues. The visual background was either an animated video clip in which actors moved along the visual ground plane or an individual static frame taken from the same clip. We measured the perceptual upright using the oriented character recognition test (OCHART). Dynamic visual cues significantly enhanced the effectiveness of vision in determining the perceptual upright under normal gravity conditions. Strong trends were found for dynamic visual cues to produce an increase in the visual effect under both microgravity and hypergravity conditions.
Enhancing Learning from Dynamic and Static Visualizations by Means of Cueing
ERIC Educational Resources Information Center
Kuhl, Tim; Scheiter, Katharina; Gerjets, Peter
2012-01-01
The current study investigated whether learning from dynamic visualizations and from two presentation formats for static visualizations can be enhanced by means of cueing. One hundred and fifty university students were randomly assigned to six conditions, resulting from a 2x3 design, with cueing (with/without) and type of visualization (dynamic, static-sequential,…
ERIC Educational Resources Information Center
Katsioloudis, Petros; Jovanovic, Vukica; Jones, Mildred
2016-01-01
Several theorists believe that different types of visual cues influence cognition and behavior through learned associations; however, research provides inconsistent results. Considering this, a quasi-experimental study was conducted to determine whether there are significant positive effects of visual cues (the color blue) and to identify whether a positive increase…
Can Short Duration Visual Cues Influence Students' Reasoning and Eye Movements in Physics Problems?
ERIC Educational Resources Information Center
Madsen, Adrian; Rouinfar, Amy; Larson, Adam M.; Loschky, Lester C.; Rebello, N. Sanjay
2013-01-01
We investigate the effects of visual cueing on students' eye movements and reasoning on introductory physics problems with diagrams. Participants in our study were randomly assigned to either the cued or noncued conditions, which differed by whether the participants saw conceptual physics problems overlaid with dynamic visual cues. Students in the…
Superior Temporal Activation in Response to Dynamic Audio-Visual Emotional Cues
ERIC Educational Resources Information Center
Robins, Diana L.; Hunyadi, Elinora; Schultz, Robert T.
2009-01-01
Perception of emotion is critical for successful social interaction, yet the neural mechanisms underlying the perception of dynamic, audio-visual emotional cues are poorly understood. Evidence from language and sensory paradigms suggests that the superior temporal sulcus and gyrus (STS/STG) play a key role in the integration of auditory and visual…
The contribution of dynamic visual cues to audiovisual speech perception.
Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador
2015-08-01
Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise using unimodal auditory stimuli and audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point-light displays derived via motion capture of the original talker. Point-light displays could be isoluminant, to minimise the contribution of luminance-defined local motion information, or could have added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to the auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech.
Slushy weightings for the optimal pilot model [considering a visual tracking task]
NASA Technical Reports Server (NTRS)
Dillow, J. D.; Picha, D. G.; Anderson, R. O.
1975-01-01
A pilot model is described which accounts for the effect of motion cues in a well defined visual tracking task. The effect of visual and motion cues are accounted for in the model in two ways. First, the observation matrix in the pilot model is structured to account for the visual and motion inputs presented to the pilot. Secondly, the weightings in the quadratic cost function associated with the pilot model are modified to account for the pilot's perception of the variables he considers important in the task. Analytic results obtained using the pilot model are compared to experimental results and in general good agreement is demonstrated. The analytic model yields small improvements in tracking performance with the addition of motion cues for easily controlled task dynamics and large improvements in tracking performance with the addition of motion cues for difficult task dynamics.
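The quadratic cost structure underlying this class of optimal-control pilot models can be sketched in a few lines. This is a minimal illustration, not the paper's model: the state variables, weight matrices, and trajectory below are invented stand-ins, and the paper's "slushy" weightings would adjust the entries of Q to reflect the pilot's perceived task priorities.

```python
import numpy as np

def quadratic_cost(x_traj, u_traj, Q, R):
    """Average quadratic cost J = mean(x'Qx + u'Ru) over a sampled trajectory."""
    costs = [x @ Q @ x + u @ R @ u for x, u in zip(x_traj, u_traj)]
    return float(np.mean(costs))

# Toy trajectory: state = [tracking error, error rate], scalar control input.
rng = np.random.default_rng(0)
x_traj = rng.normal(size=(100, 2))
u_traj = rng.normal(size=(100, 1))

Q = np.diag([1.0, 0.2])   # heavier weight on position error than on its rate
R = np.array([[0.05]])    # small penalty on control effort

print(quadratic_cost(x_traj, u_traj, Q, R))
```

Reweighting the diagonal of Q (for instance, doubling the error-rate weight when motion cues make rate information more salient to the pilot) changes which state deviations the modeled pilot works hardest to suppress.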
Aurally aided visual search performance in a dynamic environment
NASA Astrophysics Data System (ADS)
McIntire, John P.; Havig, Paul R.; Watamaniuk, Scott N. J.; Gilkey, Robert H.
2008-04-01
Previous research has repeatedly shown that people can find a visual target significantly faster if spatial (3D) auditory displays direct attention to the corresponding spatial location. However, previous research has only examined searches for static (non-moving) targets in static visual environments. Since motion has been shown to affect visual acuity, auditory acuity, and visual search performance, it is important to characterize aurally-aided search performance in environments that contain dynamic (moving) stimuli. In the present study, visual search performance in both static and dynamic environments is investigated with and without 3D auditory cues. Eight participants searched for a single visual target hidden among 15 distracting stimuli. In the baseline audio condition, no auditory cues were provided. In the 3D audio condition, a virtual 3D sound cue originated from the same spatial location as the target. In the static search condition, the target and distractors did not move. In the dynamic search condition, all stimuli moved on various trajectories at 10 deg/s. The results showed a clear benefit of 3D audio that was present in both static and dynamic environments, suggesting that spatial auditory displays continue to be an attractive option for a variety of aircraft, motor vehicle, and command & control applications.
Visual/motion cue mismatch in a coordinated roll maneuver
NASA Technical Reports Server (NTRS)
Shirachi, D. K.; Shirley, R. S.
1981-01-01
The effects of bandwidth differences between visual and motion cueing systems on pilot performance for a coordinated roll task were investigated. Visual and motion cue configurations which were acceptable, and the effects of reduced motion cue scaling on pilot performance, were studied to determine the scale-reduction threshold at which pilot performance differed significantly from full-scale performance. It is concluded that: (1) the presence or absence of high-frequency error information in the visual and/or motion display systems significantly affects pilot performance; and (2) attenuating motion scaling while holding other display dynamic characteristics constant also affects pilot performance.
NASA Technical Reports Server (NTRS)
Young, L. R.
1976-01-01
Investigations for the improvement of flight simulators are reported. Topics include: visual cues in landing, comparison of linear and nonlinear washout filters using a model of the vestibular system, and visual vestibular interactions (yaw axis). An abstract is given for a thesis on the applications of human dynamic orientation models to motion simulation.
Thurman, Steven M; Lu, Hongjing
2014-01-01
Visual form analysis is fundamental to shape perception and likely plays a central role in perception of more complex dynamic shapes, such as moving objects or biological motion. Two primary form-based cues serve to represent the overall shape of an object: the spatial position and the orientation of locations along the boundary of the object. However, it is unclear how the visual system integrates these two sources of information in dynamic form analysis, and in particular how the brain resolves ambiguities due to sensory uncertainty and/or cue conflict. In the current study, we created animations of sparsely sampled dynamic objects (human walkers or rotating squares) composed of oriented Gabor patches in which orientation could either coincide or conflict with information provided by position cues. When the cues were incongruent, we found a characteristic trade-off between position and orientation information whereby position cues increasingly dominated perception as the relative uncertainty of orientation increased and vice versa. Furthermore, we found no evidence for differences in the visual processing of biological and non-biological objects, casting doubt on the claim that biological motion may be specialized in the human brain, at least in terms of form analysis. To explain these behavioral results quantitatively, we adopt a probabilistic template-matching model that uses Bayesian inference within local modules to estimate object shape separately from either spatial position or orientation signals. The outputs of the two modules are integrated with weights that reflect individual estimates of subjective cue reliability, and integrated over time to produce a decision about the perceived dynamics of the input data. Results of this model provided a close fit to the behavioral data, suggesting a mechanism in the human visual system that approximates rational Bayesian inference to integrate position and orientation signals in dynamic form analysis.
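The reliability-weighted integration step at the heart of this kind of model can be illustrated with a minimal inverse-variance fusion of two Gaussian cue estimates. This is a generic textbook sketch, not the paper's fitted model; the cue means and variances below are invented for illustration.

```python
def integrate_cues(mu_pos, var_pos, mu_ori, var_ori):
    """Inverse-variance weighted fusion of two independent Gaussian cue estimates."""
    w_pos = (1.0 / var_pos) / (1.0 / var_pos + 1.0 / var_ori)
    w_ori = 1.0 - w_pos
    mu = w_pos * mu_pos + w_ori * mu_ori           # combined estimate
    var = 1.0 / (1.0 / var_pos + 1.0 / var_ori)    # combined (reduced) variance
    return mu, var

# As orientation uncertainty grows, position cues dominate the percept.
mu, var = integrate_cues(mu_pos=10.0, var_pos=1.0, mu_ori=20.0, var_ori=4.0)
print(mu, var)  # estimate pulled toward the more reliable position cue
```

The combined variance is always smaller than either single-cue variance, which captures the behavioral signature of the trade-off described above: as one cue's uncertainty increases, its weight shrinks and the other cue dominates.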
The time course of protecting a visual memory representation from perceptual interference
van Moorselaar, Dirk; Gunseli, Eren; Theeuwes, Jan; Olivers, Christian N. L.
2015-01-01
Cueing a remembered item during the delay of a visual memory task leads to enhanced recall of the cued item compared to when an item is not cued. This cueing benefit has been proposed to reflect attention within visual memory being shifted from a distributed mode to a focused mode, thus protecting the cued item against perceptual interference. Here we investigated the dynamics of building up this mnemonic protection against visual interference by systematically varying the stimulus onset asynchrony (SOA) between cue onset and a subsequent visual mask in an orientation memory task. Experiment 1 showed that a cue counteracted the deteriorating effect of pattern masks. Experiment 2 demonstrated that building up this protection is a continuous process that is completed in approximately half a second after cue onset. The similarities between shifting attention in perceptual and remembered space are discussed. PMID:25628555
Cue competition affects temporal dynamics of edge-assignment in human visual cortex.
Brooks, Joseph L; Palmer, Stephen E
2011-03-01
Edge-assignment determines the perception of relative depth across an edge and the shape of the closer side. Many cues determine edge-assignment, but relatively little is known about the neural mechanisms involved in combining these cues. Here, we manipulated extremal edge and attention cues to bias edge-assignment such that these two cues either cooperated or competed. To index their neural representations, we flickered figure and ground regions at different frequencies and measured the corresponding steady-state visual-evoked potentials (SSVEPs). Figural regions had stronger SSVEP responses than ground regions, independent of whether they were attended or unattended. In addition, competition and cooperation between the two edge-assignment cues significantly affected the temporal dynamics of edge-assignment processes. The figural SSVEP response peaked earlier when the cues causing it cooperated than when they competed, but sustained edge-assignment effects were equivalent for cooperating and competing cues, consistent with a winner-take-all outcome. These results provide physiological evidence that figure-ground organization involves competitive processes that can affect the latency of figural assignment.
Low-level visual attention and its relation to joint attention in autism spectrum disorder.
Jaworski, Jessica L Bean; Eigsti, Inge-Marie
2017-04-01
Visual attention is integral to social interaction and is a critical building block for development in other domains (e.g., language). Furthermore, atypical attention (especially joint attention) is one of the earliest markers of autism spectrum disorder (ASD). The current study assesses low-level visual attention and its relation to social attentional processing in youth with ASD and typically developing (TD) youth, aged 7 to 18 years. The findings indicate difficulty overriding incorrect attentional cues in ASD, particularly with non-social (arrow) cues relative to social (face) cues. The findings also show reduced competition in ASD from cues that remain on-screen. Furthermore, social attention, autism severity, and age were all predictors of competing cue processing. The results suggest that individuals with ASD may be biased towards speeded rather than accurate responding, and further, that reduced engagement with visual information may impede responses to visual attentional cues. Once attention is engaged, individuals with ASD appear to interpret directional cues as meaningful. These findings from a controlled, experimental paradigm were mirrored in results from an ecologically valid measure of social attention. Attentional difficulties may be exacerbated during the complex and dynamic experience of actual social interaction. Implications for intervention are discussed.
Dynamic sound localization in cats
Ruhland, Janet L.; Jones, Amy E.
2015-01-01
Sound localization in cats and humans relies on head-centered acoustic cues. Studies have shown that humans are able to localize sounds during rapid head movements that are directed toward the target or other objects of interest. We studied whether cats are able to utilize similar dynamic acoustic cues to localize acoustic targets delivered during rapid eye-head gaze shifts. We trained cats with visual-auditory two-step tasks in which we presented a brief sound burst during saccadic eye-head gaze shifts toward a prior visual target. No consistent or significant differences in accuracy or precision were found between this dynamic task (2-step saccade) and the comparable static task (single saccade when the head is stable) in either horizontal or vertical direction. Cats appear to be able to process dynamic auditory cues and execute complex motor adjustments to accurately localize auditory targets during rapid eye-head gaze shifts. PMID:26063772
A novel role for visual perspective cues in the neural computation of depth.
Kim, HyungGoo R; Angelaki, Dora E; DeAngelis, Gregory C
2015-01-01
As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extraretinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We found that incorporating these 'dynamic perspective' cues allowed the visual system to generate selectivity for depth sign from motion parallax in macaque cortical area MT, a computation that was previously thought to require extraretinal signals regarding eye velocity. Our findings suggest neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations.
Dynamics of the spatial scale of visual attention revealed by brain event-related potentials
NASA Technical Reports Server (NTRS)
Luo, Y. J.; Greenwood, P. M.; Parasuraman, R.
2001-01-01
The temporal dynamics of the spatial scaling of attention during visual search were examined by recording event-related potentials (ERPs). A total of 16 young participants performed a search task in which the search array was preceded by valid cues that varied in size and hence in precision of target localization. The effects of cue size on short-latency (P1 and N1) ERP components, and the time course of these effects with variation in cue-target stimulus onset asynchrony (SOA), were examined. Reaction time (RT) to discriminate a target was prolonged as cue size increased. The amplitudes of the posterior P1 and N1 components of the ERP evoked by the search array were affected in opposite ways by the size of the precue: P1 amplitude increased whereas N1 amplitude decreased as cue size increased, particularly following the shortest SOA. The results show that when top-down information about the region to be searched is less precise (larger cues), RT is slowed and the neural generators of P1 become more active, reflecting the additional computations required in changing the spatial scale of attention to the appropriate element size to facilitate target discrimination. In contrast, the decrease in N1 amplitude with cue size may reflect a broadening of the spatial gradient of attention. The results provide electrophysiological evidence that changes in the spatial scale of attention modulate neural activity in early visual cortical areas and activate at least two temporally overlapping component processes during visual search.
Audio–visual interactions for motion perception in depth modulate activity in visual area V3A
Ogawa, Akitoshi; Macaluso, Emiliano
2013-01-01
Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414
Effects of visual motion consistent or inconsistent with gravity on postural sway.
Balestrucci, Priscilla; Daprati, Elena; Lacquaniti, Francesco; Maffei, Vincenzo
2017-07-01
Vision plays an important role in postural control, and visual perception of the gravity-defined vertical helps maintain upright stance. In addition, the influence of the gravity field on objects' motion is known to provide a reference for motor and non-motor behavior. However, the role of dynamic visual cues related to gravity in the control of postural balance has been little investigated. In order to understand whether visual cues about gravitational acceleration are relevant for postural control, we assessed the relation between postural sway and visual motion congruent or incongruent with gravitational acceleration. Postural sway of 44 healthy volunteers was recorded by means of force platforms while they watched virtual targets moving in different directions and with different accelerations. Small but significant differences emerged in sway parameters with respect to the characteristics of target motion. Namely, for vertically accelerated targets, gravitational motion (GM) was associated with smaller oscillations of the center of pressure than anti-GM. The present findings support the hypothesis that not only static, but also dynamic visual cues about the direction and magnitude of the gravitational field are relevant for balance control during upright stance.
Slow changing postural cues cancel visual field dependence on self-tilt detection.
Scotto Di Cesare, C; Macaluso, T; Mestre, D R; Bringoux, L
2015-01-01
Interindividual differences influence the multisensory integration process involved in spatial perception. Here, we assessed the effect of visual field dependence on self-tilt detection relative to upright, as a function of static vs. slowly changing visual or postural cues. To that aim, we manipulated slow rotations (i.e., 0.05°/s) of the body and/or the visual scene in pitch. Participants had to indicate whether they felt tilted forward at successive angles. Results show that thresholds for self-tilt detection substantially differed between visual-field-dependent and visual-field-independent subjects when only the visual scene was rotated. This difference was no longer present when the body was actually rotated, whatever the visual scene condition (i.e., absent, static, or rotated relative to the observer). These results suggest that the cancellation of visual field dependence by dynamic postural cues may rely on a multisensory reweighting process, in which slowly changing vestibular/somatosensory inputs prevail over visual inputs.
A novel role for visual perspective cues in the neural computation of depth
Kim, HyungGoo R.; Angelaki, Dora E.; DeAngelis, Gregory C.
2014-01-01
As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extra-retinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We demonstrate that incorporating these “dynamic perspective” cues allows the visual system to generate selectivity for depth sign from motion parallax in macaque area MT, a computation that was previously thought to require extra-retinal signals regarding eye velocity. Our findings suggest novel neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations. PMID:25436667
Motion cue effects on human pilot dynamics in manual control
NASA Technical Reports Server (NTRS)
Washizu, K.; Tanaka, K.; Endo, S.; Itoko, T.
1977-01-01
Two experiments were conducted to study the effects of motion cues on human pilots during tracking tasks. The moving-base simulator of the National Aerospace Laboratory was employed as the motion cue device, and the attitude director indicator or the projected visual field was employed as the visual cue device. The chosen controlled elements were second-order unstable systems. It was confirmed that with the aid of motion cues the pilot workload was lessened and consequently the human controllability limits were enlarged. In order to clarify the mechanism of these effects, the describing functions of the human pilots were identified using spectral and time-domain analyses. The results of these analyses suggest that the sensory system for motion cues can effectively extract derivative (rate) information from the signal, which is consistent with existing knowledge in the physiological literature.
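Describing-function identification by spectral analysis, of the kind used above, amounts to estimating a frequency response H(f) = Sxy(f)/Sxx(f) from averaged cross- and auto-spectra of the operator's input and output records. The sketch below is illustrative only: the human pilot is replaced by a known synthetic gain so that the estimator can be sanity-checked, and all signals are simulated.

```python
import numpy as np

def estimate_frequency_response(x, y, n_seg=8):
    """Cross-spectral estimate H(f) = Sxy(f)/Sxx(f), averaged over segments."""
    seg_len = len(x) // n_seg
    Sxx = np.zeros(seg_len // 2 + 1)
    Sxy = np.zeros(seg_len // 2 + 1, dtype=complex)
    for k in range(n_seg):
        xs = x[k * seg_len:(k + 1) * seg_len]
        ys = y[k * seg_len:(k + 1) * seg_len]
        X = np.fft.rfft(xs)
        Y = np.fft.rfft(ys)
        Sxx += (X * np.conj(X)).real   # auto-spectrum of the input
        Sxy += Y * np.conj(X)          # cross-spectrum output-vs-input
    return Sxy / Sxx

rng = np.random.default_rng(1)
x = rng.normal(size=4096)
y = 2.0 * x + 0.01 * rng.normal(size=4096)  # "pilot" acts as a pure gain of 2

H = estimate_frequency_response(x, y)
print(np.round(np.abs(H[1:10]).mean(), 2))  # ≈ 2.0
```

With a real operator, |H(f)| and the phase of H(f) across frequencies are what reveal lead (rate-extracting) behavior of the kind the abstract attributes to the motion-sensing channel.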
Śmigasiewicz, Kamila; Hasan, Gabriel Sami; Verleger, Rolf
2017-01-01
In dynamically changing environments, spatial attention is not equally distributed across the visual field. For instance, when two streams of stimuli are presented left and right, the second target (T2) is better identified in the left visual field (LVF) than in the right visual field (RVF). Recently, it has been shown that this bias is related to weaker stimulus-driven orienting of attention toward the RVF: The RVF disadvantage was reduced with salient task-irrelevant valid cues and increased with invalid cues. Here we studied whether endogenous orienting of attention may also compensate for this unequal distribution of stimulus-driven attention. Explicit information was provided about the location of T1 and T2. Effectiveness of the cue manipulation was confirmed by EEG measures: decreasing alpha power before stream onset with informative cues, earlier latencies of potentials evoked by T1-preceding distractors at the right than at the left hemisphere when T1 was cued left, and decreasing T1- and T2-evoked N2pc amplitudes with informative cues. Importantly, informative cues reduced (though did not completely abolish) the LVF advantage, indicated by improved identification of right T2, and reflected by earlier N2pc latency evoked by right T2 and larger decrease in alpha power after cues indicating right T2. Overall, these results suggest that endogenously driven attention facilitates stimulus-driven orienting of attention toward the RVF, thereby partially overcoming the basic LVF bias in spatial attention.
Differentiating Visual from Response Sequencing during Long-term Skill Learning.
Lynch, Brighid; Beukema, Patrick; Verstynen, Timothy
2017-01-01
The dual-system model of sequence learning posits that during early learning there is an advantage for encoding sequences in sensory frames; however, it remains unclear whether this advantage extends to long-term consolidation. Using the serial RT task, we set out to distinguish the dynamics of learning sequential orders of visual cues from learning sequential responses. On each day, most participants learned a new mapping between a set of symbolic cues and responses made with one of four fingers, after which they were exposed to trial blocks of either randomly ordered cues or deterministic ordered cues (12-item sequence). Participants were randomly assigned to one of four groups (n = 15 per group): Visual sequences (same sequence of visual cues across training days), Response sequences (same order of key presses across training days), Combined (same serial order of cues and responses on all training days), and a Control group (a novel sequence each training day). Across 5 days of training, sequence-specific measures of response speed and accuracy improved faster in the Visual group than any of the other three groups, despite no group differences in explicit awareness of the sequence. The two groups that were exposed to the same visual sequence across days showed a marginal improvement in response binding that was not found in the other groups. These results indicate that there is an advantage, in terms of rate of consolidation across multiple days of training, for learning sequences of actions in a sensory representational space, rather than as motoric representations.
Mission Driven Scene Understanding: Dynamic Environments
2016-06-01
…the Army mission. Then, for example, helpful image cues that relate to mission activities may include time of day, and current and future weather… In other words, visual saliency also can be used to highlight key image cues that relate to Army mission activities.
Neural dynamics for landmark orientation and angular path integration
Seelig, Johannes D.; Jayaraman, Vivek
2015-01-01
Many animals navigate using a combination of visual landmarks and path integration. In mammalian brains, head direction cells integrate these two streams of information by representing an animal's heading relative to landmarks, yet maintaining their directional tuning in darkness based on self-motion cues. Here we use two-photon calcium imaging in head-fixed flies walking on a ball in a virtual reality arena to demonstrate that landmark-based orientation and angular path integration are combined in the population responses of neurons whose dendrites tile the ellipsoid body — a toroidal structure in the center of the fly brain. The population encodes the fly's azimuth relative to its environment, tracking visual landmarks when available and relying on self-motion cues in darkness. When both visual and self-motion cues are absent, a representation of the animal's orientation is maintained in this network through persistent activity — a potential substrate for short-term memory. Several features of the population dynamics of these neurons and their circular anatomical arrangement are suggestive of ring attractors — network structures proposed to support the function of navigational brain circuits. PMID:25971509
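The idea of reading a heading angle out of a ring of tuned neurons can be illustrated with a textbook population-vector decoder. The cosine tuning model below is a generic sketch under invented parameters, not the paper's imaging analysis or a ring-attractor simulation.

```python
import numpy as np

n = 16  # number of cells tiling the ring, each with a preferred direction
preferred = np.linspace(0, 2 * np.pi, n, endpoint=False)

def population_response(heading, gain=1.0):
    """Rectified cosine-tuned firing rates for a given heading (radians)."""
    return gain * np.maximum(0.0, np.cos(preferred - heading))

def decode_heading(rates):
    """Population-vector decoder: angle of the rate-weighted preferred vectors."""
    vec = np.sum(rates * np.exp(1j * preferred))
    return np.angle(vec) % (2 * np.pi)

true_heading = 1.2
rates = population_response(true_heading)
print(round(decode_heading(rates), 2))  # → 1.2
```

Because the decoder only uses relative activity across the ring, the same readout works when overall gain changes, loosely analogous to the heading signal persisting across cue conditions.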
Age Differences in Visual-Auditory Self-Motion Perception during a Simulated Driving Task
Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L.
2016-01-01
Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion. PMID:27199829
The use of visual cues for vehicle control and navigation
NASA Technical Reports Server (NTRS)
Hart, Sandra G.; Battiste, Vernol
1991-01-01
At least three levels of control are required to operate most vehicles: (1) inner-loop control to counteract the momentary effects of disturbances on vehicle position; (2) intermittent maneuvers to avoid obstacles; and (3) outer-loop control to maintain a planned route. Operators monitor dynamic optical relationships in their immediate surroundings to estimate momentary changes in forward, lateral, and vertical position, rates of change in speed and direction of motion, and distance from obstacles. The process of searching the external scene to find landmarks (for navigation) is intermittent and deliberate, while monitoring and responding to subtle changes in the visual scene (for vehicle control) is relatively continuous and 'automatic'. However, since operators may perform both tasks simultaneously, the dynamic optical cues available for a vehicle control task may be determined by the operator's direction of gaze for wayfinding. An attempt to relate the visual processes involved in vehicle control and wayfinding is presented. The frames of reference and information used by different operators (e.g., automobile drivers, airline pilots, and helicopter pilots) are reviewed with particular emphasis on the special problems encountered by helicopter pilots flying nap of the earth (NOE). The goal of this overview is to describe the context within which different vehicle control tasks are performed and to suggest ways in which the use of visual cues for geographical orientation might influence visually guided control activities.
Demonstrating the Potential for Dynamic Auditory Stimulation to Contribute to Motion Sickness
Keshavarz, Behrang; Hettinger, Lawrence J.; Kennedy, Robert S.; Campos, Jennifer L.
2014-01-01
Auditory cues can create the illusion of self-motion (vection) in the absence of visual or physical stimulation. The present study aimed to determine whether auditory cues alone can also elicit motion sickness and how auditory cues contribute to motion sickness when added to visual motion stimuli. Twenty participants were seated in front of a curved projection display and were exposed to a virtual scene that constantly rotated around the participant's vertical axis. The virtual scene contained either visual-only, auditory-only, or a combination of corresponding visual and auditory cues. All participants performed all three conditions in a counterbalanced order. Participants tilted their heads alternately towards the right or left shoulder in all conditions during stimulus exposure in order to create pseudo-Coriolis effects and to maximize the likelihood for motion sickness. Measurements of motion sickness (onset, severity), vection (latency, strength, duration), and postural steadiness (center of pressure) were recorded. Results showed that adding auditory cues to the visual stimuli did not, on average, affect motion sickness and postural steadiness, but it did reduce vection onset times and increase vection strength compared to pure visual or pure auditory stimulation. Eighteen of the 20 participants reported at least slight motion sickness in the two conditions including visual stimuli. More interestingly, six participants also reported slight motion sickness during pure auditory stimulation and two of the six participants stopped the pure auditory test session due to motion sickness. The present study is the first to demonstrate that motion sickness may be caused by pure auditory stimulation, which we refer to as “auditorily induced motion sickness”. PMID:24983752
The Neural Correlates of Hierarchical Predictions for Perceptual Decisions.
Weilnhammer, Veith A; Stuke, Heiner; Sterzer, Philipp; Schmack, Katharina
2018-05-23
Sensory information is inherently noisy, sparse, and ambiguous. In contrast, visual experience is usually clear, detailed, and stable. Bayesian theories of perception resolve this discrepancy by assuming that prior knowledge about the causes underlying sensory stimulation actively shapes perceptual decisions. The CNS is believed to entertain a generative model aligned to dynamic changes in the hierarchical states of our volatile sensory environment. Here, we used model-based fMRI to study the neural correlates of the dynamic updating of hierarchically structured predictions in male and female human observers. We devised a crossmodal associative learning task with covertly interspersed ambiguous trials in which participants engaged in hierarchical learning based on changing contingencies between auditory cues and visual targets. By inverting a Bayesian model of perceptual inference, we estimated individual hierarchical predictions, which significantly biased perceptual decisions under ambiguity. Although "high-level" predictions about the cue-target contingency correlated with activity in supramodal regions such as orbitofrontal cortex and hippocampus, dynamic "low-level" predictions about the conditional target probabilities were associated with activity in retinotopic visual cortex. Our results suggest that our CNS updates distinct representations of hierarchical predictions that continuously affect perceptual decisions in a dynamically changing environment. SIGNIFICANCE STATEMENT Bayesian theories posit that our brain entertains a generative model to provide hierarchical predictions regarding the causes of sensory information. Here, we use behavioral modeling and fMRI to study the neural underpinnings of such hierarchical predictions. 
We show that "high-level" predictions about the strength of dynamic cue-target contingencies during crossmodal associative learning correlate with activity in orbitofrontal cortex and the hippocampus, whereas "low-level" conditional target probabilities were reflected in retinotopic visual cortex. Our findings empirically corroborate theoretical accounts of the role of hierarchical predictions in visual perception and contribute substantially to a longstanding debate on the link between sensory predictions and orbitofrontal or hippocampal activity. Our work fundamentally advances the mechanistic understanding of perceptual inference in the human brain.
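The hierarchical updating this abstract describes can be caricatured with a two-level Bayesian sketch (illustrative only, not the authors' model; the function name and trial encoding are hypothetical): a slowly accumulated "high-level" belief about the cue-target contingency, maintained here as a Beta distribution, yields a trial-wise "low-level" predicted target probability, and that prediction decides ambiguous trials.

```python
def run_learner(trials, a=1, b=1):
    """Two-level toy learner. `trials` is a list where 1/0 are observed
    targets after a cue and None marks an ambiguous trial. A Beta(a, b)
    belief over the cue-target contingency (the 'high level') produces a
    trial-wise predicted target probability p (the 'low level'); on
    ambiguous trials the decision simply follows that prediction."""
    decisions = []
    for target in trials:
        p = a / (a + b)            # low-level prediction for this trial
        if target is None:         # ambiguous: prior belief biases the decision
            decisions.append(p > 0.5)
        else:                      # unambiguous: update the high-level belief
            a += target
            b += 1 - target
    return decisions, p

# Three target-present trials make the learner report 'present' when
# the stimulus itself is ambiguous.
decisions, p = run_learner([1, 1, 1, None])
```

This is only meant to make the paper's logic concrete: the bias on ambiguous trials is exactly the learned prediction, so manipulating contingencies manipulates perception.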
Venter, Jan A; Prins, Herbert H T; Mashanova, Alla; Slotow, Rob
2017-01-01
Finding suitable forage patches in a heterogeneous landscape, where patches change dynamically both spatially and temporally, can be challenging for large herbivores, especially if they have no a priori knowledge of the location of the patches. We tested whether three large grazing herbivores with a variety of different traits improve their efficiency when foraging at a heterogeneous habitat patch scale by using visual cues to gain a priori knowledge about potential higher-value foraging patches. For each species (zebra (Equus burchelli), red hartebeest (Alcelaphus buselaphus caama) and eland (Tragelaphus oryx)), we used step lengths and directionality of movement to infer whether they were using visual cues to find suitable forage patches at a habitat patch scale. Step lengths were significantly longer for all species when moving to non-visible patches than to visible patches, but all movements showed little directionality. Of the three species, zebra movements were the most directional. Red hartebeest had the shortest step lengths and zebra the longest. We conclude that these large grazing herbivores may not exclusively use visual cues when foraging at a habitat patch scale, but would rather adapt their movement behaviour, mainly step length, to the heterogeneity of the specific landscape.
Deception Detection in Multicultural Coalitions: Foundations for a Cognitive Model
2011-06-01
and spontaneous vs. deliberate and contrived facial expression of emotions, symmetry, leakage through microexpressions, hand postures, dynamic... sequences of visually detectable cues, such as facial muscle-group coordination and correlations expressed as changes in facial expressions and face... concert, whereas facial expressions of deceivers emphasize a few cues that arise more randomly and chaotically [15]. A smile without the use of
NASA Astrophysics Data System (ADS)
Viertler, Franz; Hajek, Manfred
2015-05-01
To overcome the challenge of helicopter flight in degraded visual environments, current research considers head-mounted displays with 3D-conformal (scene-linked) visual cues as the most promising display technology. For pilot-in-the-loop simulations with HMDs, a highly accurate registration of the augmented visual system is required. In rotorcraft flight simulators the outside visual cues are usually provided by a dome projection system, since a wide field-of-view (e.g. horizontally > 200° and vertically > 80°) is required, which can hardly be achieved with collimated viewing systems. However, the focal distance of optical see-through HMDs mostly does not match the distance from the pilot's eye-point position to the curved screen, which is also dependent on head motion. Hence, a dynamic vergence correction has been implemented to avoid binocular disparity. In addition, the parallax error induced by even small translational head motions is corrected, using a head-tracking system, so that the imagery is adjusted onto the projected screen. For this purpose, two options are presented. The correction can be achieved by rendering the view with yaw and pitch offset angles dependent on the deviating head position from the design eye-point of the spherical projection system. Furthermore, it can be solved by implementing a dynamic eye-point in the multi-channel projection system for the outside visual cues. Both options have been investigated for the integration of a binocular HMD into the Rotorcraft Simulation Environment (ROSIE) at the Technische Universitaet Muenchen. Pros and cons of both possibilities with regard to integration issues and usability in flight simulations will be discussed.
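The parallax correction via yaw and pitch offset angles can be illustrated with simple viewing geometry (a sketch under stated assumptions, not the ROSIE implementation; `parallax_offsets` and its coordinate convention are hypothetical): with the design eye-point at the origin, a displaced head sees a fixed point on the dome along a different viewing vector, and the yaw/pitch difference between the two vectors is the offset the renderer must apply.

```python
import math

def parallax_offsets(head_pos, screen_point):
    """Yaw/pitch offsets (radians) between the design eye-point at the
    origin and a displaced head position, both looking at the same point
    on the dome screen. Axes (illustrative): x forward, y left, z up."""
    hx, hy, hz = head_pos
    sx, sy, sz = screen_point
    # Viewing vector from the displaced head to the screen point.
    vx, vy, vz = sx - hx, sy - hy, sz - hz
    yaw_design = math.atan2(sy, sx)
    pitch_design = math.atan2(sz, math.hypot(sx, sy))
    yaw_head = math.atan2(vy, vx)
    pitch_head = math.atan2(vz, math.hypot(vx, vy))
    return yaw_head - yaw_design, pitch_head - pitch_design

# A 10 cm lateral head shift against a point 3 m straight ahead on the
# dome yields a yaw error of roughly 1.9 degrees.
dyaw, dpitch = parallax_offsets((0.0, 0.1, 0.0), (3.0, 0.0, 0.0))
```

The size of the offset shrinks with screen distance, which is why parallax from small head motions matters most for near projection surfaces.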
NASA Technical Reports Server (NTRS)
Berthoz, A.; Pavard, B.; Young, L. R.
1975-01-01
The basic characteristics of the sensation of linear horizontal motion have been studied. Objective linear motion was induced by means of a moving cart. Visually induced linear motion perception (linearvection) was obtained by projection of moving images at the periphery of the visual field. Image velocity and luminance thresholds for the appearance of linearvection have been measured and are in the range of those for image motion detection (without sensation of self-motion) by the visual system. Latencies of onset are around 1 sec and short-term adaptation has been shown. The dynamic range of the visual analyzer, as judged by frequency analysis, is lower than that of the vestibular analyzer. Conflicting situations in which visual cues contradict vestibular and other proprioceptive cues show, in the case of linearvection, a dominance of vision, which supports the idea of an essential although not independent role of vision in self-motion perception.
Valente, Daniel L.; Braasch, Jonas; Myrbeck, Shane A.
2012-01-01
Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audiovisual environment in which participants were instructed to make auditory width judgments in dynamic bi-modal settings. The results of these psychophysical tests suggest the importance of congruent audio-visual presentation to the ecological interpretation of an auditory scene. Supporting data were accumulated in five rooms of ascending volumes and varying reverberation times. Participants were given an audiovisual matching test in which they were instructed to pan the auditory width of a performing ensemble to a varying set of audio and visual cues in rooms. Results show that both auditory and visual factors affect the collected responses and that the two sensory modalities interact in distinct ways. The greatest differences between the panned audio stimuli given a fixed visual width were found in the physical space with the largest volume and the greatest source distance. These results suggest, in this specific instance, a predominance of auditory cues in the spatial analysis of the bi-modal scene. PMID:22280585
Gait parameter control timing with dynamic manual contact or visual cues
Shi, Peter; Werner, William
2016-01-01
We investigated the timing of gait parameter changes (stride length, peak toe velocity, and double-, single-support, and complete step duration) to control gait speed. Eleven healthy participants adjusted their gait speed on a treadmill to maintain a constant distance between them and a fore-aft oscillating cue (a place on a conveyor belt surface). The experimental design balanced conditions of cue modality (vision: eyes-open; manual contact: eyes-closed while touching the cue); treadmill speed (0.2, 0.4, 0.85, and 1.3 m/s); and cue motion (none, ±10 cm at 0.09, 0.11, and 0.18 Hz). Correlation analyses revealed a number of temporal relationships between gait parameters and cue speed. The results suggest that neural control ranged from feedforward to feedback. Specifically, step length preceded cue velocity during double-support duration suggesting anticipatory control. Peak toe velocity nearly coincided with its most-correlated cue velocity during single-support duration. The toe-off concluding step and double-support durations followed their most-correlated cue velocity, suggesting feedback control. Cue-tracking accuracy and cue velocity correlations with timing parameters were higher with the manual contact cue than visual cue. The cue/gait timing relationships generalized across cue modalities, albeit with greater delays of step-cycle events relative to manual contact cue velocity. We conclude that individual kinematic parameters of gait are controlled to achieve a desired velocity at different specific times during the gait cycle. The overall timing pattern of instantaneous cue velocities associated with different gait parameters is conserved across cues that afford different performance accuracies. This timing pattern may be temporally shifted to optimize control. Different cue/gait parameter latencies in our nonadaptation paradigm provide general-case evidence of the independent control of gait parameters previously demonstrated in gait adaptation paradigms. 
PMID:26936979
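The timing relationships this study reports rest on lagged correlations between cue velocity and individual gait parameters, with the sign of the best lag distinguishing anticipatory (feedforward) from reactive (feedback) control. A generic sketch of such an analysis (not the authors' pipeline; `best_lag` and its conventions are hypothetical):

```python
import numpy as np

def best_lag(cue_velocity, gait_param, max_lag):
    """Return (lag, correlation) at which gait_param correlates most
    strongly with cue_velocity. A positive lag (in samples) means the
    gait parameter follows the cue (feedback-like); a negative lag means
    it leads the cue (anticipatory). Illustrative analysis only."""
    best = (0, -2.0)  # correlations lie in [-1, 1], so -2 is a safe floor
    n = len(cue_velocity)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = cue_velocity[: n - lag], gait_param[lag:]
        else:
            x, y = cue_velocity[-lag:], gait_param[: n + lag]
        r = np.corrcoef(x, y)[0, 1]
        if r > best[1]:
            best = (lag, r)
    return best

# Synthetic check: a gait signal delayed by 5 samples relative to a
# sinusoidal cue velocity is recovered at lag +5 (feedback-like).
cue = np.sin(np.linspace(0.0, 4.0 * np.pi, 200))
delayed = np.concatenate([np.zeros(5), cue])[:200]
lag, r = best_lag(cue, delayed, max_lag=20)
```

In the study's terms, a parameter such as step length peaking at a negative lag would indicate anticipatory control, while toe-off timing peaking at a positive lag would indicate feedback control.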
van Hoesel, Richard J M
2015-04-01
One of the key benefits of using cochlear implants (CIs) in both ears rather than just one is improved localization. It is likely that in complex listening scenes, improved localization allows bilateral CI users to orient toward talkers to improve signal-to-noise ratios and gain access to visual cues, but to date, that conjecture has not been tested. To obtain an objective measure of that benefit, seven bilateral CI users were assessed for both auditory-only and audio-visual speech intelligibility in noise using a novel dynamic spatial audio-visual test paradigm. For each trial conducted in spatially distributed noise, first, an auditory-only cueing phrase that was spoken by one of four talkers was selected and presented from one of four locations. Shortly afterward, a target sentence was presented that was either audio-visual or, in another test configuration, audio-only and was spoken by the same talker and from the same location as the cueing phrase. During the target presentation, visual distractors were added at other spatial locations. Results showed that in terms of speech reception thresholds (SRTs), the average improvement for bilateral listening over the better performing ear alone was 9 dB for the audio-visual mode, and 3 dB for audition-alone. Comparison of bilateral performance for audio-visual and audition-alone showed that inclusion of visual cues led to an average SRT improvement of 5 dB. For unilateral device use, no such benefit arose, presumably due to the greatly reduced ability to localize the target talker to acquire visual information. The bilateral CI speech intelligibility advantage over the better ear in the present study is much larger than that previously reported for static talker locations and indicates greater everyday speech benefits and improved cost-benefit than estimated to date.
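Speech reception thresholds (SRTs) like those reported above are usually estimated with an adaptive tracking procedure. A minimal 1-up/1-down staircase sketch (generic, not necessarily this study's exact procedure; `estimate_srt` and the `respond` callback are hypothetical):

```python
def estimate_srt(respond, start_snr=10.0, step=2.0, reversals_needed=8):
    """Simple 1-up/1-down adaptive staircase converging on the SNR (dB)
    at which the listener is correct 50% of the time, a common
    operational definition of the speech reception threshold.
    `respond(snr)` returns True if the sentence was repeated correctly."""
    snr = start_snr
    last_correct = None
    reversal_snrs = []
    while len(reversal_snrs) < reversals_needed:
        correct = respond(snr)
        if last_correct is not None and correct != last_correct:
            reversal_snrs.append(snr)   # track direction reversals
        snr += -step if correct else step  # harder after a hit, easier after a miss
        last_correct = correct
    return sum(reversal_snrs) / len(reversal_snrs)

# A deterministic simulated listener with a true threshold of 2 dB:
# the staircase settles midway between the bracketing levels 0 and 2 dB.
srt = estimate_srt(lambda snr: snr >= 2.0)
```

Against such a procedure, the bilateral audio-visual benefit in the abstract corresponds to being able to tolerate 9 dB more noise for the same intelligibility.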
NASA Technical Reports Server (NTRS)
Khan, M. Javed; Rossi, Marcia; Heath, Bruce; Ali, Syed F.; Ward, Marcus
2006-01-01
The effects of out-of-the-window cues on learning a straight-in landing approach and a level 360° turn by novice pilots on a flight simulator have been investigated. The treatments consisted of training with and without visual cues, as well as varying density of visual cues. The performance of the participants was then evaluated through similar but more challenging tasks. It was observed that the participants in the landing study who trained with visual cues performed more poorly than those who trained without the cues. However, those who trained with a faded-cue sequence performed slightly better than those who trained without visual cues. In the level turn study, it was observed that those who trained with the visual cues performed better than those who trained without visual cues. The study also showed that participants who trained with a lower density of cues performed better than those who trained with a higher density of visual cues.
Multiperson visual focus of attention from head pose and meeting contextual cues.
Ba, Sileye O; Odobez, Jean-Marc
2011-01-01
This paper introduces a novel contextual model for the recognition of people's visual focus of attention (VFOA) in meetings from audio-visual perceptual cues. More specifically, instead of independently recognizing the VFOA of each meeting participant from his own head pose, we propose to jointly recognize the participants' visual attention in order to introduce context-dependent interaction models that relate to group activity and the social dynamics of communication. Meeting contextual information is represented by the location of people, conversational events identifying floor holding patterns, and a presentation activity variable. By modeling the interactions between the different contexts and their combined and sometimes contradictory impact on the gazing behavior, our model allows us to handle VFOA recognition in difficult task-based meetings involving artifacts, presentations, and moving people. We validated our model through rigorous evaluation on a publicly available and challenging data set of 12 real meetings (5 hours of data). The results demonstrated that the integration of the presentation and conversation dynamical context using our model can lead to significant performance improvements.
[Visual cuing effect for haptic angle judgment].
Era, Ataru; Yokosawa, Kazuhiko
2009-08-01
We investigated whether visual cues are useful for judging haptic angles. Participants explored three-dimensional angles with a virtual haptic feedback device. For visual cues, we used a location cue, which synchronized with haptic exploration, and a space cue, which specified the haptic space. In Experiment 1, angles were judged more correctly with both cues, but were overestimated with a location cue only. In Experiment 2, the visual cues emphasized depth; overestimation with location cues occurred, but space cues had no influence. The results showed that (a) when both cues are presented, haptic angles are judged more correctly; (b) location cues facilitate only motion information, not depth information; and (c) haptic angles are apt to be overestimated when both haptic and visual information are present.
Electrophysiological evidence for Audio-visuo-lingual speech integration.
Treille, Avril; Vilain, Coriandre; Schwartz, Jean-Luc; Hueber, Thomas; Sato, Marc
2018-01-31
Recent neurophysiological studies demonstrate that audio-visual speech integration partly operates through temporal expectations and speech-specific predictions. From these results, one common view is that the binding of auditory and visual, lipread, speech cues relies on their joint probability and prior associative audio-visual experience. The present EEG study examined whether visual tongue movements integrate with relevant speech sounds, despite little associative audio-visual experience between the two modalities. A second objective was to determine possible similarities and differences of audio-visual speech integration between unusual audio-visuo-lingual and classical audio-visuo-labial modalities. To this aim, participants were presented with auditory, visual, and audio-visual isolated syllables, with the visual presentation related to either a sagittal view of the tongue movements or a facial view of the lip movements of a speaker, with lingual and facial movements previously recorded by an ultrasound imaging system and a video camera. In line with previous EEG studies, our results revealed an amplitude decrease and a latency facilitation of P2 auditory evoked potentials in both audio-visuo-lingual and audio-visuo-labial conditions compared to the sum of unimodal conditions. These results argue against the view that auditory and visual speech cues solely integrate based on prior associative audio-visual perceptual experience. Rather, they suggest that dynamic and phonetic informational cues are sharable across sensory modalities, possibly through a cross-modal transfer of implicit articulatory motor knowledge.
Harrison, Neil R; Woodhouse, Rob
2016-05-01
Previous research has demonstrated that threatening, compared to neutral pictures, can bias attention towards non-emotional auditory targets. Here we investigated which subcomponents of attention contributed to the influence of emotional visual stimuli on auditory spatial attention. Participants indicated the location of an auditory target, after brief (250 ms) presentation of a spatially non-predictive peripheral visual cue. Responses to targets were faster at the location of the preceding visual cue, compared to at the opposite location (cue validity effect). The cue validity effect was larger for targets following pleasant and unpleasant cues compared to neutral cues, for right-sided targets. For unpleasant cues, the crossmodal cue validity effect was driven by delayed attentional disengagement, and for pleasant cues, it was driven by enhanced engagement. We conclude that both pleasant and unpleasant visual cues influence the distribution of attention across modalities and that the associated attentional mechanisms depend on the valence of the visual cue.
NASA Technical Reports Server (NTRS)
Sitterley, T. E.; Zaitzeff, L. P.; Berge, W. A.
1972-01-01
Flight control and procedural task skill degradation, and the effectiveness of retraining methods were evaluated for a simulated space vehicle approach and landing under instrument and visual flight conditions. Fifteen experienced pilots were trained and then tested after 4 months either without the benefits of practice or with static rehearsal, dynamic rehearsal or with dynamic warmup practice. Performance on both the flight control and procedure tasks degraded significantly after 4 months. The rehearsal methods effectively countered procedure task skill degradation, while dynamic rehearsal or a combination of static rehearsal and dynamic warmup practice was required for the flight control tasks. The quality of the retraining methods appeared to be primarily dependent on the efficiency of visual cue reinforcement.
Bandwidth and SIMDUCE as simulator fidelity criteria
NASA Technical Reports Server (NTRS)
Key, David
1992-01-01
The potential application of two concepts from the new Handling Qualities Specification for Military Rotorcraft was discussed. The first concept is bandwidth, a measure of the dynamic response to control. The second is a qualitative technique developed for assessing the visual cue environment the pilot has in bad weather and at night. Simulated Day Usable Cue Environment (SIMDUCE) applies this concept to assessing the day cuing fidelity in the simulator.
Should visual speech cues (speechreading) be considered when fitting hearing aids?
NASA Astrophysics Data System (ADS)
Grant, Ken
2002-05-01
When talker and listener are face-to-face, visual speech cues become an important part of the communication environment, and yet, these cues are seldom considered when designing hearing aids. Models of auditory-visual speech recognition highlight the importance of complementary versus redundant speech information for predicting auditory-visual recognition performance. Thus, for hearing aids to work optimally when visual speech cues are present, it is important to know whether the cues provided by amplification and the cues provided by speechreading complement each other. In this talk, data will be reviewed that show nonmonotonicity between auditory-alone speech recognition and auditory-visual speech recognition, suggesting that efforts designed solely to improve auditory-alone recognition may not always result in improved auditory-visual recognition. Data will also be presented showing that one of the most important speech cues for enhancing auditory-visual speech recognition performance, voicing, is often the cue that benefits least from amplification.
Neural dynamics of reward probability coding: a Magnetoencephalographic study in humans
Thomas, Julie; Vanni-Mercier, Giovanna; Dreher, Jean-Claude
2013-01-01
Prediction of future rewards and discrepancy between actual and expected outcomes (prediction error) are crucial signals for adaptive behavior. In humans, a number of fMRI studies demonstrated that reward probability modulates these two signals in a large brain network. Yet, the spatio-temporal dynamics underlying the neural coding of reward probability remains unknown. Here, using magnetoencephalography, we investigated the neural dynamics of prediction and reward prediction error computations while subjects learned to associate cues of slot machines with monetary rewards with different probabilities. We showed that event-related magnetic fields (ERFs) arising from the visual cortex coded the expected reward value 155 ms after the cue, demonstrating that reward value signals emerge early in the visual stream. Moreover, a prediction error was reflected in an ERF peaking 300 ms after the rewarded outcome and showing decreasing amplitude with higher reward probability. This prediction error signal was generated in a network including the anterior and posterior cingulate cortex. These findings pinpoint the spatio-temporal characteristics underlying reward probability coding. Together, our results provide insights into the neural dynamics underlying the ability to learn probabilistic stimulus-reward contingencies. PMID:24302894
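The prediction and prediction-error signals described above follow the standard delta-rule form used throughout reinforcement-learning models of conditioning; a minimal sketch (illustrative only, not the authors' analysis model; function name and learning rate are assumptions):

```python
def learn_cue_values(outcomes, alpha=0.2):
    """Rescorla-Wagner-style updating for one slot-machine cue: the
    expected value V is the 'prediction', and delta = reward - V is the
    'prediction error' whose magnitude shrinks as the cue's reward
    probability is learned."""
    v = 0.0
    deltas = []
    for reward in outcomes:
        delta = reward - v       # prediction error at outcome delivery
        deltas.append(delta)
        v += alpha * delta       # update the expectation for the next trial
    return v, deltas

# For a cue rewarded on every trial, V approaches 1 and the prediction
# error on rewarded outcomes decays toward zero with learning.
v, deltas = learn_cue_values([1] * 20)
```

This mirrors the reported ERF pattern: for well-learned, high-probability cues the outcome-locked error response is small, whereas surprising outcomes produce large errors.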
Hierarchical acquisition of visual specificity in spatial contextual cueing.
Lie, Kin-Pou
2015-01-01
Spatial contextual cueing refers to the improvement of visual search performance when invariant associations between target locations and distractor spatial configurations are learned incidentally. Using the instance theory of automatization and the reverse hierarchy theory of visual perceptual learning, this study explores the acquisition of visual specificity in spatial contextual cueing. Two experiments in which detailed visual features were irrelevant for distinguishing between spatial contexts found that spatial contextual cueing was visually generic in difficult trials when the trials were not preceded by easy trials (Experiment 1) but that spatial contextual cueing progressed to visual specificity when difficult trials were preceded by easy trials (Experiment 2). These findings support reverse hierarchy theory, which predicts that even when detailed visual features are irrelevant for distinguishing between spatial contexts, spatial contextual cueing can progress to visual specificity if the stimuli remain constant, the task is difficult, and difficult trials are preceded by easy trials. However, these findings are inconsistent with instance theory, which predicts that when detailed visual features are irrelevant for distinguishing between spatial contexts, spatial contextual cueing will not progress to visual specificity. This study concludes that the acquisition of visual specificity in spatial contextual cueing is more plausibly hierarchical, rather than instance-based.
Shuttle vehicle and mission simulation requirements report, volume 1
NASA Technical Reports Server (NTRS)
Burke, J. F.
1972-01-01
The requirements for the space shuttle vehicle and mission simulation are developed to analyze the systems, mission, operations, and interfaces. The requirements are developed according to the following subject areas: (1) mission envelope, (2) orbit flight dynamics, (3) shuttle vehicle systems, (4) external interfaces, (5) crew procedures, (6) crew station, (7) visual cues, and (8) aural cues. Line drawings and diagrams of the space shuttle are included to explain the various systems and components.
Speech identification in noise: Contribution of temporal, spectral, and visual speech cues.
Kim, Jeesun; Davis, Chris; Groot, Christopher
2009-12-01
This study investigated the degree to which two types of reduced auditory signals (cochlear implant simulations) and visual speech cues combined for speech identification. The auditory speech stimuli were filtered to have only amplitude envelope cues or both amplitude envelope and spectral cues and were presented with/without visual speech. In Experiment 1, IEEE sentences were presented in quiet and noise. For in-quiet presentation, speech identification was enhanced by the addition of both spectral and visual speech cues. Due to a ceiling effect, the degree to which these effects combined could not be determined. In noise, these facilitation effects were more marked and were additive. Experiment 2 examined consonant and vowel identification in the context of CVC or VCV syllables presented in noise. For consonants, both spectral and visual speech cues facilitated identification and these effects were additive. For vowels, the effect of combined cues was underadditive, with the effect of spectral cues reduced when presented with visual speech cues. Analysis indicated that without visual speech, spectral cues facilitated the transmission of place information and vowel height, whereas with visual speech, they facilitated lip rounding, with little impact on the transmission of place information.
The effects of sequential attention shifts within visual working memory.
Li, Qi; Saiki, Jun
2014-01-01
Previous studies have shown conflicting data as to whether it is possible to sequentially shift spatial attention among visual working memory (VWM) representations. The present study investigated this issue by asynchronously presenting attentional cues during the retention interval of a change detection task. In particular, we focused on two types of sequential attention shifts: (1) orienting attention to one location, and then withdrawing attention from it, and (2) switching the focus of attention from one location to another. In Experiment 1, a withdrawal cue was presented after a spatial retro-cue to measure the effect of withdrawing attention. The withdrawal cue significantly reduced the cost of invalid spatial cues, but surprisingly, did not attenuate the benefit of valid spatial cues. This indicates that the withdrawal cue only triggered the activation of facilitative components but not inhibitory components of attention. In Experiment 2, two spatial retro-cues were presented successively to examine the effect of switching the focus of attention. We observed equivalent benefits of the first and second spatial cues, suggesting that participants were able to reorient attention from one location to another within VWM, and the reallocation of attention did not attenuate memory at the first-cued location. In Experiment 3, we found that reducing the validity of the preceding spatial cue did lead to a significant reduction in its benefit. However, performance was still better at first-cued locations than at uncued and neutral locations, indicating that the first cue benefit might have been preserved both partially under automatic control and partially under voluntary control. Our findings revealed new properties of dynamic attentional control in VWM maintenance.
The Role of Visual Cues in Microgravity Spatial Orientation
NASA Technical Reports Server (NTRS)
Oman, Charles M.; Howard, Ian P.; Smith, Theodore; Beall, Andrew C.; Natapoff, Alan; Zacher, James E.; Jenkin, Heather L.
2003-01-01
In weightlessness, astronauts must rely on vision to remain spatially oriented. Although gravitational down cues are missing, most astronauts maintain a subjective vertical, a subjective sense of which way is up. This is evidenced by anecdotal reports of crewmembers feeling upside down (inversion illusions) or feeling that a floor has become a ceiling and vice versa (visual reorientation illusions). Instability in the subjective vertical direction can trigger disorientation and space motion sickness. On Neurolab, a virtual environment display system was used to conduct five interrelated experiments, which quantified: (a) how the direction of each person's subjective vertical depends on the orientation of the surrounding visual environment, (b) whether rolling the virtual visual environment produces stronger illusions of circular self-motion (circular vection) and more visual reorientation illusions than on Earth, (c) whether a virtual scene moving past the subject produces a stronger linear self-motion illusion (linear vection), and (d) whether deliberate manipulation of the subjective vertical changes a crewmember's interpretation of shading or the ability to recognize objects. None of the crew's subjective vertical indications became more independent of environmental cues in weightlessness. Three who were either strongly dependent on or independent of stationary visual cues in preflight tests remained so inflight. One other became more visually dependent inflight, but recovered postflight. Susceptibility to illusions of circular self-motion increased in flight. The time to the onset of linear self-motion illusions decreased and the illusion magnitude significantly increased for most subjects while free floating in weightlessness. These decreased toward one-G levels when the subject 'stood up' in weightlessness by wearing constant force springs.
For several subjects, changing the relative direction of the subjective vertical in weightlessness, either by body rotation or by simply cognitively initiating a visual reorientation, altered the illusion of convexity produced when viewing a flat, shaded disc. It changed at least one person's ability to recognize previously presented two-dimensional shapes. Overall, results show that most astronauts become more dependent on dynamic visual motion cues and some become responsive to stationary orientation cues. The direction of the subjective vertical is labile in the absence of gravity. This can interfere with the ability to properly interpret shading, or to recognize complex objects in different orientations.
Tiger salamanders' (Ambystoma tigrinum) response learning and usage of visual cues.
Kundey, Shannon M A; Millar, Roberto; McPherson, Justin; Gonzalez, Maya; Fitz, Aleyna; Allen, Chadbourne
2016-05-01
We explored tiger salamanders' (Ambystoma tigrinum) learning to execute a response within a maze as proximal visual cue conditions varied. In Experiment 1, salamanders learned to turn consistently in a T-maze for reinforcement before the maze was rotated. All learned the initial task and executed the trained turn during test, suggesting that they learned to demonstrate the reinforced response during training and continued to perform it during test. In a second experiment utilizing a similar procedure, two visual cues were placed consistently at the maze junction. Salamanders were reinforced for turning towards one cue. Cue placement was reversed during test. All learned the initial task, but executed the trained turn rather than turning towards the visual cue during test, evidencing response learning. In Experiment 3, we investigated whether a compound visual cue could control salamanders' behaviour when it was the only cue predictive of reinforcement in a cross-maze by varying start position and cue placement. All learned to turn in the direction indicated by the compound visual cue, indicating that visual cues can come to control their behaviour. Following training, testing revealed that salamanders attended to stimuli foreground over background features. Overall, these results suggest that salamanders learn to execute responses over learning to use visual cues but can use visual cues if required. Our success with this paradigm offers the potential in future studies to explore salamanders' cognition further, as well as to shed light on how features of the tiger salamanders' life history (e.g. hibernation and metamorphosis) impact cognition.
Gait parameter control timing with dynamic manual contact or visual cues.
Rabin, Ely; Shi, Peter; Werner, William
2016-06-01
We investigated the timing of gait parameter changes (stride length, peak toe velocity, and double-, single-support, and complete step duration) to control gait speed. Eleven healthy participants adjusted their gait speed on a treadmill to maintain a constant distance between them and a fore-aft oscillating cue (a place on a conveyor belt surface). The experimental design balanced conditions of cue modality (vision: eyes-open; manual contact: eyes-closed while touching the cue); treadmill speed (0.2, 0.4, 0.85, and 1.3 m/s); and cue motion (none, ±10 cm at 0.09, 0.11, and 0.18 Hz). Correlation analyses revealed a number of temporal relationships between gait parameters and cue speed. The results suggest that neural control ranged from feedforward to feedback. Specifically, step length preceded cue velocity during double-support duration suggesting anticipatory control. Peak toe velocity nearly coincided with its most-correlated cue velocity during single-support duration. The toe-off concluding step and double-support durations followed their most-correlated cue velocity, suggesting feedback control. Cue-tracking accuracy and cue velocity correlations with timing parameters were higher with the manual contact cue than visual cue. The cue/gait timing relationships generalized across cue modalities, albeit with greater delays of step-cycle events relative to manual contact cue velocity. We conclude that individual kinematic parameters of gait are controlled to achieve a desired velocity at different specific times during the gait cycle. The overall timing pattern of instantaneous cue velocities associated with different gait parameters is conserved across cues that afford different performance accuracies. This timing pattern may be temporally shifted to optimize control. Different cue/gait parameter latencies in our nonadaptation paradigm provide general-case evidence of the independent control of gait parameters previously demonstrated in gait adaptation paradigms. 
Copyright © 2016 the American Physiological Society.
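Lead/lag relationships like those reported above are the kind that fall out of a lagged cross-correlation between each gait parameter's time series and cue velocity; a minimal sketch of locating the lag of peak correlation (the signals and values below are illustrative, not the study's data):

```python
import numpy as np

def peak_lag(x, y, fs, max_lag_s=2.0):
    """Lag (seconds) at which x best correlates with y.
    Positive lag means x follows y; negative means x leads y."""
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    corr = [np.corrcoef(x[max(0, l):len(x) - max(0, -l)],
                        y[max(0, -l):len(y) - max(0, l)])[0, 1]
            for l in lags]
    return lags[int(np.argmax(corr))] / fs

# Illustrative cue velocity and a gait parameter that tracks it 0.4 s later:
fs = 50
t = np.arange(0, 60, 1 / fs)
cue_vel = np.sin(2 * np.pi * 0.11 * t)        # 0.11 Hz cue motion, as in the study
gait = np.roll(cue_vel, int(0.4 * fs))         # delayed tracking response
lag = peak_lag(gait, cue_vel, fs)              # ≈ 0.4 s (gait follows cue)
```

Repeating this per gait parameter gives the pattern of anticipatory (negative-lag) versus feedback (positive-lag) control described in the abstract.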
Smets, Karolien; Moors, Pieter; Reynvoet, Bert
2016-01-01
Performance in a non-symbolic comparison task, in which participants are asked to indicate the larger numerosity of two dot arrays, is assumed to be supported by the Approximate Number System (ANS). This system allows participants to judge numerosity independently from other visual cues. Supporting this idea, previous studies indicated that numerosity can be processed when visual cues are controlled for. Consequently, distinct types of visual cue control are assumed to be interchangeable. However, a previous study showed that the type of visual cue control affected performance when the stimuli were presented simultaneously in numerosity comparison. In the current study, we explored whether the influence of the type of visual cue control on performance disappeared when each stimulus was presented sequentially in numerosity comparison. While the influence of the applied type of visual cue control was significantly more evident in the simultaneous condition, presenting the stimuli sequentially did not completely exclude the influence of distinct types of visual cue control. Altogether, these results indicate that the implicit assumption that performance can be compared across studies with differential visual cue control is unwarranted, and that the influence of the type of visual cue control partly depends on the presentation format of the stimuli. PMID:26869967
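One common form of the visual cue control discussed above is to equate cumulative dot surface area across arrays, so that numerosity varies while total area does not; a hedged sketch (the function and parameter names are illustrative, not taken from the study):

```python
import numpy as np

def dot_radii(n_dots, total_area, rng):
    """Radii for n_dots whose summed surface area equals total_area.
    Holding cumulative area constant across arrays is one type of
    visual cue control: numerosity then varies independently of it."""
    weights = rng.random(n_dots)                 # random relative sizes
    areas = total_area * weights / weights.sum() # areas sum to total_area
    return np.sqrt(areas / np.pi)                # area = pi * r^2

rng = np.random.default_rng(2)
small = dot_radii(8, total_area=5000.0, rng=rng)   # 8-dot array
large = dot_radii(16, total_area=5000.0, rng=rng)  # 16 dots, same total area
```

Other control types (e.g., equating convex hull or mean dot size) follow the same pattern but hold a different visual property constant, which is exactly why the abstract cautions that they are not interchangeable.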
Visual Cues, Verbal Cues and Child Development
ERIC Educational Resources Information Center
Valentini, Nadia
2004-01-01
In this article, the author discusses two strategies--visual cues (modeling) and verbal cues (short, accurate phrases) which are related to teaching motor skills in maximizing learning in physical education classes. Both visual and verbal cues are strong influences in facilitating and promoting day-to-day learning. Both strategies reinforce…
Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel
2017-04-01
Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we evaluated TTC estimates by using a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
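The τ variable described above is the ratio of an object's instantaneous optical size to its instantaneous rate of change, which for a constant approach speed equals the remaining time to contact; a minimal numerical sketch (the scene parameters are illustrative, not the study's stimuli):

```python
import numpy as np

def tau_ttc(optical_size, dt):
    """Estimate time to contact from samples of optical size.

    tau = theta / (d theta / dt): instantaneous optical size divided by
    its instantaneous rate of change. For an object approaching at
    constant speed, tau equals the remaining time to contact."""
    theta = np.asarray(optical_size, dtype=float)
    theta_dot = np.gradient(theta, dt)   # rate of optical expansion
    return theta / theta_dot

# An object of physical size 2 m approaching from 30 m at 10 m/s:
dt = 0.05
t = np.arange(0.0, 2.0, dt)
distance = 30.0 - 10.0 * t               # metres from the observer
theta = 2.0 / distance                   # small-angle optical size (rad)
tau = tau_ttc(theta, dt)                 # tau[0] is close to the true 3.0 s TTC
```

Heuristic strategies of the kind the abstract contrasts with τ would instead key on a single frame's value, e.g. the final optical size theta[-1], rather than on this ratio.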
Stuart, Samuel; Lord, Sue; Galna, Brook; Rochester, Lynn
2018-04-01
Gait impairment is a core feature of Parkinson's disease (PD) with implications for falls risk. Visual cues improve gait in PD, but the underlying mechanisms are unclear. Evidence suggests that attention and vision play an important role; however, the relative contribution from each is unclear. Measurement of visual exploration (specifically saccade frequency) during gait allows for real-time measurement of attention and vision. Understanding how visual cues influence visual exploration may allow inferences about the mechanisms underlying cue response, which could help in developing effective therapeutics. This study aimed to examine saccade frequency during gait in response to a visual cue in PD and older adults, and to investigate the roles of attention and vision in visual cue response in PD. A mobile eye-tracker measured saccade frequency during gait in 55 people with PD and 32 age-matched controls. Participants walked in a straight line with and without a visual cue (50 cm transverse lines), presented under single-task and dual-task (concurrent digit span recall) conditions. Saccade frequency was reduced when walking in PD compared to controls; however, visual cues ameliorated this saccadic deficit. Visual cues significantly increased saccade frequency in both PD and controls under both single-task and dual-task conditions. Attention, rather than visual function, was central to saccade frequency and gait response to visual cues in PD. In conclusion, this study highlights the impact of visual cues on visual exploration when walking and the important role of attention in PD. Understanding these complex features will help inform intervention development. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Temporal dynamics of encoding, storage and reallocation of visual working memory
Bays, Paul M; Gorgoraptis, Nikos; Wee, Natalie; Marshall, Louise; Husain, Masud
2012-01-01
The process of encoding a visual scene into working memory has previously been studied using binary measures of recall. Here we examine the temporal evolution of memory resolution, based on observers’ ability to reproduce the orientations of objects presented in brief, masked displays. Recall precision was accurately described by the interaction of two independent constraints: an encoding limit that determines the maximum rate at which information can be transferred into memory, and a separate storage limit that determines the maximum fidelity with which information can be maintained. Recall variability decreased incrementally with time, consistent with a parallel encoding process in which visual information from multiple objects accumulates simultaneously in working memory. No evidence was observed for a limit on the number of items stored. Cueing one display item with a brief flash led to rapid development of a recall advantage for that item. This advantage was short-lived if the cue was simply a salient visual event, but was maintained if it indicated an object of particular relevance to the task. These cueing effects were observed even for items that had already been encoded into memory, indicating that limited memory resources can be rapidly reallocated to prioritize salient or goal-relevant information. PMID:21911739
Social vision: sustained perceptual enhancement of affective facial cues in social anxiety
McTeague, Lisa M.; Shumen, Joshua R.; Wieser, Matthias J.; Lang, Peter J.; Keil, Andreas
2010-01-01
Heightened perception of facial cues is at the core of many theories of social behavior and its disorders. In the present study, we continuously measured electrocortical dynamics in human visual cortex, as evoked by happy, neutral, fearful, and angry faces. Thirty-seven participants endorsing high versus low generalized social anxiety (upper and lower tertiles of 2,104 screened undergraduates) viewed naturalistic faces flickering at 17.5 Hz to evoke steady-state visual evoked potentials (ssVEPs), recorded from 129 scalp electrodes. Electrophysiological data were evaluated in the time-frequency domain after linear source space projection using the minimum norm method. Source estimation indicated an early visual cortical origin of the face-evoked ssVEP, which showed sustained amplitude enhancement for emotional expressions specifically in individuals with pervasive social anxiety. Participants in the low symptom group showed no such sensitivity, and a correlational analysis across the entire sample revealed a strong relationship between self-reported interpersonal anxiety/avoidance and enhanced visual cortical response amplitude for emotional, versus neutral expressions. This pattern was maintained across the 3500 ms viewing epoch, suggesting that temporally sustained, heightened perceptual bias towards affective facial cues is associated with generalized social anxiety. PMID:20832490
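The steady-state response to a 17.5 Hz flicker is typically quantified as the spectral amplitude of the recording at the stimulation frequency; a single-channel sketch under simplifying assumptions (the study itself used 129 channels, time-frequency analysis, and minimum-norm source projection; a 4 s epoch is used here so the stimulation frequency falls exactly on an FFT bin):

```python
import numpy as np

def ssvep_amplitude(signal, fs, f_stim):
    """Single-sided amplitude of the component at f_stim via the FFT.
    Assumes the epoch spans an integer number of stimulation cycles,
    so f_stim lands exactly on an FFT bin (no spectral leakage)."""
    n = len(signal)
    spectrum = np.fft.rfft(signal) / n
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    bin_idx = int(np.argmin(np.abs(freqs - f_stim)))
    return 2 * np.abs(spectrum[bin_idx])

# Synthetic 4 s epoch: a 17.5 Hz ssVEP of amplitude 2 (arbitrary units) in noise.
fs, f_stim = 500.0, 17.5
t = np.arange(0, 4.0, 1 / fs)
eeg = (2.0 * np.sin(2 * np.pi * f_stim * t)
       + np.random.default_rng(1).normal(0, 1, t.size))
amp = ssvep_amplitude(eeg, fs, f_stim)   # recovers roughly 2.0
```

Comparing this amplitude between emotional and neutral face conditions, per participant, is the kind of contrast the correlational analysis above is built on.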
Circadian timed episodic-like memory - a bee knows what to do when, and also where.
Pahl, Mario; Zhu, Hong; Pix, Waltraud; Tautz, Juergen; Zhang, Shaowu
2007-10-01
This study investigates how the colour, shape and location of patterns could be memorized within a time frame. Bees were trained to visit two Y-mazes, one of which presented yellow vertical (rewarded) versus horizontal (non-rewarded) gratings at one site in the morning, while another presented blue horizontal (rewarded) versus vertical (non-rewarded) gratings at another site in the afternoon. The bees could perform well in the learning tests and various transfer tests, in which (i) all contextual cues from the learning test were present; (ii) the colour cues of the visual patterns were removed, but the location cue, the orientation of the visual patterns and the temporal cue still existed; (iii) the location cue was removed, but other contextual cues, i.e. the colour and orientation of the visual patterns and the temporal cue still existed; (iv) the location cue and the orientation cue of the visual patterns were removed, but the colour cue and temporal cue still existed; (v) the location cue, and the colour cue of the visual patterns were removed, but the orientation cue and the temporal cue still existed. The results reveal that the honeybee can recall the memory of the correct visual patterns by using spatial and/or temporal information. The relative importance of different contextual cues is compared and discussed. The bees' ability to integrate elements of circadian time, place and visual stimuli is akin to episodic-like memory; we have therefore named this kind of memory circadian timed episodic-like memory.
Improving visual spatial working memory in younger and older adults: effects of cross-modal cues.
Curtis, Ashley F; Turner, Gary R; Park, Norman W; Murtha, Susan J E
2017-11-06
Spatially informative auditory and vibrotactile (cross-modal) cues can facilitate attention but little is known about how similar cues influence visual spatial working memory (WM) across the adult lifespan. We investigated the effects of cues (spatially informative or alerting pre-cues vs. no cues), cue modality (auditory vs. vibrotactile vs. visual), memory array size (four vs. six items), and maintenance delay (900 vs. 1800 ms) on visual spatial location WM recognition accuracy in younger adults (YA) and older adults (OA). We observed a significant interaction between spatially informative pre-cue type, array size, and delay. OA and YA benefitted equally from spatially informative pre-cues, suggesting that attentional orienting prior to WM encoding, regardless of cue modality, is preserved with age. Contrary to predictions, alerting pre-cues generally impaired performance in both age groups, suggesting that maintaining a vigilant state of arousal by facilitating the alerting attention system does not help visual spatial location WM.
Saccadic eye movements do not disrupt the deployment of feature-based attention.
Kalogeropoulou, Zampeta; Rolfs, Martin
2017-07-01
The tight link of saccades to covert spatial attention has been firmly established, yet their relation to other forms of visual selection remains poorly understood. Here we studied the temporal dynamics of feature-based attention (FBA) during fixation and across saccades. Participants reported the orientation (on a continuous scale) of one of two sets of spatially interspersed Gabors (black or white). We tested performance at different intervals between the onset of a colored cue (black or white, indicating which stimulus was the most probable target; red: neutral condition) and the stimulus. FBA built up after cue onset: Benefits (errors for valid vs. neutral cues), costs (invalid vs. neutral), and the overall cueing effect (valid vs. invalid) increased with the cue-stimulus interval. Critically, we also tested visual performance at different intervals after a saccade, when FBA had been fully deployed before saccade initiation. Cueing effects were evident immediately after the saccade and were predicted most accurately and most precisely by fully deployed FBA, indicating that FBA was continuous throughout saccades. Finally, a decomposition of orientation reports into target reports and random guesses confirmed continuity of report precision and guess rates across the saccade. We discuss the role of FBA in perceptual continuity across saccades.
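Decomposing continuous orientation reports into target reports and random guesses, as in the final analysis above, is commonly done with a mixture of a von Mises distribution (reports centred on the target) and a uniform distribution (guesses); a hedged sketch of a coarse maximum-likelihood fit (the study's exact model and fitting procedure may differ):

```python
import numpy as np

def mixture_loglik(errors, p_guess, kappa):
    """Log-likelihood of report errors (radians, in [-pi, pi]) under a
    mixture: with probability 1 - p_guess the report is von Mises
    centred on the target (concentration kappa); with probability
    p_guess it is a uniform random guess."""
    vm = np.exp(kappa * np.cos(errors)) / (2 * np.pi * np.i0(kappa))
    dens = (1 - p_guess) * vm + p_guess / (2 * np.pi)
    return float(np.sum(np.log(dens)))

# Simulated data: 70% on-target reports (kappa = 8), 30% guesses.
rng = np.random.default_rng(0)
n = 500
is_guess = rng.random(n) < 0.3
errors = np.where(is_guess,
                  rng.uniform(-np.pi, np.pi, n),
                  rng.vonmises(0.0, 8.0, n))
# Coarse grid search for the best-fitting parameters:
grid = [(p, k) for p in np.linspace(0.05, 0.6, 12)
               for k in np.linspace(2.0, 16.0, 15)]
p_hat, k_hat = max(grid, key=lambda pk: mixture_loglik(errors, *pk))
```

The fitted guess rate and concentration map onto the abstract's "random guesses" and report precision, respectively, and comparing them before and after the saccade is what establishes continuity across the movement.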
The Influence of Tactual Seat-motion Cues on Training and Performance in a Roll-axis Compensatory Tracking Task (AFRL-RH-WP-SR-2009-0002)
2008-05-01
…simulated vehicle having aircraft-like dynamics. A centrally located compensatory display, subtending about nine degrees, provided visual roll error…
Comprehension of human pointing gestures in horses (Equus caballus).
Maros, Katalin; Gácsi, Márta; Miklósi, Adám
2008-07-01
Twenty domestic horses (Equus caballus) were tested for their ability to rely on different human gesticular cues in a two-way object choice task. An experimenter hid food under one of two bowls and after baiting, indicated the location of the food to the subjects by using one of four different cues. Horses could locate the hidden reward on the basis of the distal dynamic-sustained, proximal momentary and proximal dynamic-sustained pointing gestures but failed to perform above chance level when the experimenter performed a distal momentary pointing gesture. The results revealed that horses could rely spontaneously on those cues that could have a stimulus or local enhancement effect, but the possible comprehension of the distal momentary pointing remained unclear. The results are discussed with reference to the involvement of various factors such as predisposition to read human visual cues, the effect of domestication and extensive social experience and the nature of the gesture used by the experimenter in comparative investigations.
A Microsaccadic Account of Attentional Capture and Inhibition of Return in Posner Cueing
Tian, Xiaoguang; Yoshida, Masatoshi; Hafed, Ziad M.
2016-01-01
Microsaccades exhibit systematic oscillations in direction after spatial cueing, and these oscillations correlate with facilitatory and inhibitory changes in behavioral performance in the same tasks. However, independent of cueing, facilitatory and inhibitory changes in visual sensitivity also arise pre-microsaccadically. Given such pre-microsaccadic modulation, an imperative question to ask becomes: how much of task performance in spatial cueing may be attributable to these peri-movement changes in visual sensitivity? To investigate this question, we adopted a theoretical approach. We developed a minimalist model in which: (1) microsaccades are repetitively generated using a rise-to-threshold mechanism, and (2) pre-microsaccadic target onset is associated with direction-dependent modulation of visual sensitivity, as found experimentally. We asked whether such a model alone is sufficient to account for performance dynamics in spatial cueing. Our model not only explained fine-scale microsaccade frequency and direction modulations after spatial cueing, but it also generated classic facilitatory (i.e., attentional capture) and inhibitory [i.e., inhibition of return (IOR)] effects of the cue on behavioral performance. According to the model, cues reflexively reset the oculomotor system, which unmasks oscillatory processes underlying microsaccade generation; once these oscillatory processes are unmasked, “attentional capture” and “IOR” become direct outcomes of pre-microsaccadic enhancement or suppression, respectively. Interestingly, our model predicted that facilitatory and inhibitory effects on behavior should appear as a function of target onset relative to microsaccades even without prior cues. We experimentally validated this prediction for both saccadic and manual responses. We also established a potential causal mechanism for the microsaccadic oscillatory processes hypothesized by our model. 
We used retinal-image stabilization to experimentally control instantaneous foveal motor error during the presentation of peripheral cues, and we found that post-cue microsaccadic oscillations were severely disrupted. This suggests that microsaccades in spatial cueing tasks reflect active oculomotor correction of foveal motor error, rather than presumed oscillatory covert attentional processes. Taken together, our results demonstrate that peri-microsaccadic changes in vision can go a long way in accounting for some classic behavioral phenomena. PMID:27013991
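The rise-to-threshold mechanism the model assumes can be illustrated with a noisy accumulator that repeatedly drifts toward a threshold, triggers a microsaccade, and resets, with cues also resetting it; all parameter values below are illustrative, not the authors' fitted model:

```python
import numpy as np

def simulate_microsaccades(duration=5.0, dt=0.001, drift=4.0,
                           noise=1.5, threshold=1.0, cue_time=None, seed=0):
    """Simulate microsaccade times with a noisy rise-to-threshold
    accumulator. Crossing the threshold triggers a movement and resets
    the accumulator; an optional cue also resets it, modelling the
    oculomotor reset that unmasks post-cue rhythmicity."""
    rng = np.random.default_rng(seed)
    x, times = 0.0, []
    for i in range(int(duration / dt)):
        t = i * dt
        if cue_time is not None and abs(t - cue_time) < dt / 2:
            x = 0.0                   # cue resets the accumulator
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        if x >= threshold:
            times.append(t)
            x = 0.0                   # post-movement reset
    return np.array(times)

# Movements recur roughly every threshold/drift = 0.25 s; a cue at 2.0 s
# resynchronizes the rhythm, as the model requires.
times = simulate_microsaccades(cue_time=2.0)
```

Coupling such a generator with the experimentally measured pre-microsaccadic enhancement and suppression of visual sensitivity is what lets the model reproduce capture- and IOR-like performance without a separate attention mechanism.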
Reducing Visual Discomfort with HMDs Using Dynamic Depth of Field.
Carnegie, Kieran; Rhee, Taehyun
2015-01-01
Although head-mounted displays (HMDs) are ideal devices for personal viewing of immersive stereoscopic content, exposure to VR applications on them results in significant discomfort for the majority of people, with symptoms including eye fatigue, headaches, nausea, and sweating. A conflict between accommodation and vergence depth cues on stereoscopic displays is a significant cause of visual discomfort. This article describes the results of an evaluation used to judge the effectiveness of dynamic depth-of-field (DoF) blur in an effort to reduce discomfort caused by exposure to stereoscopic content on HMDs. Using a commercial game engine implementation, study participants report a reduction of visual discomfort on a simulator sickness questionnaire when DoF blurring is enabled. The study participants reported a decrease in symptom severity caused by HMD exposure, indicating that dynamic DoF can effectively reduce visual discomfort.
Zabierek, Kristina C; Gabor, Caitlin R
2016-09-01
Prey may use multiple sensory channels to detect predators, whose cues may differ in altered sensory environments, such as turbid conditions. Depending on the environment, prey may use cues in an additive/complementary manner or in a compensatory manner. First, to determine whether the purely aquatic Barton Springs salamander, Eurycea sosorum, show an antipredator response to visual cues, we examined their activity when exposed to either visual cues of a predatory fish (Lepomis cyanellus) or a non-predatory fish (Etheostoma lepidum). Salamanders decreased activity in response to predator visual cues only. Then, we examined the antipredator response of these salamanders to all matched and mismatched combinations of chemical and visual cues of the same predatory and non-predatory fish in clear and low turbidity conditions. Salamanders decreased activity in response to predator chemical cues matched with predator visual cues or mismatched with non-predator visual cues. Salamanders also increased latency to first move to predator chemical cues mismatched with non-predator visual cues. Salamanders decreased activity and increased latency to first move more in clear as opposed to turbid conditions in all treatment combinations. Our results indicate that salamanders under all conditions and treatments preferentially rely on chemical cues to determine antipredator behavior, although visual cues are potentially utilized in conjunction for latency to first move. Our results also have potential conservation implications, as decreased antipredator behavior was seen in turbid conditions. These results reveal complexity of antipredator behavior in response to multiple cues under different environmental conditions, which is especially important when considering endangered species. Copyright © 2016 Elsevier B.V. All rights reserved.
Visual cue-specific craving is diminished in stressed smokers.
Cochran, Justinn R; Consedine, Nathan S; Lee, John M J; Pandit, Chinmay; Sollers, John J; Kydd, Robert R
2017-09-01
Craving among smokers is increased by stress and exposure to smoking-related visual cues. However, few experimental studies have tested both elicitors concurrently and considered how exposures may interact to influence craving. The current study examined craving in response to stress and visual cue exposure, separately and in succession, in order to better understand the relationship between craving elicitation and the elicitor. Thirty-nine smokers (21 males) who forwent smoking for 30 minutes were randomized to complete a stress task and a visual cue task in counterbalanced orders (creating the experimental groups); for the cue task, counterbalanced blocks of neutral, motivational control, and smoking images were presented. Self-reported craving was assessed after each block of visual stimuli and after the stress task, and after a recovery period following each task. As expected, the stress and smoking images generated greater craving than neutral or motivational control images (p < .001). Interactions indicated that craving in those who completed the stress task first differed from those who completed the visual cues task first (p < .05), such that stress task craving was greater than all image type craving (all p's < .05) only if the visual cue task was completed first. Conversely, craving was stable across image types when the stress task was completed first. Findings indicate that when smokers are stressed, visual cues have little additive effect on craving, and different types of visual cues elicit comparable craving. These findings may imply that once stressed, smokers will crave cigarettes comparably regardless of whether they are exposed to smoking image cues.
Souza, Alessandra S; Rerko, Laura; Oberauer, Klaus
2016-06-01
Visual working memory (VWM) has a limited capacity. This limitation can be mitigated by the use of focused attention: if attention is drawn to the relevant working memory content before test, performance improves (the so-called retro-cue benefit). This study tests 2 explanations of the retro-cue benefit: (a) Focused attention protects memory representations from interference by visual input at test, and (b) focusing attention enhances retrieval. Across 6 experiments using color recognition and color reproduction tasks, we varied the amount of color interference at test, and the delay between a retrieval cue (i.e., the retro-cue) and the memory test. Retro-cue benefits were larger when the memory test introduced interfering visual stimuli, showing that the retro-cue effect is in part because of protection from visual interference. However, when visual interference was held constant, retro-cue benefits were still obtained whenever the retro-cue enabled retrieval of an object from VWM but delayed response selection. Our results show that accessible information in VWM might be lost in the processes of testing memory because of visual interference and incomplete retrieval. This is not an inevitable state of affairs, though: Focused attention can be used to get the most out of VWM. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Draht, Fabian; Zhang, Sijie; Rayan, Abdelrahman; Schönfeld, Fabian; Wiskott, Laurenz; Manahan-Vaughan, Denise
2017-01-01
Spatial encoding in the hippocampus is based on a range of different input sources. To generate spatial representations, reliable sensory cues from the external environment are integrated with idiothetic cues, derived from self-movement, that enable path integration and directional perception. In this study, we examined to what extent idiothetic cues significantly contribute to spatial representations and navigation: we recorded place cells while rodents navigated towards two visually identical chambers in 180° orientation via two different paths in darkness and in the absence of reliable auditory or olfactory cues. Our goal was to generate a conflict between local visual and direction-specific information, and then to assess which strategy was prioritized in different learning phases. We observed that, in the absence of distal cues, place fields are initially controlled by local visual cues that override idiothetic cues, but that with multiple exposures to the paradigm, spaced at intervals of days, idiothetic cues become increasingly implemented in generating an accurate spatial representation. Taken together, these data support that, in the absence of distal cues, local visual cues are prioritized in the generation of context-specific spatial representations through place cells, whereby idiothetic cues are deemed unreliable. With cumulative exposures to the environments, the animal learns to attend to subtle idiothetic cues to resolve the conflict between visual and direction-specific information.
The effect of contextual sound cues on visual fidelity perception.
Rojas, David; Cowan, Brent; Kapralos, Bill; Collins, Karen; Dubrowski, Adam
2014-01-01
Previous work has shown that sound can affect the perception of visual fidelity. Here we build upon this previous work by examining the effect of contextual sound cues (i.e., sounds that are related to the visuals) on visual fidelity perception. Results suggest that contextual sound cues do influence visual fidelity perception and, more specifically, our perception of visual fidelity increases with contextual sound cues. These results have implications for designers of multimodal virtual worlds and serious games that, with the appropriate use of contextual sounds, can reduce visual rendering requirements without a corresponding decrease in the perception of visual fidelity.
NASA Astrophysics Data System (ADS)
Telban, Robert J.
While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. To address this, new human-centered motion cueing algorithms were developed. A revised "optimal algorithm" uses time-invariant filters developed by optimal control, incorporating human vestibular system models. The "nonlinear algorithm" is a novel approach that is also formulated by optimal control, but can also be updated in real time. It incorporates a new integrated visual-vestibular perception model that includes both visual and vestibular sensation and the interaction between the stimuli. A time-varying control law requires the matrix Riccati equation to be solved in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. As a result of unsatisfactory sensation, an augmented turbulence cue was added to the vertical mode for both the optimal and nonlinear algorithms. The relative effectiveness of the algorithms, in simulating aircraft maneuvers, was assessed with an eleven-subject piloted performance test conducted on the NASA Langley Visual Motion Simulator (VMS). Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. 
Control input analysis shows pilot-induced oscillations on a straight-in approach are less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.
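The washout behavior described in this abstract, sustaining small cues while attenuating large or prolonged ones, is conventionally built from high-pass filtering of the commanded motion. As an illustration only (the optimal and nonlinear algorithms above are far more elaborate, and every parameter value here is an assumption), a minimal first-order washout filter can be sketched as:

```python
# Minimal sketch of a "washout" high-pass filter, the basic building block of
# classical motion cueing. It passes transient onset cues to the platform and
# washes out sustained ones so the platform drifts back toward neutral.
# The time constant tau and sample period dt are illustrative assumptions.

def washout(signal, dt=0.01, tau=2.0):
    """First-order discrete high-pass (washout) filter."""
    a = tau / (tau + dt)
    out = [0.0]
    for i in range(1, len(signal)):
        # high-pass recursion: respond to changes, decay toward zero otherwise
        out.append(a * (out[-1] + signal[i] - signal[i - 1]))
    return out

# A sustained 1 m/s^2 acceleration step: the onset passes through almost
# fully, then the cue is washed out toward zero.
step = [0.0] * 10 + [1.0] * 990
cue = washout(step)
```

The onset sample of `cue` is close to 1.0, while the tail decays toward zero, which is exactly the trade-off the time-varying washout in the nonlinear algorithm tunes per maneuver.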
Gallagher, Rosemary; Damodaran, Harish; Werner, William G; Powell, Wendy; Deutsch, Judith E
2016-08-19
Evidence-based virtual environments (VEs) that incorporate compensatory strategies such as cueing may change motor behavior and increase exercise intensity while also being engaging and motivating. The purpose of this study was to determine whether persons with Parkinson's disease (PD) and age-matched healthy adults responded to auditory and visual cueing embedded in a bicycling VE as a method to increase exercise intensity. We tested two groups of participants, persons with PD (n = 15) and age-matched healthy adults (n = 13), as they cycled on a stationary bicycle while interacting with a VE. Participants cycled under two conditions: auditory cueing (provided by a metronome) and visual cueing (represented as central road markers in the VE). The auditory condition had four trials in which auditory cues or the VE were presented alone or in combination. The visual condition had five trials in which the VE and visual cue presentation rate were manipulated. Data were analyzed by condition using factorial RMANOVAs with planned t-tests corrected for multiple comparisons. There were no differences in pedaling rates between groups for either the auditory or the visual cueing condition. Persons with PD increased their pedaling rate in the auditory (F = 4.78, p = 0.029) and visual cueing (F = 26.48, p < 0.001) conditions. Age-matched healthy adults also increased their pedaling rate in the auditory (F = 24.72, p < 0.001) and visual cueing (F = 40.69, p < 0.001) conditions. Trial-to-trial comparisons in the visual condition in age-matched healthy adults showed a step-wise increase in pedaling rate (p = 0.003 to p < 0.001). In contrast, persons with PD increased their pedaling rate only when explicitly instructed to attend to the visual cues (p < 0.001). An evidence-based cycling VE can modify pedaling rate in persons with PD and age-matched healthy adults. Persons with PD required attention directed to the visual cues in order to obtain an increase in cycling intensity.
The combination of the VE and auditory cues was neither additive nor interfering. These data serve as preliminary evidence that embedding auditory and visual cues in a VE to alter cycling speed is a method of increasing exercise intensity that may promote fitness.
Directed Forgetting and Directed Remembering in Visual Working Memory
Williams, Melonie; Woodman, Geoffrey F.
2013-01-01
A defining characteristic of visual working memory is its limited capacity. This means that it is crucial to maintain only the most relevant information in visual working memory. However, empirical research is mixed as to whether it is possible to selectively maintain a subset of the information previously encoded into visual working memory. Here we examined the ability of subjects to use cues to either forget or remember a subset of the information already stored in visual working memory. In Experiment 1, participants were cued to either forget or remember one of two groups of colored squares during a change-detection task. We found that both types of cues aided performance in the visual working memory task, but that observers benefited more from a cue to remember than a cue to forget a subset of the objects. In Experiment 2, we show that the previous findings, which indicated that directed-forgetting cues are ineffective, were likely due to the presence of invalid cues that appear to cause observers to disregard such cues as unreliable. In Experiment 3, we recorded event-related potentials (ERPs) and show that an electrophysiological index of focused maintenance is elicited by cues that indicate which subset of information in visual working memory needs to be remembered, ruling out alternative explanations of the behavioral effects of retention-interval cues. The present findings demonstrate that observers can focus maintenance mechanisms on specific objects in visual working memory based on cues indicating future task relevance. PMID:22409182
Bock, Otmar; Bury, Nils
2018-03-01
Our perception of the vertical corresponds to the weighted sum of gravicentric, egocentric, and visual cues. Here we evaluate the interplay of those cues not for the perceived but rather for the motor vertical. Participants were asked to flip an omnidirectional switch down while their egocentric vertical was dissociated from their visual-gravicentric vertical. Responses were directed mid-between the two verticals; specifically, the data suggest that the relative weight of congruent visual-gravicentric cues averages 0.62, and correspondingly, the relative weight of egocentric cues averages 0.38. We conclude that the interplay of visual-gravicentric cues with egocentric cues is similar for the motor and for the perceived vertical. Unexpectedly, we observed a consistent dependence of the motor vertical on hand position, possibly mediated by hand orientation or by spatial selective attention.
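The weighted-sum account above can be made concrete with a small sketch. The 0.62/0.38 weights are the averages reported in the abstract; the vector-averaging form and the example angles are our assumptions for illustration, not the authors' model:

```python
import math

def combined_vertical(visual_gravicentric_deg, egocentric_deg,
                      w_vg=0.62, w_ego=0.38):
    """Weighted vector average of two cues to the vertical (degrees).
    Weights follow the averages reported in the abstract; combining unit
    vectors (rather than raw angles) is an assumption made here to handle
    the circularity of directions."""
    x = (w_vg * math.cos(math.radians(visual_gravicentric_deg))
         + w_ego * math.cos(math.radians(egocentric_deg)))
    y = (w_vg * math.sin(math.radians(visual_gravicentric_deg))
         + w_ego * math.sin(math.radians(egocentric_deg)))
    return math.degrees(math.atan2(y, x))

# Gravicentric vertical at 0 deg, body tilted so the egocentric vertical is
# at 40 deg: the response lands between the two, closer to the more heavily
# weighted visual-gravicentric cue.
response = combined_vertical(0.0, 40.0)
```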
Paladini, Rebecca E.; Diana, Lorenzo; Zito, Giuseppe A.; Nyffeler, Thomas; Wyss, Patric; Mosimann, Urs P.; Müri, René M.; Nef, Tobias
2018-01-01
Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings, in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention. Thereby, a facilitation has been observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and the right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained no auditory cues (i.e., unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants’ accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an additional incongruent, as well as a spatially non-informative, auditory cue resulted in lateral asymmetries. Thereby, search times were increased for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition. However, participants’ performance in the congruent condition was modulated by their tone localisation accuracy. 
The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues, and occur under specific attentional conditions, i.e., when visual attention has to be reoriented towards the left hemifield. PMID:29293637
How visual cues for when to listen aid selective auditory attention.
Varghese, Lenny A; Ozmeral, Erol J; Best, Virginia; Shinn-Cunningham, Barbara G
2012-06-01
Visual cues are known to aid auditory processing when they provide direct information about signal content, as in lip reading. However, some studies hint that visual cues also aid auditory perception by guiding attention to the target in a mixture of similar sounds. The current study directly tests this idea for complex, nonspeech auditory signals, using a visual cue providing only timing information about the target. Listeners were asked to identify a target zebra finch bird song played at a random time within a longer, competing masker. Two different maskers were used: noise and a chorus of competing bird songs. On half of all trials, a visual cue indicated the timing of the target within the masker. For the noise masker, the visual cue did not affect performance when target and masker were from the same location, but improved performance when target and masker were in different locations. In contrast, for the chorus masker, visual cues improved performance only when target and masker were perceived as coming from the same direction. These results suggest that simple visual cues for when to listen improve target identification by enhancing sounds near the threshold of audibility when the target is energetically masked and by enhancing segregation when it is difficult to direct selective attention to the target. Visual cues help little when target and masker already differ in attributes that enable listeners to engage selective auditory attention effectively, including differences in spectrotemporal structure and in perceived location.
Temporal and peripheral extraction of contextual cues from scenes during visual search.
Koehler, Kathryn; Eckstein, Miguel P
2017-02-01
Scene context is known to facilitate object recognition and guide visual search, but little work has focused on isolating image-based cues and evaluating their contributions to eye movement guidance and search performance. Here, we explore three types of contextual cues (a co-occurring object, the configuration of other objects, and the superordinate category of background elements) and assess their joint contributions to search performance in the framework of cue-combination and the temporal unfolding of their extraction. We also assess whether observers' ability to extract each contextual cue in the visual periphery is a bottleneck that determines the utilization and contribution of each cue to search guidance and decision accuracy. We find that during the first four fixations of a visual search task observers first utilize the configuration of objects for coarse eye movement guidance and later use co-occurring object information for finer guidance. In the absence of contextual cues, observers were suboptimally biased to report the target object as being absent. The presence of the co-occurring object was the only contextual cue that had a significant effect in reducing decision bias. The early influence of object-based cues on eye movements is corroborated by a clear demonstration of observers' ability to extract object cues up to 16° into the visual periphery. The joint contributions of the cues to decision search accuracy approximates that expected from the combination of statistically independent cues and optimal cue combination. Finally, the lack of utilization and contribution of the background-based contextual cue to search guidance cannot be explained by the availability of the contextual cue in the visual periphery; instead it is related to background cues providing the least inherent information about the precise location of the target in the scene.
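The abstract compares observed accuracy to the combination of statistically independent cues under optimal cue combination. The standard maximum-likelihood rule weights each independent cue by its inverse variance; the cue values and standard deviations below are purely illustrative, not data from the study:

```python
def ml_combine(estimates, sigmas):
    """Maximum-likelihood combination of independent cues: each estimate is
    weighted by its inverse variance, and the combined variance is smaller
    than that of the most reliable single cue."""
    weights = [1.0 / s ** 2 for s in sigmas]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, estimates)) / total
    var = 1.0 / total
    return mean, var

# Two hypothetical cues to target location (say, a co-occurring object and
# the object configuration), with standard deviations of 2 and 4 degrees.
mean, var = ml_combine([10.0, 14.0], [2.0, 4.0])
```

The combined estimate sits closer to the more reliable cue, and the combined variance falls below the best single-cue variance, which is the signature of optimal integration the study tests against.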
Schindler, Andreas; Bartels, Andreas
2018-05-15
Our phenomenological experience of the stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head motion. We here circumvented these limitations and let participants move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire neural signals related to head motion after the observer's head was stabilized by inflatable air cushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head motion, which was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head rotation. By comparing congruent with incongruent conditions we found evidence consistent with the multimodal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human region MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv), and a region in precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent versus incongruent visual and non-visual cues during physical head movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation.
Lin, Zhicheng
2013-11-01
Visual attention can be deployed to stimuli based on our willful, top-down goals (endogenous attention) or on their intrinsic saliency against the background (exogenous attention). Flexibility is thought to be a hallmark of endogenous attention, whereas decades of research show that exogenous attention is attracted to the retinotopic locations of salient stimuli. However, to the extent that salient stimuli in the natural environment usually form specific spatial relations with the surrounding context and are dynamic, exogenous attention, to be adaptive, should embrace these structural regularities. Here we test a non-retinotopic, object-centered mechanism in exogenous attention, in which exogenous attention is dynamically attracted to a relative, object-centered location. Using a moving-frame configuration, we presented two frames in succession, forming either apparent translational motion or mirror reflection, with a completely uninformative, transient cue presented at one of the item locations in the first frame. Although the cue is presented in a spatially separate frame, in both translation and mirror reflection, behavioral performance in visual search is enhanced when the target in the second frame appears at the same relative location as the cue rather than at other locations. These results provide unambiguous evidence for non-retinotopic exogenous attention and further reveal an object-centered mechanism supporting flexible exogenous attention. Moreover, attentional generalization across mirror reflection may constitute an attentional correlate of perceptual generalization across lateral mirror images, supporting an adaptive, functional account of mirror-image confusion.
Human image tracking technique applied to remote collaborative environments
NASA Astrophysics Data System (ADS)
Nagashima, Yoshio; Suzuki, Gen
1993-10-01
To support various kinds of collaboration over long distances using visual telecommunication, it is necessary to transmit visual information related to the participants and topical materials. When people collaborate in the same workspace, they use visual cues such as facial expressions and eye movement. Realizing a sense of coexistence in a collaborative workspace requires support for these visual cues; it is therefore important that the facial images be large enough to be useful. During collaborations, especially dynamic collaborative activities such as equipment operation or lectures, the participants often move within the workspace. When people move frequently or over a wide area, the need for automatic human tracking increases. Using the movement area of the human being or the resolution of the extracted area, we have developed a memory tracking method and a camera tracking method for automatic human tracking. Experimental results using a real-time tracking system show that the extracted area tracks the movement of the human head fairly well.
Olfactory-visual integration facilitates perception of subthreshold negative emotion.
Novak, Lucas R; Gitelman, Darren R; Schuyler, Brianna; Li, Wen
2015-10-01
A fast-growing literature on multisensory emotion integration notwithstanding, the chemical senses, intimately associated with emotion, have been largely overlooked. Moreover, an ecologically highly relevant principle of "inverse effectiveness", rendering maximal integration efficacy with impoverished sensory input, remains to be assessed in emotion integration. Presenting minute, subthreshold negative (vs. neutral) cues in faces and odors, we demonstrated olfactory-visual emotion integration in improved emotion detection (especially among individuals with weaker perception of unimodal negative cues) and response enhancement in the amygdala. Moreover, while perceptual gain for visual negative emotion involved the posterior superior temporal sulcus (pSTS), perceptual gain for olfactory negative emotion engaged both the associative olfactory (orbitofrontal) cortex and the amygdala. Dynamic causal modeling (DCM) analysis of fMRI time series further revealed connectivity strengthening among these areas during crossmodal emotion integration. That multisensory (but not low-level unisensory) areas exhibited both enhanced response and region-to-region coupling favors a top-down (vs. bottom-up) account of olfactory-visual emotion integration. The current findings thus confirm the involvement of multisensory convergence areas, while highlighting unique characteristics of olfaction-related integration. Furthermore, successful crossmodal binding of subthreshold aversive cues not only supports the principle of "inverse effectiveness" in emotion integration but also accentuates the automatic, unconscious quality of crossmodal emotion synthesis.
Martin, Thomas J.; Grigg, Amanda; Kim, Susy A.; Ririe, Douglas G.; Eisenach, James C.
2014-01-01
Background: The 5-choice serial reaction time task (5CSRTT) is commonly used to assess attention in rodents. We sought to develop a variant of the 5CSRTT that would speed training to objective success criteria, and to test whether this variant could determine attention capability in each subject.
New Method: Fisher 344 rats were trained to perform a variant of the 5CSRTT in which the duration of visual cue presentation (cue duration) was titrated between trials based upon performance. The cue duration was decreased when the subject made a correct response, and increased after incorrect responses or omissions. Additionally, test-day challenges were provided, consisting of lengthening the intertrial interval and including a visual distracting stimulus.
Results: Rats readily titrated the cue duration to less than 1 s in 25 training sessions or fewer (mean ± SEM, 22.9 ± 0.7), and the median cue duration (MCD) was calculated as a measure of attention threshold. Increasing the intertrial interval increased premature responses, decreased the number of trials completed, and increased the MCD. Decreasing the intertrial interval and the time allotted for consuming the food reward demonstrated that a minimum of 3.5 s is required for rats to consume two food pellets and successfully attend to the next trial. Visual distraction in the form of a 3 Hz flashing light increased the MCD and both premature and time-out responses.
Comparison with existing method: The titration variant of the 5CSRTT is a useful method that dynamically measures attention threshold across a wide range of subject performance, and significantly decreases the time required for training. Task challenges produce effects in the titration method similar to those reported for the classical procedure.
Conclusions: The titration 5CSRTT method is an efficient training procedure for assessing attention and can be utilized to assess the limit of performance ability across subjects and various schedule manipulations. PMID:25528113
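The titration schedule described in this abstract (shorten the cue after a correct response, lengthen it after an error or omission, and take the median cue duration as the threshold) can be sketched as follows; the starting duration, step size, and floor are illustrative assumptions, not the paper's parameters:

```python
import statistics

def titrate(responses, start=10.0, step=0.5, floor=0.1):
    """Staircase titration of cue duration, as described in the abstract:
    each correct response shortens the next cue, each incorrect response
    or omission lengthens it. Returns the median cue duration (MCD) over
    all trials and the full trial-by-trial history."""
    duration = start
    history = []
    for correct in responses:
        history.append(duration)
        if correct:
            duration = max(floor, duration - step)
        else:
            duration += step
    return statistics.median(history), history

# A simulated subject on a repeating correct/correct/incorrect pattern
# titrates downward and then oscillates around its threshold.
responses = [True, True, False] * 40
mcd, history = titrate(responses)
```

Because correct responses outnumber errors two to one, the staircase converges to a short cue duration, and the MCD summarizes where performance stabilized.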
NASA Technical Reports Server (NTRS)
Parris, B. L.; Cook, A. M.
1978-01-01
Data are presented that show the effects of visual and motion cueing on pilot performance during takeoffs with engine failures. Four groups of USAF pilots flew a simulated KC-135 using four different cueing systems. The most basic of these systems was instrument-only; visual scene simulation and/or motion simulation was added to produce the other systems. Learning curves, mean performance, and subjective data are examined. The results show that the addition of visual cueing results in significant improvement in pilot performance, but the combined use of visual and motion cueing results in far better performance.
Manual control of yaw motion with combined visual and vestibular cues
NASA Technical Reports Server (NTRS)
Zacharias, G. L.; Young, L. R.
1977-01-01
Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation was modelled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A correction to the frequency responses is provided by a separate measurement of manual control performance in an analogous visual pursuit nulling task. The resulting dual-input describing function for motion perception dependence on combined cue presentation supports the complementary model, in which vestibular cues dominate sensation at frequencies above 0.05 Hz. The describing function model is extended by the proposal of a non-linear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues.
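The parallel-channel model described above, with visual and vestibular pathways summing in a complementary manner, can be illustrated with a minimal complementary filter; the first-order filters and the time constant are assumptions for illustration, not the paper's fitted describing functions:

```python
def lowpass(signal, dt=0.01, tau=1.0):
    """First-order discrete low-pass filter."""
    k = dt / (tau + dt)
    out = [signal[0]]
    for u in signal[1:]:
        out.append(out[-1] + k * (u - out[-1]))
    return out

def complementary_estimate(visual, vestibular, dt=0.01, tau=1.0):
    """Sketch of the parallel-channel model: a low-pass visual pathway and
    a complementary high-pass vestibular pathway sum to form the
    self-velocity estimate, so vestibular cues dominate at high frequency
    and visual cues at low frequency."""
    lp_visual = lowpass(visual, dt, tau)
    lp_vest = lowpass(vestibular, dt, tau)
    # high-pass(vestibular) = vestibular - low-pass(vestibular)
    return [lv + (w - lw) for lv, w, lw in zip(lp_visual, vestibular, lp_vest)]

# When both channels sense the same true velocity, the complementary pair
# reconstructs it, since the low-pass and high-pass transfer functions sum
# to unity.
true_velocity = [1.0] * 200
estimate = complementary_estimate(true_velocity, true_velocity)
```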
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Bowles, R. L.
1983-01-01
This paper addresses the issues of motion/visual cueing fidelity requirements for vortex encounters during simulated transport visual approaches and landings. Four simulator configurations were utilized to provide objective performance measures during simulated vortex penetrations, and subjective comments from pilots were collected. The configurations used were as follows: fixed base with visual degradation (delay), fixed base with no visual degradation, moving base with visual degradation (delay), and moving base with no visual degradation. The statistical comparisons of the objective measures and the subjective pilot opinions indicated that although both minimum visual delay and motion cueing are recommended for the vortex penetration task, the visual-scene delay characteristics were not as significant a fidelity factor as was the presence of motion cues. However, this indication was applicable to a restricted task, and to transport aircraft. Although they were statistically significant, the effects of visual delay and motion cueing on the touchdown-related measures were considered to be of no practical consequence.
Media/Device Configurations for Platoon Leader Tactical Training
1985-02-01
Table 4 (Continued), Functional Capability Categories: the device should simulate the real-time receipt of all tactical voice communication, audio and visual battlefield cues, and visual communication signals. A lesser capability level (0.8) covers receipt of limited tactical voice communication, plus audio and visual battlefield cues, and visual communication signals.
NASA Technical Reports Server (NTRS)
Zacharias, G. L.; Young, L. R.
1981-01-01
Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation is modeled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A dual-input describing function analysis supports the complementary model; vestibular cues dominate sensation at higher frequencies. The describing function model is extended by the proposal of a nonlinear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues.
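The complementary parallel-channel idea in this abstract can be sketched numerically. The first-order filter forms and the 0.05 Hz crossover used below are illustrative assumptions for the sketch, not the authors' identified describing functions:

```python
def complementary_estimate(visual, vestibular, freq_hz, crossover_hz=0.05):
    """Illustrative frequency-domain weighting: the visual channel is
    low-passed, the vestibular channel high-passed, and the two channel
    weights always sum to one (complementary filtering)."""
    ratio = freq_hz / crossover_hz
    w_vestibular = ratio**2 / (1.0 + ratio**2)  # high-pass weight
    w_visual = 1.0 - w_vestibular               # low-pass weight
    return w_visual * visual + w_vestibular * vestibular

# Below the crossover vision dominates the self-velocity estimate;
# above it, vestibular cues dominate.
low = complementary_estimate(visual=1.0, vestibular=0.0, freq_hz=0.01)
high = complementary_estimate(visual=1.0, vestibular=0.0, freq_hz=0.5)
```

Because the weights are complementary, a stimulus on which both channels agree passes through unchanged at every frequency, which is the defining property of this class of model.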
Visual Attention in Flies-Dopamine in the Mushroom Bodies Mediates the After-Effect of Cueing.
Koenig, Sebastian; Wolf, Reinhard; Heisenberg, Martin
2016-01-01
Visual environments may simultaneously comprise stimuli of different significance. Often such stimuli require incompatible responses. Selective visual attention allows an animal to respond exclusively to the stimuli at a certain location in the visual field. In the process of establishing its focus of attention, the animal can be influenced by external cues. Here we characterize the behavioral properties and neural mechanism of cueing in the fly Drosophila melanogaster. A cue can be attractive, repulsive or ineffective depending upon, for example, its visual properties and location in the visual field. Dopamine signaling in the brain is required to maintain the effect of cueing once the cue has disappeared. Raising or lowering dopamine at the synapse abolishes this after-effect. Specifically, dopamine is necessary and sufficient in the αβ-lobes of the mushroom bodies. Evidence is provided for an involvement of the αβ-posterior Kenyon cells.
Little, Anthony C; DeBruine, Lisa M; Jones, Benedict C
2011-07-07
Evolutionary approaches to human attractiveness have documented several traits that are proposed to be attractive across individuals and cultures, although both cross-individual and cross-cultural variations are also often found. Previous studies show that parasite prevalence and mortality/health are related to cultural variation in preferences for attractive traits. Visual experience of pathogen cues may mediate such variable preferences. Here we showed individuals slideshows of images with cues to low and high pathogen prevalence and measured their visual preferences for face traits. We found that both men and women moderated their preferences for facial masculinity and symmetry according to recent experience of visual cues to environmental pathogens. Change in preferences was seen mainly for opposite-sex faces, with women preferring more masculine and more symmetric male faces and men preferring more feminine and more symmetric female faces after exposure to pathogen cues than when not exposed to such cues. Cues to environmental pathogens had no significant effects on preferences for same-sex faces. These data complement studies of cross-cultural differences in preferences by suggesting a mechanism for variation in mate preferences. Similar visual experience could lead to within-cultural agreement and differing visual experience could lead to cross-cultural variation. Overall, our data demonstrate that preferences can be strategically flexible according to recent visual experience with pathogen cues. Given that cues to pathogens may signal an increase in contagion/mortality risk, it may be adaptive to shift visual preferences in favour of proposed good-gene markers in environments where such cues are more evident.
The effect of visual context on manual localization of remembered targets
NASA Technical Reports Server (NTRS)
Barry, S. R.; Bloomberg, J. J.; Huebner, W. P.
1997-01-01
This paper examines the contribution of egocentric cues and visual context to manual localization of remembered targets. Subjects pointed in the dark to the remembered position of a target previously viewed without or within a structured visual scene. Without a remembered visual context, subjects pointed to within 2 degrees of the target. The presence of a visual context with cues of straight ahead enhanced pointing performance to the remembered location of central but not off-center targets. Thus, visual context provides strong visual cues of target position and the relationship of body position to target location. Without a visual context, egocentric cues provide sufficient input for accurate pointing to remembered targets.
The fMRI BOLD response to unisensory and multisensory smoking cues in nicotine-dependent adults
Cortese, Bernadette M.; Uhde, Thomas W.; Brady, Kathleen T.; McClernon, F. Joseph; Yang, Qing X.; Collins, Heather R.; LeMatty, Todd; Hartwell, Karen J.
2015-01-01
Given that the vast majority of functional magnetic resonance imaging (fMRI) studies of drug cue reactivity use unisensory visual cues, but that multisensory cues may elicit greater craving-related brain responses, the current study sought to compare the fMRI BOLD response to unisensory visual and multisensory, visual plus odor, smoking cues in 17 nicotine-dependent adult cigarette smokers. Brain activation to smoking-related, compared to neutral, pictures was assessed under cigarette smoke and odorless odor conditions. While smoking pictures elicited a pattern of activation consistent with the addiction literature, the multisensory (odor + picture) smoking cues elicited significantly greater and more widespread activation in mainly frontal and temporal regions. BOLD signal elicited by the multi-sensory, but not unisensory cues, was significantly related to participants’ level of control over craving as well. Results demonstrated that the co-presentation of cigarette smoke odor with smoking-related visual cues, compared to the visual cues alone, elicited greater levels of craving-related brain activation in key regions implicated in reward. These preliminary findings support future research aimed at a better understanding of multisensory integration of drug cues and craving. PMID:26475784
Clark, Gavin I; Rock, Adam J; McKeith, Charles F A; Coventry, William L
2017-09-01
Poker-machine gamblers have been demonstrated to report increases in the urge to gamble following exposure to salient gambling cues. However, the processes which contribute to this urge to gamble remain to be understood. The present study aimed to investigate whether changes in the conscious experience of visual imagery, rationality and volitional control (over one's thoughts, images and attention) predicted changes in the urge to gamble following exposure to a gambling cue. Thirty-one regular poker-machine gamblers who reported at least low levels of problem gambling on the Problem Gambling Severity Index (PGSI), were recruited to complete an online cue-reactivity experiment. Participants completed the PGSI, the visual imagery, rationality and volitional control subscales of the Phenomenology of Consciousness Inventory (PCI), and a visual analogue scale (VAS) assessing urge to gamble. Participants completed the PCI subscales and VAS at baseline, following a neutral video cue and following a gambling video cue. Urge to gamble was found to significantly increase from neutral cue to gambling cue (while controlling for baseline urge) and this increase was predicted by PGSI score. After accounting for the effects of problem-gambling severity, cue-reactive visual imagery, rationality and volitional control significantly improved the prediction of cue-reactive urge to gamble. The small sample size and limited participant characteristic data restricts the generalizability of the findings. Nevertheless, this is the first study to demonstrate that changes in the subjective experience of visual imagery, volitional control and rationality predict changes in the urge to gamble from neutral to gambling cue. The results suggest that visual imagery, rationality and volitional control may play an important role in the experience of the urge to gamble in poker-machine gamblers.
ERIC Educational Resources Information Center
Krahmer, Emiel; Swerts, Marc
2007-01-01
Speakers employ acoustic cues (pitch accents) to indicate that a word is important, but may also use visual cues (beat gestures, head nods, eyebrow movements) for this purpose. Even though these acoustic and visual cues are related, the exact nature of this relationship is far from well understood. We investigate whether producing a visual beat…
Modeling of Depth Cue Integration in Manual Control Tasks
NASA Technical Reports Server (NTRS)
Sweet, Barbara T.; Kaiser, Mary K.; Davis, Wendy
2003-01-01
Psychophysical research has demonstrated that human observers utilize a variety of visual cues to form a perception of three-dimensional depth. However, most of these studies have utilized a passive judgement paradigm, and failed to consider depth-cue integration as a dynamic and task-specific process. In the current study, we developed and experimentally validated a model of manual control of depth that examines how two potential cues (stereo disparity and relative size) are utilized in both first- and second-order active depth control tasks. We found that stereo disparity plays the dominant role in determining depth position, while relative size dominates perception of depth velocity. Stereo disparity also plays a reduced role when made less salient (i.e., when viewing distance is increased). Manual control models predict that position information is sufficient for first-order control tasks, while velocity information is required to perform a second-order control task. Thus, the rules for depth-cue integration in active control tasks are dependent on both task demands and cue quality.
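The task- and quality-dependent weighting described in this abstract can be sketched as a toy model. The weighting functions, the inverse-square falloff of stereo reliability with viewing distance, and all constants below are assumptions for illustration, not the fitted model from the study:

```python
def depth_cue_weights(task_order, viewing_distance_m, ref_distance_m=1.0):
    """Illustrative task-dependent cue weighting: stereo disparity mainly
    informs depth position, relative size mainly informs depth velocity,
    and stereo reliability falls off with viewing distance (disparity
    scales roughly with 1/distance**2)."""
    stereo_reliability = (ref_distance_m / viewing_distance_m) ** 2
    if task_order == 1:    # first-order task: position information suffices
        w_stereo = stereo_reliability / (stereo_reliability + 0.25)
    elif task_order == 2:  # second-order task: velocity information needed
        w_stereo = stereo_reliability / (stereo_reliability + 1.0)
    else:
        raise ValueError("task_order must be 1 or 2")
    return {"stereo": w_stereo, "relative_size": 1.0 - w_stereo}
```

Under these assumed constants, stereo dominates the first-order (position) task at near viewing distances, while relative size gains weight in the second-order (velocity) task and whenever viewing distance grows, mirroring the qualitative pattern the abstract reports.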
Visual Features Involving Motion Seen from Airport Control Towers
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Liston, Dorion
2010-01-01
Visual motion cues are used by tower controllers to support both visual and anticipated separation. Some of these cues are tabulated as part of the overall set of visual features used in towers to separate aircraft. An initial analysis of one motion cue, landing deceleration, is provided as a basis for evaluating how controllers detect and use it for spacing aircraft on or near the surface. Understanding such cues will help determine whether they can be safely used in a remote/virtual tower in which their presentation may be visually degraded.
Allen, Justine J; Mäthger, Lydia M; Barbosa, Alexandra; Buresch, Kendra C; Sogin, Emilia; Schwartz, Jillian; Chubb, Charles; Hanlon, Roger T
2010-04-07
Prey camouflage is an evolutionary response to predation pressure. Cephalopods have extensive camouflage capabilities and studying them can offer insight into effective camouflage design. Here, we examine whether cuttlefish, Sepia officinalis, show substrate or camouflage pattern preferences. In the first two experiments, cuttlefish were presented with a choice between different artificial substrates or between different natural substrates. First, the ability of cuttlefish to show substrate preference on artificial and natural substrates was established. Next, cuttlefish were offered substrates known to evoke the three main camouflage body pattern types these animals show: Uniform and Mottle (which function by background matching) or Disruptive. In a third experiment, cuttlefish were presented with conflicting visual cues on their left and right sides to assess their camouflage response. Given a choice between substrates they might encounter in nature, we found no strong substrate preference except when cuttlefish could bury themselves. Additionally, cuttlefish responded to conflicting visual cues with mixed body patterns in both the substrate preference and split substrate experiments. These results suggest that differences in energy costs for different camouflage body patterns may be minor and that pattern mixing and symmetry may play important roles in camouflage.
Perceptual transparency from image deformation.
Kawabe, Takahiro; Maruya, Kazushi; Nishida, Shin'ya
2015-08-18
Human vision has a remarkable ability to perceive two layers at the same retinal locations, a transparent layer in front of a background surface. Critical image cues to perceptual transparency, studied extensively in the past, are changes in luminance or color that could be caused by light absorptions and reflections by the front layer, but such image changes may not be clearly visible when the front layer consists of a pure transparent material such as water. Our daily experiences with transparent materials of this kind suggest that an alternative potential cue of visual transparency is image deformations of a background pattern caused by light refraction. Although previous studies have indicated that these image deformations, at least static ones, play little role in perceptual transparency, here we show that dynamic image deformations of the background pattern, which could be produced by light refraction on a moving liquid's surface, can produce a vivid impression of a transparent liquid layer without the aid of any other visual cues as to the presence of a transparent layer. Furthermore, a transparent liquid layer perceptually emerges even from a randomly generated dynamic image deformation as long as it is similar to real liquid deformations in its spatiotemporal frequency profile. Our findings indicate that the brain can perceptually infer the presence of "invisible" transparent liquids by analyzing the spatiotemporal structure of dynamic image deformation, for which it uses a relatively simple computation that does not require high-level knowledge about the detailed physics of liquid deformation.
Effects of False Tilt Cues on the Training of Manual Roll Control Skills
NASA Technical Reports Server (NTRS)
Zaal, Peter M. T.; Popovici, Alexandru; Zavala, Melinda A.
2015-01-01
This paper describes a transfer-of-training study performed in the NASA Ames Vertical Motion Simulator. The purpose of the study was to investigate the effect of false tilt cues on training and transfer of training of manual roll control skills. Of specific interest were the skills needed to control unstable roll dynamics of a mid-size transport aircraft close to the stall point. Nineteen general aviation pilots trained on a roll control task with one of three motion conditions: no motion, roll motion only, or reduced coordinated roll motion. All pilots transferred to full coordinated roll motion in the transfer session. A novel multimodal pilot model identification technique was successfully applied to characterize how pilots' use of visual and motion cues changed over the course of training and after transfer. Pilots who trained with uncoordinated roll motion had significantly higher performance during training and after transfer, even though they experienced the false tilt cues. Furthermore, pilot control behavior significantly changed during the two sessions, as indicated by increasing visual and motion gains, and decreasing lead time constants. Pilots training without motion showed higher learning rates after transfer to the full coordinated roll motion case.
Visual form predictions facilitate auditory processing at the N1.
Paris, Tim; Kim, Jeesun; Davis, Chris
2017-02-20
Auditory-visual (AV) events often involve a leading visual cue (e.g. auditory-visual speech) that allows the perceiver to generate predictions about the upcoming auditory event. Electrophysiological evidence suggests that when an auditory event is predicted, processing is sped up, i.e., the N1 component of the ERP occurs earlier (N1 facilitation). However, it is not clear (1) whether N1 facilitation is based specifically on predictive rather than multisensory integration and (2) which particular properties of the visual cue it is based on. The current experiment used artificial AV stimuli in which visual cues predicted but did not co-occur with auditory cues. Visual form cues (high and low salience) and the auditory-visual pairing were manipulated so that auditory predictions could be based on form and timing or on timing only. The results showed that N1 facilitation occurred only for combined form and temporal predictions. These results suggest that faster auditory processing (as indicated by N1 facilitation) is based on predictive processing generated by a visual cue that clearly predicts both what and when the auditory stimulus will occur.
Role of somatosensory and vestibular cues in attenuating visually induced human postural sway
NASA Technical Reports Server (NTRS)
Peterka, Robert J.; Benolken, Martha S.
1993-01-01
The purpose was to determine the contribution of visual, vestibular, and somatosensory cues to the maintenance of stance in humans. Postural sway was induced by full field, sinusoidal visual surround rotations about an axis at the level of the ankle joints. The influences of vestibular and somatosensory cues were characterized by comparing postural sway in normal and bilateral vestibular absent subjects in conditions that provided either accurate or inaccurate somatosensory orientation information. In normal subjects, the amplitude of visually induced sway reached a saturation level as stimulus amplitude increased. The saturation amplitude decreased with increasing stimulus frequency. No saturation phenomenon was observed in subjects with vestibular loss, implying that vestibular cues were responsible for the saturation phenomenon. For visually induced sways below the saturation level, the stimulus-response curves for both normal and vestibular loss subjects were nearly identical implying that (1) normal subjects were not using vestibular information to attenuate their visually induced sway, possibly because sway was below a vestibular-related threshold level, and (2) vestibular loss subjects did not utilize visual cues to a greater extent than normal subjects; that is, a fundamental change in visual system 'gain' was not used to compensate for a vestibular deficit. An unexpected finding was that the amplitude of body sway induced by visual surround motion could be almost three times greater than the amplitude of the visual stimulus in normals and vestibular loss subjects. This occurred in conditions where somatosensory cues were inaccurate and at low stimulus amplitudes. A control system model of visually induced postural sway was developed to explain this finding. For both subject groups, the amplitude of visually induced sway was smaller by a factor of about four in tests where somatosensory cues provided accurate versus inaccurate orientation information.
This implied that (1) the vestibular loss subjects did not utilize somatosensory cues to a greater extent than normal subjects; that is, changes in somatosensory system 'gain' were not used to compensate for a vestibular deficit, and (2) the threshold for the use of vestibular cues in normals was apparently lower in test conditions where somatosensory cues were providing accurate orientation information.
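The qualitative stimulus-response behavior this abstract reports (sway gain that can exceed the stimulus by a factor of about three, capped by a vestibular-mediated saturation level that falls with stimulus frequency) can be sketched as a toy curve. The functional form and every constant below are illustrative assumptions, not the authors' control system model:

```python
def sway_amplitude(stimulus_amp, freq_hz, gain=3.0, sat_at_0_1hz=2.0):
    """Illustrative stimulus-response curve for visually induced sway:
    response grows linearly with stimulus amplitude (with gain > 1, as
    observed) until a saturation ceiling that decreases with stimulus
    frequency, mimicking the vestibular-mediated cap in normal subjects."""
    saturation = sat_at_0_1hz * (0.1 / freq_hz)  # lower ceiling at higher freq
    return min(gain * stimulus_amp, saturation)
```

In this sketch, removing the `min()` cap reproduces the vestibular-loss case, where no saturation was observed and sway simply tracks the linear gain.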
Changes in search rate but not in the dynamics of exogenous attention in action videogame players.
Hubert-Wallander, Bjorn; Green, C Shawn; Sugarman, Michael; Bavelier, Daphne
2011-11-01
Many previous studies have shown that the speed of processing in attentionally demanding tasks seems enhanced following habitual action videogame play. However, using one of the diagnostic tasks for efficiency of attentional processing, a visual search task, Castel and collaborators (Castel, Pratt, & Drummond, Acta Psychologica 119:217-230, 2005) reported no difference in visual search rates, instead proposing that action gaming may change response execution time rather than the efficiency of visual selective attention per se. Here we used two hard visual search tasks, one measuring reaction time and the other accuracy, to test whether visual search rate may be changed by action videogame play. We found greater search rates in the gamer group than in the nongamer controls, consistent with increased efficiency in visual selective attention. We then asked how general the change in attentional throughput noted so far in gamers might be by testing whether exogenous attentional cues would lead to a disproportional enhancement in throughput in gamers as compared to nongamers. Interestingly, exogenous cues were found to enhance throughput equivalently between gamers and nongamers, suggesting that not all mechanisms known to enhance throughput are similarly enhanced in action videogamers.
The effect of contextual cues on the encoding of motor memories.
Howard, Ian S; Wolpert, Daniel M; Franklin, David W
2013-05-01
Several studies have shown that sensory contextual cues can reduce the interference observed during learning of opposing force fields. However, because each study examined a small set of cues, often in a unique paradigm, the relative efficacy of different sensory contextual cues is unclear. In the present study we quantify how seven contextual cues, some investigated previously and some novel, affect the formation and recall of motor memories. Subjects made movements in a velocity-dependent curl field, with direction varying randomly from trial to trial but always associated with a unique contextual cue. Linking field direction to the cursor or background color, or to peripheral visual motion cues, did not reduce interference. In contrast, the orientation of a visual object attached to the hand cursor significantly reduced interference, albeit by a small amount. When the fields were associated with movement in different locations in the workspace, a substantial reduction in interference was observed. We tested whether this reduction in interference was due to the different locations of the visual feedback (targets and cursor) or the movements (proprioceptive). When the fields were associated only with changes in visual display location (movements always made centrally) or only with changes in the movement location (visual feedback always displayed centrally), a substantial reduction in interference was observed. These results show that although some visual cues can lead to the formation and recall of distinct representations in motor memory, changes in spatial visual and proprioceptive states of the movement are far more effective than changes in simple visual contextual cues.
Domain general learning: Infants use social and non-social cues when learning object statistics
Barry, Ryan A.; Graf Estes, Katharine; Rivera, Susan M.
2015-01-01
Previous research has shown that infants can learn from social cues. But is a social cue more effective at directing learning than a non-social cue? This study investigated whether 9-month-old infants (N = 55) could learn a visual statistical regularity in the presence of a distracting visual sequence when attention was directed by either a social cue (a person) or a non-social cue (a rectangle). The results show that both social and non-social cues can guide infants’ attention to a visual shape sequence (and away from a distracting sequence). The social cue more effectively directed attention than the non-social cue during the familiarization phase, but the social cue did not result in significantly stronger learning than the non-social cue. The findings suggest that domain general attention mechanisms allow for the comparable learning seen in both conditions. PMID:25999879
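The visual statistical regularity referred to in this abstract is typically quantified as transitional probabilities between successive items; the helper below is a generic sketch of that computation (the example shape stream is invented for illustration, not the study's actual stimuli):

```python
from collections import Counter

def transitional_probabilities(sequence):
    """Compute P(next | current) for each adjacent pair in a sequence,
    the regularity that statistical-learning studies suppose infants track."""
    pair_counts = Counter(zip(sequence, sequence[1:]))
    first_counts = Counter(sequence[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# A structured stream: within a "unit" (A followed by B) the transition is
# certain, while transitions out of B vary, so TP(A->B) > TP(B->C).
stream = ["A", "B", "C", "D", "A", "B", "A", "B", "C", "D"]
tps = transitional_probabilities(stream)
```

High-probability transitions (here `A -> B`, which always occurs) are what a learner can extract from the attended sequence, while the distracting sequence contributes no such consistent statistics.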
Sensory Cues, Visualization and Physics Learning
ERIC Educational Resources Information Center
Reiner, Miriam
2009-01-01
Bodily manipulations, such as juggling, suggest a well-synchronized physical interaction as if the person were a physics expert. The juggler uses "knowledge" that is rooted in bodily experience, to interact with the environment. Such enacted bodily knowledge is powerful, efficient, predictive, and relates to sensory perception of the dynamics of…
Cross-Sensory Transfer of Reference Frames in Spatial Memory
ERIC Educational Resources Information Center
Kelly, Jonathan W.; Avraamides, Marios N.
2011-01-01
Two experiments investigated whether visual cues influence spatial reference frame selection for locations learned through touch. Participants experienced visual cues emphasizing specific environmental axes and later learned objects through touch. Visual cues were manipulated and haptic learning conditions were held constant. Imagined perspective…
Attentional bias to food-related visual cues: is there a role in obesity?
Doolan, K J; Breslin, G; Hanna, D; Gallagher, A M
2015-02-01
The incentive sensitisation model of obesity suggests that modification of the dopaminergic associated reward systems in the brain may result in increased awareness of food-related visual cues present in the current food environment. Having a heightened awareness of these visual food cues may impact on food choices and eating behaviours, with those who are most aware of, or demonstrate greater attention to, food-related stimuli potentially being at greater risk of overeating and subsequent weight gain. To date, research related to attentional responses to visual food cues has been both limited and conflicting. Such inconsistent findings may in part be explained by the use of different methodological approaches to measure attentional bias and the impact of other factors such as hunger levels, energy density of visual food cues and individual eating style traits that may influence visual attention to food-related cues outside of weight status alone. This review examines the various methodologies employed to measure attentional bias, with a particular focus on the role that attentional processing of food-related visual cues may have in obesity. Based on the findings of this review, it appears too early to define the role that visual attention to food-related cues plays in obesity. The results, however, highlight the importance of considering the most appropriate methodology for measuring attentional bias, and the characteristics of the study populations targeted, both when interpreting results to date and when designing future studies.
Subconscious Visual Cues during Movement Execution Allow Correct Online Choice Reactions
Leukel, Christian; Lundbye-Jensen, Jesper; Christensen, Mark Schram; Gollhofer, Albert; Nielsen, Jens Bo; Taube, Wolfgang
2012-01-01
Part of the sensory information is processed by our central nervous system without conscious perception. Subconscious processing has been shown to be capable of triggering motor reactions. In the present study, we asked the question whether visual information, which is not consciously perceived, could influence decision-making in a choice reaction task. Ten healthy subjects (28±5 years) executed two different experimental protocols. In the Motor reaction protocol, a visual target cue was shown on a computer screen. Depending on the displayed cue, subjects had to either complete a reaching movement (go-condition) or had to abort the movement (stop-condition). The cue was presented with different display durations (20–160 ms). In the second Verbalization protocol, subjects verbalized what they experienced on the screen. Again, the cue was presented with different display durations. This second protocol tested for conscious perception of the visual cue. The results of this study show that subjects achieved significantly more correct responses in the Motor reaction protocol than in the Verbalization protocol. This difference was only observed at the very short display durations of the visual cue. Since correct responses in the Verbalization protocol required conscious perception of the visual information, our findings imply that the subjects performed correct motor responses to visual cues, which they were not conscious about. It is therefore concluded that humans may reach decisions based on subconscious visual information in a choice reaction task. PMID:23049749
Heimbauer, Lisa A; Antworth, Rebecca L; Owren, Michael J
2012-01-01
Nonhuman primates appear to capitalize more effectively on visual cues than corresponding auditory versions. For example, studies of inferential reasoning have shown that monkeys and apes readily respond to seeing that food is present ("positive" cuing) or absent ("negative" cuing). Performance is markedly less effective with auditory cues, with many subjects failing to use this input. Extending recent work, we tested eight captive tufted capuchins (Cebus apella) in locating food using positive and negative cues in visual and auditory domains. The monkeys chose between two opaque cups to receive food contained in one of them. Cup contents were either shown or shaken, providing location cues from both cups, positive cues only from the baited cup, or negative cues from the empty cup. As in previous work, subjects readily used both positive and negative visual cues to secure reward. However, auditory outcomes were both similar to and different from those of earlier studies. Specifically, all subjects came to exploit positive auditory cues, but none responded to negative versions. The animals were also clearly different in visual versus auditory performance. Results indicate that a significant proportion of capuchins may be able to use positive auditory cues, with experience and learning likely playing a critical role. These findings raise the possibility that experience may be significant in visually based performance in this task as well, and highlight that coming to grips with evident differences between visual versus auditory processing may be important for understanding primate cognition more generally.
Differential processing of binocular and monocular gloss cues in human visual cortex
Di Luca, Massimiliano; Ban, Hiroshi; Muryy, Alexander; Fleming, Roland W.
2016-01-01
The visual impression of an object's surface reflectance (“gloss”) relies on a range of visual cues, both monocular and binocular. Whereas previous imaging work has identified processing within ventral visual areas as important for monocular cues, little is known about cortical areas involved in processing binocular cues. Here, we used human functional MRI (fMRI) to test for brain areas selectively involved in the processing of binocular cues. We manipulated stereoscopic information to create four conditions that differed in their disparity structure and in the impression of surface gloss that they evoked. We performed multivoxel pattern analysis to find areas whose fMRI responses allow classes of stimuli to be distinguished based on their depth structure vs. material appearance. We show that higher dorsal areas play a role in processing binocular gloss information, in addition to known ventral areas involved in material processing, with ventral area lateral occipital responding to both object shape and surface material properties. Moreover, we tested for similarities between the representation of gloss from binocular cues and monocular cues. Specifically, we tested for transfer in the decoding performance of an algorithm trained on glossy vs. matte objects defined by either binocular or by monocular cues. We found transfer effects from monocular to binocular cues in dorsal visual area V3B/kinetic occipital (KO), suggesting a shared representation of the two cues in this area. These results indicate the involvement of mid- to high-level visual circuitry in the estimation of surface material properties, with V3B/KO potentially playing a role in integrating monocular and binocular cues. PMID:26912596
Auditory Emotional Cues Enhance Visual Perception
ERIC Educational Resources Information Center
Zeelenberg, Rene; Bocanegra, Bruno R.
2010-01-01
Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…
Cue Integration in Categorical Tasks: Insights from Audio-Visual Speech Perception
Bejjanki, Vikranth Rao; Clayards, Meghan; Knill, David C.; Aslin, Richard N.
2011-01-01
Previous cue integration studies have examined continuous perceptual dimensions (e.g., size) and have shown that human cue integration is well described by a normative model in which cues are weighted in proportion to their sensory reliability, as estimated from single-cue performance. However, this normative model may not be applicable to categorical perceptual dimensions (e.g., phonemes). In tasks defined over categorical perceptual dimensions, optimal cue weights should depend not only on the sensory variance affecting the perception of each cue but also on the environmental variance inherent in each task-relevant category. Here, we present a computational and experimental investigation of cue integration in a categorical audio-visual (articulatory) speech perception task. Our results show that human performance during audio-visual phonemic labeling is qualitatively consistent with the behavior of a Bayes-optimal observer. Specifically, we show that the participants in our task are sensitive, on a trial-by-trial basis, to the sensory uncertainty associated with the auditory and visual cues, during phonemic categorization. In addition, we show that while sensory uncertainty is a significant factor in determining cue weights, it is not the only one and participants' performance is consistent with an optimal model in which environmental, within category variability also plays a role in determining cue weights. Furthermore, we show that in our task, the sensory variability affecting the visual modality during cue-combination is not well estimated from single-cue performance, but can be estimated from multi-cue performance. The findings and computational principles described here represent a principled first step towards characterizing the mechanisms underlying human cue integration in categorical tasks. PMID:21637344
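The normative model referenced in this abstract weights each cue in proportion to its reliability (inverse variance), and in categorical tasks the within-category (environmental) variance adds to the sensory variance in determining that weight. A minimal sketch of this idea; the function names and example numbers are ours, not the study's:

```python
import math

def cue_weight(sigma_sensory, sigma_category=0.0):
    # In a categorical task, within-category (environmental) variance adds
    # to sensory variance, lowering a cue's effective reliability.
    return 1.0 / (sigma_sensory ** 2 + sigma_category ** 2)

def integrate_cues(mu_a, w_a, mu_v, w_v):
    """Reliability-weighted (maximum-likelihood) fusion of two cue estimates.

    The combined mean weights each cue by its reliability; the combined
    variance is smaller than either single-cue variance.
    """
    mu = (w_a * mu_a + w_v * mu_v) / (w_a + w_v)
    sigma = math.sqrt(1.0 / (w_a + w_v))
    return mu, sigma

# A noisy auditory estimate (sigma = 2) and a sharper visual one (sigma = 1):
mu, sigma = integrate_cues(0.0, cue_weight(2.0), 1.0, cue_weight(1.0))
# the visual cue carries 4x the weight, pulling the fused estimate toward it
```

In the single-cue-estimation picture, `sigma_category` is zero and the weights reduce to the classic inverse sensory variances; the abstract's point is that for phoneme-like categories that reduction no longer holds.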
Auditory emotional cues enhance visual perception.
Zeelenberg, René; Bocanegra, Bruno R
2010-04-01
Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by emotional cues as compared to neutral cues. When the cue was presented visually we replicated the emotion-induced impairment found in other studies. Our results suggest emotional stimuli have a twofold effect on perception. They impair perception by reflexively attracting attention at the expense of competing stimuli. However, emotional stimuli also induce a nonspecific perceptual enhancement that carries over onto other stimuli when competition is reduced, for example, by presenting stimuli in different modalities. Copyright 2009 Elsevier B.V. All rights reserved.
The Effects of Explicit Visual Cues in Reading Biological Diagrams
ERIC Educational Resources Information Center
Ge, Yun-Ping; Unsworth, Len; Wang, Kuo-Hua
2017-01-01
Drawing on cognitive theories, this study intends to investigate the effects of explicit visual cues which have been proposed as a critical factor in facilitating understanding of biological images. Three diagrams from Taiwanese textbooks with implicit visual cues, involving the concepts of biological classification systems, fish taxonomy, and…
Visual Navigation during Colony Emigration by the Ant Temnothorax rugatulus
Bowens, Sean R.; Glatt, Daniel P.; Pratt, Stephen C.
2013-01-01
Many ants rely on both visual cues and self-generated chemical signals for navigation, but their relative importance varies across species and context. We evaluated the roles of both modalities during colony emigration by Temnothorax rugatulus. Colonies were induced to move from an old nest in the center of an arena to a new nest at the arena edge. In the midst of the emigration the arena floor was rotated 60° around the old nest entrance, thus displacing any substrate-bound odor cues while leaving visual cues unchanged. This manipulation had no effect on orientation, suggesting little influence of substrate cues on navigation. When this rotation was accompanied by the blocking of most visual cues, the ants became highly disoriented, suggesting that they did not fall back on substrate cues even when deprived of visual information. Finally, when the substrate was left in place but the visual surround was rotated, the ants' subsequent headings were strongly rotated in the same direction, showing a clear role for visual navigation. Combined with earlier studies, these results suggest that chemical signals deposited by Temnothorax ants serve more for marking of familiar territory than for orientation. The ants instead navigate visually, showing the importance of this modality even for species with small eyes and coarse visual acuity. PMID:23671713
Charboneau, Evonne J.; Dietrich, Mary S.; Park, Sohee; Cao, Aize; Watkins, Tristan J; Blackford, Jennifer U; Benningfield, Margaret M.; Martin, Peter R.; Buchowski, Maciej S.; Cowan, Ronald L.
2013-01-01
Craving is a major motivator underlying drug use and relapse but the neural correlates of cannabis craving are not well understood. This study sought to determine whether visual cannabis cues increase cannabis craving and whether cue-induced craving is associated with regional brain activation in cannabis-dependent individuals. Cannabis craving was assessed in 16 cannabis-dependent adult volunteers while they viewed cannabis cues during a functional MRI (fMRI) scan. The Marijuana Craving Questionnaire was administered immediately before and after each of three cannabis cue-exposure fMRI runs. FMRI blood-oxygenation-level-dependent (BOLD) signal intensity was determined in regions activated by cannabis cues to examine the relationship of regional brain activation to cannabis craving. Craving scores increased significantly following exposure to visual cannabis cues. Visual cues activated multiple brain regions, including inferior orbital frontal cortex, posterior cingulate gyrus, parahippocampal gyrus, hippocampus, amygdala, superior temporal pole, and occipital cortex. Craving scores at baseline and at the end of all three runs were significantly correlated with brain activation during the first fMRI run only, in the limbic system (including amygdala and hippocampus) and paralimbic system (superior temporal pole), and visual regions (occipital cortex). Cannabis cues increased craving in cannabis-dependent individuals and this increase was associated with activation in the limbic, paralimbic, and visual systems during the first fMRI run, but not subsequent fMRI runs. These results suggest that these regions may mediate visually cued aspects of drug craving. This study provides preliminary evidence for the neural basis of cue-induced cannabis craving and suggests possible neural targets for interventions targeted at treating cannabis dependence. PMID:24035535
Toward semantic-based retrieval of visual information: a model-based approach
NASA Astrophysics Data System (ADS)
Park, Youngchoon; Golshani, Forouzan; Panchanathan, Sethuraman
2002-07-01
This paper centers on the problem of automated visual content classification. To enable classification-based image or visual object retrieval, we propose a new image representation scheme called the visual context descriptor (VCD), a multidimensional vector in which each element represents the frequency of a unique visual property of an image or a region. VCD utilizes predetermined quality dimensions (i.e., types of features and quantization levels) and semantic model templates mined a priori. Not only observed visual cues but also contextually relevant visual features are proportionally incorporated in VCD. The contextual relevance of a visual cue to a semantic class is determined by correlation analysis of ground-truth samples. Such co-occurrence analysis of visual cues requires transforming a real-valued visual feature vector (e.g., color histogram, Gabor texture, etc.) into a discrete event (e.g., a term in text). Good features to track, the rule of thirds, iterative k-means clustering, and TSVQ are involved in transforming feature vectors into unified symbolic representations called visual terms. Similarity-based visual cue frequency estimation is also proposed and used to ensure the correctness of model learning and matching, since sparse sample data otherwise make frequency estimates of visual cues unstable. The proposed method naturally allows integration of heterogeneous visual, temporal, or spatial cues in a single classification or matching framework, and can easily be integrated into a semantic knowledge base such as a thesaurus or ontology. Robust semantic visual model template creation and object-based image retrieval are demonstrated based on the proposed content description scheme.
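The core move in the abstract above, quantizing real-valued feature vectors into discrete "visual terms" and counting their frequencies, can be sketched with a minimal k-means codebook. This is a schematic illustration only (the paper additionally uses good-features-to-track, the rule of thirds, and TSVQ, omitted here), and all names are our own:

```python
import random

def kmeans(vectors, k, iters=10, seed=0):
    """Minimal Lloyd's k-means, used to quantize real-valued feature
    vectors (color histograms, Gabor responses, ...) into k discrete
    'visual terms', analogous to words in text retrieval."""
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            i = min(range(k), key=lambda c: sum((a - b) ** 2
                                                for a, b in zip(v, centroids[c])))
            clusters[i].append(v)
        for c, members in enumerate(clusters):
            if members:  # empty clusters keep their old centroid
                centroids[c] = tuple(sum(dim) / len(members)
                                     for dim in zip(*members))
    return centroids

def visual_term(v, centroids):
    # A feature vector's "visual term" is the index of its nearest centroid.
    return min(range(len(centroids)),
               key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centroids[c])))

def vcd(region_features, centroids):
    """Frequency vector of visual terms over an image's regions, i.e. the
    observed-cue part of a visual context descriptor."""
    counts = [0] * len(centroids)
    for v in region_features:
        counts[visual_term(v, centroids)] += 1
    return counts

# Example: two well-separated clusters of 2-D features
feats = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
codebook = kmeans(feats, k=2)
descriptor = vcd(feats, codebook)
```

The full VCD additionally blends in contextually relevant (co-occurring) terms in proportion to their correlation with the semantic class, which this sketch omits.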
Visuospatial selective attention in chickens.
Sridharan, Devarajan; Ramamurthy, Deepa L; Schwarz, Jason S; Knudsen, Eric I
2014-05-13
Voluntary control of attention promotes intelligent, adaptive behaviors by enabling the selective processing of information that is most relevant for making decisions. Despite extensive research on attention in primates, the capacity for selective attention in nonprimate species has never been quantified. Here we demonstrate selective attention in chickens by applying protocols that have been used to characterize visual spatial attention in primates. Chickens were trained to localize and report the vertical position of a target in the presence of task-relevant distracters. A spatial cue, the location of which varied across individual trials, indicated the horizontal, but not vertical, position of the upcoming target. Spatial cueing improved localization performance: accuracy (d') increased and reaction times decreased in a space-specific manner. Distracters severely impaired perceptual performance, and this impairment was greatly reduced by spatial cueing. Signal detection analysis with an "indecision" model demonstrated that spatial cueing significantly increased choice certainty in localizing targets. By contrast, error-aversion certainty (certainty of not making an error) remained essentially constant across cueing protocols, target contrasts, and individuals. The results show that chickens shift spatial attention rapidly and dynamically, following principles of stimulus selection that closely parallel those documented in primates. The findings suggest that the mechanisms that control attention have been conserved through evolution, and establish chickens--a highly visual species that is easily trained and amenable to cutting-edge experimental technologies--as an attractive model for linking behavior to neural mechanisms of selective attention.
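The accuracy measure (d') in the chicken-attention study above is the standard signal-detection sensitivity index: the difference of z-transformed hit and false-alarm rates. A small sketch, assuming a simple clipping correction for extreme rates (a common convention, not necessarily the authors'):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate, floor=0.01):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate).

    Rates are clipped away from 0 and 1 so the inverse-normal transform
    stays finite when a subject scores perfectly."""
    z = NormalDist().inv_cdf

    def clip(p):
        return min(max(p, floor), 1.0 - floor)

    return z(clip(hit_rate)) - z(clip(false_alarm_rate))

# Spatial cueing that raises hits and lowers false alarms raises d':
uncued = d_prime(0.70, 0.30)
cued = d_prime(0.90, 0.10)
```

Equal hit and false-alarm rates give d' = 0 (no sensitivity), which is why the measure separates perceptual sensitivity from response bias.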
Seeing is believing: information content and behavioural response to visual and chemical cues
Gonzálvez, Francisco G.; Rodríguez-Gironés, Miguel A.
2013-01-01
Predator avoidance and foraging often pose conflicting demands. Animals can decrease mortality risk by searching for predators, but searching decreases foraging time and hence intake. We used this principle to investigate how prey should use information to detect, assess and respond to predation risk from an optimal foraging perspective. A mathematical model showed that solitary bees should increase flower examination time in response to predator cues and that the rate of false alarms should be negatively correlated with the relative value of the flower explored. The predatory ant, Oecophylla smaragdina, and the harmless ant, Polyrhachis dives, differ in the profile of volatiles they emit and in their visual appearance. As predicted, the solitary bee Nomia strigata spent more time examining virgin flowers in the presence of predator cues than in their absence. Furthermore, the proportion of flowers rejected decreased from morning to noon, as the relative value of virgin flowers increased. In addition, bees responded differently to visual and chemical cues. While chemical cues induced bees to search around flowers, bees detecting visual cues hovered in front of them. These strategies may allow prey to identify the nature of visual cues and to locate the source of chemical cues. PMID:23698013
Intercepting a sound without vision
Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica
2017-01-01
Visual information is extremely important to generate internal spatial representations. In the auditory modality, the absence of visual cues during early infancy does not preclude the development of some spatial strategies. However, specific spatial abilities might be impaired. In the current study, we investigated the effect of early visual deprivation on the ability to localize static and moving auditory stimuli by comparing sighted and early blind individuals’ performance in different spatial tasks. We also examined perceptual stability in the two groups of participants by matching localization accuracy in a static and a dynamic head condition that involved rotational head movements. Sighted participants accurately localized static and moving sounds. Their localization ability remained unchanged after rotational movements of the head. Conversely, blind participants showed a leftward bias during the localization of static sounds and only a small bias for moving sounds. Moreover, head movements induced a significant bias in the direction of head motion during the localization of moving sounds. These results suggest that internal spatial representations might be body-centered in blind individuals and that in sighted people the availability of visual cues during early infancy may affect sensory-motor interactions. PMID:28481939
NASA Technical Reports Server (NTRS)
Foyle, David C.; Kaiser, Mary K.; Johnson, Walter W.
1992-01-01
This paper reviews some of the sources of visual information that are available in the out-the-window scene and describes how these visual cues are important for routine pilotage and training, as well as the development of simulator visual systems and enhanced or synthetic vision systems for aircraft cockpits. It is shown how these visual cues may change or disappear under environmental or sensor conditions, and how the visual scene can be augmented by advanced displays to capitalize on the pilot's excellent ability to extract visual information from the visual scene.
Myers, Nicholas E.; Walther, Lena; Wallis, George; Stokes, Mark G.; Nobre, Anna C.
2015-01-01
Working memory (WM) is strongly influenced by attention. In visual working-memory tasks, recall performance can be improved by an attention-guiding cue presented before encoding (precue) or during maintenance (retrocue). Although precues and retrocues recruit a similar fronto-parietal control network, the two are likely to exhibit some processing differences, since precues invite anticipation of upcoming information, while retrocues may guide prioritisation, protection, and selection of information already in mind. Here we explored the behavioral and electrophysiological differences between precueing and retrocueing in a new visual working-memory task designed to permit a direct comparison between cueing conditions. We found marked differences in event-related potential (ERP) profiles between the precue and retrocue conditions. In line with precues primarily generating an anticipatory shift of attention toward the location of an upcoming item, we found a robust lateralization in late cue-evoked potentials associated with target anticipation. Retrocues elicited a different pattern of ERPs that was compatible with an early selection mechanism, but not with stimulus anticipation. In contrast to the distinct ERP patterns, alpha band (8-14 Hz) lateralization was indistinguishable between cue types (reflecting, in both conditions, the location of the cued item). We speculate that whereas alpha-band lateralization after a precue is likely to enable anticipatory attention, lateralization after a retrocue may instead enable the controlled spatiotopic access to recently encoded visual information. PMID:25244118
Role of somatosensory and vestibular cues in attenuating visually induced human postural sway
NASA Technical Reports Server (NTRS)
Peterka, R. J.; Benolken, M. S.
1995-01-01
The purpose of this study was to determine the contribution of visual, vestibular, and somatosensory cues to the maintenance of stance in humans. Postural sway was induced by full-field, sinusoidal visual surround rotations about an axis at the level of the ankle joints. The influences of vestibular and somatosensory cues were characterized by comparing postural sway in normal and bilateral vestibular absent subjects in conditions that provided either accurate or inaccurate somatosensory orientation information. In normal subjects, the amplitude of visually induced sway reached a saturation level as stimulus amplitude increased. The saturation amplitude decreased with increasing stimulus frequency. No saturation phenomena were observed in subjects with vestibular loss, implying that vestibular cues were responsible for the saturation phenomenon. For visually induced sways below the saturation level, the stimulus-response curves for both normal subjects and subjects experiencing vestibular loss were nearly identical, implying (1) that normal subjects were not using vestibular information to attenuate their visually induced sway, possibly because sway was below a vestibular-related threshold level, and (2) that subjects with vestibular loss did not utilize visual cues to a greater extent than normal subjects; that is, a fundamental change in visual system "gain" was not used to compensate for a vestibular deficit. An unexpected finding was that the amplitude of body sway induced by visual surround motion could be almost 3 times greater than the amplitude of the visual stimulus in normal subjects and subjects with vestibular loss. This occurred in conditions where somatosensory cues were inaccurate and at low stimulus amplitudes. A control system model of visually induced postural sway was developed to explain this finding. 
For both subject groups, the amplitude of visually induced sway was smaller by a factor of about 4 in tests where somatosensory cues provided accurate versus inaccurate orientation information. This implied (1) that the subjects experiencing vestibular loss did not utilize somatosensory cues to a greater extent than normal subjects; that is, changes in somatosensory system "gain" were not used to compensate for a vestibular deficit, and (2) that the threshold for the use of vestibular cues in normal subjects was apparently lower in test conditions where somatosensory cues were providing accurate orientation information.
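The saturation behaviour reported above can be caricatured with a toy piecewise-linear model: visually driven sway grows linearly with stimulus amplitude (with a gain that can exceed 1, up to the reported factor of about 3) until a vestibular-imposed ceiling, and in vestibular-loss subjects that ceiling is absent. This is an illustration only, not the authors' control-system model:

```python
def sway_amplitude(stim_amp, visual_gain=3.0, saturation=None):
    """Toy saturation model of visually induced sway.

    Sway grows linearly with stimulus amplitude until a ceiling set by
    vestibular cues; saturation=None models a vestibular-loss subject
    with no ceiling. Gain and ceiling values here are illustrative."""
    response = visual_gain * stim_amp
    return response if saturation is None else min(response, saturation)

amps = (0.25, 0.5, 1.0, 2.0)
normal = [sway_amplitude(a, saturation=2.0) for a in amps]  # saturates
loss = [sway_amplitude(a) for a in amps]                    # keeps growing
```

Below the ceiling the two curves coincide, mirroring the finding that normal and vestibular-loss subjects had nearly identical stimulus-response curves for sub-saturation sway.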
Differential processing of binocular and monocular gloss cues in human visual cortex.
Sun, Hua-Chun; Di Luca, Massimiliano; Ban, Hiroshi; Muryy, Alexander; Fleming, Roland W; Welchman, Andrew E
2016-06-01
The visual impression of an object's surface reflectance ("gloss") relies on a range of visual cues, both monocular and binocular. Whereas previous imaging work has identified processing within ventral visual areas as important for monocular cues, little is known about cortical areas involved in processing binocular cues. Here, we used human functional MRI (fMRI) to test for brain areas selectively involved in the processing of binocular cues. We manipulated stereoscopic information to create four conditions that differed in their disparity structure and in the impression of surface gloss that they evoked. We performed multivoxel pattern analysis to find areas whose fMRI responses allow classes of stimuli to be distinguished based on their depth structure vs. material appearance. We show that higher dorsal areas play a role in processing binocular gloss information, in addition to known ventral areas involved in material processing, with ventral area lateral occipital responding to both object shape and surface material properties. Moreover, we tested for similarities between the representation of gloss from binocular cues and monocular cues. Specifically, we tested for transfer in the decoding performance of an algorithm trained on glossy vs. matte objects defined by either binocular or by monocular cues. We found transfer effects from monocular to binocular cues in dorsal visual area V3B/kinetic occipital (KO), suggesting a shared representation of the two cues in this area. These results indicate the involvement of mid- to high-level visual circuitry in the estimation of surface material properties, with V3B/KO potentially playing a role in integrating monocular and binocular cues. Copyright © 2016 the American Physiological Society.
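The cross-decoding ("transfer") analysis described above trains a decoder on gloss defined by one cue type and tests it on gloss defined by the other; above-chance transfer implies a cue-invariant representation. A sketch on synthetic data, with a nearest-centroid classifier standing in for the actual MVPA algorithm and all numbers invented:

```python
import random

rng = random.Random(1)

def pattern(glossy, cue_offset, n_vox=20, noise=0.3):
    # Simulated voxel pattern: a gloss signal shared across cue types,
    # plus a cue-specific offset and measurement noise.
    signal = 1.0 if glossy else -1.0
    return [signal + cue_offset + rng.gauss(0.0, noise) for _ in range(n_vox)]

def centroid(patterns):
    return [sum(col) / len(patterns) for col in zip(*patterns)]

def predict(x, centroids):
    # Nearest-centroid decoding: pick the class whose mean pattern is closest.
    return min(centroids, key=lambda lab: sum((a - b) ** 2
                                              for a, b in zip(x, centroids[lab])))

# "Train" on monocular-cue stimuli (cue offset +0.2) ...
train = {lab: centroid([pattern(lab == "glossy", 0.2) for _ in range(20)])
         for lab in ("glossy", "matte")}
# ... then "test" on binocular-cue stimuli (cue offset -0.2).
test = [("glossy", pattern(True, -0.2)) for _ in range(20)] + \
       [("matte", pattern(False, -0.2)) for _ in range(20)]
transfer_acc = sum(predict(x, train) == lab for lab, x in test) / len(test)
```

Because the simulated gloss signal is shared across cue types, the decoder transfers; removing the shared `signal` term would drop `transfer_acc` to chance, which is the contrast the fMRI analysis exploits.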
Salter, Phia S; Kelley, Nicholas J; Molina, Ludwin E; Thai, Luyen T
2017-09-01
Photographs provide critical retrieval cues for personal remembering, but few studies have considered this phenomenon at the collective level. In this research, we examined the psychological consequences of visual attention to the presence (or absence) of racially charged retrieval cues within American racial segregation photographs. We hypothesised that attention to racial retrieval cues embedded in historical photographs would increase social justice concept accessibility. In Study 1, we recorded gaze patterns with an eye-tracker among participants viewing images that contained racial retrieval cues or were digitally manipulated to remove them. In Study 2, we manipulated participants' gaze behaviour by either directing visual attention toward racial retrieval cues, away from racial retrieval cues, or directing attention within photographs where racial retrieval cues were missing. Across Studies 1 and 2, visual attention to racial retrieval cues in photographs documenting historical segregation predicted social justice concept accessibility.
USDA-ARS?s Scientific Manuscript database
In June and July 2011, traps were deployed in Tuskegee National Forest, Macon County, Alabama, to test the influence of chemical and visual cues on the capture of bark and ambrosia beetles (Coleoptera: Curculionidae: Scolytinae). The first experiment investigated t...
Impact of Visual, Vocal, and Lexical Cues on Judgments of Counselor Qualities
ERIC Educational Resources Information Center
Strahan, Carole; Zytowski, Donald G.
1976-01-01
Undergraduate students (N=130) rated Carl Rogers via visual, lexical, vocal, or vocal-lexical communication channels. Lexical cues were more important in creating favorable impressions among females. Subsequent exposure to combined visual-vocal-lexical cues resulted in warmer and less distant ratings, but not on a consistent basis. (Author)
Comprehension of Infrequent Subject-Verb Agreement Forms: Evidence from French-Learning Children
ERIC Educational Resources Information Center
Legendre, Geraldine; Barriere, Isabelle; Goyet, Louise; Nazzi, Thierry
2010-01-01
Two comprehension experiments were conducted to investigate whether young French-learning children (N = 76) are able to use a single number cue in subject-verb agreement contexts and match a visually dynamic scene with a corresponding verbal stimulus. Results from both preferential looking and pointing demonstrated significant comprehension in…
Schiller, Peter H; Kwak, Michelle C; Slocum, Warren M
2012-08-01
This study examined how effectively visual and auditory cues can be integrated in the brain for the generation of motor responses. The latencies with which saccadic eye movements are produced in humans and monkeys form, under certain conditions, a bimodal distribution, the first mode of which has been termed express saccades. In humans, a much higher percentage of express saccades is generated when both visual and auditory cues are provided compared with the single presentation of these cues [H. C. Hughes et al. (1994) J. Exp. Psychol. Hum. Percept. Perform., 20, 131-153]. In this study, we addressed two questions: first, do monkeys also integrate visual and auditory cues for express saccade generation as do humans and second, does such integration take place in humans when, instead of eye movements, the task is to press levers with fingers? Our results show that (i) in monkeys, as in humans, the combined visual and auditory cues generate a much higher percentage of express saccades than do singly presented cues and (ii) the latencies with which levers are pressed by humans are shorter when both visual and auditory cues are provided compared with the presentation of single cues, but the distribution in all cases is unimodal; response latencies in the express range seen in the execution of saccadic eye movements are not obtained with lever pressing. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Influence of habitat degradation on fish replenishment
NASA Astrophysics Data System (ADS)
McCormick, M. I.; Moore, J. A. Y.; Munday, P. L.
2010-09-01
Temperature-induced coral bleaching is a major threat to the biodiversity of coral reef ecosystems. While reductions in species diversity and abundance of fish communities have been documented following coral bleaching, the mechanisms that underlie these changes are poorly understood. The present study examined the impacts of coral bleaching on the early life-history processes of coral reef fishes. Daily monitoring of fish settlement patterns found that ten times as many fish settled to healthy coral as to sub-lethally bleached coral. Species diversity of settling fishes was least on bleached coral and greatest on dead coral, with healthy coral having intermediate levels of diversity. Laboratory experiments using light-trap-caught juveniles showed that different damselfish species chose among healthy, bleached, and dead coral habitats using different combinations of visual and olfactory cues. The live coral specialist, Pomacentrus moluccensis, preferred live coral and avoided bleached and dead coral, using mostly visual cues to inform its habitat choice. The habitat generalist, Pomacentrus amboinensis, also preferred live coral and avoided bleached and dead coral but selected these habitats using both visual and olfactory cues. Trials with another habitat generalist, Dischistodus sp., suggested that vision played a significant role. A 20-day field experiment that manipulated densities of P. moluccensis on healthy and bleached coral heads found an influence of fish density on juvenile weight and growth, but no significant influence of habitat quality. These results suggest that coral bleaching will affect settlement patterns and species distributions by influencing the visual and olfactory cues that reef fish larvae use to make settlement choices. Furthermore, increased fish density within the remaining healthy coral habitats could play an important role in influencing population dynamics.
Ono, Yumie; Nomoto, Yasunori; Tanaka, Shohei; Sato, Keisuke; Shimada, Sotaro; Tachibana, Atsumichi; Bronner, Shaw; Noah, J Adam
2014-01-15
We utilized the high temporal resolution of functional near-infrared spectroscopy to explore how sensory inputs (visual and rhythmic auditory cues) are processed in the cortical areas of multimodal integration to achieve coordinated motor output during unrestricted dance simulation gameplay. Using an open source clone of the dance simulation video game, Dance Dance Revolution, two cortical regions of interest were selected for study: the middle temporal gyrus (MTG) and the frontopolar cortex (FPC). We hypothesized that activity in the FPC would indicate top-down regulatory mechanisms of motor behavior, while that in the MTG would be sustained due to bottom-up integration of visual and auditory cues throughout the task. We also hypothesized that a correlation would exist between behavioral performance and the temporal patterns of the hemodynamic responses in these regions of interest. Results indicated that greater temporal accuracy of dance steps positively correlated with persistent activation of the MTG and with cumulative suppression of the FPC. When auditory cues were eliminated from the simulation, modifications in cortical responses were found depending on gameplay performance. In the MTG, high-performance players showed an increase but low-performance players displayed a decrease in the cumulative amount of oxygenated hemoglobin response in the no-music condition compared to the music condition. In the FPC, high-performance players showed relatively small variance in activity regardless of the presence of auditory cues, while low-performance players showed larger differences in activity between the no-music and music conditions. These results suggest that the MTG plays an important role in the successful integration of visual and rhythmic cues and that the FPC may work as a top-down control to compensate for insufficient integrative ability of visual and rhythmic cues in the MTG. 
The relative relationship between activity in these cortical areas distinguished high- from low-performance levels when performing cued motor tasks. We propose that changes in this relationship can be monitored to gauge performance increases in motor learning and rehabilitation programs. Copyright © 2013 Elsevier Inc. All rights reserved.
Lönnstedt, Oona M; Munday, Philip L; McCormick, Mark I; Ferrari, Maud C O; Chivers, Douglas P
2013-09-01
Carbon dioxide (CO2) levels in the atmosphere and surface ocean are rising at an unprecedented rate due to sustained and accelerating anthropogenic CO2 emissions. Previous studies have documented that exposure to elevated CO2 impairs antipredator behavior in coral reef fish responding to chemical cues associated with predation. However, whether ocean acidification will impair visual recognition of common predators is currently unknown. This study examined whether sensory compensation in the presence of multiple sensory cues could reduce the impacts of ocean acidification on antipredator responses. When exposed to seawater enriched with levels of CO2 predicted for the end of this century (880 μatm CO2), prey fish completely lost their response to conspecific alarm cues. While the visual response to a predator was also affected by high CO2, it was not entirely lost. Fish exposed to elevated CO2 spent less time in shelter than current-day controls and did not exhibit antipredator signaling behavior (bobbing) when multiple predator cues were present. They did, however, reduce feeding rate and activity levels to the same level as controls. The results suggest that the response of fish to visual cues may partially compensate for the lack of response to chemical cues. Fish subjected to elevated CO2 levels and exposed to chemical and visual predation cues simultaneously responded with the same intensity as controls exposed to visual cues alone. However, these responses were still weaker than those of control fish simultaneously exposed to chemical and visual predation cues. Consequently, visual cues improve the antipredator behavior of CO2-exposed fish but do not fully compensate for the loss of response to chemical cues. The reduced ability to respond correctly to a predator will have ramifications for survival in encounters with predators in the field, which could have repercussions for population replenishment in acidified oceans.
Contextual Cueing Effect in Spatial Layout Defined by Binocular Disparity
Zhao, Guang; Zhuang, Qian; Ma, Jie; Tu, Shen; Liu, Qiang; Sun, Hong-jin
2017-01-01
Repeated visual context induces higher search efficiency, revealing a contextual cueing effect, which depends on the association between the target and its visual context. In this study, participants performed a visual search task where search items were presented with depth information defined by binocular disparity. When the 3-dimensional (3D) configurations were repeated over blocks, the contextual cueing effect was obtained (Experiment 1). When depth information varied randomly over repeated configurations, visual search was not facilitated and the contextual cueing effect was largely eliminated (Experiment 2). However, when the search items were given a small random displacement in the 2-dimensional (2D) plane while the depth information was held constant, the contextual cueing effect was preserved (Experiment 3). We concluded that the contextual cueing effect was robust in contexts defined by 3D space with stereoscopic information and, more importantly, that the visual system prioritized stereoscopic information in the learning of spatial information when depth information was available. PMID:28912739
A magnetoencephalography study of visual processing of pain anticipation.
Machado, Andre G; Gopalakrishnan, Raghavan; Plow, Ela B; Burgess, Richard C; Mosher, John C
2014-07-15
Anticipating pain is important for avoiding injury; however, in chronic pain patients, anticipatory behavior can become maladaptive, leading to sensitization and limiting function. Knowledge of the networks involved in pain anticipation and conditioning over time could help devise novel, better-targeted therapies. Using magnetoencephalography, we evaluated the neural processing of pain anticipation in 10 healthy subjects. Anticipatory cortical activity elicited by consecutive visual cues signifying an imminent painful stimulus was compared with that elicited by cues signifying a nonpainful stimulus or no stimulus. We found that the neural processing of visually evoked pain anticipation involves the primary visual cortex along with cingulate and frontal regions. Visual cortex could quickly and independently encode and discriminate between visual cues associated with pain anticipation and no pain during preconscious phases following object presentation. When evaluating the effect of task repetition on participating cortical areas, we found that activity of prefrontal and cingulate regions was most prominent early on, when subjects were still naive to a cue's contextual meaning. Visual cortical activity was significant throughout later phases. Although visual cortex may precisely and time-efficiently decode cues anticipating pain or no pain, prefrontal areas establish the context associated with each cue. These findings have important implications for processes involved in pain anticipation and maladaptive pain conditioning. Copyright © 2014 the American Physiological Society.
Unconscious cues bias first saccades in a free-saccade task.
Huang, Yu-Feng; Tan, Edlyn Gui Fang; Soon, Chun Siong; Hsieh, Po-Jang
2014-10-01
Visual-spatial attention can be biased towards salient visual information without visual awareness. It is unclear, however, whether such bias can further influence free-choices such as saccades in a free viewing task. In our experiment, we presented visual cues below awareness threshold immediately before people made free saccades. Our results showed that masked cues could influence the direction and latency of the first free saccade, suggesting that salient visual information can unconsciously influence free actions. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Sitterley, T. E.
1974-01-01
The effectiveness of an improved static retraining method was evaluated for a simulated space vehicle approach and landing under instrument and visual flight conditions. Experienced pilots were trained and then tested after 4 months without flying to compare their performance using the improved method with three methods previously evaluated. Use of the improved static retraining method resulted in no practical or significant skill degradation and was found to be even more effective than methods using a dynamic presentation of visual cues. The results suggested that properly structured open-loop methods of flight control task retraining are feasible.
Suggested Interactivity: Seeking Perceived Affordances for Information Visualization.
Boy, Jeremy; Eveillard, Louis; Detienne, Françoise; Fekete, Jean-Daniel
2016-01-01
In this article, we investigate methods for suggesting the interactivity of online visualizations embedded with text. We first assess the need for such methods by conducting three initial experiments on Amazon's Mechanical Turk. We then present a design space for Suggested Interactivity (SI; i.e., visual cues used as perceived affordances), based on a survey of 382 HTML5 and visualization websites. Finally, we assess the effectiveness of three SI cues we designed for suggesting the interactivity of bar charts embedded with text. Our results show that only one cue (SI3) was successful in inciting participants to interact with the visualizations, and we hypothesize that this is because this particular cue provided feedforward.
Visual speech influences speech perception immediately but not automatically.
Mitterer, Holger; Reinisch, Eva
2017-02-01
Two experiments examined the time course of the use of auditory and visual speech cues to spoken word recognition using an eye-tracking paradigm. Results of the first experiment showed that the use of visual speech cues from lipreading is reduced if concurrently presented pictures require a division of attentional resources. This reduction was evident even when listeners' eye gaze was on the speaker rather than the (static) pictures. Experiment 2 used a deictic hand gesture to foster attention to the speaker. At the same time, the visual processing load was reduced by keeping the visual display constant over a fixed number of successive trials. Under these conditions, the visual speech cues from lipreading were used. Moreover, the eye-tracking data indicated that visual information was used immediately and even earlier than auditory information. In combination, these data indicate that visual speech cues are not used automatically, but if they are used, they are used immediately.
Gilchrist, Amanda L; Duarte, Audrey; Verhaeghen, Paul
2016-01-01
Research with younger adults has shown that retrospective cues can be used to orient top-down attention toward relevant items in working memory. We examined whether older adults could take advantage of these cues to improve memory performance. Younger and older adults were presented with visual arrays of five colored shapes; during maintenance, participants were presented either with an informative cue based on an object feature (here, object shape or color) that would be probed, or with an uninformative, neutral cue. Although older adults were less accurate overall, both age groups benefited from the presentation of an informative, feature-based cue relative to a neutral cue. Surprisingly, we also observed differences in the effectiveness of shape versus color cues and their effects upon post-cue memory load. These results suggest that older adults can use top-down attention to remove irrelevant items from visual working memory, provided that task-relevant features function as cues.
Chemical and visual communication during mate searching in rock shrimp.
Díaz, Eliecer R; Thiel, Martin
2004-06-01
Mate searching in crustaceans depends on different communicational cues, of which chemical and visual cues are most important. Herein we examined the role of chemical and visual communication during mate searching and assessment in the rock shrimp Rhynchocinetes typus. Adult male rock shrimp experience major ontogenetic changes. The terminal molt stages (named "robustus") are dominant and capable of monopolizing females during the mating process. Previous studies had shown that most females preferably mate with robustus males, but how these dominant males and receptive females find each other is uncertain, and is the question we examined herein. In a Y-maze designed to test for the importance of waterborne chemical cues, we observed that females approached the robustus male significantly more often than the typus male. Robustus males, however, were unable to locate receptive females via chemical signals. Using an experimental set-up that allowed testing for the importance of visual cues, we demonstrated that receptive females do not use visual cues to select robustus males, but robustus males use visual cues to find receptive females. Visual cues used by the robustus males were the tumults created by agitated aggregations of subordinate typus males around the receptive females. These results indicate a strong link between sexual communication and the mating system of rock shrimp in which dominant males monopolize receptive females. We found that females and males use different (sex-specific) communicational cues during mate searching and assessment, and that the sexual communication of rock shrimp is similar to that of the American lobster, where females are first attracted to the dominant males by chemical cues emitted by these males. A brief comparison between these two species shows that female behaviors during sexual communication contribute strongly to the outcome of mate searching and assessment.
Sensitivity to Visual Prosodic Cues in Signers and Nonsigners
ERIC Educational Resources Information Center
Brentari, Diane; Gonzalez, Carolina; Seidl, Amanda; Wilbur, Ronnie
2011-01-01
Three studies are presented in this paper that address how nonsigners perceive the visual prosodic cues in a sign language. In Study 1, adult American nonsigners and users of American Sign Language (ASL) were compared on their sensitivity to the visual cues in ASL Intonational Phrases. In Study 2, hearing, nonsigning American infants were tested…
Con-Text: Text Detection for Fine-grained Object Classification.
Karaoglu, Sezer; Tao, Ran; van Gemert, Jan C; Gevers, Theo
2017-05-24
This work focuses on fine-grained object classification using recognized scene text in natural images. While the state-of-the-art relies on visual cues only, this paper is the first work to propose combining textual and visual cues. Another novelty is the textual cue extraction. Unlike state-of-the-art text detection methods, we focus more on the background than on text regions. Once text regions are detected, they are further processed by two methods to perform text recognition: the ABBYY commercial OCR engine and a state-of-the-art character recognition algorithm. Then, to perform textual cue encoding, bigrams and trigrams are formed between the recognized characters by considering the proposed spatial pairwise constraints. Finally, extracted visual and textual cues are combined for fine-grained classification. The proposed method is validated on four publicly available datasets: ICDAR03, ICDAR13, Con-Text and Flickr-logo. We improve the state-of-the-art end-to-end character recognition by a large margin of 15% on ICDAR03. We show that textual cues are useful in addition to visual cues for fine-grained classification, and that textual cues are also useful for logo retrieval. Adding textual cues outperforms visual-only and textual-only approaches in fine-grained classification (70.7% vs. 60.3%) and logo retrieval (57.4% vs. 54.8%).
Location cue validity affects inhibition of return of visual processing.
Wright, R D; Richard, C M
2000-01-01
Inhibition-of-return is the process by which visual search for an object positioned among others is biased toward novel rather than previously inspected items. It is thought to occur automatically and to increase search efficiency. We examined this phenomenon by studying the facilitative and inhibitory effects of location cueing on target-detection response times in a search task. The results indicated that facilitation was a reflexive consequence of cueing whereas inhibition appeared to depend on cue informativeness. More specifically, the inhibition-of-return effect occurred only when the cue provided no information about the impending target's location. We suggest that the results are consistent with the notion of two levels of visual processing. The first involves rapid and reflexive operations that underlie the facilitative effects of location cueing on target detection. The second involves a rapid but goal-driven inhibition procedure that the perceiver can invoke if doing so will enhance visual search performance.
2018-02-12
… usability preference. Results under the second focus showed that the frequency with which participants expected status updates differed depending upon the … assistance requests for both navigational route and building selection depending on the type of exogenous visual cues displayed? 3) Is there a difference in response time to visual reports for both navigational route and building selection depending on the type of exogenous visual cues displayed? 4) …
Do you see what I hear? Vantage point preference and visual dominance in a time-space synaesthete.
Jarick, Michelle; Stewart, Mark T; Smilek, Daniel; Dixon, Michael J
2013-01-01
Time-space synaesthetes "see" time units organized in a spatial form. While the structure might be invariant for most synaesthetes, the perspective by which some view their calendar is somewhat flexible. One well-studied synaesthete L adopts different viewpoints for months seen vs. heard. Interestingly, L claims to prefer her auditory perspective, even though the month names are represented visually upside down. To verify this, we used a spatial-cueing task that included audiovisual month cues. These cues were either congruent with L's preferred "auditory" viewpoint (auditory-only and auditory + month inverted) or incongruent (upright visual-only and auditory + month upright). Our prediction was that L would show enhanced cueing effects (larger response time difference between valid and invalid targets) following the audiovisual congruent cues since both elicit the "preferred" auditory perspective. Also, when faced with conflicting cues, we predicted L would choose the preferred auditory perspective over the visual perspective. As we expected, L did show enhanced cueing effects following the audiovisual congruent cues that corresponded with her preferred auditory perspective, but that the visual perspective dominated when L was faced with both viewpoints simultaneously. The results are discussed with relation to the reification hypothesis of sequence space synaesthesia (Eagleman, 2009).
Maloney, Erin K; Cappella, Joseph N
2016-01-01
Visual depictions of vaping in electronic cigarette advertisements may serve as smoking cues to smokers and former smokers, increasing urge to smoke and smoking behavior, and decreasing self-efficacy, attitudes, and intentions to quit or abstain. After assessing baseline urge to smoke, 301 daily smokers, 272 intermittent smokers, and 311 former smokers were randomly assigned to view three e-cigarette commercials with vaping visuals (the cue condition) or without vaping visuals (the no-cue condition), or to answer unrelated media use questions (the no-ad condition). Participants then answered a posttest questionnaire assessing the outcome variables of interest. Relative to other conditions, in the cue condition, daily smokers reported greater urge to smoke a tobacco cigarette and a marginally significantly greater incidence of actually smoking a tobacco cigarette during the experiment. Former smokers in the cue condition reported lower intentions to abstain from smoking than former smokers in other conditions. No significant differences emerged among intermittent smokers across conditions. These data suggest that visual depictions of vaping in e-cigarette commercials increase daily smokers' urge to smoke cigarettes and may lead to more actual smoking behavior. For former smokers, these cues in advertising may undermine abstinence efforts. Intermittent smokers did not appear to be reactive to these cues. A lack of significant differences between participants in the no-cue and no-ad conditions compared to the cue condition suggests that visual depictions of e-cigarettes and vaping function as smoking cues, and cue reactivity is the mechanism through which these effects were obtained.
First-Pass Processing of Value Cues in the Ventral Visual Pathway.
Sasikumar, Dennis; Emeric, Erik; Stuphorn, Veit; Connor, Charles E
2018-02-19
Real-world value often depends on subtle, continuously variable visual cues specific to particular object categories, like the tailoring of a suit, the condition of an automobile, or the construction of a house. Here, we used microelectrode recording in behaving monkeys to test two possible mechanisms for category-specific value-cue processing: (1) previous findings suggest that prefrontal cortex (PFC) identifies object categories, and based on category identity, PFC could use top-down attentional modulation to enhance visual processing of category-specific value cues, providing signals to PFC for calculating value; and (2) a faster mechanism would be first-pass visual processing of category-specific value cues, immediately providing the necessary visual information to PFC. This, however, would require learned mechanisms for processing the appropriate cues in a given object category. To test these hypotheses, we trained monkeys to discriminate value in four letter-like stimulus categories. Each category had a different, continuously variable shape cue that signified value (liquid reward amount) as well as other cues that were irrelevant. Monkeys chose between stimuli of different reward values. Consistent with the first-pass hypothesis, we found early signals for category-specific value cues in area TE (the final stage in the monkey ventral visual pathway) beginning 81 ms after stimulus onset, essentially at the start of TE responses. Task-related activity emerged in lateral PFC approximately 40 ms later and consisted mainly of category-invariant value tuning. Our results show that, for familiar, behaviorally relevant object categories, high-level ventral pathway cortex can implement rapid, first-pass processing of category-specific value cues. Copyright © 2018 Elsevier Ltd. All rights reserved.
Sight or Scent: Lemur Sensory Reliance in Detecting Food Quality Varies with Feeding Ecology
Rushmore, Julie; Leonhardt, Sara D.; Drea, Christine M.
2012-01-01
Visual and olfactory cues provide important information to foragers, yet we know little about species differences in sensory reliance during food selection. In a series of experimental foraging studies, we examined the relative reliance on vision versus olfaction in three diurnal, primate species with diverse feeding ecologies, including folivorous Coquerel's sifakas (Propithecus coquereli), frugivorous ruffed lemurs (Varecia variegata spp), and generalist ring-tailed lemurs (Lemur catta). We used animals with known color-vision status and foods for which different maturation stages (and hence quality) produce distinct visual and olfactory cues (the latter determined chemically). We first showed that lemurs preferentially selected high-quality foods over low-quality foods when visual and olfactory cues were simultaneously available for both food types. Next, using a novel apparatus in a series of discrimination trials, we either manipulated food quality (while holding sensory cues constant) or manipulated sensory cues (while holding food quality constant). Among our study subjects that showed relatively strong preferences for high-quality foods, folivores required both sensory cues combined to reliably identify their preferred foods, whereas generalists could identify their preferred foods using either cue alone, and frugivores could identify their preferred foods using olfactory, but not visual, cues alone. Moreover, when only high-quality foods were available, folivores and generalists used visual rather than olfactory cues to select food, whereas frugivores used both cue types equally. Lastly, individuals in all three of the study species predominantly relied on sight when choosing between low-quality foods, but species differed in the strength of their sensory biases. Our results generally emphasize visual over olfactory reliance in foraging lemurs, but we suggest that the relative sensory reliance of animals may vary with their feeding ecology. PMID:22870229
Accessing long-term memory representations during visual change detection.
Beck, Melissa R; van Lamsweerde, Amanda E
2011-04-01
In visual change detection tasks, providing a cue to the change location concurrent with the test image (post-cue) can improve performance, suggesting that, without a cue, not all encoded representations are automatically accessed. Our studies examined the possibility that post-cues can encourage the retrieval of representations stored in long-term memory (LTM). Participants detected changes in images composed of familiar objects. Performance was better when the cue directed attention to the post-change object. Supporting the role of LTM in the cue effect, the effect was similar regardless of whether the cue was presented during the inter-stimulus interval, concurrent with the onset of the test image, or after the onset of the test image. Furthermore, the post-cue effect and LTM performance were similarly influenced by encoding time. These findings demonstrate that monitoring the visual world for changes does not automatically engage LTM retrieval.
Role of Self-Generated Odor Cues in Contextual Representation
Aikath, Devdeep; Weible, Aldis P; Rowland, David C; Kentros, Clifford G
2014-01-01
As first demonstrated in the patient H.M., the hippocampus is critically involved in forming episodic memories, the recall of “what” happened “where” and “when.” In rodents, the clearest functional correlate of hippocampal primary neurons is the place field: a cell fires predominantly when the animal is in a specific part of the environment, typically defined relative to the available visuospatial cues. However, rodents have relatively poor visual acuity. Furthermore, they are highly adept at navigating in total darkness. This raises the question of how other sensory modalities might contribute to a hippocampal representation of an environment. Rodents have a highly developed olfactory system, suggesting that cues such as odor trails may be important. To test this, we familiarized mice to a visually cued environment over a number of days while maintaining odor cues. During familiarization, self-generated odor cues unique to each animal were collected by re-using absorbent paperboard flooring from one session to the next. Visual and odor cues were then put in conflict by counter-rotating the recording arena and the flooring. Perhaps surprisingly, place fields seemed to follow the visual cue rotation exclusively, raising the question of whether olfactory cues have any influence at all on a hippocampal spatial representation. However, subsequent removal of the familiar, self-generated odor cues severely disrupted both long-term stability and rotation to visual cues in a novel environment. Our data suggest that odor cues, in the absence of additional rule learning, do not provide a discriminative spatial signal that anchors place fields. Such cues do, however, become integral to the context over time and exert a powerful influence on the stability of its hippocampal representation. © 2014 The Authors. Hippocampus Published by Wiley Periodicals, Inc. PMID:24753119
Phasic alertness cues modulate visual processing speed in healthy aging.
Haupt, Marleen; Sorg, Christian; Napiórkowski, Natan; Finke, Kathrin
2018-05-31
Warning signals temporarily increase the rate of visual information uptake in younger participants and thus optimize perception in critical situations. It is unclear whether such important preparatory processes are preserved in healthy aging. We parametrically assessed the effects of auditory alertness cues on visual processing speed and their time course using a whole-report paradigm based on the computational Theory of Visual Attention. We replicated prior findings of significant alerting benefits in younger adults. In conditions with short cue-target onset asynchronies, this effect was baseline-dependent. As younger participants with high baseline speed did not show a profit, an inverted U-shaped function of phasic alerting and visual processing speed was implied. Older adults also showed a significant cue-induced benefit. Bayesian analyses indicated that the cueing benefit on visual processing speed was comparably strong across age groups. Our results indicate that in aging individuals, comparable to younger ones, perception is active and increased expectancy of the appearance of a relevant stimulus can increase the rate of visual information uptake. Copyright © 2018 Elsevier Inc. All rights reserved.
Orientation Preferences and Motion Sickness Induced in a Virtual Reality Environment.
Chen, Wei; Chao, Jian-Gang; Zhang, Yan; Wang, Jin-Kun; Chen, Xue-Wen; Tan, Cheng
2017-10-01
Astronauts' orientation preferences tend to correlate with their susceptibility to space motion sickness (SMS). Orientation preferences appear universally, since variable sensory cue priorities are used between individuals. However, SMS susceptibility changes after proper training, while orientation preferences seem to be intrinsic proclivities. The present study was conducted to investigate whether orientation preferences change if susceptibility is reduced after repeated exposure to a virtual reality (VR) stimulus environment that induces SMS. A horizontal supine posture was chosen to create a sensory context similar to weightlessness, and two VR devices were used to produce a highly immersive virtual scene. Subjects were randomly allocated to an experimental group (trained through exposure to a provocative rotating virtual scene) and a control group (untrained). All subjects' orientation preferences were measured twice with the same interval, but the experimental group was trained three times during the interval, while the control group was not. Trained subjects were less susceptible to SMS, with symptom scores reduced by 40%. Compared with untrained subjects, trained subjects' orientation preferences were significantly different between pre- and posttraining assessments. Trained subjects depended less on visual cues, whereas few subjects demonstrated the opposite tendency. Results suggest that visual information may be inefficient and unreliable for body orientation and stabilization in a rotating visual scene, while reprioritizing preferences for different sensory cues was dynamic and asymmetric between individuals. The present findings should facilitate customization of efficient and proper training for astronauts with different sensory prioritization preferences and dynamic characteristics. Chen W, Chao J-G, Zhang Y, Wang J-K, Chen X-W, Tan C. Orientation preferences and motion sickness induced in a virtual reality environment. Aerosp Med Hum Perform. 2017; 88(10):903-910.
Naicker, Preshanta; Anoopkumar-Dukie, Shailendra; Grant, Gary D; Modenese, Luca; Kavanagh, Justin J
2017-02-01
Anticholinergic medications largely exert their effects through actions on the muscarinic receptor, which mediates the functions of acetylcholine in the peripheral and central nervous systems. In the central nervous system, acetylcholine plays an important role in the modulation of movement. This study investigated the effects of over-the-counter medications with varying degrees of central anticholinergic properties on fixation stability, saccadic response time and the dynamics associated with this eye movement during a temporally-cued visual reaction time task, in order to establish the significance of central cholinergic pathways in influencing eye movements during reaction time tasks. Twenty-two participants were recruited into the double-blind, placebo-controlled, four-way crossover investigation. Eye tracking technology recorded eye movements while participants reacted to visual stimuli following temporally informative and uninformative cues. The task was performed pre-ingestion as well as 0.5 and 2 h post-ingestion of promethazine hydrochloride (a strong centrally acting anticholinergic), hyoscine hydrobromide (a moderate centrally acting anticholinergic), hyoscine butylbromide (an anticholinergic devoid of central properties) and a placebo. Promethazine decreased fixation stability during the reaction time task. In addition, promethazine was the only drug to increase saccadic response time during temporally informative and uninformative cued trials, with effects on response time more pronounced following temporally informative cues. Promethazine also decreased saccadic amplitude and increased saccadic duration during the temporally-cued reaction time task. Collectively, the results of the study highlight the significant role that central cholinergic pathways play in the control of eye movements during tasks that involve stimulus identification and motor responses following temporal cues.
The Effects of Spatial Endogenous Pre-cueing across Eccentricities
Feng, Jing; Spence, Ian
2017-01-01
Frequently, we use expectations about likely locations of a target to guide the allocation of our attention. Despite the importance of this attentional process in everyday tasks, pre-cueing effects on attention, particularly endogenous pre-cueing effects, have been relatively little explored beyond an eccentricity of 20°. Given that the visual field has functional subdivisions and that attentional processes can differ significantly among the foveal, perifoveal, and more peripheral areas, how endogenous pre-cues that carry spatial information about targets influence our allocation of attention across a large visual field (especially in the more peripheral areas) remains unclear. We present two experiments examining how the expectation of the location of the target shapes the distribution of attention across eccentricities in the visual field. We measured participants’ ability to pick out a target among distractors in the visual field after the presentation of a highly valid cue indicating the size of the area in which the target was likely to occur, or the likely direction of the target (left or right side of the display). Our first experiment showed that participants had a higher target detection rate with faster responses, particularly at eccentricities of 20° and 30°. There was also a marginal advantage of pre-cueing effects when trials of the same size cue were blocked compared to when trials were mixed. Experiment 2 demonstrated a higher target detection rate when the target occurred in the cued direction. This pre-cueing effect was greater at larger eccentricities and with a longer cue-target interval. Our findings on the endogenous pre-cueing effects across a large visual area were summarized using a simple model to assist in conceptualizing the modifications of the distribution of attention over the visual field. 
We discuss our findings in light of cognitive penetration of perception, and highlight the importance of examining attentional processes across a large area of the visual field. PMID:28638353
Haptic Cues Used for Outdoor Wayfinding by Individuals with Visual Impairments
ERIC Educational Resources Information Center
Koutsoklenis, Athanasios; Papadopoulos, Konstantinos
2014-01-01
Introduction: The study presented here examines which haptic cues individuals with visual impairments use more frequently and determines which of these cues are deemed by these individuals to be the most important for way-finding in urban environments. It also investigates the ways in which these haptic cues are used by individuals with visual…
Modulation of Neuronal Responses by Exogenous Attention in Macaque Primary Visual Cortex.
Wang, Feng; Chen, Minggui; Yan, Yin; Zhaoping, Li; Li, Wu
2015-09-30
Visual perception is influenced by attention deployed voluntarily or triggered involuntarily by salient stimuli. Modulation of visual cortical processing by voluntary or endogenous attention has been extensively studied, but much less is known about how involuntary or exogenous attention affects responses of visual cortical neurons. Using implanted microelectrode arrays, we examined the effects of exogenous attention on neuronal responses in the primary visual cortex (V1) of awake monkeys. A bright annular cue was flashed either around the receptive fields of recorded neurons or in the opposite visual field to capture attention. A subsequent grating stimulus probed the cue-induced effects. In a fixation task, when the cue-to-probe stimulus onset asynchrony (SOA) was <240 ms, the cue induced a transient increase of neuronal responses to the probe at the cued location during 40-100 ms after the onset of neuronal responses to the probe. This facilitation diminished and disappeared after repeated presentations of the same cue but recurred for a new cue of a different color. In another task to detect the probe, relative shortening of monkey's reaction times for the validly cued probe depended on the SOA in a way similar to the cue-induced V1 facilitation, and the behavioral and physiological cueing effects remained after repeated practice. Flashing two cues simultaneously in the two opposite visual fields weakened or diminished both the physiological and behavioral cueing effects. Our findings indicate that exogenous attention significantly modulates V1 responses and that the modulation strength depends on both novelty and task relevance of the stimulus. Significance statement: Visual attention can be involuntarily captured by a sudden appearance of a conspicuous object, allowing rapid reactions to unexpected events of significance. The current study discovered a correlate of this effect in monkey primary visual cortex. 
An abrupt, salient flash enhanced neuronal responses to a subsequent visual probe stimulus at the same location and shortened the animal's reaction time. However, the enhancement of the neural responses diminished after repeated exposures to this flash if the animal was not required to react to the probe. Moreover, a second, simultaneous flash at another location weakened the neuronal and behavioral effects of the first. These findings revealed, beyond the observations reported so far, the effects of exogenous attention in the brain.
Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches
NASA Technical Reports Server (NTRS)
Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.
2005-01-01
While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features of two algorithms: the nonlinear "adaptive algorithm", and the "optimal algorithm", which incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm", is introduced that combines features from both approaches. The algorithm is formulated as an optimal control problem and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. In preliminary piloted testing, the optimal algorithm with a new otolith model produced improved motion cues. The nonlinear algorithm's vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly than the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real-time requirement without degrading the quality of the motion cues.
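The real-time Riccati update underlying the optimal-control formulation described above can be illustrated with a generic discrete-time recursion. This is a sketch under assumed plant and weighting matrices (a discretized double integrator with arbitrarily chosen Q and R), not the authors' perception model or simulator dynamics:

```python
import numpy as np

# Assumed 1-DOF plant: a discretized double integrator. The paper's
# integrated visual-vestibular perception model is not reproduced here.
dt = 0.01
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([[0.5 * dt**2],
              [dt]])
Q = np.diag([1.0, 0.1])   # state weighting (assumed)
R = np.array([[0.01]])    # control weighting (assumed)

def riccati_step(P, A, B, Q, R):
    """One backward iteration of the discrete-time matrix Riccati equation,
    returning the updated cost-to-go matrix and the time-varying gain."""
    BtP = B.T @ P
    K = np.linalg.solve(R + BtP @ B, BtP @ A)  # gain for this cycle
    P_next = Q + A.T @ P @ (A - B @ K)
    return P_next, K

# Updating once per control cycle yields a time-varying control law; here
# we simply iterate until the recursion settles to the steady-state gain.
P = Q.copy()
for _ in range(1000):
    P, K = riccati_step(P, A, B, Q, R)
```

At steady state the recursion reproduces the discrete algebraic Riccati solution; per the abstract, the neurocomputing approach is what makes each update cheap enough for real time.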
Audio-Visual and Meaningful Semantic Context Enhancements in Older and Younger Adults.
Smayda, Kirsten E; Van Engen, Kristin J; Maddox, W Todd; Chandrasekaran, Bharath
2016-01-01
Speech perception is critical to everyday life. Oftentimes noise can degrade a speech signal; however, because of the cues available to the listener, such as visual and semantic cues, noise rarely prevents conversations from continuing. The interaction of visual and semantic cues in aiding speech perception has been studied in young adults, but the extent to which these two cues interact for older adults has not been studied. To investigate the effect of visual and semantic cues on speech perception in older and younger adults, we recruited forty-five young adults (ages 18-35) and thirty-three older adults (ages 60-90) to participate in a speech perception task. Participants were presented with semantically meaningful and anomalous sentences in audio-only and audio-visual conditions. We hypothesized that young adults would outperform older adults across SNRs, modalities, and semantic contexts. In addition, we hypothesized that both young and older adults would receive a greater benefit from a semantically meaningful context in the audio-visual relative to audio-only modality. We predicted that young adults would receive greater visual benefit in semantically meaningful contexts relative to anomalous contexts. However, we predicted that older adults could receive a greater visual benefit in either semantically meaningful or anomalous contexts. Results suggested that in the most supportive context, that is, semantically meaningful sentences presented in the audiovisual modality, older adults performed similarly to young adults. In addition, both groups received the same amount of visual and meaningful benefit. Lastly, across groups, a semantically meaningful context provided more benefit in the audio-visual modality relative to the audio-only modality, and the presence of visual cues provided more benefit in semantically meaningful contexts relative to anomalous contexts. 
These results suggest that older adults can perceive speech as well as younger adults when both semantic and visual cues are available to the listener.
Visual selective attention in amnestic mild cognitive impairment.
McLaughlin, Paula M; Anderson, Nicole D; Rich, Jill B; Chertkow, Howard; Murtha, Susan J E
2014-11-01
Subtle deficits in visual selective attention have been found in amnestic mild cognitive impairment (aMCI). However, few studies have explored performance on visual search paradigms or the Simon task, which are known to be sensitive to disease severity in Alzheimer's patients. Furthermore, there is limited research investigating how deficiencies can be ameliorated with exogenous support (auditory cues). Sixteen individuals with aMCI and 14 control participants completed 3 experimental tasks that varied in demand and cue availability: visual search-alerting, visual search-orienting, and Simon task. Visual selective attention was influenced by aMCI, auditory cues, and task characteristics. Visual search abilities were relatively consistent across groups. The aMCI participants were impaired on the Simon task when working memory was required, but conflict resolution was similar to controls. Spatially informative orienting cues improved response times, whereas spatially neutral alerting cues did not influence performance. Finally, spatially informative auditory cues benefited the aMCI group more than controls in the visual search task, specifically at the largest array size where orienting demands were greatest. These findings suggest that individuals with aMCI have working memory deficits and subtle deficiencies in orienting attention and rely on exogenous information to guide attention.
Temporal Dynamics of Visual Attention Measured with Event-Related Potentials
Kashiwase, Yoshiyuki; Matsumiya, Kazumichi; Kuriki, Ichiro; Shioiri, Satoshi
2013-01-01
How attentional modulation of brain activity determines behavioral performance has been one of the most important issues in cognitive neuroscience. This issue has been addressed by comparing the temporal relationship between attentional modulations of neural activities and behavior. Our previous study measured the time course of attention with amplitude and phase coherence of the steady-state visual evoked potential (SSVEP) and found that the modulation latency of phase coherence, rather than that of amplitude, was consistent with the latency of behavioral performance. In this study, as a complementary report, we compared the time course of visual attention shift measured by event-related potentials (ERPs) with that measured by a target detection task. We developed a novel technique to compare ERPs with behavioral results and analyzed the EEG data from our previous study. Two sets of flickering stimuli at different frequencies were presented in the left and right visual hemifields, and a target or distracter pattern was presented randomly at various moments after an attention-cue presentation. The observers were asked to detect targets on the attended stimulus after the cue. We found that two ERP components, P300 and N2pc, were elicited by the target presented at the attended location. Time-course analyses revealed that attentional modulation of the P300 and N2pc amplitudes increased gradually until reaching a maximum and lasted at least 1.5 s after the cue onset, which is similar to the temporal dynamics of behavioral performance. However, attentional modulation of these ERP components started later than that of behavioral performance. Rather, the time course of attentional modulation of behavioral performance was more closely associated with that of the concurrently recorded SSVEPs. These results suggest that neural activities reflected not by the P300 or N2pc, but by the SSVEPs, are the source of attentional modulation of behavioral performance. PMID:23976966
Nackaerts, Evelien; Nieuwboer, Alice; Broeder, Sanne; Swinnen, Stephan; Vandenberghe, Wim; Heremans, Elke
2018-02-01
Recently, it was shown that patients with Parkinson's disease (PD) and freezing of gait (FOG) can also experience freezing episodes during handwriting and present writing problems outside these episodes. So far, the neural networks underlying increased handwriting problems in subjects with FOG are unclear. This study used dynamic causal modeling of fMRI data to investigate neural network dynamics underlying freezing-related handwriting problems and how these networks changed in response to visual cues. Twenty-seven non-freezers and ten freezers performed a pre-writing task with and without visual cues in the scanner with their right hand. The results showed that freezers and non-freezers were able to recruit networks involved in cued and uncued writing in a similar fashion. Whole group analysis also revealed a trend towards altered visuomotor integration in patients with FOG. Next, we controlled for differences in disease severity between both patient groups using a sensitivity analysis. For this, a subgroup of ten non-freezers matched for disease severity was selected by an independent researcher. This analysis further exposed significantly weaker coupling in mostly left hemispheric visuo-parietal, parietal-supplementary motor area, parietal-premotor, and premotor-M1 pathways in freezers compared to non-freezers, irrespective of cues. Correlation analyses revealed that these impairments in connectivity were related to writing amplitude and quality. Taken together, these findings show that freezers have reduced involvement of the supplementary motor area in the motor network, which explains the impaired writing amplitude regulation in this group. In addition, weaker supportive premotor connectivity may have contributed to micrographia in freezers, a pattern that was independent of cueing.
Studies of human dynamic space orientation using techniques of control theory
NASA Technical Reports Server (NTRS)
Young, L. R.
1974-01-01
Studies of human orientation and manual control in high order systems are summarized. Data cover techniques for measuring and altering orientation perception, role of non-visual motion sensors, particularly the vestibular and tactile sensors, use of motion cues in closed loop control of simple stable and unstable systems, and advanced computer controlled display systems.
Control of self-motion in dynamic fluids: fish do it differently from bees.
Scholtyssek, Christine; Dacke, Marie; Kröger, Ronald; Baird, Emily
2014-05-01
To detect and avoid collisions, animals need to perceive and control the distance and the speed with which they are moving relative to obstacles. This is especially challenging for swimming and flying animals that must control movement in a dynamic fluid without reference from physical contact to the ground. Flying animals primarily rely on optic flow to control flight speed and distance to obstacles. Here, we investigate whether swimming animals use self-motion control strategies similar to those of flying animals by directly comparing the trajectories of zebrafish (Danio rerio) and bumblebees (Bombus terrestris) moving through the same experimental tunnel. While the animals moved through the tunnel, black and white patterns produced (i) strong horizontal optic flow cues on both walls, (ii) weak horizontal optic flow cues on both walls and (iii) strong optic flow cues on one wall and weak optic flow cues on the other. We find that the mean speed of zebrafish does not depend on the amount of optic flow perceived from the walls. We further show that zebrafish, unlike bumblebees, move closer to the wall that provides the strongest visual feedback. This unexpected preference for strong optic flow cues may reflect an adaptation for self-motion control in water or in environments where visibility is limited.
Neural Mechanism for Mirrored Self-face Recognition.
Sugiura, Motoaki; Miyauchi, Carlos Makoto; Kotozaki, Yuka; Akimoto, Yoritaka; Nozawa, Takayuki; Yomogida, Yukihito; Hanawa, Sugiko; Yamamoto, Yuki; Sakuma, Atsushi; Nakagawa, Seishu; Kawashima, Ryuta
2015-09-01
Self-face recognition in the mirror is considered to involve multiple processes that integrate 2 perceptual cues: temporal contingency of the visual feedback on one's action (contingency cue) and matching with self-face representation in long-term memory (figurative cue). The aim of this study was to examine the neural bases of these processes by manipulating 2 perceptual cues using a "virtual mirror" system. This system allowed online dynamic presentations of real-time and delayed self- or other facial actions. Perception-level processes were identified as responses to only a single perceptual cue. The effect of the contingency cue was identified in the cuneus. The regions sensitive to the figurative cue were subdivided by the response to a static self-face, which was identified in the right temporal, parietal, and frontal regions, but not in the bilateral occipitoparietal regions. Semantic- or integration-level processes, including amodal self-representation and belief validation, which allow modality-independent self-recognition and the resolution of potential conflicts between perceptual cues, respectively, were identified in distinct regions in the right frontal and insular cortices. The results are supportive of the multicomponent notion of self-recognition and suggest a critical role for contingency detection in the co-emergence of self-recognition and empathy in infants.
Impaired movement timing in neurological disorders: rehabilitation and treatment strategies.
Hove, Michael J; Keller, Peter E
2015-03-01
Timing abnormalities have been reported in many neurological disorders, including Parkinson's disease (PD). In PD, motor-timing impairments are especially debilitating in gait. Despite impaired audiomotor synchronization, PD patients' gait improves when they walk with an auditory metronome or with music. Building on that research, we make recommendations for optimizing sensory cues to improve the efficacy of rhythmic cuing in gait rehabilitation. Adaptive rhythmic metronomes (that synchronize with the patient's walking) might be especially effective. In a recent study we showed that adaptive metronomes synchronized consistently with PD patients' footsteps without requiring attention; this improved stability and reinstated healthy gait dynamics. Other strategies could help optimize sensory cues for gait rehabilitation. Groove music strongly engages the motor system and induces movement; bass-frequency tones are associated with movement and provide strong timing cues. Thus, groove and bass-frequency pulses could deliver potent rhythmic cues. These strategies capitalize on the close neural connections between auditory and motor networks; and auditory cues are typically preferred. However, moving visual cues greatly improve visuomotor synchronization and could warrant examination in gait rehabilitation. Together, a treatment approach that employs groove, auditory, bass-frequency, and adaptive (GABA) cues could help optimize rhythmic sensory cues for treating motor and timing deficits.
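The adaptive metronome described above can be caricatured as a phase-correction rule in which each cue is shifted by a fraction of the last cue-to-step asynchrony. This is a hypothetical sketch: the correction gain, periods, and noise level are invented for illustration and are not taken from the study:

```python
import random

def simulate(alpha, n_steps=200, seed=1):
    """Asynchronies between a walker and a metronome that corrects its
    phase by `alpha` times the last asynchrony (alpha=0: fixed metronome)."""
    rng = random.Random(seed)
    cue_period = 1.0        # nominal inter-cue interval in seconds (assumed)
    cue, step = 0.0, 0.05   # walker starts 50 ms behind the first cue
    asynchronies = []
    for _ in range(n_steps):
        asyn = step - cue                    # positive: step lags the cue
        asynchronies.append(asyn)
        cue += cue_period + alpha * asyn     # phase correction toward walker
        step += 1.02 + rng.gauss(0.0, 0.02)  # walker is 2% slow, with jitter
    return asynchronies

adaptive = simulate(alpha=0.5)  # asynchrony stays bounded (tens of ms)
fixed = simulate(alpha=0.0)     # asynchrony drifts by ~20 ms per step
```

With correction the error recursion is asyn ← 0.5·asyn + 0.02 + noise, which settles near 40 ms; without it the 2% tempo mismatch accumulates without bound, which is why a fixed metronome loses a drifting walker.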
Preschoolers Benefit From Visually Salient Speech Cues
Holt, Rachael Frush
2015-01-01
Purpose: This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that vary in perceptual difficulty and task demands. The authors also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. Method: Twelve adults and 27 typically developing 3- and 4-year-old children completed 3 audiovisual (AV) speech integration tasks: matching, discrimination, and recognition. The authors compared AV benefit for visually salient and less visually salient speech discrimination contrasts and assessed the visual saliency of consonant confusions in auditory-only and AV word recognition. Results: Four-year-olds and adults demonstrated visual influence on all measures. Three-year-olds demonstrated visual influence on speech discrimination and recognition measures. All groups demonstrated greater AV benefit for the visually salient discrimination contrasts. AV recognition benefit in 4-year-olds and adults depended on the visual saliency of speech sounds. Conclusions: Preschoolers can demonstrate AV speech integration. Their AV benefit results from efficient use of visually salient speech cues. Four-year-olds, but not 3-year-olds, used visual phonological knowledge to take advantage of visually salient speech cues, suggesting possible developmental differences in the mechanisms of AV benefit. PMID:25322336
Getzmann, Stephan; Wascher, Edmund
2017-02-01
Speech understanding in the presence of competing sound is a major challenge, especially for older persons. In particular, conversational turn-taking usually results in switch costs, as indicated by declined speech perception after changes in the relevant target talker. Here, we investigated whether visual cues indicating the future position of a target talker may reduce the costs of switching in younger and older adults. We employed a speech perception task, in which sequences of short words were simultaneously presented by three talkers, and analysed behavioural measures and event-related potentials (ERPs). Informative cues resulted in increased performance after a spatial change in target talker compared to uninformative cues that did not indicate the future target position. The older participants especially benefited from knowing the future target position in advance, as indicated by reduced response times after informative cues. The ERP analysis revealed an overall reduced N2, and a reduced P3b to changes in the target talker location in older participants, suggesting reduced inhibitory control and context updating. On the other hand, a pronounced frontal late positive complex (f-LPC) to the informative cues indicated increased allocation of attentional resources to changes in target talker in the older group, in line with the decline-compensation hypothesis. Thus, knowing where to listen has the potential to compensate for age-related decline in attentional switching in a highly variable cocktail-party environment.
Nackaerts, Evelien; Nieuwboer, Alice; Broeder, Sanne; Smits-Engelsman, Bouwien C M; Swinnen, Stephan P; Vandenberghe, Wim; Heremans, Elke
2016-06-01
Handwriting is often impaired in Parkinson's disease (PD). Several studies have shown that writing in PD benefits from the use of cues. However, this has typically been studied with writing and drawing sizes that are not commonly used in daily life. This study examines the effect of visual cueing on a prewriting task at small amplitudes (≤1.0 cm) in PD patients and healthy controls to better understand how cueing acts on writing. A total of 15 PD patients and 15 healthy, age-matched controls performed a prewriting task at 0.6 cm and 1.0 cm in the presence and absence of visual cues (target lines). Writing amplitude, variability of amplitude, and speed were chosen as dependent variables, measured using a newly developed touch-sensitive tablet. Cueing led to immediate improvements in writing size, variability of writing size, and speed in both groups in the 1.0 cm condition. However, when writing at 0.6 cm with cues, a decrease in writing size was apparent in both groups (P < .001), and the difference in variability of amplitude between cued and uncued writing disappeared. In addition, the writing speed of controls decreased when the cue was present. Visual target lines spaced at 1.0 cm improved the writing of sequential loops, in contrast to lines spaced at 0.6 cm. These results illustrate that, unlike for gait, visual cueing for fine-motor tasks requires a differentiated approach that takes into account the additional accuracy constraints cues may impose. © The Author(s) 2015.
Robust lane detection and tracking using multiple visual cues under stochastic lane shape conditions
NASA Astrophysics Data System (ADS)
Huang, Zhi; Fan, Baozheng; Song, Xiaolin
2018-03-01
As one of the essential components of environment perception for an intelligent vehicle, lane detection is confronted with challenges including robustness against complicated disturbances and illumination, as well as adaptability to stochastic lane shapes. To overcome these issues, we proposed a robust lane detection method that applies a classification-generation-growth-based (CGG) operator to the detected lines, whereby the linear lane markings are identified by synergizing multiple visual cues with a priori knowledge and spatial-temporal information. According to the quality of the linear lane fitting, linear and linear-parabolic models are dynamically switched to describe the actual lane. A Kalman filter with adaptive noise covariance and region-of-interest (ROI) tracking are applied to improve robustness and efficiency. Experiments were conducted with images covering various challenging scenarios. The experimental results demonstrate the effectiveness of the presented method under complicated disturbances, illumination variation, and stochastic lane shapes.
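The tracking step in this abstract can be sketched in outline. Below is a minimal constant-velocity Kalman filter for a single lane parameter, with a crude innovation-based rule standing in for the adaptive noise covariance the authors mention. The state layout, noise values, and adaptation heuristic are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for one lane parameter
# (e.g., the lateral offset of a lane marking). State: [offset, rate].
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (unit time step)
H = np.array([[1.0, 0.0]])               # only the offset is measured
Q = np.eye(2) * 1e-3                     # process noise (assumed value)
x = np.array([0.0, 0.0])                 # initial state estimate
P = np.eye(2)                            # initial estimate covariance

def kalman_step(x, P, z, R):
    """One predict/update cycle; R is the (possibly adapted) measurement noise."""
    x = F @ x                            # predict state
    P = F @ P @ F.T + Q                  # predict covariance
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + (K @ y).ravel()              # update state
    P = (np.eye(2) - K @ H) @ P          # update covariance
    return x, P, float(abs(y[0]))

# Crude stand-in for "adaptive noise covariance": inflate the measurement
# noise after a large innovation (e.g., a misdetected line).
R = np.array([[0.1]])
for z in [0.00, 0.10, 0.22, 0.35, 2.50, 0.48]:  # 2.50 plays an outlier
    x, P, innov = kalman_step(x, P, np.array([z]), R)
    R = np.array([[0.1 * (1.0 + 10.0 * innov)]])

print(x[0])  # smoothed offset estimate, less sensitive to the outlier
```

Inflating R after large innovations down-weights implausible detections in the next update, which is one simple way a filter can stay robust to the disturbances the abstract describes.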
Janssen, Sabine; Bolte, Benjamin; Nonnekes, Jorik; Bittner, Marian; Bloem, Bastiaan R; Heida, Tjitske; Zhao, Yan; van Wezel, Richard J A
2017-01-01
External cueing is a potentially effective strategy to reduce freezing of gait (FOG) in persons with Parkinson's disease (PD). Case reports suggest that three-dimensional (3D) cues might be more effective in reducing FOG than two-dimensional cues. We investigated the usability of 3D augmented reality visual cues delivered by smart glasses, in comparison to conventional 3D transverse bars on the floor and auditory cueing via a metronome, in reducing FOG and improving gait parameters. In laboratory experiments, 25 persons with PD and FOG performed walking tasks while wearing custom-made smart glasses under five conditions, tested in the end-of-dose period. For two conditions, augmented visual cues (bars/staircase) were displayed via the smart glasses. The control conditions involved conventional 3D transverse bars on the floor, auditory cueing via a metronome, and no cueing. The number of FOG episodes and the percentage of time spent frozen were rated from video recordings. Stride length and its variability, cycle time and its variability, cadence, and speed were calculated from motion data collected with a motion capture suit equipped with 17 inertial measurement units. A total of 300 FOG episodes occurred in 19 out of 25 participants. There were no statistically significant differences in the number of FOG episodes or the percentage of time spent frozen across the five conditions. The conventional bars increased stride length, cycle time, and stride length variability, while decreasing cadence and speed. No effects were found for the other conditions. Participants preferred the metronome most and the augmented staircase least. They suggested improving the comfort, esthetics, usability, field of view, and stability of the smart glasses on the head, and reducing their weight and size. In their current form, augmented visual cues delivered by smart glasses are not beneficial for persons with PD and FOG.
This could be attributable to distraction, blockage of visual feedback, insufficient familiarization with the smart glasses, or display of the visual cues in the central rather than peripheral visual field. Future smart glasses are required to be more lightweight, comfortable, and user friendly to avoid distraction and blockage of sensory feedback, thus increasing usability.
ERIC Educational Resources Information Center
Campbell, Emily; Cuba, Melissa
2015-01-01
The goal of this action research is to increase student awareness of context clues, with an emphasis on student use of visual cues in making predictions. Visual cues in the classroom were used to differentiate according to the needs of student demographics (Herrera, Perez, & Escamilla, 2010). The purpose of this intervention was to improve…
The Effects of Various Fidelity Factors on Simulated Helicopter Hover
1981-01-01
[Scanned-report fragments: table-of-contents entries for VISUAL DISPLAY, AUDITORY CUES, and SHIP MOTION MODEL; citation fragments (… and DiCarlo, 1974; Parrish, Houck, and Martin, 1977) concerning the evaluation of visual, auditory, and motion cues for helicopter simulation; and a note that, as the tilt cue should be supplied subliminally, a forward/aft translation must be used to cue the onset of acceleration.]
Learning from Instructional Animations: How Does Prior Knowledge Mediate the Effect of Visual Cues?
ERIC Educational Resources Information Center
Arslan-Ari, I.
2018-01-01
The purpose of this study was to investigate the effects of cueing and prior knowledge on learning and mental effort of students studying an animation with narration. This study employed a 2 (no cueing vs. visual cueing) × 2 (low vs. high prior knowledge) between-subjects factorial design. The results revealed a significant interaction effect…
Anemonefishes rely on visual and chemical cues to correctly identify conspecifics
NASA Astrophysics Data System (ADS)
Johnston, Nicole K.; Dixson, Danielle L.
2017-09-01
Organisms rely on sensory cues to interpret their environment and make important life-history decisions. Accurate recognition is of particular importance in diverse reef environments. Most evidence on the use of sensory cues focuses on those used in predator avoidance or habitat recognition, with little information on their role in conspecific recognition. Yet conspecific recognition is essential for life-history decisions including settlement, mate choice, and dominance interactions. Using a sensory manipulated tank and a two-chamber choice flume, anemonefish conspecific response was measured in the presence and absence of chemical and/or visual cues. Experiments were then repeated in the presence or absence of two heterospecific species to evaluate whether a heterospecific fish altered the conspecific response. Anemonefishes responded to both the visual and chemical cues of conspecifics, but relied on the combination of the two cues to recognize conspecifics inside the sensory manipulated tank. These results contrast previous studies focusing on predator detection where anemonefishes were found to compensate for the loss of one sensory cue (chemical) by utilizing a second cue (visual). This lack of sensory compensation may impact the ability of anemonefishes to acclimate to changing reef environments in the future.
ten Oever, Sanne; Sack, Alexander T.; Wheat, Katherine L.; Bien, Nina; van Atteveldt, Nienke
2013-01-01
Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception. PMID:23805110
Laserlight cues for gait freezing in Parkinson's disease: an open-label study.
Donovan, S; Lim, C; Diaz, N; Browner, N; Rose, P; Sudarsky, L R; Tarsy, D; Fahn, S; Simon, D K
2011-05-01
Freezing of gait (FOG) and falls are major sources of disability for Parkinson's disease (PD) patients, and show limited responsiveness to medications. We assessed the efficacy of visual cues for overcoming FOG in an open-label study of 26 patients with PD. The change in the frequency of falls was a secondary outcome measure. Subjects underwent a 1-2 month baseline period of use of a cane or walker without visual cues, followed by 1 month using the same device with the laserlight visual cue. The laserlight visual cue was associated with a modest but significant mean reduction in FOG Questionnaire (FOGQ) scores of 1.25 ± 0.48 (p = 0.0152, two-tailed paired t-test), representing a 6.6% improvement compared to the mean baseline FOGQ scores of 18.8. The mean reduction in fall frequency was 39.5 ± 9.3% with the laserlight visual cue among subjects experiencing at least one fall during the baseline and subsequent study periods (p = 0.002; two-tailed one-sample t-test with hypothesized mean of 0). Though some individual subjects may have benefited, the overall mean performance on the timed gait test (TGT) across all subjects did not significantly change. However, among the 4 subjects who underwent repeated testing of the TGT, one showed a 50% mean improvement in TGT performance with the laserlight visual cue (p = 0.005; two-tailed paired t-test). This open-label study provides evidence for modest efficacy of a laserlight visual cue in overcoming FOG and reducing falls in PD patients. Copyright © 2010 Elsevier Ltd. All rights reserved.
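The 6.6% figure quoted in this abstract follows directly from the two reported means; a quick check (numbers taken from the abstract itself):

```python
baseline_fogq = 18.8   # mean baseline FOG Questionnaire score
reduction = 1.25       # mean reduction with the laserlight cue
improvement_pct = 100 * reduction / baseline_fogq
print(round(improvement_pct, 1))  # → 6.6, matching the reported 6.6% improvement
```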
Vernetti, Angélina; Smith, Tim J; Senju, Atsushi
2017-03-15
While numerous studies have demonstrated that infants and adults preferentially orient to social stimuli, it remains unclear what drives such preferential orienting. It has been suggested that the learned association between social cues and subsequent reward delivery might shape such social orienting. Using a novel, spontaneous indication of reinforcement learning (a gaze-contingent reward-learning task), we investigated whether children's and adults' orienting towards social and non-social visual cues can be elicited by the association between participants' visual attention and a rewarding outcome. Critically, we assessed whether the engaging nature of the social cues influences the process of reinforcement learning. Both children and adults learned to orient more often to the visual cues associated with reward delivery, demonstrating that cue-reward association reinforced visual orienting. More importantly, when the reward-predictive cue was social and engaging, both children and adults learned the cue-reward association faster and more efficiently than when the reward-predictive cue was social but non-engaging. These new findings indicate that socially engaging cues have a positive incentive value. This could possibly be because they usually coincide with positive outcomes in real life, which could partly drive the development of social orienting. © 2017 The Authors.
Contextual cueing impairment in patients with age-related macular degeneration.
Geringswald, Franziska; Herbik, Anne; Hoffmann, Michael B; Pollmann, Stefan
2013-09-12
Visual attention can be guided by past experience of regularities in our visual environment. In the contextual cueing paradigm, incidental learning of repeated distractor configurations speeds up search times compared to random search arrays. Concomitantly, fewer fixations and more direct scan paths indicate more efficient visual exploration in repeated search arrays. In previous work, we found that simulating a central scotoma in healthy observers eliminated this search facilitation. Here, we investigated contextual cueing in patients with age-related macular degeneration (AMD) who suffer from impaired foveal vision. AMD patients performed visual search using only their more severely impaired eye (n = 13) as well as under binocular viewing (n = 16). Normal-sighted controls developed a significant contextual cueing effect. In comparison, patients showed only a small nonsignificant advantage for repeated displays when searching with their worse eye. When searching binocularly, they profited from contextual cues, but still less than controls. Number of fixations and scan pattern ratios showed a comparable pattern as search times. Moreover, contextual cueing was significantly correlated with acuity in monocular search. Thus, foveal vision loss may lead to impaired guidance of attention by contextual memory cues.
Using multisensory cues to facilitate air traffic management.
Ngo, Mary K; Pierce, Russell S; Spence, Charles
2012-12-01
In the present study, we sought to investigate whether auditory and tactile cuing could be used to facilitate a complex, real-world air traffic management scenario. Auditory and tactile cuing provides an effective means of improving both the speed and accuracy of participants' performance in a variety of laboratory-based visual target detection and identification tasks. A low-fidelity air traffic simulation task was used in which participants monitored and controlled aircraft. The participants had to ensure that the aircraft landed or exited at the correct altitude, speed, and direction and that they maintained a safe separation from all other aircraft and boundaries. The performance measures recorded included en route time, handoff delay, and conflict resolution delay (the performance measure of interest). In a baseline condition, the aircraft in conflict was highlighted in red (visual cue), and in the experimental conditions, this standard visual cue was accompanied by a simultaneously presented auditory, vibrotactile, or audiotactile cue. Participants responded significantly more rapidly, but no less accurately, to conflicts when presented with an additional auditory or audiotactile cue than with either a vibrotactile or visual cue alone. Auditory and audiotactile cues have the potential for improving operator performance by reducing the time it takes to detect and respond to potential visual target events. These results have important implications for the design and use of multisensory cues in air traffic management.
The Vestibular System and Human Dynamic Space Orientation
NASA Technical Reports Server (NTRS)
Meiry, J. L.
1966-01-01
The motion sensors of the vestibular system are studied to determine their role in human dynamic space orientation and manual vehicle control. The investigation yielded control models for the sensors, descriptions of the subsystems for eye stabilization, and demonstrations of the effects of motion cues on closed-loop manual control. Experiments on the abilities of subjects to perceive a variety of linear motions provided data on the dynamic characteristics of the otoliths, the linear motion sensors. Angular acceleration threshold measurements supplemented knowledge of the semicircular canals, the angular motion sensors. Mathematical models are presented to describe the known control characteristics of the vestibular sensors, relating subjective perception of motion to objective motion of a vehicle. The vestibular system, the neck rotation proprioceptors, and the visual system form part of the control system which maintains the eye stationary relative to a target or a reference. The contribution of each of these systems was identified through experiments involving head and body rotations about a vertical axis. Compensatory eye movements in response to neck rotation were demonstrated and their dynamic characteristics described by a lag-lead model. The eye motions attributable to neck rotations and vestibular stimulation obey superposition when both systems are active. Human operator compensatory tracking is investigated in a simple vehicle orientation control system with stable and unstable controlled elements. Control of vehicle orientation to a reference is simulated in three modes: visual, motion, and combined. Motion cues sensed by the vestibular system and through tactile sensation enable the operator to generate more lead compensation than in fixed-base simulation with only visual input.
The tracking performance of the human in an unstable control system near the limits of controllability is shown to depend heavily upon the rate information provided by the vestibular sensors.
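The lag-lead model mentioned above for compensatory eye movements is conventionally written as a first-order transfer function. The form below is the standard textbook shape; the gain and time constants are left symbolic because the report's fitted values are not given here.

```latex
% Generic lag-lead transfer function relating neck-rotation input to
% compensatory eye movement (illustrative form only; K, \tau_1, \tau_2
% are not taken from the report)
G(s) = K\,\frac{\tau_1 s + 1}{\tau_2 s + 1}
```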
Visual speech segmentation: using facial cues to locate word boundaries in continuous speech
Mitchel, Aaron D.; Weiss, Daniel J.
2014-01-01
Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contributions of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech and that learners can extract this information from talking faces. Thus, we created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative for identifying word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative to word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition. PMID:25018577
Man-in-the-control-loop simulation of manipulators
NASA Technical Reports Server (NTRS)
Chang, J. L.; Lin, Tsung-Chieh; Yae, K. Harold
1989-01-01
A method to achieve man-in-the-control-loop simulation is presented. Emerging real-time dynamics simulation suggests a potential for creating an interactive design workstation with a human operator in the control loop. The recursive formulation for multibody dynamics simulation is studied to determine requirements for man-in-the-control-loop simulation. High-speed computer graphics techniques provide realistic visual cues for the simulator. Backhoe and robot arm simulations are implemented to demonstrate the capability of man-in-the-control-loop simulation.
Visual gate for brain-computer interfaces.
Dias, N S; Jacinto, L R; Mendes, P M; Correia, J H
2009-01-01
Brain-Computer Interfaces (BCI) based on event-related potentials (ERP) have been successfully developed for applications like virtual spellers and navigation systems. This study tests the use of visual stimuli unbalanced in the subject's field of view to simultaneously cue mental imagery tasks (left vs. right hand movement) and detect subject attention. The responses to unbalanced cues were compared with the responses to balanced cues in terms of classification accuracy. Subject-specific ERP spatial filters were calculated for optimal group separation. The unbalanced cues appear to enhance early ERPs related to cue visuospatial processing, which improved the classification of ERPs (error rates as low as 6%) in response to left vs. right cues soon (150-200 ms) after cue presentation. This work suggests that such a visual interface may be of interest in BCI applications as a gate mechanism for attention estimation and validation of control decisions.
Williams, Melonie; Hong, Sang W; Kang, Min-Suk; Carlisle, Nancy B; Woodman, Geoffrey F
2013-04-01
Recent research using change-detection tasks has shown that a directed-forgetting cue, indicating that a subset of the information stored in memory can be forgotten, significantly benefits the other information stored in visual working memory. How do these directed-forgetting cues aid the memory representations that are retained? We addressed this question in the present study by using a recall paradigm to measure the nature of the retained memory representations. Our results demonstrated that a directed-forgetting cue leads to higher-fidelity representations of the remaining items and a lower probability of dropping these representations from memory. Next, we showed that this is made possible by the to-be-forgotten item being expelled from visual working memory following the cue, allowing maintenance mechanisms to be focused on only the items that remain in visual working memory. Thus, the present findings show that cues to forget benefit the remaining information in visual working memory by fundamentally improving their quality relative to conditions in which just as many items are encoded but no cue is provided.
Buresch, Kendra C; Ulmer, Kimberly M; Cramer, Corinne; McAnulty, Sarah; Davison, William; Mäthger, Lydia M; Hanlon, Roger T
2015-10-01
Cuttlefish use multiple camouflage tactics to evade their predators. Two common tactics are background matching (resembling the background to hinder detection) and masquerade (resembling an uninteresting or inanimate object to impede detection or recognition). We investigated how the distance and orientation of visual stimuli affected the choice of these two camouflage tactics. In the current experiments, cuttlefish were presented with three visual cues: a 2D horizontal floor, a 2D vertical wall, and a 3D object. Each was placed at several distances: directly beneath the cuttlefish (within a circle whose diameter was one body length, BL); at 0 BL (i.e., directly beside, but not beneath, the cuttlefish); at 1 BL; and at 2 BL. Cuttlefish continued to respond to 3D visual cues from a greater distance than to a horizontal or vertical stimulus. It appears that background matching is chosen when visual cues are relevant only in the immediate benthic surroundings. However, for masquerade, objects located multiple body lengths away remained relevant for the choice of camouflage. © 2015 Marine Biological Laboratory.
Neural Representation of Motion-In-Depth in Area MT
Sanada, Takahisa M.
2014-01-01
Neural processing of 2D visual motion has been studied extensively, but relatively little is known about how visual cortical neurons represent visual motion trajectories that include a component toward or away from the observer (motion in depth). Psychophysical studies have demonstrated that humans perceive motion in depth based on both changes in binocular disparity over time (CD cue) and interocular velocity differences (IOVD cue). However, evidence for neurons that represent motion in depth has been limited, especially in primates, and it is unknown whether such neurons make use of CD or IOVD cues. We show that approximately one-half of neurons in macaque area MT are selective for the direction of motion in depth, and that this selectivity is driven primarily by IOVD cues, with a small contribution from the CD cue. Our results establish that area MT, a central hub of the primate visual motion processing system, contains a 3D representation of visual motion. PMID:25411481
Value associations of irrelevant stimuli modify rapid visual orienting.
Rutherford, Helena J V; O'Brien, Jennifer L; Raymond, Jane E
2010-08-01
In familiar environments, goal-directed visual behavior is often performed in the presence of objects with strong, but task-irrelevant, reward or punishment associations that are acquired through prior, unrelated experience. In a two-phase experiment, we asked whether such stimuli could affect speeded visual orienting in a classic visual orienting paradigm. First, participants learned to associate faces with monetary gains, losses, or no outcomes. These faces then served as brief, peripheral, uninformative cues in an explicitly unrewarded, unpunished, speeded, target localization task. Cues preceded targets by either 100 or 1,500 msec and appeared at either the same or a different location. Regardless of interval, reward-associated cues slowed responding at cued locations, as compared with equally familiar punishment-associated or no-value cues, and had no effect when targets were presented at uncued locations. This localized effect of reward-associated cues is consistent with adaptive models of inhibition of return and suggests rapid, low-level effects of motivation on visual processing.
Determinants of structural choice in visually situated sentence production.
Myachykov, Andriy; Garrod, Simon; Scheepers, Christoph
2012-11-01
Three experiments investigated how perceptual, structural, and lexical cues affect structural choices during English transitive sentence production. Participants described transitive events under combinations of visual cueing of attention (toward either agent or patient) and structural priming with and without semantic match between the notional verb in the prime and the target event. Speakers had a stronger preference for passive-voice sentences (1) when their attention was directed to the patient, (2) upon reading a passive-voice prime, and (3) when the verb in the prime matched the target event. The verb-match effect was the by-product of an interaction between visual cueing and verb match: the increase in the proportion of passive-voice responses with matching verbs was limited to the agent-cued condition. Persistence of visual cueing effects in the presence of both structural and lexical cues suggests a strong coupling between referent-directed visual attention and Subject assignment in a spoken sentence. Copyright © 2012 Elsevier B.V. All rights reserved.
Selective maintenance in visual working memory does not require sustained visual attention.
Hollingworth, Andrew; Maxcey-Richard, Ashleigh M
2013-08-01
In four experiments, we tested whether sustained visual attention is required for the selective maintenance of objects in visual working memory (VWM). Participants performed a color change-detection task. During the retention interval, a valid cue indicated the item that would be tested. Change-detection performance was higher in the valid-cue condition than in a neutral-cue control condition. To probe the role of visual attention in the cuing effect, on half of the trials, a difficult search task was inserted after the cue, precluding sustained attention on the cued item. The addition of the search task produced no observable decrement in the magnitude of the cuing effect. In a complementary test, search efficiency was not impaired by simultaneously prioritizing an object for retention in VWM. The results demonstrate that selective maintenance in VWM can be dissociated from the locus of visual attention.
Interaction between visual and chemical cues in a Liolaemus lizard: a multimodal approach.
Vicente, Natalin S; Halloy, Monique
2017-12-01
Multimodal communication involves the use of signals and cues across two or more sensory modalities. The genus Liolaemus (Iguania: Liolaemidae) offers a great potential for studies on the ecology and evolution of multimodal communication, including visual and chemical signals. In this study, we analyzed the response of male and female Liolaemus pacha to chemical, visual and combined (multimodal) stimuli. Using cue-isolation tests, we registered the number of tongue flicks and headbob displays from exposure to signals in each modality. Number of tongue flicks was greater when a chemical stimulus was presented alone than in the presence of visual or multimodal stimuli. In contrast, headbob displays were fewer in number with visual and chemical stimuli alone, but significantly higher in number when combined. Female signallers triggered significantly more tongue flicks than male signallers, suggesting that chemical cues are involved in sexual recognition. We did not find an inhibition between chemical and visual cues. On the contrary, we observed a dominance of the chemical modality, because when presented with visual stimuli, lizards also responded with more tongue flicks than headbob displays. The total response produced by multimodal stimuli was similar to that of the chemical stimuli alone, possibly suggesting non-redundancy. We discuss whether the visual component of a multimodal signal could attract attention at a distance, increasing the effectiveness of transmission and reception of the information in chemical cues. Copyright © 2017 Elsevier GmbH. All rights reserved.
ERIC Educational Resources Information Center
Wang, Pei-Yu; Huang, Chung-Kai
2015-01-01
This study aims to explore the impact of learner grade, visual cueing, and control design on children's reading achievement of audio e-books with tablet computers. This research was a three-way factorial design where the first factor was learner grade (grade four and six), the second factor was e-book visual cueing (word-based, line-based, and…
The Role of Global and Local Visual Information during Gaze-Cued Orienting of Attention.
Munsters, Nicolette M; van den Boomen, Carlijn; Hooge, Ignace T C; Kemner, Chantal
2016-01-01
Gaze direction is an important social communication tool. Global and local visual information are known to play specific roles in processing socially relevant information from a face. The current study investigated whether global visual information has a primary role during gaze-cued orienting of attention and, as such, may influence quality of interaction. Adults performed a gaze-cueing task in which a centrally presented face cued (valid or invalid) the location of a peripheral target through a gaze shift. We measured brain activity (electroencephalography) towards the cue and target and behavioral responses (manual and saccadic reaction times) towards the target. The faces contained global (i.e. lower spatial frequencies), local (i.e. higher spatial frequencies), or a selection of both global and local (i.e. mid-band spatial frequencies) visual information. We found a gaze cue-validity effect (i.e. valid versus invalid), but no interaction effects with spatial frequency content. Furthermore, behavioral responses towards the target were slower in all cue conditions when lower spatial frequencies were not present in the gaze cue. These results suggest that whereas gaze-cued orienting of attention can be driven by both global and local visual information, global visual information determines the speed of behavioral responses towards other entities appearing in the surroundings of gaze cue stimuli.
Automaticity of phasic alertness: Evidence for a three-component model of visual cueing.
Lin, Zhicheng; Lu, Zhong-Lin
2016-10-01
The automaticity of phasic alertness is investigated using the attention network test. Results show that the cueing effect from the alerting cue (double cue) is strongly enhanced by the task relevance of visual cues, as determined by the informativeness of the orienting cue (single cue) that is being mixed (80 % vs. 50 % valid in predicting where the target will appear). Counterintuitively, the cueing effect from the alerting cue can be negatively affected by its visibility, such that masking the cue from awareness can reveal a cueing effect that is otherwise absent when the cue is visible. Evidently, then, top-down influences (in the form of contextual relevance and cue awareness) can have opposite influences on the cueing effect from the alerting cue. These findings lead us to the view that a visual cue can engage three components of attention (orienting, alerting, and inhibition) to determine the behavioral cueing effect. We propose that phasic alertness, particularly in the form of specific response readiness, is regulated by both internal, top-down expectation and external, bottom-up stimulus properties. In contrast to some existing views, we advance the perspective that phasic alertness is strongly tied to temporal orienting, attentional capture, and spatial orienting. Finally, we discuss how translating attention research to clinical applications would benefit from an improved ability to measure attention. To this end, controlling the degree of intraindividual variability in the attentional components and improving the precision of the measurement tools may prove vital.
Neural substrates of smoking cue reactivity: A meta-analysis of fMRI studies
Engelmann, Jeffrey M.; Versace, Francesco; Robinson, Jason D.; Minnix, Jennifer A.; Lam, Cho Y.; Cui, Yong; Brown, Victoria L.; Cinciripini, Paul M.
2012-01-01
Reactivity to smoking-related cues may be an important factor that precipitates relapse in smokers who are trying to quit. The neurobiology of smoking cue reactivity has been investigated in several fMRI studies. We combined the results of these studies using activation likelihood estimation, a meta-analytic technique for fMRI data. Results of the meta-analysis indicated that smoking cues reliably evoke larger fMRI responses than neutral cues in the extended visual system, precuneus, posterior cingulate gyrus, anterior cingulate gyrus, dorsal and medial prefrontal cortex, insula, and dorsal striatum. Subtraction meta-analyses revealed that parts of the extended visual system and dorsal prefrontal cortex are more reliably responsive to smoking cues in deprived smokers than in non-deprived smokers, and that short-duration cues presented in event-related designs produce larger responses in the extended visual system than long-duration cues presented in blocked designs. The areas that were found to be responsive to smoking cues agree with theories of the neurobiology of cue reactivity, with two exceptions. First, there was a reliable cue reactivity effect in the precuneus, which is not typically considered a brain region important to addiction. Second, we found no significant effect in the nucleus accumbens, an area that plays a critical role in addiction, but this effect may have been due to technical difficulties associated with measuring fMRI data in that region. The results of this meta-analysis suggest that the extended visual system should receive more attention in future studies of smoking cue reactivity. PMID:22206965
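The combination step at the heart of activation likelihood estimation can be sketched compactly: each study contributes a modeled-activation (MA) map (its foci smoothed with a Gaussian kernel), and the ALE score at each voxel is the probabilistic union of the per-study MA values. The sketch below shows only that union step, under the assumption of independent studies; the hypothetical `ma_maps` input stands in for the real pipeline, which also handles kernel smoothing and permutation-based thresholding:

```python
import numpy as np

def ale_map(ma_maps):
    """Voxel-wise ALE score: 1 - prod(1 - MA_i), i.e. the probability
    that at least one study activates a voxel, treating the per-study
    modeled-activation (MA) maps as independent."""
    result = np.ones_like(ma_maps[0], dtype=float)
    for ma in ma_maps:
        result *= 1.0 - ma
    return 1.0 - result

# Toy two-voxel example: two studies, each with MA 0.5 at voxel 0.
scores = ale_map([np.array([0.5, 0.0]), np.array([0.5, 0.2])])
```

With both studies contributing 0.5 at the first voxel, the union gives 1 − 0.5 × 0.5 = 0.75 there, illustrating how converging foci across studies raise the ALE score.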
Booth, Ashley J; Elliott, Mark T
2015-01-01
The ease of synchronizing movements to a rhythmic cue is dependent on the modality of the cue presentation: timing accuracy is much higher when synchronizing with discrete auditory rhythms than an equivalent visual stimulus presented through flashes. However, timing accuracy is improved if the visual cue presents spatial as well as temporal information (e.g., a dot following an oscillatory trajectory). Similarly, when synchronizing with an auditory target metronome in the presence of a second visual distracting metronome, the distraction is stronger when the visual cue contains spatial-temporal information rather than temporal only. The present study investigates individuals' ability to synchronize movements to a temporal-spatial visual cue in the presence of same-modality temporal-spatial distractors. Moreover, we investigated how increasing the number of distractor stimuli impacted on maintaining synchrony with the target cue. Participants made oscillatory vertical arm movements in time with a vertically oscillating white target dot centered on a large projection screen. The target dot was surrounded by 2, 8, or 14 distractor dots, which had an identical trajectory to the target but at a phase lead or lag of 0, 100, or 200 ms. We found participants' timing performance was only affected in the phase-lead conditions and when there were large numbers of distractors present (8 and 14). This asymmetry suggests participants still rely on salient events in the stimulus trajectory to synchronize movements. Subsequently, distractions occurring in the window of attention surrounding those events have the maximum impact on timing performance.
Ambert-Dahan, Emmanuèle; Giraud, Anne-Lise; Mecheri, Halima; Sterkers, Olivier; Mosnier, Isabelle; Samson, Séverine
2017-10-01
Visual processing has been extensively explored in deaf subjects in the context of verbal communication, through the assessment of speech reading and sign language abilities. However, little is known about visual emotional processing in adult progressive deafness, and after cochlear implantation. The goal of our study was thus to assess the influence of acquired post-lingual progressive deafness on the recognition of dynamic facial emotions that were selected to express canonical fear, happiness, sadness, and anger. A total of 23 adults with post-lingual deafness, separated into two groups (those assessed before cochlear implantation (CI), n = 10, and those assessed after, n = 13), and 13 normal hearing (NH) individuals participated in the current study. Participants were asked to rate the expression of the four cardinal emotions, and to evaluate both their emotional valence (unpleasant-pleasant) and arousal potential (relaxing-stimulating). We found that patients with deafness were impaired in the recognition of sad faces, and that patients equipped with a CI were additionally impaired in the recognition of happiness and fear (but not anger). Relative to controls, all patients with deafness showed a deficit in perceiving arousal expressed in faces, while valence ratings remained unaffected. The current results show for the first time that acquired and progressive deafness is associated with a reduction of emotional sensitivity to visual stimuli. This negative impact of progressive deafness on the perception of dynamic facial cues for emotion recognition contrasts with the proficiency of deaf subjects with and without CIs in processing visual speech cues (Rouger et al., 2007; Strelnikov et al., 2009; Lazard and Giraud, 2017). Altogether these results suggest there to be a trade-off between the processing of linguistic and non-linguistic visual stimuli. Copyright © 2017. Published by Elsevier B.V.
Determining the Effectiveness of Visual Input Enhancement across Multiple Linguistic Cues
ERIC Educational Resources Information Center
Comeaux, Ian; McDonald, Janet L.
2018-01-01
Visual input enhancement (VIE) increases the salience of grammatical forms, potentially facilitating acquisition through attention mechanisms. Native English speakers were exposed to an artificial language containing four linguistic cues (verb agreement, case marking, animacy, word order), with morphological cues either unmarked, marked in the…
NASA Technical Reports Server (NTRS)
Parrish, Russell V.; Williams, Steven P.
1993-01-01
To provide stereopsis, binocular helmet-mounted display (HMD) systems must trade some of the total field of view available from their two monocular fields to obtain a partial overlap region. The visual field then provides a mixture of cues, with monocular regions on both peripheries and a binoptic (the same image in both eyes) region or, if lateral disparity is introduced to produce two images, a stereoscopic region in the overlapped center. This paper reports on in-simulator assessment of the trade-offs arising from the mixture of color cueing and monocular, binoptic, and stereoscopic cueing information in peripheral monitoring displays as utilized in HMD systems. The accompanying effect of stereoscopic cueing in the tracking information in the central region of the display is also assessed. The pilot's task for the study was to fly at a prescribed height above an undulating pathway in the sky while monitoring a dynamic bar chart displayed in the periphery of their field of view. Control of the simulated rotorcraft was limited to the longitudinal and vertical degrees of freedom to ensure the lateral separation of the viewing conditions of the concurrent tasks.
Lalys, Florent; Riffaud, Laurent; Bouget, David; Jannin, Pierre
2012-01-01
The need for a better integration of the new generation of Computer-Assisted-Surgical (CAS) systems has been recently emphasized. One necessity to achieve this objective is to retrieve data from the Operating Room (OR) with different sensors, then to derive models from these data. Recently, the use of videos from cameras in the OR has demonstrated its efficiency. In this paper, we propose a framework to assist in the development of systems for the automatic recognition of high level surgical tasks using microscope videos analysis. We validated its use on cataract procedures. The idea is to combine state-of-the-art computer vision techniques with time series analysis. The first step of the framework consisted of the definition of several visual cues for extracting semantic information, therefore characterizing each frame of the video. Five image-based classifiers were therefore implemented. A step of pupil segmentation was also applied for dedicated visual cue detection. Time series classification algorithms were then applied to model time-varying data. Dynamic Time Warping (DTW) and Hidden Markov Models (HMM) were tested. This association combined the advantages of all methods for better understanding of the problem. The framework was finally validated through various studies. Six binary visual cues were chosen along with 12 phases to detect, obtaining accuracies of 94%. PMID:22203700
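Of the two time-series models named in this abstract, Dynamic Time Warping is the more compact to illustrate. The following is the generic textbook DTW recurrence on 1-D sequences, not the authors' implementation (which aligns multi-dimensional visual-cue vectors extracted from video frames):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two 1-D sequences:
    each cell holds the minimum cumulative alignment cost, allowing
    one-to-many matches so sequences of different lengths compare."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of insertion, deletion, or match from the neighbors.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A repeated sample is absorbed at zero cost by the warping path.
d = dtw_distance([1, 2, 3], [1, 2, 2, 3])  # → 0.0
```

The warping path's tolerance to local stretching is what makes DTW attractive for surgical-phase recognition, where the same phase unfolds at different speeds across procedures.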
Preschoolers Benefit from Visually Salient Speech Cues
ERIC Educational Resources Information Center
Lalonde, Kaylah; Holt, Rachael Frush
2015-01-01
Purpose: This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that vary in perceptual difficulty and task demands. They also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. Method: Twelve adults and 27 typically developing 3-…
Despite extensive genetic, biochemical and structural studies on Escherichia coli RNA polymerase (RNAP), little is known about its location and distribution in response to environmental changes. To visualize the RNAP by fluorescence microscopy in E. coli under different physiological conditions, we constructed a functional rpoC-gfp gene fusion on the chromosome.
ERIC Educational Resources Information Center
Foley, Nicholas C.; Grossberg, Stephen; Mingolla, Ennio
2012-01-01
How are spatial and object attention coordinated to achieve rapid object learning and recognition during eye movement search? How do prefrontal priming and parietal spatial mechanisms interact to determine the reaction time costs of intra-object attention shifts, inter-object attention shifts, and shifts between visible objects and covertly cued…
The First Time Ever I Saw Your Feet: Inversion Effect in Newborns' Sensitivity to Biological Motion
ERIC Educational Resources Information Center
Bardi, Lara; Regolin, Lucia; Simion, Francesca
2014-01-01
Inversion effect in biological motion perception has been recently attributed to an innate sensitivity of the visual system to the gravity-dependent dynamic of the motion. However, the specific cues that determine the inversion effect in naïve subjects were never investigated. In the present study, we have assessed the contribution of the local…
Smelling directions: Olfaction modulates ambiguous visual motion perception
Kuang, Shenbing; Zhang, Tao
2014-01-01
The sense of smell is often accompanied by simultaneous visual sensations. Previous studies have documented enhanced olfactory performance with concurrent presence of congruent color- or shape-related visual cues, and facilitated visual object perception when congruent smells are simultaneously present. These visual object-olfaction interactions suggest the existence of couplings between the olfactory pathway and the visual ventral processing stream. However, it is not known if olfaction can modulate visual motion perception, a function that is related to the visual dorsal stream. We tested this possibility by examining the influence of olfactory cues on the perception of ambiguous visual motion signals. We showed that, after introducing an association between motion directions and olfactory cues, olfaction could indeed bias ambiguous visual motion perceptions. Our result that olfaction modulates visual motion processing adds to the current knowledge of cross-modal interactions and implies a possible functional linkage between the olfactory system and the visual dorsal pathway. PMID:25052162
Age-related changes in human posture control: Sensory organization tests
NASA Technical Reports Server (NTRS)
Peterka, R. J.; Black, F. O.
1989-01-01
Postural control was measured in 214 human subjects ranging in age from 7 to 81 years. Sensory organization tests measured the magnitude of anterior-posterior body sway during six 21 s trials in which visual and somatosensory orientation cues were altered (by rotating the visual surround and support surface in proportion to the subject's sway) or vision eliminated (eyes closed) in various combinations. No age-related increase in postural sway was found for subjects standing on a fixed support surface with eyes open or closed. However, age-related increases in sway were found for conditions involving altered visual or somatosensory cues. Subjects older than about 55 years showed the largest sway increases. Subjects younger than about 15 years were also sensitive to alteration of sensory cues. On average, the older subjects were more affected by altered visual cues whereas younger subjects had more difficulty with altered somatosensory cues.
Kuo, Bo-Cheng; Lin, Szu-Hung; Yeh, Yei-Yu
2018-06-01
Visual short-term memory (VSTM) allows individuals to briefly maintain information over time for guiding behaviours. Because the contents of VSTM can be neutral or emotional, top-down influence in VSTM may vary with the affective codes of maintained representations. Here we investigated the neural mechanisms underlying the functional interplay of top-down attention with affective codes in VSTM using functional magnetic resonance imaging. Participants were instructed to remember both threatening and neutral objects in a cued VSTM task. Retrospective cues (retro-cues) were presented to direct attention to the hemifield of a threatening object (i.e., cue-to-threat) or a neutral object (i.e., cue-to-neutral) during VSTM maintenance. We showed stronger activity in the ventral occipitotemporal cortex and amygdala for attending threatening relative to neutral representations. Using multivoxel pattern analysis, we found better classification performance for cue-to-threat versus cue-to-neutral objects in early visual areas and in the amygdala. Importantly, retro-cues modulated the strength of functional connectivity between the frontoparietal and early visual areas. Activity in the frontoparietal areas became strongly correlated with the activity in V3a-V4 coding the threatening representations instructed to be relevant for the task. Together, these findings provide the first demonstration of top-down modulation of activation patterns in early visual areas and functional connectivity between the frontoparietal network and early visual areas for regulating threatening representations during VSTM maintenance. Copyright © 2018 Elsevier Ltd. All rights reserved.
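The multivoxel pattern analysis reported above classifies conditions from distributed activity patterns rather than from mean activation levels. As an illustration of the idea only (actual fMRI MVPA typically uses cross-validated linear classifiers such as SVM or LDA on preprocessed voxel patterns), here is a nearest-centroid toy version with hypothetical condition labels:

```python
import numpy as np

def nearest_centroid_predict(train_X, train_y, test_X):
    """Assign each test pattern to the class whose mean training
    pattern (centroid) is nearest in Euclidean distance; a minimal
    stand-in for MVPA condition classification."""
    classes = sorted(set(train_y))
    centroids = {
        c: np.mean([x for x, y in zip(train_X, train_y) if y == c], axis=0)
        for c in classes
    }
    return [min(classes, key=lambda c: np.linalg.norm(x - centroids[c]))
            for x in test_X]

# Toy 2-voxel "patterns" for two cue conditions.
X = [np.array([0.0, 0.0]), np.array([0.1, 0.0]),
     np.array([1.0, 1.0]), np.array([0.9, 1.1])]
y = ['cue-to-neutral', 'cue-to-neutral', 'cue-to-threat', 'cue-to-threat']
preds = nearest_centroid_predict(X, y, [np.array([0.05, 0.05])])
```

Above-chance prediction of this kind is what licenses the paper's claim that cue condition is decodable from early visual areas and the amygdala.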
Lee, I-Jui; Chen, Chien-Hsu; Lin, Ling-Yi
2016-01-01
Autism spectrum disorders (ASD) are characterized by a reduced ability to understand the emotional expressions on other people's faces. Increasing evidence indicates that children with ASD might not recognize or understand crucial nonverbal behaviors, which likely causes them to ignore nonverbal gestures and social cues, like facial expressions, that usually aid social interaction. In this study, we used software technology to create half-static and dynamic video materials to teach adolescents with ASD how to become aware of six basic facial expressions observed in real situations. This intervention system provides a half-way point via a dynamic video of a specific element within a static-surrounding frame to strengthen the ability of the six adolescents with ASD to focus their attention on the relevant dynamic facial expressions and ignore irrelevant ones. Using a multiple baseline design across participants, we found that the intervention learning system provided a simple yet effective way for adolescents with ASD to focus their attention on the nonverbal facial cues; the intervention helped them better understand and judge others' facial emotions. We conclude that the limited amount of information with structured and specific close-up visual social cues helped the participants improve judgments of the emotional meaning of the facial expressions of others.
The role of visuohaptic experience in visually perceived depth.
Ho, Yun-Xian; Serwe, Sascha; Trommershäuser, Julia; Maloney, Laurence T; Landy, Michael S
2009-06-01
Berkeley suggested that "touch educates vision," that is, haptic input may be used to calibrate visual cues to improve visual estimation of properties of the world. Here, we test whether haptic input may be used to "miseducate" vision, causing observers to rely more heavily on misleading visual cues. Human subjects compared the depth of two cylindrical bumps illuminated by light sources located at different positions relative to the surface. As in previous work using judgments of surface roughness, we find that observers judge bumps to have greater depth when the light source is located eccentric to the surface normal (i.e., when shadows are more salient). Following several sessions of visual judgments of depth, subjects then underwent visuohaptic training in which haptic feedback was artificially correlated with the "pseudocue" of shadow size and artificially decorrelated with disparity and texture. Although there were large individual differences, almost all observers demonstrated integration of haptic cues during visuohaptic training. For some observers, subsequent visual judgments of bump depth were unaffected by the training. However, for 5 of 12 observers, training significantly increased the weight given to pseudocues, causing subsequent visual estimates of shape to be less veridical. We conclude that haptic information can be used to reweight visual cues, putting more weight on misleading pseudocues, even when more trustworthy visual cues are available in the scene.
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Houck, J. A.; Martin, D. J., Jr.
1977-01-01
Combined visual, motion, and aural cues for a helicopter engaged in visually conducted slalom runs at low altitude were studied. The evaluation of the visual and aural cues was subjective, whereas the motion cues were evaluated both subjectively and objectively. Subjective and objective results coincided in the area of control activity. Generally, less control activity is present under motion conditions than under fixed-base conditions, a fact attributed subjectively to the feeling of realistic limitations of a machine (helicopter) given by the addition of motion cues. The objective data also revealed that the slalom runs were conducted at significantly higher altitudes under motion conditions than under fixed-base conditions.
Social Vision: Functional Forecasting and the Integration of Compound Social Cues
Adams, Reginald B.; Kveraga, Kestutis
2017-01-01
For decades the study of social perception was largely compartmentalized by type of social cue: race, gender, emotion, eye gaze, body language, facial expression etc. This was partly due to good scientific practice (e.g., controlling for extraneous variability), and partly due to assumptions that each type of social cue was functionally distinct from others. Herein, we present a functional forecast approach to understanding compound social cue processing that emphasizes the importance of shared social affordances across various cues (see too Adams, Franklin, Nelson, & Stevenson, 2010; Adams & Nelson, 2011; Weisbuch & Adams, 2012). We review the traditional theories of emotion and face processing that argued for dissociable and noninteracting pathways (e.g., for specific emotional expressions, gaze, identity cues), as well as more recent evidence for combinatorial processing of social cues. We argue here that early, and presumably reflexive, visual integration of such cues is necessary for adaptive behavioral responding to others. In support of this claim, we review contemporary work that reveals a flexible visual system, one that readily incorporates meaningful contextual influences in even nonsocial visual processing, thereby establishing the functional and neuroanatomical bases necessary for compound social cue integration. Finally, we explicate three likely mechanisms driving such integration. Together, this work implicates a role for cognitive penetrability in visual perceptual abilities that have often been (and in some cases still are) ascribed to direct encapsulated perceptual processes. PMID:29242738
Park, Seong-Beom; Lee, Inah
2016-08-01
Place cells in the hippocampus fire at specific positions in space, and distal cues in the environment play critical roles in determining the spatial firing patterns of place cells. Many studies have shown that place fields are influenced by distal cues in foraging animals. However, it is largely unknown whether distal-cue-dependent changes in place fields appear in different ways in a memory task if distal cues bear direct significance to achieving goals. We investigated this possibility in this study. Rats were trained to choose different spatial positions in a radial arm in association with distal cue configurations formed by visual cue sets attached to movable curtains around the apparatus. The animals were initially trained to associate readily discernible distal cue configurations (0° vs. 80° angular separation between distal cue sets) with different food-well positions and then later experienced ambiguous cue configurations (14° and 66°) intermixed with the original cue configurations. Rats showed no difficulty in transferring the associated memory formed for the original cue configurations when similar cue configurations were presented. Place field positions remained at the same locations across different cue configurations, whereas stability and coherence of spatial firing patterns were significantly disrupted when ambiguous cue configurations were introduced. Furthermore, the spatial representation was extended backward and skewed more negatively at the population level when processing ambiguous cue configurations, compared with when processing the original cue configurations only. This effect was more salient for large cue-separation conditions than for small cue-separation conditions. No significant rate remapping was observed across distal cue configurations. 
These findings suggest that place cells in the hippocampus dynamically change their detailed firing characteristics in response to a modified cue environment and that some of the firing properties previously reported in a foraging task might carry more functional weight than others when tested in a distal-cue-dependent memory task. © 2016 Wiley Periodicals, Inc.
EEG Dynamics of a Go/Nogo Task in Children with ADHD
Baijot, Simon; Zarka, David; Leroy, Axelle; Slama, Hichem; Colin, Cecile; Deconinck, Nicolas; Dan, Bernard; Cheron, Guy
2017-01-01
Background: Studies investigating event-related potentials (ERPs) evoked in a Cue-Go/NoGo paradigm have shown lower frontal N1, N2 and central P3 in children with attention-deficit/hyperactivity disorder (ADHD) compared to typically developing children (TDC). However, the electroencephalographic (EEG) dynamics underlying these ERPs remain largely unexplored in ADHD. Methods: We investigated the event-related spectral perturbation and inter-trial coherence (ITC) linked to the ERPs triggered by visual Cue-Go/NoGo stimuli, in 14 children (7 ADHD and 7 TDC) aged 8 to 12 years. Results: Compared to TDC, the EEG dynamics of children with ADHD showed a lower theta-alpha ITC concomitant to lower occipito-parietal P1-N2 and frontal N1-P2 potentials in response to Cue, Go and Nogo stimuli; an upper alpha power preceding lower central Go-P3; a lower theta-alpha power and ITC coupled to a lower frontal Nogo-N3; and a lower low-gamma power over the whole scalp at 300 ms after Go and Nogo stimuli. Conclusion: These findings suggest an impaired ability in children with ADHD to preserve the phase of brain oscillations associated with stimulus processing. This physiological trait might serve as a target for therapeutic intervention or be used to monitor their effects. PMID:29261133
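The inter-trial coherence measure central to this study has a compact definition: at a given time-frequency point, it is the magnitude of the mean unit phasor of the per-trial phases, ranging from ~0 (random phase across trials) to 1 (perfect phase-locking). A minimal sketch of that computation:

```python
import numpy as np

def inter_trial_coherence(phases):
    """ITC: magnitude of the average unit phasor across trials.
    `phases` holds one phase value (radians) per trial at a fixed
    time-frequency point; 1 = perfectly phase-locked, ~0 = random."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

# Identical phases across trials give ITC = 1; opposed phases cancel.
locked = inter_trial_coherence([0.3] * 10)    # → 1.0
```

In practice the per-trial phases come from a time-frequency decomposition (e.g. wavelets) of each epoch; lower ITC in the ADHD group then means the stimulus resets oscillatory phase less consistently from trial to trial.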
Olivier, Agnès; Faugloire, Elise; Lejeune, Laure; Biau, Sophie; Isableu, Brice
2017-01-01
Maintaining equilibrium while riding a horse is a challenging task that involves complex sensorimotor processes. We evaluated the relative contribution of visual information (static or dynamic) to horseback riders' postural stability (measured from the variability of segment position in space) and the coordination modes they adopted to regulate balance according to their level of expertise. Riders' perceptual typologies and their possible relation to postural stability were also assessed. Our main assumption was that the contribution of visual information to postural control would be reduced among expert riders in favor of vestibular and somesthetic reliance. Twelve Professional riders and 13 Club riders rode an equestrian simulator at a gallop under four visual conditions: (1) with the projection of a simulated scene reproducing what a rider sees in the real context of a ride in an outdoor arena, (2) under stroboscopic illumination, preventing access to dynamic visual cues, (3) in normal lighting but without the projected scene (i.e., without the visual consequences of displacement) and (4) with no visual cues. The variability of the position of the head, upper trunk and lower trunk was measured along the anteroposterior (AP), mediolateral (ML), and vertical (V) axes. We computed discrete relative phase to assess the coordination between pairs of segments in the anteroposterior axis. Visual field dependence-independence was evaluated using the Rod and Frame Test (RFT). The results showed that the Professional riders exhibited greater overall postural stability than the Club riders, revealed mainly in the AP axis. In particular, head variability was lower in the Professional riders than in the Club riders in visually altered conditions, suggesting a greater ability to use vestibular and somesthetic information according to task constraints with expertise. 
In accordance with this result, RFT perceptual scores revealed that the Professional riders were less dependent on the visual field than were the Club riders. Finally, the Professional riders exhibited specific coordination modes that, unlike the Club riders, departed from pure in-phase and anti-phase patterns and depended on visual conditions. The present findings provide evidence of major differences in the sensorimotor processes contributing to postural control with expertise in horseback riding. PMID:28194100
Janssen, Sabine; Bolte, Benjamin; Nonnekes, Jorik; Bittner, Marian; Bloem, Bastiaan R.; Heida, Tjitske; Zhao, Yan; van Wezel, Richard J. A.
2017-01-01
External cueing is a potentially effective strategy to reduce freezing of gait (FOG) in persons with Parkinson’s disease (PD). Case reports suggest that three-dimensional (3D) cues might be more effective in reducing FOG than two-dimensional cues. We investigate the usability of 3D augmented reality visual cues delivered by smart glasses in comparison to conventional 3D transverse bars on the floor and auditory cueing via a metronome in reducing FOG and improving gait parameters. In laboratory experiments, 25 persons with PD and FOG performed walking tasks while wearing custom-made smart glasses under five conditions, at the end-of-dose. For two conditions, augmented visual cues (bars/staircase) were displayed via the smart glasses. The control conditions involved conventional 3D transverse bars on the floor, auditory cueing via a metronome, and no cueing. The number of FOG episodes and percentage of time spent on FOG were rated from video recordings. The stride length and its variability, cycle time and its variability, cadence, and speed were calculated from motion data collected with a motion capture suit equipped with 17 inertial measurement units. A total of 300 FOG episodes occurred in 19 out of 25 participants. There were no statistically significant differences in number of FOG episodes and percentage of time spent on FOG across the five conditions. The conventional bars increased stride length, cycle time, and stride length variability, while decreasing cadence and speed. No effects for the other conditions were found. Participants preferred the metronome most, and the augmented staircase least. They suggested to improve the comfort, esthetics, usability, field of view, and stability of the smart glasses on the head and to reduce their weight and size. In their current form, augmented visual cues delivered by smart glasses are not beneficial for persons with PD and FOG. 
This could be attributable to distraction, blockage of visual feedback, insufficient familiarization with the smart glasses, or display of the visual cues in the central rather than peripheral visual field. Future smart glasses are required to be more lightweight, comfortable, and user friendly to avoid distraction and blockage of sensory feedback, thus increasing usability. PMID:28659862
Visual Depth from Motion Parallax and Eye Pursuit
Stroyan, Keith; Nawrot, Mark
2012-01-01
A translating observer viewing a rigid environment experiences “motion parallax,” the relative movement upon the observer’s retina of variously positioned objects in the scene. This retinal movement of images provides a cue to the relative depth of objects in the environment; however, retinal motion alone cannot mathematically determine the relative depth of the objects. Visual perception of depth from lateral observer translation uses both retinal image motion and eye movement. In Nawrot and Stroyan (2009, Vision Res. 49, p. 1969) we showed mathematically that the ratio of the rate of retinal motion to the rate of smooth eye pursuit determines depth relative to the fixation point in central vision. We also reported psychophysical experiments indicating that this ratio is the important quantity for perception. Here we analyze the motion/pursuit cue for the more general, and more complicated, case in which objects are distributed across the horizontal viewing plane beyond central vision. We show how the mathematical motion/pursuit cue varies with different points across the plane and with time as an observer translates. If the time-varying retinal motion and smooth eye pursuit are the only signals used for this visual process, it is important to know what it is mathematically possible to derive about depth and structure. Our analysis shows that the motion/pursuit ratio determines an excellent description of depth and structure in these broader stimulus conditions, provides a detailed quantitative hypothesis of these visual processes for the perception of depth and structure from motion parallax, and provides a computational foundation for analyzing the dynamic geometry of future experiments. PMID:21695531
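The motion/pursuit ratio summarized in this abstract can be illustrated numerically. The following is a minimal sketch under simplified small-angle, lateral-translation geometry; the variable names and values are ours, chosen for illustration, not the authors' notation:

```python
# Small-angle sketch of the motion/pursuit ratio for depth.
# Geometry (our simplification): an observer translates laterally at
# speed T while fixating a point at distance f; a second object lies
# directly beyond the fixation point at distance f + d.

def motion_pursuit_depth(T, f, d):
    pursuit_rate = T / f                 # d(alpha)/dt: eye rotation to hold fixation
    object_rate = T / (f + d)            # angular rate of the farther object
    retinal_rate = pursuit_rate - object_rate   # d(theta)/dt: motion on the retina
    ratio = retinal_rate / pursuit_rate         # the motion/pursuit ratio
    # The ratio alone recovers relative depth d/f, without knowing T:
    recovered_relative_depth = ratio / (1.0 - ratio)
    return ratio, recovered_relative_depth

ratio, rel_depth = motion_pursuit_depth(T=0.1, f=1.0, d=0.5)
print(round(ratio, 3))      # 0.333  (= d / (f + d))
print(round(rel_depth, 3))  # 0.5    (= d / f, independent of T)
```

Note that the translation speed T cancels out of the ratio, which is the sense in which the ratio of retinal motion to pursuit rate, rather than either signal alone, determines relative depth.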
Are face representations depth cue invariant?
Dehmoobadsharifabadi, Armita; Farivar, Reza
2016-06-01
The visual system can process three-dimensional depth cues defining the surfaces of objects, but it is unclear whether such information contributes to complex object recognition, including face recognition. The processing of different depth cues involves both the dorsal and ventral visual pathways. We investigated whether facial surfaces defined by individual depth cues resulted in meaningful face representations, that is, representations that maintain the relationship between the population of faces as defined in a multidimensional face space. We measured face identity aftereffects for facial surfaces defined by individual depth cues (Experiments 1 and 2) and tested whether the aftereffect transfers across depth cues (Experiments 3 and 4). Facial surfaces and their morphs to the average face were defined purely by one of shading, texture, motion, or binocular disparity. We obtained identification thresholds for matched (matched identity between adapting and test stimuli), non-matched (non-matched identity between adapting and test stimuli), and no-adaptation (showing only the test stimuli) conditions for each cue and across different depth cues. We found robust face identity aftereffects in both sets of experiments. Our results suggest that depth cues do contribute to forming meaningful face representations that are depth-cue invariant. Depth-cue invariance would require integration of information across different areas and different pathways for object recognition, and this in turn has important implications for cortical models of visual object recognition.
Are multiple visual short-term memory storages necessary to explain the retro-cue effect?
Makovski, Tal
2012-06-01
Recent research has shown that change detection performance is enhanced when, during the retention interval, attention is cued to the location of the upcoming test item. This retro-cue advantage has led some researchers to suggest that visual short-term memory (VSTM) is divided into a durable, limited-capacity storage and a more fragile, high-capacity storage. Consequently, performance is poor on the no-cue trials because fragile VSTM is overwritten by the test display and only durable VSTM is accessible under these conditions. In contrast, performance is improved in the retro-cue condition because attention keeps fragile VSTM accessible. The aim of the present study was to test the assumptions underlying this two-storage account. Participants were asked to encode an array of colors for a change detection task involving no-cue and retro-cue trials. A retro-cue advantage was found even when the cue was presented after a visual (Experiment 1) or a central (Experiment 2) interference. Furthermore, the magnitude of the interference was comparable between the no-cue and retro-cue trials. These data undermine the main empirical support for the two-storage account and suggest that the presence of a retro-cue benefit cannot be used to differentiate between different VSTM storages.
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Liston, Dorion B.
2011-01-01
Visual motion and other visual cues are used by tower controllers to provide important support for their control tasks at and near airports. These cues are particularly important for anticipated separation. Some of them, which we call visual features, have been identified from structured interviews and discussions with 24 active air traffic controllers or supervisors. The visual information that these features provide has been analyzed with respect to possible ways it could be presented at a remote tower that does not allow a direct view of the airport. Two types of remote towers are possible. One could be based on a plan-view, map-like computer-generated display of the airport and its immediate surroundings. An alternative would present a composite perspective view of the airport and its surroundings, possibly provided by an array of radially mounted cameras positioned at the airport in lieu of a tower. An initial, more detailed analysis of one of the specific landing cues identified by the controllers, landing deceleration, is provided as a basis for evaluating how controllers might detect and use it. Understanding other such cues will help identify the information that may be degraded or lost in a remote or virtual tower not located at the airport. Some initial suggestions for how the lost visual information might be presented in displays are mentioned. Many of the cues considered involve visual motion, though some important static cues are also discussed.
Helicopter flight simulation motion platform requirements
NASA Astrophysics Data System (ADS)
Schroeder, Jeffery Allyn
Flight simulators attempt to reproduce in-flight pilot-vehicle behavior on the ground. This reproduction is challenging for helicopter simulators, as the pilot is often inextricably dependent on external cues for pilot-vehicle stabilization. One important simulator cue is platform motion; however, its required fidelity is unknown. To determine the required motion fidelity, several unique experiments were performed. A large-displacement motion platform was used that allowed pilots to fly tasks with matched motion and visual cues. Then, the platform motion was modified to give cues varying from full motion to no motion. Several key results were found. First, lateral and vertical translational platform cues had significant effects on fidelity. Their presence improved performance and reduced pilot workload. Second, yaw and roll rotational platform cues were not as important as the translational platform cues. In particular, the yaw rotational motion platform cue did not appear at all useful in improving performance or reducing workload. Third, when the lateral translational platform cue was combined with visual yaw rotational cues, pilots believed the platform was rotating when it was not. Thus, simulator systems can be made more efficient by the proper combination of platform and visual cues. Fourth, motion fidelity specifications were revised so that they now provide simulator users with a better prediction of motion fidelity based upon the frequency responses of their motion control laws. Fifth, vertical platform motion affected pilot estimates of steady-state altitude during altitude repositionings. This refutes the view that pilots estimate altitude and altitude rate in simulation solely from visual cues. Finally, the combined results led to a general method for configuring helicopter motion systems and for developing simulator tasks that more closely represent actual flight. The overall results can serve as a guide to future simulator designers and to today's operators.
Scale Changes Provide an Alternative Cue For the Discrimination of Heading, But Not Object Motion
Calabro, Finnegan J.; Vaina, Lucia Maria
2016-01-01
Background: Understanding the dynamics of our surrounding environments is a task usually attributed to the detection of motion based on changes in luminance across space. Yet a number of other cues, both dynamic and static, have been shown to provide useful information about how we are moving and how objects around us move. One such cue, based on changes in spatial frequency, or scale, over time has been shown to be useful in conveying motion in depth even in the absence of a coherent, motion-defined flow field (optic flow). Material/Methods: Sixteen right-handed healthy observers (ages 18–28) participated in the behavioral experiments described in this study. Using analytical behavioral methods, we investigate the functional specificity of this cue by measuring the ability of observers to perform tasks of heading (direction of self-motion) and 3D trajectory discrimination on the basis of scale changes and optic flow. Results: Statistical analyses of performance on the test experiments in comparison to the control experiments suggest that while scale changes may be involved in the detection of heading, they are not correctly integrated with translational motion and thus do not provide a correct discrimination of 3D object trajectories. Conclusions: These results have important implications for the type of visually guided navigation that can be done by an observer blind to optic flow. Scale change is an important alternative cue for self-motion. PMID:27231114
Heuristics of Reasoning and Analogy in Children's Visual Perspective Taking.
ERIC Educational Resources Information Center
Yaniv, Ilan; Shatz, Marilyn
1990-01-01
In three experiments, children of three through six years of age were generally better able to reproduce a perceiver's perspective if a visual cue in the perceiver's line of sight was salient. Children had greater difficulty when the task hinged on attending to configural cues. Availability of distinctive cues affixed to objects facilitated…
Enhancing Interactive Tutorial Effectiveness through Visual Cueing
ERIC Educational Resources Information Center
Jamet, Eric; Fernandez, Jonathan
2016-01-01
The present study investigated whether learning how to use a web service with an interactive tutorial can be enhanced by cueing. We expected the attentional guidance provided by visual cues to facilitate the selection of information in static screen displays that corresponded to spoken explanations. Unlike most previous studies in this area, we…
The Influence of Alertness on Spatial and Nonspatial Components of Visual Attention
ERIC Educational Resources Information Center
Matthias, Ellen; Bublak, Peter; Muller, Hermann J.; Schneider, Werner X.; Krummenacher, Joseph; Finke, Kathrin
2010-01-01
Three experiments investigated whether spatial and nonspatial components of visual attention would be influenced by changes in (healthy, young) subjects' level of alertness and whether such effects on separable components would occur independently of each other. The experiments used a no-cue/alerting-cue design with varying cue-target stimulus…
Multisensory Cues Capture Spatial Attention Regardless of Perceptual Load
ERIC Educational Resources Information Center
Santangelo, Valerio; Spence, Charles
2007-01-01
We compared the ability of auditory, visual, and audiovisual (bimodal) exogenous cues to capture visuo-spatial attention under conditions of no load versus high perceptual load. Participants had to discriminate the elevation (up vs. down) of visual targets preceded by either unimodal or bimodal cues under conditions of high perceptual load (in…
ERIC Educational Resources Information Center
Altvater-Mackensen, Nicole; Grossmann, Tobias
2015-01-01
Infants' language exposure largely involves face-to-face interactions providing acoustic and visual speech cues but also social cues that might foster language learning. Yet, both audiovisual speech information and social information have so far received little attention in research on infants' early language development. Using a preferential…
Goebl, Werner
2015-01-01
Nonverbal auditory and visual communication helps ensemble musicians predict each other’s intentions and coordinate their actions. When structural characteristics of the music make predicting co-performers’ intentions difficult (e.g., following long pauses or during ritardandi), reliance on incoming auditory and visual signals may change. This study tested whether attention to visual cues during piano–piano and piano–violin duet performance increases in such situations. Pianists performed the secondo part to three duets, synchronizing with recordings of violinists or pianists playing the primo parts. Secondos’ access to incoming audio and visual signals and to their own auditory feedback was manipulated. Synchronization was most successful when primo audio was available, deteriorating when primo audio was removed and only cues from primo visual signals were available. Visual cues were used effectively following long pauses in the music, however, even in the absence of primo audio. Synchronization was unaffected by the removal of secondos’ own auditory feedback. Differences were observed in how successfully piano–piano and piano–violin duos synchronized, but these effects of instrument pairing were not consistent across pieces. Pianists’ success at synchronizing with violinists and other pianists is likely moderated by piece characteristics and individual differences in the clarity of cueing gestures used. PMID:26279610
NASA Astrophysics Data System (ADS)
Ramirez, Joshua; Mann, Virginia
2005-08-01
Both dyslexics and auditory neuropathy (AN) subjects show inferior consonant-vowel (CV) perception in noise, relative to controls. To better understand these impairments, natural acoustic speech stimuli that were masked in speech-shaped noise at various intensities were presented to dyslexic, AN, and control subjects either in isolation or accompanied by visual articulatory cues. AN subjects were expected to benefit from the pairing of visual articulatory cues and auditory CV stimuli, provided that their speech perception impairment reflects a relatively peripheral auditory disorder. Assuming that dyslexia reflects a general impairment of speech processing rather than a disorder of audition, dyslexics were not expected to similarly benefit from an introduction of visual articulatory cues. The results revealed an increased effect of noise masking on the perception of isolated acoustic stimuli by both dyslexic and AN subjects. More importantly, dyslexics showed less effective use of visual articulatory cues in identifying masked speech stimuli and lower visual baseline performance relative to AN subjects and controls. Last, a significant positive correlation was found between reading ability and the ameliorating effect of visual articulatory cues on speech perception in noise. These results suggest that some reading impairments may stem from a central deficit of speech processing.
Sato, Naoyuki; Yamaguchi, Yoko
2009-06-01
The human cognitive map is known to be hierarchically organized, consisting of a set of perceptually clustered landmarks. Patient studies have demonstrated that these cognitive maps are maintained by the hippocampus, but the underlying neural dynamics are still poorly understood. The authors have shown that the neural dynamic "theta phase precession" observed in the rodent hippocampus may be capable of forming hierarchical cognitive maps in humans. In the model, a visual input sequence consisting of object and scene features in the central and peripheral visual fields, respectively, results in the formation of a hierarchical cognitive map for object-place associations. Surprisingly, it is possible for such a complex memory structure to be formed in a few seconds. In this paper, we evaluate the memory retrieval of object-place associations in the hierarchical network formed by theta phase precession. The results show that multiple object-place associations can be retrieved with the initial cue of a scene input. Importantly, according to the wide-to-narrow unidirectional connections among scene units, the spatial area for object-place retrieval can be controlled by the spatial area of the initial cue input. These results indicate that hierarchical cognitive maps have computational advantages for spatial-area-selective retrieval of multiple object-place associations. Theta phase precession dynamics is suggested as a fundamental neural mechanism of the human cognitive map.
Adaptability and specificity of inhibition processes in distractor-induced blindness.
Winther, Gesche N; Niedeggen, Michael
2017-12-01
In a rapid serial visual presentation task, inhibition processes cumulatively impair processing of a target possessing distractor properties. This phenomenon, known as distractor-induced blindness, has thus far only been elicited using dynamic visual features, such as motion and orientation changes. In three ERP experiments, we used a visual object feature, color, to test for the adaptability and specificity of the effect. In Experiment I, participants responded to a color change (target) in the periphery whose onset was signaled by a central cue. Presentation of irrelevant color changes prior to the cue (distractors) led to reduced target detection, accompanied by a frontal ERP negativity that increased with increasing number of distractors, similar to the effects previously found for dynamic targets. This suggests that distractor-induced blindness is adaptable to color features. In Experiment II, the target consisted of coherent motion contrasting the color distractors. Correlates of distractor-induced blindness were found neither in the behavioral nor in the ERP data, indicating a feature specificity of the process. Experiment III confirmed the strict distinction between congruent and incongruent distractors: a single color distractor was embedded in a stream of motion distractors, with the target consisting of coherent motion. While behavioral performance was affected by the distractors, the color distractor did not elicit a frontal negativity. The experiments show that distractor-induced blindness is also triggered by visual stimuli predominantly processed in the ventral stream. The strict specificity of the central inhibition process also applies to these stimulus features. © 2017 Society for Psychophysiological Research.
Self-organizing neural integration of pose-motion features for human action recognition
Parisi, German I.; Weber, Cornelius; Wermter, Stefan
2015-01-01
The visual recognition of complex, articulated human movements is fundamental for a wide range of artificial systems oriented toward human-robot communication, action classification, and action-driven perception. These challenging tasks may generally involve the processing of a huge amount of visual information and learning-based mechanisms for generalizing a set of training actions and classifying new samples. To operate in natural environments, a crucial property is the efficient and robust recognition of actions, also under noisy conditions caused by, for instance, systematic sensor errors and temporarily occluded persons. Studies of the mammalian visual system and its remarkable ability to process biological motion information suggest separate neural pathways for the distinct processing of pose and motion features at multiple levels and the subsequent integration of these visual cues for action perception. We present a neurobiologically motivated approach to achieve noise-tolerant action recognition in real time. Our model consists of self-organizing Growing When Required (GWR) networks that obtain progressively generalized representations of sensory inputs and learn inherent spatio-temporal dependencies. During training, the GWR networks dynamically change their topological structure to better match the input space. We first extract pose and motion features from video sequences and then cluster actions in terms of prototypical pose-motion trajectories. Multi-cue trajectories from matching action frames are subsequently combined to provide action dynamics in the joint feature space. Reported experiments show that our approach outperforms previous results on a dataset of full-body actions captured with a depth sensor, and ranks among the best results for a public benchmark of domestic daily actions. PMID:26106323
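The GWR mechanism this abstract builds on, a network that inserts nodes when no existing node matches the input well, can be sketched compactly. The following is a minimal, illustrative single-network sketch in the spirit of Marsland et al.'s Growing When Required algorithm; all thresholds, learning rates, and the habituation rule are simplified assumptions of ours, not the authors' implementation:

```python
import numpy as np

class GWR:
    """Toy Growing-When-Required network (illustrative parameters)."""

    def __init__(self, dim, a_T=0.8, h_T=0.1, eps_b=0.2, eps_n=0.05, tau=3.0):
        self.w = [np.random.rand(dim), np.random.rand(dim)]  # node weight vectors
        self.h = [1.0, 1.0]            # per-node firing counters (habituation)
        self.edges = set()             # topological links between nodes
        self.a_T, self.h_T = a_T, h_T  # activity / firing thresholds
        self.eps_b, self.eps_n, self.tau = eps_b, eps_n, tau

    def step(self, x):
        d = [np.linalg.norm(x - wi) for wi in self.w]
        b, s = np.argsort(d)[:2]                 # best and second-best units
        self.edges.add(frozenset((b, s)))
        a = np.exp(-d[b])                        # winner activity, in (0, 1]
        if a < self.a_T and self.h[b] < self.h_T:
            # Grow when required: the winner matches poorly and is already
            # habituated, so insert a new node between winner and input.
            self.w.append((self.w[b] + x) / 2.0)
            self.h.append(1.0)
            r = len(self.w) - 1
            self.edges -= {frozenset((b, s))}
            self.edges |= {frozenset((b, r)), frozenset((r, s))}
        else:
            # Otherwise adapt the winner (and its neighbor) toward the input.
            self.w[b] += self.eps_b * self.h[b] * (x - self.w[b])
            self.w[s] += self.eps_n * self.h[s] * (x - self.w[s])
        self.h[b] = max(0.0, self.h[b] - 1.0 / self.tau)  # habituate winner
        return b

net = GWR(dim=2)
rng = np.random.default_rng(0)
for _ in range(200):
    net.step(rng.random(2))
print(len(net.w))  # network size grows to cover the input space
```

The growth criterion, rather than a fixed insertion schedule, is what lets the topology track the input distribution, the property the abstract exploits for clustering pose-motion trajectories.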
Functional Connectivity in Frequency-Tagged Cortical Networks During Active Harm Avoidance
Miskovic, Vladimir; Príncipe, José C.; Keil, Andreas
2015-01-01
Many behavioral and cognitive processes are grounded in widespread and dynamic communication between brain regions. Thus, the quantification of functional connectivity with high temporal resolution is highly desirable for capturing in vivo brain function. However, many of the commonly used measures of functional connectivity capture only linear signal dependence and are based entirely on relatively simple quantitative measures such as mean and variance. In this study, the authors used a recently developed algorithm, the generalized measure of association (GMA), to quantify dynamic changes in cortical connectivity using steady-state visual evoked potentials (ssVEPs) measured in the context of a conditioned behavioral avoidance task. GMA uses a nonparametric estimator of statistical dependence based on ranks that is efficient and capable of providing temporal precision roughly corresponding to the timing of cognitive acts (∼100–200 msec). Participants viewed simple gratings predicting the presence/absence of an aversive loud noise, co-occurring with peripheral cues indicating whether the loud noise could be avoided by means of a key press (active) or not (passive). For active compared with passive trials, heightened connectivity between visual and central areas was observed in time segments preceding and surrounding the avoidance cue. Viewing of the threat stimuli also led to greater initial connectivity between occipital and central regions, followed by heightened local coupling among visual regions surrounding the motor response. Local neural coupling within extended visual regions was sustained throughout major parts of the viewing epoch. These findings are discussed in a framework of flexible synchronization between cortical networks as a function of experience and active sensorimotor coupling. PMID:25557925
ERIC Educational Resources Information Center
van Moorselaar, Dirk; Olivers, Christian N. L.; Theeuwes, Jan; Lamme, Victor A. F.; Sligte, Ilja G.
2015-01-01
Visual short-term memory (VSTM) performance is enhanced when the to-be-tested item is cued after encoding. This so-called retro-cue benefit is typically accompanied by a cost for the noncued items, suggesting that information is lost from VSTM upon presentation of a retrospective cue. Here we assessed whether noncued items can be restored to VSTM…
Task-relevant information is prioritized in spatiotemporal contextual cueing.
Higuchi, Yoko; Ueda, Yoshiyuki; Ogawa, Hirokazu; Saiki, Jun
2016-11-01
Implicit learning of visual contexts facilitates search performance, a phenomenon known as contextual cueing; however, little is known about contextual cueing in situations in which multidimensional regularities exist simultaneously. In everyday vision, different kinds of information, such as object identity and location, appear simultaneously and interact with each other. We tested the hypothesis that, in contextual cueing, when multiple regularities are present, the regularities that are most relevant to our behavioral goals are prioritized. Previous studies of contextual cueing have commonly used the visual search paradigm. However, this paradigm is not suitable for directing participants' attention to a particular regularity. Therefore, we developed a new paradigm, the "spatiotemporal contextual cueing paradigm," and manipulated task-relevant and task-irrelevant regularities. In four experiments, we demonstrated that task-relevant regularities were more responsible for search facilitation than task-irrelevant regularities. This finding suggests that our visual behavior is focused on regularities that are relevant to our current goal.
A model for the pilot's use of motion cues in roll-axis tracking tasks
NASA Technical Reports Server (NTRS)
Levison, W. H.; Junker, A. M.
1977-01-01
Simulated target-following and disturbance-regulation tasks were explored with subjects using visual-only and combined visual and motion cues. The effects of motion cues on task performance and pilot response behavior were appreciably different for the two task configurations and were consistent with data reported in earlier studies for similar task configurations. The optimal-control model for pilot/vehicle systems provided a task-independent framework for accounting for the pilot's use of motion cues. Specifically, the availability of motion cues was modeled by augmenting the set of perceptual variables to include position, rate, acceleration, and acceleration rate of the motion simulator, and results were consistent with the hypothesis of attention-sharing between visual and motion variables. This straightforward informational model allowed accurate model predictions of the effects of motion cues on a variety of response measures for both the target-following and disturbance-regulation tasks.
Tosoni, Annalisa; Shulman, Gordon L.; Pope, Anna L. W.; McAvoy, Mark P.; Corbetta, Maurizio
2012-01-01
Success in a dynamically changing world requires both rapid shifts of attention to the location of important objects and the detection of changes in motivational contingencies that may alter future behavior. Here we addressed the relationship between these two processes by measuring the blood-oxygenation-level-dependent (BOLD) signal during a visual search task in which the location and the color of a salient cue respectively indicated where a rewarded target would appear and the monetary gain (large or small) associated with its detection. While cues that either shifted or maintained attention were presented every 4 to 8 seconds, the reward magnitude indicated by the cue changed roughly every 30 seconds, allowing us to distinguish a change in expected reward magnitude from a maintained state of expected reward magnitude. Posterior cingulate cortex was modulated by cues signaling an increase in expected reward magnitude, but not by cues for shifting versus maintaining spatial attention. Dorsal fronto-parietal regions in precuneus and FEF also showed increased BOLD activity for changes in expected reward magnitude from low to high, but in addition showed large independent modulations for shifting versus maintaining attention. In particular, the differential activation for shifting versus maintaining attention was not affected by expected reward magnitude. These results indicate that BOLD activations for shifts of attention and increases in expected reward magnitude are largely separate. Finally, visual cortex showed sustained spatially selective signals that were significantly enhanced when greater reward magnitude was expected, but this reward-related modulation was not observed in spatially selective regions of dorsal fronto-parietal cortex. PMID:22578709
Effects of spatial cues on color-change detection in humans
Herman, James P.; Bogadhi, Amarender R.; Krauzlis, Richard J.
2015-01-01
Studies of covert spatial attention have largely used motion, orientation, and contrast stimuli as these features are fundamental components of vision. The feature dimension of color is also fundamental to visual perception, particularly for catarrhine primates, and yet very little is known about the effects of spatial attention on color perception. Here we present results using novel dynamic color stimuli in both discrimination and color-change detection tasks. We find that our stimuli yield comparable discrimination thresholds to those obtained with static stimuli. Further, we find that an informative spatial cue improves performance and speeds response time in a color-change detection task compared with an uncued condition, similar to what has been demonstrated for motion, orientation, and contrast stimuli. Our results demonstrate the use of dynamic color stimuli for an established psychophysical task and show that color stimuli are well suited to the study of spatial attention. PMID:26047359
Selective Maintenance in Visual Working Memory Does Not Require Sustained Visual Attention
Hollingworth, Andrew; Maxcey-Richard, Ashleigh M.
2012-01-01
In four experiments, we tested whether sustained visual attention is required for the selective maintenance of objects in VWM. Participants performed a color change-detection task. During the retention interval, a valid cue indicated the item that would be tested. Change detection performance was higher in the valid-cue condition than in a neutral-cue control condition. To probe the role of visual attention in the cuing effect, on half of the trials, a difficult search task was inserted after the cue, precluding sustained attention on the cued item. The addition of the search task produced no observable decrement in the magnitude of the cuing effect. In a complementary test, search efficiency was not impaired by simultaneously prioritizing an object for retention in VWM. The results demonstrate that selective maintenance in VWM can be dissociated from the locus of visual attention. PMID:23067118
Tünnermann, Jan; Scharlau, Ingrid
2016-01-01
Peripheral visual cues lead to large shifts in psychometric distributions of temporal-order judgments. In one view, such shifts are attributed to attention speeding up processing of the cued stimulus, so-called prior entry. However, sometimes these shifts are so large that it is unlikely that they are caused by attention alone. Here we tested the prevalent alternative explanation that the cue is sometimes confused with the target on a perceptual level, bolstering the shift of the psychometric function. We applied a novel model of cued temporal-order judgments, derived from Bundesen's Theory of Visual Attention. We found that cue-target confusions indeed contribute to shifting psychometric functions. However, cue-induced changes in the processing rates of the target stimuli play an important role, too. At smaller cueing intervals, the cue increased the processing speed of the target. At larger intervals, inhibition of return was predominant. Earlier studies of cued TOJs were insensitive to these effects because in psychometric distributions they are concealed by the conjoint effects of cue-target confusions and processing rate changes.
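The shift of a temporal-order-judgment psychometric function described above is conventionally quantified as a change in the point of subjective simultaneity (PSS). A minimal illustrative sketch (synthetic response proportions, a plain cumulative-Gaussian fit rather than the authors' TVA-based model):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(soa, pss, sigma):
    # P("probe seen first") as a function of stimulus onset asynchrony (ms)
    return norm.cdf(soa, loc=pss, scale=sigma)

soas = np.array([-90.0, -60.0, -30.0, 0.0, 30.0, 60.0, 90.0])
# Hypothetical proportions: the cue shifts the curve leftward (prior entry)
p_cued = np.array([0.02, 0.05, 0.20, 0.55, 0.85, 0.97, 0.99])
p_neutral = np.array([0.01, 0.03, 0.10, 0.30, 0.60, 0.88, 0.97])

popt_c, _ = curve_fit(psychometric, soas, p_cued, p0=[0.0, 30.0])
popt_n, _ = curve_fit(psychometric, soas, p_neutral, p0=[0.0, 30.0])
shift = popt_n[0] - popt_c[0]  # prior-entry effect in ms
print(f"PSS cued = {popt_c[0]:.1f} ms, PSS neutral = {popt_n[0]:.1f} ms, shift = {shift:.1f} ms")
```

A fit of this kind attributes the entire shift to one latent parameter, which is exactly the ambiguity (attention vs. cue-target confusion) the study above set out to resolve.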
Dissociating emotion-induced blindness and hypervision.
Bocanegra, Bruno R; Zeelenberg, René
2009-12-01
Previous findings suggest that emotional stimuli sometimes improve (emotion-induced hypervision) and sometimes impair (emotion-induced blindness) the visual perception of subsequent neutral stimuli. We hypothesized that these differential carryover effects might be due to 2 distinct emotional influences in visual processing. On the one hand, emotional stimuli trigger a general enhancement in the efficiency of visual processing that can carry over onto other stimuli. On the other hand, emotional stimuli benefit from a stimulus-specific enhancement in later attentional processing at the expense of competing visual stimuli. We investigated whether detrimental (blindness) and beneficial (hypervision) carryover effects of emotion in perception can be dissociated within a single experimental paradigm. In 2 experiments, we manipulated the temporal competition for attention between an emotional cue word and a subsequent neutral target word by varying cue-target interstimulus interval (ISI) and cue visibility. Interestingly, emotional cues impaired target identification at short ISIs but improved target identification when competition was diminished by either increasing ISI or reducing cue visibility, suggesting that emotional significance of stimuli can improve and impair visual performance through distinct perceptual mechanisms.
Campbell, Dana L M; Hauber, Mark E
2009-08-01
Female zebra finches (Taeniopygia guttata) use visual and acoustic traits for accurate recognition of male conspecifics. Evidence from video playbacks confirms that both sensory modalities are important for conspecific and species discrimination, but experimental evidence of the individual roles of these cue types affecting live conspecific recognition is limited. In a spatial paradigm to test discrimination, the authors used live male zebra finch stimuli of 2 color morphs, wild-type (conspecific) and white with a painted black beak (foreign), producing 1 of 2 vocalization types: songs and calls learned from zebra finch parents (conspecific) or cross-fostered songs and calls learned from Bengalese finch (Lonchura striata vars. domestica) foster parents (foreign). The authors found that female zebra finches consistently preferred males with conspecific visual and acoustic cues over males with foreign cues, but did not discriminate when the conspecific and foreign visual and acoustic cues were mismatched. These results indicate the importance of both visual and acoustic features for female zebra finches when discriminating between live conspecific males.
Neural substrates of resisting craving during cigarette cue exposure.
Brody, Arthur L; Mandelkern, Mark A; Olmstead, Richard E; Jou, Jennifer; Tiongson, Emmanuelle; Allen, Valerie; Scheibal, David; London, Edythe D; Monterosso, John R; Tiffany, Stephen T; Korb, Alex; Gan, Joanna J; Cohen, Mark S
2007-09-15
In cigarette smokers, the most commonly reported areas of brain activation during visual cigarette cue exposure are the prefrontal, anterior cingulate, and visual cortices. We sought to determine changes in brain activity in response to cigarette cues when smokers actively resist craving. Forty-two tobacco-dependent smokers underwent functional magnetic resonance imaging, during which they were presented with videotaped cues. Three cue presentation conditions were tested: cigarette cues with subjects allowing themselves to crave (cigarette cue crave), cigarette cues with the instruction to resist craving (cigarette cue resist), and matched neutral cues. Activation was found in the cigarette cue resist (compared with the cigarette cue crave) condition in the left dorsal anterior cingulate cortex (ACC), posterior cingulate cortex (PCC), and precuneus. Lower magnetic resonance signal for the cigarette cue resist condition was found in the cuneus bilaterally, left lateral occipital gyrus, and right postcentral gyrus. These relative activations and deactivations were more robust when the cigarette cue resist condition was compared with the neutral cue condition. Suppressing craving during cigarette cue exposure involves activation of limbic (and related) brain regions and deactivation of primary sensory and motor cortices.
Chiszar, David; Krauss, Susan; Shipley, Bryon; Trout, Tim; Smith, Hobart M
2009-01-01
Five hatchling Komodo Dragons (Varanus komodoensis) at Denver Zoo were observed in two experiments that studied the effects of visual and chemical cues arising from prey. Rate of tongue flicking was recorded in Experiment 1, and the amount of time the lizards spent interacting with stimuli was recorded in Experiment 2. Our hypothesis was that young V. komodoensis would be more dependent upon vision than chemoreception, especially when dealing with live, moving prey. Although visual cues, including prey motion, had a significant effect, chemical cues had a far stronger effect. Implications of this falsification of our initial hypothesis are discussed.
Visual spatial cue use for guiding orientation in two-to-three-year-old children
van den Brink, Danielle; Janzen, Gabriele
2013-01-01
In spatial development representations of the environment and the use of spatial cues change over time. To date, the influence of individual differences in skills relevant for orientation and navigation has not received much attention. The current study investigated orientation abilities on the basis of visual spatial cues in 2–3-year-old children, and assessed factors that possibly influence spatial task performance. Thirty-month and 35-month-olds performed an on-screen Virtual Reality (VR) orientation task searching for an animated target in the presence of visual self-movement cues and landmark information. Results show that, in contrast to 30-month-old children, 35-month-olds were successful in using visual spatial cues for maintaining orientation. Neither age group benefited from landmarks present in the environment, suggesting that successful task performance relied on the use of optic flow cues, rather than object-to-object relations. Analysis of individual differences revealed that 2-year-olds who were relatively more independent in comparison to their peers, as measured by the daily living skills scale of the parental questionnaire Vineland-Screener were most successful at the orientation task. These results support previous findings indicating that the use of various spatial cues gradually improves during early childhood. Our data show that a developmental transition in spatial cue use can be witnessed within a relatively short period of 5 months only. Furthermore, this study indicates that rather than chronological age, individual differences may play a role in successful use of visual cues for spatial updating in an orientation task. Future studies are necessary to assess the exact nature of these individual differences. PMID:24368903
Spielvogel, Ines; Matthes, Jörg; Naderer, Brigitte; Karsay, Kathrin
2018-06-01
Based on cue reactivity theory, food cues embedded in media content can lead to physiological and psychological responses in children. Research suggests that unhealthy food cues are represented more extensively and interactively in children's media environments than healthy ones. However, it is not yet clear whether children react differently to unhealthy compared to healthy food cues. In an experimental study with 56 children (55.4% girls; M age = 8.00, SD = 1.58), we used eye-tracking to determine children's attention to unhealthy and healthy food cues embedded in a narrative cartoon movie. Besides varying the food type (i.e., healthy vs. unhealthy), we also manipulated the integration levels of food cues with characters (i.e., level of food integration: no interaction vs. handling vs. consumption), and we assessed children's individual susceptibility factors by measuring the impact of their hunger level. Our results indicated that unhealthy food cues attract children's visual attention to a larger extent than healthy cues. However, their initial visual interest did not differ between unhealthy and healthy food cues. Furthermore, an increase in the level of food integration led to an increase in visual attention. Our findings showed no moderating impact of hunger. We conclude that especially unhealthy food cues with an interactive connection trigger cue reactivity in children.
Validating Visual Cues In Flight Simulator Visual Displays
NASA Astrophysics Data System (ADS)
Aronson, Moses
1987-09-01
Currently, evaluation of visual simulators is performed either through pilot opinion questionnaires or by comparison of aircraft terminal performance. The approach taken here is to compare a pilot's performance in the flight simulator with a visual display against his performance on the same visual task in the aircraft, as an indication that the visual cues are identical. The A-7 Night Carrier Landing task was selected. Performance measures with high predictive validity for pilot performance were used to compare two samples of existing pilot performance data, to demonstrate that the visual cues evoked the same performance. The performance of four pilots making 491 night landing approaches in an A-7 prototype part-task trainer was compared with the performance of three pilots performing 27 A-7E carrier landing qualification approaches on the CV-60 aircraft carrier. The results show that the pilots' performances were similar, supporting the conclusion that the visual cues provided in the simulator were equivalent to those provided in the real-world situation. Differences between the flight simulator's flight characteristics and those of the aircraft had less of an effect than the pilots' individual performances. The measurement parameters used in the comparison can be used to validate the adequacy of a visual display for training.
I can see what you are saying: Auditory labels reduce visual search times.
Cho, Kit W
2016-10-01
The present study explored the self-directed-speech effect, the finding that relative to silent reading of a label (e.g., DOG), saying it aloud reduces visual search reaction times (RTs) for locating a target picture among distractors. Experiment 1 examined whether this effect is due to a confound in the differences in the number of cues in self-directed speech (two) vs. silent reading (one) and tested whether self-articulation is required for the effect. The results showed that self-articulation is not required and that merely hearing the auditory label reduces visual search RTs relative to silent reading. This finding also rules out the number of cues confound. Experiment 2 examined whether hearing an auditory label activates more prototypical features of the label's referent and whether the auditory-label benefit is moderated by the target's imagery concordance (the degree to which the target picture matches the mental picture that is activated by a written label for the target). When the target imagery concordance was high, RTs following the presentation of a high prototypicality picture or auditory cue were comparable and shorter than RTs following a visual label or low prototypicality picture cue. However, when the target imagery concordance was low, RTs following an auditory cue were shorter than the comparable RTs following the picture cues and visual-label cue. The results suggest that an auditory label activates both prototypical and atypical features of a concept and can facilitate visual search RTs even when compared to picture primes.
A Model of Manual Control with Perspective Scene Viewing
NASA Technical Reports Server (NTRS)
Sweet, Barbara Townsend
2013-01-01
A model of manual control during perspective scene viewing is presented, which combines the Crossover Model with a simplified model of perspective-scene viewing and visual-cue selection. The model is developed for a particular example task: an idealized constant-altitude task in which the operator controls longitudinal position in the presence of both longitudinal and pitch disturbances. An experiment is performed to develop and validate the model. The model corresponds closely with the experimental measurements, and identified model parameters are highly consistent with the visual cues available in the perspective scene. The modeling results indicate that operators used one visual cue for position control, and another visual cue for velocity control (lead generation). Additionally, operators responded more quickly to rotation (pitch) than translation (longitudinal).
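For context, the Crossover Model referenced above approximates the combined pilot-vehicle open-loop dynamics near the crossover frequency as an integrator with an effective time delay, Yp·Yc(jω) = ωc·e^(-jωτ)/(jω). A minimal numeric sketch; the parameter values are illustrative, not those identified in the report:

```python
import numpy as np

def crossover_open_loop(omega, omega_c, tau):
    """McRuer Crossover Model: Yp*Yc(jw) = omega_c * exp(-j*w*tau) / (j*w)."""
    jw = 1j * omega
    return omega_c * np.exp(-jw * tau) / jw

omega_c = 3.0  # crossover frequency, rad/s (assumed value)
tau = 0.25     # effective operator time delay, s (assumed value)

response = crossover_open_loop(omega_c, omega_c, tau)
mag_at_crossover = abs(response)                       # unity gain at crossover, by construction
phase_margin = 180.0 + np.degrees(np.angle(response))  # 90 deg minus the delay contribution
print(round(mag_at_crossover, 3), round(phase_margin, 1))  # prints 1.0 47.0
```

The model's appeal is that a single gain and delay summarize the operator's adaptation to very different vehicle dynamics, which is why it serves as the backbone for the extended perspective-scene model above.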
Zator, Krysten; Katz, Albert N
2017-07-01
Here, we examined linguistic differences in the reports of memories produced by three cueing methods. Two groups of young adults were cued visually, either by words representing events or popular cultural phenomena that took place when they were 5, 10, or 16 years of age, or by a general lifetime-period word cue directing them to that period in their life. A third group heard 30-second-long musical clips of songs popular during the same three time periods. In each condition, participants typed a specific event memory evoked by the cue, and these typed memories were analyzed with the Linguistic Inquiry and Word Count (LIWC) program. Differences in the reports indicated that listening to music evoked memories embodied in motor-perceptual systems more so than memories evoked by our word-cueing conditions. Additionally, relative to music cues, lifetime-period word cues produced memories with reliably more uses of personal pronouns, past-tense terms, and negative emotions. The findings provide evidence for the embodiment of autobiographical memories, and for how those memories differ when the cues emphasise different aspects of the encoded events.
Vogel, Bastian D; Brück, Carolin; Jacob, Heike; Eberle, Mark; Wildgruber, Dirk
2016-07-07
Impaired interpretation of nonverbal emotional cues in patients with schizophrenia has been reported in several studies and a clinical relevance of these deficits for social functioning has been assumed. However, it is unclear to what extent the impairments depend on specific emotions or specific channels of nonverbal communication. Here, the effect of cue modality and emotional categories on accuracy of emotion recognition was evaluated in 21 patients with schizophrenia and compared to a healthy control group (n = 21). To this end, dynamic stimuli comprising speakers of both genders in three different sensory modalities (auditory, visual and audiovisual) and five emotional categories (happy, alluring, neutral, angry and disgusted) were used. Patients with schizophrenia were found to be impaired in emotion recognition in comparison to the control group across all stimuli. Considering specific emotions more severe deficits were revealed in the recognition of alluring stimuli and less severe deficits in the recognition of disgusted stimuli as compared to all other emotions. Regarding cue modality the extent of the impairment in emotional recognition did not significantly differ between auditory and visual cues across all emotional categories. However, patients with schizophrenia showed significantly more severe disturbances for vocal as compared to facial cues when sexual interest is expressed (alluring stimuli), whereas more severe disturbances for facial as compared to vocal cues were observed when happiness or anger is expressed. Our results confirmed that perceptual impairments can be observed for vocal as well as facial cues conveying various social and emotional connotations. The observed differences in severity of impairments with most severe deficits for alluring expressions might be related to specific difficulties in recognizing the complex social emotional information of interpersonal intentions as compared to "basic" emotional states. 
Therefore, future studies evaluating perception of nonverbal cues should consider a broader range of social and emotional signals beyond basic emotions including attitudes and interpersonal intentions. Identifying specific domains of social perception particularly prone for misunderstandings in patients with schizophrenia might allow for a refinement of interventions aiming at improving social functioning.
Cai, Weidong; Chen, Tianwen; Ide, Jaime S; Li, Chiang-Shan R; Menon, Vinod
2017-08-01
The ability to anticipate and detect behaviorally salient stimuli is important for virtually all adaptive behaviors, including inhibitory control that requires the withholding of prepotent responses when instructed by external cues. Although right fronto-operculum-insula (FOI), encompassing the anterior insular cortex (rAI) and inferior frontal cortex (rIFC), involvement in inhibitory control is well established, little is known about signaling mechanisms underlying their differential roles in detection and anticipation of salient inhibitory cues. Here we use 2 independent functional magnetic resonance imaging data sets to investigate dynamic causal interactions of the rAI and rIFC, with sensory cortex during detection and anticipation of inhibitory cues. Across 2 different experiments involving auditory and visual inhibitory cues, we demonstrate that primary sensory cortex has a stronger causal influence on rAI than on rIFC, suggesting a greater role for the rAI in detection of salient inhibitory cues. Crucially, a Bayesian prediction model of subjective trial-by-trial changes in inhibitory cue anticipation revealed that the strength of causal influences from rIFC to rAI increased significantly on trials in which participants had higher anticipation of inhibitory cues. Together, these results demonstrate the dissociable bottom-up and top-down roles of distinct FOI regions in detection and anticipation of behaviorally salient cues across multiple sensory modalities.
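Trial-by-trial anticipation models of the kind mentioned above are often built on simple Bayesian belief updating. An illustrative beta-Bernoulli sketch of how a subjective expectation of an inhibitory (stop) cue could evolve across trials; this is a generic toy model, not the authors' specific prediction model:

```python
# Beta-Bernoulli updating of the belief that the next trial contains a stop cue
alpha, beta = 1.0, 1.0  # uniform prior over the stop-cue probability
trials = [0, 0, 1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical sequence; 1 = stop cue observed

expectations = []
for outcome in trials:
    expectations.append(alpha / (alpha + beta))  # anticipation before the trial
    alpha += outcome                              # posterior update after the trial
    beta += 1 - outcome

print([round(e, 2) for e in expectations])
```

In the study, quantities like these per-trial expectations are what get regressed against connectivity strengths; the toy model only shows the updating mechanics.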
Díaz-Santos, Mirella; Cao, Bo; Mauro, Samantha A.; Yazdanbakhsh, Arash; Neargarder, Sandy; Cronin-Golomb, Alice
2017-01-01
Parkinson’s disease (PD) and normal aging have been associated with changes in visual perception, including reliance on external cues to guide behavior. This raises the question of the extent to which these groups use visual cues when disambiguating information. Twenty-seven individuals with PD, 23 normal control adults (NC), and 20 younger adults (YA) were presented a Necker cube in which one face was highlighted by thickening the lines defining the face. The hypothesis was that the visual cues would help PD and NC to exert better control over bistable perception. There were three conditions, including passive viewing and two volitional-control conditions (hold one percept in front; and switch: speed up the alternation between the two). In the Hold condition, the cue was either consistent or inconsistent with task instructions. Mean dominance durations (time spent on each percept) under passive viewing were comparable in PD and NC, and shorter in YA. PD and YA increased dominance durations in the Hold cue-consistent condition relative to NC, meaning that appropriate cues helped PD but not NC hold one perceptual interpretation. By contrast, in the Switch condition, NC and YA decreased dominance durations relative to PD, meaning that the use of cues helped NC but not PD in expediting the switch between percepts. Provision of low-level cues has effects on volitional control in PD that are different from in normal aging, and only under task-specific conditions does the use of such cues facilitate the resolution of perceptual ambiguity. PMID:25765890
Simple control-theoretic models of human steering activity in visually guided vehicle control
NASA Technical Reports Server (NTRS)
Hess, Ronald A.
1991-01-01
A simple control theoretic model of human steering or control activity in the lateral-directional control of vehicles such as automobiles and rotorcraft is discussed. The term 'control theoretic' is used to emphasize the fact that the model is derived from a consideration of well-known control system design principles as opposed to psychological theories regarding egomotion, etc. The model is employed to emphasize the 'closed-loop' nature of tasks involving the visually guided control of vehicles upon, or in close proximity to, the earth and to hypothesize how changes in vehicle dynamics can significantly alter the nature of the visual cues which a human might use in such tasks.
Rapid neural discrimination of communicative gestures
Carlson, Thomas A.
2015-01-01
Humans are biased toward social interaction. Behaviorally, this bias is evident in the rapid effects that self-relevant communicative signals have on attention and perceptual systems. The processing of communicative cues recruits a wide network of brain regions, including mentalizing systems. Relatively less work, however, has examined the timing of the processing of self-relevant communicative cues. In the present study, we used multivariate pattern analysis (decoding) approach to the analysis of magnetoencephalography (MEG) to study the processing dynamics of social-communicative actions. Twenty-four participants viewed images of a woman performing actions that varied on a continuum of communicative factors including self-relevance (to the participant) and emotional valence, while their brain activity was recorded using MEG. Controlling for low-level visual factors, we found early discrimination of emotional valence (70 ms) and self-relevant communicative signals (100 ms). These data offer neural support for the robust and rapid effects of self-relevant communicative cues on behavior. PMID:24958087
NASA Technical Reports Server (NTRS)
Wenzel, Elizabeth M.; Fisher, Scott S.; Stone, Philip K.; Foster, Scott H.
1991-01-01
The real time acoustic display capabilities are described which were developed for the Virtual Environment Workstation (VIEW) Project at NASA-Ames. The acoustic display is capable of generating localized acoustic cues in real time over headphones. An auditory symbology, a related collection of representational auditory 'objects' or 'icons', can be designed using ACE (Auditory Cue Editor), which links both discrete and continuously varying acoustic parameters with information or events in the display. During a given display scenario, the symbology can be dynamically coordinated in real time with 3-D visual objects, speech, and gestural displays. The types of displays feasible with the system range from simple warnings and alarms to the acoustic representation of multidimensional data or events.
Simon Effect with and without Awareness of the Accessory Stimulus
ERIC Educational Resources Information Center
Treccani, Barbara; Umilta, Carlo; Tagliabue, Mariaelena
2006-01-01
The authors investigated whether a Simon effect could be observed in an accessory-stimulus Simon task when participants were unaware of the task-irrelevant accessory cue. In Experiment 1A a central visual target was accompanied by a suprathreshold visual lateral cue. A regular Simon effect (i.e., faster cue-response corresponding reaction times…
Infants' Selective Attention to Reliable Visual Cues in the Presence of Salient Distractors
ERIC Educational Resources Information Center
Tummeltshammer, Kristen Swan; Mareschal, Denis; Kirkham, Natasha Z.
2014-01-01
With many features competing for attention in their visual environment, infants must learn to deploy attention toward informative cues while ignoring distractions. Three eye tracking experiments were conducted to investigate whether 6- and 8-month-olds (total N = 102) would shift attention away from a distractor stimulus to learn a cue-reward…
Atypical Visual Orienting to Gaze- and Arrow-Cues in Adults with High Functioning Autism
ERIC Educational Resources Information Center
Vlamings, Petra H. J. M.; Stauder, Johannes E. A.; van Son, Ilona A. M.; Mottron, Laurent
2005-01-01
The present study investigates visual orienting to directional cues (arrow or eyes) in adults with high functioning autism (n = 19) and age matched controls (n = 19). A choice reaction time paradigm is used in which eye-or arrow direction correctly (congruent) or incorrectly (incongruent) cues target location. In typically developing participants,…
Integration of visual and motion cues for simulator requirements and ride quality investigation
NASA Technical Reports Server (NTRS)
Young, L. R.
1976-01-01
Practical tools which can extend the state of the art of moving-base flight simulation for research and training are developed. The main approaches to this research effort include: (1) application of the vestibular model for perception of orientation based on motion cues, including optimum simulator motion controls; and (2) visual cues in landing.
Automaticity of phasic alertness: evidence for a three-component model of visual cueing
Lin, Zhicheng; Lu, Zhong-Lin
2017-01-01
The automaticity of phasic alertness is investigated using the attention network test. Results show that the cueing effect from the alerting cue—double cue—is strongly enhanced by the task relevance of visual cues, as determined by the informativeness of the orienting cue—single cue—that is being mixed (80% vs. 50% valid in predicting where the target will appear). Counterintuitively, the cueing effect from the alerting cue can be negatively affected by its visibility, such that masking the cue from awareness can reveal a cueing effect that is otherwise absent when the cue is visible. Evidently, top-down influences—in the form of contextual relevance and cue awareness—can have opposite influences on the cueing effect by the alerting cue. These findings lead us to the view that a visual cue can engage three components of attention—orienting, alerting, and inhibition—to determine the behavioral cueing effect. We propose that phasic alertness, particularly in the form of specific response readiness, is regulated by both internal, top-down expectation and external, bottom-up stimulus properties. In contrast to some existing views, we advance the perspective that phasic alertness is strongly tied to temporal orienting, attentional capture, and spatial orienting. Finally, we discuss how translating attention research to clinical applications would benefit from an improved ability to measure attention. To this end, controlling the degree of intraindividual variability in the attentional components and improving the precision of the measurement tools may prove vital. PMID:27173487
Memory for Drug Related Visual Stimuli in Young Adult, Cocaine Dependent Polydrug Users
Ray, Suchismita; Pandina, Robert; Bates, Marsha E.
2015-01-01
Background and Objectives: Implicit (unconscious) and explicit (conscious) memory associations with drugs have been examined primarily using verbal cues. However, drug seeking, drug use behaviors, and relapse in chronic cocaine and other drug users are frequently triggered by viewing substance-related visual cues in the environment. We thus examined implicit and explicit memory for drug picture cues to understand the relative extent to which conscious and unconscious memory facilitation of visual drug cues occurs during cocaine dependence. Methods: Memory for drug-related and neutral picture cues was assessed in 14 inpatient cocaine dependent polydrug users and a comparison group of 21 young adults with limited drug experience (N = 35). Participants completed picture cue exposure, free recall, and recognition tasks to assess explicit memory, and a repetition priming task to assess implicit memory. Results: Drug cues, compared to neutral cues, were better explicitly recalled and implicitly primed, especially in the cocaine group. In contrast, neutral cues were better explicitly recognized, especially in the control group. Conclusion: Certain forms of explicit and implicit memory for drug cues were enhanced in cocaine users compared to controls when memory was tested a short time following cue exposure. Enhanced unconscious memory processing of drug cues in chronic cocaine users may be a behavioral manifestation of heightened drug cue salience that supports drug seeking and taking. There may be value in expanding intervention techniques to utilize cocaine users' implicit memory system. PMID:24588421
Food and conspecific chemical cues modify visual behavior of zebrafish, Danio rerio.
Stephenson, Jessica F; Partridge, Julian C; Whitlock, Kathleen E
2012-06-01
Animals use the different qualities of olfactory and visual sensory information to make decisions. Ethological and electrophysiological evidence suggests that there is cross-modal priming between these sensory systems in fish. We present the first experimental study showing that ecologically relevant chemical mixtures alter visual behavior, using adult male and female zebrafish, Danio rerio. Neutral-density filters were used to attenuate the light reaching the tank to an initial light intensity of 2.3×10^16 photons/s/m^2. Fish were exposed to food cue and to alarm cue. The light intensity was then increased by the removal of one layer of filter (nominal absorbance 0.3) every minute until, after 10 minutes, the light level was 15.5×10^16 photons/s/m^2. Adult male and female zebrafish responded to a moving visual stimulus at lower light levels if they had been first exposed to food cue, or to conspecific alarm cue. These results suggest the need for more integrative studies of sensory biology.
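The stepwise light increase follows from the textbook absorbance relation for stacked neutral-density filters: a filter of absorbance A transmits a fraction 10^-A of the light, and absorbances of stacked filters add, so removing one 0.3-absorbance layer roughly doubles the intensity (10^0.3 ≈ 2). A minimal sketch with an illustrative source intensity and filter stack, not the study's exact setup:

```python
# Sketch: light transmitted through a stack of neutral-density filters.
# A filter with absorbance A transmits 10**(-A) of incident light, and
# absorbances of stacked filters add. Values here are illustrative only.

def transmitted(source_intensity, absorbances):
    """Intensity after passing through filters with the given absorbances."""
    total_absorbance = sum(absorbances)
    return source_intensity * 10 ** (-total_absorbance)

source = 1.0e18          # photons/s/m^2 (hypothetical source intensity)
stack = [0.3] * 5        # five 0.3-absorbance filters

for step in range(len(stack) + 1):
    remaining = stack[step:]  # remove one filter per step
    # each 0.3-absorbance filter removed roughly doubles the light
    print(step, transmitted(source, remaining))
```

With all five filters in place, only 10^-1.5 ≈ 3% of the source light reaches the tank; each removal step brightens the scene by a constant factor, giving the gradual ramp the study exploits.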
Mackrous, I; Simoneau, M
2011-11-10
Following body rotation, optimal updating of the position of a memorized target is attained when retinal error is perceived and a corrective saccade is performed. Thus, it appears that these processes may enable the calibration of the vestibular system by facilitating the sharing of information between both reference frames. Here, it is assessed whether having sensory information regarding body rotation in the target reference frame could enhance an individual's learning rate to predict the position of an earth-fixed target. During rotation, participants had to respond when they felt their body midline had crossed the position of the target and received knowledge of result. During practice blocks, for two groups, visual cues were displayed in the same reference frame as the target, whereas a third group relied on vestibular information (vestibular-only group) to predict the location of the target. Participants unaware of the role of the visual cues (visual cues group) learned to predict the location of the target, and spatial error decreased from 16.2 to 2.0°, reflecting a learning rate of 34.08 trials (determined from fitting a falling exponential model). In contrast, the group aware of the role of the visual cues (explicit visual cues group) showed a faster learning rate (i.e. 2.66 trials) but a similar final spatial error (2.9°). For the vestibular-only group, similar accuracy was achieved (final spatial error of 2.3°), but the learning rate was much slower (i.e. 43.29 trials). Transfer to the post-test (no visual cues and no knowledge of result) increased the spatial error of the explicit visual cues group (9.5°) but did not change the performance of the vestibular group (1.2°). Overall, these results imply that cognition assists the brain in processing the sensory information within the target reference frame. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
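A "learning rate" expressed in trials is the time constant of a falling exponential, error(n) ≈ a·exp(-n/tau) + c, fitted to the error-by-trial curve. A minimal sketch on synthetic data; the scan-over-tau with a least-squares line fit at each candidate is an assumption for illustration, not necessarily the authors' fitting procedure:

```python
import math

# Sketch: fit error ≈ a * exp(-trial / tau) + c. For each candidate tau,
# the best (a, c) is an ordinary least-squares regression of error onto
# exp(-trial / tau); keep the tau with the smallest squared error.

def fit_falling_exponential(trials, errors):
    n = len(trials)
    best = None
    for i in range(1, 1001):
        tau = i * 0.1                       # scan tau from 0.1 to 100 trials
        x = [math.exp(-t / tau) for t in trials]
        mx, my = sum(x) / n, sum(errors) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, errors))
        a = sxy / sxx
        c = my - a * mx
        sse = sum((a * xi + c - yi) ** 2 for xi, yi in zip(x, errors))
        if best is None or sse < best[0]:
            best = (sse, tau, a, c)
    return best[1], best[2], best[3]        # tau (trials), amplitude, asymptote

# synthetic learning curve: error falls from ~16 deg toward ~2 deg, tau = 30
trials = list(range(1, 121))
errors = [14.0 * math.exp(-t / 30.0) + 2.0 for t in trials]
tau, a, c = fit_falling_exponential(trials, errors)
print(round(tau, 1))  # 30.0
```

On this reading, the explicit visual cues group's tau of 2.66 trials versus the vestibular-only group's 43.29 trials means both curves bottom out at similar asymptotes (c), but one decays an order of magnitude faster.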
Hu, Bin; Yue, Shigang; Zhang, Zhuhong
All complex motion patterns can be decomposed into several elements, including translation, expansion/contraction, and rotational motion. In biological vision systems, scientists have found that specific types of visual neurons have specific preferences for each of the three motion elements. There are computational models of translation and expansion/contraction perception; however, little has been done in the past to create computational models for rotational motion perception. To fill this gap, we propose a neural network that utilizes a specific spatiotemporal arrangement of asymmetric laterally inhibited direction selective neural networks (DSNNs) for rotational motion perception. The proposed neural network consists of two parts: a presynaptic part and a postsynaptic part. In the presynaptic part, a number of laterally inhibited DSNNs extract directional visual cues. In the postsynaptic part, similar to the arrangement of the directional columns in the cerebral cortex, these direction selective neurons are arranged in a cyclic order to perceive rotational motion cues. In the postsynaptic network, the delayed excitation from each direction selective neuron is multiplied by the gathered excitation from this neuron and its unilateral counterparts, depending on which rotation, clockwise (cw) or counter-cw (ccw), is to be perceived. Systematic experiments under various conditions and settings have been carried out and validated the robustness and reliability of the proposed neural network in detecting cw or ccw rotational motion. This research is a critical step further toward dynamic visual information processing.
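The core idea of the cyclic arrangement (delayed excitation from each unit multiplied by a neighbor's current excitation, chained around a ring) can be caricatured in a few lines. This toy correlation ring is a sketch of the general scheme only, not the authors' DSNN model:

```python
# Toy sketch of a ring of direction-selective units: each unit's delayed
# excitation is multiplied by its neighbour's current excitation. A bump
# of activity stepping clockwise drives the cw product chain, not the ccw.

N = 8  # directional columns arranged around the ring

def ring_outputs(activity):
    """activity: list of frames, each a length-N list of unit excitations.
    Returns accumulated (cw, ccw) rotation evidence."""
    cw = ccw = 0.0
    for prev, cur in zip(activity, activity[1:]):
        for i in range(N):
            cw += prev[i] * cur[(i + 1) % N]   # delayed unit i × current i+1
            ccw += prev[i] * cur[(i - 1) % N]  # delayed unit i × current i-1
    return cw, ccw

# synthetic clockwise stimulus: one active unit stepping around the ring
frames = [[1.0 if j == (t % N) else 0.0 for j in range(N)] for t in range(16)]
cw, ccw = ring_outputs(frames)
print(cw > ccw)  # True: the ring reports clockwise rotation
```

Reversing the stimulus direction swaps which product chain accumulates evidence, which is the basic asymmetry the full model builds on.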
Stacey, Paula C.; Kitterick, Pádraig T.; Morris, Saffron D.; Sumner, Christian J.
2017-01-01
Understanding what is said in demanding listening situations is assisted greatly by looking at the face of a talker. Previous studies have observed that normal-hearing listeners can benefit from this visual information when a talker's voice is presented in background noise. These benefits have also been observed in quiet listening conditions in cochlear-implant users, whose device does not convey the informative temporal fine structure cues in speech, and when normal-hearing individuals listen to speech processed to remove these informative temporal fine structure cues. The current study (1) characterised the benefits of visual information when listening in background noise; and (2) used sine-wave vocoding to compare the size of the visual benefit when speech is presented with or without informative temporal fine structure. The accuracy with which normal-hearing individuals reported words in spoken sentences was assessed across three experiments. The availability of visual information and informative temporal fine structure cues was varied within and across the experiments. The results showed that visual benefit was observed using open- and closed-set tests of speech perception. The size of the benefit increased when informative temporal fine structure cues were removed. This finding suggests that visual information may play an important role in the ability of cochlear-implant users to understand speech in many everyday situations. Models of audio-visual integration were able to account for the additional benefit of visual information when speech was degraded and suggested that auditory and visual information was being integrated in a similar way in all conditions. The modelling results were consistent with the notion that audio-visual benefit is derived from the optimal combination of auditory and visual sensory cues. PMID:27085797
Smith, Tim J.; Senju, Atsushi
2017-01-01
While numerous studies have demonstrated that infants and adults preferentially orient to social stimuli, it remains unclear as to what drives such preferential orienting. It has been suggested that the learned association between social cues and subsequent reward delivery might shape such social orienting. Using a novel, spontaneous indication of reinforcement learning (with the use of a gaze contingent reward-learning task), we investigated whether children's and adults' orienting towards social and non-social visual cues can be elicited by the association between participants' visual attention and a rewarding outcome. Critically, we assessed whether the engaging nature of the social cues influences the process of reinforcement learning. Both children and adults learned to orient more often to the visual cues associated with reward delivery, demonstrating that cue–reward association reinforced visual orienting. More importantly, when the reward-predictive cue was social and engaging, both children and adults learned the cue–reward association faster and more efficiently than when the reward-predictive cue was social but non-engaging. These new findings indicate that social engaging cues have a positive incentive value. This could possibly be because they usually coincide with positive outcomes in real life, which could partly drive the development of social orienting. PMID:28250186
Working memory enhances visual perception: evidence from signal detection analysis.
Soto, David; Wriglesworth, Alice; Bahrami-Balani, Alex; Humphreys, Glyn W
2010-03-01
We show that perceptual sensitivity to visual stimuli can be modulated by matches between the contents of working memory (WM) and stimuli in the visual field. Observers were presented with an object cue (to hold in WM or to merely attend) and subsequently had to identify a brief target presented within a colored shape. The cue could be re-presented in the display, where it surrounded either the target (on valid trials) or a distractor (on invalid trials). Perceptual identification of the target, as indexed by A', was enhanced on valid relative to invalid trials but only when the cue was kept in WM. There was minimal effect of the cue when it was merely attended and not kept in WM. Verbal cues were as effective as visual cues at modulating perceptual identification, and the effects were independent of the effects of target saliency. Matches to the contents of WM influenced perceptual sensitivity even under conditions that minimized competition for selecting the target. WM cues were also effective when targets were less likely to fall in a repeated WM stimulus than in other stimuli in the search display. There were no effects of WM on decisional criteria, in contrast to sensitivity. The findings suggest that reentrant feedback from WM can affect early stages of perceptual processing.
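The sensitivity indices involved (the nonparametric A' reported here and the parametric d') are standard signal-detection quantities computed from hit and false-alarm rates. A sketch using the usual Pollack–Norman formula for A' and the Gaussian definition of d'; the rates below are hypothetical, not the study's data:

```python
from statistics import NormalDist

# Sketch: standard signal-detection sensitivity indices from hit and
# false-alarm rates. The (hit, false-alarm) pairs below are hypothetical.

def a_prime(hit, fa):
    """Nonparametric sensitivity A' (Pollack-Norman form, for hit >= fa)."""
    return 0.5 + ((hit - fa) * (1 + hit - fa)) / (4 * hit * (1 - fa))

def d_prime(hit, fa):
    """Parametric sensitivity d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit) - z(fa)

valid, invalid = (0.85, 0.20), (0.70, 0.20)  # (hit, false-alarm) rates
print(a_prime(*valid) > a_prime(*invalid))   # True: valid-cue advantage
```

Because A' compares sensitivity while a separate bias index captures decisional criteria, a validity effect on A' with no criterion shift is exactly the pattern the abstract describes: WM matches sharpen perception rather than merely shifting the response threshold.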
Liu, Sisi; Liu, Duo; Pan, Zhihui; Xu, Zhengye
2018-03-25
A growing body of research suggests that visual-spatial attention is important for reading achievement. However, few studies have been conducted in non-alphabetic orthographies. This study extended the current research to reading development in Chinese, a logographic writing system known for its visual complexity. Eighty Hong Kong Chinese children were selected and divided into poor reader and typical reader groups, based on their performance on the measures of reading fluency, Chinese character reading, and reading comprehension. The poor and typical readers were matched on age and nonverbal intelligence. A Posner's spatial cueing task was adopted to measure the exogenous and endogenous orienting of visual-spatial attention. Although the typical readers showed the cueing effect in the central cue condition (i.e., responses to targets following valid cues were faster than those to targets following invalid cues), the poor readers did not respond differently in valid and invalid conditions, suggesting an impairment of the endogenous orienting of attention. The two groups, however, showed a similar cueing effect in the peripheral cue condition, indicating intact exogenous orienting in the poor readers. These findings generally supported a link between the orienting of covert attention and Chinese reading, providing evidence for the attentional-deficit theory of dyslexia. Copyright © 2018 John Wiley & Sons, Ltd.
Hecht, Marcus; Thiemann, Ulf; Freitag, Christine M; Bender, Stephan
2016-01-15
Post-perceptual cues can enhance visual short term memory encoding even after the offset of the visual stimulus. However, both the mechanisms by which the sensory stimulus characteristics are buffered as well as the mechanisms by which post-perceptual selective attention enhances short term memory encoding remain unclear. We analyzed late post-perceptual event-related potentials (ERPs) in visual change detection tasks (100 ms stimulus duration) by high-resolution ERP analysis to elucidate these mechanisms. The effects of early and late auditory post-cues (300 ms or 850 ms after visual stimulus onset) as well as the effects of a visual interference stimulus were examined in 27 healthy right-handed adults. Focusing attention with post-perceptual cues at both latencies significantly improved memory performance, i.e. sensory stimulus characteristics were available for up to 850 ms after stimulus presentation. Passive watching of the visual stimuli without auditory cue presentation evoked a slow negative wave (N700) over occipito-temporal visual areas. N700 was strongly reduced by a visual interference stimulus which impeded memory maintenance. In contrast, contralateral delay activity (CDA) still developed in this condition after the application of auditory post-cues and was thereby dissociated from N700. CDA and N700 seem to represent two different processes involved in short term memory encoding. While N700 could reflect visual post processing by automatic attention attraction, CDA may reflect the top-down process of searching selectively for the required information through post-perceptual attention. Copyright © 2015 Elsevier Inc. All rights reserved.
Motivation and short-term memory in visual search: Attention's accelerator revisited.
Schneider, Daniel; Bonmassar, Claudia; Hickey, Clayton
2018-05-01
A cue indicating the possibility of cash reward will cause participants to perform memory-based visual search more efficiently. A recent study has suggested that this performance benefit might reflect the use of multiple memory systems: when needed, participants may maintain the to-be-remembered object in both long-term and short-term visual memory, with this redundancy benefitting target identification during search (Reinhart, McClenahan & Woodman, 2016). Here we test this compelling hypothesis. We had participants complete a memory-based visual search task involving a reward cue that either preceded presentation of the to-be-remembered target (pre-cue) or followed it (retro-cue). Following earlier work, we tracked memory representation using two components of the event-related potential (ERP): the contralateral delay activity (CDA), reflecting short-term visual memory, and the anterior P170, reflecting long-term storage. We additionally tracked attentional preparation and deployment in the contingent negative variation (CNV) and N2pc, respectively. Results show that only the reward pre-cue impacted our ERP indices of memory. However, both types of cue elicited a robust CNV, reflecting an influence on task preparation, both had equivalent impact on deployment of attention to the target, as indexed in the N2pc, and both had equivalent impact on visual search behavior. Reward prospect thus has an influence on memory-guided visual search, but this does not appear to be necessarily mediated by a change in the visual memory representations indexed by CDA. Our results demonstrate that the impact of motivation on search is not a simple product of improved memory for target templates. Copyright © 2017 Elsevier Ltd. All rights reserved.
A systematic comparison between visual cues for boundary detection.
Mély, David A; Kim, Junkyung; McGill, Mason; Guo, Yuliang; Serre, Thomas
2016-03-01
The detection of object boundaries is a critical first step for many visual processing tasks. Multiple cues (we consider luminance, color, motion and binocular disparity) available in the early visual system may signal object boundaries but little is known about their relative diagnosticity and how to optimally combine them for boundary detection. This study thus aims at understanding how early visual processes inform boundary detection in natural scenes. We collected color binocular video sequences of natural scenes to construct a video database. Each scene was annotated with two full sets of ground-truth contours (one set limited to object boundaries and another set which included all edges). We implemented an integrated computational model of early vision that spans all considered cues, and then assessed their diagnosticity by training machine learning classifiers on individual channels. Color and luminance were found to be most diagnostic while stereo and motion were least. Combining all cues yielded a significant improvement in accuracy beyond that of any cue in isolation. Furthermore, the accuracy of individual cues was found to be a poor predictor of their unique contribution for the combination. This result suggested a complex interaction between cues, which we further quantified using regularization techniques. Our systematic assessment of the accuracy of early vision models for boundary detection together with the resulting annotated video dataset should provide a useful benchmark towards the development of higher-level models of visual processing. Copyright © 2016 Elsevier Ltd. All rights reserved.
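The core comparison (scoring individual cue channels against ground-truth boundaries, then scoring their combination) can be illustrated on synthetic data. This sketch stands in for the paper's actual features and machine-learning classifiers; the noise levels and the simple averaging combination are assumptions for illustration:

```python
import random

# Toy sketch of the comparison logic, not the paper's pipeline: score each
# location with individual cue channels versus their combination, and
# measure boundary-detection accuracy against ground truth. Synthetic data.

random.seed(0)

def accuracy(scores, labels, threshold=0.5):
    return sum((s > threshold) == l for s, l in zip(scores, labels)) / len(labels)

# ground truth: True = boundary location; each cue is a noisy copy of it
labels = [random.random() < 0.5 for _ in range(5000)]

def noisy_cue(noise_sd):
    return [l + random.gauss(0, noise_sd) for l in labels]

# more diagnostic cues (lower noise) vs. a less diagnostic one
luminance, color, motion = noisy_cue(0.6), noisy_cue(0.6), noisy_cue(0.9)
combined = [(a + b + c) / 3 for a, b, c in zip(luminance, color, motion)]

print(accuracy(combined, labels) > accuracy(luminance, labels))  # True
```

Averaging partially independent noisy channels lowers the effective noise, which is the simplest version of the paper's finding that combining cues beats any cue in isolation; their regularization analysis goes further by quantifying each cue's unique contribution.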
Attentional Capture of Objects Referred to by Spoken Language
ERIC Educational Resources Information Center
Salverda, Anne Pier; Altmann, Gerry T. M.
2011-01-01
Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants…
Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli
Störmer, Viola S.; McDonald, John J.; Hillyard, Steven A.
2009-01-01
The question of whether attention makes sensory impressions appear more intense has been a matter of debate for over a century. Recent psychophysical studies have reported that attention increases apparent contrast of visual stimuli, but the issue continues to be debated. We obtained converging neurophysiological evidence from human observers as they judged the relative contrast of visual stimuli presented to the left and right visual fields following a lateralized auditory cue. Cross-modal cueing of attention boosted the apparent contrast of the visual target in association with an enlarged neural response in the contralateral visual cortex that began within 100 ms after target onset. The magnitude of the enhanced neural response was positively correlated with perceptual reports of the cued target being higher in contrast. The results suggest that attention increases the perceived contrast of visual stimuli by boosting early sensory processing in the visual cortex. PMID:20007778
Wong, Yvonne J; Aldcroft, Adrian J; Large, Mary-Ellen; Culham, Jody C; Vilis, Tutis
2009-12-01
We examined the role of temporal synchrony (the simultaneous appearance of visual features) in the perceptual and neural processes underlying object persistence. When a binding cue (such as color or motion) momentarily exposes an object from a background of similar elements, viewers remain aware of the object for several seconds before it perceptually fades into the background, a phenomenon known as object persistence. We showed that persistence from temporal stimulus synchrony, like that arising from motion and color, is associated with activation in the lateral occipital (LO) area, as measured by functional magnetic resonance imaging. We also compared the distribution of occipital cortex activity related to persistence to that of iconic visual memory. Although activation related to iconic memory was largely confined to LO, activation related to object persistence was present across V1 to LO, peaking in V3 and V4, regardless of the binding cue (temporal synchrony, motion, or color). Although persistence from motion cues was not associated with higher activation in the MT+ motion complex, persistence from color cues was associated with increased activation in V4. Taken together, these results demonstrate that although persistence is a form of visual memory, it relies on neural mechanisms different from those of iconic memory. That is, persistence not only activates LO in a cue-independent manner, it also recruits visual areas that may be necessary to maintain binding between object elements.
The effects of auditory and visual cues on timing synchronicity for robotic rehabilitation.
English, Brittney A; Howard, Ayanna M
2017-07-01
In this paper, we explore how the integration of auditory and visual cues can help teach the timing of motor skills for the purpose of motor function rehabilitation. We conducted a study using Amazon's Mechanical Turk in which 106 participants played a virtual therapy game requiring wrist movements. To validate that our results would translate to trends that could also be observed during robotic rehabilitation sessions, we recreated this experiment with 11 participants using a robotic wrist rehabilitation system as means to control the therapy game. During interaction with the therapy game, users were asked to learn and reconstruct a tapping sequence as defined by musical notes flashing on the screen. Participants were divided into 2 test groups: (1) control: participants only received visual cues to prompt them on the timing sequence, and (2) experimental: participants received both visual and auditory cues to prompt them on the timing sequence. To evaluate performance, the timing and length of the sequence were measured. Performance was determined by calculating the number of trials needed before the participant was able to master the specific aspect of the timing task. In the virtual experiment, the group that received visual and auditory cues was able to master all aspects of the timing task faster than the visual cue only group with p-values < 0.05. This trend was also verified for participants using the robotic arm exoskeleton in the physical experiment.
St Jacques, Peggy L; Conway, Martin A; Cabeza, Roberto
2011-10-01
Gender differences are frequently observed in autobiographical memory (AM). However, few studies have investigated the neural basis of potential gender differences in AM. In the present functional MRI (fMRI) study we investigated gender differences in AMs elicited using dynamic visual images vs verbal cues. We used a novel technology called a SenseCam, a wearable device that automatically takes thousands of photographs. SenseCam differs considerably from other prospective methods of generating retrieval cues because it does not disrupt the ongoing experience. This allowed us to control for potential gender differences in emotional processing and elaborative rehearsal, while manipulating how the AMs were elicited. We predicted that males would retrieve more richly experienced AMs elicited by the SenseCam images vs the verbal cues, whereas females would show equal sensitivity to both cues. The behavioural results indicated that there were no gender differences in subjective ratings of reliving, importance, vividness, emotion, and uniqueness, suggesting that gender differences in brain activity were not due to differences in these measures of phenomenological experience. Consistent with our predictions, the fMRI results revealed that males showed a greater difference in functional activity associated with the rich experience of SenseCam vs verbal cues, than did females.
Gema Díaz-Blancat; Juan García-Prieto; Fernando Maestú; Francisco Barceló
2018-05-01
One common assumption has been that prefrontal executive control is mostly required for target detection (Posner and Petersen in Ann Rev Neurosci 13:25-42, 1990). Alternatively, cognitive control has also been related to anticipatory updating of task-set (contextual) information, a view that highlights proactive control processes. Frontoparietal cortical networks contribute to both proactive control and reactive target detection, although their fast dynamics are still largely unexplored. To examine this, we analyzed rapid magnetoencephalographic (MEG) source activations elicited by task cues and target cards in a task-cueing analogue of the Wisconsin Card Sorting Test. A single-task (color sorting) condition with equivalent perceptual and motor demands was used as a control. Our results revealed fast, transient and largely switch-specific MEG activations across frontoparietal and cingulo-opercular regions in anticipation of target cards, including (1) early (100-200 ms) cue-locked MEG signals at visual, temporo-parietal and prefrontal cortices of the right hemisphere (i.e., calcarine sulcus, precuneus, inferior frontal gyrus, anterior insula and supramarginal gyrus); and (2) later cue-locked MEG signals at the right anterior and posterior insula (200-300 ms) and the left temporo-parietal junction (300-500 ms). In all cases larger MEG signal intensity was observed in switch relative to repeat cueing conditions. Finally, behavioral restart costs and test scores of working memory capacity (forward digit span) correlated with cue-locked MEG activations at key nodes of the frontoparietal network. Together, our findings suggest that proactive cognitive control of task rule updating can be fast and transiently implemented within less than a second and in anticipation of target detection.
Visual landmarks facilitate rodent spatial navigation in virtual reality environments
Youngstrom, Isaac A.; Strowbridge, Ben W.
2012-01-01
Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain areas. Virtual reality offers a unique approach to ask whether visual landmark cues alone are sufficient to improve performance in a spatial task. We found that mice could learn to navigate between two water reward locations along a virtual bidirectional linear track using a spherical treadmill. Mice exposed to a virtual environment with vivid visual cues rendered on a single monitor increased their performance over a 3-d training regimen. Training significantly increased the percentage of time avatars controlled by the mice spent near reward locations in probe trials without water rewards. Neither improvement during training or spatial learning for reward locations occurred with mice operating a virtual environment without vivid landmarks or with mice deprived of all visual feedback. Mice operating the vivid environment developed stereotyped avatar turning behaviors when alternating between reward zones that were positively correlated with their performance on the probe trial. These results suggest that mice are able to learn to navigate to specific locations using only visual cues presented within a virtual environment rendered on a single computer monitor. PMID:22345484
Making the invisible visible: verbal but not visual cues enhance visual detection.
Lupyan, Gary; Spivey, Michael J
2010-07-07
Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual, cues. Participants completed an object detection task in which they made an object-presence or -absence decision to briefly presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d'). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception.
Orienting attention within visual short-term memory: development and mechanisms.
Shimi, Andria; Nobre, Anna C; Astle, Duncan; Scerif, Gaia
2014-01-01
How does developing attentional control operate within visual short-term memory (VSTM)? Seven-year-olds, 11-year-olds, and adults (total n = 205) were asked to report whether probe items were part of preceding visual arrays. In Experiment 1, central or peripheral cues oriented attention to the location of to-be-probed items either prior to encoding or during maintenance. Cues improved memory regardless of their position, but younger children benefited less from cues presented during maintenance, and these benefits related to VSTM span over and above basic memory in uncued trials. In Experiment 2, cues of low validity eliminated benefits, suggesting that even the youngest children use cues voluntarily, rather than automatically. These findings elucidate the close coupling between developing visuospatial attentional control and VSTM. © 2013 The Authors. Child Development © 2013 Society for Research in Child Development, Inc.
Spisak, Brian R; Dekker, Peter H; Krüger, Max; van Vugt, Mark
2012-01-01
This paper examines the impact of facial cues on leadership emergence. Using evolutionary social psychology, we expand upon implicit and contingent theories of leadership and propose that different types of intergroup relations elicit different implicit cognitive leadership prototypes. It is argued that a biologically based hormonal connection between behavior and corresponding facial characteristics interacts with evolutionarily consistent social dynamics to influence leadership emergence. We predict that masculine-looking leaders are selected during intergroup conflict (war) and feminine-looking leaders during intergroup cooperation (peace). Across two experiments we show that a general categorization of leader versus nonleader is an initial implicit requirement for emergence, and at a context-specific level facial cues of masculinity and femininity contingently affect war versus peace leadership emergence in the predicted direction. In addition, we replicate our findings in Experiment 1 across culture using Western and East Asian samples. In Experiment 2, we also show that masculine-feminine facial cues are better predictors of leadership than male-female cues. Collectively, our results indicate a multi-level classification of context-specific leadership based on visual cues embedded in the human face and challenge traditional distinctions of male and female leadership. PMID:22276190
Dynamic lens and monovision 3D displays to improve viewer comfort.
Johnson, Paul V; Parnell, Jared Aq; Kim, Joohwan; Saunter, Christopher D; Love, Gordon D; Banks, Martin S
2016-05-30
Stereoscopic 3D (S3D) displays provide an additional sense of depth compared to non-stereoscopic displays by sending slightly different images to the two eyes. But conventional S3D displays do not reproduce all natural depth cues. In particular, focus cues are incorrect, causing mismatches between accommodation and vergence: The eyes must accommodate to the display screen to create sharp retinal images even when binocular disparity drives the eyes to converge to other distances. This mismatch causes visual discomfort and reduces visual performance. We propose and assess two new techniques that are designed to reduce the vergence-accommodation conflict and thereby decrease discomfort and increase visual performance. These techniques are much simpler to implement than previous conflict-reducing techniques. The first proposed technique uses variable-focus lenses between the display and the viewer's eyes. The power of the lenses is yoked to the expected vergence distance, thereby reducing the mismatch between vergence and accommodation. The second proposed technique uses a fixed lens in front of one eye and relies on the binocularly fused percept being determined by one eye and then the other, depending on simulated distance. We conducted performance tests and discomfort assessments with both techniques and compared the results to those of a conventional S3D display. The first proposed technique, but not the second, yielded clear improvements in performance and reductions in discomfort. This dynamic-lens technique therefore offers an easily implemented technique for reducing the vergence-accommodation conflict and thereby improving viewer experience.
Willander, Johan; Sikström, Sverker; Karlsson, Kristina
2015-01-01
Previous studies on autobiographical memory have focused on unimodal retrieval cues (i.e., cues pertaining to one modality). However, from an ecological perspective multimodal cues (i.e., cues pertaining to several modalities) are highly important to investigate. In the present study we investigated age distributions and experiential ratings of autobiographical memories retrieved with unimodal and multimodal cues. Sixty-two participants were randomized to one of four cue-conditions: visual, olfactory, auditory, or multimodal. The results showed that the peak of the distributions depends on the modality of the retrieval cue. The results indicated that multimodal retrieval seemed to be driven by visual and auditory information to a larger extent and to a lesser extent by olfactory information. Finally, no differences were observed in the number of retrieved memories or experiential ratings across the four cue-conditions.
Hébert, Marie; Bulla, Jan; Vivien, Denis; Agin, Véronique
2017-01-01
Animals use distal and proximal visual cues to accurately navigate in their environment, with the possibility of the occurrence of associative mechanisms such as cue competition as previously reported in honeybees, rats, birds and humans. In this pilot study, we investigated one of the most common forms of cue competition, namely the overshadowing effect, between visual landmarks during spatial learning in mice. To this end, C57BL/6J × Sv129 mice were given a two-trial place recognition task in a T-maze, based on a novelty free-choice exploration paradigm previously developed to study spatial memory in rodents. As this procedure implies the use of different aspects of the environment to navigate (i.e., mice can perceive the cues from each arm of the maze), we manipulated the distal and proximal visual landmarks during both the acquisition and retrieval phases. Our prospective findings provide a first set of clues in favor of the occurrence of an overshadowing between visual cues during a spatial learning task in mice when both types of cues are of the same modality but at varying distances from the goal. In addition, the observed overshadowing seems to be non-reciprocal, as distal visual cues tend to overshadow the proximal ones when competition occurs, but not vice versa. The results of the present study offer a first insight about the occurrence of associative mechanisms during spatial learning in mice, and may open the way to promising new investigations in this area of research. Furthermore, the methodology used in this study brings a new, useful and easy-to-use tool for the investigation of perceptive, cognitive and/or attentional deficits in rodents. PMID:28634446
Rouinfar, Amy; Agra, Elise; Larson, Adam M.; Rebello, N. Sanjay; Loschky, Lester C.
2014-01-01
This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants’ attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants’ verbal responses were used to determine their accuracy. This study produced two major findings. First, short duration visual cues which draw attention to solution-relevant information and aid in the organizing and integrating of it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers’ attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions. PMID:25324804
Crajé, Céline; Santello, Marco; Gordon, Andrew M
2013-01-01
Anticipatory force planning during grasping is based on visual cues about the object's physical properties and sensorimotor memories of previous actions with grasped objects. Vision can be used to estimate object mass based on the object size to identify and recall sensorimotor memories of previously manipulated objects. It is not known whether subjects can use density cues to identify the object's center of mass (CM) and create compensatory moments in an anticipatory fashion during initial object lifts to prevent tilt. We asked subjects (n = 8) to estimate CM location of visually symmetric objects of uniform densities (plastic or brass, symmetric CM) and non-uniform densities (mixture of plastic and brass, asymmetric CM). We then asked whether subjects can use density cues to scale fingertip forces when lifting the visually symmetric objects of uniform and non-uniform densities. Subjects were able to accurately estimate an object's center of mass based on visual density cues. When the mass distribution was uniform, subjects could scale their fingertip forces in an anticipatory fashion based on the estimation. However, despite their ability to explicitly estimate CM location when object density was non-uniform, subjects were unable to scale their fingertip forces to create a compensatory moment and prevent tilt on initial lifts. Hefting object parts in the hand before the experiment did not affect this ability. This suggests a dichotomy between the ability to accurately identify the object's CM location for objects with non-uniform density cues and the ability to utilize this information to correctly scale their fingertip forces. These results are discussed in the context of possible neural mechanisms underlying sensorimotor integration linking visual cues and anticipatory control of grasping.
2006-08-01
[Garbled report fragment; recoverable details: the study used the National Aeronautics and Space Administration (NASA) Task Load Index (TLX), with SITREP, demographic, and post-test questionnaires as appendices, and reported figures of mean physical and temporal demand ratings by cue condition.]
Enhancing visual search abilities of people with intellectual disabilities.
Li-Tsang, Cecilia W P; Wong, Jackson K K
2009-01-01
This study aimed to evaluate the effects of cueing in a visual search paradigm for people with and without intellectual disabilities (ID). A total of 36 subjects (18 persons with ID and 18 persons with normal intelligence) were recruited using a convenience sampling method. A series of experiments were conducted to compare guided cue strategies using either motion contrast or an additional cue added to a basic search task. Repeated measures ANOVA and post hoc multiple comparison tests were used to compare each cue strategy. Results showed that the use of guided strategies was able to capture focal attention in an autonomic manner in the ID group (Pillai's Trace=5.99, p<0.0001). Both guided cue and guided motion search tasks demonstrated functionally similar effects that confirmed the non-specific character of salience. These findings suggested that the visual search efficiency of people with ID was greatly improved if the target was made salient using a cueing effect when the complexity of the display increased (i.e. set size increased). This study could have an important implication for the design of the visual searching format of any computerized programs developed for people with ID in learning new tasks.
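Visual search efficiency of the kind discussed here is conventionally summarized as the slope of response time against set size (ms per item); a near-flat slope indicates that the salient cue captures attention without an item-by-item search. A minimal sketch with hypothetical mean RTs (not the study's data):

```python
def search_slope(set_sizes, mean_rts):
    """Least-squares slope of mean RT (ms) against display set size:
    the time cost in milliseconds of each additional display item."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(mean_rts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

# Hypothetical mean RTs at set sizes 4, 8, and 16:
basic = search_slope([4, 8, 16], [600, 700, 900])   # steep slope: effortful search
guided = search_slope([4, 8, 16], [610, 620, 640])  # flat slope: cue guides attention
```

On these made-up numbers the basic task costs about 25 ms per extra item while the guided task costs about 2.5 ms, the signature of the improved efficiency described above.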
Flight simulator with spaced visuals
NASA Technical Reports Server (NTRS)
Gilson, Richard D. (Inventor); Thurston, Marlin O. (Inventor); Olson, Karl W. (Inventor); Ventola, Ronald W. (Inventor)
1980-01-01
A flight simulator arrangement wherein a conventional, movable base flight trainer is combined with a visual cue display surface spaced a predetermined distance from an eye position within the trainer. Thus, three degrees of motive freedom (roll, pitch and crab) are provided for a visual, proprioceptive, and vestibular cue system by the trainer while the remaining geometric visual cue image alterations are developed by a video system. A geometric approach to computing the runway image eliminates the need to electronically compute trigonometric functions, while utilization of a line generator and designated vanishing point at the video system raster permits facile development of the images of the longitudinal edges of the runway.
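The trig-free geometry alluded to can be illustrated with a pinhole (similar-triangles) projection: screen coordinates are pure ratios of world coordinates to depth, so the runway's longitudinal edges converge toward a vanishing point as depth grows, with no sin/cos/tan required. The dimensions below are illustrative, not the patent's actual geometry:

```python
def project(x, y, z, f=1.0):
    """Pinhole projection onto a screen at focal distance f.
    Only ratios of world coordinates to depth z; no trigonometry."""
    return (f * x / z, f * y / z)

# Runway 40 units wide; eye 10 units above the ground plane (so ground y = -10).
# As depth z grows, both edges approach the vanishing point at (0, 0).
near_left = project(-20, -10, 50)    # (-0.4, -0.2)
far_left = project(-20, -10, 5000)   # (-0.004, -0.002)
```

Drawing each runway edge is then just a straight line from its near screen point toward the vanishing point, which is what a line generator with a designated vanishing point provides in hardware.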
Working memory load and the retro-cue effect: A diffusion model account.
Shepherdson, Peter; Oberauer, Klaus; Souza, Alessandra S
2018-02-01
Retro-cues (i.e., cues presented between the offset of a memory array and the onset of a probe) have consistently been found to enhance performance in working memory tasks, sometimes ameliorating the deleterious effects of increased memory load. However, the mechanism by which retro-cues exert their influence remains a matter of debate. To inform this debate, we applied a hierarchical diffusion model to data from 4 change detection experiments using single item, location-specific probes (i.e., a local recognition task) with either visual or verbal memory stimuli. Results showed that retro-cues enhanced the quality of information entering the decision process-especially for visual stimuli-and decreased the time spent on nondecisional processes. Further, cues interacted with memory load primarily on nondecision time, decreasing or abolishing load effects. To explain these findings, we propose an account whereby retro-cues act primarily to reduce the time taken to access the relevant representation in memory upon probe presentation, and in addition protect cued representations from visual interference. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
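The diffusion-model decomposition described above separates the quality of evidence entering the decision (drift rate) from time spent on nondecisional processes. A toy random-walk simulation (parameter values are illustrative, not fitted to the reported data) shows how the two parameters shape responses:

```python
import random

def diffusion_trial(drift, boundary=1.0, nondecision=0.3,
                    dt=0.002, noise=1.0, rng=random):
    """One drift-diffusion trial: evidence accumulates from 0 until it
    crosses +boundary (correct) or -boundary (error); the reported RT
    adds a fixed nondecision component for encoding and motor time."""
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return nondecision + t, x > 0  # (RT in seconds, correct?)

rng = random.Random(1)
# Hypothetical retro-cue benefit: higher drift and shorter nondecision time.
cued = [diffusion_trial(2.0, nondecision=0.25, rng=rng) for _ in range(300)]
uncued = [diffusion_trial(0.8, nondecision=0.35, rng=rng) for _ in range(300)]
cued_rt = sum(rt for rt, _ in cued) / len(cued)
uncued_rt = sum(rt for rt, _ in uncued) / len(uncued)
```

In this sketch the cued condition is both faster and more accurate, mirroring how the model attributes retro-cue benefits jointly to evidence quality and nondecision time.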
Cue-recruitment for extrinsic signals after training with low information stimuli.
Jain, Anshul; Fuller, Stuart; Backus, Benjamin T
2014-01-01
Cue-recruitment occurs when a previously ineffective signal comes to affect the perceptual appearance of a target object, in a manner similar to the trusted cues with which the signal was put into correlation during training. Jain, Fuller and Backus reported that extrinsic signals, those not carried by the target object itself, were not recruited even after extensive training. However, recent studies have shown that training using weakened trusted cues can facilitate recruitment of intrinsic signals. The current study was designed to examine whether extrinsic signals can be recruited by putting them in correlation with weakened trusted cues. Specifically, we tested whether an extrinsic visual signal, the rotary motion direction of an annulus of random dots, and an extrinsic auditory signal, direction of an auditory pitch glide, can be recruited as cues for the rotation direction of a Necker cube. We found learning, albeit weak, for visual but not for auditory signals. These results extend the generality of the cue-recruitment phenomenon to an extrinsic signal and provide further evidence that the visual system learns to use new signals most quickly when other, long-trusted cues are unavailable or unreliable.
Miller, Christi W; Stewart, Erin K; Wu, Yu-Hsiang; Bishop, Christopher; Bentler, Ruth A; Tremblay, Kelly
2017-08-16
This study evaluated the relationship between working memory (WM) and speech recognition in noise with different noise types as well as in the presence of visual cues. Seventy-six adults with bilateral, mild to moderately severe sensorineural hearing loss (mean age: 69 years) participated. Using a cross-sectional design, 2 measures of WM were taken: a reading span measure, and Word Auditory Recognition and Recall Measure (Smith, Pichora-Fuller, & Alexander, 2016). Speech recognition was measured with the Multi-Modal Lexical Sentence Test for Adults (Kirk et al., 2012) in steady-state noise and 4-talker babble, with and without visual cues. Testing was under unaided conditions. A linear mixed model revealed visual cues and pure-tone average as the only significant predictors of Multi-Modal Lexical Sentence Test outcomes. Neither WM measure nor noise type showed a significant effect. The contribution of WM in explaining unaided speech recognition in noise was negligible and not influenced by noise type or visual cues. We anticipate that with audibility partially restored by hearing aids, the effects of WM will increase. For clinical practice to be affected, more significant effect sizes are needed.
Signal enhancement, not active suppression, follows the contingent capture of visual attention.
Livingstone, Ashley C; Christie, Gregory J; Wright, Richard D; McDonald, John J
2017-02-01
Irrelevant visual cues capture attention when they possess a task-relevant feature. Electrophysiologically, this contingent capture of attention is evidenced by the N2pc component of the visual event-related potential (ERP) and an enlarged ERP positivity over the occipital hemisphere contralateral to the cued location. The N2pc reflects an early stage of attentional selection, but presently it is unclear what the contralateral ERP positivity reflects. One hypothesis is that it reflects the perceptual enhancement of the cued search-array item; another hypothesis is that it is time-locked to the preceding cue display and reflects active suppression of the cue itself. Here, we varied the time interval between a cue display and a subsequent target display to evaluate these competing hypotheses. The results demonstrated that the contralateral ERP positivity is tightly time-locked to the appearance of the search display rather than the cue display, thereby supporting the perceptual enhancement hypothesis and disconfirming the cue-suppression hypothesis. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Mayo, Johnathan; Baur, Kilian; Wittmann, Frieder; Riener, Robert; Wolf, Peter
2018-01-01
Background: Goal-directed reaching for real-world objects by humans is enabled through visual depth cues. In virtual environments, the number and quality of available visual depth cues is limited, which may affect reaching performance and quality of reaching movements. Methods: We assessed three-dimensional reaching movements in five experimental groups each with ten healthy volunteers. Three groups used a two-dimensional computer screen and two groups used a head-mounted display. The first screen group received the typically recreated visual depth cues, such as aerial and linear perspective, occlusion, shadows, and texture gradients. The second screen group received an abstract minimal rendering lacking those. The third screen group received the cues of the first screen group plus absolute depth cues enabled by the retinal image size of a known object, which was realized with visual renderings of the handheld device and a ghost handheld at the target location. The two head-mounted display groups received the same virtually recreated visual depth cues as the second or third screen group, respectively. Additionally, they could rely on stereopsis and motion parallax due to head movements. Results and conclusion: All groups using the screen performed significantly worse than both groups using the head-mounted display in terms of completion time normalized by the straight-line distance to the target. Both groups using the head-mounted display achieved the optimal minimum in number of speed peaks and in hand path ratio, indicating that our subjects performed natural movements when using a head-mounted display. Virtually recreated visual depth cues had a minor impact on reaching performance. Only the screen group with rendered handhelds could outperform the other screen groups. Thus, if reaching performance in virtual environments is in the main scope of a study, we suggest applying a head-mounted display. Otherwise, when two-dimensional screens are used, achievable performance is likely limited by the reduced depth perception and not just by subjects’ motor skills. PMID:29293512
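The movement-quality measures named in this abstract (hand path ratio, number of speed peaks) have straightforward definitions; a minimal sketch on made-up trajectories:

```python
import math

def path_ratio(points):
    """Hand path ratio: traversed path length divided by the straight-line
    start-to-target distance (1.0 = a perfectly straight reach)."""
    path = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    return path / math.dist(points[0], points[-1])

def speed_peaks(speeds):
    """Count local maxima in the speed profile: a single smooth reach has
    one bell-shaped peak; corrective submovements add extra peaks."""
    return sum(1 for i in range(1, len(speeds) - 1)
               if speeds[i - 1] < speeds[i] >= speeds[i + 1])

# Made-up 3D trajectories (straight vs. curved) and speed profiles:
straight = path_ratio([(0, 0, 0), (1, 0, 0), (2, 0, 0)])  # 1.0
curved = path_ratio([(0, 0, 0), (1, 1, 0), (2, 0, 0)])    # > 1.0
```

The "optimal minimum" reported above corresponds to a path ratio near 1.0 and a single speed peak per reach.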
Cross-modal prediction changes the timing of conscious access during the motion-induced blindness.
Chang, Acer Y C; Kanai, Ryota; Seth, Anil K
2015-01-01
Despite accumulating evidence that perceptual predictions influence perceptual content, the relations between these predictions and conscious contents remain unclear, especially for cross-modal predictions. We examined whether predictions of visual events by auditory cues can facilitate conscious access to the visual stimuli. We trained participants to learn associations between auditory cues and colour changes. We then asked whether congruency between auditory cues and target colours would speed access to consciousness. We did this by rendering a visual target subjectively invisible using motion-induced blindness and then gradually changing its colour while presenting congruent or incongruent auditory cues. Results showed that the visual target gained access to consciousness faster in congruent than in incongruent trials; control experiments excluded potentially confounding effects of attention and motor response. The expectation effect was gradually established over blocks suggesting a role for extensive training. Overall, our findings show that predictions learned through cross-modal training can facilitate conscious access to visual stimuli. Copyright © 2014 Elsevier Inc. All rights reserved.
Sunkara, Adhira
2015-01-01
As we navigate through the world, eye and head movements add rotational velocity patterns to the retinal image. When such rotations accompany observer translation, the rotational velocity patterns must be discounted to accurately perceive heading. The conventional view holds that this computation requires efference copies of self-generated eye/head movements. Here we demonstrate that the brain implements an alternative solution in which retinal velocity patterns are themselves used to dissociate translations from rotations. These results reveal a novel role for visual cues in achieving a rotation-invariant representation of heading in the macaque ventral intraparietal area. Specifically, we show that the visual system utilizes both local motion parallax cues and global perspective distortions to estimate heading in the presence of rotations. These findings further suggest that the brain is capable of performing complex computations to infer eye movements and discount their sensory consequences based solely on visual cues. DOI: http://dx.doi.org/10.7554/eLife.04693.001 PMID:25693417
Briand, K A; Klein, R M
1987-05-01
In the present study we investigated whether the visually allocated "beam" studied by Posner and others is the same visual attentional resource that performs the role of feature integration in Treisman's model. Subjects were cued to attend to a certain spatial location by a visual cue, and performance at expected and unexpected stimulus locations was compared. Subjects searched for a target letter (R) with distractor letters that either could give rise to illusory conjunctions (PQ) or could not (PB). Results from three separate experiments showed that orienting attention in response to central cues (endogenous orienting) showed similar effects for both conjunction and feature search. However, when attention was oriented with peripheral visual cues (exogenous orienting), conjunction search showed larger effects of attention than did feature search. It is suggested that the attentional systems that are oriented in response to central and peripheral cues may not be the same and that only the latter performs a role in feature integration. Possibilities for future research are discussed.
Tang, Xiaoyu; Li, Chunlin; Li, Qi; Gao, Yulin; Yang, Weiping; Yang, Jingjing; Ishikawa, Soushirou; Wu, Jinglong
2013-10-11
Utilizing the high temporal resolution of event-related potentials (ERPs), we examined how visual spatial or temporal cues modulated the auditory stimulus processing. The visual spatial cue (VSC) induces orienting of attention to spatial locations; the visual temporal cue (VTC) induces orienting of attention to temporal intervals. Participants were instructed to respond to auditory targets. Behavioral responses to auditory stimuli following VSC were faster and more accurate than those following VTC. VSC and VTC had the same effect on the auditory N1 (150-170 ms after stimulus onset). The mean amplitude of the auditory P1 (90-110 ms) in VSC condition was larger than that in VTC condition, and the mean amplitude of late positivity (300-420 ms) in VTC condition was larger than that in VSC condition. These findings suggest that modulation of auditory stimulus processing by visually induced spatial or temporal orienting of attention were different, but partially overlapping. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
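Component measures such as the P1 (90-110 ms) and late positivity (300-420 ms) reported above are typically quantified as mean amplitudes over a post-stimulus time window; a minimal sketch on a synthetic waveform (the 500 Hz sampling rate and values are hypothetical):

```python
def mean_amplitude(voltages, srate_hz, onset_idx, window_ms):
    """Mean amplitude over a window given in milliseconds relative to
    stimulus onset; onset_idx is the sample index of stimulus onset."""
    lo = onset_idx + round(window_ms[0] * srate_hz / 1000)
    hi = onset_idx + round(window_ms[1] * srate_hz / 1000)
    seg = voltages[lo:hi + 1]
    return sum(seg) / len(seg)

# Synthetic 500 Hz waveform: baseline 2.0 uV, with 5.0 uV from 90-110 ms.
srate = 500
wave = [2.0] * 300
for i in range(45, 56):  # 90 ms -> sample 45, 110 ms -> sample 55
    wave[i] = 5.0
p1 = mean_amplitude(wave, srate, 0, (90, 110))  # 5.0
```

Comparing such window means between conditions (here, VSC vs. VTC) is what yields the amplitude differences described in the abstract.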
Toward a New Theory for Selecting Instructional Visuals.
ERIC Educational Resources Information Center
Croft, Richard S.; Burton, John K.
This paper provides a rationale for the selection of illustrations and visual aids for the classroom. The theories that describe the processing of visuals are dual coding theory and cue summation theory. Concept attainment theory offers a basis for selecting which cues are relevant for any learning task which includes a component of identification…
Visual Landmarks Facilitate Rodent Spatial Navigation in Virtual Reality Environments
ERIC Educational Resources Information Center
Youngstrom, Isaac A.; Strowbridge, Ben W.
2012-01-01
Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain…
ERIC Educational Resources Information Center
Tillmanns, Tanja; Holland, Charlotte; Filho, Alfredo Salomão
2017-01-01
This paper presents the design criteria for Visual Cues--visual stimuli that are used in combination with other pedagogical processes and tools in Disruptive Learning interventions in sustainability education--to disrupt learners' existing frames of mind and help re-orient learners' mind-sets towards sustainability. The theory of Disruptive…
DOT National Transportation Integrated Search
1978-03-01
At night, reduced visual cues may promote illusions and a dangerous tendency for pilots to fly low during approaches to landing. Relative motion parallax (a difference in rate of apparent movement of objects in the visual field), a cue that can contr...
Listeners' expectation of room acoustical parameters based on visual cues
NASA Astrophysics Data System (ADS)
Valente, Daniel L.
Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audio-visual study, in which participants are instructed to make spatial congruency and quantity judgments in dynamic cross-modal environments. The results of these psychophysical tests suggest the importance of consilient audio-visual presentation to the legibility of an auditory scene. Several studies have looked into audio-visual interaction in room perception in recent years, but these studies rely on static images, speech signals, or photographs alone to represent the visual scene. Building on these studies, the aim is to propose a testing method that uses monochromatic compositing (blue-screen technique) to position a studio recording of a musical performance in a number of virtual acoustical environments and ask subjects to assess these environments. In the first experiment of the study, video footage was taken from five rooms varying in physical size from a small studio to a small performance hall. Participants were asked to perceptually align two distinct acoustical parameters (early-to-late reverberant energy ratio and reverberation time) of two solo musical performances in five contrasting visual environments according to their expectations of how the room should sound given its visual appearance. In the second experiment of the study, video footage shot from four different listening positions within a general-purpose space was coupled with sounds derived from measured binaural impulse responses (IRs). The relationship between the presented image, sound, and virtual receiver position was examined. It was found that varying the visual cues altered how the acoustic environment was perceived. This included the visual attributes of the space in which the performance was located as well as the visual attributes of the performer.
The visual makeup of the performer took one of four forms: (1) an actual video of the performance; (2) a surrogate image of the performance, for example the image of a loudspeaker reproducing it; (3) no visual image of the performance (an empty room); or (4) a multi-source visual stimulus (the actual video of the performance coupled with images of two loudspeakers positioned to the left and right of the performer). For this experiment, perceived auditory events were measured in terms of two subjective spatial metrics: Listener Envelopment (LEV) and Apparent Source Width (ASW). These metrics were hypothesized to depend on the visual imagery of the presented performance. Data were also collected by having participants match direct and reverberant sound levels for the presented audio-visual scenes. In the final experiment, participants judged spatial expectations of an ensemble of musicians presented in the five physical spaces from Experiment 1. Supporting data were accumulated in two stages. First, participants were given an audio-visual matching test, in which they were instructed to align the auditory width of a performing ensemble to a varying set of audio and visual cues. In the second stage, a conjoint analysis design paradigm was used to extrapolate the relative magnitude of the explored audio-visual factors in affecting three assessed response criteria: congruency (the perceived match between the auditory and visual cues in the assessed performance), ASW, and LEV. Results show that both auditory and visual factors affect the collected responses, and that the two sensory modalities interact in distinct ways. This study reveals participant resiliency in the presence of forced auditory-visual mismatch: participants adjusted the acoustic component of the cross-modal environment in a statistically similar way despite randomized starting values for the monitored parameters.
Subjective results of the experiments are presented along with objective measurements for verification.
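As an illustrative aside, the early-to-late reverberant energy ratio that participants adjusted can be computed directly from an impulse response. The sketch below is a minimal Python example using a synthetic exponentially decaying impulse response; the function names and parameter values are our own illustration, not taken from the study. It computes the clarity index C80, the ratio of energy arriving before and after 80 ms, in dB:

```python
import math

def clarity_index(ir, fs, split_ms=80.0):
    """Early-to-late energy ratio (C80 for split_ms=80) in dB.

    ir: impulse response samples, assumed to start at the direct sound.
    fs: sample rate in Hz.
    """
    split = int(fs * split_ms / 1000.0)
    early = sum(x * x for x in ir[:split])
    late = sum(x * x for x in ir[split:])
    return 10.0 * math.log10(early / late)

def synthetic_ir(rt60, fs, length_s=2.0):
    """Idealized exponential decay whose energy falls 60 dB at t = rt60."""
    k = 3.0 * math.log(10.0) / rt60  # energy ~ exp(-2*k*t)
    return [math.exp(-k * i / fs) for i in range(int(fs * length_s))]

ir = synthetic_ir(rt60=1.0, fs=8000)
c80 = clarity_index(ir, fs=8000)
```

A longer reverberation time pushes more of the energy past the 80 ms boundary, so C80 falls; adjusting the early-to-late ratio and the reverberation time, as participants did, therefore trades off along this single energy-balance dimension.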
NASA Technical Reports Server (NTRS)
Kirkpatrick, M.; Brye, R. G.
1974-01-01
A motion cue investigation program is reported that deals with human factors aspects of high-fidelity vehicle simulation. General data on non-visual motion thresholds and specific threshold values are established for use as washout parameters in vehicle simulation. A general-purpose simulator is used to test the contradictory cue hypothesis that acceleration sensitivity is reduced during a vehicle control task involving visual feedback. The simulator provides varying acceleration levels. The method of forced choice is based on the theory of signal detectability.
Anticipatory neural dynamics of spatial-temporal orienting of attention in younger and older adults.
Heideman, Simone G; Rohenkohl, Gustavo; Chauvin, Joshua J; Palmer, Clare E; van Ede, Freek; Nobre, Anna C
2018-05-04
Spatial and temporal expectations act synergistically to facilitate visual perception. In the current study, we sought to investigate the anticipatory oscillatory markers of combined spatial-temporal orienting and to test whether these decline with ageing. We examined anticipatory neural dynamics associated with joint spatial-temporal orienting of attention using magnetoencephalography (MEG) in both younger and older adults. Participants performed a cued covert spatial-temporal orienting task requiring the discrimination of a visual target. Cues indicated both where and when targets would appear. In both age groups, valid spatial-temporal cues significantly enhanced perceptual sensitivity and reduced reaction times. In the MEG data, the main effect of spatial orienting was the lateralised anticipatory modulation of posterior alpha and beta oscillations. In contrast to previous reports, this modulation was not attenuated in older adults; instead it was even more pronounced. The main effect of temporal orienting was a bilateral suppression of posterior alpha and beta oscillations. This effect was restricted to younger adults. Our results also revealed a striking interaction between anticipatory spatial and temporal orienting in the gamma-band (60-75 Hz). When considering both age groups separately, this effect was only clearly evident and only survived statistical evaluation in the older adults. Together, these observations provide several new insights into the neural dynamics supporting separate as well as combined effects of spatial and temporal orienting of attention, and suggest that different neural dynamics associated with attentional orienting appear differentially sensitive to ageing. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Stimulus homogeneity enhances implicit learning: evidence from contextual cueing.
Feldmann-Wüstefeld, Tobias; Schubö, Anna
2014-04-01
Visual search for a target object is faster if the target is embedded in a repeatedly presented invariant configuration of distractors ('contextual cueing'). It has also been shown that the homogeneity of a context affects the efficiency of visual search: targets receive prioritized processing when presented in a homogeneous context compared to a heterogeneous context, presumably due to grouping processes at early stages of visual processing. The present study investigated in three experiments whether context homogeneity also affects contextual cueing. In Experiment 1, context homogeneity varied on three levels of the task-relevant dimension (orientation), and contextual cueing was most pronounced for context configurations with high orientation homogeneity. When context homogeneity varied on three levels of the task-irrelevant dimension (color) and orientation homogeneity was fixed, no modulation of contextual cueing was observed: high orientation homogeneity led to large contextual cueing effects (Experiment 2) and low orientation homogeneity led to small contextual cueing effects (Experiment 3), irrespective of color homogeneity. Enhanced contextual cueing for homogeneous context configurations suggests that grouping processes affect not only visual search but also implicit learning. We conclude that memory representations of context configurations are more easily acquired when context configurations can be processed as larger, grouped perceptual units. However, this form of implicit perceptual learning is only improved by stimulus homogeneity when stimulus homogeneity facilitates grouping processes on a dimension that is currently relevant in the task. Copyright © 2014 Elsevier B.V. All rights reserved.
Rapid formation of spatiotopic representations as revealed by inhibition of return.
Pertzov, Yoni; Zohary, Ehud; Avidan, Galia
2010-06-30
Inhibition of return (IOR), a performance decrement for stimuli appearing at recently cued locations, occurs when the target and cue share the same screen position. This is in contrast to cue-based attention facilitation effects that were recently suggested to be mapped in a retinotopic reference frame, the prevailing representation throughout early visual processing stages. Here, we investigate the dynamics of IOR in both reference frames, using a modified cued-location saccadic reaction time task with an intervening saccade between cue and target presentation. Thus, on different trials, the target was present either at the same retinotopic location as the cue or at the same screen position (i.e., spatiotopic location). IOR was primarily found for targets appearing at the same spatiotopic position as the initial cue, when the cue and target were presented in the same hemifield. This suggests that there is restricted information transfer of cue position across the two hemispheres. Moreover, the effect was maximal when the target was presented 10 ms after the intervening saccade ended and was attenuated at longer delays. In our case, therefore, the representation of previously attended locations (as revealed by IOR) is not remapped slowly after the execution of a saccade. Rather, either a retinotopic representation is remapped rapidly, close to the end of the saccade (using a prospective motor command), or the positions of the cue and target are encoded in a spatiotopic reference frame, regardless of eye position. Spatial attention can therefore be allocated to target positions defined in extraretinal coordinates.
Neurocognitive mechanisms of gaze-expression interactions in face processing and social attention
Graham, Reiko; LaBar, Kevin S.
2012-01-01
The face conveys a rich source of non-verbal information used during social communication. While research has revealed how specific facial channels such as emotional expression are processed, little is known about the prioritization and integration of multiple cues in the face during dyadic exchanges. Classic models of face perception have emphasized the segregation of dynamic versus static facial features along independent information processing pathways. Here we review recent behavioral and neuroscientific evidence suggesting that within the dynamic stream, concurrent changes in eye gaze and emotional expression can yield early independent effects on face judgments and covert shifts of visuospatial attention. These effects are partially segregated within initial visual afferent processing volleys, but are subsequently integrated in limbic regions such as the amygdala or via reentrant visual processing volleys. This spatiotemporal pattern may help to resolve otherwise perplexing discrepancies across behavioral studies of emotional influences on gaze-directed attentional cueing. Theoretical explanations of gaze-expression interactions are discussed, with special consideration of speed-of-processing (discriminability) and contextual (ambiguity) accounts. Future research in this area promises to reveal the mental chronometry of face processing and interpersonal attention, with implications for understanding how social referencing develops in infancy and is impaired in autism and other disorders of social cognition. PMID:22285906
Brooks, Cassandra J.; Chan, Yu Man; Anderson, Andrew J.; McKendrick, Allison M.
2018-01-01
Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision. This review covers both direct judgments about temporal information (the sound-induced flash illusion, temporal order, perceived synchrony, and temporal rate discrimination) and judgments regarding stimuli containing temporal information (the audiovisual bounce effect and speech perception). Although an age-related increase in integration has been demonstrated on a variety of tasks, research specifically investigating the ability of older adults to integrate temporal auditory and visual cues has produced disparate results. In this short review, we explore what factors could underlie these divergent findings. We conclude that both task-specific differences and age-related sensory loss play a role in the reported disparity in age-related effects on the integration of auditory and visual temporal information. PMID:29867415
Blanchfield, Anthony; Hardy, James; Marcora, Samuele
2014-01-01
The psychobiological model of endurance performance proposes that endurance performance is determined by a decision-making process based on perception of effort and potential motivation. Recent research has reported that effort-based decision-making during cognitive tasks can be altered by non-conscious visual cues relating to affect and action. The effects of these non-conscious visual cues on effort and performance during physical tasks are, however, unknown. We report two experiments investigating the effects of subliminal priming with visual cues related to affect and action on perception of effort and endurance performance. In Experiment 1, thirteen individuals were subliminally primed with happy or sad faces as they cycled to exhaustion in a counterbalanced and randomized crossover design. A paired t-test (happy vs. sad faces) revealed that individuals cycled significantly longer (178 s, p = 0.04) when subliminally primed with happy faces. A 2 × 5 (condition × iso-time) ANOVA also revealed a significant main effect of condition on rating of perceived exertion (RPE) during the time-to-exhaustion (TTE) test, with lower RPE when subjects were subliminally primed with happy faces (p = 0.04). In Experiment 2, a single-subject randomization-tests design found that subliminal priming with action words facilitated a significantly longer TTE (399 s, p = 0.04) in comparison to inaction words. As in Experiment 1, this greater TTE was accompanied by a significantly lower RPE (p = 0.03). These experiments are the first to show that subliminal visual cues relating to affect and action can alter perception of effort and endurance performance. Non-conscious visual cues may therefore influence the effort-based decision-making process that is proposed to determine endurance performance. Accordingly, the findings raise notable implications for individuals who may encounter such visual cues during endurance competitions, training, or health-related exercise. PMID:25566014
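The paired t-test used in Experiment 1 reduces to a simple computation on within-subject differences. The sketch below is a minimal Python illustration with hypothetical time-to-exhaustion values, not the study's measurements:

```python
import math
from statistics import mean, stdev

def paired_t(x, y):
    """Paired t statistic for matched samples x and y.

    Compare the result against a t distribution with len(x) - 1 degrees
    of freedom to obtain a p value.
    """
    diffs = [a - b for a, b in zip(x, y)]
    se = stdev(diffs) / math.sqrt(len(diffs))  # standard error of the mean difference
    return mean(diffs) / se

# Hypothetical time-to-exhaustion data (seconds) for the same riders
# under the two priming conditions.
happy = [1400, 1250, 1610, 1180, 1520, 1330, 1450]
sad   = [1220, 1190, 1400, 1100, 1310, 1250, 1280]
t = paired_t(happy, sad)
```

Because each rider serves as their own control, the test operates on the per-rider differences, which is what gives the crossover design its sensitivity.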
'You see?' Teaching and learning how to interpret visual cues during surgery.
Cope, Alexandra C; Bezemer, Jeff; Kneebone, Roger; Lingard, Lorelei
2015-11-01
The ability to interpret visual cues is important in many medical specialties, including surgery, in which poor outcomes are largely attributable to errors of perception rather than poor motor skills. However, we know little about how trainee surgeons learn to make judgements in the visual domain. We explored how trainees learn visual cue interpretation in the operating room. A multiple case study design was used. Participants were postgraduate surgical trainees and their trainers. Data included observer field notes, and integrated video- and audio-recordings from 12 cases representing more than 11 hours of observation. A constant comparative methodology was used to identify dominant themes. Visual cue interpretation was a recurrent feature of trainer-trainee interactions and was achieved largely through the pedagogic mechanism of co-construction. Co-construction was a dialogic sequence between trainer and trainee in which they explored what they were looking at together to identify and name structures or pathology. Co-construction took two forms: 'guided co-construction', in which the trainer steered the trainee to see what the trainer was seeing, and 'authentic co-construction', in which neither trainer nor trainee appeared certain of what they were seeing and pieced together the information collaboratively. Whether the co-construction activity was guided or authentic appeared to be influenced by case difficulty and trainee seniority. Co-construction was shown to occur verbally, through discussion, and also through non-verbal exchanges in which gestures made with laparoscopic instruments contributed to the co-construction discourse. In the training setting, learning visual cue interpretation occurs in part through co-construction. Co-construction is a pedagogic phenomenon that is well recognised in the context of learning to interpret verbal information. 
In articulating the features of co-construction in the visual domain, this work enables the development of explicit pedagogic strategies for maximising trainees' learning of visual cue interpretation. This is relevant to multiple medical specialties in which judgements must be based on visual information. © 2015 John Wiley & Sons Ltd.
Graci, Valentina
2011-10-01
It has been previously suggested that coupled upper and lower limb movements require visuomotor coordination to be achieved. Previous studies have not investigated the role that visual cues may play in the coordination of locomotion and prehension. The aim of this study was to investigate whether lower peripheral visual cues provide online control of the coordination of locomotion and prehension, as they have been shown to do during adaptive gait and level walking. Twelve subjects reached for a semi-empty or a full glass with their dominant or non-dominant hand at gait termination. Two binocular visual conditions were investigated: normal vision and lower visual occlusion. Outcome measures were determined using 3D motion capture techniques. Results showed that although the subjects were able to successfully complete the task without spilling the water from the glass under lower visual occlusion, they increased the margin of safety between final foot placements and the glass. These findings suggest that lower visual cues are mainly used online to fine-tune the trajectory of the upper and lower limbs moving toward the target. Copyright © 2011 Elsevier B.V. All rights reserved.
Tosoni, Annalisa; Shulman, Gordon L; Pope, Anna L W; McAvoy, Mark P; Corbetta, Maurizio
2013-06-01
Success in a dynamically changing world requires both rapid shifts of attention to the location of important objects and the detection of changes in motivational contingencies that may alter future behavior. Here we addressed the relationship between these two processes by measuring the blood-oxygenation-level-dependent (BOLD) signal during a visual search task in which the location and the color of a salient cue respectively indicated where a rewarded target would appear and the monetary gain (large or small) associated with its detection. While cues that either shifted or maintained attention were presented every 4 to 8 sec, the reward magnitude indicated by the cue changed roughly every 30 sec, allowing us to distinguish a change in expected reward magnitude from a maintained state of expected reward magnitude. Posterior cingulate cortex was modulated by cues signaling an increase in expected reward magnitude, but not by cues for shifting versus maintaining spatial attention. Dorsal fronto-parietal regions in precuneus and frontal eye field (FEF) also showed increased BOLD activity for changes in expected reward magnitude from low to high, but in addition showed large independent modulations for shifting versus maintaining attention. In particular, the differential activation for shifting versus maintaining attention was not affected by expected reward magnitude. These results indicate that BOLD activations for shifts of attention and increases in expected reward magnitude are largely separate. Finally, visual cortex showed sustained spatially selective signals that were significantly enhanced when greater reward magnitude was expected, but this reward-related modulation was not observed in spatially selective regions of dorsal fronto-parietal cortex. Copyright © 2012 Elsevier Ltd. All rights reserved.
Comparing Auditory-Only and Audiovisual Word Learning for Children with Hearing Loss.
McDaniel, Jena; Camarata, Stephen; Yoder, Paul
2018-05-15
Although reducing visual input to emphasize auditory cues is a common practice in pediatric auditory (re)habilitation, the extant literature offers minimal empirical evidence for whether unisensory auditory-only (AO) or multisensory audiovisual (AV) input is more beneficial to children with hearing loss for developing spoken language skills. Using an adapted alternating treatments single case research design, we evaluated the effectiveness and efficiency of a receptive word learning intervention with and without access to visual speechreading cues. Four preschool children with prelingual hearing loss participated. Based on probes without visual cues, three participants demonstrated strong evidence for learning in the AO and AV conditions relative to a control (no-teaching) condition. No participants demonstrated a differential rate of learning between AO and AV conditions. Neither an inhibitory effect predicted by a unisensory theory nor a beneficial effect predicted by a multisensory theory for providing visual cues was identified. Clinical implications are discussed.
ERIC Educational Resources Information Center
Lewkowicz, David J.
2003-01-01
Three experiments examined 4- to 10-month-olds' perception of audio-visual (A-V) temporal synchrony cues in the presence or absence of rhythmic pattern cues. Results established that infants of all ages could discriminate between two different audio-visual rhythmic events. Only 10-month-olds detected a desynchronization of the auditory and visual…
Albert (Bud) Mayfield; Cavell Brownie
2013-01-01
The redbay ambrosia beetle (Xyleborus glabratus Eichhoff) is an invasive pest and vector of the pathogen that causes laurel wilt disease in lauraceous tree species in the eastern United States. This insect uses olfactory cues during host finding, but the use of visual cues by X. glabratus has not been previously investigated and may help explain diameter...
All I saw was the cake. Hunger effects on attentional capture by visual food cues.
Piech, Richard M; Pastorino, Michael T; Zald, David H
2010-06-01
While effects of hunger on motivation and food reward value are well established, far less is known about the effects of hunger on cognitive processes. Here, we deployed the emotional attentional blink paradigm to investigate the impact of visual food cues on attentional capture under conditions of hunger and satiety. Participants were asked to detect targets which appeared in a rapid visual stream after different types of task-irrelevant distractors. We observed that food stimuli acquired increased power to capture attention and prevent target detection when participants were hungry. This occurred despite monetary incentives to perform well. Our findings suggest an attentional mechanism through which hunger heightens perception of food cues. As an objective behavioral marker of attentional sensitivity to food cues, the emotional attentional blink paradigm may provide a useful technique for studying individual differences and state manipulations in sensitivity to food cues. Published by Elsevier Ltd.
Cues used by the black fly, Simulium annulus, for attraction to the common loon (Gavia immer).
Weinandt, Meggin L; Meyer, Michael; Strand, Mac; Lindsay, Alec R
2012-12-01
The parasitic relationship between a black fly, Simulium annulus, and the common loon (Gavia immer) has been considered one of the most exclusive relationships between any host species and a black fly species. To test the host specificity of this blood-feeding insect, we made a series of bird decoy presentations to black flies on loon-inhabited lakes in northern Wisconsin, U.S.A. To examine the importance of chemical and visual cues for black fly detection of and attraction to hosts, we made decoy presentations with and without chemical cues. Flies attracted to the decoys were collected, identified to species, and quantified. Results showed that S. annulus had a strong preference for common loon visual and chemical cues, although visual cues from Canada geese (Branta canadensis) and mallards (Anas platyrhynchos) did attract some flies in significantly smaller numbers. © 2012 The Society for Vector Ecology.
Working memory can enhance unconscious visual perception.
Pan, Yi; Cheng, Qiu-Ping; Luo, Qian-Ying
2012-06-01
We demonstrate that unconscious processing of a stimulus property can be enhanced when there is a match between the contents of working memory and the stimulus presented in the visual field. Participants first held a cue (a colored circle) in working memory and then searched for a brief masked target shape presented simultaneously with a distractor shape. When participants reported having no awareness of the target shape at all, search performance was more accurate in the valid condition, where the target matched the cue in color, than in the neutral condition, where the target mismatched the cue. This effect cannot be attributed to bottom-up perceptual priming from the presentation of a memory cue, because unconscious perception was not enhanced when the cue was merely perceptually identified but not actively held in working memory. These findings suggest that reentrant feedback from the contents of working memory modulates unconscious visual perception.
The Effect of Eye Contact Is Contingent on Visual Awareness
Xu, Shan; Zhang, Shen; Geng, Haiyan
2018-01-01
The present study explored how eye contact at different levels of visual awareness influences gaze-induced joint attention. We adopted a spatial-cueing paradigm, in which an averted gaze was used as an uninformative central cue for a joint-attention task. Prior to the onset of the averted-gaze cue, either supraliminal (Experiment 1) or subliminal (Experiment 2) eye contact was presented. The results revealed a larger subsequent gaze-cueing effect following supraliminal eye contact compared to a no-contact condition. In contrast, the gaze-cueing effect was smaller in the subliminal eye-contact condition than in the no-contact condition. These findings suggest that the facilitation effect of eye contact on coordinating social attention depends on visual awareness. Furthermore, subliminal eye contact might have an impact on subsequent social attention processes that differ from supraliminal eye contact. This study highlights the need to further investigate the role of eye contact in implicit social cognition. PMID:29467703
Goldberg, Melissa C; Mostow, Allison J; Vecera, Shaun P; Larson, Jennifer C Gidley; Mostofsky, Stewart H; Mahone, E Mark; Denckla, Martha B
2008-09-01
We examined the ability to use static line drawings of eye gaze cues to orient visual-spatial attention in children with high functioning autism (HFA) compared to typically developing children (TD). The task was organized such that on valid trials, gaze cues were directed toward the same spatial location as the appearance of an upcoming target, while on invalid trials gaze cues were directed to an opposite location. Unlike TD children, children with HFA showed no advantage in reaction time (RT) on valid trials compared to invalid trials (i.e., no significant validity effect). The two stimulus onset asynchronies (200 ms, 700 ms) did not differentially affect these findings. The results suggest that children with HFA show impairments in utilizing static line drawings of gaze cues to orient visual-spatial attention.
Angular Declination and the Dynamic Perception of Egocentric Distance
Gajewski, Daniel A.; Philbeck, John W.; Wirtz, Philip W.; Chichka, David
2014-01-01
The extraction of the distance between an object and an observer is fast when angular declination is informative, as it is with targets placed on the ground. To what extent does angular declination drive performance when viewing time is limited? Participants judged target distances in a real-world environment with viewing durations ranging from 36 to 220 ms. An important role for angular declination was supported by experiments showing that the cue provides information about egocentric distance even on the very first glimpse, and that it supports a sensitive response to distance in the absence of other useful cues. Performance was better at 220 ms viewing durations than for briefer glimpses, suggesting that the perception of distance is dynamic even within the time frame of a typical eye fixation. Critically, performance in limited viewing trials was better when preceded by a 15-second preview of the room without a designated target. The results indicate that the perception of distance is powerfully shaped by memory from prior visual experience with the scene. A theoretical framework for the dynamic perception of distance is presented. PMID:24099588
Food Avoidance Learning in Squirrel Monkeys and Common Marmosets
Laska, Matthias; Metzker, Karin
1998-01-01
Using a conditioned food avoidance learning paradigm, six squirrel monkeys (Saimiri sciureus) and six common marmosets (Callithrix jacchus) were tested for their ability to (1) reliably form associations between visual or olfactory cues of a potential food and its palatability and (2) remember such associations over prolonged periods of time. We found (1) that at the group level both species showed one-trial learning with the visual cues color and shape, whereas only the marmosets were able to do so with the olfactory cue, (2) that all individuals from both species learned to reliably avoid the unpalatable food items within 10 trials, (3) a tendency in both species for quicker acquisition of the association with the visual cues compared with the olfactory cue, (4) a tendency for quicker acquisition and higher reliability of the aversion by the marmosets compared with the squirrel monkeys, and (5) that all individuals from both species were able to reliably remember the significance of the visual cues, color and shape, even after 4 months, whereas only the marmosets showed retention of the significance of the olfactory cues for up to 4 weeks. Furthermore, the results suggest that in both species tested, illness is not a necessary prerequisite for food avoidance learning but that the presumably innate rejection responses toward highly concentrated but nontoxic bitter and sour tastants are sufficient to induce robust learning and retention. PMID:10454364
Searching for emotion or race: task-irrelevant facial cues have asymmetrical effects.
Lipp, Ottmar V; Craig, Belinda M; Frost, Mareka J; Terry, Deborah J; Smith, Joanne R
2014-01-01
Facial cues of threat such as anger and other race membership are detected preferentially in visual search tasks. However, it remains unclear whether these facial cues interact in visual search. If both cues equally facilitate search, a symmetrical interaction would be predicted; anger cues should facilitate detection of other race faces and cues of other race membership should facilitate detection of anger. Past research investigating this race by emotional expression interaction in categorisation tasks revealed an asymmetrical interaction. This suggests that cues of other race membership may facilitate the detection of angry faces but not vice versa. Utilising the same stimuli and procedures across two search tasks, participants were asked to search for targets defined by either race or emotional expression. Contrary to the results revealed in the categorisation paradigm, cues of anger facilitated detection of other race faces whereas differences in race did not differentially influence detection of emotion targets.
Making the Invisible Visible: Verbal but Not Visual Cues Enhance Visual Detection
Lupyan, Gary; Spivey, Michael J.
2010-01-01
Background Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements from cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual cues. Methodology/Principal Findings Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d′). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Conclusions/Significance Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception. PMID:20628646
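The sensitivity index d′ reported in the abstract above is the standard signal-detection measure: the separation, in z-units, between the hit rate and the false-alarm rate. A minimal sketch of the computation; the example rates below are illustrative, not values from the study:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity: z(hits) - z(false alarms).

    Rates of exactly 0 or 1 make the z-transform infinite, so in
    practice they are clipped (a common convention) before calling.
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Illustrative values: a cue that raises hits without raising
# false alarms increases d' (true sensitivity, not just bias).
uncued = d_prime(0.70, 0.30)  # ~1.05
cued = d_prime(0.80, 0.30)    # ~1.37
```

A pure shift in response bias would move both rates together and leave d′ roughly unchanged, which is why the study reports d′ rather than raw detection rates.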
Smell or vision? The use of different sensory modalities in predator discrimination.
Fischer, Stefan; Oberhummer, Evelyne; Cunha-Saraiva, Filipa; Gerber, Nina; Taborsky, Barbara
2017-01-01
Theory predicts that animals should adjust their escape responses to the perceived predation risk. The information animals obtain about potential predation risk may differ qualitatively depending on the sensory modality by which a cue is perceived. For instance, olfactory cues may reveal better information about the presence or absence of threats, whereas visual information can reliably transmit the position and potential attack distance of a predator. While this suggests a differential use of information perceived through the two sensory channels, the relative importance of visual vs. olfactory cues when distinguishing between different predation threats is still poorly understood. Therefore, we exposed individuals of the cooperatively breeding cichlid Neolamprologus pulcher to a standardized threat stimulus combined with either predator or non-predator cues presented either visually or chemically. We predicted that flight responses towards a threat stimulus are more pronounced if cues of dangerous rather than harmless heterospecifics are presented and that N. pulcher, being an aquatic species, relies more on olfaction when discriminating between dangerous and harmless heterospecifics. N. pulcher responded faster to the threat stimulus, reached a refuge faster and was more likely to enter a refuge when predator cues were perceived. Unexpectedly, the sensory modality used to perceive the cues did not affect the escape response or the duration of the recovery phase. This suggests that N. pulcher are able to discriminate heterospecific cues with similar acuity when using vision or olfaction. We discuss that this ability may be advantageous in aquatic environments where the visibility conditions strongly vary over time.
The ability to rapidly discriminate between dangerous predators and harmless heterospecifics is crucial for the survival of prey animals. In a seasonally fluctuating environment, sensory conditions may change over the year, which may make the use of multiple sensory modalities for heterospecific discrimination highly beneficial. Here we compared the efficacy of visual and olfactory senses in the discrimination ability of the cooperatively breeding cichlid Neolamprologus pulcher. We presented individual fish with visual or olfactory cues of predators or harmless heterospecifics and recorded their flight response. When exposed to predator cues, individuals responded faster, reached a refuge faster and were more likely to enter the refuge. Unexpectedly, the olfactory and visual senses seemed to be equally efficient in this discrimination task, suggesting that seasonal variation of water conditions experienced by N. pulcher may necessitate the use of multiple sensory channels for the same task.
Saunders, Jeffrey A.
2014-01-01
Direction of self-motion during walking is indicated by multiple cues, including optic flow, nonvisual sensory cues, and motor prediction. I measured the reliability of perceived heading from visual and nonvisual cues during walking, and whether cues are weighted in an optimal manner. I used a heading alignment task to measure perceived heading during walking. Observers walked toward a target in a virtual environment with and without global optic flow. The target was simulated to be infinitely far away, so that it did not provide direct feedback about direction of self-motion. Variability in heading direction was low even without optic flow, with average RMS error of 2.4°. Global optic flow reduced variability to 1.9°–2.1°, depending on the structure of the environment. The small amount of variance reduction was consistent with optimal use of visual information. The relative contribution of visual and nonvisual information was also measured using cue conflict conditions. Optic flow specified a conflicting heading direction (±5°), and bias in walking direction was used to infer relative weighting. Visual feedback influenced heading direction by 16%–34% depending on scene structure, with more effect with dense motion parallax. The weighting of visual feedback was close to the predictions of an optimal integration model given the observed variability measures. PMID:24648194
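The "optimal integration model" invoked in the abstract above is the standard reliability-weighted cue-combination scheme: each independent Gaussian cue is weighted in proportion to its precision (1/σ²), and the combined estimate has lower variance than either cue alone. A sketch under that assumption, using the reported nonvisual variability (2.4°) and a representative combined value (2.0°) to back out the implied visual weight; the backed-out visual σ is an illustration, not a figure from the paper:

```python
import math

def optimal_weights(sigmas):
    """Inverse-variance weights for independent Gaussian cues."""
    precisions = [1.0 / s**2 for s in sigmas]
    total = sum(precisions)
    return [p / total for p in precisions]

def combined_sigma(sigmas):
    """Std. dev. of the optimally combined estimate."""
    return math.sqrt(1.0 / sum(1.0 / s**2 for s in sigmas))

# Nonvisual heading variability from the abstract; the visual sigma
# is chosen so the combined variability comes out at ~2.0 degrees.
sigma_nonvisual = 2.4
sigma_visual = math.sqrt(1.0 / (1.0 / 2.0**2 - 1.0 / 2.4**2))  # ~3.62 deg

w_visual, w_nonvisual = optimal_weights([sigma_visual, sigma_nonvisual])
# w_visual ~ 0.31, which falls inside the 16%-34% visual-weighting
# range measured in the cue-conflict conditions above.
```

The consistency between the weight predicted from the variability measures and the weight measured in the conflict trials is the sense in which the integration is "optimal."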
Priming and the guidance by visual and categorical templates in visual search.
Wilschut, Anna; Theeuwes, Jan; Olivers, Christian N L
2014-01-01
Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity toward the target feature, i.e., the extent to which observers searched selectively among items of the cued vs. uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in either overall performance or selectivity. Altogether the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, when priming is controlled for, categorical- and visual-based templates enhance search guidance similarly.
A comparison of visuomotor cue integration strategies for object placement and prehension.
Greenwald, Hal S; Knill, David C
2009-01-01
Visual cue integration strategies are known to depend on cue reliability and how rapidly the visual system processes incoming information. We investigated whether these strategies also depend on differences in the information demands for different natural tasks. Using two common goal-oriented tasks, prehension and object placement, we determined whether monocular and binocular information influence estimates of three-dimensional (3D) orientation differently depending on task demands. Both tasks rely on accurate 3D orientation estimates, but 3D position is potentially more important for grasping. Subjects placed an object on or picked up a disc in a virtual environment. On some trials, the monocular cues (aspect ratio and texture compression) and binocular cues (e.g., binocular disparity) suggested slightly different 3D orientations for the disc; these conflicts either were present upon initial stimulus presentation or were introduced after movement initiation, which allowed us to quantify how information from the cues accumulated over time. We analyzed the time-varying orientations of subjects' fingers in the grasping task and those of the object in the object placement task to quantify how different visual cues influenced motor control. In the first experiment, different subjects performed each task, and those performing the grasping task relied on binocular information more when orienting their hands than those performing the object placement task. When subjects in the second experiment performed both tasks in interleaved sessions, binocular cues were still more influential during grasping than object placement, and the different cue integration strategies observed for each task in isolation were maintained. In both experiments, the temporal analyses showed that subjects processed binocular information faster than monocular information, but task demands did not affect the time course of cue processing. 
How one uses visual cues for motor control depends on the task being performed, although how quickly the information is processed appears to be task invariant.
Lee, Sungkyoung; Cappella, Joseph N.
2014-01-01
Findings from previous studies on smoking cues and argument strength in antismoking messages have shown that the presence of smoking cues undermines the persuasiveness of antismoking public service announcements (PSAs) with weak arguments. This study conceptualized smoking cues (i.e., scenes showing smoking-related objects and behaviors) as stimuli motivationally relevant to the former smoker population and examined how smoking cues influence former smokers’ processing of antismoking PSAs. Specifically, by defining smoking cues and the strength of antismoking arguments in terms of resource allocation, this study examined former smokers’ recognition accuracy, memory strength, and memory judgment of visual (i.e., scenes excluding smoking cues) and audio information from antismoking PSAs. In line with previous findings, the results of the study showed that the presence of smoking cues undermined former smokers’ encoding of antismoking arguments, which includes the visual and audio information that compose the main content of antismoking messages. PMID:25477766
Audio-visual speech perception: a developmental ERP investigation
Knowland, Victoria CP; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael SC
2014-01-01
Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development. PMID:24176002
Do cattle (Bos taurus) retain an association of a visual cue with a food reward for a year?
Hirata, Masahiko; Takeno, Nozomi
2014-06-01
Use of visual cues to locate specific food resources from a distance is a critical ability of animals foraging in a spatially heterogeneous environment. However, relatively little is known about how long animals can retain the learned cue-reward association without reinforcement. We compared feeding behavior of experienced and naive Japanese Black cows (Bos taurus) in discovering food locations in a pasture. Experienced animals had been trained to respond to a visual cue (plastic washtub) for a preferred food (grain-based concentrate) 1 year prior to the experiment, while naive animals had no exposure to the cue. Cows were tested individually in a test arena including tubs filled with the concentrate on three successive days (Days 1-3). Experienced cows located the first tub more quickly and visited more tubs than naive cows on Day 1 (usually P < 0.05), but these differences disappeared on Days 2 and 3. The performance of experienced cows tended to increase from Day 1 to Day 2 and level off thereafter. Our results suggest that Japanese Black cows can associate a visual cue with a food reward within a day and retain the association for 1 year despite a slight decay. © 2014 Japanese Society of Animal Science.
Exogenous temporal cues enhance recognition memory in an object-based manner.
Ohyama, Junji; Watanabe, Katsumi
2010-11-01
Exogenous attention enhances the perception of attended items in both a space-based and an object-based manner. Exogenous attention also improves recognition memory for attended items in the space-based mode. However, it has not been examined whether object-based exogenous attention enhances recognition memory. To address this issue, we examined whether a sudden visual change in a task-irrelevant stimulus (an exogenous cue) would affect participants' recognition memory for items that were serially presented around a cued time. The results showed that recognition accuracy for an item was strongly enhanced when the visual cue occurred at the same location and time as the item (Experiments 1 and 2). The memory enhancement effect occurred when the exogenous visual cue and an item belonged to the same object (Experiments 3 and 4) and even when the cue was counterpredictive of the timing of an item to be asked about (Experiment 5). The present study suggests that an exogenous temporal cue automatically enhances the recognition accuracy for an item that is presented at close temporal proximity to the cue and that recognition memory enhancement occurs in an object-based manner.
Object based implicit contextual learning: a study of eye movements.
van Asselen, Marieke; Sampaio, Joana; Pina, Ana; Castelo-Branco, Miguel
2011-02-01
Implicit contextual cueing refers to a top-down mechanism in which visual search is facilitated by learned contextual features. In the current study we aimed to investigate the mechanism underlying implicit contextual learning using object information as a contextual cue. Therefore, we measured eye movements during an object-based contextual cueing task. We demonstrated that visual search is facilitated by repeated object information and that this reduction in response times is associated with shorter fixation durations. This indicates that by memorizing associations between objects in our environment we can recognize objects faster, thereby facilitating visual search.
Stewart, Erin K.; Wu, Yu-Hsiang; Bishop, Christopher; Bentler, Ruth A.; Tremblay, Kelly
2017-01-01
Purpose This study evaluated the relationship between working memory (WM) and speech recognition in noise with different noise types as well as in the presence of visual cues. Method Seventy-six adults with bilateral, mild to moderately severe sensorineural hearing loss (mean age: 69 years) participated. Using a cross-sectional design, 2 measures of WM were taken: a reading span measure, and Word Auditory Recognition and Recall Measure (Smith, Pichora-Fuller, & Alexander, 2016). Speech recognition was measured with the Multi-Modal Lexical Sentence Test for Adults (Kirk et al., 2012) in steady-state noise and 4-talker babble, with and without visual cues. Testing was under unaided conditions. Results A linear mixed model revealed visual cues and pure-tone average as the only significant predictors of Multi-Modal Lexical Sentence Test outcomes. Neither WM measure nor noise type showed a significant effect. Conclusion The contribution of WM in explaining unaided speech recognition in noise was negligible and not influenced by noise type or visual cues. We anticipate that with audibility partially restored by hearing aids, the effects of WM will increase. For clinical practice to be affected, more significant effect sizes are needed. PMID:28744550
van Weert, Julia C M; van Noort, Guda; Bol, Nadine; van Dijk, Liset; Tates, Kiek; Jansen, Jesse
2011-09-01
This study was designed to investigate the effects of visual cues and language complexity on satisfaction and information recall using a personalised website for lung cancer patients. In addition, age effects were investigated. An experiment using a 2 (complex vs. non-complex language)×3 (text only vs. photograph vs. drawing) factorial design was conducted. In total, 200 respondents without cancer were exposed to one of the six conditions. Respondents were more satisfied with the comprehensibility of both websites when they were presented with a visual cue. A significant interaction effect was found between language complexity and photograph use such that satisfaction with comprehensibility improved when a photograph was added to the complex language condition. Next, an interaction effect was found between age and satisfaction, which indicates that adding a visual cue is more important for older adults than younger adults. Finally, respondents who were exposed to a website with less complex language showed higher recall scores. The use of visual cues enhances satisfaction with the information presented on the website, and the use of non-complex language improves recall. The results of the current study can be used to improve computer-based information systems for patients. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Working memory dependence of spatial contextual cueing for visual search.
Pollmann, Stefan
2018-05-10
When spatial stimulus configurations repeat in visual search, a search facilitation, resulting in shorter search times, can be observed that is due to incidental learning. This contextual cueing effect appears to be rather implicit, uncorrelated with observers' explicit memory of display configurations. Nevertheless, as I review here, this search facilitation due to contextual cueing depends on visuospatial working memory resources, and it disappears when visuospatial working memory is loaded by a concurrent delayed match to sample task. However, the search facilitation immediately recovers for displays learnt under visuospatial working memory load when this load is removed in a subsequent test phase. Thus, latent learning of visuospatial configurations does not depend on visuospatial working memory, but the expression of learning, as memory-guided search in repeated displays, does. This working memory dependence has also consequences for visual search with foveal vision loss, where top-down controlled visual exploration strategies pose high demands on visuospatial working memory, in this way interfering with memory-guided search in repeated displays. Converging evidence for the contribution of working memory to contextual cueing comes from neuroimaging data demonstrating that distinct cortical areas along the intraparietal sulcus as well as more ventral parieto-occipital cortex are jointly activated by visual working memory and contextual cueing. © 2018 The British Psychological Society.
Schlagbauer, Bernhard; Müller, Hermann J; Zehetleitner, Michael; Geyer, Thomas
2012-10-25
In visual search, context information can serve as a cue to guide attention to the target location. When observers repeatedly encounter displays with identical target-distractor arrangements, reaction times (RTs) are faster for repeated relative to nonrepeated displays, the latter containing novel configurations. This effect has been termed "contextual cueing." The present study asked whether information about the target location in repeated displays is "explicit" (or "conscious") in nature. To examine this issue, observers performed a test session (after an initial training phase in which RTs to repeated and nonrepeated displays were measured) in which the search stimuli were presented briefly and terminated by visual masks; following this, observers had to make a target localization response (with accuracy as the dependent measure) and indicate their visual experience and confidence associated with the localization response. The data were examined at the level of individual displays, i.e., in terms of whether or not a repeated display actually produced contextual cueing. The results were that (a) contextual cueing was driven by only a very small number of about four actually learned configurations; (b) localization accuracy was increased for learned relative to nonrepeated displays; and (c) both consciousness measures were enhanced for learned compared to nonrepeated displays. It is concluded that contextual cueing is driven by only a few repeated displays and the ability to locate the target in these displays is associated with increased visual experience.
Seeing Circles and Drawing Ellipses: When Sound Biases Reproduction of Visual Motion
Aramaki, Mitsuko; Bringoux, Lionel; Ystad, Sølvi; Kronland-Martinet, Richard
2016-01-01
The perception and production of biological movements is characterized by the 1/3 power law, a relation linking the curvature and the velocity of an intended action. In particular, motions are perceived and reproduced distorted when their kinematics deviate from this biological law. Whereas most studies dealing with this perceptual-motor relation focused on visual or kinaesthetic modalities in a unimodal context, in this paper we show that auditory dynamics strikingly biases visuomotor processes. Biologically consistent or inconsistent circular visual motions were used in combination with circular or elliptical auditory motions. Auditory motions were synthesized friction sounds mimicking those produced by the friction of the pen on a paper when someone is drawing. Sounds were presented diotically and the auditory motion velocity was evoked through the friction sound timbre variations without any spatial cues. Remarkably, when subjects were asked to reproduce circular visual motion while listening to sounds that evoked elliptical kinematics without seeing their hand, they drew elliptical shapes. Moreover, distortion induced by inconsistent elliptical kinematics in both visual and auditory modalities added up linearly. These results bring to light the substantial role of auditory dynamics in the visuo-motor coupling in a multisensory context. PMID:27119411
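The 1/3 power law referenced above relates tangential velocity to curvature as v = K·κ^(−1/3) (equivalently, angular velocity scales with curvature to the 2/3). A minimal sketch, assuming the standard ellipse parametrization (a·cos θ, b·sin θ): on a circle the law predicts constant speed, while on an ellipse it predicts slowing at the sharply curved tips, which is the kinematic signature the sounds in the study evoked:

```python
import math

def ellipse_curvature(a: float, b: float, theta: float) -> float:
    """Curvature of the ellipse (a*cos t, b*sin t) at parameter theta."""
    return (a * b) / (a**2 * math.sin(theta)**2
                      + b**2 * math.cos(theta)**2) ** 1.5

def power_law_speed(curvature: float, K: float = 1.0) -> float:
    """Tangential speed under the one-third power law: v = K * kappa**(-1/3)."""
    return K * curvature ** (-1.0 / 3.0)

# On a circle (a == b) curvature is constant, so the law predicts
# constant speed along the whole trajectory.
circle_speeds = [power_law_speed(ellipse_curvature(1, 1, t))
                 for t in (0.0, 1.0, 2.0)]

# On a 2:1 ellipse the tips (theta = 0) are more curved than the
# flanks (theta = pi/2), so the predicted speed is lower there.
tip = power_law_speed(ellipse_curvature(2, 1, 0.0))          # kappa = 2.0
flank = power_law_speed(ellipse_curvature(2, 1, math.pi / 2))  # kappa = 0.25
```

A friction sound whose timbre follows the elliptical speed profile thus carries elliptical kinematics even with no spatial information, which is how the auditory stimuli could bias reproduction of a visually circular motion.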
Visually-induced reorientation illusions as a function of age.
Howard, I P; Jenkin, H L; Hu, G
2000-09-01
We reported previously that supine subjects inside a furnished room who are tilted 90 degrees may experience themselves and the room as upright to gravity. We call this the levitation illusion because it creates sensations similar to those experienced in weightlessness. It is an example of a larger class of novel static reorientation illusions that we have explored. Stationary subjects inside a furnished room rotating about a horizontal axis experience complete self rotation about the roll or pitch axis. We call this a dynamic reorientation illusion. We have determined the incidence of static and dynamic reorientation illusions in subjects ranging in age from 9 to 78 yr. Some 90% of subjects of all ages experienced the dynamic reorientation illusion but the percentage of subjects experiencing static reorientation illusions increased with age. We propose that the dynamic illusion depends on a primitive mechanism of visual-vestibular interaction but that static reorientation illusions depend on learned visual cues to the vertical arising from the perceived tops and bottoms of familiar objects and spatial relationships between objects. Older people become more dependent on visual polarity to compensate for loss in vestibular sensitivity. Of 9 astronauts, 4 experienced the levitation illusion. The relationship between susceptibility to reorientation illusions on Earth and in space has still to be determined. We propose that the Space Station will be less disorienting if pictures of familiar objects line the walls.
Working Memory Enhances Visual Perception: Evidence from Signal Detection Analysis
ERIC Educational Resources Information Center
Soto, David; Wriglesworth, Alice; Bahrami-Balani, Alex; Humphreys, Glyn W.
2010-01-01
We show that perceptual sensitivity to visual stimuli can be modulated by matches between the contents of working memory (WM) and stimuli in the visual field. Observers were presented with an object cue (to hold in WM or to merely attend) and subsequently had to identify a brief target presented within a colored shape. The cue could be…
Visual cues and listening effort: individual variability.
Picou, Erin M; Ricketts, Todd A; Hornsby, Benjamin W Y
2011-10-01
To investigate the effect of visual cues on listening effort as well as whether predictive variables such as working memory capacity (WMC) and lipreading ability affect the magnitude of listening effort. Twenty participants with normal hearing were tested using a paired-associates recall task in 2 conditions (quiet and noise) and 2 presentation modalities (audio only [AO] and auditory-visual [AV]). Signal-to-noise ratios were adjusted to provide matched speech recognition across audio-only and AV noise conditions. Also measured were subjective perceptions of listening effort and 2 predictive variables: (a) lipreading ability and (b) WMC. Objective and subjective results indicated that listening effort increased in the presence of noise, but on average the addition of visual cues did not significantly affect the magnitude of listening effort. Although there was substantial individual variability, on average participants who were better lipreaders or had larger WMCs demonstrated reduced listening effort in noise in AV conditions. Overall, the results support the hypothesis that integrating auditory and visual cues requires cognitive resources in some participants. The data indicate that low lipreading ability or low WMC is associated with relatively effortful integration of auditory and visual information in noise.
Zhao, Yan; Nonnekes, Jorik; Storcken, Erik J M; Janssen, Sabine; van Wegen, Erwin E H; Bloem, Bastiaan R; Dorresteijn, Lucille D A; van Vugt, Jeroen P P; Heida, Tjitske; van Wezel, Richard J A
2016-06-01
New mobile technologies like smartglasses can deliver external cues that may improve gait in people with Parkinson's disease in their natural environment. However, the potential of these devices must first be assessed in controlled experiments. Therefore, we evaluated rhythmic visual and auditory cueing in a laboratory setting with a custom-made application for the Google Glass. Twelve participants (mean age = 66.8; mean disease duration = 13.6 years) were tested at end of dose. We compared several key gait parameters (walking speed, cadence, stride length, and stride length variability) and freezing of gait for three types of external cues (metronome, flashing light, and optic flow) and a control condition (no-cue). For all cueing conditions, the subjects completed several walking tasks of varying complexity. Seven inertial sensors attached to the feet, legs and pelvis captured motion data for gait analysis. Two experienced raters scored the presence and severity of freezing of gait using video recordings. User experience was evaluated through a semi-open interview. During cueing, a more stable gait pattern emerged, particularly on complicated walking courses; however, freezing of gait did not significantly decrease. The metronome was more effective than rhythmic visual cues and most preferred by the participants. Participants were overall positive about the usability of the Google Glass and willing to use it at home. Thus, smartglasses like the Google Glass could be used to provide personalized mobile cueing to support gait; however, in its current form, auditory cues seemed more effective than rhythmic visual cues.
Floral reward, advertisement and attractiveness to honey bees in dioecious Salix caprea.
Dötterl, Stefan; Glück, Ulrike; Jürgens, Andreas; Woodring, Joseph; Aas, Gregor
2014-01-01
In dioecious, zoophilous plants potential pollinators have to be attracted to both sexes and switch between individuals of both sexes for pollination to occur. It often has been suggested that males and females require different numbers of visits for maximum reproductive success because male fertility is more likely limited by access to mates, whereas female fertility is rather limited by resource availability. According to sexual selection theory, males therefore should invest more in pollinator attraction (advertisement, reward) than females. However, our knowledge on the sex specific investment in floral rewards and advertisement, and its effects on pollinator behaviour is limited. Here, we use an approach that includes chemical, spectrophotometric, and behavioural studies i) to elucidate differences in floral nectar reward and advertisement (visual, olfactory cues) in dioecious sallow, Salix caprea, ii) to determine the relative importance of visual and olfactory floral cues in attracting honey bee pollinators, and iii) to test for differential attractiveness of female and male inflorescence cues to honey bees. Nectar amount and sugar concentration are comparable, but sugar composition varies between the sexes. Olfactory sallow cues are more attractive to honey bees than visual cues; however, a combination of both cues elicits the strongest behavioural responses in bees. Due to their yellow pollen, male flowers are more colourful than female flowers, and they also emit a larger amount of scent. Honey bees prefer the visual but not the olfactory display of males over those of females. In all, the data of our multifaceted study are consistent with the sexual selection theory and provide novel insights on how the model organism honey bee uses visual and olfactory floral cues for locating host plants.
Field Assessment of the Predation Risk - Food Availability Trade-Off in Crab Megalopae Settlement
Tapia-Lewin, Sebastián; Pardo, Luis Miguel
2014-01-01
Settlement is a key process for meroplanktonic organisms as it determines distribution of adult populations. Starvation and predation are two of the main mortality causes during this period; therefore, settlement tends to be optimized in microhabitats with high food availability and low predator density. Furthermore, brachyuran megalopae actively select favorable habitats for settlement, via chemical, visual and/or tactile cues. The main objective in this study was to assess the settlement of Metacarcinus edwardsii and Cancer plebejus under different combinations of food availability levels and predator presence. We determined, in the field, which factor is of greater relative importance when choosing a suitable microhabitat for settling. Passive larval collectors were deployed, crossing different scenarios of food availability and predator presence. We also explored whether megalopae actively choose predator-free substrates in response to visual and/or chemical cues. We tested the response to combined visual and chemical cues and to each individually. Data were tested using a two-way factorial ANOVA. In both species, food did not have a significant effect on settlement success, but predator presence did; therefore there was no trade-off in this case, and megalopae responded strongly to predation risk by active aversion. Larvae of M. edwardsii responded to chemical and visual cues simultaneously, but there was no response to either cue by itself. Statistically, C. plebejus did not exhibit a differential response to cues, but showed a similarly strong tendency to that of M. edwardsii. We concluded that crab megalopae actively select predator-free microhabitat, independently of food availability, using chemical and visual cues combined. The findings in this study highlight the great relevance of predation on the settlement process and recruitment of marine invertebrates with complex life cycles. PMID:24748151
Floral Reward, Advertisement and Attractiveness to Honey Bees in Dioecious Salix caprea
Dötterl, Stefan; Glück, Ulrike; Jürgens, Andreas; Woodring, Joseph; Aas, Gregor
2014-01-01
In dioecious, zoophilous plants, potential pollinators have to be attracted to both sexes and switch between individuals of both sexes for pollination to occur. It has often been suggested that males and females require different numbers of visits for maximum reproductive success, because male fertility is more likely limited by access to mates, whereas female fertility is rather limited by resource availability. According to sexual selection theory, males therefore should invest more in pollinator attraction (advertisement, reward) than females. However, our knowledge of the sex-specific investment in floral rewards and advertisement, and of its effects on pollinator behaviour, is limited. Here, we use an approach that includes chemical, spectrophotometric, and behavioural studies i) to elucidate differences in floral nectar reward and advertisement (visual, olfactory cues) in dioecious sallow, Salix caprea, ii) to determine the relative importance of visual and olfactory floral cues in attracting honey bee pollinators, and iii) to test for differential attractiveness of female and male inflorescence cues to honey bees. Nectar amount and sugar concentration are comparable, but sugar composition varies between the sexes. Olfactory sallow cues are more attractive to honey bees than visual cues; however, a combination of both cues elicits the strongest behavioural responses in bees. Owing to their yellow pollen, male flowers are more colourful, and they emit a greater amount of scent than female flowers. Honey bees prefer the visual but not the olfactory display of males over those of females. In all, the data of our multifaceted study are consistent with sexual selection theory and provide novel insights into how the model organism honey bee uses visual and olfactory floral cues for locating host plants. PMID:24676333
Visual cues that are effective for contextual saccade adaptation
Azadi, Reza
2014-01-01
The accuracy of saccades, as maintained by saccade adaptation, has been shown to be context dependent: the same retinal displacement can drive movements of different amplitudes depending on motor context, such as orbital starting location. There is conflicting evidence as to whether purely visual cues also affect contextual saccade adaptation and, if so, what function this might serve. We tested which visual cues can evoke contextual adaptation. Across five experiments, 78 naive subjects made saccades to circularly moving targets, which stepped outward or inward during the saccade depending on target movement direction, speed, or color and shape. To test whether the movement or context after the saccade was critical, we stopped the postsaccadic target motion (experiment 4) or neutralized the contexts by equating postsaccadic target speed to an intermediate value (experiment 5). We found contextual adaptation in all conditions except those defined by color and shape. We conclude that some, but not all, visual cues present before the saccade are sufficient for contextual adaptation. We conjecture that this visual contextuality serves to allow different motor states for different coordinated movement patterns, such as coordinated saccade and pursuit motor planning. PMID:24647429
Retro-dimension-cue benefit in visual working memory.
Ye, Chaoxiong; Hu, Zhonghua; Ristaniemi, Tapani; Gendron, Maria; Liu, Qiang
2016-10-24
In visual working memory (VWM) tasks, participants' performance can be improved by a retro-object-cue. However, previous studies have not investigated whether participants' performance can also be improved by a retro-dimension-cue. Three experiments investigated this issue. We used a recall task with a retro-dimension-cue in all experiments. In Experiment 1, we found benefits from retro-dimension-cues compared to neutral cues. This retro-dimension-cue benefit is reflected in an increased probability of reporting the target, but not in the probability of reporting the non-target, as well as increased precision with which this item is remembered. Experiment 2 replicated the retro-dimension-cue benefit and showed that the length of the blank interval after the cue disappeared did not influence recall performance. Experiment 3 replicated the results of Experiment 2 with a lower memory load. Our studies provide evidence that there is a robust retro-dimension-cue benefit in VWM. Participants can use internal attention to flexibly allocate cognitive resources to a particular dimension of memory representations. The results also support the feature-based storing hypothesis. PMID:27774983
Kasties, Nils; Starosta, Sarah; Güntürkün, Onur; Stüttgen, Maik C.
2016-01-01
Animals exploit visual information to identify objects, form stimulus-reward associations, and prepare appropriate behavioral responses. The nidopallium caudolaterale (NCL), an associative region of the avian endbrain, contains neurons exhibiting prominent response modulation during presentation of reward-predicting visual stimuli, but it is unclear whether neural activity represents valuation signals, stimulus properties, or sensorimotor contingencies. To test the hypothesis that NCL neurons represent stimulus value, we subjected pigeons to a Pavlovian sign-tracking paradigm in which visual cues predicted rewards differing in magnitude (large vs. small) and delay to presentation (short vs. long). Subjects' strength of conditioned responding to visual cues reliably differentiated between predicted reward types and thus indexed valuation. The majority of NCL neurons discriminated between visual cues, with discriminability peaking shortly after stimulus onset and being maintained at lower levels throughout the stimulus presentation period. However, while some cells' firing rates correlated with reward value, such neurons were not more frequent than expected by chance. Instead, neurons formed discernible clusters which differed in their preferred visual cue. We propose that this activity pattern constitutes a prerequisite for using visual information in more complex situations, e.g., those requiring value-based choices. PMID:27762287
Examining the Effect of Age on Visual-Vestibular Self-Motion Perception Using a Driving Paradigm.
Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L
2017-05-01
Previous psychophysical research has examined how younger adults and non-human primates integrate visual and vestibular cues to perceive self-motion. However, there is much to be learned about how multisensory self-motion perception changes with age, and how these changes affect performance on everyday tasks involving self-motion. Evidence suggests that older adults display heightened multisensory integration compared with younger adults; however, few previous studies have examined this for visual-vestibular integration. To explore age differences in the way that visual and vestibular cues contribute to self-motion perception, we had younger and older participants complete a basic driving task containing visual and vestibular cues. We compared their performance against a previously established control group that experienced visual cues alone. Performance measures included speed, speed variability, and lateral position. Vestibular inputs resulted in more precise speed control among older adults, but not younger adults, when traversing curves. Older adults demonstrated more variability in lateral position when vestibular inputs were available versus when they were absent. These observations align with previous evidence of age-related differences in multisensory integration and demonstrate that they may extend to visual-vestibular integration. These findings may have implications for vehicle and simulator design when considering older users.
Retrospective attention enhances visual working memory in the young but not the old: an ERP study
Duarte, Audrey; Hearons, Patricia; Jiang, Yashu; Delvin, Mary Courtney; Newsome, Rachel N.; Verhaeghen, Paul
2013-01-01
Behavioral evidence from young adults suggests that spatial cues orienting attention toward task-relevant items in visual working memory (VWM) enhance memory capacity. Whether older adults can also use retrospective cues ("retro-cues") to enhance VWM capacity is unknown. In the current event-related potential (ERP) study, young and old adults performed a VWM task in which spatially informative retro-cues were presented during maintenance. Young but not older adults' VWM capacity benefitted from retro-cueing. The contralateral delay activity (CDA) ERP index of VWM maintenance was attenuated after the retro-cue, which effectively reduced the impact of memory load. CDA amplitudes were reduced prior to retro-cue onset in the older group only. Despite a preserved ability to delete items from VWM, older adults may be less able to use retrospective attention to enhance memory capacity when expectancy of impending spatial cues disrupts effective VWM maintenance. PMID:23445536
Harrison, Neil R; Witheridge, Sian; Makin, Alexis; Wuerger, Sophie M; Pegna, Alan J; Meyer, Georg F
2015-11-01
Motion is represented by low-level signals, such as size-expansion in vision or loudness changes in the auditory modality. The visual and auditory signals from the same object or event may be integrated and facilitate detection. We explored behavioural and electrophysiological correlates of congruent and incongruent audio-visual depth motion in conditions where auditory level changes, visual expansion, and visual disparity cues were manipulated. In Experiment 1 participants discriminated auditory motion direction whilst viewing looming or receding, 2D or 3D, visual stimuli. Responses were faster and more accurate for congruent than for incongruent audio-visual cues, and the congruency effect (i.e., difference between incongruent and congruent conditions) was larger for visual 3D cues compared to 2D cues. In Experiment 2, event-related potentials (ERPs) were collected during presentation of the 2D and 3D, looming and receding, audio-visual stimuli, while participants detected an infrequent deviant sound. Our main finding was that audio-visual congruity was affected by retinal disparity at an early processing stage (135-160ms) over occipito-parietal scalp. Topographic analyses suggested that similar brain networks were activated for the 2D and 3D congruity effects, but that cortical responses were stronger in the 3D condition. Differences between congruent and incongruent conditions were observed between 140-200ms, 220-280ms, and 350-500ms after stimulus onset. Copyright © 2015 Elsevier Ltd. All rights reserved.
Oh, Hwamee; Leung, Hoi-Chung
2010-02-01
In this fMRI study, we investigated prefrontal cortex (PFC) and visual association regions during selective information processing. We recorded behavioral responses and neural activity during a delayed recognition task with a cue presented during the delay period. A specific cue ("Face" or "Scene") was used to indicate which one of the two initially viewed pictures of a face and a scene would be tested at the end of a trial, whereas a nonspecific cue ("Both") was used as control. As expected, the specific cues facilitated behavioral performance (faster response times) compared to the nonspecific cue. A postexperiment memory test showed that the items cued to remember were better recognized than those not cued. The fMRI results showed largely overlapped activations across the three cue conditions in dorsolateral and ventrolateral PFC, dorsomedial PFC, posterior parietal cortex, ventral occipito-temporal cortex, dorsal striatum, and pulvinar nucleus. Among those regions, dorsomedial PFC and inferior occipital gyrus remained active during the entire postcue delay period. Differential activity was mainly found in the association cortices. In particular, the parahippocampal area and posterior superior parietal lobe showed significantly enhanced activity during the postcue period of the scene condition relative to the Face and Both conditions. No regions showed differentially greater responses to the face cue. Our findings suggest that a better representation of visual information in working memory may depend on enhancing the more specialized visual association areas or their interaction with PFC.
Fusion of multichannel local and global structural cues for photo aesthetics evaluation.
Luming Zhang; Yue Gao; Zimmermann, Roger; Qi Tian; Xuelong Li
2014-03-01
Photo aesthetic quality evaluation is a fundamental yet under-addressed task in the computer vision and image processing fields. Conventional approaches suffer from two drawbacks. First, both the local and global spatial arrangements of image regions play an important role in photo aesthetics, yet existing rules, e.g., visual balance, only heuristically define which spatial distributions of a photo's salient regions are aesthetically pleasing. Second, it is difficult to weight visual cues from multiple channels automatically in photo aesthetics assessment. To solve these problems, we propose a new photo aesthetics evaluation framework that learns image descriptors characterizing local and global structural aesthetics from multiple visual channels. In particular, to describe the spatial structure of local image regions, we construct graphlets (small connected graphs) by linking spatially adjacent atomic regions. Since spatially adjacent graphlets lie close together in feature space, we project them onto a manifold and propose an embedding algorithm that encodes the photo's global spatial layout into the graphlets. Simultaneously, the importance of graphlets from multiple visual channels is dynamically adjusted. Finally, the post-embedding graphlets are integrated for photo aesthetics evaluation using a probabilistic model. Experimental results show that: 1) the visualized graphlets explicitly capture aesthetically arranged atomic regions; 2) the proposed approach generalizes and improves four prominent aesthetic rules; and 3) our approach significantly outperforms state-of-the-art algorithms in photo aesthetics prediction.
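The graphlet-construction step the abstract describes (enumerating small connected subgraphs over spatially adjacent atomic regions) can be sketched as follows. This assumes region segmentation has already been done; the region labels and adjacencies below are invented, and the embedding and probabilistic-model stages of the paper are not reproduced.

```python
from itertools import combinations

# Hypothetical region adjacency graph: nodes are atomic image regions,
# edges connect spatially adjacent regions (labels are made up).
edges = {("sky", "tree"), ("tree", "grass"), ("grass", "road"), ("sky", "grass")}
nodes = {n for e in edges for n in e}

def adjacent(a, b):
    return (a, b) in edges or (b, a) in edges

def connected(subset):
    """Depth-first check that the induced subgraph on `subset` is connected."""
    subset = set(subset)
    start = next(iter(subset))
    seen, stack = {start}, [start]
    while stack:
        cur = stack.pop()
        for other in subset - seen:
            if adjacent(cur, other):
                seen.add(other)
                stack.append(other)
    return seen == subset

def graphlets(nodes, max_size=3):
    """Enumerate all connected subgraphs ("graphlets") up to max_size nodes."""
    out = []
    for k in range(1, max_size + 1):
        for subset in combinations(sorted(nodes), k):
            if connected(subset):
                out.append(subset)
    return out
```

For this toy graph, `graphlets(nodes)` returns all singletons, the four adjacent pairs, and the three connected triples, while rejecting node sets (such as road/sky/tree) whose induced subgraph is disconnected. A real implementation would attach per-channel feature vectors (color, texture, saliency) to each graphlet before embedding.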
Holmes, Emma; Kitterick, Padraig T; Summerfield, A Quentin
2018-04-25
Endogenous attention is typically studied by presenting instructive cues in advance of a target stimulus array. For endogenous visual attention, task performance improves as the duration of the cue-target interval increases up to 800 ms. Less is known about how endogenous auditory attention unfolds over time or the mechanisms by which an instructive cue presented in advance of an auditory array improves performance. The current experiment used five cue-target intervals (0, 250, 500, 1,000, and 2,000 ms) to compare four hypotheses for how preparatory attention develops over time in a multi-talker listening task. Young adults were cued to attend to a target talker who spoke in a mixture of three talkers. Visual cues indicated the target talker's spatial location or their gender. Participants directed attention to location and gender simultaneously ("objects") at all cue-target intervals. Participants were consistently faster and more accurate at reporting words spoken by the target talker when the cue-target interval was 2,000 ms than 0 ms. In addition, the latency of correct responses progressively shortened as the duration of the cue-target interval increased from 0 to 2,000 ms. These findings suggest that the mechanisms involved in preparatory auditory attention develop gradually over time, taking at least 2,000 ms to reach optimal configuration, yet providing cumulative improvements in speech intelligibility as the duration of the cue-target interval increases from 0 to 2,000 ms. These results demonstrate an improvement in performance for cue-target intervals longer than those that have been reported previously in the visual or auditory modalities.
NASA Astrophysics Data System (ADS)
Rogowitz, Bernice E.; Rabenhorst, David A.; Gerth, John A.; Kalin, Edward B.
1996-04-01
This paper describes a set of visual techniques, based on principles of human perception and cognition, which can help users analyze and develop intuitions about tabular data. Collections of tabular data are widely available, including, for example, multivariate time series data, customer satisfaction data, stock market performance data, multivariate profiles of companies and individuals, and scientific measurements. In our approach, we show how visual cues can help users perform a number of data mining tasks, including identifying correlations and interaction effects, finding clusters and understanding the semantics of cluster membership, identifying anomalies and outliers, and discovering multivariate relationships among variables. These cues are derived from psychological studies on perceptual organization, visual search, perceptual scaling, and color perception. These visual techniques are presented as a complement to the statistical and algorithmic methods more commonly associated with these tasks, and provide an interactive interface for the human analyst.
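Two of the data-mining tasks the paper supports visually (identifying correlations and flagging outliers) have simple statistical counterparts, sketched below with stdlib-only Python. The table values are invented; this is a complement to, not an implementation of, the paper's perceptual techniques.

```python
from statistics import mean, pstdev

# Hypothetical numeric columns from a multivariate table; the last
# y value is an injected anomaly of the kind a scatterplot would reveal.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8, 30.0]

def pearson(x, y):
    """Pearson correlation coefficient (population formulas throughout)."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

def z_outliers(values, threshold=2.0):
    """Indices of values more than `threshold` standard deviations from the mean."""
    m, s = mean(values), pstdev(values)
    return [i for i, v in enumerate(values) if abs(v - m) / s > threshold]
```

Here the injected anomaly both depresses the correlation below 1 and is the only point exceeding the z-score threshold; visual techniques like those in the paper let an analyst spot the same structure without choosing a threshold in advance.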
Contextual cueing: implicit learning and memory of visual context guides spatial attention.
Chun, M M; Jiang, Y
1998-06-01
Global context plays an important, but poorly understood, role in visual tasks. This study demonstrates that a robust memory for visual context exists to guide spatial attention. Global context was operationalized as the spatial layout of objects in visual search displays. Half of the configurations were repeated across blocks throughout the entire session, and targets appeared within consistent locations in these arrays. Targets appearing in learned configurations were detected more quickly. This newly discovered form of search facilitation is termed contextual cueing. Contextual cueing is driven by incidentally learned associations between spatial configurations (context) and target locations. This benefit was obtained despite chance performance for recognizing the configurations, suggesting that the memory for context was implicit. The results show how implicit learning and memory of visual context can guide spatial attention towards task-relevant aspects of a scene.
Static and Dynamic Facial Cues Differentially Affect the Consistency of Social Evaluations.
Hehman, Eric; Flake, Jessica K; Freeman, Jonathan B
2015-08-01
Individuals are quite sensitive to others' appearance cues when forming social evaluations. Cues such as facial emotional resemblance are based on facial musculature and thus dynamic. Cues such as a face's structure are based on the underlying bone and are thus relatively static. The current research examines the distinction between these types of facial cues by investigating the consistency in social evaluations arising from dynamic versus static cues. Specifically, across four studies using real faces, digitally generated faces, and downstream behavioral decisions, we demonstrate that social evaluations based on dynamic cues, such as intentions, have greater variability across multiple presentations of the same identity than do social evaluations based on static cues, such as ability. Thus, although evaluations of intentions vary considerably across different instances of a target's face, evaluations of ability are relatively fixed. The findings highlight the role of facial cues' consistency in the stability of social evaluations. © 2015 by the Society for Personality and Social Psychology, Inc.
Cortical activity during cued picture naming predicts individual differences in stuttering frequency
Mock, Jeffrey R.; Foundas, Anne L.; Golob, Edward J.
2016-01-01
Objective: Developmental stuttering is characterized by fluent speech punctuated by stuttering events, the frequency of which varies among individuals and contexts. Most stuttering events occur at the beginning of an utterance, suggesting neural dynamics associated with stuttering may be evident during speech preparation. Methods: This study used EEG to measure cortical activity during speech preparation in men who stutter, and compared the EEG measures to individual differences in stuttering rate as well as to a fluent control group. Each trial contained a cue followed by an acoustic probe at one of two onset times (early or late), and then a picture. There were two conditions: a speech condition where cues induced speech preparation of the picture's name and a control condition that minimized speech preparation. Results: Across conditions, stuttering frequency correlated with cue-related EEG beta power and with auditory ERP slow waves from early-onset acoustic probes. Conclusions: The findings reveal two new cortical markers of stuttering frequency that were present in both conditions, manifest at different times, are elicited by different stimuli (visual cue, auditory probe), and have different EEG responses (beta power, ERP slow wave). Significance: The cue-target paradigm evoked brain responses that correlated with pre-experimental stuttering rate. PMID:27472545
NASA Technical Reports Server (NTRS)
Sweeney, Christopher; Bunnell, John; Chung, William; Giovannetti, Dean; Mikula, Julie; Nicholson, Bob; Roscoe, Mike
2001-01-01
Joint Shipboard Helicopter Integration Process (JSHIP) is a Joint Test and Evaluation (JT&E) program sponsored by the Office of the Secretary of Defense (OSD). Under the JSHIP program is a simulation effort referred to as the Dynamic Interface Modeling and Simulation System (DIMSS). The purpose of DIMSS is to develop and test the processes and mechanisms that facilitate ship-helicopter interface testing via man-in-the-loop ground-based flight simulators. Specifically, the DIMSS charter is to develop an accredited process for using a flight simulator to determine the wind-over-the-deck (WOD) launch and recovery flight envelope for the UH-60A ship/helicopter combination. DIMSS is a collaborative effort between the NASA Ames Research Center and OSD. OSD determines the T&E and warfighter training requirements, provides the programmatics and dynamic interface T&E experience, and conducts ship/aircraft interface tests for validating the simulation. NASA provides the research and development element, simulation facility, and simulation technical experience. This paper will highlight the benefits of the NASA/JSHIP collaboration and detail achievements of the project in terms of modeling and simulation. The Vertical Motion Simulator (VMS) at NASA Ames Research Center offers the capability to simulate a wide range of simulation cueing configurations, which include visual, aural, and body-force cueing devices. The system flexibility enables switching configurations to allow back-to-back evaluation and comparison of different levels of cueing fidelity in determining minimum training requirements. The investigation required development and integration of several major simulation systems at the VMS. A new UH-60A BlackHawk interchangeable cab that provides an out-the-window (OTW) field-of-view (FOV) of 220 degrees in azimuth and 70 degrees in elevation was built. 
Modeling efforts involved integrating Computational Fluid Dynamics (CFD) generated data of an LHA ship airwake and integrating a real-time ship motion model developed from a batch model from the Naval Surface Warfare Center. Engineering development and integration of a three degrees-of-freedom (DOF) dynamic seat to simulate high-frequency rotor-dynamics-dependent motion cues for use in conjunction with the large motion system was accomplished. An LHA visual model at several different levels of resolution was developed, along with an aural cueing system in which three separate fidelity levels could be selected. VMS also integrated a PC-based E&S simFUSION system to investigate cost-effective IG alternatives. The DIMSS project consists of three phases that follow an approved Verification, Validation and Accreditation (VV&A) process. The first phase will support the accreditation of the individual subsystems and models. The second will follow the verification and validation of the integrated subsystems and models, and will address fidelity requirements of the integrated models and subsystems. The third and final phase will allow the verification and validation of the full system integration. This VV&A process will address the utility of the simulated WOD launch and recovery envelope. Simulations supporting the first two stages have been completed, and the data are currently being reviewed and analyzed.
Zellin, Martina; Conci, Markus; von Mühlenen, Adrian; Müller, Hermann J
2011-10-01
Visual search for a target object is facilitated when the object is repeatedly presented within an invariant context of surrounding items ("contextual cueing"; Chun & Jiang, Cognitive Psychology, 36, 28-71, 1998). The present study investigated whether such invariant contexts can cue more than one target location. In a series of three experiments, we showed that contextual cueing is significantly reduced when invariant contexts are paired with two rather than one possible target location, whereas no contextual cueing occurs with three distinct target locations. Closer data inspection revealed that one "dominant" target always exhibited substantially more contextual cueing than did the other, "minor" target(s), which caused negative contextual-cueing effects. However, minor targets could benefit from the invariant context when they were spatially close to the dominant target. In sum, our experiments suggest that contextual cueing can guide visual attention to a spatially limited region of the display, only enhancing the detection of targets presented inside that region.
Rapid neural discrimination of communicative gestures.
Redcay, Elizabeth; Carlson, Thomas A
2015-04-01
Humans are biased toward social interaction. Behaviorally, this bias is evident in the rapid effects that self-relevant communicative signals have on attention and perceptual systems. The processing of communicative cues recruits a wide network of brain regions, including mentalizing systems. Relatively less work, however, has examined the timing of the processing of self-relevant communicative cues. In the present study, we used a multivariate pattern analysis (decoding) approach to the analysis of magnetoencephalography (MEG) data to study the processing dynamics of social-communicative actions. Twenty-four participants viewed images of a woman performing actions that varied on a continuum of communicative factors including self-relevance (to the participant) and emotional valence, while their brain activity was recorded using MEG. Controlling for low-level visual factors, we found early discrimination of emotional valence (70 ms) and self-relevant communicative signals (100 ms). These data offer neural support for the robust and rapid effects of self-relevant communicative cues on behavior. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Modulation of neuronal responses during covert search for visual feature conjunctions
Buracas, Giedrius T.; Albright, Thomas D.
2009-01-01
While searching for an object in a visual scene, an observer's attentional focus and eye movements are often guided by information about object features and spatial locations. Both spatial and feature-specific attention are known to modulate neuronal responses in visual cortex, but little is known of the dynamics and interplay of these mechanisms as visual search progresses. To address this issue, we recorded from directionally selective cells in visual area MT of monkeys trained to covertly search for targets defined by a unique conjunction of color and motion features and to signal target detection with an eye movement to the putative target. Two patterns of response modulation were observed. One pattern consisted of enhanced responses to targets presented in the receptive field (RF). These modulations occurred at the end-stage of search and were more potent during correct target identification than during erroneous saccades to a distractor in RF, thus suggesting that this modulation is not a mere presaccadic enhancement. A second pattern of modulation was observed when RF stimuli were nontargets that shared a feature with the target. The latter effect was observed during early stages of search and is consistent with a global feature-specific mechanism. This effect often terminated before target identification, thus suggesting that it interacts with spatial attention. This modulation was exhibited not only for motion but also for color cue, although MT neurons are known to be insensitive to color. Such cue-invariant attentional effects may contribute to a feature binding mechanism acting across visual dimensions. PMID:19805385
ERIC Educational Resources Information Center
Srinivasan, Ravindra J.; Massaro, Dominic W.
2003-01-01
Examined the processing of potential auditory and visual cues that differentiate statements from echoic questions. Found that both auditory and visual cues reliably conveyed statement and question intonation, were successfully synthesized, and generalized to other utterances.
Vestibular-visual interactions in flight simulators
NASA Technical Reports Server (NTRS)
Clark, B.
1977-01-01
The following research work is reported: (1) vestibular-visual interactions; (2) flight management and crew system interactions; (3) peripheral cue utilization in simulation technology; (4) control of signs and symptoms of motion sickness; (5) auditory cue utilization in flight simulators, and (6) vestibular function: Animal experiments.
Cue-induced brain activity in pathological gamblers.
Crockford, David N; Goodyear, Bradley; Edwards, Jodi; Quickfall, Jeremy; el-Guebaly, Nady
2005-11-15
Previous studies using functional magnetic resonance imaging (fMRI) have identified differential brain activity in healthy subjects performing gambling tasks and in pathological gambling (PG) subjects when exposed to motivational and emotional predecessors for gambling as well as during gambling or response inhibition tasks. The goal of the present study was to determine if PG subjects exhibit differential brain activity when exposed to visual gambling cues. Ten male DSM-IV-TR PG subjects and 10 matched healthy control subjects underwent fMRI during visual presentations of gambling-related video alternating with video of nature scenes. Pathological gambling subjects and control subjects exhibited overlap in areas of brain activity in response to the visual gambling cues; however, compared with control subjects, PG subjects exhibited significantly greater activity in the right dorsolateral prefrontal cortex (DLPFC), including the inferior and medial frontal gyri, the right parahippocampal gyrus, and left occipital cortex, including the fusiform gyrus. Pathological gambling subjects also reported a significant increase in mean craving for gambling after the study. Post hoc analyses revealed a dissociation in visual processing stream (dorsal vs. ventral) activation by subject group and cue type. These findings may represent a component of cue-induced craving for gambling or conditioned behavior that could underlie pathological gambling.
Depth reversals in stereoscopic displays driven by apparent size
NASA Astrophysics Data System (ADS)
Sacher, Gunnar; Hayes, Amy; Thornton, Ian M.; Sereno, Margaret E.; Malony, Allen D.
1998-04-01
In visual scenes, depth information is derived from a variety of monocular and binocular cues. When in conflict, a monocular cue is sometimes able to override the binocular information. We examined the accuracy of relative depth judgments in orthographic, stereoscopic displays and found that perceived relative size can override binocular disparity as a depth cue in a situation where the relative size information is itself generated from disparity information, not from retinal size difference. A size discrimination task confirmed the assumption that disparity information was perceived and used to generate apparent size differences. The tendency for the apparent size cue to override disparity information can be modulated by varying the strength of the apparent size cue. In addition, an analysis of reaction times provides supporting evidence for this novel depth reversal effect. We believe that human perception must be regarded as an important component of stereoscopic applications. Hence, if applications are to be effective and accurate, it is necessary to take into account the richness and complexity of the human visual perceptual system that interacts with them. We discuss implications of this and similar research for human performance in virtual environments, the design of visual presentations for virtual worlds, and the design of visualization tools.
Taylor, Ryan C.; Buchanan, Bryant W.; Doherty, Jessie L.
2007-01-01
Anuran amphibians have provided an excellent system for the study of animal communication and sexual selection. Studies of female mate choice in anurans, however, have focused almost exclusively on the role of auditory signals. In this study, we examined the effect of both auditory and visual cues on female choice in the squirrel treefrog. Our experiments used a two-choice protocol in which we varied male vocalization properties, visual cues, or both, to assess female preferences for the different cues. Females discriminated against high-frequency calls and expressed a strong preference for calls that contained more energy per unit time (faster call rate). Females expressed a preference for the visual stimulus of a model of a calling male when call properties at the two speakers were held the same. They also showed a significant attraction to a model possessing a relatively large lateral body stripe. These data indicate that visual cues do play a role in mate attraction in this nocturnal frog species. Furthermore, this study adds to a growing body of evidence that suggests that multimodal signals play an important role in sexual selection.
Modeling human perception and estimation of kinematic responses during aircraft landing
NASA Technical Reports Server (NTRS)
Schmidt, David K.; Silk, Anthony B.
1988-01-01
The thrust of this research is to determine estimation accuracy of aircraft responses based on observed cues. By developing the geometric relationships between the outside visual scene and the kinematics during landing, visual and kinesthetic cues available to the pilot were modeled. Both fovial and peripheral vision was examined. The objective was to first determine estimation accuracy in a variety of flight conditions, and second to ascertain which parameters are most important and lead to the best achievable accuracy in estimating the actual vehicle response. It was found that altitude estimation was very sensitive to the FOV. For this model the motion cue of perceived vertical acceleration was shown to be less important than the visual cues. The inclusion of runway geometry in the visual scene increased estimation accuracy in most cases. Finally, it was shown that for this model if the pilot has an incorrect internal model of the system kinematics the choice of observations thought to be 'optimal' may in fact be suboptimal.
Age-related changes in event-cued visual and auditory prospective memory proper.
Uttl, Bob
2006-06-01
We rely upon prospective memory proper (ProMP) to bring back to awareness previously formed plans and intentions at the right place and time, and to enable us to act upon those plans and intentions. To examine age-related changes in ProMP, younger and older participants made decisions about simple stimuli (ongoing task) and at the same time were required to respond to a ProM cue, either a picture (visually cued ProM test) or a sound (auditorily cued ProM test), embedded in a simultaneously presented series of similar stimuli (either pictures or sounds). The cue display size or loudness increased across trials until a response was made. The cue size and cue loudness at the time of response indexed ProMP. The main results showed that both visual and auditory ProMP declined with age, and that such declines were mediated by age declines in sensory functions (visual acuity and hearing level), processing resources, working memory, intelligence, and ongoing task resource allocation.
Deployment of spatial attention to words in central and peripheral vision.
Ducrot, Stéphanie; Grainger, Jonathan
2007-05-01
Four perceptual identification experiments examined the influence of spatial cues on the recognition of words presented in central vision (with fixation on either the first or last letter of the target word) and in peripheral vision (displaced left or right of a central fixation point). Stimulus location had a strong effect on word identification accuracy in both central and peripheral vision, showing a strong right visual field superiority that did not depend on eccentricity. Valid spatial cues improved word identification for peripherally presented targets but were largely ineffective for centrally presented targets. Effects of spatial cuing interacted with visual field effects in Experiment 1, with valid cues reducing the right visual field superiority for peripherally located targets, but this interaction was shown to depend on the type of neutral cue. These results provide further support for the role of attentional factors in visual field asymmetries obtained with targets in peripheral vision but not with centrally presented targets.
Setting and changing feature priorities in visual short-term memory.
Kalogeropoulou, Zampeta; Jagadeesh, Akshay V; Ohl, Sven; Rolfs, Martin
2017-04-01
Many everyday tasks require prioritizing some visual features over competing ones, both during the selection from the rich sensory input and while maintaining information in visual short-term memory (VSTM). Here, we show that observers can change priorities in VSTM when, initially, they attended to a different feature. Observers reported from memory the orientation of one of two spatially interspersed groups of black and white gratings. Using colored pre-cues (presented before stimulus onset) and retro-cues (presented after stimulus offset) predicting the to-be-reported group, we manipulated observers' feature priorities independently during stimulus encoding and maintenance, respectively. Valid pre-cues reliably increased observers' performance (reduced guessing, increased report precision) as compared to neutral ones; invalid pre-cues had the opposite effect. Valid retro-cues also consistently improved performance (by reducing random guesses), even if the unexpected group suddenly became relevant (invalid-valid condition). Thus, feature-based attention can reshape priorities in VSTM protecting information that would otherwise be forgotten.
Can You Hear That Peak? Utilization of Auditory and Visual Feedback at Peak Limb Velocity.
Loria, Tristan; de Grosbois, John; Tremblay, Luc
2016-09-01
At rest, the central nervous system combines and integrates multisensory cues to yield an optimal percept. When engaging in action, the relative weighing of sensory modalities has been shown to be altered. Because the timing of peak velocity is the critical moment in some goal-directed movements (e.g., overarm throwing), the current study sought to test whether visual and auditory cues are optimally integrated at that specific kinematic marker when it is the critical part of the trajectory. Participants performed an upper-limb movement in which they were required to reach their peak limb velocity when the right index finger intersected a virtual target (i.e., a flinging movement). Brief auditory, visual, or audiovisual feedback (i.e., 20 ms in duration) was provided to participants at peak limb velocity. Performance was assessed primarily through the resultant position of peak limb velocity and the variability of that position. Relative to when no feedback was provided, auditory feedback significantly reduced the resultant endpoint variability of the finger position at peak limb velocity. However, no such reductions were found for the visual or audiovisual feedback conditions. Further, providing both auditory and visual cues concurrently also failed to yield the theoretically predicted improvements in endpoint variability. Overall, the central nervous system can make significant use of an auditory cue but may not optimally integrate a visual and auditory cue at peak limb velocity, when peak velocity is the critical part of the trajectory.
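The "optimal integration" benchmark against which performance in studies like the one above is typically compared is the maximum-likelihood (inverse-variance-weighted) cue-combination rule. A minimal sketch of that rule, using illustrative numbers that are assumptions for demonstration, not data from the study:

```python
import numpy as np

# Maximum-likelihood (reliability-weighted) cue integration: each cue's
# estimate is weighted by its inverse variance, so the combined estimate
# has lower variance than either cue alone. Values are illustrative only.

def integrate(estimates, sigmas):
    """Combine independent cue estimates, weighting each by inverse variance."""
    estimates = np.asarray(estimates, dtype=float)
    sigmas = np.asarray(sigmas, dtype=float)
    w = 1.0 / sigmas**2
    w /= w.sum()                       # normalized reliability weights
    combined = float(w @ estimates)
    combined_sigma = float(np.sqrt(1.0 / np.sum(1.0 / sigmas**2)))
    return combined, combined_sigma

# A precise auditory cue (sigma = 1.0) and a noisier visual cue (sigma = 2.0)
# about finger position at peak velocity (hypothetical units):
pos, sd = integrate([10.0, 14.0], [1.0, 2.0])
# The combined estimate (10.8) lies closer to the more reliable cue, and its
# standard deviation (~0.89) is below that of either single cue.
```

Deviations from this prediction, such as the failure to benefit from the added visual cue reported above, are what mark integration as suboptimal.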
Liebenthal, Einat; Silbersweig, David A.; Stern, Emily
2016-01-01
Rapid assessment of emotions is important for detecting and prioritizing salient input. Emotions are conveyed in spoken words via verbal and non-verbal channels that are mutually informative and unveil in parallel over time, but the neural dynamics and interactions of these processes are not well understood. In this paper, we review the literature on emotion perception in faces, written words, and voices, as a basis for understanding the functional organization of emotion perception in spoken words. The characteristics of visual and auditory routes to the amygdala—a subcortical center for emotion perception—are compared across these stimulus classes in terms of neural dynamics, hemispheric lateralization, and functionality. Converging results from neuroimaging, electrophysiological, and lesion studies suggest the existence of an afferent route to the amygdala and primary visual cortex for fast and subliminal processing of coarse emotional face cues. We suggest that a fast route to the amygdala may also function for brief non-verbal vocalizations (e.g., laugh, cry), in which emotional category is conveyed effectively by voice tone and intensity. However, emotional prosody which evolves on longer time scales and is conveyed by fine-grained spectral cues appears to be processed via a slower, indirect cortical route. For verbal emotional content, the bulk of current evidence, indicating predominant left lateralization of the amygdala response and timing of emotional effects attributable to speeded lexical access, is more consistent with an indirect cortical route to the amygdala. Top-down linguistic modulation may play an important role for prioritized perception of emotions in words. Understanding the neural dynamics and interactions of emotion and language perception is important for selecting potent stimuli and devising effective training and/or treatment approaches for the alleviation of emotional dysfunction across a range of neuropsychiatric states. PMID:27877106
Pitts, Brandon J; Sarter, Nadine
2018-06-01
Objective: This research sought to determine whether people can perceive and process three nonredundant (and unrelated) signals in vision, hearing, and touch at the same time and how aging and concurrent task demands affect this ability. Background: Multimodal displays have been shown to improve multitasking and attention management; however, their potential limitations are not well understood. The majority of studies on multimodal information presentation have focused on the processing of only two concurrent and, most often, redundant cues by younger participants. Method: Two experiments were conducted in which younger and older adults detected and responded to a series of singles, pairs, and triplets of visual, auditory, and tactile cues in the absence (Experiment 1) and presence (Experiment 2) of an ongoing simulated driving task. Detection rates, response times, and driving task performance were measured. Results: Compared to younger participants, older adults showed longer response times and higher error rates in response to cues/cue combinations. Older participants often missed the tactile cue when three cues were combined. They sometimes falsely reported the presence of a visual cue when presented with a pair of auditory and tactile signals. Driving performance suffered most in the presence of cue triplets. Conclusion: People are more likely to miss information if more than two concurrent nonredundant signals are presented to different sensory channels. Application: The findings from this work help inform the design of multimodal displays and ensure their usefulness across different age groups and in various application domains.
Motor (but not auditory) attention affects syntactic choice.
Pokhoday, Mikhail; Scheepers, Christoph; Shtyrov, Yury; Myachykov, Andriy
2018-01-01
Understanding the determinants of syntactic choice in sentence production is a salient topic in psycholinguistics. Existing evidence suggests that syntactic choice results from an interplay between linguistic and non-linguistic factors, and a speaker's attention to the elements of a described event represents one such factor. Whereas multimodal accounts of attention suggest a role for different modalities in this process, existing studies examining attention effects in syntactic choice are primarily based on visual cueing paradigms. Hence, it remains unclear whether attentional effects on syntactic choice are limited to the visual modality or are indeed more general. This issue is addressed by the current study. Native English participants viewed and described line drawings of simple transitive events while their attention was directed to the location of the agent or the patient of the depicted event by means of either an auditory (monaural beep) or a motor (unilateral key press) lateral cue. Our results show an effect of cue location, with participants producing more passive-voice descriptions in the patient-cued conditions. Crucially, this cue location effect emerged in the motor-cue but not (or substantially less so) in the auditory-cue condition, as confirmed by a reliable interaction between cue location (agent vs. patient) and cue type (auditory vs. motor). Our data suggest that attentional effects on the speaker's syntactic choices are modality-specific and limited to the visual and motor, but not the auditory, domain.
Neural coding underlying the cue preference for celestial orientation
el Jundi, Basil; Warrant, Eric J.; Byrne, Marcus J.; Khaldy, Lana; Baird, Emily; Smolka, Jochen; Dacke, Marie
2015-01-01
Diurnal and nocturnal African dung beetles use celestial cues, such as the sun, the moon, and the polarization pattern, to roll dung balls along straight paths across the savanna. Although nocturnal beetles move in the same manner through the same environment as their diurnal relatives, they do so when light conditions are at least 1 million-fold dimmer. Here, we show, for the first time to our knowledge, that the celestial cue preference differs between nocturnal and diurnal beetles in a manner that reflects their contrasting visual ecologies. We also demonstrate how these cue preferences are reflected in the activity of compass neurons in the brain. At night, polarized skylight is the dominant orientation cue for nocturnal beetles. However, if we coerce them to roll during the day, they instead use a celestial body (the sun) as their primary orientation cue. Diurnal beetles, however, persist in using a celestial body for their compass, day or night. Compass neurons in the central complex of diurnal beetles are tuned only to the sun, whereas the same neurons in the nocturnal species switch exclusively to polarized light at lunar light intensities. Thus, these neurons encode the preferences for particular celestial cues and alter their weighting according to ambient light conditions. This flexible encoding of celestial cue preferences relative to the prevailing visual scenery provides a simple, yet effective, mechanism for enabling visual orientation at any light intensity. PMID:26305929
Cognitive processes facilitated by contextual cueing: evidence from event-related brain potentials.
Schankin, Andrea; Schubö, Anna
2009-05-01
Finding a target in repeated search displays is faster than finding the same target in novel ones (contextual cueing). It is assumed that the visual context (the arrangement of the distracting objects) is used to guide attention efficiently to the target location. Alternatively, other factors, e.g., facilitation in early visual processing or in response selection, may play a role as well. In a contextual cueing experiment, participant's electrophysiological brain activity was recorded. Participants identified the target faster and more accurately in repeatedly presented displays. In this condition, the N2pc, a component reflecting the allocation of visual-spatial attention, was enhanced, indicating that attention was allocated more efficiently to those targets. However, also response-related processes, reflected by the LRP, were facilitated, indicating that guidance of attention cannot account for the entire contextual cueing benefit.
Attentional bias in smokers: exposure to dynamic smoking cues in contemporary movies.
Lochbuehler, Kirsten; Voogd, Hubert; Scholte, Ron H J; Engels, Rutger C M E
2011-04-01
Research has shown that smokers have an attentional bias for pictorial smoking cues. The objective of the present study was to examine whether smokers also have an attentional bias for dynamic smoking cues in contemporary movies and therefore fixate more quickly, more often and for longer periods of time on dynamic smoking cues than non-smokers. By drawing upon established methods for assessing attentional biases for pictorial cues, we aimed to develop a new method for assessing attentional biases for dynamic smoking cues. We examined smokers' and non-smokers' eye movements while watching a movie clip by using eye-tracking technology. The sample consisted of 16 smoking and 17 non-smoking university students. Our results confirm the results of traditional pictorial attentional bias research. Smokers initially directed their gaze more quickly towards smoking-related cues (p = 0.01), focusing on them more often (p = 0.05) and for a longer duration (p = 0.01) compared with non-smokers. Thus, smoking cues in movies directly affect the attention of smokers. These findings indicate that the effects of dynamic smoking cues, in addition to other environmental smoking cues, need to be taken into account in smoking cessation therapies in order to increase successful smoking cessation and to prevent relapses.
Central and peripheral vision loss differentially affects contextual cueing in visual search.
Geringswald, Franziska; Pollmann, Stefan
2015-09-01
Visual search for targets in repeated displays is more efficient than search for the same targets in random distractor layouts. Previous work has shown that this contextual cueing is severely impaired under central vision loss. Here, we investigated whether central vision loss, simulated with gaze-contingent displays, prevents the incidental learning of contextual cues or the expression of learning, that is, the guidance of search by learned target-distractor configurations. Visual search with a central scotoma reduced contextual cueing both with respect to search times and gaze parameters. However, when the scotoma was subsequently removed, contextual cueing was observed in a comparable magnitude as for controls who had searched without scotoma simulation throughout the experiment. This indicated that search with a central scotoma did not prevent incidental context learning, but interfered with search guidance by learned contexts. We discuss the role of visuospatial working memory load as source of this interference. In contrast to central vision loss, peripheral vision loss was expected to prevent spatial configuration learning itself, because the restricted search window did not allow the integration of invariant local configurations with the global display layout. This expectation was confirmed in that visual search with a simulated peripheral scotoma eliminated contextual cueing not only in the initial learning phase with scotoma, but also in the subsequent test phase without scotoma.
Modeling the Development of Audiovisual Cue Integration in Speech Perception
Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.
2017-01-01
Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558
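The simulations described above learn categories from the distributional statistics of cues using Gaussian mixture models. A generic 1-D sketch of that idea, fitting a two-component mixture by expectation-maximization; the cue distributions below are illustrative assumptions, not the study's actual stimuli or implementation:

```python
import numpy as np

# Unsupervised discovery of two categories from the distribution of a
# single cue, via EM for a 1-D two-component Gaussian mixture.

def fit_gmm_1d(x, n_iter=200):
    x = np.asarray(x, dtype=float)
    mu = np.array([x.min(), x.max()])     # spread the initial means apart
    var = np.full(2, x.var()) + 1e-6
    pi = np.array([0.5, 0.5])             # mixing proportions
    for _ in range(n_iter):
        # E-step: responsibility of each component for each data point
        d = x[:, None] - mu[None, :]
        dens = pi * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from responsibility-weighted stats
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu[None, :])**2).sum(axis=0) / nk + 1e-6
        pi = nk / len(x)
    return mu, var, pi

# Two hypothetical cue clusters (e.g., an acoustic cue for two categories):
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(3.0, 0.5, 300)])
mu, var, pi = fit_gmm_1d(x)   # recovered means land near -2 and 3
```

Extending this to audiovisual learning amounts to fitting the mixture over joint (auditory, visual) cue vectors, with each component's covariance structure supplying the relative cue weights.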
The Role of Color in Search Templates for Real-world Target Objects.
Nako, Rebecca; Smith, Tim J; Eimer, Martin
2016-11-01
During visual search, target representations (attentional templates) control the allocation of attention to template-matching objects. The activation of new attentional templates can be prompted by verbal or pictorial target specifications. We measured the N2pc component of the ERP as a temporal marker of attentional target selection to determine the role of color signals in search templates for real-world search target objects that are set up in response to word or picture cues. On each trial run, a word cue (e.g., "apple") was followed by three search displays that contained the cued target object among three distractors. The selection of the first target was based on the word cue only, whereas selection of the two subsequent targets could be controlled by templates set up after the first visual presentation of the target (picture cue). In different trial runs, search displays either contained objects in their natural colors or monochromatic objects. These two display types were presented in different blocks (Experiment 1) or in random order within each block (Experiment 2). RTs were faster, and target N2pc components emerged earlier for the second and third display of each trial run relative to the first display, demonstrating that pictures are more effective than word cues in guiding search. N2pc components were triggered more rapidly for targets in the second and third display in trial runs with colored displays. This demonstrates that when visual target attributes are fully specified by picture cues, the additional presence of color signals in target templates facilitates the speed with which attention is allocated to template-matching objects. No such selection benefits for colored targets were found when search templates were set up in response to word cues. Experiment 2 showed that color templates activated by word cues can even impair the attentional selection of noncolored targets. Results provide new insights into the status of color during the guidance of visual search for real-world target objects. Color is a powerful guiding feature when the precise visual properties of these objects are known but seems to be less important when search targets are specified by word cues.
Lower region: a new cue for figure-ground assignment.
Vecera, Shaun P; Vogel, Edward K; Woodman, Geoffrey F
2002-06-01
Figure-ground assignment is an important visual process; humans recognize, attend to, and act on figures, not backgrounds. There are many visual cues for figure-ground assignment. A new cue to figure-ground assignment, called lower region, is presented: Regions in the lower portion of a stimulus array appear more figurelike than regions in the upper portion of the display. This phenomenon was explored, and it was demonstrated that the lower-region preference is not influenced by contrast, eye movements, or voluntary spatial attention. It was found that the lower region is defined relative to the stimulus display, linking the lower-region preference to pictorial depth perception cues. The results are discussed in terms of the environmental regularities that this new figure-ground cue may reflect.
Takao, Saki; Yamani, Yusuke; Ariga, Atsunori
2018-01-01
The direction of gaze automatically and exogenously guides visual spatial attention, a phenomenon termed the gaze-cueing effect. Although this effect arises even when the stimulus onset asynchrony (SOA) between a non-predictive gaze cue and the target is relatively long, no empirical research has examined the factors underlying this extended cueing effect. Two experiments compared the gaze-cueing effect at longer SOAs (700 ms) in Japanese and American participants. Cross-cultural studies on cognition suggest that Westerners tend to use a context-independent analytical strategy to process visual environments, whereas Asians use a context-dependent holistic approach. We hypothesized that Japanese participants would not demonstrate the gaze-cueing effect at longer SOAs because they are more sensitive to contextual information, such as the knowledge that the direction of a gaze is not predictive. Furthermore, we hypothesized that American participants would demonstrate the gaze-cueing effect at the long SOAs because they tend to follow gaze direction whether it is predictive or not. In Experiment 1, American participants demonstrated the gaze-cueing effect at the long SOA, indicating that their attention was driven by the central non-predictive gaze direction regardless of the SOA. In Experiment 2, Japanese participants demonstrated no gaze-cueing effect at the long SOA, suggesting that they exercised voluntary control of their attention, which inhibited the gaze-cueing effect at the long SOA. Our findings suggest that the control of visual spatial attention elicited by social stimuli systematically differs between American and Japanese individuals. PMID:29379457
Sensory modality of smoking cues modulates neural cue reactivity.
Yalachkov, Yavor; Kaiser, Jochen; Görres, Andreas; Seehaus, Arne; Naumer, Marcus J
2013-01-01
Behavioral experiments have demonstrated that the sensory modality of presentation modulates drug cue reactivity. The present study on nicotine addiction tested whether neural responses to smoking cues are modulated by the sensory modality of stimulus presentation. We measured brain activation using functional magnetic resonance imaging (fMRI) in 15 smokers and 15 nonsmokers while they viewed images of smoking paraphernalia and control objects and while they touched the same objects without seeing them. Haptically presented, smoking-related stimuli induced more pronounced neural cue reactivity than visual cues in the left dorsal striatum in smokers compared to nonsmokers. The severity of nicotine dependence correlated positively with the preference for haptically explored smoking cues in the left inferior parietal lobule/somatosensory cortex, right fusiform gyrus/inferior temporal cortex/cerebellum, hippocampus/parahippocampal gyrus, posterior cingulate cortex, and supplementary motor area. These observations are in line with the hypothesized role of the dorsal striatum for the expression of drug habits and the well-established concept of drug-related automatized schemata, since haptic perception is more closely linked to the corresponding object-specific action pattern than visual perception. Moreover, our findings demonstrate that with the growing severity of nicotine dependence, brain regions involved in object perception, memory, self-processing, and motor control exhibit an increasing preference for haptic over visual smoking cues. This difference was not found for control stimuli. Considering the sensory modality of the presented cues could serve to develop more reliable fMRI-specific biomarkers, more ecologically valid experimental designs, and more effective cue-exposure therapies of addiction.
Pursey, Kirrilly M.; Stanwell, Peter; Callister, Robert J.; Brain, Katherine; Collins, Clare E.; Burrows, Tracy L.
2014-01-01
Emerging evidence from recent neuroimaging studies suggests that specific food-related behaviors contribute to the development of obesity. The aim of this review was to report the neural responses to visual food cues, as assessed by functional magnetic resonance imaging (fMRI), in humans of differing weight status. Published studies to 2014 were retrieved and included if they used visual food cues, studied humans >18 years old, reported weight status, and included fMRI outcomes. Sixty studies were identified that investigated the neural responses of healthy weight participants (n = 26), healthy weight compared to obese participants (n = 17), and weight-loss interventions (n = 12). High-calorie food images were used in the majority of studies (n = 36), however, image selection justification was only provided in 19 studies. Obese individuals had increased activation of reward-related brain areas including the insula and orbitofrontal cortex in response to visual food cues compared to healthy weight individuals, and this was particularly evident in response to energy dense cues. Additionally, obese individuals were more responsive to food images when satiated. Meta-analysis of changes in neural activation post-weight loss revealed small areas of convergence across studies in brain areas related to emotion, memory, and learning, including the cingulate gyrus, lentiform nucleus, and precuneus. Differential activation patterns to visual food cues were observed between obese, healthy weight, and weight-loss populations. Future studies require standardization of nutrition variables and fMRI outcomes to enable more direct comparisons between studies. PMID:25988110
Fetsch, Christopher R; Deangelis, Gregory C; Angelaki, Dora E
2010-05-01
The perception of self-motion is crucial for navigation, spatial orientation and motor control. In particular, estimation of one's direction of translation, or heading, relies heavily on multisensory integration in most natural situations. Visual and nonvisual (e.g., vestibular) information can be used to judge heading, but each modality alone is often insufficient for accurate performance. It is not surprising, then, that visual and vestibular signals converge frequently in the nervous system, and that these signals interact in powerful ways at the level of behavior and perception. Early behavioral studies of visual-vestibular interactions consisted mainly of descriptive accounts of perceptual illusions and qualitative estimation tasks, often with conflicting results. In contrast, cue integration research in other modalities has benefited from the application of rigorous psychophysical techniques, guided by normative models that rest on the foundation of ideal-observer analysis and Bayesian decision theory. Here we review recent experiments that have attempted to harness these so-called optimal cue integration models for the study of self-motion perception. Some of these studies used nonhuman primate subjects, enabling direct comparisons between behavioral performance and simultaneously recorded neuronal activity. The results indicate that humans and monkeys can integrate visual and vestibular heading cues in a manner consistent with optimal integration theory, and that single neurons in the dorsal medial superior temporal area show striking correlates of the behavioral effects. This line of research and other applications of normative cue combination models should continue to shed light on mechanisms of self-motion perception and the neuronal basis of multisensory integration.
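The optimal cue integration framework the review describes makes a concrete quantitative prediction: each cue's estimate is weighted by its reliability (inverse variance), and the combined estimate is more reliable than either cue alone. A minimal sketch of this maximum-likelihood combination rule, with hypothetical heading values and variances chosen purely for illustration:

```python
def optimal_integration(est_vis, var_vis, est_vest, var_vest):
    """Reliability-weighted (maximum-likelihood) combination of two cues.

    Each weight is the cue's reliability (1/variance) normalized by the
    total reliability; the combined variance is the harmonic combination
    of the single-cue variances.
    """
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_vest)
    w_vest = 1 - w_vis
    combined_est = w_vis * est_vis + w_vest * est_vest
    combined_var = (var_vis * var_vest) / (var_vis + var_vest)
    return combined_est, combined_var

# Hypothetical heading estimates (degrees) and variances for the two cues:
est, var = optimal_integration(10.0, 4.0, 14.0, 8.0)
# The combined variance falls below both single-cue variances --
# the signature prediction tested in the behavioral studies reviewed.
```

The key empirical test is exactly the last comment: if subjects integrate optimally, their bimodal (visual + vestibular) discrimination thresholds should be lower than either unimodal threshold.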
Cross-modal links among vision, audition, and touch in complex environments.
Ferris, Thomas K; Sarter, Nadine B
2008-02-01
This study sought to determine whether performance effects of cross-modal spatial links that were observed in earlier laboratory studies scale to more complex environments and need to be considered in multimodal interface design. It also revisits the unresolved issue of cross-modal cuing asymmetries. Previous laboratory studies employing simple cues, tasks, and/or targets have demonstrated that the efficiency of processing visual, auditory, and tactile stimuli is affected by the modality, lateralization, and timing of surrounding cues. Very few studies have investigated these cross-modal constraints in the context of more complex environments to determine whether they scale and how complexity affects the nature of cross-modal cuing asymmetries. A microworld simulation of battlefield operations with a complex task set and meaningful visual, auditory, and tactile stimuli was used to investigate cuing effects for all cross-modal pairings. Significant asymmetric performance effects of cross-modal spatial links were observed. Auditory cues shortened response latencies for collocated visual targets, but visual cues did not do the same for collocated auditory targets. Responses to contralateral (rather than ipsilateral) targets were faster for tactually cued auditory targets and each visual-tactile cue-target combination, suggesting an inhibition-of-return effect. The spatial relationships between multimodal cues and targets significantly affect target response times in complex environments. The performance effects of cross-modal links and the observed cross-modal cuing asymmetries need to be examined in more detail and considered in future interface design. The findings from this study have implications for the design of multimodal and adaptive interfaces and for supporting attention management in complex, data-rich domains.
Short-term visual memory for location in depth: A U-shaped function of time.
Reeves, Adam; Lei, Quan
2017-10-01
Short-term visual memory was studied by displaying arrays of four or five numerals, each numeral in its own depth plane, followed after various delays by an arrow cue shown in one of the depth planes. Subjects reported the numeral at the depth cued by the arrow. Accuracy fell with increasing cue delay for the first 500 ms or so, and then recovered almost fully. This dipping pattern contrasts with the usual iconic decay observed for memory traces. The dip occurred with or without a verbal or color-shape retention load on working memory. In contrast, accuracy did not change with delay when a tonal cue replaced the arrow cue. We hypothesized that information concerning the depths of the numerals decays over time in sensory memory, but that cued recall is aided later on by transfer to a visual memory specialized for depth. This transfer is sufficiently rapid with a tonal cue to compensate for the sensory decay, but it is slowed by the need to tag the arrow cue's depth relative to the depths of the numerals, exposing a dip when sensation has decayed and transfer is not yet complete. A model with a fixed rate of sensory decay and varied transfer rates across individuals captures the dip as well as the cue modality effect.
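The two-process account the authors propose (a fast-decaying sensory trace plus a slower transfer into a durable, depth-specialized memory) naturally produces the observed dip-and-recovery pattern. A toy illustration of that idea, with time constants and accuracy bounds that are purely illustrative, not fitted values from the paper:

```python
import math

def recall_accuracy(t_ms, decay_tau=300.0, transfer_tau=600.0,
                    floor=0.25, ceiling=0.95):
    """Toy two-process model of cued recall at delay t_ms.

    Recall is supported by whichever source is stronger at the cue delay:
    a sensory trace that decays exponentially (fast), or a durable memory
    that fills in as transfer completes (slow). Their crossover produces
    a U-shaped dip at intermediate delays.
    """
    sensory = math.exp(-t_ms / decay_tau)           # fast sensory decay
    durable = 1.0 - math.exp(-t_ms / transfer_tau)  # slow transfer/consolidation
    support = max(sensory, durable)
    return floor + (ceiling - floor) * support

# Accuracy is high at very short delays, dips at intermediate delays
# (a few hundred ms), and recovers once transfer is complete.
```

With these illustrative parameters, accuracy at an intermediate delay (e.g., 300 ms) is lower than at both 0 ms and 2000 ms, mirroring the dipping pattern the paper reports for arrow cues; a faster transfer rate (as hypothesized for the tonal cue) would mask the dip.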
Eye Contact Is Crucial for Referential Communication in Pet Dogs.
Savalli, Carine; Resende, Briseida; Gaunet, Florence
2016-01-01
Dogs discriminate human direction-of-attention cues, such as body, gaze, head, and eye orientation, in several circumstances. Eye contact in particular seems to provide information on human readiness to communicate; when there is such an ostensive cue, dogs tend to follow human communicative gestures more often. However, little is known about how such cues influence the production of communicative signals (e.g. gaze alternation and sustained gaze) in dogs. In the current study, in order to obtain unreachable food, dogs needed to communicate with their owners in several conditions that differed according to the direction of the owners' visual cues, namely gaze, head, eyes, and availability to make eye contact. Results provided evidence that pet dogs did not rely on details of the owners' direction of visual attention. Instead, they relied on the whole combination of visual cues and especially on the owners' availability to make eye contact. Dogs increased visual communicative behaviors when they established eye contact with their owners, a different strategy from that of apes and baboons, which intensify vocalizations and gestures when the human is not visually attending. This difference in strategy is possibly due to their distinct status: domesticated vs. wild. Results are discussed taking into account the ecological relevance of the task, since pet dogs live in a human environment and face similar situations on a daily basis throughout their lives.
Social Beliefs and Visual Attention: How the Social Relevance of a Cue Influences Spatial Orienting.
Gobel, Matthias S; Tufft, Miles R A; Richardson, Daniel C
2018-05-01
We are highly tuned to each other's visual attention. Perceiving the eye or hand movements of another person can influence the timing of a saccade or the reach of our own. However, the explanation for such spatial orienting in interpersonal contexts remains disputed. Is it due to the social appearance of the cue-a hand or an eye-or due to its social relevance-a cue that is connected to another person with attentional and intentional states? We developed an interpersonal version of the Posner spatial cueing paradigm. Participants saw a cue and detected a target at the same or a different location, while interacting with an unseen partner. Participants were led to believe that the cue was either connected to the gaze location of their partner or was generated randomly by a computer (Experiment 1), and that their partner had higher or lower social rank while engaged in the same or a different task (Experiment 2). We found that spatial cue-target compatibility effects were greater when the cue related to a partner's gaze. This effect was amplified by the partner's social rank, but only when participants believed their partner was engaged in the same task. Taken together, this is strong evidence in support of the idea that spatial orienting is interpersonally attuned to the social relevance of the cue-whether the cue is connected to another person, who this person is, and what this person is doing-and does not exclusively rely on the social appearance of the cue. Visual attention is not only guided by the physical salience of one's environment but also by the mental representation of its social relevance. © 2017 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
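The spatial cue-target compatibility effect in this Posner-style paradigm is simply the mean response-time difference between incompatible trials (cue and target at different locations) and compatible trials (same location). A minimal sketch with hypothetical RT values, illustrating the paper's pattern of a larger effect when the cue is believed to be a partner's gaze:

```python
from statistics import mean

def compatibility_effect(rt_compatible_ms, rt_incompatible_ms):
    """Posner-style spatial cueing effect: mean RT on incompatible trials
    minus mean RT on compatible trials. Positive values indicate that
    attention was drawn to the cued location."""
    return mean(rt_incompatible_ms) - mean(rt_compatible_ms)

# Hypothetical RTs (ms) for illustration only -- not data from the study:
gaze_effect = compatibility_effect([310, 305, 300], [340, 345, 335])
random_effect = compatibility_effect([318, 322, 320], [330, 326, 328])
# A larger effect for the "partner's gaze" belief condition would indicate
# that spatial orienting tracks the cue's social relevance.
```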
Effects of reward on the accuracy and dynamics of smooth pursuit eye movements.
Brielmann, Aenne A; Spering, Miriam
2015-08-01
Reward modulates behavioral choices and biases goal-oriented behavior, such as eye or hand movements, toward locations or stimuli associated with higher rewards. We investigated reward effects on the accuracy and timing of smooth pursuit eye movements in 4 experiments. Eye movements were recorded in participants tracking a moving visual target on a computer monitor. Before target motion onset, a monetary reward cue indicated whether participants could earn money by tracking accurately, or whether the trial was unrewarded (Experiments 1 and 2, n = 11 each). Reward significantly improved eye-movement accuracy across different levels of task difficulty. Improvements were seen even in the earliest phase of the eye movement, within 70 ms of tracking onset, indicating that reward impacts visual-motor processing at an early level. We obtained similar findings when reward was not precued but explicitly associated with the pursuit target (Experiment 3, n = 16); critically, these results were not driven by stimulus prevalence or other factors such as preparation or motivation. Numerical cues (Experiment 4, n = 9) were not effective. (c) 2015 APA, all rights reserved.
Magnitude and duration of cue-induced craving for marijuana in volunteers with cannabis use disorder
Lundahl, Leslie H.; Greenwald, Mark K.
2016-01-01
Aims: Evaluate magnitude and duration of subjective and physiologic responses to neutral and marijuana (MJ)-related cues in cannabis dependent volunteers. Methods: 33 volunteers (17 male) who met DSM-IV criteria for Cannabis Abuse or Dependence were exposed to neutral (first) then MJ-related visual, auditory, olfactory and tactile cues. Mood, drug craving and physiology were assessed at baseline, post-neutral, post-MJ and 15-min post MJ cue exposure to determine magnitude of cue-responses. For a subset of participants (n=15; 9 male), measures of craving and physiology were collected also at 30-, 90-, and 150-min post-MJ cue to examine duration of cue-effects. Results: In cue-response magnitude analyses, visual analog scale (VAS) items craving for, urge to use, and desire to smoke MJ, Total and Compulsivity subscale scores of the Marijuana Craving Questionnaire, anxiety ratings, and diastolic blood pressure (BP) were significantly elevated following MJ vs. neutral cue exposure. In cue-response duration analyses, desire and urge to use MJ remained significantly elevated at 30-, 90- and 150-min post MJ-cue exposure, relative to baseline and neutral cues. Conclusions: Presentation of polysensory MJ cues increased MJ craving, anxiety and diastolic BP relative to baseline and neutral cues. MJ craving remained elevated up to 150-min after MJ cue presentation. This finding confirms that carry-over effects from drug cue presentation must be considered in cue reactivity studies. PMID:27436749
Encoding of reward expectation by monkey anterior insular neurons
Mizuhiki, Takashi; Richmond, Barry J.
2012-01-01
The insula, a cortical brain region that is known to encode information about autonomic, visceral, and olfactory functions, has recently been shown to encode information during reward-seeking tasks in both single neuronal recording and functional magnetic resonance imaging studies. To examine the reward-related activation, we recorded from 170 single neurons in anterior insula of 2 monkeys during a multitrial reward schedule task, where the monkeys had to complete a schedule of 1, 2, 3, or 4 trials to earn a reward. In one block of trials a visual cue indicated whether a reward would or would not be delivered in the current trial after the monkey successfully detected that a red spot turned green, and in other blocks the visual cue was random with respect to reward delivery. Over one-quarter of 131 responsive neurons were activated when the current trial would (certain or uncertain) be rewarded if performed correctly. These same neurons failed to respond in trials that were certain, as indicated by the cue, to be unrewarded. Another group of neurons responded when the reward was delivered, similar to results reported previously. The dynamics of population activity in anterior insula also showed strong signals related to knowing when a reward is coming. The most parsimonious explanation is that this activity codes for a type of expected outcome, where the expectation encompasses both certain and uncertain rewards. PMID:22402653
Freezing of Gait in Parkinson's Disease: An Overload Problem?
Beck, Eric N; Ehgoetz Martens, Kaylena A; Almeida, Quincy J
2015-01-01
Freezing of gait (FOG) is arguably the most severe symptom associated with Parkinson's disease (PD), and often occurs while performing dual tasks or approaching narrowed and cluttered spaces. While it is well known that visual cues alleviate FOG, it is not clear whether this effect is the result of cognitive or sensorimotor mechanisms. Nevertheless, the role of vision may be a critical link that might allow us to disentangle this question. Gaze behaviour has yet to be carefully investigated while freezers approach narrow spaces, thus the overall objective of this study was to explore the interaction between cognitive and sensory-perceptual influences on FOG. In experiment #1, if cognitive load is the underlying factor leading to FOG, then one might expect that a dual-task would elicit FOG episodes even in the presence of visual cues, since the load on attention would interfere with utilization of visual cues. Alternatively, if visual cues alleviate gait despite performance of a dual-task, then it may be more probable that sensory mechanisms are at play. In complement to this, the aim of experiment #2 was to further challenge the sensory systems, by removing vision of the lower limbs and thereby forcing participants to rely on other forms of sensory feedback rather than vision while walking toward the narrow space. Spatiotemporal aspects of gait, percentage of gaze fixation frequency and duration, as well as skin conductance levels were measured in freezers and non-freezers across both experiments. Results from experiment #1 indicated that although freezers and non-freezers both walked with worse gait while performing the dual-task, in freezers, gait was relieved by visual cues regardless of whether the cognitive demands of the dual-task were present. At baseline and while dual-tasking, freezers demonstrated a gaze behaviour that neglected the doorway and instead focused primarily on the pathway, a strategy that non-freezers adopted only when performing the dual-task. 
Interestingly, with the combination of visual cues and dual-task, freezers increased the frequency and duration of fixations toward the doorway, compared to non-freezers. These results suggest that although increasing demand on attention does significantly deteriorate gait in freezers, an increase in cognitive demand is not exclusively responsible for freezing (since visual cues were able to overcome any interference elicited by the dual-task). When vision of the lower limbs was removed in experiment #2, only the freezers' gait was affected. However, when visual cues were present, freezers' gait improved regardless of the dual-task. This gait behaviour was accompanied by a greater amount of time spent looking at the visual cues irrespective of the dual-task. Since removing vision of the lower limbs hindered gait even under low attentional demand, restricted sensory feedback may be an important factor in the mechanisms underlying FOG.
Gagliardo, A.; Odetti, F.; Ioalè, P.
2001-01-01
Whether pigeons use visual landmarks for orientation from familiar locations has been a subject of debate. By recording the directional choices of both anosmic and control pigeons while exiting from a circular arena we were able to assess the relevance of olfactory and visual cues for orientation from familiar sites. When the birds could see the surroundings, both anosmic and control pigeons were homeward oriented. When the view of the landscape was prevented by screens that surrounded the arena, the control pigeons exited from the arena approximately in the home direction, while the anosmic pigeons' distribution was not different from random. Our data suggest that olfactory and visual cues play a critical, but interchangeable, role for orientation at familiar sites. PMID:11571054
Alcohol-cue exposure effects on craving and attentional bias in underage college-student drinkers.
Ramirez, Jason J; Monti, Peter M; Colwill, Ruth M
2015-06-01
The effect of alcohol-cue exposure on eliciting craving has been well documented, and numerous theoretical models assert that craving is a clinically significant construct central to the motivation and maintenance of alcohol-seeking behavior. Furthermore, some theories propose a relationship between craving and attention, such that cue-induced increases in craving bias attention toward alcohol cues, which, in turn, perpetuates craving. This study examined the extent to which alcohol cues induce craving and bias attention toward alcohol cues among underage college-student drinkers. We designed within-subject cue-reactivity and visual-probe tasks to assess in vivo alcohol-cue exposure effects on craving and attentional bias on 39 undergraduate college drinkers (ages 18-20). Participants expressed greater subjective craving to drink alcohol following in vivo cue exposure to a commonly consumed beer compared with water exposure. Furthermore, following alcohol-cue exposure, participants exhibited greater attentional biases toward alcohol cues as measured by a visual-probe task. In addition to the cue-exposure effects on craving and attentional bias, within-subject differences in craving across sessions marginally predicted within-subject differences in attentional bias. Implications for both theory and practice are discussed. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Modeling human pilot cue utilization with applications to simulator fidelity assessment.
Zeyada, Y; Hess, R A
2000-01-01
An analytical investigation to model the manner in which pilots perceive and utilize visual, proprioceptive, and vestibular cues in a ground-based flight simulator was undertaken. Data from a NASA Ames Research Center vertical motion simulator study of a simple, single-degree-of-freedom rotorcraft bob-up/down maneuver were employed in the investigation. The study was part of a larger research effort that has the creation of a methodology for determining flight simulator fidelity requirements as its ultimate goal. The study utilized a closed-loop feedback structure of the pilot/simulator system that included the pilot, the cockpit inceptor, the dynamics of the simulated vehicle, and the motion system. With the exception of time delays that accrued in visual scene production in the simulator, visual scene effects were not included in this study. Pilot/vehicle analysis and fuzzy-inference identification were employed to study the changes in fidelity that occurred as the characteristics of the motion system were varied over five configurations. The data from three of the five pilots who participated in the experimental study were analyzed in the fuzzy-inference identification. Results indicate that both the analytical pilot/vehicle analysis and the fuzzy-inference identification can be used to identify changes in simulator fidelity for the task examined.
A bilateral advantage for maintaining objects in visual short term memory.
Holt, Jessica L; Delvenne, Jean-François
2015-01-01
Research has shown that attentional pre-cues can subsequently influence the transfer of information into visual short term memory (VSTM) (Schmidt, B., Vogel, E., Woodman, G., & Luck, S. (2002). Voluntary and automatic attentional control of visual working memory. Perception & Psychophysics, 64(5), 754-763). However, studies also suggest that those effects are constrained by the hemifield alignment of the pre-cues (Holt, J. L., & Delvenne, J.-F. (2014). A bilateral advantage in controlling access to visual short-term memory. Experimental Psychology, 61(2), 127-133), revealing better recall when distributed across hemifields relative to within a single hemifield (otherwise known as a bilateral field advantage). By manipulating the duration of the retention interval in a colour change detection task (1s, 3s), we investigated whether selective pre-cues can also influence how information is later maintained in VSTM. The results revealed that the pre-cues influenced the maintenance of the colours in VSTM, promoting consistent performance across retention intervals (Experiments 1 & 4). However, those effects were only shown when the pre-cues were directed to stimuli displayed across hemifields relative to stimuli within a single hemifield. Importantly, the results were not replicated when participants were required to memorise colours (Experiment 2) or locations (Experiment 3) in the absence of spatial pre-cues. Those findings strongly suggest that attentional pre-cues have a strong influence on both the transfer of information in VSTM and its subsequent maintenance, allowing bilateral items to better survive decay. Copyright © 2014 Elsevier B.V. All rights reserved.
Plotnik, Joshua M.; Pokorny, Jennifer J.; Keratimanochaya, Titiporn; Webb, Christine; Beronja, Hana F.; Hennessy, Alice; Hill, James; Hill, Virginia J.; Kiss, Rebecca; Maguire, Caitlin; Melville, Beckett L.; Morrison, Violet M. B.; Seecoomar, Dannah; Singer, Benjamin; Ukehaxhaj, Jehona; Vlahakis, Sophia K.; Ylli, Dora; Clayton, Nicola S.; Roberts, John; Fure, Emilie L.; Duchatelier, Alicia P.; Getz, David
2013-01-01
Recent research suggests that domesticated species – due to artificial selection by humans for specific, preferred behavioral traits – are better than wild animals at responding to visual cues given by humans about the location of hidden food. Although this seems to be supported by studies on a range of domesticated (including dogs, goats and horses) and wild (including wolves and chimpanzees) animals, there is also evidence that exposure to humans positively influences the ability of both wild and domesticated animals to follow these same cues. Here, we test the performance of Asian elephants (Elephas maximus) on an object choice task that provides them with visual-only cues given by humans about the location of hidden food. Captive elephants are interesting candidates for investigating how both domestication and human exposure may impact cue-following as they represent a non-domesticated species with almost constant human interaction. As a group, the elephants (n = 7) in our study were unable to follow pointing, body orientation or a combination of both as honest signals of food location. They were, however, able to follow vocal commands with which they were already familiar in a novel context, suggesting the elephants are able to follow cues if they are sufficiently salient. Although the elephants’ inability to follow the visual cues provides partial support for the domestication hypothesis, an alternative explanation is that elephants may rely more heavily on other sensory modalities, specifically olfaction and audition. Further research will be needed to rule out this alternative explanation. PMID:23613804
Proximal versus distal cue utilization in spatial navigation: the role of visual acuity?
Carman, Heidi M; Mactutus, Charles F
2002-09-01
Proximal versus distal cue use in the Morris water maze is a widely accepted strategy for the dissociation of various problems affecting spatial navigation in rats such as aging, head trauma, lesions, and pharmacological or hormonal agents. Of the limited number of ontogenetic rat studies conducted, the majority have approached the problem of preweanling spatial navigation through a similar proximal-distal dissociation. An implicit assumption among all of these studies has been that the animal's visual system is sufficient to permit robust spatial navigation. We challenged this assumption and have addressed the role of visual acuity in spatial navigation in the preweanling Fischer 344-N rat by training animals to locate a visible (proximal) or hidden (distal) platform using double or null extramaze cues within the testing environment. All pups demonstrated improved performance across training, but animals presented with a visible platform, regardless of extramaze cues, simultaneously reached asymptotic performance levels; animals presented with a hidden platform, dependent upon location of extramaze cues, differentially reached asymptotic performance levels. Probe trial performance, defined by quadrant time and platform crossings, revealed that distal-double-cue pups demonstrated spatial navigational ability superior to that of the remaining groups. These results suggest that a pup's ability to spatially navigate a hidden platform is dependent on not only its response repertoire and task parameters, but also its visual acuity, as determined by the extramaze cue location within the testing environment. The standard hidden versus visible platform dissociation may not be a satisfactory strategy for the control of potential sensory deficits.
ERIC Educational Resources Information Center
Gawryszewski, Luiz G.; Carreiro, Luiz Renato R.; Magalhaes, Fabio V.
2005-01-01
A non-informative cue (C) elicits an inhibition of manual reaction time (MRT) to a visual target (T). We report an experiment to examine if the spatial distribution of this inhibitory effect follows Polar or Cartesian coordinate systems. C appeared at one out of 8 isoeccentric (7°) positions, the C-T angular distances (in polar…
Visual attention to food cues in obesity: an eye-tracking study.
Doolan, Katy J; Breslin, Gavin; Hanna, Donncha; Murphy, Kate; Gallagher, Alison M
2014-12-01
Based on the theory of incentive sensitization, the aim of this study was to investigate differences in attentional processing of food-related visual cues between normal-weight and overweight/obese males and females. Twenty-six normal-weight (14M, 12F) and 26 overweight/obese (14M, 12F) adults completed a visual probe task and an eye-tracking paradigm. Reaction times and eye movements to food and control images were collected during both a fasted and fed condition in a counterbalanced design. Participants had greater visual attention towards high-energy-density food images compared to low-energy-density food images regardless of hunger condition. This was most pronounced in overweight/obese males who had significantly greater maintained attention towards high-energy-density food images when compared with their normal-weight counterparts however no between weight group differences were observed for female participants. High-energy-density food images appear to capture visual attention more readily than low-energy-density food images. Results also suggest the possibility of an altered visual food cue-associated reward system in overweight/obese males. Attentional processing of food cues may play a role in eating behaviors thus should be taken into consideration as part of an integrated approach to curbing obesity. © 2014 The Obesity Society.
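The visual-probe measure in the abstract above rests on a simple computation: attention toward food cues is inferred from faster responses when the probe replaces a food image than when it replaces a control image. A minimal illustrative sketch (trial data and field names are assumptions, not from the study):

```python
# Hedged sketch of a visual-probe attentional-bias score; the trial records
# below are invented for illustration.

def bias_score(trials):
    """Mean RT on incongruent trials (probe replaces the control image)
    minus mean RT on congruent trials (probe replaces the food image).
    A positive score indicates attention was drawn toward the food cue."""
    congruent = [t["rt_ms"] for t in trials if t["probe_at_food"]]
    incongruent = [t["rt_ms"] for t in trials if not t["probe_at_food"]]
    return (sum(incongruent) / len(incongruent)
            - sum(congruent) / len(congruent))

trials = [
    {"probe_at_food": True, "rt_ms": 420},
    {"probe_at_food": True, "rt_ms": 410},
    {"probe_at_food": False, "rt_ms": 450},
    {"probe_at_food": False, "rt_ms": 460},
]
print(bias_score(trials))  # 40.0 -> faster when the probe replaced food
```

The same difference score generalizes to any cue category; only the congruent/incongruent labeling changes.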
Gharat, Amol; Baker, Curtis L
2017-01-25
Many of the neurons in early visual cortex are selective for the orientation of boundaries defined by first-order cues (luminance) as well as second-order cues (contrast, texture). The neural circuit mechanism underlying this selectivity is still unclear, but some studies have proposed that it emerges from spatial nonlinearities of subcortical Y cells. To understand how inputs from the Y-cell pathway might be pooled to generate cue-invariant receptive fields, we recorded visual responses from single neurons in cat Area 18 using linear multielectrode arrays. We measured responses to drifting and contrast-reversing luminance gratings as well as contrast modulation gratings. We found that a large fraction of these neurons have nonoriented responses to gratings, similar to those of subcortical Y cells: they respond at the second harmonic (F2) to high-spatial frequency contrast-reversing gratings and at the first harmonic (F1) to low-spatial frequency drifting gratings ("Y-cell signature"). For a given neuron, spatial frequency tuning for linear (F1) and nonlinear (F2) responses is quite distinct, similar to orientation-selective cue-invariant neurons. Also, these neurons respond to contrast modulation gratings with selectivity for the carrier (texture) spatial frequency and, in some cases, orientation. Their receptive field properties suggest that they could serve as building blocks for orientation-selective cue-invariant neurons. We propose a circuit model that combines ON- and OFF-center cortical Y-like cells in an unbalanced push-pull manner to generate orientation-selective, cue-invariant receptive fields. A significant fraction of neurons in early visual cortex have specialized receptive fields that allow them to selectively respond to the orientation of boundaries that are invariant to the cue (luminance, contrast, texture, motion) that defines them. However, the neural mechanism to construct such versatile receptive fields remains unclear. 
Using multielectrode recording, we found a large fraction of neurons in early visual cortex with receptive fields not selective for orientation that have spatial nonlinearities like those of subcortical Y cells. These are strong candidates for building cue-invariant orientation-selective neurons; we present a neural circuit model that pools such neurons in an imbalanced "push-pull" manner, to generate orientation-selective cue-invariant receptive fields. Copyright © 2017 the authors.
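The "Y-cell signature" described in the abstract above (F2-dominated responses to contrast-reversing gratings, F1-dominated responses to drifting gratings) amounts to measuring harmonic amplitudes of a cycle-averaged response. A minimal sketch, not the authors' analysis code; the simulated rate profile is an assumption:

```python
import math

# Hedged illustration: estimating F1 and F2 components from a firing-rate
# histogram covering one stimulus cycle, via a single-bin DFT.

def harmonic_amplitude(rate, k):
    """Amplitude of the k-th harmonic of a rate histogram over one cycle,
    normalized so a pure k-th-harmonic sinusoid of amplitude A returns A."""
    n = len(rate)
    re = sum(r * math.cos(2 * math.pi * k * i / n) for i, r in enumerate(rate))
    im = sum(r * math.sin(2 * math.pi * k * i / n) for i, r in enumerate(rate))
    return 2 * math.hypot(re, im) / n

n = 64
# Simulated frequency-doubled response: two peaks per stimulus cycle, as for
# a Y-like cell driven by a high-spatial-frequency contrast-reversing grating.
rate = [10 + 8 * math.cos(2 * 2 * math.pi * i / n) for i in range(n)]
f1 = harmonic_amplitude(rate, 1)
f2 = harmonic_amplitude(rate, 2)
print(f2 > f1)  # True: an F2-dominated profile marks the Y-cell signature
```

A drifting grating would instead modulate the response once per cycle, yielding F1 > F2 from the same analysis.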
Crossmodal and Incremental Perception of Audiovisual Cues to Emotional Speech
ERIC Educational Resources Information Center
Barkhuysen, Pashiera; Krahmer, Emiel; Swerts, Marc
2010-01-01
In this article we report on two experiments about the perception of audiovisual cues to emotional speech. The article addresses two questions: (1) how do visual cues from a speaker's face to emotion relate to auditory cues, and (2) what is the recognition speed for various facial cues to emotion? Both experiments reported below are based on tests…
Early Visual Cortex Dynamics during Top-Down Modulated Shifts of Feature-Selective Attention.
Müller, Matthias M; Trautmann, Mireille; Keitel, Christian
2016-04-01
Shifting attention from one color to another color or from color to another feature dimension such as shape or orientation is imperative when searching for a certain object in a cluttered scene. Most attention models that emphasize feature-based selection implicitly assume that all shifts in feature-selective attention underlie identical temporal dynamics. Here, we recorded time courses of behavioral data and steady-state visual evoked potentials (SSVEPs), an objective electrophysiological measure of neural dynamics in early visual cortex to investigate temporal dynamics when participants shifted attention from color or orientation toward color or orientation, respectively. SSVEPs were elicited by four random dot kinematograms that flickered at different frequencies. Each random dot kinematogram was composed of dashes that uniquely combined two features from the dimensions color (red or blue) and orientation (slash or backslash). Participants were cued to attend to one feature (such as color or orientation) and respond to coherent motion targets of the to-be-attended feature. We found that shifts toward color occurred earlier after the shifting cue compared with shifts toward orientation, regardless of the original feature (i.e., color or orientation). This was paralleled in SSVEP amplitude modulations as well as in the time course of behavioral data. Overall, our results suggest different neural dynamics during shifts of attention from color and orientation and the respective shifting destinations, namely, either toward color or toward orientation.
Attentional modulation of cell-class specific gamma-band synchronization in awake monkey area V4
Vinck, Martin; Womelsdorf, Thilo; Buffalo, Elizabeth A.; Desimone, Robert; Fries, Pascal
2013-01-01
Selective visual attention is subserved by selective neuronal synchronization, entailing precise orchestration among excitatory and inhibitory cells. We tentatively identified these as broad (BS) and narrow spiking (NS) cells and analyzed their synchronization to the local field potential in two macaque monkeys performing a selective visual attention task. Across cells, gamma phases scattered widely but were unaffected by stimulation or attention. During stimulation, NS cells lagged BS cells on average by ~60° and gamma synchronized twice as strongly. Attention enhanced and reduced the gamma locking of strongly and weakly activated cells, respectively. During a pre-stimulus attentional cue period, BS cells showed weak gamma synchronization, while NS cells gamma synchronized as strongly as with visual stimulation. These analyses reveal the cell-type specific dynamics of the gamma cycle in macaque visual cortex and suggest that attention affects neurons differentially depending on cell type and activation level. PMID:24267656
Emotional modulation of body-selective visual areas.
Peelen, Marius V; Atkinson, Anthony P; Andersson, Frederic; Vuilleumier, Patrik
2007-12-01
Emotionally expressive faces have been shown to modulate activation in visual cortex, including face-selective regions in ventral temporal lobe. Here, we tested whether emotionally expressive bodies similarly modulate activation in body-selective regions. We show that dynamic displays of bodies with various emotional expressions vs neutral bodies, produce significant activation in two distinct body-selective visual areas, the extrastriate body area and the fusiform body area. Multi-voxel pattern analysis showed that the strength of this emotional modulation was related, on a voxel-by-voxel basis, to the degree of body selectivity, while there was no relation with the degree of selectivity for faces. Across subjects, amygdala responses to emotional bodies positively correlated with the modulation of body-selective areas. Together, these results suggest that emotional cues from body movements produce topographically selective influences on category-specific populations of neurons in visual cortex, and these increases may implicate discrete modulatory projections from the amygdala.
Visual cues that are effective for contextual saccade adaptation.
Azadi, Reza; Harwood, Mark R
2014-06-01
The accuracy of saccades, as maintained by saccade adaptation, has been shown to be context dependent: able to have different amplitude movements to the same retinal displacement dependent on motor contexts such as orbital starting location. There is conflicting evidence as to whether purely visual cues also affect contextual saccade adaptation and, if so, what function this might serve. We tested what visual cues might evoke contextual adaptation. Over 5 experiments, 78 naive subjects made saccades to circularly moving targets, which stepped outward or inward during the saccade depending on target movement direction, speed, or color and shape. To test if the movement or context postsaccade were critical, we stopped the postsaccade target motion (experiment 4) or neutralized the contexts by equating postsaccade target speed to an intermediate value (experiment 5). We found contextual adaptation in all conditions except those defined by color and shape. We conclude that some, but not all, visual cues before the saccade are sufficient for contextual adaptation. We conjecture that this visual contextuality functions to allow for different motor states for different coordinated movement patterns, such as coordinated saccade and pursuit motor planning. Copyright © 2014 the American Physiological Society.
Angular declination and the dynamic perception of egocentric distance.
Gajewski, Daniel A; Philbeck, John W; Wirtz, Philip W; Chichka, David
2014-02-01
The extraction of the distance between an object and an observer is fast when angular declination is informative, as it is with targets placed on the ground. To what extent does angular declination drive performance when viewing time is limited? Participants judged target distances in a real-world environment with viewing durations ranging from 36 to 220 ms. An important role for angular declination was supported by experiments showing that the cue provides information about egocentric distance even on the very first glimpse, and that it supports a sensitive response to distance in the absence of other useful cues. Performance was better at 220-ms viewing durations than for briefer glimpses, suggesting that the perception of distance is dynamic even within the time frame of a typical eye fixation. Critically, performance in limited viewing trials was better when preceded by a 15-s preview of the room without a designated target. The results indicate that the perception of distance is powerfully shaped by memory from prior visual experience with the scene. A theoretical framework for the dynamic perception of distance is presented. PsycINFO Database Record (c) 2014 APA, all rights reserved.
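The geometry that makes angular declination informative for ground-plane targets is compact: under a flat-ground assumption, egocentric distance follows from eye height and the angle below eye level. A minimal sketch with illustrative values (not the study's stimuli or data):

```python
import math

# Hedged illustration of the angular-declination distance relation for a
# target on the ground plane; eye height and angle values are assumptions.

def distance_from_declination(eye_height_m, declination_deg):
    """Ground distance to a target viewed from eye_height_m above the
    ground at declination_deg below eye level (flat-ground assumption)."""
    return eye_height_m / math.tan(math.radians(declination_deg))

# A 1.6 m eye height and a 45-degree declination place the target 1.6 m
# away; shallower declinations imply farther targets.
print(round(distance_from_declination(1.6, 45.0), 2))  # 1.6
```

The relation also shows why the cue degrades near the horizon: as the declination approaches zero, small angular errors map onto large distance errors.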
Twelfth Annual Conference on Manual Control
NASA Technical Reports Server (NTRS)
Wempe, T. E.
1976-01-01
Main topics discussed cover multi-task decision making, attention allocation and workload measurement, displays and controls, nonvisual displays, tracking and other psychomotor tasks, automobile driving, handling qualities and pilot ratings, remote manipulation, system identification, control models, and motion and visual cues. Sixty-five papers are included with presentations on results of analytical studies to develop and evaluate human operator models for a range of control task, vehicle dynamics and display situations; results of tests of physiological control systems and applications to medical problems; and on results of simulator and flight tests to determine display, control and dynamics effects on operator performance and workload for aircraft, automobile, and remote control systems.
NASA Technical Reports Server (NTRS)
Carr, Peter C.; Mckissick, Burnell T.
1988-01-01
A joint experiment to investigate simulator validation and cue fidelity was conducted by the Dryden Flight Research Facility of NASA Ames Research Center (Ames-Dryden) and NASA Langley Research Center. The primary objective was to validate the use of a closed-loop pilot-vehicle mathematical model as an analytical tool for optimizing the tradeoff between simulator fidelity requirements and simulator cost. The validation process includes comparing model predictions with simulation and flight test results to evaluate various hypotheses for differences in motion and visual cues and information transfer. A group of five pilots flew air-to-air tracking maneuvers in the Langley differential maneuvering simulator and visual motion simulator and in an F-14 aircraft at Ames-Dryden. The simulators used motion and visual cueing devices including a g-seat, a helmet loader, wide field-of-view horizon, and a motion base platform.
Huynh, Duong L; Tripathy, Srimant P; Bedell, Harold E; Ögmen, Haluk
2015-01-01
Human memory is content addressable; i.e., contents of the memory can be accessed using partial information about the bound features of a stored item. In this study, we used a cross-feature cuing technique to examine how the human visual system encodes, binds, and retains information about multiple stimulus features within a set of moving objects. We sought to characterize the roles of three different features (position, color, and direction of motion, the latter two of which are processed preferentially within the ventral and dorsal visual streams, respectively) in the construction and maintenance of object representations. We investigated the extent to which these features are bound together across the following processing stages: during stimulus encoding, sensory (iconic) memory, and visual short-term memory. Whereas all features examined here can serve as cues for addressing content, their effectiveness shows asymmetries and varies according to cue-report pairings and the stage of information processing and storage. Position-based indexing theories predict that position should be more effective as a cue compared to other features. While we found a privileged role for position as a cue at the stimulus-encoding stage, position was not the privileged cue at the sensory and visual short-term memory stages. Instead, the pattern that emerged from our findings is one that mirrors the parallel processing streams in the visual system. This stream-specific binding and cuing effectiveness manifests itself in all three stages of information processing examined here. Finally, we find that the Leaky Flask model proposed in our previous study is applicable to all three features.
Hollands, Kristen L; Pelton, Trudy A; Wimperis, Andrew; Whitham, Diane; Tan, Wei; Jowett, Sue; Sackley, Catherine M; Wing, Alan M; Tyson, Sarah F; Mathias, Jonathan; Hensman, Marianne; van Vliet, Paulette M
2015-01-01
Given the importance of vision in the control of walking and evidence indicating varied practice of walking improves mobility outcomes, this study sought to examine the feasibility and preliminary efficacy of varied walking practice in response to visual cues, for the rehabilitation of walking following stroke. This 3 arm parallel, multi-centre, assessor blind, randomised control trial was conducted within outpatient neurorehabilitation services. Community dwelling stroke survivors with walking speed <0.8 m/s, lower limb paresis and no severe visual impairments. Over-ground visual cue training (O-VCT), Treadmill based visual cue training (T-VCT), and Usual care (UC) delivered by physiotherapists twice weekly for 8 weeks. Participants were randomised using computer generated random permutated balanced blocks of randomly varying size. Recruitment, retention, adherence, adverse events and mobility and balance were measured before randomisation, post-intervention and at four weeks follow-up. Fifty-six participants took part (18 T-VCT, 19 O-VCT, 19 UC). Thirty-four completed treatment and follow-up assessments. Of the participants that completed, adherence was good, with 16 treatments provided over a median of 8.4, 7.5 and 9 weeks for T-VCT, O-VCT and UC respectively. No adverse events were reported. Post-treatment improvements in walking speed, symmetry, balance and functional mobility were seen in all treatment arms. Outpatient based treadmill and over-ground walking adaptability practice using visual cues are feasible and may improve mobility and balance. Future studies should continue a carefully phased approach using identified methods to improve retention. Clinicaltrials.gov NCT01600391.
van Moorselaar, Dirk; Olivers, Christian N L; Theeuwes, Jan; Lamme, Victor A F; Sligte, Ilja G
2015-11-01
Visual short-term memory (VSTM) performance is enhanced when the to-be-tested item is cued after encoding. This so-called retro-cue benefit is typically accompanied by a cost for the noncued items, suggesting that information is lost from VSTM upon presentation of a retrospective cue. Here we assessed whether noncued items can be restored to VSTM when made relevant again by a subsequent second cue. We presented either 1 or 2 consecutive retro-cues (80% valid) during the retention interval of a change-detection task. Relative to no cue, a valid cue increased VSTM capacity by 2 items, while an invalid cue decreased capacity by 2. Importantly, when a second, valid cue followed an invalid cue, capacity regained 2 items, so that performance was back on par. In addition, when the second cue was also invalid, there was no extra loss of information from VSTM, suggesting that those items that survived a first invalid cue automatically also survived a second. We conclude that these results are in support of a very versatile VSTM system, in which memoranda adopt different representational states depending on whether they are deemed relevant now, in the future, or not at all. We discuss a neural model that is consistent with this conclusion. (c) 2015 APA, all rights reserved.
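Capacity statements like "a valid cue increased VSTM capacity by 2 items" typically come from a standard change-detection estimator, Cowan's K = set size × (hit rate − false-alarm rate). A minimal sketch with invented numbers (not the study's data):

```python
# Hedged illustration of Cowan's K; the set size and response rates below
# are assumptions chosen for a clean example, not results from the study.

def cowans_k(set_size, hit_rate, false_alarm_rate):
    """Estimated number of items held in visual short-term memory."""
    return set_size * (hit_rate - false_alarm_rate)

# e.g., 8-item displays: a validly cued condition vs. an invalidly cued one
valid_k = cowans_k(8, 0.875, 0.125)    # 6.0 items
invalid_k = cowans_k(8, 0.625, 0.375)  # 2.0 items
print(valid_k - invalid_k)  # 4.0
```

The false-alarm correction matters: without it, a liberal response bias would inflate the apparent number of stored items.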
Heuristics of reasoning and analogy in children's visual perspective taking.
Yaniv, I; Shatz, M
1990-10-01
We propose that children's reasoning about others' visual perspectives is guided by simple heuristics based on a perceiver's line of sight and salient features of the object met by that line. In 3 experiments employing a 2-perceiver analogy task, children aged 3-6 were generally better able to reproduce a perceiver's perspective if a visual cue in the perceiver's line of sight sufficed to distinguish it from alternatives. Children had greater difficulty when the task hinged on attending to configural cues. Availability of distinctive cues affixed on the objects' sides facilitated solution of the symmetrical orientations. These and several other related findings reported in the literature are traced to children's reliance on heuristics of reasoning.
Inhibition of return shortens perceived duration of a brief visual event.
Osugi, Takayuki; Takeda, Yuji; Murakami, Ikuya
2016-11-01
We investigated the influence of attentional inhibition on the perceived duration of a brief visual event. Although attentional capture by an exogenous cue is known to prolong the perceived duration of an attended visual event, it remains unclear whether time perception is also affected by subsequent attentional inhibition at the location previously cued by an exogenous cue, an attentional phenomenon known as inhibition of return. In this study, we combined spatial cuing and duration judgment. After one second from the appearance of an uninformative peripheral cue either to the left or to the right, a target appeared at a cued side in one-third of the trials, which indeed yielded inhibition of return, and at the opposite side in another one-third of the trials. In the remaining trials, a cue appeared at a central box and one second later, a target appeared at either the left or right side. The target at the previously cued location was perceived to last shorter than the target presented at the opposite location, and shorter than the target presented after the central cue presentation. Therefore, attentional inhibition produced by a classical paradigm of inhibition of return decreased the perceived duration of a brief visual event. Copyright © 2016 Elsevier Ltd. All rights reserved.
Altvater-Mackensen, Nicole; Mani, Nivedita; Grossmann, Tobias
2016-02-01
Recent studies suggest that infants' audiovisual speech perception is influenced by articulatory experience (Mugitani et al., 2008; Yeung & Werker, 2013). The current study extends these findings by testing if infants' emerging ability to produce native sounds in babbling impacts their audiovisual speech perception. We tested 44 6-month-olds on their ability to detect mismatches between concurrently presented auditory and visual vowels and related their performance to their productive abilities and later vocabulary size. Results show that infants' ability to detect mismatches between auditory and visually presented vowels differs depending on the vowels involved. Furthermore, infants' sensitivity to mismatches is modulated by their current articulatory knowledge and correlates with their vocabulary size at 12 months of age. This suggests that, aside from infants' ability to match nonnative audiovisual cues (Pons et al., 2009), their ability to match native auditory and visual cues continues to develop during the first year of life. Our findings point to a potential role of salient vowel cues and productive abilities in the development of audiovisual speech perception, and further indicate a relation between infants' early sensitivity to audiovisual speech cues and their later language development. PsycINFO Database Record (c) 2016 APA, all rights reserved.
Multimodal cuing of autobiographical memory in semantic dementia.
Greenberg, Daniel L; Ogar, Jennifer M; Viskontas, Indre V; Gorno Tempini, Maria Luisa; Miller, Bruce; Knowlton, Barbara J
2011-01-01
Individuals with semantic dementia (SD) have impaired autobiographical memory (AM), but the extent of the impairment has been controversial. According to one report (Westmacott, Leach, Freedman, & Moscovitch, 2001), patient performance was better when visual cues were used instead of verbal cues; however, the visual cues used in that study (family photographs) provided more retrieval support than do the word cues that are typically used in AM studies. In the present study, we sought to disentangle the effects of retrieval support and cue modality. We cued AMs of 5 patients with SD and 5 controls with words, simple pictures, and odors. Memories were elicited from childhood, early adulthood, and recent adulthood; they were scored for level of detail and episodic specificity. The patients were impaired across all time periods and stimulus modalities. Within the patient group, words and pictures were equally effective as cues (Friedman test; χ² = 0.25, p = .61), whereas odors were less effective than both words and pictures (for words vs. odors, χ² = 7.83, p = .005; for pictures vs. odors, χ² = 6.18, p = .01). There was no evidence of a temporal gradient in either group (for patients with SD, χ² = 0.24, p = .89; for controls, χ² < 2.07, p = .35). Once the effect of retrieval support is equated across stimulus modalities, there is no evidence for an advantage of visual cues over verbal cues. The greater impairment for olfactory cues presumably reflects degeneration of anterior temporal regions that support olfactory memory. (c) 2010 APA, all rights reserved.
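The within-group comparisons in the abstract above use the Friedman test for k related samples. As a hedged illustration only (the detail scores below are invented for the sketch, not the study's data), the Friedman chi-square statistic can be computed in plain Python:

```python
def friedman_statistic(samples):
    """Friedman chi-square for k related samples (average ranks for ties)."""
    k = len(samples)        # number of conditions (e.g., words, pictures, odors)
    n = len(samples[0])     # number of subjects
    rank_sums = [0.0] * k
    for subj in range(n):
        scores = [samples[j][subj] for j in range(k)]
        order = sorted(range(k), key=lambda j: scores[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            # group tied scores and assign them their average rank
            while j + 1 < k and scores[order[j + 1]] == scores[order[i]]:
                j += 1
            avg_rank = (i + j) / 2 + 1
            for m in range(i, j + 1):
                ranks[order[m]] = avg_rank
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    # Q = 12/(n*k*(k+1)) * sum(R_j^2) - 3*n*(k+1)
    return (12.0 / (n * k * (k + 1))) * sum(r * r for r in rank_sums) \
        - 3 * n * (k + 1)

# Hypothetical detail scores for 5 subjects under three cue modalities
words    = [12, 9, 14, 7, 10]
pictures = [11, 10, 13, 8, 9]
odors    = [6, 4, 8, 3, 5]
print(friedman_statistic([words, pictures, odors]))  # → 7.6
```

With k = 3 conditions, values above the chi-square critical value of 5.99 (df = 2, alpha = .05) indicate a significant modality effect, the same decision rule underlying the reported comparisons.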
Different effects of color-based and location-based selection on visual working memory.
Li, Qi; Saiki, Jun
2015-02-01
In the present study, we investigated how feature- and location-based selection influences visual working memory (VWM) encoding and maintenance. In Experiment 1, cue type (color, location) and cue timing (precue, retro-cue) were manipulated in a change detection task. The stimuli were color-location conjunction objects, and binding memory was tested. We found a significantly greater effect for color precues than for either color retro-cues or location precues, but no difference between location pre- and retro-cues, consistent with previous studies (e.g., Griffin & Nobre in Journal of Cognitive Neuroscience, 15, 1176-1194, 2003). We also found no difference between location and color retro-cues. Experiment 2 replicated the color precue advantage with more complex color-shape-location conjunction objects. Only one retro-cue effect was different from that in Experiment 1: Color retro-cues were significantly less effective than location retro-cues in Experiment 2, which may relate to a structural property of multidimensional VWM representations. In Experiment 3, a visual search task was used, and the result of a greater location than color precue effect suggests that the color precue advantage in a memory task is related to the modulation of VWM encoding rather than of sensation and perception. Experiment 4, using a task that required only memory for individual features but not for feature bindings, further confirmed that the color precue advantage is specific to binding memory. Together, these findings reveal new aspects of the interaction between attention and VWM and provide potentially important implications for the structural properties of VWM representations.
Nesterova, Anna P; Chiffard, Jules; Couchoux, Charline; Bonadonna, Francesco
2013-04-15
King penguins (Aptenodytes patagonicus) live in large and densely populated colonies, where navigation can be challenging because of the presence of many conspecifics that could obstruct locally available cues. Our previous experiments demonstrated that visual cues were important but not essential for king penguin chicks' homing. The main objective of this study was to investigate the importance of non-visual cues, such as magnetic and acoustic cues, for chicks' orientation and short-range navigation. In a series of experiments, the chicks were individually displaced from the colony to an experimental arena where they were released under different conditions. In the magnetic experiments, a strong magnet was attached to the chicks' heads. Trials were conducted in daylight and at night to test the relative importance of visual and magnetic cues. Our results showed that when the geomagnetic field around the chicks was modified, neither their orientation in the arena nor their overall ability to home was affected. In a low sound experiment we limited the acoustic cues available to the chicks by putting ear pads over their ears, and in a loud sound experiment we provided additional acoustic cues by broadcasting colony sounds on the opposite side of the arena to the real colony. In the low sound experiment, the behavior of the chicks was not affected by the limited sound input. In the loud sound experiment, the chicks reacted strongly to the colony sound. These results suggest that king penguin chicks may use the sound of the colony while orienting towards their home.
A designated odor-language integration system in the human brain.
Olofsson, Jonas K; Hurley, Robert S; Bowman, Nicholas E; Bao, Xiaojun; Mesulam, M-Marsel; Gottfried, Jay A
2014-11-05
Odors are surprisingly difficult to name, but the mechanism underlying this phenomenon is poorly understood. In experiments using event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI), we investigated the physiological basis of odor naming with a paradigm where olfactory and visual object cues were followed by target words that either matched or mismatched the cue. We hypothesized that word processing would not only be affected by its semantic congruency with the preceding cue, but would also depend on the cue modality (olfactory or visual). Performance was slower and less precise when linking a word to its corresponding odor than to its picture. The ERP index of semantic incongruity (N400), reflected in the comparison of nonmatching versus matching target words, was more constrained to posterior electrode sites and lasted longer on odor-cue (vs picture-cue) trials. In parallel, fMRI cross-adaptation in the right orbitofrontal cortex (OFC) and the left anterior temporal lobe (ATL) was observed in response to words when preceded by matching olfactory cues, but not by matching visual cues. Time-series plots demonstrated increased fMRI activity in OFC and ATL at the onset of the odor cue itself, followed by response habituation after processing of a matching (vs nonmatching) target word, suggesting that predictive perceptual representations in these regions are already established before delivery and deliberation of the target word. Together, our findings underscore the modality-specific anatomy and physiology of object identification in the human brain. Copyright © 2014 the authors.
"Tunnel Vision": A Possible Keystone Stimulus Control Deficit in Autistic Children.
ERIC Educational Resources Information Center
Rincover, Arnold; And Others
1986-01-01
Three autistic boys (ages 9-13) were trained to select a card containing a stimulus array comprised of three visual cues. Decreased distance between cues resulted in responses to more cues, increased distance to fewer cues. Distances did not affect the responding of children matched for mental and chronological age. (Author/JW)
Late development of cue integration is linked to sensory fusion in cortex.
Dekker, Tessa M; Ban, Hiroshi; van der Velde, Bauke; Sereno, Martin I; Welchman, Andrew E; Nardini, Marko
2015-11-02
Adults optimize perceptual judgements by integrating different types of sensory information [1, 2]. This engages specialized neural circuits that fuse signals from the same [3-5] or different [6] modalities. Whereas young children can use sensory cues independently, adult-like precision gains from cue combination only emerge around ages 10 to 11 years [7-9]. Why does it take so long to make best use of sensory information? Existing data cannot distinguish whether this (1) reflects surprisingly late changes in sensory processing (sensory integration mechanisms in the brain are still developing) or (2) depends on post-perceptual changes (integration in sensory cortex is adult-like, but higher-level decision processes do not access the information) [10]. We tested visual depth cue integration in the developing brain to distinguish these possibilities. We presented children aged 6-12 years with displays depicting depth from binocular disparity and relative motion and made measurements using psychophysics, retinotopic mapping, and pattern classification fMRI. Older children (>10.5 years) showed clear evidence for sensory fusion in V3B, a visual area thought to integrate depth cues in the adult brain [3-5]. By contrast, in younger children (<10.5 years), there was no evidence for sensory fusion in any visual area. This significant age difference was paired with a shift in perceptual performance around ages 10 to 11 years and could not be explained by motion artifacts, visual attention, or signal quality differences. Thus, whereas many basic visual processes mature early in childhood [11, 12], the brain circuits that fuse cues take a very long time to develop. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
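The "precision gains from cue combination" referenced above are conventionally benchmarked against ideal-observer fusion, in which each cue (e.g., disparity and relative motion) is weighted by its inverse variance. A minimal sketch of that benchmark, with illustrative numbers rather than values from the study:

```python
def combine_cues(mu1, sigma1, mu2, sigma2):
    """Inverse-variance weighted fusion of two independent Gaussian cues."""
    w1 = (1 / sigma1 ** 2) / (1 / sigma1 ** 2 + 1 / sigma2 ** 2)
    mu = w1 * mu1 + (1 - w1) * mu2
    # fused variance is smaller than either single-cue variance
    sigma = (sigma1 ** 2 * sigma2 ** 2 / (sigma1 ** 2 + sigma2 ** 2)) ** 0.5
    return mu, sigma

# Illustrative depth estimates: disparity says 10 cm, motion says 12 cm
mu, sigma = combine_cues(10.0, 2.0, 12.0, 2.0)
print(mu, sigma)  # fused estimate 11.0 with sigma below either cue's 2.0
```

An observer whose discrimination thresholds match this fused sigma is said to integrate optimally; the abstract's finding is that this signature emerges only after about age 10.5.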
Milet-Pinheiro, Paulo; Ayasse, Manfred; Dötterl, Stefan
2015-01-01
Oligolectic bees collect pollen from a few plants within a genus or family to rear their offspring, and are known to rely on visual and olfactory floral cues to recognize host plants. However, studies investigating whether oligolectic bees recognize distinct host plants by using shared floral cues are scarce. In the present study, we investigated in a comparative approach the visual and olfactory floral cues of six Campanula species, of which only Campanula lactiflora has never been reported as a pollen source of the oligolectic bee Ch. rapunculi. We hypothesized that the flowers of Campanula species visited by Ch. rapunculi share visual (i.e. color) and/or olfactory cues (scents) that give them a host-specific signature. To test this hypothesis, floral color and scent were studied by spectrophotometric and chemical analyses, respectively. Additionally, we performed bioassays within a flight cage to test the innate color preference of Ch. rapunculi. Our results show that Campanula flowers reflect the light predominantly in the UV-blue/blue bee-color space and that Ch. rapunculi displays a strong innate preference for these two colors. Furthermore, we recorded spiroacetals in the floral scent of all Campanula species, but Ca. lactiflora. Spiroacetals, rarely found as floral scent constituents but quite common among Campanula species, were recently shown to play a key function for host-flower recognition by Ch. rapunculi. We conclude that Campanula species share some visual and olfactory floral cues, and that neurological adaptations (i.e. vision and olfaction) of Ch. rapunculi innately drive their foraging flights toward host flowers. The significance of our findings for the evolution of pollen diet breadth in bees is discussed. PMID:26060994
Multimedia instructions and cognitive load theory: effects of modality and cueing.
Tabbers, Huib K; Martens, Rob L; van Merriënboer, Jeroen J G
2004-03-01
Recent research on the influence of presentation format on the effectiveness of multimedia instructions has yielded some interesting results. According to cognitive load theory (Sweller, Van Merriënboer, & Paas, 1998) and Mayer's theory of multimedia learning (Mayer, 2001), replacing visual text with spoken text (the modality effect) and adding visual cues relating elements of a picture to the text (the cueing effect) both increase the effectiveness of multimedia instructions in terms of better learning results or less mental effort spent. The aim of this study was to test the generalisability of the modality and cueing effect in a classroom setting. The participants were 111 second-year students from the Department of Education at the University of Gent in Belgium (age between 19 and 25 years). The participants studied a web-based multimedia lesson on instructional design for about one hour. Afterwards they completed a retention and a transfer test. During both the instruction and the tests, self-report measures of mental effort were administered. Adding visual cues to the pictures resulted in higher retention scores, while replacing visual text with spoken text resulted in lower retention and transfer scores. Only a weak cueing effect and even a reverse modality effect have been found, indicating that both effects do not easily generalise to non-laboratory settings. A possible explanation for the reversed modality effect is that the multimedia instructions in this study were learner-paced, as opposed to the system-paced instructions used in earlier research.
Heni, Martin; Kullmann, Stephanie; Ketterer, Caroline; Guthoff, Martina; Bayer, Margarete; Staiger, Harald; Machicao, Fausto; Häring, Hans-Ulrich; Preissl, Hubert; Veit, Ralf; Fritsche, Andreas
2014-03-01
Eating behavior is crucial in the development of obesity and Type 2 diabetes. To further investigate its regulation, we studied the effects of glucose versus water ingestion on the neural processing of visual high and low caloric food cues in 12 lean and 12 overweight subjects by functional magnetic resonance imaging. We found body weight to substantially impact the brain's response to visual food cues after glucose versus water ingestion. Specifically, there was a significant interaction between body weight, condition (water versus glucose), and caloric content of food cues. Although overweight subjects showed a generalized reduced response to food objects in the fusiform gyrus and precuneus, the lean group showed a differential pattern to high versus low caloric foods depending on glucose versus water ingestion. Furthermore, we observed plasma insulin and glucose associated effects. The hypothalamic response to high caloric food cues negatively correlated with changes in blood glucose 30 min after glucose ingestion, while especially brain regions in the prefrontal cortex showed a significant negative relationship with increases in plasma insulin 120 min after glucose ingestion. We conclude that the postprandial neural processing of food cues is highly influenced by body weight especially in visual areas, potentially altering visual attention to food. Furthermore, our results underline that insulin markedly influences prefrontal activity to high caloric food cues after a meal, indicating that postprandial hormones may be potential players in modulating executive control. Copyright © 2013 Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Geyer, Thomas; Shi, Zhuanghua; Müller, Hermann J.
2010-01-01
Three experiments examined memory-based guidance of visual search using a modified version of the contextual-cueing paradigm (Jiang & Chun, 2001). The target, if present, was a conjunction of color and orientation, with target (and distractor) features randomly varying across trials (multiconjunction search). Under these conditions, reaction times…
Wingbeat frequency-sweep and visual stimuli for trapping male Aedes aegypti (Diptera: Culicidae)
USDA-ARS?s Scientific Manuscript database
Combinations of female wingbeat acoustic cues and visual cues were evaluated to determine their potential for use in male Aedes aegypti (L.) traps in peridomestic environments. A modified Centers for Disease control (CDC) light trap using a 350-500 Hz frequency-sweep broadcast from a speaker as an a...
USDA-ARS?s Scientific Manuscript database
This paper reviews the literature and reports on the current state of knowledge regarding the potential for managers to use visual (VC), auditory (AC), and olfactory (OC) cues to manage foraging behavior and spatial distribution of rangeland livestock. We present evidence that free-ranging livestock...
Visual Cues Generated during Action Facilitate 14-Month-Old Infants' Mental Rotation
ERIC Educational Resources Information Center
Antrilli, Nick K.; Wang, Su-hua
2016-01-01
Although action experience has been shown to enhance the development of spatial cognition, the mechanism underlying the effects of action is still unclear. The present research examined the role of visual cues generated during action in promoting infants' mental rotation. We sought to clarify the underlying mechanism by decoupling different…
Categorically Defined Targets Trigger Spatiotemporal Visual Attention
ERIC Educational Resources Information Center
Wyble, Brad; Bowman, Howard; Potter, Mary C.
2009-01-01
Transient attention to a visually salient cue enhances processing of a subsequent target in the same spatial location between 50 to 150 ms after cue onset (K. Nakayama & M. Mackeben, 1989). Do stimuli from a categorically defined target set, such as letters or digits, also generate transient attention? Participants reported digit targets among…
Visual Cues and Listening Effort: Individual Variability
ERIC Educational Resources Information Center
Picou, Erin M.; Ricketts, Todd A; Hornsby, Benjamin W. Y.
2011-01-01
Purpose: To investigate the effect of visual cues on listening effort as well as whether predictive variables such as working memory capacity (WMC) and lipreading ability affect the magnitude of listening effort. Method: Twenty participants with normal hearing were tested using a paired-associates recall task in 2 conditions (quiet and noise) and…
Visual Sonority Modulates Infants' Attraction to Sign Language
ERIC Educational Resources Information Center
Stone, Adam; Petitto, Laura-Ann; Bosworth, Rain
2018-01-01
The infant brain may be predisposed to identify perceptually salient cues that are common to both signed and spoken languages. Recent theory based on spoken languages has advanced sonority as one of these potential language acquisition cues. Using a preferential looking paradigm with an infrared eye tracker, we explored visual attention of hearing…
Impaired Visual Attention in Children with Dyslexia.
ERIC Educational Resources Information Center
Heiervang, Einar; Hugdahl, Kenneth
2003-01-01
A cue-target visual attention task was administered to 25 children (ages 10-12) with dyslexia. Results showed a general pattern of slower responses in the children with dyslexia compared to controls. Subjects also had longer reaction times in the short and long cue-target interval conditions (covert and overt shift of attention). (Contains…
Visual Cues, Student Sex, Material Taught, and the Magnitude of Teacher Expectancy Effects.
ERIC Educational Resources Information Center
Badini, Aldo A.; Rosenthal, Robert
1989-01-01
Conducts an experiment on teacher expectancy effects to investigate the simultaneous effects of student gender, communication channel, and type of material taught (vocabulary and reasoning). Finds that the magnitude of teacher expectation effects was greater when students had access to visual cues, especially when the students were female. (MS)
Central and Peripheral Vision Loss Differentially Affects Contextual Cueing in Visual Search
ERIC Educational Resources Information Center
Geringswald, Franziska; Pollmann, Stefan
2015-01-01
Visual search for targets in repeated displays is more efficient than search for the same targets in random distractor layouts. Previous work has shown that this contextual cueing is severely impaired under central vision loss. Here, we investigated whether central vision loss, simulated with gaze-contingent displays, prevents the incidental…
Speaker Identity Supports Phonetic Category Learning
ERIC Educational Resources Information Center
Mani, Nivedita; Schneider, Signe
2013-01-01
Visual cues from the speaker's face, such as the discriminable mouth movements used to produce speech sounds, improve discrimination of these sounds by adults. The speaker's face, however, provides more information than just the mouth movements used to produce speech--it also provides a visual indexical cue of the identity of the speaker. The…
Audio-Visual Speech Perception: A Developmental ERP Investigation
ERIC Educational Resources Information Center
Knowland, Victoria C. P.; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael S. C.
2014-01-01
Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language…
Enhancing Visual Search Abilities of People with Intellectual Disabilities
ERIC Educational Resources Information Center
Li-Tsang, Cecilia W. P.; Wong, Jackson K. K.
2009-01-01
This study aimed to evaluate the effects of cueing in visual search paradigm for people with and without intellectual disabilities (ID). A total of 36 subjects (18 persons with ID and 18 persons with normal intelligence) were recruited using convenient sampling method. A series of experiments were conducted to compare guided cue strategies using…
Heightened attentional capture by visual food stimuli in anorexia nervosa.
Neimeijer, Renate A M; Roefs, Anne; de Jong, Peter J
2017-08-01
The present study was designed to test the hypothesis that anorexia nervosa (AN) patients are relatively insensitive to the attentional capture of visual food stimuli. Attentional avoidance of food might help AN patients to prevent more elaborate processing of food stimuli and the subsequent generation of craving, which might enable AN patients to maintain their strict diet. Participants were 66 restrictive AN spectrum patients and 55 healthy controls. A single-target rapid serial visual presentation task was used with food and disorder-neutral cues as critical distracter stimuli and disorder-neutral pictures as target stimuli. AN spectrum patients showed diminished task performance when visual food cues were presented in close temporal proximity of the to-be-identified target. In contrast to our hypothesis, results indicate that food cues automatically capture AN spectrum patients' attention. One explanation could be that the enhanced attentional capture of food cues in AN is driven by the relatively high threat value of food items in AN. Implications and suggestions for future research are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Okada, Takashi; Sato, Wataru; Kubota, Yasutaka; Toichi, Motomi; Murai, Toshiya
2012-03-01
The neural substrate for the processing of gaze remains unknown. The aim of the present study was to clarify which hemisphere dominantly processes gaze cues, and whether the two hemispheres cooperate with each other, in the gaze-triggered reflexive shift of attention. Twenty-eight normal subjects were tested. The non-predictive gaze cues were presented in either unilateral or bilateral visual fields. The subjects localized the target as soon as possible. Reaction times (RT) were shorter when gaze cues were directed toward rather than away from targets, whichever visual field they were presented in. RT were shorter for left than for right visual field presentations. RT in mono-directional bilateral presentations were shorter than those in either left or right unilateral presentations. When bi-directional bilateral cues were presented, RT were faster when valid cues appeared in the left rather than the right visual field. The right hemisphere appears to be dominant, and there is interhemispheric cooperation, in the gaze-triggered reflexive shift of attention. © 2012 The Authors. Psychiatry and Clinical Neurosciences © 2012 Japanese Society of Psychiatry and Neurology.
Subjective scaling of spatial room acoustic parameters influenced by visual environmental cues
Valente, Daniel L.; Braasch, Jonas
2010-01-01
Although there have been numerous studies investigating subjective spatial impression in rooms, only a few of those studies have addressed the influence of visual cues on the judgment of auditory measures. In the psychophysical study presented here, video footage of five solo music/speech performers was shown for four different listening positions within a general-purpose space. The videos were presented in addition to the acoustic signals, which were auralized using binaural room impulse responses (BRIR) that were recorded in the same general-purpose space. The participants were asked to adjust the direct-to-reverberant energy ratio (D/R ratio) of the BRIR according to their expectation considering the visual cues. They were also directed to rate the apparent source width (ASW) and listener envelopment (LEV) for each condition. Visual cues generated by changing the sound-source position in the multi-purpose space, as well as the makeup of the sound stimuli affected the judgment of spatial impression. Participants also scaled the direct-to-reverberant energy ratio with greater direct sound energy than was measured in the acoustical environment. PMID:20968367
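The direct-to-reverberant (D/R) energy ratio that participants adjusted can be illustrated with a toy computation on a synthetic impulse response. This is a sketch under stated assumptions: the 2.5 ms direct-sound window and the toy impulse response are inventions for illustration, not the study's measurement procedure:

```python
import math

def dr_ratio_db(ir, fs, direct_ms=2.5):
    """D/R ratio in dB: energy before vs. after an assumed direct-sound window.

    ir: impulse response samples; fs: sample rate in Hz.
    """
    split = int(fs * direct_ms / 1000)          # samples in the direct window
    direct = sum(s * s for s in ir[:split])      # direct-sound energy
    reverb = sum(s * s for s in ir[split:])      # reverberant energy
    return 10 * math.log10(direct / reverb)

# Toy IR at 1 kHz: a strong direct spike followed by weaker late energy
fs = 1000
ir = [1.0, 0.0] + [0.1] * 50
print(round(dr_ratio_db(ir, fs), 2))  # → 3.01 dB (direct energy twice reverberant)
```

Raising the D/R ratio corresponds to the "greater direct sound energy" participants selected relative to the measured environment.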
Brandão, Lenisa; Monção, Ana Maria; Andersson, Richard; Holmqvist, Kenneth
2014-01-01
The goal of this study was to investigate whether on-topic visual cues can serve as aids for the maintenance of discourse coherence and informativeness in autobiographical narratives of persons with Alzheimer's disease (AD). The experiment consisted of three randomized conversation conditions: one without prompts, showing a blank computer screen; an on-topic condition, showing a picture and a sentence about the conversation; and an off-topic condition, showing a picture and a sentence which were unrelated to the conversation. Speech was recorded while visual attention was examined using eye tracking to measure how long participants looked at cues and the face of the listener. Results suggest that interventions using visual cues in the form of images and written information are useful to improve discourse informativeness in AD. This study demonstrated the potential of using images and short written messages as means of compensating for the cognitive deficits which underlie uninformative discourse in AD. Future studies should further investigate the efficacy of language interventions based in the use of these compensation strategies for AD patients and their family members and friends.
Visually-guided attention enhances target identification in a complex auditory scene.
Best, Virginia; Ozmeral, Erol J; Shinn-Cunningham, Barbara G
2007-06-01
In auditory scenes containing many similar sound sources, sorting of acoustic information into streams becomes difficult, which can lead to disruptions in the identification of behaviorally relevant targets. This study investigated the benefit of providing simple visual cues for when and/or where a target would occur in a complex acoustic mixture. Importantly, the visual cues provided no information about the target content. In separate experiments, human subjects either identified learned birdsongs in the presence of a chorus of unlearned songs or recalled strings of spoken digits in the presence of speech maskers. A visual cue indicating which loudspeaker (from an array of five) would contain the target improved accuracy for both kinds of stimuli. A cue indicating which time segment (out of a possible five) would contain the target also improved accuracy, but much more for birdsong than for speech. These results suggest that in real world situations, information about where a target of interest is located can enhance its identification, while information about when to listen can also be helpful when targets are unfamiliar or extremely similar to their competitors.
Coherence of structural visual cues and pictorial gravity paves the way for interceptive actions.
Zago, Myrka; La Scaleia, Barbara; Miller, William L; Lacquaniti, Francesco
2011-09-20
Dealing with upside-down objects is difficult and takes time. Among the cues that are critical for defining object orientation, the visible influence of gravity on the object's motion has received limited attention. Here, we manipulated the alignment of visible gravity and structural visual cues between each other and relative to the orientation of the observer and physical gravity. Participants pressed a button triggering a hitter to intercept a target accelerated by a virtual gravity. A factorial design assessed the effects of scene orientation (normal or inverted) and target gravity (normal or inverted). We found that interception was significantly more successful when scene direction was concordant with target gravity direction, irrespective of whether both were upright or inverted. This was so independent of the hitter type and when performance feedback to the participants was either available (Experiment 1) or unavailable (Experiment 2). These results show that the combined influence of visible gravity and structural visual cues can outweigh both physical gravity and viewer-centered cues, leading observers to rely instead on the congruence of the apparent physical forces acting on people and objects in the scene.
Sidarus, Nura; Vuorre, Matti; Metcalfe, Janet; Haggard, Patrick
2017-01-01
How do we know how much control we have over our environment? The sense of agency refers to the feeling that we are in control of our actions, and that, through them, we can control our external environment. Thus, agency clearly involves matching intentions, actions, and outcomes. The present studies investigated the possibility that processes of action selection, i.e., choosing what action to make, contribute to the sense of agency. Since selection of action necessarily precedes execution of action, such effects must be prospective. In contrast, most literature on sense of agency has focussed on the retrospective computation whether an outcome fits the action performed or intended. This hypothesis was tested in an ecologically rich, dynamic task based on a computer game. Across three experiments, we manipulated three different aspects of action selection processing: visual processing fluency, categorization ambiguity, and response conflict. Additionally, we measured the relative contributions of prospective, action selection-based cues, and retrospective, outcome-based cues to the sense of agency. Manipulations of action selection were orthogonally combined with discrepancy of visual feedback of action. Fluency of action selection had a small but reliable effect on the sense of agency. Additionally, as expected, sense of agency was strongly reduced when visual feedback was discrepant with the action performed. The effects of discrepant feedback were larger than the effects of action selection fluency, and sometimes suppressed them. The sense of agency is highly sensitive to disruptions of action-outcome relations. However, when motor control is successful, and action-outcome relations are as predicted, fluency or dysfluency of action selection provides an important prospective cue to the sense of agency. PMID:28450839
Interaction of color and geometric cues in depth perception: when does "red" mean "near"?
Guibal, Christophe R C; Dresp, Birgitta
2004-12-01
Luminance and color are strong and self-sufficient cues to pictorial depth in visual scenes and images. The present study investigates the conditions under which luminance or color either strengthens or overrides geometric depth cues. We investigated how luminance contrast associated with the color red and color contrast interact with relative height in the visual field, partial occlusion, and interposition to determine the probability that a given figure presented in a pair is perceived as "nearer" than the other. Latencies of "near" responses were analyzed to test for effects of attentional selection. Figures in a pair were supported by luminance contrast (Experiment 1) or isoluminant color contrast (Experiment 2) and combined with one of the three geometric cues. The results of Experiment 1 show that the luminance contrast of a color (here red), when it does not interact with other colors, produces the same effects as achromatic luminance contrasts. The probability of "near" increases with the luminance contrast of the color stimulus, and the latencies for "near" responses decrease with increasing luminance contrast. Partial occlusion is found to be a strong enough pictorial cue to support a weaker red luminance contrast. Interposition cues lose out against cues of spatial position and partial occlusion. The results of Experiment 2, with isoluminant displays of varying color contrast, reveal that red color contrast on a light background supported by any of the three geometric cues wins over green or white supported by any of the three geometric cues. On a dark background, red color contrast supported by the interposition cue loses out against green or white color contrast supported by partial occlusion. These findings reveal that color is not an independent depth cue, but is strongly influenced by luminance contrast and stimulus geometry. Systematically shorter response latencies for stronger "near" percepts demonstrate that selective visual attention reliably detects the most likely depth cue combination in a given configuration.
NASA Technical Reports Server (NTRS)
Young, L. R.
1975-01-01
Preliminary tests and evaluation are presented of pilot performance during landing (flight paths) using computer generated images (video tapes). Psychophysiological factors affecting pilot visual perception were measured. A turning flight maneuver (pitch and roll) was specifically studied using a training device, and the scaling laws involved were determined. Also presented are medical studies (abstracts) on human response to gravity variations without visual cues, the effects of acceleration stimuli on the semicircular canals, neurons affecting eye movements, and vestibular tests.
Web Video Event Recognition by Semantic Analysis From Ubiquitous Documents.
Yu, Litao; Yang, Yang; Huang, Zi; Wang, Peng; Song, Jingkuan; Shen, Heng Tao
2016-12-01
In recent years, the task of event recognition from videos has attracted increasing interest in the multimedia area. Most existing research has focused on exploring visual cues to handle relatively small-granular events, but it is difficult to analyze video content directly without any prior knowledge. Therefore, synthesizing both visual and semantic analysis is a natural way toward video event understanding. In this paper, we study the problem of Web video event recognition, where Web videos often describe large-granular events and carry limited textual information. Key challenges include how to accurately represent event semantics from incomplete textual information and how to effectively explore the correlation between visual and textual cues for video event understanding. We propose a novel framework to perform complex event recognition from Web videos. To compensate for the insufficient expressive power of visual cues, we construct an event knowledge base by deeply mining semantic information from ubiquitous Web documents. This event knowledge base is capable of describing each event with comprehensive semantics. By utilizing this base, the textual cues for a video can be significantly enriched. Furthermore, we introduce a two-view adaptive regression model, which explores the intrinsic correlation between the visual and textual cues of the videos to learn reliable classifiers. Extensive experiments on two real-world video data sets show the effectiveness of our proposed framework and prove that the event knowledge base indeed helps improve the performance of Web video event recognition.
Gestalten of today: early processing of visual contours and surfaces.
Kovács, I
1996-12-01
While much is known about the specialized, parallel processing streams of low-level vision that extract primary visual cues, there is only limited knowledge about the dynamic interactions between them. How are the fragments, caught by local analyzers, assembled together to provide us with a unified percept? How are local discontinuities in texture, motion or depth evaluated with respect to object boundaries and surface properties? These questions are presented within the framework of orientation-specific spatial interactions of early vision. Key observations of psychophysics, anatomy and neurophysiology on interactions of various spatial and temporal ranges are reviewed. Aspects of the functional architecture and possible neural substrates of local orientation-specific interactions are discussed, underlining their role in the integration of information across the visual field, and particularly in contour integration. Examples are provided demonstrating that global context, such as contour closure and figure-ground assignment, affects these local interactions. It is illustrated that figure-ground assignment is realized early in visual processing, and that the pattern of early interactions also brings about an effective and sparse coding of visual shape. Finally, it is concluded that the underlying functional architecture is not only dynamic and context dependent, but the pattern of connectivity depends as much on past experience as on actual stimulation.
Camouflage, communication and thermoregulation: lessons from colour changing organisms.
Stuart-Fox, Devi; Moussalli, Adnan
2009-02-27
Organisms capable of rapid physiological colour change have become model taxa in the study of camouflage because they are able to respond dynamically to the changes in their visual environment. Here, we briefly review the ways in which studies of colour changing organisms have contributed to our understanding of camouflage and highlight some unique opportunities they present. First, from a proximate perspective, comparison of visual cues triggering camouflage responses and the visual perception mechanisms involved can provide insight into general visual processing rules. Second, colour changing animals can potentially tailor their camouflage response not only to different backgrounds but also to multiple predators with different visual capabilities. We present new data showing that such facultative crypsis may be widespread in at least one group, the dwarf chameleons. From an ultimate perspective, we argue that colour changing organisms are ideally suited to experimental and comparative studies of evolutionary interactions between the three primary functions of animal colour patterns: camouflage; communication; and thermoregulation.
Decentralized Multisensory Information Integration in Neural Systems.
Zhang, Wen-Hao; Chen, Aihua; Rasch, Malte J; Wu, Si
2016-01-13
How multiple sensory cues are integrated in neural circuitry remains a challenge. The common hypothesis is that information integration might be accomplished in a dedicated multisensory integration area receiving feedforward inputs from the modalities. However, recent experimental evidence suggests that it is not a single multisensory brain area, but rather many multisensory brain areas that are simultaneously involved in the integration of information. Why many mutually connected areas should be needed for information integration is puzzling. Here, we investigated theoretically how information integration could be achieved in a distributed fashion within a network of interconnected multisensory areas. Using biologically realistic neural network models, we developed a decentralized information integration system that comprises multiple interconnected integration areas. Studying an example of combining visual and vestibular cues to infer heading direction, we show that such a decentralized system is in good agreement with anatomical evidence and experimental observations. In particular, we show that this decentralized system can integrate information optimally. The decentralized system predicts that optimally integrated information should emerge locally from the dynamics of the communication between brain areas and sheds new light on the interpretation of the connectivity between multisensory brain areas. To extract information reliably from ambiguous environments, the brain integrates multiple sensory cues, which provide different aspects of information about the same entity of interest. Here, we propose a decentralized architecture for multisensory integration. In such a system, no processor is in the center of the network topology and information integration is achieved in a distributed manner through reciprocally connected local processors. 
Through studying the inference of heading direction with visual and vestibular cues, we show that the decentralized system can integrate information optimally, with the reciprocal connections between processors determining the extent of cue integration. Our model reproduces known multisensory integration behaviors observed in experiments and sheds new light on our understanding of how information is integrated in the brain.
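The decentralized, reliability-weighted combination the abstract describes can be sketched for the two-cue Gaussian case. Here each "area" holds its own cue likelihood and receives the other area's cue as a (mean, precision) message; combining them locally, both areas arrive at the same estimate a central integrator would compute. The function name, the message format, and the Gaussian assumptions are illustrative, not the paper's actual network model.

```python
# Minimal sketch: decentralized integration of two Gaussian cues to heading.
def local_posterior(own_mean, own_prec, messages):
    """Fuse an area's own cue (mean, precision) with messages from
    reciprocally connected areas; returns the local posterior (mean, precision)."""
    prec = own_prec + sum(p for _, p in messages)
    mean = (own_prec * own_mean + sum(m * p for m, p in messages)) / prec
    return mean, prec

# Visual cue: heading 10 deg, variance 4 (precision 0.25);
# vestibular cue: heading 20 deg, variance 1 (precision 1.0).
vis = (10.0, 0.25)
vest = (20.0, 1.0)

# Each area computes locally, using only the message from the other area.
in_visual_area = local_posterior(*vis, [vest])
in_vestibular_area = local_posterior(*vest, [vis])
# Both local posteriors equal the optimal centralized combination (18 deg).
```

Because each message carries its cue's precision, the reciprocal connections set how strongly each area's estimate is pulled by the other cue, mirroring the model's claim that connection strength determines the extent of cue integration.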
Optical methods for enabling focus cues in head-mounted displays for virtual and augmented reality
NASA Astrophysics Data System (ADS)
Hua, Hong
2017-05-01
Developing head-mounted displays (HMD) that offer uncompromised optical pathways to both digital and physical worlds without encumbrance and discomfort confronts many grand challenges, both from technological perspectives and human factors. Among the many challenges, minimizing visual discomfort is one of the key obstacles. One of the key contributing factors to visual discomfort is the inability to render proper focus cues in HMDs to stimulate natural eye accommodation responses, which leads to the well-known accommodation-convergence cue discrepancy problem. In this paper, I will provide a summary of the various optical approaches to enabling focus cues in HMDs for both virtual reality (VR) and augmented reality (AR).
Brisk heart rate and EEG changes during execution and withholding of cue-paced foot motor imagery
Pfurtscheller, Gert; Solis-Escalante, Teodoro; Barry, Robert J.; Klobassa, Daniela S.; Neuper, Christa; Müller-Putz, Gernot R.
2013-01-01
Cue-paced motor imagery (MI) is a frequently used mental strategy to realize a Brain-Computer Interface (BCI). Recently it has been reported that two MI tasks can be separated with a high accuracy within the first second after cue presentation onset. To investigate this phenomenon in detail we studied the dynamics of motor cortex beta oscillations in EEG and the changes in heart rate (HR) during visual cue-paced foot MI using a go (execution of imagery) vs. nogo (withholding of imagery) paradigm in 16 healthy subjects. Both execution and withholding of MI resulted in a brisk centrally localized beta event-related desynchronization (ERD) with a maximum at ~400 ms and a concomitant HR deceleration. We found that response patterns within the first second after stimulation differed between conditions. The ERD was significantly larger in go as compared to nogo. In contrast the HR deceleration was somewhat smaller and followed by an acceleration in go as compared to nogo. These findings suggest that the early beta ERD reflects visually induced preparatory activity in motor cortex networks. Both the early beta ERD and the HR deceleration are the result of automatic operating processes that are likely part of the orienting reflex (OR). Of interest, however, is that the preparatory cortical activity is strengthened and the HR modulated already within the first second after stimulation during the execution of MI. The subtraction of the HR time course of the nogo from the go condition revealed a slight HR acceleration in the first seconds most likely due to the increased mental effort associated with the imagery process. PMID:23908614
Byrne, Patrick A; Crawford, J Douglas
2010-06-01
It is not known how egocentric visual information (location of a target relative to the self) and allocentric visual information (location of a target relative to external landmarks) are integrated to form reach plans. Based on behavioral data from rodents and humans we hypothesized that the degree of stability in visual landmarks would influence the relative weighting. Furthermore, based on numerous cue-combination studies we hypothesized that the reach system would act like a maximum-likelihood estimator (MLE), where the reliability of both cues determines their relative weighting. To predict how these factors might interact we developed an MLE model that weighs egocentric and allocentric information based on their respective reliabilities, and also on an additional stability heuristic. We tested the predictions of this model in 10 human subjects by manipulating landmark stability and reliability (via variable amplitude vibration of the landmarks and variable amplitude gaze shifts) in three reach-to-touch tasks: an egocentric control (reaching without landmarks), an allocentric control (reaching relative to landmarks), and a cue-conflict task (involving a subtle landmark "shift" during the memory interval). Variability from all three experiments was used to derive parameters for the MLE model, which was then used to simulate egocentric-allocentric weighting in the cue-conflict experiment. As predicted by the model, landmark vibration--despite its lack of influence on pointing variability (and thus allocentric reliability) in the control experiment--had a strong influence on egocentric-allocentric weighting. A reduced model without the stability heuristic was unable to reproduce this effect. These results suggest heuristics for extrinsic cue stability are at least as important as reliability for determining cue weighting in memory-guided reaching.
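The reliability-plus-stability weighting described above can be sketched in a few lines. The multiplicative form of the stability heuristic, the function name, and all parameter values are illustrative assumptions, not the paper's fitted model.

```python
# Sketch: MLE-style egocentric/allocentric weighting with a stability heuristic.
def reach_estimate(x_ego, var_ego, x_allo, var_allo, stability=1.0):
    """Combine egocentric and allocentric estimates of a reach target.

    Each cue is weighted by its reliability (inverse variance); the
    allocentric reliability is additionally scaled by a stability factor
    in [0, 1], so vibrating (unstable) landmarks shift the weighting
    toward the egocentric estimate even when pointing variability, and
    hence measured reliability, is unchanged.
    """
    r_ego = 1.0 / var_ego
    r_allo = stability / var_allo
    w_allo = r_allo / (r_ego + r_allo)
    return w_allo * x_allo + (1.0 - w_allo) * x_ego

# Equally reliable cues that disagree after a landmark "shift":
stable = reach_estimate(0.0, 1.0, 10.0, 1.0, stability=1.0)   # midway: 5.0
vibrating = reach_estimate(0.0, 1.0, 10.0, 1.0, stability=0.2)  # nearer egocentric
```

A pure MLE model corresponds to stability=1.0 everywhere; the study's key result is captured by stability < 1, which changes the weighting without changing either cue's measured variability.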
Do preschool children learn to read words from environmental prints?
Zhao, Jing; Zhao, Pei; Weng, Xuchu; Li, Su
2014-01-01
Parents and teachers worldwide believe that a visual environment rich with print can contribute to young children's literacy. Children seem to recognize words in familiar logos at an early age. However, most of the previous studies were carried out with alphabetic scripts. Alphabetic letters regularly correspond to phonological segments in a word and provide strong cues about the identity of the whole word. Thus it was not clear whether children can learn to read words by extracting visual word form information from environmental prints. To exclude the phonological-cue confound, this study tested children's knowledge of Chinese words embedded in familiar logos. Four environmental logos were employed and transformed into four versions with the contextual cues (i.e., something apart from the presentation of the words themselves in logo format, such as the color, logo, and font type cues) gradually minimized. Children aged from 3 to 5 were tested. We observed that children of different ages all performed better when words were presented in highly familiar logos than when they were presented in a plain fashion, devoid of context. This advantage for familiar logos was also present when the contextual information was only partial. However, the role of various cues in learning words changed with age. The color and logo cues had a larger effect in 3- and 4-year-olds than in 5-year-olds, while the font type cue played a greater role in 5-year-olds than in the other two groups. Our findings demonstrated that young children did not easily learn words by extracting their visual form information even from familiar environmental prints. However, children aged 5 began to pay more attention to the visual form information of words in highly familiar logos than those aged 3 and 4.
Davidson, Melanie M; Butler, Ruth C; Teulon, David A J
2006-07-01
The effects of starvation or age on the walking or flying response of female Frankliniella occidentalis to visual and/or odor cues in two types of olfactometer were examined in the laboratory. The response of walking thrips starved for 0, 1, 4, or 24h to an odor cue (1microl of 10% p-anisaldehyde) was examined in a Y-tube olfactometer. The take-off and landing response of thrips (unknown age) starved for 0, 1, 4, 24, 48 or 72h, or of thrips of different ages (2-3 days or 10-13 days post-adult emergence) starved for 24h, to a visual cue (98 cm(2) yellow sticky trap) and/or an odor cue (0.5 or 1.0 ml p-anisaldehyde) was examined in a wind tunnel. More thrips walked up the odor-laden arm in the Y-tube when starved for at least 4h (76%) than satiated thrips (58.7%) or those starved for 1h (62.7%, P<0.05). In the wind tunnel experiments the percentage of thrips to fly or land on the sticky trap increased between satiated thrips (7.3% to fly, 3.3% on trap) and those starved for 4h (81.2% to fly, 29% on trap) and decreased between thrips starved for 48 (74.5% to fly, 23% on trap) and 72 h (56.5% to fly, 15.5% on trap, P<0.05). Of those that flew, fewer younger thrips (38.8%) landed on a sticky trap containing a yellow visual cue than older thrips (70.4%, P<0.05), although a similar percentage of thrips flew regardless of age or type of cue present in the wind tunnel (average 44%, P>0.05).
Ridley-Siegert, Thomas L; Crombag, Hans S; Yeomans, Martin R
2015-12-01
There is a wealth of data showing a large impact of food cues on human ingestion, yet most studies use pictures of food where the precise nature of the associations between the cue and food is unclear. To test whether novel cues associated with the opportunity of winning access to food images could also impact ingestion, 63 participants played a game in which novel visual cues signalled whether responding on a keyboard would win (a picture of) chocolate, crisps, or nothing. Thirty minutes later, participants were given an ad libitum snack-intake test during which the chocolate-paired cue, the crisp-paired cue, the non-winning cue and no cue were presented as labels on the food containers. The presence of these cues significantly altered overall intake of the snack foods; participants presented with food labelled with the cue that had been associated with winning chocolate ate significantly more than participants who had been given the same products labelled with the cue associated with winning nothing, and in the presence of the cue signalling the absence of food reward participants tended to eat less than in all other conditions. Surprisingly, cue-dependent changes in food consumption were unaffected by participants' level of contingency awareness. These results suggest that visual cues that have been pre-associated with winning, but not consuming, a liked food reward modify food intake, consistent with current ideas that the abundance of food-associated cues may be one factor underlying the 'obesogenic environment'.
Dynamic modulation of visual and electrosensory gains for locomotor control
Sutton, Erin E.; Demir, Alican; Stamper, Sarah A.; Fortune, Eric S.; Cowan, Noah J.
2016-01-01
Animal nervous systems resolve sensory conflict for the control of movement. For example, the glass knifefish, Eigenmannia virescens, relies on visual and electrosensory feedback as it swims to maintain position within a moving refuge. To study how signals from these two parallel sensory streams are used in refuge tracking, we constructed a novel augmented reality apparatus that enables the independent manipulation of visual and electrosensory cues to freely swimming fish (n = 5). We evaluated the linearity of multisensory integration, the change to the relative perceptual weights given to vision and electrosense in relation to sensory salience, and the effect of the magnitude of sensory conflict on sensorimotor gain. First, we found that tracking behaviour obeys superposition of the sensory inputs, suggesting linear sensorimotor integration. In addition, fish rely more on vision when electrosensory salience is reduced, suggesting that fish dynamically alter sensorimotor gains in a manner consistent with Bayesian integration. However, the magnitude of sensory conflict did not significantly affect sensorimotor gain. These studies lay the theoretical and experimental groundwork for future work investigating multisensory control of locomotion. PMID:27170650
Siemann, Julia; Herrmann, Manfred; Galashan, Daniela
2018-01-25
The present study examined whether feature-based cueing affects early or late stages of flanker conflict processing using EEG and fMRI. Feature cues either directed participants' attention to the upcoming colour of the target or were neutral. Validity-specific modulations during interference processing were investigated using the N200 event-related potential (ERP) component and BOLD signal differences. Additionally, both data sets were integrated using an fMRI-constrained source analysis. Finally, the results were compared with a previous study in which spatial instead of feature-based cueing was applied to an otherwise identical flanker task. Feature-based and spatial attention recruited a common fronto-parietal network during conflict processing. Irrespective of attention type (feature-based; spatial), this network responded to focussed attention (valid cueing) as well as context updating (invalid cueing), hinting at domain-general mechanisms. However, spatially and non-spatially directed attention also demonstrated domain-specific activation patterns for conflict processing that were observable in distinct EEG and fMRI data patterns as well as in the respective source analyses. Conflict-specific activity in visual brain regions was comparable between both attention types. We assume that the distinction between spatially and non-spatially directed attention types primarily applies to temporal differences (domain-specific dynamics) between signals originating in the same brain regions (domain-general localization).
Eckstein, Miguel P; Mack, Stephen C; Liston, Dorion B; Bogush, Lisa; Menzel, Randolf; Krauzlis, Richard J
2013-06-07
Visual attention is commonly studied by using visuo-spatial cues indicating probable locations of a target and assessing the effect of the validity of the cue on perceptual performance and its neural correlates. Here, we adapt a cueing task to measure spatial cueing effects on the decisions of honeybees and compare their behavior to that of humans and monkeys in a similarly structured two-alternative forced-choice perceptual task. Unlike the typical cueing paradigm in which the stimulus strength remains unchanged within a block of trials, for the monkey and human studies we randomized the contrast of the signal to simulate more real-world conditions in which the organism is uncertain about the strength of the signal. A Bayesian ideal observer that weights sensory evidence from cued and uncued locations based on the cue validity to maximize overall performance is used as a benchmark of comparison against the three animals and other suboptimal models: probability matching, ignore the cue, always follow the cue, and an additive bias/single decision threshold model. We find that the cueing effect is pervasive across all three species but is smaller in size than that shown by the Bayesian ideal observer. Humans show a larger cueing effect than monkeys, and bees show the smallest effect. The cueing effect and overall performance of the honeybees allow rejection of the models in which the bees are ignoring the cue, following the cue and disregarding the stimuli to be discriminated, or adopting a probability matching strategy. Stimulus strength uncertainty also reduces the theoretically predicted variation in cueing effect with stimulus strength of an optimal Bayesian observer and diminishes the size of the cueing effect when stimulus strength is low. A more biologically plausible model that includes an additive bias to the sensory response from the cued location, although not mathematically equivalent to the optimal observer for the case of stimulus strength uncertainty, can approximate the benefits of the more computationally complex optimal Bayesian model. We discuss the implications of our findings on the field's common conceptualization of covert visual attention in the cueing task and what aspects, if any, might be unique to humans. Copyright © 2013 Elsevier Ltd. All rights reserved.
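The ideal observer described above, which weights evidence by cue validity, can be sketched in a few lines for the simplest fixed-contrast case. This is an illustrative toy model under textbook signal-detection assumptions (unit-variance Gaussian responses, known signal strength d), not the paper's fitted model; all parameter values are hypothetical:

```python
import math
import random

def ideal_choice(x_cued, x_uncued, d, validity):
    """Bayesian ideal observer for a cued 2AFC localization task:
    the prior over target location (the cue validity) is combined
    with the likelihood of the Gaussian responses at each location.
    Returns 0 if the cued location is chosen, 1 otherwise."""
    log_post_cued = math.log(validity) + d * x_cued
    log_post_uncued = math.log(1.0 - validity) + d * x_uncued
    return 0 if log_post_cued >= log_post_uncued else 1

# Simulate valid and invalid trials: the validity-weighted prior
# produces a cueing effect (higher accuracy on valid trials).
random.seed(0)
d, validity = 1.5, 0.8
hits = {"valid": [], "invalid": []}
for _ in range(20000):
    target_cued = random.random() < validity
    x_cued = random.gauss(d if target_cued else 0.0, 1.0)
    x_uncued = random.gauss(0.0 if target_cued else d, 1.0)
    choice = ideal_choice(x_cued, x_uncued, d, validity)
    key = "valid" if target_cued else "invalid"
    hits[key].append(choice == (0 if target_cued else 1))
valid_acc = sum(hits["valid"]) / len(hits["valid"])
invalid_acc = sum(hits["invalid"]) / len(hits["invalid"])
```

Note that the log-posterior comparison is algebraically an additive bias of log(validity/(1 - validity))/d on the cued response, which is why the abstract's "additive bias" model can approximate the ideal observer.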
An Investigation of Visual, Aural, Motion and Control Movement Cues.
ERIC Educational Resources Information Center
Matheny, W. G.; And Others
A study was conducted to determine the ways in which multi-sensory cues can be simulated and effectively used in the training of pilots. Two analytical bases, one called the stimulus environment approach and the other an information array approach, are developed along with a cue taxonomy. Cues are postulated on the basis of information gained from…
Horwood, Anna M; Riddell, Patricia M
2009-01-01
Binocular disparity, blur, and proximal cues drive convergence and accommodation. Disparity is considered to be the main vergence cue and blur the main accommodation cue. We have developed a remote haploscopic photorefractor to measure simultaneous vergence and accommodation objectively in a wide range of participants of all ages while fixating targets at between 0.3 and 2 m. By separating the three main near cues, we can explore their relative weighting in three-, two-, one-, and zero-cue conditions. Disparity can be manipulated by remote occlusion; blur cues manipulated by using either a Gabor patch or a detailed picture target; looming cues by either scaling or not scaling target size with distance. In normal orthophoric, emmetropic, symptom-free, naive visually mature participants, disparity was by far the most significant cue to both vergence and accommodation. Accommodation responses dropped dramatically if disparity was not available. Blur only had a clinically significant effect when disparity was absent. Proximity had very little effect. There was considerable interparticipant variation. We predict that relative weighting of near cue use is likely to vary between clinical groups and present some individual cases as examples. We are using this naturalistic tool to research strabismus, vergence and accommodation development, and emmetropization.
NASA Technical Reports Server (NTRS)
Young, L. R.; Oman, C. M.; Curry, R. E.
1977-01-01
Vestibular perception and the integration of several sensory inputs in simulation were studied. The relationship between tilt sensations induced by moving fields and those produced by actual body tilt is discussed. Linearvection studies are included, and the application of the vestibular model for perception of orientation based on motion cues is presented. Other areas examined include visual cues in approach to landing, and a comparison of linear and nonlinear washout filters using a model of the human vestibular system is given.
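The linear washout filters mentioned above are, in their classical form, high-pass filters that transmit transient accelerations to a motion platform while "washing out" sustained ones so the platform drifts back to neutral. A minimal first-order discrete sketch of that idea (a generic illustration, not the specific filters compared in the report; the time constant and step size are hypothetical):

```python
def highpass_washout(signal, dt, tau):
    """Discrete first-order high-pass ('washout') filter: passes
    rapid changes in the commanded acceleration but attenuates
    sustained components with time constant tau (seconds)."""
    alpha = tau / (tau + dt)
    y = [0.0]
    for k in range(1, len(signal)):
        y.append(alpha * (y[-1] + signal[k] - signal[k - 1]))
    return y

# A sustained acceleration step is transmitted at onset but decays
# toward zero -- the platform motion is washed out over time.
step = [0.0] + [1.0] * 200
out = highpass_washout(step, dt=0.01, tau=0.5)
```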