Sample records for static visual cues

  1. Enhancing Learning from Dynamic and Static Visualizations by Means of Cueing

    ERIC Educational Resources Information Center

    Kuhl, Tim; Scheiter, Katharina; Gerjets, Peter

    2012-01-01

    The current study investigated whether learning from dynamic visualizations and from two presentation formats for static visualizations can be enhanced by means of cueing. One hundred and fifty university students were randomly assigned to six conditions, resulting from a 2x3 design, with cueing (with/without) and type of visualization (dynamic, static-sequential,…

  2. Perceptual upright: the relative effectiveness of dynamic and static images under different gravity States.

    PubMed

    Jenkin, Michael R; Dyde, Richard T; Jenkin, Heather L; Zacher, James E; Harris, Laurence R

    2011-01-01

    The perceived direction of up depends on both gravity and visual cues to orientation. Static visual cues to orientation have been shown to be less effective in influencing the perception of upright (PU) under microgravity conditions than they are on earth (Dyde et al., 2009). Here we introduce dynamic orientation cues into the visual background to ascertain whether they might increase the effectiveness of visual cues in defining the PU under different gravity conditions. Brief periods of microgravity and hypergravity were created using parabolic flight. Observers viewed a polarized, natural scene presented at various orientations on a laptop viewed through a hood which occluded all other visual cues. The visual background was either an animated video clip in which actors moved along the visual ground plane or an individual static frame taken from the same clip. We measured the perceptual upright using the oriented character recognition test (OCHART). Dynamic visual cues significantly enhance the effectiveness of vision in determining the perceptual upright under normal gravity conditions. Strong trends were found for dynamic visual cues to produce an increase in the visual effect under both microgravity and hypergravity conditions.

  3. Evidence for impairments in using static line drawings of eye gaze cues to orient visual-spatial attention in children with high functioning autism.

    PubMed

    Goldberg, Melissa C; Mostow, Allison J; Vecera, Shaun P; Larson, Jennifer C Gidley; Mostofsky, Stewart H; Mahone, E Mark; Denckla, Martha B

    2008-09-01

    We examined the ability to use static line drawings of eye gaze cues to orient visual-spatial attention in children with high functioning autism (HFA) compared to typically developing children (TD). The task was organized such that on valid trials, gaze cues were directed toward the same spatial location as the appearance of an upcoming target, while on invalid trials gaze cues were directed to an opposite location. Unlike TD children, children with HFA showed no advantage in reaction time (RT) on valid trials compared to invalid trials (i.e., no significant validity effect). The two stimulus onset asynchronies (200 ms, 700 ms) did not differentially affect these findings. The results suggest that children with HFA show impairments in utilizing static line drawings of gaze cues to orient visual-spatial attention.

  4. Aurally aided visual search performance in a dynamic environment

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Watamaniuk, Scott N. J.; Gilkey, Robert H.

    2008-04-01

    Previous research has repeatedly shown that people can find a visual target significantly faster if spatial (3D) auditory displays direct attention to the corresponding spatial location. However, previous research has only examined searches for static (non-moving) targets in static visual environments. Since motion has been shown to affect visual acuity, auditory acuity, and visual search performance, it is important to characterize aurally-aided search performance in environments that contain dynamic (moving) stimuli. In the present study, visual search performance in both static and dynamic environments is investigated with and without 3D auditory cues. Eight participants searched for a single visual target hidden among 15 distracting stimuli. In the baseline audio condition, no auditory cues were provided. In the 3D audio condition, a virtual 3D sound cue originated from the same spatial location as the target. In the static search condition, the target and distractors did not move. In the dynamic search condition, all stimuli moved on various trajectories at 10 deg/s. The results showed a clear benefit of 3D audio that was present in both static and dynamic environments, suggesting that spatial auditory displays continue to be an attractive option for a variety of aircraft, motor vehicle, and command & control applications.

  5. The Effects of Attention Cueing on Visualizers' Multimedia Learning

    ERIC Educational Resources Information Center

    Yang, Hui-Yu

    2016-01-01

    The present study examines how various types of attention cueing and cognitive preference affect learners' comprehension of a cardiovascular system and cognitive load. EFL learners were randomly assigned to one of four conditions: non-signal, static-blood-signal, static-blood-static-arrow-signal, and animation-signal. The results indicated that…

  6. Enhancing Interactive Tutorial Effectiveness through Visual Cueing

    ERIC Educational Resources Information Center

    Jamet, Eric; Fernandez, Jonathan

    2016-01-01

    The present study investigated whether learning how to use a web service with an interactive tutorial can be enhanced by cueing. We expected the attentional guidance provided by visual cues to facilitate the selection of information in static screen displays that corresponded to spoken explanations. Unlike most previous studies in this area, we…

  7. Slow changing postural cues cancel visual field dependence on self-tilt detection.

    PubMed

    Scotto Di Cesare, C; Macaluso, T; Mestre, D R; Bringoux, L

    2015-01-01

    Interindividual differences influence the multisensory integration process involved in spatial perception. Here, we assessed the effect of visual field dependence on self-tilt detection relative to upright, as a function of static vs. slow changing visual or postural cues. To that aim, we manipulated slow rotations (i.e., 0.05°/s) of the body and/or the visual scene in pitch. Participants had to indicate whether they felt tilted forward at successive angles. Results show that thresholds for self-tilt detection substantially differed between visual field dependent/independent subjects when only the visual scene was rotated. This difference was no longer present when the body was actually rotated, whatever the visual scene condition (i.e., absent, static or rotated relative to the observer). These results suggest that the cancellation of visual field dependence by dynamic postural cues may rely on a multisensory reweighting process, in which slow changing vestibular/somatosensory inputs may prevail over visual inputs. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Static and Motion-Based Visual Features Used by Airport Tower Controllers: Some Implications for the Design of Remote or Virtual Towers

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.; Liston, Dorion B.

    2011-01-01

    Visual motion and other visual cues are used by tower controllers to provide important support for their control tasks at and near airports. These cues are particularly important for anticipated separation. Some of them, which we call visual features, have been identified from structured interviews and discussions with 24 active air traffic controllers or supervisors. The visual information that these features provide has been analyzed with respect to possible ways it could be presented at a remote tower that does not allow a direct view of the airport. Two types of remote towers are possible. One could be based on a plan-view, map-like computer-generated display of the airport and its immediate surroundings. An alternative would present a composite perspective view of the airport and its surroundings, possibly provided by an array of radially mounted cameras positioned at the airport in lieu of a tower. An initial, more detailed analysis of one of the specific landing cues identified by the controllers, landing deceleration, is provided as a basis for evaluating how controllers might detect and use it. Understanding other such cues will help identify the information that may be degraded or lost in a remote or virtual tower not located at the airport. Some initial suggestions are made about how some of the lost visual information might be presented in displays. Many of the cues considered involve visual motion, though some important static cues are also discussed.

  9. Intercepting a sound without vision

    PubMed Central

    Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica

    2017-01-01

    Visual information is extremely important to generate internal spatial representations. In the auditory modality, the absence of visual cues during early infancy does not preclude the development of some spatial strategies. However, specific spatial abilities might be impaired. In the current study, we investigated the effect of early visual deprivation on the ability to localize static and moving auditory stimuli by comparing sighted and early blind individuals' performance in different spatial tasks. We also examined perceptual stability in the two groups of participants by matching localization accuracy in a static and a dynamic head condition that involved rotational head movements. Sighted participants accurately localized static and moving sounds. Their localization ability remained unchanged after rotational movements of the head. Conversely, blind participants showed a leftward bias during the localization of static sounds and a small bias for moving sounds. Moreover, head movements induced a significant bias in the direction of head motion during the localization of moving sounds. These results suggest that internal spatial representations might be body-centered in blind individuals and that in sighted people the availability of visual cues during early infancy may affect sensory-motor interactions. PMID:28481939

  10. Visual speech influences speech perception immediately but not automatically.

    PubMed

    Mitterer, Holger; Reinisch, Eva

    2017-02-01

    Two experiments examined the time course of the use of auditory and visual speech cues to spoken word recognition using an eye-tracking paradigm. Results of the first experiment showed that the use of visual speech cues from lipreading is reduced if concurrently presented pictures require a division of attentional resources. This reduction was evident even when listeners' eye gaze was on the speaker rather than the (static) pictures. Experiment 2 used a deictic hand gesture to foster attention to the speaker. At the same time, the visual processing load was reduced by keeping the visual display constant over a fixed number of successive trials. Under these conditions, the visual speech cues from lipreading were used. Moreover, the eye-tracking data indicated that visual information was used immediately and even earlier than auditory information. In combination, these data indicate that visual speech cues are not used automatically, but if they are used, they are used immediately.

  11. Degradation of learned skills. Static practice effectiveness for visual approach and landing skill retention

    NASA Technical Reports Server (NTRS)

    Sitterley, T. E.

    1974-01-01

    The effectiveness of an improved static retraining method was evaluated for a simulated space vehicle approach and landing under instrument and visual flight conditions. Experienced pilots were trained and then tested after 4 months without flying to compare their performance using the improved method with three methods previously evaluated. Use of the improved static retraining method resulted in no practical or significant skill degradation and was found to be even more effective than methods using a dynamic presentation of visual cues. The results suggested that properly structured open-loop methods of flight control task retraining are feasible.

  12. Visual Displays and Contextual Presentations in Computer-Based Instruction.

    ERIC Educational Resources Information Center

    Park, Ok-choon

    1998-01-01

    Investigates the effects of two instructional strategies, visual display (animation, and static graphics with and without motion cues) and contextual presentation, in the acquisition of electronic troubleshooting skills using computer-based instruction. Study concludes that use of visual displays and contextual presentation be based on the…

  13. Gender recognition depends on type of movement and motor skill. Analyzing and perceiving biological motion in musical and nonmusical tasks.

    PubMed

    Wöllner, Clemens; Deconinck, Frederik J A

    2013-05-01

    Gender recognition in point-light displays was investigated with regard to body morphology cues and motion cues in human movements performed with different levels of technical skill. Gestures of male and female orchestral conductors were recorded with a motion capture system while they conducted excerpts from a Mendelssohn string symphony to musicians. Point-light displays of conductors were presented to observers under the following conditions: visual-only, auditory-only, audiovisual, and two non-conducting conditions (walking and static images). Observers distinguished between male and female conductors in gait and static images, but not in visual-only and auditory-only conducting conditions. Across all conductors, gender recognition for audiovisual stimuli was better than chance, yet significantly less reliable than for gait. Separate analyses for two groups of conductors indicated an expertise effect in that novice conductors' gender was perceived above chance level for visual-only and audiovisual conducting, while skilled conducting gestures of experts did not afford gender-specific cues. In these conditions, participants may have ignored the body morphology cues that led to correct judgments for static images. Results point to a response bias such that conductors were more often judged to be male. Thus judgment accuracy depended both on the conductors' level of expertise and on the observers' concepts, suggesting that perceivable differences between men and women may diminish for highly trained movements of experienced individuals. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. The effects of 3D interactive animated graphics on student learning and attitudes in computer-based instruction

    NASA Astrophysics Data System (ADS)

    Moon, Hye Sun

    Visuals are extensively used as instructional tools in education to present spatially-based information. Recent computer technology allows the generation of 3D animated visuals to extend the presentation in computer-based instruction. Animated visuals in 3D representation not only possess motivational value that promotes positive attitudes toward instruction but also facilitate learning when the subject matter requires dynamic motion and 3D visual cues. In this study, three questions are explored: (1) how 3D graphics affect student learning and attitudes, in comparison with 2D graphics; (2) how animated graphics affect student learning and attitudes, in comparison with static graphics; and (3) whether 3D graphics, when supported by interactive animation, provide the most effective visual cues for improving learning and developing positive attitudes. A total of 145 eighth-grade students participated in a 2 x 2 factorial design study. The subjects were randomly assigned to one of four computer-based instruction conditions: 2D static; 2D animated; 3D static; and 3D animated. The results indicated that: (1) Students in the 3D graphic condition exhibited more positive attitudes toward instruction than those in the 2D graphic condition. No group differences were found between the posttest scores of the 3D and 2D graphic conditions. However, students in the 3D graphic condition took less time for information retrieval on the posttest than those in the 2D graphic condition. (2) Students in the animated graphic condition exhibited slightly more positive attitudes toward instruction than those in the static graphic condition. No group differences were found between the posttest scores of the animated and static graphic conditions. However, students in the animated graphic condition took less time for information retrieval on the posttest than those in the static graphic condition. (3) Students in the 3D animated graphic condition exhibited more positive attitudes toward instruction than those in the other treatment conditions (2D static, 2D animated, and 3D static). No group differences were found in the posttest scores among the four treatment conditions. However, students in the 3D animated condition took less time for information retrieval on the posttest than those in the other treatment conditions.
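    The 2 x 2 design above (graphic dimensionality x animation) maps directly onto a two-way ANOVA on the retrieval-time measure. Below is a minimal sketch of that analysis, assuming illustrative data and hypothetical column names rather than the study's actual dataset:

    ```python
    # Hedged sketch: two-way ANOVA for a 2x2 design (dimensionality x animation).
    # Data, effect sizes, and column names are illustrative, not the study's.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    rng = np.random.default_rng(0)
    rows = []
    for dim in ("2D", "3D"):
        for anim in ("static", "animated"):
            # fabricated retrieval times (s); faster for 3D and animated cells
            base = 30 - 4 * (dim == "3D") - 3 * (anim == "animated")
            rows += [{"dim": dim, "anim": anim, "retrieval_time": t}
                     for t in rng.normal(base, 5.0, 36)]  # ~145/4 students per cell
    df = pd.DataFrame(rows)

    model = ols("retrieval_time ~ C(dim) * C(anim)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction
    ```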

  15. The internal representation of head orientation differs for conscious perception and balance control

    PubMed Central

    Dalton, Brian H.; Rasman, Brandon G.; Inglis, J. Timothy

    2017-01-01

    Key points: We tested perceived head-on-feet orientation and the direction of vestibular-evoked balance responses in passively and actively held head-turned postures. The direction of vestibular-evoked balance responses was not aligned with perceived head-on-feet orientation while maintaining prolonged passively held head-turned postures. Furthermore, static visual cues of head-on-feet orientation did not update the estimate of head posture for the balance controller. A prolonged actively held head-turned posture did not elicit a rotation in the direction of the vestibular-evoked balance response despite a significant rotation in perceived angular head posture. It is proposed that conscious perception of head posture and the transformation of vestibular signals for standing balance relying on this head posture are not dependent on the same internal representation. Rather, the balance system may operate under its own sensorimotor principles, which are partly independent from perception. Abstract: Vestibular signals used for balance control must be integrated with other sensorimotor cues to allow transformation of descending signals according to an internal representation of body configuration. We explored two alternative models of sensorimotor integration that propose (1) a single internal representation of head-on-feet orientation is responsible for perceived postural orientation and standing balance or (2) conscious perception and balance control are driven by separate internal representations. During three experiments, participants stood quietly while passively or actively maintaining a prolonged head-turned posture (>10 min). Throughout the trials, participants intermittently reported their perceived head angular position, and subsequently electrical vestibular stimuli were delivered to elicit whole-body balance responses. Visual recalibration of head-on-feet posture was used to determine whether static visual cues are used to update the internal representation of body configuration for perceived orientation and standing balance. All three experiments involved situations in which the vestibular-evoked balance response was not orthogonal to perceived head-on-feet orientation, regardless of the visual information provided. For prolonged head-turned postures, balance responses consistent with actual head-on-feet posture occurred only during the active condition. Our results indicate that conscious perception of head-on-feet posture and vestibular control of balance do not rely on the same internal representation, but instead treat sensorimotor cues in parallel and may arrive at different conclusions regarding head-on-feet posture. The balance system appears to bypass static visual cues of postural orientation and mainly use other sensorimotor signals of head-on-feet position to transform vestibular signals of head motion, a mechanism appropriate for most daily activities. PMID:28035656

  16. Effects of visual motion consistent or inconsistent with gravity on postural sway.

    PubMed

    Balestrucci, Priscilla; Daprati, Elena; Lacquaniti, Francesco; Maffei, Vincenzo

    2017-07-01

    Vision plays an important role in postural control, and visual perception of the gravity-defined vertical helps maintain upright stance. In addition, the influence of the gravity field on objects' motion is known to provide a reference for motor and non-motor behavior. However, the role of dynamic visual cues related to gravity in the control of postural balance has been little investigated. In order to understand whether visual cues about gravitational acceleration are relevant for postural control, we assessed the relation between postural sway and visual motion congruent or incongruent with gravitational acceleration. Postural sway of 44 healthy volunteers was recorded by means of force platforms while they watched virtual targets moving in different directions and with different accelerations. Small but significant differences emerged in sway parameters with respect to the characteristics of target motion. Namely, for vertically accelerated targets, gravitational motion (GM) was associated with smaller oscillations of the center of pressure than anti-GM. The present findings support the hypothesis that not only static, but also dynamic visual cues about the direction and magnitude of the gravitational field are relevant for balance control during upright stance.
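    Force-platform sway of this kind is usually summarized by center-of-pressure (CoP) statistics. A minimal sketch of common sway parameters, assuming a hypothetical CoP trace sampled at 100 Hz (the abstract does not specify which parameters or sampling rate were used):

    ```python
    # Hedged sketch: standard center-of-pressure (CoP) sway parameters.
    # Parameter choice and sampling rate are assumptions, not from the paper.
    import numpy as np

    def sway_parameters(cop_xy: np.ndarray, fs: float = 100.0) -> dict:
        """cop_xy: (n_samples, 2) CoP positions in mm (AP, ML axes)."""
        centered = cop_xy - cop_xy.mean(axis=0)
        rms = np.sqrt((centered ** 2).sum(axis=1).mean())   # radial RMS, mm
        steps = np.diff(cop_xy, axis=0)
        path = np.linalg.norm(steps, axis=1).sum()          # sway path length, mm
        return {"rms_mm": rms, "path_mm": path,
                "mean_velocity_mm_s": path / (len(cop_xy) / fs)}

    # Example: 30 s of simulated sway recorded while viewing a moving target
    rng = np.random.default_rng(1)
    cop = np.cumsum(rng.normal(0.0, 0.05, size=(3000, 2)), axis=0)
    print(sway_parameters(cop))
    ```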

  17. Degradation of learned skills: Effectiveness of practice methods on visual approach and landing skill retention

    NASA Technical Reports Server (NTRS)

    Sitterley, T. E.; Zaitzeff, L. P.; Berge, W. A.

    1972-01-01

    Flight control and procedural task skill degradation, and the effectiveness of retraining methods, were evaluated for a simulated space vehicle approach and landing under instrument and visual flight conditions. Fifteen experienced pilots were trained and then tested after 4 months either without intervening practice or with static rehearsal, dynamic rehearsal, or dynamic warmup practice. Performance on both the flight control and procedure tasks degraded significantly after 4 months. The rehearsal methods effectively countered procedure task skill degradation, while dynamic rehearsal or a combination of static rehearsal and dynamic warmup practice was required for the flight control tasks. The quality of the retraining methods appeared to depend primarily on the efficiency of visual cue reinforcement.

  18. Differential approach to strategies of segmental stabilisation in postural control.

    PubMed

    Isableu, Brice; Ohlmann, Théophile; Crémieux, Jacques; Amblard, Bernard

    2003-05-01

    The present paper attempts to clarify the between-subjects variability exhibited in segmental stabilisation strategies and in their associated sensory contributions. Previous data have emphasised close relationships between interindividual variability in the visual control of posture and in spatial visual perception. In this study, we focused on the possible relationships linking perceptual visual field dependence-independence and the visual contribution to segmental stabilisation strategies. Visual field dependent (FD) and field independent (FI) subjects were selected on the basis of their extreme scores in a static rod and frame test in which an estimate of the subjective vertical was required. In the postural test, the subjects stood in the sharpened Romberg position in darkness or under normal or stroboscopic illumination, in front of either a vertical or a tilted frame. Strategies of segmental stabilisation of the head, shoulders and hip in the roll plane were analysed by means of their anchoring index (AI). Our hypothesis was that FD subjects might rely mainly on visual cues for calibrating not only their spatial perception but also their strategies of segmental stabilisation. In the case of visual cue disturbances, a greater visual dependency of the segmental stabilisation strategies in FD subjects would then show up as more systematic "en bloc" functioning (i.e. negative AI) between two adjacent segments. The main results are the following: 1. Strategies of segmental stabilisation differed between the two groups, and the differences were amplified by the deprivation of vision and/or of static visual cues. 2. In the absence of vision and/or of static visual cues, FD subjects showed an increased efficiency of the hip stabilisation-in-space strategy and an "en bloc" operation of the shoulder-hip unit (whole trunk). This "en bloc" operation extended to the whole head-trunk unit in darkness, associated with a hip stabilisation in space. 3. The FI subjects adopted neither a strategy of segmental stabilisation in space nor one anchored on the underlying segment, whatever the body segment considered and the visual condition. Thus, in this group, head, shoulder and hip moved independently of each other during stance control, largely irrespective of the visual condition. The results, emphasising a differential weighting of the sensory inputs involved in both perceptual and postural control, are discussed in terms of the differential choice of, and/or ability to select, the adequate frame of reference common to both cognitive and motor spatial activities. We assume that a motor-somesthetic "neglect", or a lack of mastery of these inputs/outputs, rather than a mere visual dependence, generates these interindividual differences in both spatial perception and postural balance in FD subjects. This proprioceptive "neglect" is assumed to lead FD subjects to sensory reweighting, whereas proprioceptive dominance would give FI subjects a greater ability to select the adequate frame of reference in the case of intersensory disturbances. Finally, this study also provides evidence for a new interpretation of the visual field dependence-independence dimension in both spatial perception and postural control.
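    The anchoring index (AI) used above is commonly computed from the angular dispersion of a segment measured in space versus relative to the underlying segment: positive values indicate stabilization in space, negative values "en bloc" functioning with the support. A sketch under that assumption (the authors' exact formula may differ):

    ```python
    # Hedged sketch of the anchoring index (AI) as commonly defined:
    #   AI = (sd_relative - sd_absolute) / (sd_relative + sd_absolute)
    # AI > 0: segment stabilized in space; AI < 0: "en bloc" with segment below.
    import numpy as np

    def anchoring_index(segment_roll_space: np.ndarray,
                        support_roll_space: np.ndarray) -> float:
        """Roll-angle time series (deg) of a segment and its supporting segment,
        both expressed in space coordinates."""
        sd_abs = np.std(segment_roll_space)                       # dispersion in space
        sd_rel = np.std(segment_roll_space - support_roll_space)  # dispersion on support
        return (sd_rel - sd_abs) / (sd_rel + sd_abs)

    # Example: a head trace tracking the shoulders closely -> negative AI ("en bloc")
    t = np.linspace(0, 10, 1000)
    shoulders = 2.0 * np.sin(2 * np.pi * 0.3 * t)
    head = shoulders + 0.2 * np.sin(2 * np.pi * 1.0 * t)
    print(anchoring_index(head, shoulders))
    ```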

  19. Dynamic sound localization in cats

    PubMed Central

    Ruhland, Janet L.; Jones, Amy E.

    2015-01-01

    Sound localization in cats and humans relies on head-centered acoustic cues. Studies have shown that humans are able to localize sounds during rapid head movements that are directed toward the target or other objects of interest. We studied whether cats are able to utilize similar dynamic acoustic cues to localize acoustic targets delivered during rapid eye-head gaze shifts. We trained cats with visual-auditory two-step tasks in which we presented a brief sound burst during saccadic eye-head gaze shifts toward a prior visual target. No consistent or significant differences in accuracy or precision were found between this dynamic task (2-step saccade) and the comparable static task (single saccade when the head is stable) in either horizontal or vertical direction. Cats appear to be able to process dynamic auditory cues and execute complex motor adjustments to accurately localize auditory targets during rapid eye-head gaze shifts. PMID:26063772

  20. Hairy Slices: Evaluating the Perceptual Effectiveness of Cutting Plane Glyphs for 3D Vector Fields.

    PubMed

    Stevens, Andrew H; Butkiewicz, Thomas; Ware, Colin

    2017-01-01

    Three-dimensional vector fields are common datasets throughout the sciences. Visualizing these fields is inherently difficult due to issues such as visual clutter and self-occlusion. Cutting planes are often used to overcome these issues by presenting more manageable slices of data. The existing literature provides many techniques for visualizing the flow through these cutting planes; however, there is a lack of empirical studies focused on the underlying perceptual cues that make popular techniques successful. This paper presents a quantitative human factors study that evaluates static monoscopic depth and orientation cues in the context of cutting plane glyph designs for exploring and analyzing 3D flow fields. The goal of the study was to ascertain the relative effectiveness of various techniques for portraying the direction of flow through a cutting plane at a given point, and to identify the visual cues and combinations of cues involved, and how they contribute to accurate performance. It was found that increasing the dimensionality of line-based glyphs into tubular structures enhances their ability to convey orientation through shading, and that increasing their diameter intensifies this effect. These tube-based glyphs were also less sensitive to visual clutter issues at higher densities. Adding shadows to lines was also found to increase perception of flow direction. Implications of the experimental results are discussed and extrapolated into a number of guidelines for designing more perceptually effective glyphs for 3D vector field visualizations.
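    The cutting-plane approach the study evaluates amounts to sampling the 3D field on a regular grid over a plane and rendering one glyph per sample. A minimal sketch of that sampling step, with an arbitrary analytic field standing in for real data (glyph drawing itself is omitted):

    ```python
    # Hedged sketch: sampling a 3D vector field on a cutting plane for glyphs.
    # The field and plane here are arbitrary stand-ins, not the study's data.
    import numpy as np

    def field(p: np.ndarray) -> np.ndarray:
        """Toy 3D vector field: a helical flow around the z-axis."""
        x, y, z = p[..., 0], p[..., 1], p[..., 2]
        return np.stack([-y, x, np.ones_like(z)], axis=-1)

    # Cutting plane through the origin, spanned by orthonormal vectors u, v
    origin = np.zeros(3)
    u = np.array([1.0, 0.0, 0.0])
    v = np.array([0.0, 0.0, 1.0])
    normal = np.cross(u, v)

    s, t = np.meshgrid(np.linspace(-1, 1, 8), np.linspace(-1, 1, 8))
    points = origin + s[..., None] * u + t[..., None] * v   # glyph anchor positions
    vecs = field(points)

    # Decompose each sample into in-plane and through-plane components; a glyph
    # must encode both (e.g., tube direction plus shading) to convey 3D flow.
    through = (vecs @ normal)[..., None] * normal
    in_plane = vecs - through
    print(in_plane.shape, through.shape)  # (8, 8, 3) each
    ```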

  1. The internal representation of head orientation differs for conscious perception and balance control.

    PubMed

    Dalton, Brian H; Rasman, Brandon G; Inglis, J Timothy; Blouin, Jean-Sébastien

    2017-04-15

    We tested perceived head-on-feet orientation and the direction of vestibular-evoked balance responses in passively and actively held head-turned postures. The direction of vestibular-evoked balance responses was not aligned with perceived head-on-feet orientation while maintaining prolonged passively held head-turned postures. Furthermore, static visual cues of head-on-feet orientation did not update the estimate of head posture for the balance controller. A prolonged actively held head-turned posture did not elicit a rotation in the direction of the vestibular-evoked balance response despite a significant rotation in perceived angular head posture. It is proposed that conscious perception of head posture and the transformation of vestibular signals for standing balance relying on this head posture are not dependent on the same internal representation. Rather, the balance system may operate under its own sensorimotor principles, which are partly independent from perception. Vestibular signals used for balance control must be integrated with other sensorimotor cues to allow transformation of descending signals according to an internal representation of body configuration. We explored two alternative models of sensorimotor integration that propose (1) a single internal representation of head-on-feet orientation is responsible for perceived postural orientation and standing balance or (2) conscious perception and balance control are driven by separate internal representations. During three experiments, participants stood quietly while passively or actively maintaining a prolonged head-turned posture (>10 min). Throughout the trials, participants intermittently reported their perceived head angular position, and subsequently electrical vestibular stimuli were delivered to elicit whole-body balance responses. Visual recalibration of head-on-feet posture was used to determine whether static visual cues are used to update the internal representation of body configuration for perceived orientation and standing balance. All three experiments involved situations in which the vestibular-evoked balance response was not orthogonal to perceived head-on-feet orientation, regardless of the visual information provided. For prolonged head-turned postures, balance responses consistent with actual head-on-feet posture occurred only during the active condition. Our results indicate that conscious perception of head-on-feet posture and vestibular control of balance do not rely on the same internal representation, but instead treat sensorimotor cues in parallel and may arrive at different conclusions regarding head-on-feet posture. The balance system appears to bypass static visual cues of postural orientation and mainly use other sensorimotor signals of head-on-feet position to transform vestibular signals of head motion, a mechanism appropriate for most daily activities. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.

  2. Transient cardio-respiratory responses to visually induced tilt illusions

    NASA Technical Reports Server (NTRS)

    Wood, S. J.; Ramsdell, C. D.; Mullen, T. J.; Oman, C. M.; Harm, D. L.; Paloski, W. H.

    2000-01-01

    Although the orthostatic cardio-respiratory response is primarily mediated by the baroreflex, studies have shown that vestibular cues also contribute in both humans and animals. We have demonstrated a visually mediated response to illusory tilt in some human subjects. Blood pressure, heart and respiration rate, and lung volume were monitored in 16 supine human subjects during two types of visual stimulation, and compared with responses to real passive whole body tilt from supine to head 80 degrees upright. Visual tilt stimuli consisted of either a static scene from an overhead mirror or constant velocity scene motion along different body axes generated by an ultra-wide dome projection system. Visual vertical cues were initially aligned with the longitudinal body axis. Subjective tilt and self-motion were reported verbally. Although significant changes in cardio-respiratory parameters to illusory tilts could not be demonstrated for the entire group, several subjects showed significant transient decreases in mean blood pressure resembling their initial response to passive head-up tilt. Changes in pulse pressure and a slight elevation in heart rate were noted. These transient responses are consistent with the hypothesis that visual-vestibular input contributes to the initial cardiovascular adjustment to a change in posture in humans. On average the static scene elicited perceived tilt without rotation. Dome scene pitch and yaw elicited perceived tilt and rotation, and dome roll motion elicited perceived rotation without tilt. A significant correlation between the magnitude of physiological and subjective reports could not be demonstrated.

  3. Audio-visual speech intelligibility benefits with bilateral cochlear implants when talker location varies.

    PubMed

    van Hoesel, Richard J M

    2015-04-01

    One of the key benefits of using cochlear implants (CIs) in both ears rather than just one is improved localization. It is likely that in complex listening scenes, improved localization allows bilateral CI users to orient toward talkers to improve signal-to-noise ratios and gain access to visual cues, but to date, that conjecture has not been tested. To obtain an objective measure of that benefit, seven bilateral CI users were assessed for both auditory-only and audio-visual speech intelligibility in noise using a novel dynamic spatial audio-visual test paradigm. For each trial conducted in spatially distributed noise, first, an auditory-only cueing phrase that was spoken by one of four talkers was selected and presented from one of four locations. Shortly afterward, a target sentence was presented that was either audio-visual or, in another test configuration, audio-only and was spoken by the same talker and from the same location as the cueing phrase. During the target presentation, visual distractors were added at other spatial locations. Results showed that in terms of speech reception thresholds (SRTs), the average improvement for bilateral listening over the better performing ear alone was 9 dB for the audio-visual mode, and 3 dB for audition-alone. Comparison of bilateral performance for audio-visual and audition-alone showed that inclusion of visual cues led to an average SRT improvement of 5 dB. For unilateral device use, no such benefit arose, presumably due to the greatly reduced ability to localize the target talker to acquire visual information. The bilateral CI speech intelligibility advantage over the better ear in the present study is much larger than that previously reported for static talker locations and indicates greater everyday speech benefits and improved cost-benefit than estimated to date.
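    Speech reception thresholds (SRTs) of the kind reported here are conventionally estimated with an adaptive staircase converging on the SNR for 50% intelligibility, and the bilateral benefit is then a simple difference in dB. A sketch under those assumptions (the paper's exact procedure is not given in the abstract):

    ```python
    # Hedged sketch: estimating a speech reception threshold (SRT) as the mean
    # SNR at staircase reversals, then the bilateral benefit as an SRT difference.
    # The adaptive-track procedure here is a common convention, not the paper's.
    import numpy as np

    def srt_from_track(snr_track) -> float:
        """Mean SNR (dB) at the reversal points of an adaptive track."""
        snr = np.asarray(snr_track, dtype=float)
        d = np.sign(np.diff(snr))
        turns = np.where(d[1:] != d[:-1])[0] + 1  # indices of turnaround samples
        return float(snr[turns].mean())

    # Illustrative tracks (dB SNR) for better-ear-alone and bilateral listening
    better_ear = [10, 8, 6, 8, 6, 4, 6, 4, 6, 4]
    bilateral = [10, 8, 6, 4, 2, 0, 2, 0, -2, 0]
    benefit = srt_from_track(better_ear) - srt_from_track(bilateral)
    print(f"bilateral benefit: {benefit:.1f} dB")  # illustrative; abstract: 3 dB audio-only
    ```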

  4. Designing Computer-Based Learning Contents: Influence of Digital Zoom on Attention

    ERIC Educational Resources Information Center

    Glaser, Manuela; Lengyel, Dominik; Toulouse, Catherine; Schwan, Stephan

    2017-01-01

    In the present study, we investigated the role of digital zoom as a tool for directing attention while looking at visual learning material. In particular, we analyzed whether minimal digital zoom functions similarly to a rhetorical device by cueing mental zooming of attention accordingly. Participants were presented either static film clips, film…

  5. Spatial reference frames of visual, vestibular, and multimodal heading signals in the dorsal subdivision of the medial superior temporal area.

    PubMed

    Fetsch, Christopher R; Wang, Sentao; Gu, Yong; Deangelis, Gregory C; Angelaki, Dora E

    2007-01-17

    Heading perception is a complex task that generally requires the integration of visual and vestibular cues. This sensory integration is complicated by the fact that these two modalities encode motion in distinct spatial reference frames (visual, eye-centered; vestibular, head-centered). Visual and vestibular heading signals converge in the primate dorsal subdivision of the medial superior temporal area (MSTd), a region thought to contribute to heading perception, but the reference frames of these signals remain unknown. We measured the heading tuning of MSTd neurons by presenting optic flow (visual condition), inertial motion (vestibular condition), or a congruent combination of both cues (combined condition). Static eye position was varied from trial to trial to determine the reference frame of tuning (eye-centered, head-centered, or intermediate). We found that tuning for optic flow was predominantly eye-centered, whereas tuning for inertial motion was intermediate but closer to head-centered. Reference frames in the two unimodal conditions were rarely matched in single neurons and uncorrelated across the population. Notably, reference frames in the combined condition varied as a function of the relative strength and spatial congruency of visual and vestibular tuning. This represents the first investigation of spatial reference frames in a naturalistic, multimodal condition in which cues may be integrated to improve perceptual performance. Our results compare favorably with the predictions of a recent neural network model that uses a recurrent architecture to perform optimal cue integration, suggesting that the brain could use a similar computational strategy to integrate sensory signals expressed in distinct frames of reference.
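    Reference-frame analyses of this kind often quantify how far a tuning curve shifts when static eye position changes: a shift equal to the gaze displacement indicates eye-centered coding, zero shift head-centered coding, intermediate values something in between. A sketch of such a displacement index, assuming circular heading-tuning curves (the authors' exact metric may differ):

    ```python
    # Hedged sketch: displacement index for reference-frame analysis.
    # shift/gaze ratio ~1 -> eye-centered; ~0 -> head-centered. Details assumed.
    import numpy as np

    def tuning_shift_deg(tuning_a: np.ndarray, tuning_b: np.ndarray,
                         bin_deg: float) -> float:
        """Circular shift (deg) of tuning_b relative to tuning_a that maximizes
        their correlation; curves sampled every bin_deg degrees of heading."""
        n = len(tuning_a)
        corrs = [np.corrcoef(np.roll(tuning_a, k), tuning_b)[0, 1] for k in range(n)]
        shift = int(np.argmax(corrs)) * bin_deg
        return shift if shift <= 180 else shift - 360  # signed shift

    headings = np.arange(0, 360, 10.0)
    eye_delta = 20.0  # gaze moved 20 deg between conditions
    curve_a = np.exp(np.cos(np.radians(headings - 90)))              # eye position A
    curve_b = np.exp(np.cos(np.radians(headings - 90 - eye_delta)))  # eye position B
    shift = tuning_shift_deg(curve_a, curve_b, bin_deg=10.0)
    print(shift / eye_delta)  # displacement index ~1.0 -> eye-centered
    ```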

  6. Illusory Distance Modulates Perceived Size of Afterimage despite the Disappearance of Depth Cues

    PubMed Central

    Liu, Shengxi; Lei, Quan

    2016-01-01

    It is known that the perceived size of an afterimage is modulated by the perceived distance between the observer and the depth plane on which the afterimage is projected (Emmert's law). Illusions like the Ponzo illusion demonstrate that illusory distance induced by depth cues can also affect the perceived size of an object. In this study, we report that illusory distance not only modulates the perceived size of an object's afterimage while the depth cues are present, but that the modulation persists after the depth cues disappear. We used an adapted version of the classic Ponzo illusion. Illusory depth perception was induced by linear perspective cues, with two tilted lines converging at the upper boundary of the display. Two horizontal bars were placed between the two lines, resulting in a percept of the upper bar being farther away than the lower bar. Observers were instructed to judge the relative size of the afterimages of the lower and upper bars after adaptation. When the perspective cues and the bars were static, the illusory effect of the Ponzo afterimage was consistent with that of the traditional size-distance illusion. When the perspective cues were flickering and the bars were static, only the afterimage of the latter was perceived, yet a considerable amount of the illusory effect remained. The results could not be explained by memory of a prejudgment of bar length during the adaptation phase. The findings suggest that co-occurrence of depth cues and an object may attach a depth marker to the object, so that the perceived size of the object or its afterimage is modulated by feedback of depth information from higher-level visual cortex even when no depth cues are directly available at the retinal level. PMID:27391335
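    Emmert's law, which frames this result, states that for a fixed retinal size an afterimage's perceived linear size scales with the perceived distance of the surface it is projected onto. A worked sketch of that relation (numbers are illustrative, not from the study):

    ```python
    # Hedged sketch of Emmert's law: perceived size grows with perceived distance
    # for a fixed retinal angle. Values are illustrative, not from the study.
    import math

    def perceived_size_m(retinal_angle_deg: float, distance_m: float) -> float:
        return 2 * distance_m * math.tan(math.radians(retinal_angle_deg) / 2)

    angle = 2.0  # afterimage subtends a fixed 2 deg on the retina after adaptation
    for d in (0.5, 1.0, 2.0):  # illusory distances suggested by perspective cues
        print(f"apparent distance {d} m -> perceived size {perceived_size_m(angle, d):.3f} m")
    # The Ponzo configuration makes the upper bar seem farther, so its afterimage,
    # with identical retinal size, is perceived as larger.
    ```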

  7. Visual processing of moving and static self body-parts.

    PubMed

    Frassinetti, Francesca; Pavani, Francesco; Zamagni, Elisa; Fusaroli, Giulia; Vescovi, Massimo; Benassi, Mariagrazia; Avanzi, Stefano; Farnè, Alessandro

    2009-07-01

    Humans' ability to recognize static images of self body-parts can be lost following a lesion of the right hemisphere [Frassinetti, F., Maini, M., Romualdi, S., Galante, E., & Avanzi, S. (2008). Is it mine? Hemispheric asymmetries in corporeal self-recognition. Journal of Cognitive Neuroscience, 20, 1507-1516]. Here we investigated whether the visual information provided by the movement of self body-parts may be processed separately by right brain-damaged (RBD) patients and constitute a valuable cue to reduce their deficit in self body-parts processing. To pursue these aims, neurologically healthy subjects and RBD patients performed a matching task on pairs of successive visual stimuli, in two conditions. In the dynamic condition, participants were shown movies of moving body-parts (hand, foot, arm and leg); in the static condition, participants were shown still images of the same body-parts. In each condition, on half of the trials at least one stimulus in the pair was from the participant's own body ('Self' condition), whereas on the remaining half of the trials both stimuli were from another person ('Other' condition). Results showed that in healthy participants the self-advantage was present when processing both static and dynamic body-parts, but it was stronger in the latter condition. In RBD patients, however, the self-advantage was absent in the static, but present in the dynamic body-parts condition. These findings suggest that visual information from self body-parts in motion may be processed independently in patients with impaired static self-processing, thus pointing to a modular organization of the mechanisms responsible for the self/other distinction.

  8. Static and Dynamic Facial Cues Differentially Affect the Consistency of Social Evaluations.

    PubMed

    Hehman, Eric; Flake, Jessica K; Freeman, Jonathan B

    2015-08-01

    Individuals are quite sensitive to others' appearance cues when forming social evaluations. Cues such as facial emotional resemblance are based on facial musculature and thus dynamic. Cues such as a face's structure are based on the underlying bone and are thus relatively static. The current research examines the distinction between these types of facial cues by investigating the consistency in social evaluations arising from dynamic versus static cues. Specifically, across four studies using real faces, digitally generated faces, and downstream behavioral decisions, we demonstrate that social evaluations based on dynamic cues, such as intentions, have greater variability across multiple presentations of the same identity than do social evaluations based on static cues, such as ability. Thus, although evaluations of intentions vary considerably across different instances of a target's face, evaluations of ability are relatively fixed. The findings highlight the role of facial cues' consistency in the stability of social evaluations. © 2015 by the Society for Personality and Social Psychology, Inc.

  9. Inferring the direction of implied motion depends on visual awareness

    PubMed Central

    Faivre, Nathan; Koch, Christof

    2014-01-01

    Visual awareness of an event, object, or scene is, by essence, an integrated experience, whereby different visual features composing an object (e.g., orientation, color, shape) appear as a unified percept and are processed as a whole. Here, we tested in human observers whether perceptual integration of static motion cues depends on awareness by measuring the capacity to infer the direction of motion implied by a static visible or invisible image under continuous flash suppression. Using measures of directional adaptation, we found that visible but not invisible implied motion adaptors biased the perception of real motion probes. In a control experiment, we found that invisible adaptors implying motion primed the perception of subsequent probes when they were identical (i.e., repetition priming), but not when they only shared the same direction (i.e., direction priming). Furthermore, using a model of visual processing, we argue that repetition priming effects are likely to arise as early as in the primary visual cortex. We conclude that although invisible images implying motion undergo some form of nonconscious processing, visual awareness is necessary to make inferences about motion direction. PMID:24706951

  10. Inferring the direction of implied motion depends on visual awareness.

    PubMed

    Faivre, Nathan; Koch, Christof

    2014-04-04

    Visual awareness of an event, object, or scene is, by essence, an integrated experience, whereby different visual features composing an object (e.g., orientation, color, shape) appear as a unified percept and are processed as a whole. Here, we tested in human observers whether perceptual integration of static motion cues depends on awareness by measuring the capacity to infer the direction of motion implied by a static visible or invisible image under continuous flash suppression. Using measures of directional adaptation, we found that visible but not invisible implied motion adaptors biased the perception of real motion probes. In a control experiment, we found that invisible adaptors implying motion primed the perception of subsequent probes when they were identical (i.e., repetition priming), but not when they only shared the same direction (i.e., direction priming). Furthermore, using a model of visual processing, we argue that repetition priming effects are likely to arise as early as in the primary visual cortex. We conclude that although invisible images implying motion undergo some form of nonconscious processing, visual awareness is necessary to make inferences about motion direction.

  11. The effect of animation on learning action symbols by individuals with intellectual disabilities.

    PubMed

    Fujisawa, Kazuko; Inoue, Tomoyoshi; Yamana, Yuko; Hayashi, Humirhiro

    2011-03-01

    The purpose of the present study was to investigate whether participants with intellectual impairments could benefit from the movement associated with animated pictures while they were learning symbol names. Sixteen school students, whose linguistic-developmental age ranged from 38–91 months, participated in the experiment. They were taught 16 static visual symbols and the corresponding action words (naming task) in two sessions conducted one week apart. In the experimental condition, animation was employed to facilitate comprehension, whereas no animation was used in the control condition. Enhancement of learning was shown in the experimental condition, suggesting that the participants benefited from animated symbols. Furthermore, it was found that the lower the linguistic developmental age, the more effective the animated cue was in learning static visual symbols.

  12. Visually-induced reorientation illusions as a function of age.

    PubMed

    Howard, I P; Jenkin, H L; Hu, G

    2000-09-01

    We reported previously that supine subjects inside a furnished room who are tilted 90 degrees may experience themselves and the room as upright to gravity. We call this the levitation illusion because it creates sensations similar to those experienced in weightlessness. It is an example of a larger class of novel static reorientation illusions that we have explored. Stationary subjects inside a furnished room rotating about a horizontal axis experience complete self rotation about the roll or pitch axis. We call this a dynamic reorientation illusion. We have determined the incidence of static and dynamic reorientation illusions in subjects ranging in age from 9 to 78 yr. Some 90% of subjects of all ages experienced the dynamic reorientation illusion but the percentage of subjects experiencing static reorientation illusions increased with age. We propose that the dynamic illusion depends on a primitive mechanism of visual-vestibular interaction but that static reorientation illusions depend on learned visual cues to the vertical arising from the perceived tops and bottoms of familiar objects and spatial relationships between objects. Older people become more dependent on visual polarity to compensate for loss in vestibular sensitivity. Of 9 astronauts, 4 experienced the levitation illusion. The relationship between susceptibility to reorientation illusions on Earth and in space has still to be determined. We propose that the Space Station will be less disorienting if pictures of familiar objects line the walls.

  13. Accurate Visual Heading Estimation at High Rotation Rate Without Oculomotor or Static-Depth Cues

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Perrone, John A.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    It has been claimed that either oculomotor or static depth cues provide the signals about self-rotation necessary to estimate heading accurately at rotation rates above approximately 1 deg/s. We tested this hypothesis by simulating self-motion along a curved path with the eyes fixed in the head (plus or minus 16 deg/s of rotation). Curvilinear motion offers two advantages: 1) heading remains constant in retinotopic coordinates, and 2) there is no visual-oculomotor conflict (both actual and simulated eye position remain stationary). We simulated 400 ms of rotation combined with 16 m/s of translation at fixed angles with respect to gaze towards two vertical planes of random dots initially 12 and 24 m away, with a field of view of 45 degrees. Four subjects were asked to fixate a central cross and to respond whether they were translating to the left or right of straight-ahead gaze. From the psychometric curves, heading bias (mean) and precision (semi-interquartile) were derived. The mean bias over 2-5 runs was 3.0, 4.0, -2.0, -0.4 deg for the first author and three naive subjects, respectively (positive indicating towards the rotation direction). The mean precision was 2.0, 1.9, 3.1, 1.6 deg, respectively. The ability of observers to make relatively accurate and precise heading judgments, despite the large rotational flow component, refutes the view that extra-flow-field information is necessary for human visual heading estimation at high rotation rates. Our results support models that process combined translational/rotational flow to estimate heading, but should not be construed to suggest that other cues do not play an important role when they are available to the observer.
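    The bias and precision reported above come from fitting a psychometric function to the left/right judgments: bias is the heading at which responses split 50/50, and precision the semi-interquartile range of the fitted curve. A sketch assuming a cumulative-Gaussian fit (a standard choice; the authors' exact fitting procedure is not stated in the abstract):

    ```python
    # Hedged sketch: deriving heading bias (PSE) and precision (semi-interquartile
    # range) from left/right responses via a cumulative-Gaussian psychometric fit.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    def psychometric(heading_deg, pse, sigma):
        """P(respond 'right') as a function of simulated heading."""
        return norm.cdf(heading_deg, loc=pse, scale=sigma)

    # Illustrative data: headings (deg, + = right of gaze) and proportion "right"
    headings = np.array([-8, -4, -2, 0, 2, 4, 8], dtype=float)
    p_right = np.array([0.05, 0.20, 0.35, 0.55, 0.75, 0.90, 1.00])

    (pse, sigma), _ = curve_fit(psychometric, headings, p_right, p0=[0.0, 2.0])
    precision = 0.6745 * sigma  # semi-interquartile range of a Gaussian
    print(f"bias = {pse:.2f} deg, precision = {precision:.2f} deg")
    ```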

  14. Motion parallax in immersive cylindrical display systems

    NASA Astrophysics Data System (ADS)

    Filliard, N.; Reymond, G.; Kemeny, A.; Berthoz, A.

    2012-03-01

    Motion parallax is a crucial visual cue, produced by translations of the observer, for the perception of depth and self-motion. Therefore, tracking the observer's viewpoint has become inevitable in immersive virtual reality (VR) systems (cylindrical screens, CAVE, head-mounted displays) used, e.g., in the automotive industry (style reviews, architecture design, ergonomics studies) or in scientific studies of visual perception. The perception of a stable and rigid world requires that this visual cue be coherent with other extra-retinal (e.g., vestibular, kinesthetic) cues signaling ego-motion. Although world stability is never questioned in the real world, rendering a head-coupled viewpoint in VR can lead to an illusory perception of unstable environments, unless a non-unity scale factor is applied to recorded head movements. Besides, cylindrical screens are usually used with static observers, due to image distortions when rendering images for viewpoints different from a sweet spot. We developed a technique to compensate for these non-linear visual distortions in real time, in an industrial VR setup based on a cylindrical-screen projection system. Additionally, to evaluate the amount of discrepancy between visual and extra-retinal cues tolerated without perceptual distortions, a "motion parallax gain" between the velocity of the observer's head and that of the virtual camera was introduced in this system. The influence of this artificial gain was measured on the gait stability of free-standing participants. Results indicate that gains below unity significantly alter postural control. Conversely, the influence of higher gains remains limited, suggesting a certain tolerance of observers to these conditions. Parallax gain amplification is therefore proposed as a possible solution to provide a wider exploration of space to users of immersive virtual reality systems.
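    The "motion parallax gain" manipulation amounts to scaling the virtual camera's movement by a factor applied to the tracked head movement. A minimal sketch of that update step, with hypothetical names (a real pipeline adds prediction, filtering, and the cylindrical-screen distortion correction described in the paper):

    ```python
    # Hedged sketch: applying a motion-parallax gain to head-tracked rendering.
    # Names and structure are hypothetical, not the authors' implementation.
    import numpy as np

    def camera_position(head_pos: np.ndarray, sweet_spot: np.ndarray,
                        gain: float) -> np.ndarray:
        """Virtual camera = sweet spot + gain * head displacement from it.
        gain = 1.0 reproduces natural parallax; gain < 1 attenuates it."""
        return sweet_spot + gain * (head_pos - sweet_spot)

    sweet_spot = np.array([0.0, 0.0, 1.7])  # nominal viewpoint (m)
    head = np.array([0.15, 0.0, 1.7])       # tracked head, 15 cm to the right
    for g in (0.5, 1.0, 1.5):
        print(g, camera_position(head, sweet_spot, g))
    ```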

  15. Perception of self-tilt in a true and illusory vertical plane

    NASA Technical Reports Server (NTRS)

    Groen, Eric L.; Jenkin, Heather L.; Howard, Ian P.; Oman, C. M. (Principal Investigator)

    2002-01-01

    A tilted furnished room can induce strong visual reorientation illusions in stationary subjects. Supine subjects may perceive themselves upright when the room is tilted 90 degrees so that the visual polarity axis is kept aligned with the subject. This 'upright illusion' was used to induce roll tilt in a truly horizontal, but perceptually vertical, plane. A semistatic tilt profile was applied, in which the tilt angle gradually changed from 0 degrees to 90 degrees, and vice versa. This method produced larger illusory self-tilt than usually found with static tilt of a visual scene. Ten subjects indicated self-tilt by setting a tactile rod to perceived vertical. Six of them experienced the upright illusion and indicated illusory self-tilt with an average gain of about 0.5. This value is smaller than with true self-tilt (0.8), but comparable to the gain of visually induced self-tilt in erect subjects. Apparently, the contribution of nonvisual cues to gravity was independent of the subject's orientation to gravity itself. It therefore seems that the gain of visually induced self-tilt is smaller because of lacking, rather than conflicting, nonvisual cues. A vector analysis is used to discuss the results in terms of relative sensory weightings.

  16. Differences in Visual-Spatial Input May Underlie Different Compression Properties of Firing Fields for Grid Cell Modules in Medial Entorhinal Cortex

    PubMed Central

    Raudies, Florian; Hasselmo, Michael E.

    2015-01-01

    Firing fields of grid cells in medial entorhinal cortex show compression or expansion after manipulations of the location of environmental barriers. This compression or expansion could be selective for individual grid cell modules with particular properties of spatial scaling. We present a model for differences in the response of modules to barrier location that arise from different mechanisms for the influence of visual features on the computation of location that drives grid cell firing patterns. These differences could arise from differences in the position of visual features within the visual field. When location was computed from the movement of visual features on the ground plane (optic flow) in the ventral visual field, this resulted in grid cell spatial firing that was not sensitive to barrier location in modules modeled with small spacing between grid cell firing fields. In contrast, when location was computed from static visual features on walls of barriers, i.e. in the more dorsal visual field, this resulted in grid cell spatial firing that compressed or expanded based on the barrier locations in modules modeled with large spacing between grid cell firing fields. This indicates that different grid cell modules might have differential properties for computing location based on visual cues, or the spatial radius of sensitivity to visual cues might differ between modules. PMID:26584432

  17. Roll tracking effects of G-vector tilt and various types of motion washout

    NASA Technical Reports Server (NTRS)

    Jex, H. R.; Magdaleno, R. E.; Junker, A. M.

    1978-01-01

    In a dogfight scenario, the task was to follow the target's roll angle while suppressing gust disturbances. All subjects adopted the same behavioral strategies in following the target while suppressing the gusts, and the MFP-fitted math-model response was generally within one data symbol width. The results include the following: (1) Comparisons of full roll motion (both with and without the spurious gravity tilt cue) with the static case show that motion cues help suppress disturbances with little net effect on visual performance; tilt cues were clearly used by the pilots but gave only a small improvement in tracking errors. (2) The optimum washout, in terms of performance close to the real world, similar behavioral parameters, significant motion attenuation (60 percent), and acceptable motion fidelity, was the combined attenuation and first-order washout. (3) Various trends in parameters across the motion conditions were apparent; these are discussed with respect to a comprehensive model for predicting adaptation to various roll motion cues.
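
    The "combined attenuation and first-order washout" named in result (2) can be sketched as a fixed attenuation followed by a first-order high-pass filter that returns the simulator platform to neutral during sustained motion. A minimal discrete-time sketch (Python); the gain and time-constant values are illustrative assumptions, not the study's fitted parameters.

      import numpy as np

      def washout(x, dt, gain=0.4, tau=1.0):
          """Attenuate the commanded motion by 'gain', then high-pass it
          with a first-order washout so the platform drifts back to
          neutral (transfer function K*s / (s + 1/tau))."""
          a = tau / (tau + dt)           # discrete-time high-pass coefficient
          y = np.zeros_like(x, dtype=float)
          for i in range(1, len(x)):
              y[i] = a * (y[i - 1] + gain * (x[i] - x[i - 1]))
          return y

      # A sustained roll command washes out toward zero platform tilt.
      t = np.arange(0.0, 5.0, 0.01)
      step = np.where(t > 0.5, 10.0, 0.0)   # 10-degree roll step
      print(washout(step, 0.01)[-1])        # small residual, decaying to zero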

  18. Neural Mechanism for Mirrored Self-face Recognition.

    PubMed

    Sugiura, Motoaki; Miyauchi, Carlos Makoto; Kotozaki, Yuka; Akimoto, Yoritaka; Nozawa, Takayuki; Yomogida, Yukihito; Hanawa, Sugiko; Yamamoto, Yuki; Sakuma, Atsushi; Nakagawa, Seishu; Kawashima, Ryuta

    2015-09-01

    Self-face recognition in the mirror is considered to involve multiple processes that integrate 2 perceptual cues: temporal contingency of the visual feedback on one's action (contingency cue) and matching with self-face representation in long-term memory (figurative cue). The aim of this study was to examine the neural bases of these processes by manipulating 2 perceptual cues using a "virtual mirror" system. This system allowed online dynamic presentations of real-time and delayed self- or other facial actions. Perception-level processes were identified as responses to only a single perceptual cue. The effect of the contingency cue was identified in the cuneus. The regions sensitive to the figurative cue were subdivided by the response to a static self-face, which was identified in the right temporal, parietal, and frontal regions, but not in the bilateral occipitoparietal regions. Semantic- or integration-level processes, including amodal self-representation and belief validation, which allow modality-independent self-recognition and the resolution of potential conflicts between perceptual cues, respectively, were identified in distinct regions in the right frontal and insular cortices. The results are supportive of the multicomponent notion of self-recognition and suggest a critical role for contingency detection in the co-emergence of self-recognition and empathy in infants.

  20. Sensitivity to differences in the motor origin of drawings: from human to robot.

    PubMed

    De Preester, Helena; Tsakiris, Manos

    2014-01-01

    This study explores the idea that an observer is sensitive to differences in the static traces of drawings that are due to differences in motor origin. In particular, our aim was to test whether an observer can discriminate between drawings made by a robot and by a human, both when the drawings contain salient kinematic cues for discrimination and when they contain only more subtle kinematic cues. We hypothesized that participants would be able to correctly attribute a drawing to a human or a robot origin when salient kinematic cues were present. In addition, our study shows that observers are also able to detect the producer behind the drawings in the absence of these salient kinematic cues. The design was such that, in the absence of salient kinematic cues, the drawings were visually very similar, differing only in subtle kinematic respects, so observers had to rely on these subtle kinematic differences in the line trajectories between drawings. However, not only motor origin (human versus robot) but also motor style (natural versus mechanical) plays a role in attributing a drawing to the correct producer, because participants scored lower when the human hand drew in a relatively mechanical way. Overall, this study suggests that observers are sensitive to subtle kinematic differences between visually similar marks in drawings that have a different motor origin. We offer some possible interpretations inspired by the idea of "motor resonance".

  2. The Tölz Temporal Topography Study: mapping the visual field across the life span. Part II: cognitive factors shaping visual field maps.

    PubMed

    Poggel, Dorothe A; Treutwein, Bernhard; Calmanti, Claudia; Strasburger, Hans

    2012-08-01

    Part I described the topography of visual performance over the life span; the decline in performance was only partly explained by deterioration of the optical apparatus. Part II therefore examines the influence of higher visual and cognitive functions. Visual field maps from static perimetry, double-pulse resolution (DPR), reaction times, and contrast thresholds in 95 healthy observers were correlated with measures of visual attention (alertness, divided attention, spatial cueing), visual search, and the size of the attention focus. Correlations with the attentional variables were substantial, particularly for variables of temporal processing. DPR thresholds depended on the size of the attention focus. Statistically removing the cognitive variables from the correlations between topographical variables and participant age substantially reduced those correlations. There is a systematic top-down influence on the aging of visual functions, particularly of temporal variables, that largely explains the performance decline and the change in topography over the life span.

  3. An Experimental Study of the Effect of Out-of-the-Window Cues on Training Novice Pilots on a Flight Simulator

    NASA Technical Reports Server (NTRS)

    Khan, M. Javed; Rossi, Marcia; Heath, Bruce; Ali, Syed F.; Ward, Marcus

    2006-01-01

    The effects of out-of-the-window cues on novice pilots' learning of a straight-in landing approach and a level 360-degree turn on a flight simulator were investigated. The treatments consisted of training with and without visual cues, as well as varying the density of visual cues. The performance of the participants was then evaluated on similar but more challenging tasks. In the landing study, participants who trained with visual cues performed more poorly than those who trained without the cues; however, those who trained with a faded-cues sequence performed slightly better than those who trained without visual cues. In the level-turn study, those who trained with the visual cues performed better than those who trained without them. The study also showed that participants who trained with a lower density of cues performed better than those who trained with a higher density of visual cues.

  4. [Visual cuing effect for haptic angle judgment].

    PubMed

    Era, Ataru; Yokosawa, Kazuhiko

    2009-08-01

    We investigated whether visual cues are useful for judging haptic angles. Participants explored three-dimensional angles with a virtual haptic-feedback device. As visual cues, we used a location cue, which was synchronized with haptic exploration, and a space cue, which specified the haptic space. In Experiment 1, angles were judged more accurately with both cues, but were overestimated with a location cue only. In Experiment 2, the visual cues emphasized depth; overestimation with location cues still occurred, but space cues had no influence. The results showed that (a) when both cues are presented, haptic angles are judged more accurately; (b) location cues convey only motion information, not depth information; and (c) haptic angles tend to be overestimated when both haptic and visual information are available.

  5. Applied Cliplets-based half-dynamic videos as intervention learning materials to attract the attention of adolescents with autism spectrum disorder to improve their perceptions and judgments of the facial expressions and emotions of others.

    PubMed

    Lee, I-Jui; Chen, Chien-Hsu; Lin, Ling-Yi

    2016-01-01

    Autism spectrum disorders (ASD) are characterized by a reduced ability to understand the emotional expressions on other people's faces. Increasing evidence indicates that children with ASD might not recognize or understand crucial nonverbal behaviors, which likely causes them to ignore nonverbal gestures and social cues, such as facial expressions, that usually aid social interaction. In this study, we used software technology to create half-static, half-dynamic video materials to teach adolescents with ASD to become aware of six basic facial expressions observed in real situations. The intervention system presents a dynamic video of a specific element within a static surrounding frame, directing the attention of the six adolescents with ASD to the relevant dynamic facial expressions while irrelevant ones are ignored. Using a multiple-baseline design across participants, we found that the intervention learning system provided a simple yet effective way to draw the attention of adolescents with ASD to nonverbal facial cues; the intervention helped them better understand and judge others' facial emotions. We conclude that limiting the amount of information, with structured and specific close-up visual social cues, helped the participants improve their judgments of the emotional meaning of others' facial expressions.

  6. Modulation of auditory spatial attention by visual emotional cues: differential effects of attentional engagement and disengagement for pleasant and unpleasant cues.

    PubMed

    Harrison, Neil R; Woodhouse, Rob

    2016-05-01

    Previous research has demonstrated that threatening, compared to neutral, pictures can bias attention towards non-emotional auditory targets. Here we investigated which subcomponents of attention contributed to the influence of emotional visual stimuli on auditory spatial attention. Participants indicated the location of an auditory target after brief (250 ms) presentation of a spatially non-predictive peripheral visual cue. Responses to targets were faster at the location of the preceding visual cue than at the opposite location (cue validity effect). The cue validity effect was larger for targets following pleasant and unpleasant cues than for those following neutral cues, for right-sided targets. For unpleasant cues, the crossmodal cue validity effect was driven by delayed attentional disengagement; for pleasant cues, it was driven by enhanced engagement. We conclude that both pleasant and unpleasant visual cues influence the distribution of attention across modalities and that the associated attentional mechanisms depend on the valence of the visual cue.

  7. Should visual speech cues (speechreading) be considered when fitting hearing aids?

    NASA Astrophysics Data System (ADS)

    Grant, Ken

    2002-05-01

    When talker and listener are face-to-face, visual speech cues become an important part of the communication environment, and yet, these cues are seldom considered when designing hearing aids. Models of auditory-visual speech recognition highlight the importance of complementary versus redundant speech information for predicting auditory-visual recognition performance. Thus, for hearing aids to work optimally when visual speech cues are present, it is important to know whether the cues provided by amplification and the cues provided by speechreading complement each other. In this talk, data will be reviewed that show nonmonotonicity between auditory-alone speech recognition and auditory-visual speech recognition, suggesting that efforts designed solely to improve auditory-alone recognition may not always result in improved auditory-visual recognition. Data will also be presented showing that one of the most important speech cues for enhancing auditory-visual speech recognition performance, voicing, is often the cue that benefits least from amplification.

  8. Pigeons and humans use action and pose information to categorize complex human behaviors.

    PubMed

    Qadri, Muhammad A J; Cook, Robert G

    2017-02-01

    The biological mechanisms used to categorize and recognize behaviors are poorly understood in both human and non-human animals. Using animated digital models, we have recently shown that pigeons can categorize different locomotive animal gaits and types of complex human behaviors. In the current experiments, pigeons (go/no-go task) and humans (choice task) both learned to conditionally categorize two categories of human behaviors that did not repeat and were comprised of the coordinated motions of multiple limbs. These "martial arts" and "Indian dance" action sequences were depicted by a digital human model. Depending upon whether the model was in motion or not, each species was required to engage in different and opposing responses to the two behavioral categories. Both species learned to conditionally and correctly act on this dynamic and static behavioral information, indicating that both species use a combination of static pose cues that are available from stimulus onset in addition to less rapidly available action information in order to successfully discriminate between the behaviors. Human participants additionally demonstrated a bias towards the dynamic information in the display when re-learning the task. Theories that rely on generalized, non-specific visual mechanisms involving channels for motion and static cues offer a parsimonious account of how humans and pigeons recognize and categorize behaviors within and across species.

  9. Hierarchical acquisition of visual specificity in spatial contextual cueing.

    PubMed

    Lie, Kin-Pou

    2015-01-01

    Spatial contextual cueing refers to visual search performance's being improved when invariant associations between target locations and distractor spatial configurations are learned incidentally. Using the instance theory of automatization and the reverse hierarchy theory of visual perceptual learning, this study explores the acquisition of visual specificity in spatial contextual cueing. Two experiments in which detailed visual features were irrelevant for distinguishing between spatial contexts found that spatial contextual cueing was visually generic in difficult trials when the trials were not preceded by easy trials (Experiment 1) but that spatial contextual cueing progressed to visual specificity when difficult trials were preceded by easy trials (Experiment 2). These findings support reverse hierarchy theory, which predicts that even when detailed visual features are irrelevant for distinguishing between spatial contexts, spatial contextual cueing can progress to visual specificity if the stimuli remain constant, the task is difficult, and difficult trials are preceded by easy trials. However, these findings are inconsistent with instance theory, which predicts that when detailed visual features are irrelevant for distinguishing between spatial contexts, spatial contextual cueing will not progress to visual specificity. This study concludes that the acquisition of visual specificity in spatial contextual cueing is more plausibly hierarchical, rather than instance-based.

  10. Speech identification in noise: Contribution of temporal, spectral, and visual speech cues.

    PubMed

    Kim, Jeesun; Davis, Chris; Groot, Christopher

    2009-12-01

    This study investigated the degree to which two types of reduced auditory signals (cochlear implant simulations) and visual speech cues combined for speech identification. The auditory speech stimuli were filtered to have only amplitude envelope cues or both amplitude envelope and spectral cues and were presented with/without visual speech. In Experiment 1, IEEE sentences were presented in quiet and noise. For in-quiet presentation, speech identification was enhanced by the addition of both spectral and visual speech cues. Due to a ceiling effect, the degree to which these effects combined could not be determined. In noise, these facilitation effects were more marked and were additive. Experiment 2 examined consonant and vowel identification in the context of CVC or VCV syllables presented in noise. For consonants, both spectral and visual speech cues facilitated identification and these effects were additive. For vowels, the effect of combined cues was underadditive, with the effect of spectral cues reduced when presented with visual speech cues. Analysis indicated that without visual speech, spectral cues facilitated the transmission of place information and vowel height, whereas with visual speech, they facilitated lip rounding, with little impact on the transmission of place information.

  11. Tiger salamanders' (Ambystoma tigrinum) response learning and usage of visual cues.

    PubMed

    Kundey, Shannon M A; Millar, Roberto; McPherson, Justin; Gonzalez, Maya; Fitz, Aleyna; Allen, Chadbourne

    2016-05-01

    We explored tiger salamanders' (Ambystoma tigrinum) learning to execute a response within a maze as proximal visual cue conditions varied. In Experiment 1, salamanders learned to turn consistently in a T-maze for reinforcement before the maze was rotated. All learned the initial task and executed the trained turn during test, suggesting that they learned to demonstrate the reinforced response during training and continued to perform it during test. In a second experiment utilizing a similar procedure, two visual cues were placed consistently at the maze junction. Salamanders were reinforced for turning towards one cue. Cue placement was reversed during test. All learned the initial task, but executed the trained turn rather than turning towards the visual cue during test, evidencing response learning. In Experiment 3, we investigated whether a compound visual cue could control salamanders' behaviour when it was the only cue predictive of reinforcement in a cross-maze by varying start position and cue placement. All learned to turn in the direction indicated by the compound visual cue, indicating that visual cues can come to control their behaviour. Following training, testing revealed that salamanders attended to stimuli foreground over background features. Overall, these results suggest that salamanders learn to execute responses over learning to use visual cues but can use visual cues if required. Our success with this paradigm offers the potential in future studies to explore salamanders' cognition further, as well as to shed light on how features of the tiger salamanders' life history (e.g. hibernation and metamorphosis) impact cognition.

  12. Effects of Presentation Type and Visual Control in Numerosity Discrimination: Implications for Number Processing?

    PubMed Central

    Smets, Karolien; Moors, Pieter; Reynvoet, Bert

    2016-01-01

    Performance in a non-symbolic comparison task in which participants are asked to indicate the larger numerosity of two dot arrays, is assumed to be supported by the Approximate Number System (ANS). This system allows participants to judge numerosity independently from other visual cues. Supporting this idea, previous studies indicated that numerosity can be processed when visual cues are controlled for. Consequently, distinct types of visual cue control are assumed to be interchangeable. However, a previous study showed that the type of visual cue control affected performance using a simultaneous presentation of the stimuli in numerosity comparison. In the current study, we explored whether the influence of the type of visual cue control on performance disappeared when sequentially presenting each stimulus in numerosity comparison. While the influence of the applied type of visual cue control was significantly more evident in the simultaneous condition, sequentially presenting the stimuli did not completely exclude the influence of distinct types of visual cue control. Altogether, these results indicate that the implicit assumption that it is possible to compare performances across studies with a differential visual cue control is unwarranted and that the influence of the type of visual cue control partly depends on the presentation format of the stimuli. PMID:26869967

  13. Visual Cues, Verbal Cues and Child Development

    ERIC Educational Resources Information Center

    Valentini, Nadia

    2004-01-01

    In this article, the author discusses two strategies--visual cues (modeling) and verbal cues (short, accurate phrases) which are related to teaching motor skills in maximizing learning in physical education classes. Both visual and verbal cues are strong influences in facilitating and promoting day-to-day learning. Both strategies reinforce…

  14. Estimating the relative weights of visual and auditory tau versus heuristic-based cues for time-to-contact judgments in realistic, familiar scenes by older and younger adults.

    PubMed

    Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel

    2017-04-01

    Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we evaluated TTC estimates by using a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
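
    The τ variable described above is the ratio of an object's instantaneous optical size to that size's rate of change; for a constant-velocity approach this ratio equals the true time to contact. A minimal numerical check (Python) with made-up scene values:

      # Made-up scenario: a 2 m wide vehicle, 50 m away, closing at 10 m/s
      size, distance, speed = 2.0, 50.0, 10.0

      theta = size / distance                   # optical size (small-angle, rad)
      dtheta_dt = size * speed / distance ** 2  # rate of change of optical size

      tau = theta / dtheta_dt                   # visual tau
      print(tau, distance / speed)              # 5.0 5.0 -> tau equals true TTC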

  15. Saccade frequency response to visual cues during gait in Parkinson's disease: the selective role of attention.

    PubMed

    Stuart, Samuel; Lord, Sue; Galna, Brook; Rochester, Lynn

    2018-04-01

    Gait impairment is a core feature of Parkinson's disease (PD), with implications for falls risk. Visual cues improve gait in PD, but the underlying mechanisms are unclear. Evidence suggests that attention and vision play an important role; however, the relative contribution of each is unclear. Measurement of visual exploration (specifically, saccade frequency) during gait allows for real-time measurement of attention and vision. Understanding how visual cues influence visual exploration may allow inferences about the mechanisms underlying the cue response, which could help in developing effective therapeutics. This study aimed to examine saccade frequency during gait in response to a visual cue in PD and older adults, and to investigate the roles of attention and vision in the visual cue response in PD. A mobile eye-tracker measured saccade frequency during gait in 55 people with PD and 32 age-matched controls. Participants walked in a straight line with and without a visual cue (50 cm transverse lines) presented under single-task and dual-task (concurrent digit-span recall) conditions. Saccade frequency was reduced when walking in PD compared to controls; however, visual cues ameliorated this saccadic deficit, significantly increasing saccade frequency in both PD and controls under both single-task and dual-task conditions. Attention, rather than visual function, was central to saccade frequency and the gait response to visual cues in PD. In conclusion, this study highlights the impact of visual cues on visual exploration when walking and the important role of attention in PD. Understanding these complex features will help inform intervention development.

  16. Circadian timed episodic-like memory - a bee knows what to do when, and also where.

    PubMed

    Pahl, Mario; Zhu, Hong; Pix, Waltraud; Tautz, Juergen; Zhang, Shaowu

    2007-10-01

    This study investigates how the colour, shape, and location of patterns can be memorized within a time frame. Bees were trained to visit two Y-mazes, one of which presented yellow vertical (rewarded) versus horizontal (non-rewarded) gratings at one site in the morning, while the other presented blue horizontal (rewarded) versus vertical (non-rewarded) gratings at another site in the afternoon. The bees performed well in the learning tests and in various transfer tests, in which (i) all contextual cues from the learning test were present; (ii) the colour cues of the visual patterns were removed, but the location cue, the orientation of the visual patterns, and the temporal cue remained; (iii) the location cue was removed, but the other contextual cues, i.e. the colour and orientation of the visual patterns and the temporal cue, remained; (iv) the location cue and the orientation cue of the visual patterns were removed, but the colour cue and temporal cue remained; (v) the location cue and the colour cue of the visual patterns were removed, but the orientation cue and the temporal cue remained. The results reveal that the honeybee can recall the memory of the correct visual patterns by using spatial and/or temporal information. The relative importance of different contextual cues is compared and discussed. The bees' ability to integrate elements of circadian time, place, and visual stimuli is akin to episodic-like memory; we have therefore named this kind of memory circadian timed episodic-like memory.

  17. Motor resonance may originate from sensorimotor experience.

    PubMed

    Petroni, Agustín; Baguear, Federico; Della-Maggiore, Valeria

    2010-10-01

    In humans, the motor system can be activated by passive observation of actions or static pictures with implied action. The origin of this facilitation is of major interest to the field of motor control. Recently it has been shown that sensorimotor learning can reconfigure the motor system during action observation. Here we tested directly the hypothesis that motor resonance arises from sensorimotor contingencies by measuring corticospinal excitability in response to abstract non-action cues previously associated with an action. Motor evoked potentials were measured from the first dorsal interosseus (FDI) while human subjects observed colored stimuli that had been visually or motorically associated with a finger movement (index or little finger abduction). Corticospinal excitability was higher during the observation of a colored cue that preceded a movement involving the recorded muscle than during the observation of a different colored cue that preceded a movement involving a different muscle. Crucially this facilitation was only observed when the cue was associated with an executed movement but not when it was associated with an observed movement. Our findings provide solid evidence in support of the sensorimotor hypothesis of action observation and further suggest that the physical nature of the observed stimulus mediating this phenomenon may in fact be irrelevant.

  18. Improving visual spatial working memory in younger and older adults: effects of cross-modal cues.

    PubMed

    Curtis, Ashley F; Turner, Gary R; Park, Norman W; Murtha, Susan J E

    2017-11-06

    Spatially informative auditory and vibrotactile (cross-modal) cues can facilitate attention but little is known about how similar cues influence visual spatial working memory (WM) across the adult lifespan. We investigated the effects of cues (spatially informative or alerting pre-cues vs. no cues), cue modality (auditory vs. vibrotactile vs. visual), memory array size (four vs. six items), and maintenance delay (900 vs. 1800 ms) on visual spatial location WM recognition accuracy in younger adults (YA) and older adults (OA). We observed a significant interaction between spatially informative pre-cue type, array size, and delay. OA and YA benefitted equally from spatially informative pre-cues, suggesting that attentional orienting prior to WM encoding, regardless of cue modality, is preserved with age.  Contrary to predictions, alerting pre-cues generally impaired performance in both age groups, suggesting that maintaining a vigilant state of arousal by facilitating the alerting attention system does not help visual spatial location WM.

  19. Static mechanical strain induces capillary endothelial cell cycle re-entry and sprouting.

    PubMed

    Zeiger, A S; Liu, F D; Durham, J T; Jagielska, A; Mahmoodian, R; Van Vliet, K J; Herman, I M

    2016-08-16

    Vascular endothelial cells are known to respond to a range of biochemical and time-varying mechanical cues that can promote blood vessel sprouting termed angiogenesis. It is less understood how these cells respond to sustained (i.e., static) mechanical cues such as the deformation generated by other contractile vascular cells, cues which can change with age and disease state. Here we demonstrate that static tensile strain of 10%, consistent with that exerted by contractile microvascular pericytes, can directly and rapidly induce cell cycle re-entry in growth-arrested microvascular endothelial cell monolayers. S-phase entry in response to this strain correlates with absence of nuclear p27, a cyclin-dependent kinase inhibitor. Furthermore, this modest strain promotes sprouting of endothelial cells, suggesting a novel mechanical 'angiogenic switch'. These findings suggest that static tensile strain can directly stimulate pathological angiogenesis, implying that pericyte absence or death is not necessarily required of endothelial cell re-activation.

  20. Visual flow scene effects on the somatogravic illusion in non-pilots.

    PubMed

    Eriksson, Lars; von Hofsten, Claes; Tribukait, Arne; Eiken, Ola; Andersson, Peter; Hedström, Johan

    2008-09-01

    The somatogravic illusion (SGI) is easily broken when the pilot looks out the aircraft window during daylight flight, but it has proven difficult to break or even reduce the SGI in non-pilots in simulators using synthetic visual scenes. Could visual-flow scenes that accommodate compensatory head movement reduce the SGI in naive subjects? We investigated the effects of visual cues on the SGI induced by a human centrifuge. The subject was equipped with a head-tracked, head-mounted display (HMD) and was seated in a fixed gondola facing the center of rotation. The angular velocity of the centrifuge increased from near zero until a 0.57-G centripetal acceleration was attained, resulting in a tilt of the gravitoinertial force vector, corresponding to a pitch-up of 30 degrees. The subject indicated perceived horizontal continuously by means of a manual adjustable-plate system. We performed two experiments with within-subjects designs. In Experiment 1, the subjects (N = 13) viewed a darkened HMD and a presentation of simple visual flow beneath a horizon. In Experiment 2, the subjects (N = 12) viewed a darkened HMD, a scene including symbology superimposed on simple visual flow and horizon, and this scene without visual flow (static). In Experiment 1, visual flow reduced the SGI from 12.4 +/- 1.4 degrees (mean +/- SE) to 8.7 +/- 1.5 degrees. In Experiment 2, the SGI was smaller in the visual flow condition (9.3 +/- 1.8 degrees) than with the static scene (13.3 +/- 1.7 degrees) and without HMD presentation (14.5 +/- 2.3 degrees), respectively. It is possible to reduce the SGI in non-pilots by means of a synthetic horizon and simple visual flow conveyed by a head-tracked HMD. This may reflect the power of a more intuitive display for reducing the SGI.
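
    As a quick check on the geometry reported above, the tilt of the gravitoinertial force vector follows directly from the 0.57-G centripetal acceleration under normal 1-G gravity:

      \theta = \arctan\left(\frac{a_c}{g}\right) = \arctan(0.57) \approx 29.7^\circ \approx 30^\circ

    which matches the stated pitch-up equivalent of 30 degrees.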

  1. Multimodal communication, mismatched messages and the effects of turbidity on the antipredator behavior of the Barton Springs salamander, Eurycea sosorum.

    PubMed

    Zabierek, Kristina C; Gabor, Caitlin R

    2016-09-01

    Prey may use multiple sensory channels to detect predators, whose cues may differ in altered sensory environments, such as turbid conditions. Depending on the environment, prey may use cues in an additive/complementary manner or in a compensatory manner. First, to determine whether the purely aquatic Barton Springs salamander, Eurycea sosorum, show an antipredator response to visual cues, we examined their activity when exposed to either visual cues of a predatory fish (Lepomis cyanellus) or a non-predatory fish (Etheostoma lepidum). Salamanders decreased activity in response to predator visual cues only. Then, we examined the antipredator response of these salamanders to all matched and mismatched combinations of chemical and visual cues of the same predatory and non-predatory fish in clear and low turbidity conditions. Salamanders decreased activity in response to predator chemical cues matched with predator visual cues or mismatched with non-predator visual cues. Salamanders also increased latency to first move to predator chemical cues mismatched with non-predator visual cues. Salamanders decreased activity and increased latency to first move more in clear as opposed to turbid conditions in all treatment combinations. Our results indicate that salamanders under all conditions and treatments preferentially rely on chemical cues to determine antipredator behavior, although visual cues are potentially utilized in conjunction for latency to first move. Our results also have potential conservation implications, as decreased antipredator behavior was seen in turbid conditions. These results reveal complexity of antipredator behavior in response to multiple cues under different environmental conditions, which is especially important when considering endangered species.

  2. The self in conflict: actors and agency in the mediated sequential Simon task.

    PubMed

    Spapé, Michiel M; Ahmed, Imtiaj; Jacucci, Giulio; Ravaja, Niklas

    2015-01-01

    Executive control refers to the ability to withstand interference in order to achieve task goals. The effect of conflict adaptation describes that after experiencing interference, subsequent conflict effects are weaker. However, changes in the source of conflict have been found to disrupt conflict adaptation. Previous studies indicated that this specificity is determined by the degree to which one source causes episodic retrieval of a previous source. A virtual reality version of the Simon task was employed to investigate whether changes in a visual representation of the self would similarly affect conflict adaptation. Participants engaged in a mediated Simon task via 3D "avatar" models that either mirrored the participants' movements, or were presented statically. A retrieval cue was implemented as the identity of the avatar: switching it from a male to a female avatar was expected to disrupt the conflict adaptation effect (CAE). The results show that only in static conditions did the CAE depend on the avatar identity, while in dynamic conditions, changes did not cause disruption. We also explored the effect of conflict and adaptation on the degree of movement made with the task-irrelevant hand and replicated the reaction time pattern. The findings add to earlier studies of source-specific conflict adaptation by showing that a visual representation of the self in action can provide a cue that determines episodic retrieval. Furthermore, the novel paradigm is made openly available to the scientific community and is described in its significance for studies of social cognition, cognitive psychology, and human-computer interaction.

  3. Visual cue-specific craving is diminished in stressed smokers.

    PubMed

    Cochran, Justinn R; Consedine, Nathan S; Lee, John M J; Pandit, Chinmay; Sollers, John J; Kydd, Robert R

    2017-09-01

    Craving among smokers is increased by stress and by exposure to smoking-related visual cues. However, few experimental studies have tested both elicitors concurrently and considered how the exposures may interact to influence craving. The current study examined craving in response to stress and visual-cue exposure, separately and in succession, in order to better understand the relationship between craving elicitation and the elicitor. Thirty-nine smokers (21 males) who had abstained from smoking for 30 minutes were randomized to complete a stress task and a visual-cue task in counterbalanced orders (creating the experimental groups); for the cue task, counterbalanced blocks of neutral, motivational-control, and smoking images were presented. Self-reported craving was assessed after each block of visual stimuli and after the stress task, and after a recovery period following each task. As expected, the stress task and smoking images generated greater craving than neutral or motivational-control images (p < .001). Interactions indicated that craving in those who completed the stress task first differed from craving in those who completed the visual-cue task first (p < .05), such that stress-task craving exceeded craving for all image types (all p's < .05) only when the visual-cue task was completed first. Conversely, craving was stable across image types when the stress task was completed first. The findings indicate that once smokers are stressed, visual cues have little additive effect on craving, and different types of visual cues elicit comparable craving. This may imply that once stressed, smokers will crave cigarettes comparably regardless of whether they are exposed to smoking-image cues.

  4. Getting more from visual working memory: Retro-cues enhance retrieval and protect from visual interference.

    PubMed

    Souza, Alessandra S; Rerko, Laura; Oberauer, Klaus

    2016-06-01

    Visual working memory (VWM) has a limited capacity. This limitation can be mitigated by the use of focused attention: if attention is drawn to the relevant working memory content before test, performance improves (the so-called retro-cue benefit). This study tests 2 explanations of the retro-cue benefit: (a) Focused attention protects memory representations from interference by visual input at test, and (b) focusing attention enhances retrieval. Across 6 experiments using color recognition and color reproduction tasks, we varied the amount of color interference at test, and the delay between a retrieval cue (i.e., the retro-cue) and the memory test. Retro-cue benefits were larger when the memory test introduced interfering visual stimuli, showing that the retro-cue effect is in part because of protection from visual interference. However, when visual interference was held constant, retro-cue benefits were still obtained whenever the retro-cue enabled retrieval of an object from VWM but delayed response selection. Our results show that accessible information in VWM might be lost in the processes of testing memory because of visual interference and incomplete retrieval. This is not an inevitable state of affairs, though: Focused attention can be used to get the most out of VWM.

  5. Experience-Dependency of Reliance on Local Visual and Idiothetic Cues for Spatial Representations Created in the Absence of Distal Information.

    PubMed

    Draht, Fabian; Zhang, Sijie; Rayan, Abdelrahman; Schönfeld, Fabian; Wiskott, Laurenz; Manahan-Vaughan, Denise

    2017-01-01

    Spatial encoding in the hippocampus is based on a range of different input sources. To generate spatial representations, reliable sensory cues from the external environment are integrated with idiothetic cues, derived from self-movement, that enable path integration and directional perception. In this study, we examined to what extent idiothetic cues significantly contribute to spatial representations and navigation: we recorded place cells while rodents navigated towards two visually identical chambers in 180° orientation via two different paths in darkness and in the absence of reliable auditory or olfactory cues. Our goal was to generate a conflict between local visual and direction-specific information, and then to assess which strategy was prioritized in different learning phases. We observed that, in the absence of distal cues, place fields are initially controlled by local visual cues that override idiothetic cues, but that with multiple exposures to the paradigm, spaced at intervals of days, idiothetic cues become increasingly implemented in generating an accurate spatial representation. Taken together, these data support that, in the absence of distal cues, local visual cues are prioritized in the generation of context-specific spatial representations through place cells, whereby idiothetic cues are deemed unreliable. With cumulative exposures to the environments, the animal learns to attend to subtle idiothetic cues to resolve the conflict between visual and direction-specific information.

  7. The effect of contextual sound cues on visual fidelity perception.

    PubMed

    Rojas, David; Cowan, Brent; Kapralos, Bill; Collins, Karen; Dubrowski, Adam

    2014-01-01

    Previous work has shown that sound can affect the perception of visual fidelity. Here we build upon this previous work by examining the effect of contextual sound cues (i.e., sounds that are related to the visuals) on visual fidelity perception. Results suggest that contextual sound cues do influence visual fidelity perception and, more specifically, our perception of visual fidelity increases with contextual sound cues. These results have implications for designers of multimodal virtual worlds and serious games that, with the appropriate use of contextual sounds, can reduce visual rendering requirements without a corresponding decrease in the perception of visual fidelity.

  8. Auditory and visual cueing modulate cycling speed of older adults and persons with Parkinson's disease in a Virtual Cycling (V-Cycle) system.

    PubMed

    Gallagher, Rosemary; Damodaran, Harish; Werner, William G; Powell, Wendy; Deutsch, Judith E

    2016-08-19

    Evidence-based virtual environments (VEs) that incorporate compensatory strategies such as cueing may change motor behavior and increase exercise intensity while also being engaging and motivating. The purpose of this study was to determine whether persons with Parkinson's disease and age-matched healthy adults responded to auditory and visual cueing embedded in a bicycling VE as a method to increase exercise intensity. We tested two groups of participants, persons with Parkinson's disease (PD) (n = 15) and age-matched healthy adults (n = 13), as they cycled on a stationary bicycle while interacting with a VE. Participants cycled under two conditions: auditory cueing (provided by a metronome) and visual cueing (represented as central road markers in the VE). The auditory condition had four trials in which auditory cues or the VE were presented alone or in combination. The visual condition had five trials in which the VE and the visual cue presentation rate were manipulated. Data were analyzed by condition using factorial RM-ANOVAs with planned t-tests corrected for multiple comparisons. There were no differences in pedaling rates between groups for either the auditory or the visual cueing condition. Persons with PD increased their pedaling rate in the auditory (F = 4.78, p = 0.029) and visual cueing (F = 26.48, p < 0.001) conditions. Age-matched healthy adults also increased their pedaling rate in the auditory (F = 24.72, p < 0.001) and visual cueing (F = 40.69, p < 0.001) conditions. Trial-to-trial comparisons in the visual condition in age-matched healthy adults showed a step-wise increase in pedaling rate (p = 0.003 to p < 0.001). In contrast, persons with PD increased their pedaling rate only when explicitly instructed to attend to the visual cues (p < 0.001). An evidence-based cycling VE can modify pedaling rate in persons with PD and in age-matched healthy adults. Persons with PD required attention directed to the visual cues in order to obtain an increase in cycling intensity. The combination of the VE and auditory cues was neither additive nor interfering. These data serve as preliminary evidence that embedding auditory and visual cues in a VE to alter cycling speed is a method of increasing exercise intensity that may promote fitness.

  9. Orientation selectivity sharpens motion detection in Drosophila

    PubMed Central

    Fisher, Yvette E.; Silies, Marion; Clandinin, Thomas R.

    2015-01-01

    Detecting the orientation and movement of edges in a scene is critical to visually guided behaviors of many animals. What are the circuit algorithms that allow the brain to extract such behaviorally vital visual cues? Using in vivo two-photon calcium imaging in Drosophila, we describe direction selective signals in the dendrites of T4 and T5 neurons, detectors of local motion. We demonstrate that this circuit performs selective amplification of local light inputs, an observation that constrains motion detection models and confirms a core prediction of the Hassenstein-Reichardt Correlator (HRC). These neurons are also orientation selective, responding strongly to static features that are orthogonal to their preferred axis of motion, a tuning property not predicted by the HRC. This coincident extraction of orientation and direction sharpens directional tuning through surround inhibition and reveals a striking parallel between visual processing in flies and vertebrate cortex, suggesting a universal strategy for motion processing. PMID:26456048
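
    The Hassenstein-Reichardt Correlator mentioned above can be sketched in a few lines: each arm delays (low-pass filters) the signal from one photoreceptor and multiplies it with the undelayed signal from its neighbor, and the two mirror-symmetric arms are subtracted, yielding a signed, direction-selective output. A minimal illustration (Python); the delay time constant and stimulus are assumed values.

      import numpy as np

      def lowpass(x, dt, tau):
          """First-order low-pass filter, used as the HRC delay stage."""
          y = np.zeros_like(x)
          a = dt / (tau + dt)
          for i in range(1, len(x)):
              y[i] = y[i - 1] + a * (x[i] - y[i - 1])
          return y

      def hrc(x1, x2, dt, tau=0.05):
          """Opponent Hassenstein-Reichardt correlator for two inputs."""
          return lowpass(x1, dt, tau) * x2 - lowpass(x2, dt, tau) * x1

      # A sinusoidal grating drifting past two nearby points: x2 lags x1.
      dt = 0.001
      t = np.arange(0.0, 2.0, dt)
      x1 = np.sin(2 * np.pi * 2 * t)
      x2 = np.sin(2 * np.pi * 2 * t - 0.5)   # phase lag = motion one way

      print(np.mean(hrc(x1, x2, dt)))        # > 0: preferred direction
      print(np.mean(hrc(x2, x1, dt)))        # < 0: opposite direction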

  10. Scientific visualization of volumetric radar cross section data

    NASA Astrophysics Data System (ADS)

    Wojszynski, Thomas G.

    1992-12-01

    For aircraft design and mission planning, designers, threat analysts, mission planners, and pilots require a Radar Cross Section (RCS) central tendency with its associated distribution about a specified aspect and its relation to a known threat. Historically, RCS data sets have been statistically analyzed to evaluate a profile. However, Scientific Visualization, the application of computer graphics techniques to produce pictures of complex physical phenomena, appears to be a more promising tool for interpreting these data. This work describes data reduction techniques and a surface rendering algorithm to construct and display a complex polyhedron from adjacent contours of RCS data. Data reduction is accomplished by sectorizing the data and characterizing its statistical properties. Color, lighting, and orientation cues are added to complete the visualization system. The tool may be useful for the synthesis, design, and analysis of complex, low-observable air vehicles.
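
    A sketch of the kind of data reduction described above: the azimuth sweep is divided into sectors, and each sector's RCS samples are summarized by their central tendency and spread. The sector width, array names, and synthetic data are illustrative assumptions, not taken from the paper.

      import numpy as np

      def sector_stats(azimuth_deg, rcs_dbsm, sector_width=10.0):
          """Mean and standard deviation of RCS per azimuth sector."""
          az = np.asarray(azimuth_deg) % 360.0
          rcs = np.asarray(rcs_dbsm, dtype=float)
          edges = np.arange(0.0, 360.0 + sector_width, sector_width)
          idx = np.digitize(az, edges) - 1
          stats = []
          for k in range(len(edges) - 1):
              vals = rcs[idx == k]
              if vals.size:
                  stats.append((edges[k], vals.mean(), vals.std()))
          return stats

      # Example with synthetic data: 3600 samples over a full sweep
      rng = np.random.default_rng(0)
      az = np.linspace(0.0, 360.0, 3600, endpoint=False)
      rcs = -10.0 + 5.0 * np.sin(np.radians(4 * az)) + rng.normal(0, 1, az.size)
      print(sector_stats(az, rcs)[:3])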

  11. Mimicking Atmospheric Flow Conditions to Examine Mosquito Orientation Behavior

    NASA Astrophysics Data System (ADS)

    Huang, Yi-Chun; Vickers, Neil; Hultmark, Marcus

    2017-11-01

    Host-seeking female mosquitoes utilize a variety of sensory cues to locate potential hosts. In addition to visual cues, other signals include CO2 , volatile skin emanations, humidity, and thermal cues, each of which can be considered as passive scalars in the environment, primarily distributed by local flow conditions. The behavior of host-seeking female mosquito vectors can be more thoroughly understood by simulating the natural features of the environment through which they navigate, namely the atmospheric boundary layer. Thus, an exploration and understanding of the dynamics of a scalar plume will not only establish the effect of fluid environment on scalar coherence and distribution, but also provide a bioassay platform for approaches directed at disrupting or preventing the cycle of mosquito-vectored disease transmission. In order to bridge between laboratory findings and the natural, ecologically relevant setting, a unique active flow modulation system consisting of a grid of 60 independently operated paddles was developed. Unlike static grids that generate turbulence within a predefined range of scales, an active grid imposes variable and controllable turbulent structures onto the moving air by synchronized rotation of the paddles at specified frequencies.

  12. Talking heads or talking eyes? Effects of head orientation and sudden onset gaze cues on attention capture.

    PubMed

    van der Wel, Robrecht P; Welsh, Timothy; Böckler, Anne

    2018-01-01

    The direction of gaze towards or away from an observer has immediate effects on attentional processing in the observer. Previous research indicates that faces with direct gaze are processed more efficiently than faces with averted gaze. We recently reported additional processing advantages for faces that suddenly adopt direct gaze (abruptly shifting from averted to direct gaze) relative to static direct gaze (always direct), sudden averted gaze (abruptly shifting from direct to averted gaze), and static averted gaze (always averted). Because the changes in gaze orientation in that previous study co-occurred with changes in head orientation, it was not clear whether the effect is contingent on face or eye processing, or whether it requires both the eyes and the face to provide consistent information. The present study delineates the impact of head orientation, sudden-onset motion cues, and gaze cues. Participants completed a target-detection task in which head position remained in a static averted or direct orientation while sudden-onset motion and eye-gaze cues were manipulated within each trial. The results indicate a sudden direct-gaze advantage that resulted from the additive roles of motion and gaze cues. Interestingly, the orientation of the face towards or away from the observer did not influence the sudden direct-gaze effect, suggesting that eye-gaze cues, not face-orientation cues, are critical for the effect.

  13. Effects of spatial cues on color-change detection in humans

    PubMed Central

    Herman, James P.; Bogadhi, Amarender R.; Krauzlis, Richard J.

    2015-01-01

    Studies of covert spatial attention have largely used motion, orientation, and contrast stimuli as these features are fundamental components of vision. The feature dimension of color is also fundamental to visual perception, particularly for catarrhine primates, and yet very little is known about the effects of spatial attention on color perception. Here we present results using novel dynamic color stimuli in both discrimination and color-change detection tasks. We find that our stimuli yield comparable discrimination thresholds to those obtained with static stimuli. Further, we find that an informative spatial cue improves performance and speeds response time in a color-change detection task compared with an uncued condition, similar to what has been demonstrated for motion, orientation, and contrast stimuli. Our results demonstrate the use of dynamic color stimuli for an established psychophysical task and show that color stimuli are well suited to the study of spatial attention. PMID:26047359

  14. Human body segmentation via data-driven graph cut.

    PubMed

    Li, Shifeng; Lu, Huchuan; Shao, Xingqing

    2014-11-01

    Human body segmentation is a challenging and important problem in computer vision. Existing methods usually entail a time-consuming training phase for prior-knowledge learning, with complex shape matching for body segmentation. In this paper, we propose a data-driven method that integrates top-down body pose information and bottom-up low-level visual cues for segmenting humans in static images within the graph cut framework. The key idea of our approach is first to exploit human kinematics to search for body part candidates via dynamic programming, yielding high-level evidence. Then, body-part classifiers provide bottom-up cues about the distribution of the human body as low-level evidence. All the evidence collected from the top-down and bottom-up procedures is integrated in a graph cut framework for human body segmentation. Qualitative and quantitative experimental results demonstrate the merits of the proposed method in segmenting human bodies with arbitrary poses from cluttered backgrounds.
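
    The graph-cut step can be pictured with a toy binary labeling (a hedged sketch: the unary costs below stand in for the paper's top-down pose evidence and bottom-up part-classifier cues, and networkx's generic min-cut replaces whatever solver the authors used):

        import numpy as np
        import networkx as nx

        # Toy graph-cut segmentation on a tiny grid. Labels: 1 = body, 0 = background.
        rng = np.random.default_rng(1)
        H, W = 6, 8
        cost_fg = rng.uniform(0.0, 1.0, (H, W))   # stand-in for pose/part evidence
        cost_bg = 1.0 - cost_fg                   # stand-in for background evidence
        LAMBDA = 0.3                              # smoothness weight, assumed

        G = nx.DiGraph()
        for y in range(H):
            for x in range(W):
                p = (y, x)
                G.add_edge('s', p, capacity=float(cost_bg[y, x]))  # paid if p -> background
                G.add_edge(p, 't', capacity=float(cost_fg[y, x]))  # paid if p -> body
                for q in ((y + 1, x), (y, x + 1)):                 # 4-neighbour smoothness
                    if q[0] < H and q[1] < W:
                        G.add_edge(p, q, capacity=LAMBDA)
                        G.add_edge(q, p, capacity=LAMBDA)

        cut_value, (source_side, _) = nx.minimum_cut(G, 's', 't')
        labels = np.zeros((H, W), dtype=int)
        for node in source_side - {'s'}:
            labels[node] = 1    # pixels on the source side of the cut take the body label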

  15. Directed Forgetting and Directed Remembering in Visual Working Memory

    PubMed Central

    Williams, Melonie; Woodman, Geoffrey F.

    2013-01-01

    A defining characteristic of visual working memory is its limited capacity. This means that it is crucial to maintain only the most relevant information in visual working memory. However, empirical research is mixed as to whether it is possible to selectively maintain a subset of the information previously encoded into visual working memory. Here we examined the ability of subjects to use cues to either forget or remember a subset of the information already stored in visual working memory. In Experiment 1, participants were cued to either forget or remember one of two groups of colored squares during a change-detection task. We found that both types of cues aided performance in the visual working memory task, but that observers benefited more from a cue to remember than a cue to forget a subset of the objects. In Experiment 2, we show that the previous findings, which indicated that directed-forgetting cues are ineffective, were likely due to the presence of invalid cues that appear to cause observers to disregard such cues as unreliable. In Experiment 3, we recorded event-related potentials (ERPs) and show that an electrophysiological index of focused maintenance is elicited by cues that indicate which subset of information in visual working memory needs to be remembered, ruling out alternative explanations of the behavioral effects of retention-interval cues. The present findings demonstrate that observers can focus maintenance mechanisms on specific objects in visual working memory based on cues indicating future task relevance. PMID:22409182

  16. Interplay of Gravicentric, Egocentric, and Visual Cues About the Vertical in the Control of Arm Movement Direction.

    PubMed

    Bock, Otmar; Bury, Nils

    2018-03-01

    Our perception of the vertical corresponds to the weighted sum of gravicentric, egocentric, and visual cues. Here we evaluate the interplay of those cues not for the perceived but rather for the motor vertical. Participants were asked to flip an omnidirectional switch down while their egocentric vertical was dissociated from their visual-gravicentric vertical. Responses were directed mid-between the two verticals; specifically, the data suggest that the relative weight of congruent visual-gravicentric cues averages 0.62, and correspondingly, the relative weight of egocentric cues averages 0.38. We conclude that the interplay of visual-gravicentric cues with egocentric cues is similar for the motor and for the perceived vertical. Unexpectedly, we observed a consistent dependence of the motor vertical on hand position, possibly mediated by hand orientation or by spatial selective attention.
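
    Written out, and assuming the linear cue weighting the abstract describes, the reported weights amount to

        \hat{\theta}_{\text{motor}} = w\,\theta_{\text{vis-grav}} + (1 - w)\,\theta_{\text{ego}},
        \qquad w \approx 0.62,\quad 1 - w \approx 0.38

    so a response lands between the two dissociated verticals, pulled somewhat closer to the congruent visual-gravicentric direction.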

  17. Individual Sensitivity to Spectral and Temporal Cues in Listeners With Hearing Impairment

    PubMed Central

    Wright, Richard A.; Blackburn, Michael C.; Tatman, Rachael; Gallun, Frederick J.

    2015-01-01

    Purpose The present study was designed to evaluate use of spectral and temporal cues under conditions in which both types of cues were available. Method Participants included adults with normal hearing and hearing loss. We focused on 3 categories of speech cues: static spectral (spectral shape), dynamic spectral (formant change), and temporal (amplitude envelope). Spectral and/or temporal dimensions of synthetic speech were systematically manipulated along a continuum, and recognition was measured using the manipulated stimuli. Level was controlled to ensure cue audibility. Discriminant function analysis was used to determine to what degree spectral and temporal information contributed to the identification of each stimulus. Results Listeners with normal hearing were influenced to a greater extent by spectral cues for all stimuli. Listeners with hearing impairment generally utilized spectral cues when the information was static (spectral shape) but used temporal cues when the information was dynamic (formant transition). The relative use of spectral and temporal dimensions varied among individuals, especially among listeners with hearing loss. Conclusion Information about spectral and temporal cue use may aid in identifying listeners who rely to a greater extent on particular acoustic cues and applying that information toward therapeutic interventions. PMID:25629388
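
    The discriminant-function step can be pictured with a small sketch (scikit-learn's LDA is used here as a stand-in; the data, coding, and software below are illustrative assumptions, not the study's):

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(2)
        n = 200
        spectral = rng.normal(size=n)   # position along a spectral continuum (fabricated)
        temporal = rng.normal(size=n)   # position along a temporal continuum (fabricated)
        # Simulated listener who weights the spectral dimension more heavily:
        response = (0.8 * spectral + 0.2 * temporal + rng.normal(scale=0.5, size=n)) > 0

        lda = LinearDiscriminantAnalysis().fit(np.column_stack([spectral, temporal]), response)
        w = np.abs(lda.coef_[0])
        print("relative cue weights:", w / w.sum())   # roughly [0.8, 0.2] for this listener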

  18. Attentional reorienting triggers spatial asymmetries in a search task with cross-modal spatial cueing

    PubMed Central

    Paladini, Rebecca E.; Diana, Lorenzo; Zito, Giuseppe A.; Nyffeler, Thomas; Wyss, Patric; Mosimann, Urs P.; Müri, René M.; Nef, Tobias

    2018-01-01

    Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention. Specifically, a facilitation has been observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and the right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained no auditory cues (i.e., a unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants’ accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an additional incongruent and a spatially non-informative auditory cue resulted in lateral asymmetries, with search times increased for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition. However, participants’ performance in the congruent condition was modulated by their tone localisation accuracy. The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues, and occur under specific attentional conditions, i.e., when visual attention has to be reoriented towards the left hemifield. PMID:29293637

  19. How visual cues for when to listen aid selective auditory attention.

    PubMed

    Varghese, Lenny A; Ozmeral, Erol J; Best, Virginia; Shinn-Cunningham, Barbara G

    2012-06-01

    Visual cues are known to aid auditory processing when they provide direct information about signal content, as in lip reading. However, some studies hint that visual cues also aid auditory perception by guiding attention to the target in a mixture of similar sounds. The current study directly tests this idea for complex, nonspeech auditory signals, using a visual cue providing only timing information about the target. Listeners were asked to identify a target zebra finch bird song played at a random time within a longer, competing masker. Two different maskers were used: noise and a chorus of competing bird songs. On half of all trials, a visual cue indicated the timing of the target within the masker. For the noise masker, the visual cue did not affect performance when target and masker were from the same location, but improved performance when target and masker were in different locations. In contrast, for the chorus masker, visual cues improved performance only when target and masker were perceived as coming from the same direction. These results suggest that simple visual cues for when to listen improve target identification by enhancing sounds near the threshold of audibility when the target is energetically masked and by enhancing segregation when it is difficult to direct selective attention to the target. Visual cues help little when target and masker already differ in attributes that enable listeners to engage selective auditory attention effectively, including differences in spectrotemporal structure and in perceived location.

  20. Temporal and peripheral extraction of contextual cues from scenes during visual search.

    PubMed

    Koehler, Kathryn; Eckstein, Miguel P

    2017-02-01

    Scene context is known to facilitate object recognition and guide visual search, but little work has focused on isolating image-based cues and evaluating their contributions to eye movement guidance and search performance. Here, we explore three types of contextual cues (a co-occurring object, the configuration of other objects, and the superordinate category of background elements) and assess their joint contributions to search performance in the framework of cue combination and the temporal unfolding of their extraction. We also assess whether observers' ability to extract each contextual cue in the visual periphery is a bottleneck that determines the utilization and contribution of each cue to search guidance and decision accuracy. We find that during the first four fixations of a visual search task observers first utilize the configuration of objects for coarse eye movement guidance and later use co-occurring object information for finer guidance. In the absence of contextual cues, observers were suboptimally biased to report the target object as being absent. The presence of the co-occurring object was the only contextual cue that had a significant effect in reducing decision bias. The early influence of object-based cues on eye movements is corroborated by a clear demonstration of observers' ability to extract object cues up to 16° into the visual periphery. The joint contributions of the cues to search decision accuracy approximate what would be expected from the combination of statistically independent cues under optimal cue combination. Finally, the lack of utilization and contribution of the background-based contextual cue to search guidance cannot be explained by the availability of that cue in the visual periphery; instead, it is related to background cues providing the least inherent information about the precise location of the target in the scene.
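
    As a reference point, the standard ideal-observer benchmark for combining statistically independent cues is reliability-weighted averaging (one common formalization; the paper's exact model is not reproduced here):

        \hat{s} = \frac{\sum_i \hat{s}_i / \sigma_i^2}{\sum_i 1 / \sigma_i^2},
        \qquad \sigma_{\text{comb}}^2 = \Big( \sum_i \sigma_i^{-2} \Big)^{-1}

    Under this scheme the combined estimate is always at least as reliable as the best single cue, which is what makes it a useful optimality benchmark for the observed joint contributions.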

  1. Effects of visual and motion simulation cueing systems on pilot performance during takeoffs with engine failures

    NASA Technical Reports Server (NTRS)

    Parris, B. L.; Cook, A. M.

    1978-01-01

    Data are presented that show the effects of visual and motion cueing on pilot performance during takeoffs with engine failures. Four groups of USAF pilots flew a simulated KC-135 using four different cueing systems. The most basic of these systems was of the instrument-only type. Visual scene simulation and/or motion simulation was added to produce the other systems. Learning curves, mean performance, and subjective data are examined. The results show that the addition of visual cueing results in significant improvement in pilot performance, but the combined use of visual and motion cueing results in far better performance.

  2. Manual control of yaw motion with combined visual and vestibular cues

    NASA Technical Reports Server (NTRS)

    Zacharias, G. L.; Young, L. R.

    1977-01-01

    Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation was modelled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A correction to the frequency responses is provided by a separate measurement of manual control performance in an analogous visual pursuit nulling task. The resulting dual-input describing function for motion perception dependence on combined cue presentation supports the complementary model, in which vestibular cues dominate sensation at frequencies above 0.05 Hz. The describing function model is extended by the proposal of a non-linear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues.
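
    A minimal sketch of the complementary-channel idea described here: the visual channel is low-pass filtered and the vestibular channel high-pass filtered with the same pole, so the two paths sum to unity gain and vestibular input dominates above the crossover. First-order filters, the sample rate, and the exact crossover are illustrative assumptions:

        import numpy as np

        DT = 0.01                           # sample interval (s), assumed
        FC = 0.05                           # crossover frequency (Hz), from the study
        TAU = 1.0 / (2 * np.pi * FC)
        ALPHA = TAU / (TAU + DT)            # discrete first-order low-pass coefficient

        def estimate_self_rotation(visual, vestibular):
            """Fuse equally sampled visual and vestibular velocity cues (deg/s)."""
            est = np.zeros(len(visual))
            lp_vis = lp_vest = 0.0
            for k in range(len(visual)):
                lp_vis = ALPHA * lp_vis + (1 - ALPHA) * visual[k]
                lp_vest = ALPHA * lp_vest + (1 - ALPHA) * vestibular[k]
                # low-passed vision plus high-passed vestibular signal;
                # sharing one pole makes the two channel gains sum to one.
                est[k] = lp_vis + (vestibular[k] - lp_vest)
            return est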

  3. Gravity in the Brain as a Reference for Space and Time Perception.

    PubMed

    Lacquaniti, Francesco; Bosco, Gianfranco; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka

    2015-01-01

    Moving and interacting with the environment require a reference for orientation and a scale for calibration in space and time. There is a wide variety of environmental clues and calibrated frames at different locales, but the reference of gravity is ubiquitous on Earth. The pull of gravity on static objects provides a plummet which, together with the horizontal plane, defines a three-dimensional Cartesian frame for visual images. On the other hand, the gravitational acceleration of falling objects can provide a time-stamp on events, because the motion duration of an object accelerated by gravity over a given path is fixed. Indeed, since ancient times, man has been using plumb bobs for spatial surveying, and water clocks or pendulum clocks for time keeping. Here we review behavioral evidence in favor of the hypothesis that the brain is endowed with mechanisms that exploit the presence of gravity to estimate the spatial orientation and the passage of time. Several visual and non-visual (vestibular, haptic, visceral) cues are merged to estimate the orientation of the visual vertical. However, the relative weight of each cue is not fixed, but depends on the specific task. Next, we show that an internal model of the effects of gravity is combined with multisensory signals to time the interception of falling objects, to time the passage through spatial landmarks during virtual navigation, to assess the duration of a gravitational motion, and to judge the naturalness of periodic motion under gravity.

  4. Motion/visual cueing requirements for vortex encounters during simulated transport visual approach and landing

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Bowles, R. L.

    1983-01-01

    This paper addresses the issues of motion/visual cueing fidelity requirements for vortex encounters during simulated transport visual approaches and landings. Four simulator configurations were utilized to provide objective performance measures during simulated vortex penetrations, and subjective comments from pilots were collected. The configurations used were as follows: fixed base with visual degradation (delay), fixed base with no visual degradation, moving base with visual degradation (delay), and moving base with no visual degradation. The statistical comparisons of the objective measures and the subjective pilot opinions indicated that although both minimum visual delay and motion cueing are recommended for the vortex penetration task, the visual-scene delay characteristics were not as significant a fidelity factor as was the presence of motion cues. However, this indication was applicable to a restricted task, and to transport aircraft. Although they were statistically significant, the effects of visual delay and motion cueing on the touchdown-related measures were considered to be of no practical consequence.

  5. Love and fear of heights: the pathophysiology and psychology of height imbalance.

    PubMed

    Salassa, John R; Zapala, David A

    2009-01-01

    Individual psychological responses to heights vary on a continuum from acrophobia to height intolerance, height tolerance, and height enjoyment. This paper reviews the English literature and summarizes the physiologic and psychological factors that generate different responses to heights while standing still in a static or motionless environment. Perceptual cues to height arise from vision. Normal postural sway of 2 cm for peripheral objects within 3 m increases as eye-object distance increases. Postural sway >10 cm can result in a fall. A minimum of 20 minutes of peripheral retinal arc is required to detect motion. Trigonometry dictates that a 20-minute peripheral retinal arc can no longer be achieved in a standing position at an eye-object distance of >20 m. At this distance, visual cues conflict with somatosensory and vestibular inputs, resulting in variable degrees of imbalance. Co-occurring deficits in the visual, vestibular, and somatosensory systems can significantly increase height imbalance. An individual's psychological makeup, influenced by learned and genetic factors, can influence reactions to height imbalance. Enhancing peripheral vision and vestibular, proprioceptive, and haptic functions may improve height imbalance. Psychotherapy may improve the troubling subjective sensations to heights.
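
    A worked small-angle check of the 20 m figure, using only the numbers quoted above:

        \theta = 20' = \frac{20}{60} \cdot \frac{\pi}{180} \approx 5.8 \times 10^{-3}\ \text{rad},
        \qquad s = d\,\theta \approx 20\ \text{m} \times 5.8 \times 10^{-3} \approx 0.12\ \text{m}

    The sway needed to produce a just-detectable 20-minute image motion at 20 m is thus about 12 cm, already beyond the >10 cm of sway associated with falling, which is why visual self-motion cues drop out at roughly that distance.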

  6. Media/Device Configurations for Platoon Leader Tactical Training

    DTIC Science & Technology

    1985-02-01

    The device should simulate the real-time receipt of all tactical voice communication, audio and visual battlefield cues, and visual communication signals. (Fragment from Table 4, Functional Capability Categories: receipt of limited tactical voice communication, plus audio and visual battlefield cues, and visual communication signals.)

  7. Influence of combined visual and vestibular cues on human perception and control of horizontal rotation

    NASA Technical Reports Server (NTRS)

    Zacharias, G. L.; Young, L. R.

    1981-01-01

    Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation is modeled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A dual-input describing function analysis supports the complementary model; vestibular cues dominate sensation at higher frequencies. The describing function model is extended by the proposal of a nonlinear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues.

  8. Visual Attention in Flies-Dopamine in the Mushroom Bodies Mediates the After-Effect of Cueing.

    PubMed

    Koenig, Sebastian; Wolf, Reinhard; Heisenberg, Martin

    2016-01-01

    Visual environments may simultaneously comprise stimuli of different significance. Often such stimuli require incompatible responses. Selective visual attention allows an animal to respond exclusively to the stimuli at a certain location in the visual field. In the process of establishing its focus of attention the animal can be influenced by external cues. Here we characterize the behavioral properties and neural mechanism of cueing in the fly Drosophila melanogaster. A cue can be attractive, repulsive or ineffective depending upon, for example, its visual properties and location in the visual field. Dopamine signaling in the brain is required to maintain the effect of cueing once the cue has disappeared. Raising or lowering dopamine at the synapse abolishes this after-effect. Specifically, dopamine is necessary and sufficient in the αβ-lobes of the mushroom bodies. Evidence is provided for an involvement of the αβ-posterior Kenyon cells.

  9. Exposure to visual cues of pathogen contagion changes preferences for masculinity and symmetry in opposite-sex faces.

    PubMed

    Little, Anthony C; DeBruine, Lisa M; Jones, Benedict C

    2011-07-07

    Evolutionary approaches to human attractiveness have documented several traits that are proposed to be attractive across individuals and cultures, although both cross-individual and cross-cultural variations are also often found. Previous studies show that parasite prevalence and mortality/health are related to cultural variation in preferences for attractive traits. Visual experience of pathogen cues may mediate such variable preferences. Here we showed individuals slideshows of images with cues to low and high pathogen prevalence and measured their visual preferences for face traits. We found that both men and women moderated their preferences for facial masculinity and symmetry according to recent experience of visual cues to environmental pathogens. Change in preferences was seen mainly for opposite-sex faces, with women preferring more masculine and more symmetric male faces and men preferring more feminine and more symmetric female faces after exposure to pathogen cues than when not exposed to such cues. Cues to environmental pathogens had no significant effects on preferences for same-sex faces. These data complement studies of cross-cultural differences in preferences by suggesting a mechanism for variation in mate preferences. Similar visual experience could lead to within-cultural agreement and differing visual experience could lead to cross-cultural variation. Overall, our data demonstrate that preferences can be strategically flexible according to recent visual experience with pathogen cues. Given that cues to pathogens may signal an increase in contagion/mortality risk, it may be adaptive to shift visual preferences in favour of proposed good-gene markers in environments where such cues are more evident.

  10. Exposure to visual cues of pathogen contagion changes preferences for masculinity and symmetry in opposite-sex faces

    PubMed Central

    Little, Anthony C.; DeBruine, Lisa M.; Jones, Benedict C.

    2011-01-01

    Evolutionary approaches to human attractiveness have documented several traits that are proposed to be attractive across individuals and cultures, although both cross-individual and cross-cultural variations are also often found. Previous studies show that parasite prevalence and mortality/health are related to cultural variation in preferences for attractive traits. Visual experience of pathogen cues may mediate such variable preferences. Here we showed individuals slideshows of images with cues to low and high pathogen prevalence and measured their visual preferences for face traits. We found that both men and women moderated their preferences for facial masculinity and symmetry according to recent experience of visual cues to environmental pathogens. Change in preferences was seen mainly for opposite-sex faces, with women preferring more masculine and more symmetric male faces and men preferring more feminine and more symmetric female faces after exposure to pathogen cues than when not exposed to such cues. Cues to environmental pathogens had no significant effects on preferences for same-sex faces. These data complement studies of cross-cultural differences in preferences by suggesting a mechanism for variation in mate preferences. Similar visual experience could lead to within-cultural agreement and differing visual experience could lead to cross-cultural variation. Overall, our data demonstrate that preferences can be strategically flexible according to recent visual experience with pathogen cues. Given that cues to pathogens may signal an increase in contagion/mortality risk, it may be adaptive to shift visual preferences in favour of proposed good-gene markers in environments where such cues are more evident. PMID:21123269

  11. Context-dependent arm pointing adaptation

    NASA Technical Reports Server (NTRS)

    Seidler, R. D.; Bloomberg, J. J.; Stelmach, G. E.

    2001-01-01

    We sought to determine the effectiveness of head posture as a contextual cue to facilitate adaptive transitions in manual control during visuomotor distortions. Subjects performed arm pointing movements by drawing on a digitizing tablet, with targets and movement trajectories displayed in real time on a computer monitor. Adaptation was induced by presenting the trajectories in an altered gain format on the monitor. The subjects were shown visual displays of their movements that corresponded to either 0.5 or 1.5 scaling of the movements made. Subjects were assigned to three groups: the head orientation group tilted the head towards the right shoulder when drawing under a 0.5 gain of display and towards the left shoulder when drawing under a 1.5 gain of display; the target orientation group had the home and target positions rotated counterclockwise when drawing under the 0.5 gain and clockwise for the 1.5 gain; the arm posture group changed the elbow angle of the arm they were not drawing with from full flexion to full extension with 0.5 and 1.5 gain display changes. To determine if contextual cues were associated with display alternations, the gain changes were returned to the standard (1.0) display. Aftereffects were assessed to determine the efficacy of the head orientation contextual cue compared to the two control cues. The head orientation cue was effectively associated with the multiple gains. The target orientation cue also demonstrated some effectiveness while the arm posture cue did not. The results demonstrate that contextual cues can be used to switch between multiple adaptive states. These data provide support for the idea that static head orientation information is a crucial component to the arm adaptation process. These data further define the functional linkage between head posture and arm pointing movements.

  12. Context-Dependent Arm Pointing Adaptation

    NASA Technical Reports Server (NTRS)

    Seidler, R. D.; Bloomberg, J. J.; Stelmach, G. E.

    2000-01-01

    We sought to determine the effectiveness of head posture as a contextual cue to facilitate adaptive transitions in manual control during visuomotor distortions. Subjects performed arm pointing movements by drawing on a digitizing tablet, with targets and movement trajectories displayed in real time on a computer monitor. Adaptation was induced by presenting the trajectories in an altered gain format on the monitor. The subjects were shown visual displays of their movements that corresponded to either 0.5 or 1.5 scaling of the movements made. Subjects were assigned to three groups: the head orientation group tilted the head towards the right shoulder when drawing under a 0.5 gain of display and towards the left shoulder when drawing under a 1.5 gain of display; the target orientation group had the home and target positions rotated counterclockwise when drawing under the 0.5 gain and clockwise for the 1.5 gain; the arm posture group changed the elbow angle of the arm they were not drawing with from full flexion to full extension with 0.5 and 1.5 gain display changes. To determine if contextual cues were associated with display alternations, the gain changes were returned to the standard (1.0) display. Aftereffects were assessed to determine the efficacy of the head orientation contextual cue compared to the two control cues. The head orientation cue was effectively associated with the multiple gains. The target orientation cue also demonstrated some effectiveness while the arm posture cue did not. The results demonstrate that contextual cues can be used to switch between multiple adaptive states. These data provide support for the idea that static head orientation information is a crucial component to the arm adaptation process. These data further define the functional linkage between head posture and arm pointing movements.

  13. The effect of visual context on manual localization of remembered targets

    NASA Technical Reports Server (NTRS)

    Barry, S. R.; Bloomberg, J. J.; Huebner, W. P.

    1997-01-01

    This paper examines the contribution of egocentric cues and visual context to manual localization of remembered targets. Subjects pointed in the dark to the remembered position of a target previously viewed without or within a structured visual scene. Without a remembered visual context, subjects pointed to within 2 degrees of the target. The presence of a visual context with cues of straight ahead enhanced pointing performance to the remembered location of central but not off-center targets. Thus, visual context provides strong visual cues of target position and the relationship of body position to target location. Without a visual context, egocentric cues provide sufficient input for accurate pointing to remembered targets.

  14. The fMRI BOLD response to unisensory and multisensory smoking cues in nicotine-dependent adults

    PubMed Central

    Cortese, Bernadette M.; Uhde, Thomas W.; Brady, Kathleen T.; McClernon, F. Joseph; Yang, Qing X.; Collins, Heather R.; LeMatty, Todd; Hartwell, Karen J.

    2015-01-01

    Given that the vast majority of functional magnetic resonance imaging (fMRI) studies of drug cue reactivity use unisensory visual cues, but that multisensory cues may elicit greater craving-related brain responses, the current study sought to compare the fMRI BOLD response to unisensory visual and multisensory, visual plus odor, smoking cues in 17 nicotine-dependent adult cigarette smokers. Brain activation to smoking-related, compared to neutral, pictures was assessed under cigarette smoke and odorless odor conditions. While smoking pictures elicited a pattern of activation consistent with the addiction literature, the multisensory (odor + picture) smoking cues elicited significantly greater and more widespread activation in mainly frontal and temporal regions. BOLD signal elicited by the multi-sensory, but not unisensory cues, was significantly related to participants’ level of control over craving as well. Results demonstrated that the co-presentation of cigarette smoke odor with smoking-related visual cues, compared to the visual cues alone, elicited greater levels of craving-related brain activation in key regions implicated in reward. These preliminary findings support future research aimed at a better understanding of multisensory integration of drug cues and craving. PMID:26475784

  15. The fMRI BOLD response to unisensory and multisensory smoking cues in nicotine-dependent adults.

    PubMed

    Cortese, Bernadette M; Uhde, Thomas W; Brady, Kathleen T; McClernon, F Joseph; Yang, Qing X; Collins, Heather R; LeMatty, Todd; Hartwell, Karen J

    2015-12-30

    Given that the vast majority of functional magnetic resonance imaging (fMRI) studies of drug cue reactivity use unisensory visual cues, but that multisensory cues may elicit greater craving-related brain responses, the current study sought to compare the fMRI BOLD response to unisensory visual and multisensory, visual plus odor, smoking cues in 17 nicotine-dependent adult cigarette smokers. Brain activation to smoking-related, compared to neutral, pictures was assessed under cigarette smoke and odorless odor conditions. While smoking pictures elicited a pattern of activation consistent with the addiction literature, the multisensory (odor+picture) smoking cues elicited significantly greater and more widespread activation in mainly frontal and temporal regions. BOLD signal elicited by the multisensory, but not unisensory cues, was significantly related to participants' level of control over craving as well. Results demonstrated that the co-presentation of cigarette smoke odor with smoking-related visual cues, compared to the visual cues alone, elicited greater levels of craving-related brain activation in key regions implicated in reward. These preliminary findings support future research aimed at a better understanding of multisensory integration of drug cues and craving.

  16. Cue-Reactive Rationality, Visual Imagery and Volitional Control Predict Cue-Reactive Urge to Gamble in Poker-Machine Gamblers.

    PubMed

    Clark, Gavin I; Rock, Adam J; McKeith, Charles F A; Coventry, William L

    2017-09-01

    Poker-machine gamblers have been demonstrated to report increases in the urge to gamble following exposure to salient gambling cues. However, the processes which contribute to this urge to gamble remain to be understood. The present study aimed to investigate whether changes in the conscious experience of visual imagery, rationality and volitional control (over one's thoughts, images and attention) predicted changes in the urge to gamble following exposure to a gambling cue. Thirty-one regular poker-machine gamblers who reported at least low levels of problem gambling on the Problem Gambling Severity Index (PGSI) were recruited to complete an online cue-reactivity experiment. Participants completed the PGSI, the visual imagery, rationality and volitional control subscales of the Phenomenology of Consciousness Inventory (PCI), and a visual analogue scale (VAS) assessing urge to gamble. Participants completed the PCI subscales and VAS at baseline, following a neutral video cue and following a gambling video cue. Urge to gamble was found to increase significantly from neutral cue to gambling cue (while controlling for baseline urge), and this increase was predicted by PGSI score. After accounting for the effects of problem-gambling severity, cue-reactive visual imagery, rationality and volitional control significantly improved the prediction of cue-reactive urge to gamble. The small sample size and limited participant-characteristic data restrict the generalizability of the findings. Nevertheless, this is the first study to demonstrate that changes in the subjective experience of visual imagery, volitional control and rationality predict changes in the urge to gamble from neutral to gambling cue. The results suggest that visual imagery, rationality and volitional control may play an important role in the experience of the urge to gamble in poker-machine gamblers.

  17. The Effects of Visual Beats on Prosodic Prominence: Acoustic Analyses, Auditory Perception and Visual Perception

    ERIC Educational Resources Information Center

    Krahmer, Emiel; Swerts, Marc

    2007-01-01

    Speakers employ acoustic cues (pitch accents) to indicate that a word is important, but may also use visual cues (beat gestures, head nods, eyebrow movements) for this purpose. Even though these acoustic and visual cues are related, the exact nature of this relationship is far from well understood. We investigate whether producing a visual beat…

  18. Visual Features Involving Motion Seen from Airport Control Towers

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.; Liston, Dorion

    2010-01-01

    Visual motion cues are used by tower controllers to support both visual and anticipated separation. Some of these cues are tabulated as part of the overall set of visual features used in towers to separate aircraft. An initial analysis of one motion cue, landing deceleration, is provided as a basis for evaluating how controllers detect and use it for spacing aircraft on or near the surface. Understanding cues like this one will help determine whether they can be safely used in a remote/virtual tower, in which their presentation may be visually degraded.

  19. Individual Sensitivity to Spectral and Temporal Cues in Listeners with Hearing Impairment

    ERIC Educational Resources Information Center

    Souza, Pamela E.; Wright, Richard A.; Blackburn, Michael C.; Tatman, Rachael; Gallun, Frederick J.

    2015-01-01

    Purpose: The present study was designed to evaluate use of spectral and temporal cues under conditions in which both types of cues were available. Method: Participants included adults with normal hearing and hearing loss. We focused on 3 categories of speech cues: static spectral (spectral shape), dynamic spectral (formant change), and temporal…

  20. The neural basis of form and form-motion integration from static and dynamic translational Glass patterns: A rTMS investigation.

    PubMed

    Pavan, Andrea; Ghin, Filippo; Donato, Rita; Campana, Gianluca; Mather, George

    2017-08-15

    A long-held view of the visual system is that form and motion are independently analysed. However, there is physiological and psychophysical evidence of early interaction in the processing of form and motion. In this study, we used a combination of Glass patterns (GPs) and repetitive Transcranial Magnetic Stimulation (rTMS) to investigate in human observers the neural mechanisms underlying form-motion integration. GPs consist of randomly distributed dot pairs (dipoles) that induce the percept of an oriented stimulus. GPs can be either static or dynamic. Dynamic GPs have both a form component (i.e., orientation) and a non-directional motion component along the orientation axis. GPs were presented in two temporal intervals and observers were asked to discriminate the temporal interval containing the most coherent GP. rTMS was delivered over early visual areas (V1/V2) and over area V5/MT shortly after the presentation of the GP in each interval. The results showed that rTMS applied over early visual areas affected the perception of static GPs, whereas stimulation of area V5/MT did not affect observers' performance. On the other hand, rTMS delivered over either V1/V2 or V5/MT strongly impaired the perception of dynamic GPs. These results suggest that early visual areas are involved in the processing of the spatial structure of GPs, and that interfering with the extraction of the global spatial structure also affects the extraction of the motion component, possibly interfering with early form-motion integration. Visual area V5/MT, by contrast, is likely to be involved only in the processing of the motion component of dynamic GPs. These results suggest that motion and form cues may interact as early as V1/V2.

  1. Listeners' expectation of room acoustical parameters based on visual cues

    NASA Astrophysics Data System (ADS)

    Valente, Daniel L.

    Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audio-visual study, in which participants are instructed to make spatial congruency and quantity judgments in dynamic cross-modal environments. The results of these psychophysical tests suggest the importance of consilient audio-visual presentation to the legibility of an auditory scene. Several studies have looked into audio-visual interaction in room perception in recent years, but these studies rely on static images, speech signals, or photographs alone to represent the visual scene. Building on these studies, the aim is to propose a testing method that uses monochromatic compositing (blue-screen technique) to position a studio recording of a musical performance in a number of virtual acoustical environments and ask subjects to assess these environments. In the first experiment of the study, video footage was taken from five rooms varying in physical size from a small studio to a small performance hall. Participants were asked to perceptually align two distinct acoustical parameters (early-to-late reverberant energy ratio and reverberation time) of two solo musical performances in five contrasting visual environments according to their expectations of how the room should sound given its visual appearance. In the second experiment in the study, video footage shot from four different listening positions within a general-purpose space was coupled with sounds derived from measured binaural impulse responses (IRs). The relationship between the presented image, sound, and virtual receiver position was examined. It was found that many visual cues altered the perceived events of the acoustic environment. This included the visual attributes of the space in which the performance was located as well as the visual attributes of the performer. The visual makeup of the performer included: (1) an actual video of the performance, (2) a surrogate image of the performance, for example a loudspeaker's image reproducing the performance, (3) no visual image of the performance (empty room), or (4) a multi-source visual stimulus (actual video of the performance coupled with two images of loudspeakers positioned to the left and right of the performer). For this experiment, perceived auditory events were measured in terms of two subjective spatial metrics: Listener Envelopment (LEV) and Apparent Source Width (ASW). These metrics were hypothesized to be dependent on the visual imagery of the presented performance. Data were also collected by participants matching direct and reverberant sound levels for the presented audio-visual scenes. In the final experiment, participants judged spatial expectations of an ensemble of musicians presented in the five physical spaces from Experiment 1. Supporting data were accumulated in two stages. First, participants were given an audio-visual matching test, in which they were instructed to align the auditory width of a performing ensemble to a varying set of audio and visual cues. In the second stage, a conjoint analysis design paradigm was explored to extrapolate the relative magnitude of explored audio-visual factors in affecting three assessed response criteria: Congruency (the perceived match-up of the auditory and visual cues in the assessed performance), ASW and LEV.
Results show that both auditory and visual factors affect the collected responses, and that the two sensory modalities coincide in distinct interactions. This study reveals participant resiliency in the presence of forced auditory-visual mismatch: Participants are able to adjust the acoustic component of the cross-modal environment in a statistically similar way despite randomized starting values for the monitored parameters. Subjective results of the experiments are presented along with objective measurements for verification.
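
    The conjoint-analysis stage can be pictured with a small main-effects sketch: regress a response criterion on dummy-coded audio and visual factor levels and read the fitted coefficients as relative factor magnitudes. Everything below (factor counts, levels, ratings) is a fabricated placeholder, not the study's data:

        import numpy as np

        rng = np.random.default_rng(3)
        n_trials = 120
        audio_level = rng.integers(0, 3, n_trials)    # e.g. three reverberation settings
        visual_level = rng.integers(0, 2, n_trials)   # e.g. two room images

        X = np.column_stack([
            np.ones(n_trials),                    # intercept (baseline condition)
            audio_level == 1, audio_level == 2,   # audio dummies (level 0 = baseline)
            visual_level == 1,                    # visual dummy
        ]).astype(float)
        true_part_worths = np.array([5.0, 0.8, 1.6, -0.5])
        ratings = X @ true_part_worths + rng.normal(0.0, 0.3, n_trials)

        estimated, *_ = np.linalg.lstsq(X, ratings, rcond=None)
        # Each fitted coefficient estimates a factor level's contribution to the
        # judged criterion (e.g. Congruency) relative to the baseline condition.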

  2. Perceptual transparency from image deformation.

    PubMed

    Kawabe, Takahiro; Maruya, Kazushi; Nishida, Shin'ya

    2015-08-18

    Human vision has a remarkable ability to perceive two layers at the same retinal locations, a transparent layer in front of a background surface. Critical image cues to perceptual transparency, studied extensively in the past, are changes in luminance or color that could be caused by light absorptions and reflections by the front layer, but such image changes may not be clearly visible when the front layer consists of a pure transparent material such as water. Our daily experiences with transparent materials of this kind suggest that an alternative potential cue of visual transparency is image deformations of a background pattern caused by light refraction. Although previous studies have indicated that these image deformations, at least static ones, play little role in perceptual transparency, here we show that dynamic image deformations of the background pattern, which could be produced by light refraction on a moving liquid's surface, can produce a vivid impression of a transparent liquid layer without the aid of any other visual cues as to the presence of a transparent layer. Furthermore, a transparent liquid layer perceptually emerges even from a randomly generated dynamic image deformation as long as it is similar to real liquid deformations in its spatiotemporal frequency profile. Our findings indicate that the brain can perceptually infer the presence of "invisible" transparent liquids by analyzing the spatiotemporal structure of dynamic image deformation, for which it uses a relatively simple computation that does not require high-level knowledge about the detailed physics of liquid deformation.
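
    A rough sketch of the stimulus construction described here: draw a random displacement field, smooth it in space and time so its spatiotemporal frequency content is liquid-like, and warp a background texture with it. The filter widths and amplitude are assumptions, not the paper's parameters:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        H, W, T = 64, 64, 30
        rng = np.random.default_rng(4)
        # Random displacement fields, band-limited by Gaussian smoothing over (t, y, x).
        dx = gaussian_filter(rng.normal(size=(T, H, W)), sigma=(2, 4, 4))
        dy = gaussian_filter(rng.normal(size=(T, H, W)), sigma=(2, 4, 4))
        for d in (dx, dy):
            d /= np.abs(d).max()                   # normalize to unit peak displacement
        AMP = 3.0                                  # peak displacement (pixels), assumed

        background = rng.uniform(size=(H, W))      # stand-in for a background texture
        ys, xs = np.mgrid[0:H, 0:W]
        frames = []
        for t in range(T):
            sy = np.clip(ys + AMP * dy[t], 0, H - 1).astype(int)
            sx = np.clip(xs + AMP * dx[t], 0, W - 1).astype(int)
            frames.append(background[sy, sx])      # frame t of the warped movie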

  3. Application of Visual Cues on 3D Dynamic Visualizations for Engineering Technology Students and Effects on Spatial Visualization Ability: A Quasi-Experimental Study

    ERIC Educational Resources Information Center

    Katsioloudis, Petros; Jovanovic, Vukica; Jones, Mildred

    2016-01-01

    Several theorists believe that different types of visual cues influence cognition and behavior through learned associations; however, research provides inconsistent results. Considering this, a quasi-experimental study was done to determine if there are significant positive effects of visual cues (color blue) and to identify if a positive increase…

  4. Visual form predictions facilitate auditory processing at the N1.

    PubMed

    Paris, Tim; Kim, Jeesun; Davis, Chris

    2017-02-20

    Auditory-visual (AV) events often involve a leading visual cue (e.g. auditory-visual speech) that allows the perceiver to generate predictions about the upcoming auditory event. Electrophysiological evidence suggests that when an auditory event is predicted, processing is sped up, i.e., the N1 component of the ERP occurs earlier (N1 facilitation). However, it is not clear (1) whether N1 facilitation is based specifically on prediction rather than on multisensory integration and (2) which particular properties of the visual cue it is based on. The current experiment used artificial AV stimuli in which visual cues predicted but did not co-occur with auditory cues. Visual form cues (high and low salience) and the auditory-visual pairing were manipulated so that auditory predictions could be based on form and timing or on timing only. The results showed that N1 facilitation occurred only for combined form and temporal predictions. These results suggest that faster auditory processing (as indicated by N1 facilitation) is based on predictive processing generated by a visual cue that clearly predicts both what and when the auditory stimulus will occur.

  5. Role of somatosensory and vestibular cues in attenuating visually induced human postural sway

    NASA Technical Reports Server (NTRS)

    Peterka, Robert J.; Benolken, Martha S.

    1993-01-01

    The purpose was to determine the contribution of visual, vestibular, and somatosensory cues to the maintenance of stance in humans. Postural sway was induced by full field, sinusoidal visual surround rotations about an axis at the level of the ankle joints. The influences of vestibular and somatosensory cues were characterized by comparing postural sway in normal and bilateral vestibular absent subjects in conditions that provided either accurate or inaccurate somatosensory orientation information. In normal subjects, the amplitude of visually induced sway reached a saturation level as stimulus amplitude increased. The saturation amplitude decreased with increasing stimulus frequency. No saturation phenomenon was observed in subjects with vestibular loss, implying that vestibular cues were responsible for the saturation phenomenon. For visually induced sways below the saturation level, the stimulus-response curves for both normal and vestibular loss subjects were nearly identical, implying that (1) normal subjects were not using vestibular information to attenuate their visually induced sway, possibly because sway was below a vestibular-related threshold level, and (2) vestibular loss subjects did not utilize visual cues to a greater extent than normal subjects; that is, a fundamental change in visual system 'gain' was not used to compensate for a vestibular deficit. An unexpected finding was that the amplitude of body sway induced by visual surround motion could be almost three times greater than the amplitude of the visual stimulus in normals and vestibular loss subjects. This occurred in conditions where somatosensory cues were inaccurate and at low stimulus amplitudes. A control system model of visually induced postural sway was developed to explain this finding. For both subject groups, the amplitude of visually induced sway was smaller by a factor of about four in tests where somatosensory cues provided accurate versus inaccurate orientation information. This implied that (1) the vestibular loss subjects did not utilize somatosensory cues to a greater extent than normal subjects; that is, changes in somatosensory system 'gain' were not used to compensate for a vestibular deficit, and (2) the threshold for the use of vestibular cues in normals was apparently lower in test conditions where somatosensory cues were providing accurate orientation information.

  6. The effect of contextual cues on the encoding of motor memories.

    PubMed

    Howard, Ian S; Wolpert, Daniel M; Franklin, David W

    2013-05-01

    Several studies have shown that sensory contextual cues can reduce the interference observed during learning of opposing force fields. However, because each study examined a small set of cues, often in a unique paradigm, the relative efficacy of different sensory contextual cues is unclear. In the present study we quantify how seven contextual cues, some investigated previously and some novel, affect the formation and recall of motor memories. Subjects made movements in a velocity-dependent curl field, with direction varying randomly from trial to trial but always associated with a unique contextual cue. Linking field direction to the cursor or background color, or to peripheral visual motion cues, did not reduce interference. In contrast, the orientation of a visual object attached to the hand cursor significantly reduced interference, albeit by a small amount. When the fields were associated with movement in different locations in the workspace, a substantial reduction in interference was observed. We tested whether this reduction in interference was due to the different locations of the visual feedback (targets and cursor) or the movements (proprioceptive). When the fields were associated only with changes in visual display location (movements always made centrally) or only with changes in the movement location (visual feedback always displayed centrally), a substantial reduction in interference was observed. These results show that although some visual cues can lead to the formation and recall of distinct representations in motor memory, changes in spatial visual and proprioceptive states of the movement are far more effective than changes in simple visual contextual cues.
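
    For reference, a velocity-dependent curl field maps hand velocity to a force at right angles to it; a minimal sketch (the 13 N·s/m gain is a common choice in this literature, assumed rather than taken from the paper):

        import numpy as np

        B = 13.0   # curl gain (N·s/m), assumed

        def curl_force(velocity_xy, clockwise=True):
            """Return the force (N) for a planar hand velocity (m/s)."""
            vx, vy = velocity_xy
            sign = 1.0 if clockwise else -1.0      # trial-to-trial field direction
            return np.array([sign * B * vy, -sign * B * vx])

        curl_force(np.array([0.0, 0.3]))   # a forward movement is pushed sideways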

  7. Domain general learning: Infants use social and non-social cues when learning object statistics

    PubMed Central

    Barry, Ryan A.; Graf Estes, Katharine; Rivera, Susan M.

    2015-01-01

    Previous research has shown that infants can learn from social cues. But is a social cue more effective at directing learning than a non-social cue? This study investigated whether 9-month-old infants (N = 55) could learn a visual statistical regularity in the presence of a distracting visual sequence when attention was directed by either a social cue (a person) or a non-social cue (a rectangle). The results show that both social and non-social cues can guide infants’ attention to a visual shape sequence (and away from a distracting sequence). The social cue more effectively directed attention than the non-social cue during the familiarization phase, but the social cue did not result in significantly stronger learning than the non-social cue. The findings suggest that domain general attention mechanisms allow for the comparable learning seen in both conditions. PMID:25999879

  8. Accurate estimation of human body orientation from RGB-D sensors.

    PubMed

    Liu, Wu; Zhang, Yongdong; Tang, Sheng; Tang, Jinhui; Hong, Richang; Li, Jintao

    2013-10-01

    Accurate estimation of human body orientation can significantly enhance the analysis of human behavior, which is a fundamental task in the field of computer vision. However, existing orientation estimation methods cannot handle the various body poses and appearances. In this paper, we propose an innovative RGB-D-based orientation estimation method to address these challenges. By utilizing the RGB-D information, which can be acquired in real time by RGB-D sensors, our method is robust to cluttered environments, illumination changes and partial occlusions. Specifically, efficient static and motion cue extraction methods are proposed based on the RGB-D superpixels to reduce the noise of depth data. Since it is hard to discriminate all 360° of orientation using static cues or motion cues independently, we propose to utilize a dynamic Bayesian network system (DBNS) to effectively employ the complementary nature of both static and motion cues. In order to verify our proposed method, we build an RGB-D-based human body orientation dataset that covers a wide diversity of poses and appearances. Our intensive experimental evaluations on this dataset demonstrate the effectiveness and efficiency of the proposed method.
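
    A minimal stand-in for the cue-fusion idea (not the authors' DBNS): a Bayes filter over discretized orientations whose observation term is the product of a static-cue likelihood and a motion-cue likelihood. Bin count, transition smoothing, and the example likelihoods are assumptions:

        import numpy as np

        N_BINS = 8                                # 360 deg in 45 deg bins
        belief = np.full(N_BINS, 1.0 / N_BINS)    # uniform prior over orientation

        # Transition model: mostly keep orientation, sometimes turn one bin.
        trans = np.zeros((N_BINS, N_BINS))
        for i in range(N_BINS):
            trans[i, i] = 0.8
            trans[i, (i - 1) % N_BINS] = trans[i, (i + 1) % N_BINS] = 0.1

        def update(belief, static_like, motion_like):
            """Predict with the transition model, then weight by both cues."""
            predicted = trans.T @ belief
            posterior = predicted * static_like * motion_like
            return posterior / posterior.sum()

        # One step: the static cue prefers bin 2, the motion cue is vaguer.
        static_like = np.array([.02, .10, .60, .10, .05, .05, .04, .04])
        motion_like = np.array([.05, .20, .30, .20, .05, .05, .10, .05])
        belief = update(belief, static_like, motion_like)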

  9. Cross-Sensory Transfer of Reference Frames in Spatial Memory

    ERIC Educational Resources Information Center

    Kelly, Jonathan W.; Avraamides, Marios N.

    2011-01-01

    Two experiments investigated whether visual cues influence spatial reference frame selection for locations learned through touch. Participants experienced visual cues emphasizing specific environmental axes and later learned objects through touch. Visual cues were manipulated and haptic learning conditions were held constant. Imagined perspective…

  10. Attentional bias to food-related visual cues: is there a role in obesity?

    PubMed

    Doolan, K J; Breslin, G; Hanna, D; Gallagher, A M

    2015-02-01

    The incentive sensitisation model of obesity suggests that modification of the dopaminergic-associated reward systems in the brain may result in increased awareness of food-related visual cues present in the current food environment. A heightened awareness of these visual food cues may affect food choices and eating behaviours; those most aware of, or most attentive to, food-related stimuli may be at greater risk of overeating and subsequent weight gain. To date, research related to attentional responses to visual food cues has been both limited and conflicting. Such inconsistent findings may in part be explained by the use of different methodological approaches to measure attentional bias and by the impact of other factors, such as hunger levels, the energy density of visual food cues and individual eating-style traits, that may influence visual attention to food-related cues beyond weight status alone. This review examines the various methodologies employed to measure attentional bias, with a particular focus on the role that attentional processing of food-related visual cues may have in obesity. Based on the findings of this review, it appears that it may be too early to clarify the role visual attention to food-related cues may have in obesity. The results, however, highlight the importance of considering the most appropriate methodology when measuring attentional bias, and the characteristics of the targeted study populations, both when interpreting results to date and when designing future studies.

  11. Subconscious Visual Cues during Movement Execution Allow Correct Online Choice Reactions

    PubMed Central

    Leukel, Christian; Lundbye-Jensen, Jesper; Christensen, Mark Schram; Gollhofer, Albert; Nielsen, Jens Bo; Taube, Wolfgang

    2012-01-01

    Some sensory information is processed by our central nervous system without conscious perception. Such subconscious processing has been shown to be capable of triggering motor reactions. In the present study, we asked whether visual information that is not consciously perceived can influence decision-making in a choice reaction task. Ten healthy subjects (28±5 years) executed two different experimental protocols. In the Motor reaction protocol, a visual target cue was shown on a computer screen. Depending on the displayed cue, subjects had to either complete a reaching movement (go-condition) or abort the movement (stop-condition). The cue was presented with different display durations (20–160 ms). In the second, Verbalization protocol, subjects verbalized what they experienced on the screen; again, the cue was presented with different display durations. This second protocol tested for conscious perception of the visual cue. The results show that subjects achieved significantly more correct responses in the Motor reaction protocol than in the Verbalization protocol. This difference was observed only at the very short display durations of the visual cue. Since correct responses in the Verbalization protocol required conscious perception of the visual information, our findings imply that the subjects performed correct motor responses to visual cues of which they were not conscious. We therefore conclude that humans may reach decisions based on subconscious visual information in a choice reaction task. PMID:23049749

  12. Capuchin monkeys (Cebus apella) use positive, but not negative, auditory cues to infer food location.

    PubMed

    Heimbauer, Lisa A; Antworth, Rebecca L; Owren, Michael J

    2012-01-01

    Nonhuman primates appear to capitalize more effectively on visual cues than corresponding auditory versions. For example, studies of inferential reasoning have shown that monkeys and apes readily respond to seeing that food is present ("positive" cuing) or absent ("negative" cuing). Performance is markedly less effective with auditory cues, with many subjects failing to use this input. Extending recent work, we tested eight captive tufted capuchins (Cebus apella) in locating food using positive and negative cues in visual and auditory domains. The monkeys chose between two opaque cups to receive food contained in one of them. Cup contents were either shown or shaken, providing location cues from both cups, positive cues only from the baited cup, or negative cues from the empty cup. As in previous work, subjects readily used both positive and negative visual cues to secure reward. However, auditory outcomes were both similar to and different from those of earlier studies. Specifically, all subjects came to exploit positive auditory cues, but none responded to negative versions. The animals were also clearly different in visual versus auditory performance. Results indicate that a significant proportion of capuchins may be able to use positive auditory cues, with experience and learning likely playing a critical role. These findings raise the possibility that experience may be significant in visually based performance in this task as well, and highlight that coming to grips with evident differences between visual versus auditory processing may be important for understanding primate cognition more generally.

  13. Differential processing of binocular and monocular gloss cues in human visual cortex

    PubMed Central

    Di Luca, Massimiliano; Ban, Hiroshi; Muryy, Alexander; Fleming, Roland W.

    2016-01-01

    The visual impression of an object's surface reflectance (“gloss”) relies on a range of visual cues, both monocular and binocular. Whereas previous imaging work has identified processing within ventral visual areas as important for monocular cues, little is known about cortical areas involved in processing binocular cues. Here, we used human functional MRI (fMRI) to test for brain areas selectively involved in the processing of binocular cues. We manipulated stereoscopic information to create four conditions that differed in their disparity structure and in the impression of surface gloss that they evoked. We performed multivoxel pattern analysis to find areas whose fMRI responses allow classes of stimuli to be distinguished based on their depth structure vs. material appearance. We show that higher dorsal areas play a role in processing binocular gloss information, in addition to known ventral areas involved in material processing, with ventral area lateral occipital responding to both object shape and surface material properties. Moreover, we tested for similarities between the representation of gloss from binocular cues and monocular cues. Specifically, we tested for transfer in the decoding performance of an algorithm trained on glossy vs. matte objects defined by either binocular or by monocular cues. We found transfer effects from monocular to binocular cues in dorsal visual area V3B/kinetic occipital (KO), suggesting a shared representation of the two cues in this area. These results indicate the involvement of mid- to high-level visual circuitry in the estimation of surface material properties, with V3B/KO potentially playing a role in integrating monocular and binocular cues. PMID:26912596
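
    The cross-cue transfer test described here can be sketched schematically: train a glossy-versus-matte classifier on response patterns from monocular-cue trials and test it on binocular-cue trials. The sketch below is not the authors' analysis pipeline; voxel patterns are simulated with a shared, cue-invariant gloss signal so that transfer succeeds, as the abstract reports for area V3B/KO.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n_trials, n_voxels = 80, 200
    gloss_pattern = rng.normal(size=n_voxels)   # cue-invariant gloss signal (simulated)
    labels = rng.integers(0, 2, size=n_trials)  # 0 = matte, 1 = glossy

    def simulate_runs(labels, snr=0.5):
        """Simulated voxel patterns: noise plus the shared gloss pattern on glossy trials."""
        X = rng.normal(size=(len(labels), n_voxels))
        X[labels == 1] += snr * gloss_pattern
        return X

    X_monocular = simulate_runs(labels)  # training set: monocular-cue trials
    X_binocular = simulate_runs(labels)  # test set: binocular-cue trials

    # Above-chance accuracy here indicates a representation shared across the two cues.
    clf = LogisticRegression(max_iter=1000).fit(X_monocular, labels)
    print("cross-cue transfer accuracy:", clf.score(X_binocular, labels))
    ```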

  14. Auditory Emotional Cues Enhance Visual Perception

    ERIC Educational Resources Information Center

    Zeelenberg, Rene; Bocanegra, Bruno R.

    2010-01-01

    Recent studies show that emotional stimuli impair performance on subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…

  15. Cue Integration in Categorical Tasks: Insights from Audio-Visual Speech Perception

    PubMed Central

    Bejjanki, Vikranth Rao; Clayards, Meghan; Knill, David C.; Aslin, Richard N.

    2011-01-01

    Previous cue integration studies have examined continuous perceptual dimensions (e.g., size) and have shown that human cue integration is well described by a normative model in which cues are weighted in proportion to their sensory reliability, as estimated from single-cue performance. However, this normative model may not be applicable to categorical perceptual dimensions (e.g., phonemes). In tasks defined over categorical perceptual dimensions, optimal cue weights should depend not only on the sensory variance affecting the perception of each cue but also on the environmental variance inherent in each task-relevant category. Here, we present a computational and experimental investigation of cue integration in a categorical audio-visual (articulatory) speech perception task. Our results show that human performance during audio-visual phonemic labeling is qualitatively consistent with the behavior of a Bayes-optimal observer. Specifically, we show that the participants in our task are sensitive, on a trial-by-trial basis, to the sensory uncertainty associated with the auditory and visual cues during phonemic categorization. In addition, we show that while sensory uncertainty is a significant factor in determining cue weights, it is not the only one: participants' performance is consistent with an optimal model in which environmental, within-category variability also plays a role in determining cue weights. Furthermore, we show that in our task, the sensory variability affecting the visual modality during cue combination is not well estimated from single-cue performance, but can be estimated from multi-cue performance. The findings and computational principles described here represent a principled first step towards characterizing the mechanisms underlying human cue integration in categorical tasks. PMID:21637344
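
    The normative model this abstract takes as its baseline, and the categorical extension it argues for, can be written in a few lines. The sketch below is illustrative only: the numbers are invented, and the additive combination of sensory and within-category variance is one simple reading of the abstract's claim rather than the authors' exact model.

    ```python
    import numpy as np

    def integrate(estimates, sensory_vars, category_vars=None):
        """Reliability-weighted cue combination.

        For continuous tasks, w_i is proportional to 1/sensory_var_i. For categorical
        tasks, per the abstract, each cue's effective variance also includes the
        within-category (environmental) variance: w_i ~ 1/(sensory_var_i + category_var_i).
        """
        v = np.asarray(sensory_vars, dtype=float)
        if category_vars is not None:
            v = v + np.asarray(category_vars, dtype=float)
        w = (1.0 / v) / np.sum(1.0 / v)
        return float(np.dot(w, estimates)), w

    # Toy example: the visual cue has lower sensory noise than the auditory cue...
    est, w = integrate([0.2, 0.6], sensory_vars=[0.04, 0.01])
    print(est, w)  # weights favor the visual cue

    # ...but adding a large within-category variance to the visual cue shifts weight back.
    est, w = integrate([0.2, 0.6], sensory_vars=[0.04, 0.01], category_vars=[0.0, 0.09])
    print(est, w)
    ```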

  16. Auditory emotional cues enhance visual perception.

    PubMed

    Zeelenberg, René; Bocanegra, Bruno R

    2010-04-01

    Recent studies show that emotional stimuli impair performance on subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by emotional cues as compared to neutral cues. When the cue was presented visually, we replicated the emotion-induced impairment found in other studies. Our results suggest that emotional stimuli have a twofold effect on perception. They impair perception by reflexively attracting attention at the expense of competing stimuli. However, emotional stimuli also induce a nonspecific perceptual enhancement that carries over onto other stimuli when competition is reduced, for example, by presenting stimuli in different modalities. Copyright 2009 Elsevier B.V. All rights reserved.

  17. Dynamic Vibrotactile Signals for Forward Collision Avoidance Warning Systems

    PubMed Central

    Meng, Fanxing; Gray, Rob; Ho, Cristy; Ahtamad, Mujthaba

    2015-01-01

    Objective: Four experiments were conducted to assess the effectiveness of dynamic vibrotactile collision-warning signals in potentially enhancing safe driving. Background: Auditory neuroscience research has demonstrated that auditory signals that move toward a person are more salient than those that move away. If this looming effect were found to extend to the tactile modality, then it could be utilized in the context of in-car warning signal design. Method: The effectiveness of various vibrotactile warning signals was assessed using a simulated car-following task. The vibrotactile warning signals consisted of dynamic toward-/away-from-torso cues (Experiment 1), dynamic versus static vibrotactile cues (Experiment 2), looming-intensity and constant-intensity toward-torso cues (Experiment 3), and static cues presented on the hands or on the waist, with either low or high vibration intensity (Experiment 4). Results: Braking reaction times (BRTs) were significantly faster for toward-torso cues as compared to away-from-torso cues (Experiments 1 and 2) and static cues (Experiment 2). This difference could not be attributed to differential responses to signals delivered to different body parts (i.e., the waist vs. hands; Experiment 4). Embedding a looming-intensity signal into the toward-torso signal did not result in any additional BRT benefits (Experiment 3). Conclusion: Dynamic vibrotactile cues that feel as though they are approaching the torso can be used to communicate information concerning external events, resulting in significantly faster reaction times to potential collisions. Application: Dynamic vibrotactile warning signals that move toward the body offer great potential for the design of future in-car collision-warning systems. PMID:25850161

  18. Scale Changes Provide an Alternative Cue For the Discrimination of Heading, But Not Object Motion

    PubMed Central

    Calabro, Finnegan J.; Vaina, Lucia Maria

    2016-01-01

    Background: Understanding the dynamics of our surrounding environments is a task usually attributed to the detection of motion based on changes in luminance across space. Yet a number of other cues, both dynamic and static, have been shown to provide useful information about how we are moving and how objects around us move. One such cue, based on changes in spatial frequency, or scale, over time has been shown to be useful in conveying motion in depth even in the absence of a coherent, motion-defined flow field (optic flow). Material/Methods: 16 right-handed healthy observers (ages 18–28) participated in the behavioral experiments described in this study. Using analytical behavioral methods, we investigated the functional specificity of this cue by measuring the ability of observers to perform tasks of heading (direction of self-motion) and 3D trajectory discrimination on the basis of scale changes and optic flow. Results: Statistical analyses of performance on the test experiments in comparison to the control experiments suggest that while scale changes may be involved in the detection of heading, they are not correctly integrated with translational motion and thus do not support correct discrimination of 3D object trajectories. Conclusions: These results have important implications for the type of visually guided navigation that can be performed by an observer blind to optic flow. Scale change is an important alternative cue for self-motion. PMID:27231114

  19. Scale Changes Provide an Alternative Cue For the Discrimination of Heading, But Not Object Motion.

    PubMed

    Calabro, Finnegan J; Vaina, Lucia Maria

    2016-05-27

    BACKGROUND Understanding the dynamics of our surrounding environments is a task usually attributed to the detection of motion based on changes in luminance across space. Yet a number of other cues, both dynamic and static, have been shown to provide useful information about how we are moving and how objects around us move. One such cue, based on changes in spatial frequency, or scale, over time has been shown to be useful in conveying motion in depth even in the absence of a coherent, motion-defined flow field (optic flow). MATERIAL AND METHODS 16 right-handed healthy observers (ages 18-28) participated in the behavioral experiments described in this study. Using analytical behavioral methods, we investigated the functional specificity of this cue by measuring the ability of observers to perform tasks of heading (direction of self-motion) and 3D trajectory discrimination on the basis of scale changes and optic flow. RESULTS Statistical analyses of performance on the test experiments in comparison to the control experiments suggest that while scale changes may be involved in the detection of heading, they are not correctly integrated with translational motion and thus do not support correct discrimination of 3D object trajectories. CONCLUSIONS These results have important implications for the type of visually guided navigation that can be performed by an observer blind to optic flow. Scale change is an important alternative cue for self-motion.

  20. The Effects of Explicit Visual Cues in Reading Biological Diagrams

    ERIC Educational Resources Information Center

    Ge, Yun-Ping; Unsworth, Len; Wang, Kuo-Hua

    2017-01-01

    Drawing on cognitive theories, this study intends to investigate the effects of explicit visual cues which have been proposed as a critical factor in facilitating understanding of biological images. Three diagrams from Taiwanese textbooks with implicit visual cues, involving the concepts of biological classification systems, fish taxonomy, and…

  1. Visual Navigation during Colony Emigration by the Ant Temnothorax rugatulus

    PubMed Central

    Bowens, Sean R.; Glatt, Daniel P.; Pratt, Stephen C.

    2013-01-01

    Many ants rely on both visual cues and self-generated chemical signals for navigation, but their relative importance varies across species and context. We evaluated the roles of both modalities during colony emigration by Temnothorax rugatulus. Colonies were induced to move from an old nest in the center of an arena to a new nest at the arena edge. In the midst of the emigration the arena floor was rotated 60° around the old nest entrance, thus displacing any substrate-bound odor cues while leaving visual cues unchanged. This manipulation had no effect on orientation, suggesting little influence of substrate cues on navigation. When this rotation was accompanied by the blocking of most visual cues, the ants became highly disoriented, suggesting that they did not fall back on substrate cues even when deprived of visual information. Finally, when the substrate was left in place but the visual surround was rotated, the ants' subsequent headings were strongly rotated in the same direction, showing a clear role for visual navigation. Combined with earlier studies, these results suggest that chemical signals deposited by Temnothorax ants serve more for marking of familiar territory than for orientation. The ants instead navigate visually, showing the importance of this modality even for species with small eyes and coarse visual acuity. PMID:23671713

  2. Cannabis cue-induced brain activation correlates with drug craving in limbic and visual salience regions: Preliminary results

    PubMed Central

    Charboneau, Evonne J.; Dietrich, Mary S.; Park, Sohee; Cao, Aize; Watkins, Tristan J; Blackford, Jennifer U; Benningfield, Margaret M.; Martin, Peter R.; Buchowski, Maciej S.; Cowan, Ronald L.

    2013-01-01

    Craving is a major motivator underlying drug use and relapse but the neural correlates of cannabis craving are not well understood. This study sought to determine whether visual cannabis cues increase cannabis craving and whether cue-induced craving is associated with regional brain activation in cannabis-dependent individuals. Cannabis craving was assessed in 16 cannabis-dependent adult volunteers while they viewed cannabis cues during a functional MRI (fMRI) scan. The Marijuana Craving Questionnaire was administered immediately before and after each of three cannabis cue-exposure fMRI runs. FMRI blood-oxygenation-level-dependent (BOLD) signal intensity was determined in regions activated by cannabis cues to examine the relationship of regional brain activation to cannabis craving. Craving scores increased significantly following exposure to visual cannabis cues. Visual cues activated multiple brain regions, including inferior orbital frontal cortex, posterior cingulate gyrus, parahippocampal gyrus, hippocampus, amygdala, superior temporal pole, and occipital cortex. Craving scores at baseline and at the end of all three runs were significantly correlated with brain activation during the first fMRI run only, in the limbic system (including amygdala and hippocampus) and paralimbic system (superior temporal pole), and visual regions (occipital cortex). Cannabis cues increased craving in cannabis-dependent individuals and this increase was associated with activation in the limbic, paralimbic, and visual systems during the first fMRI run, but not subsequent fMRI runs. These results suggest that these regions may mediate visually cued aspects of drug craving. This study provides preliminary evidence for the neural basis of cue-induced cannabis craving and suggests possible neural targets for interventions targeted at treating cannabis dependence. PMID:24035535

  3. Toward semantic-based retrieval of visual information: a model-based approach

    NASA Astrophysics Data System (ADS)

    Park, Youngchoon; Golshani, Forouzan; Panchanathan, Sethuraman

    2002-07-01

    This paper centers on the problem of automated visual content classification. To enable classification-based image or visual object retrieval, we propose a new image representation scheme called the visual context descriptor (VCD), a multidimensional vector in which each element represents the frequency of a unique visual property of an image or a region. VCD utilizes predetermined quality dimensions (i.e., types of features and quantization levels) and semantic model templates mined a priori. Not only observed visual cues but also contextually relevant visual features are proportionally incorporated in the VCD. The contextual relevance of a visual cue to a semantic class is determined by correlation analysis of ground-truth samples. Such co-occurrence analysis of visual cues requires transformation of a real-valued visual feature vector (e.g., color histogram, Gabor texture, etc.) into a discrete event (e.g., terms in text). Good-features-to-track, the rule of thirds, iterative k-means clustering, and TSVQ are involved in the transformation of feature vectors into unified symbolic representations called visual terms. Similarity-based visual cue frequency estimation is also proposed and used to ensure the correctness of model learning and matching, since the sparseness of sample data otherwise makes frequency estimates of visual cues unstable. The proposed method naturally allows integration of heterogeneous visual, temporal, or spatial cues in a single classification or matching framework, and can easily be integrated into a semantic knowledge base such as a thesaurus or ontology. Robust semantic visual model template creation and object-based image retrieval are demonstrated based on the proposed content description scheme.
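
    The transformation of real-valued features into discrete "visual terms" can be sketched with a plain k-means codebook. This is a simplified stand-in for the pipeline described above (which also uses good-features-to-track, the rule of thirds, and TSVQ); the feature dimensionality and vocabulary size below are arbitrary choices for the sketch.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def build_vocabulary(region_features, n_terms=64):
        """Quantize region feature vectors (e.g., color/texture) into a visual-term codebook."""
        return KMeans(n_clusters=n_terms, n_init=10, random_state=0).fit(region_features)

    def describe_image(region_features, codebook):
        """VCD-like descriptor: normalized histogram of visual-term occurrences."""
        terms = codebook.predict(region_features)
        hist = np.bincount(terms, minlength=codebook.n_clusters).astype(float)
        return hist / max(hist.sum(), 1.0)

    # Toy usage with random 16-D region features standing in for real descriptors.
    rng = np.random.default_rng(0)
    vocab = build_vocabulary(rng.normal(size=(500, 16)), n_terms=32)
    descriptor = describe_image(rng.normal(size=(40, 16)), vocab)
    print(descriptor.shape)  # (32,)
    ```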

  4. Seeing is believing: information content and behavioural response to visual and chemical cues

    PubMed Central

    Gonzálvez, Francisco G.; Rodríguez-Gironés, Miguel A.

    2013-01-01

    Predator avoidance and foraging often pose conflicting demands. Animals can decrease mortality risk by searching for predators, but searching decreases foraging time and hence intake. We used this principle to investigate how prey should use information to detect, assess, and respond to predation risk from an optimal foraging perspective. A mathematical model showed that solitary bees should increase flower examination time in response to predator cues and that the rate of false alarms should be negatively correlated with the relative value of the flower explored. The predatory ant, Oecophylla smaragdina, and the harmless ant, Polyrhachis dives, differ in the profile of volatiles they emit and in their visual appearance. As predicted, the solitary bee Nomia strigata spent more time examining virgin flowers in the presence of predator cues than in their absence. Furthermore, the proportion of flowers rejected decreased from morning to noon, as the relative value of virgin flowers increased. In addition, bees responded differently to visual and chemical cues. While chemical cues induced bees to search around flowers, bees detecting visual cues hovered in front of them. These strategies may allow prey to identify the nature of visual cues and to locate the source of chemical cues. PMID:23698013

  5. Visual cues in low-level flight - Implications for pilotage, training, simulation, and enhanced/synthetic vision systems

    NASA Technical Reports Server (NTRS)

    Foyle, David C.; Kaiser, Mary K.; Johnson, Walter W.

    1992-01-01

    This paper reviews some of the sources of visual information that are available in the out-the-window scene and describes how these visual cues are important for routine pilotage and training, as well as the development of simulator visual systems and enhanced or synthetic vision systems for aircraft cockpits. It is shown how these visual cues may change or disappear under environmental or sensor conditions, and how the visual scene can be augmented by advanced displays to capitalize on the pilot's excellent ability to extract visual information from the visual scene.

  6. Role of somatosensory and vestibular cues in attenuating visually induced human postural sway

    NASA Technical Reports Server (NTRS)

    Peterka, R. J.; Benolken, M. S.

    1995-01-01

    The purpose of this study was to determine the contribution of visual, vestibular, and somatosensory cues to the maintenance of stance in humans. Postural sway was induced by full-field, sinusoidal visual surround rotations about an axis at the level of the ankle joints. The influences of vestibular and somatosensory cues were characterized by comparing postural sway in normal and bilateral vestibular absent subjects in conditions that provided either accurate or inaccurate somatosensory orientation information. In normal subjects, the amplitude of visually induced sway reached a saturation level as stimulus amplitude increased. The saturation amplitude decreased with increasing stimulus frequency. No saturation phenomena were observed in subjects with vestibular loss, implying that vestibular cues were responsible for the saturation phenomenon. For visually induced sways below the saturation level, the stimulus-response curves for both normal subjects and subjects experiencing vestibular loss were nearly identical, implying (1) that normal subjects were not using vestibular information to attenuate their visually induced sway, possibly because sway was below a vestibular-related threshold level, and (2) that subjects with vestibular loss did not utilize visual cues to a greater extent than normal subjects; that is, a fundamental change in visual system "gain" was not used to compensate for a vestibular deficit. An unexpected finding was that the amplitude of body sway induced by visual surround motion could be almost 3 times greater than the amplitude of the visual stimulus in normal subjects and subjects with vestibular loss. This occurred in conditions where somatosensory cues were inaccurate and at low stimulus amplitudes. A control system model of visually induced postural sway was developed to explain this finding. For both subject groups, the amplitude of visually induced sway was smaller by a factor of about 4 in tests where somatosensory cues provided accurate versus inaccurate orientation information. This implied (1) that the subjects experiencing vestibular loss did not utilize somatosensory cues to a greater extent than normal subjects; that is, changes in somatosensory system "gain" were not used to compensate for a vestibular deficit, and (2) that the threshold for the use of vestibular cues in normal subjects was apparently lower in test conditions where somatosensory cues were providing accurate orientation information.

  7. Multisensory Integration and Internal Models for Sensing Gravity Effects in Primates

    PubMed Central

    Lacquaniti, Francesco; La Scaleia, Barbara; Maffei, Vincenzo

    2014-01-01

    Gravity is crucial for spatial perception, postural equilibrium, and movement generation. The vestibular apparatus is the main sensory system involved in monitoring gravity. Hair cells in the vestibular maculae respond to gravitoinertial forces, but they cannot distinguish between linear accelerations and changes of head orientation relative to gravity. The brain deals with this sensory ambiguity (which can cause some lethal airplane accidents) by combining several cues with the otolith signals: angular velocity signals provided by the semicircular canals, proprioceptive signals from muscles and tendons, visceral signals related to gravity, and visual signals. In particular, vision provides both static and dynamic signals about body orientation relative to the vertical, but it poorly discriminates arbitrary accelerations of moving objects. However, we are able to visually detect the specific acceleration of gravity since early infancy. This ability depends on the fact that gravity effects are stored in brain regions which integrate visual, vestibular, and neck proprioceptive signals and combine this information with an internal model of gravity effects. PMID:25061610

  8. Multisensory integration and internal models for sensing gravity effects in primates.

    PubMed

    Lacquaniti, Francesco; Bosco, Gianfranco; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka

    2014-01-01

    Gravity is crucial for spatial perception, postural equilibrium, and movement generation. The vestibular apparatus is the main sensory system involved in monitoring gravity. Hair cells in the vestibular maculae respond to gravitoinertial forces, but they cannot distinguish between linear accelerations and changes of head orientation relative to gravity. The brain deals with this sensory ambiguity (which can cause some lethal airplane accidents) by combining several cues with the otolith signals: angular velocity signals provided by the semicircular canals, proprioceptive signals from muscles and tendons, visceral signals related to gravity, and visual signals. In particular, vision provides both static and dynamic signals about body orientation relative to the vertical, but it poorly discriminates arbitrary accelerations of moving objects. However, we are able to visually detect the specific acceleration of gravity since early infancy. This ability depends on the fact that gravity effects are stored in brain regions which integrate visual, vestibular, and neck proprioceptive signals and combine this information with an internal model of gravity effects.

  9. Differential processing of binocular and monocular gloss cues in human visual cortex.

    PubMed

    Sun, Hua-Chun; Di Luca, Massimiliano; Ban, Hiroshi; Muryy, Alexander; Fleming, Roland W; Welchman, Andrew E

    2016-06-01

    The visual impression of an object's surface reflectance ("gloss") relies on a range of visual cues, both monocular and binocular. Whereas previous imaging work has identified processing within ventral visual areas as important for monocular cues, little is known about cortical areas involved in processing binocular cues. Here, we used human functional MRI (fMRI) to test for brain areas selectively involved in the processing of binocular cues. We manipulated stereoscopic information to create four conditions that differed in their disparity structure and in the impression of surface gloss that they evoked. We performed multivoxel pattern analysis to find areas whose fMRI responses allow classes of stimuli to be distinguished based on their depth structure vs. material appearance. We show that higher dorsal areas play a role in processing binocular gloss information, in addition to known ventral areas involved in material processing, with ventral area lateral occipital responding to both object shape and surface material properties. Moreover, we tested for similarities between the representation of gloss from binocular cues and monocular cues. Specifically, we tested for transfer in the decoding performance of an algorithm trained on glossy vs. matte objects defined by either binocular or by monocular cues. We found transfer effects from monocular to binocular cues in dorsal visual area V3B/kinetic occipital (KO), suggesting a shared representation of the two cues in this area. These results indicate the involvement of mid- to high-level visual circuitry in the estimation of surface material properties, with V3B/KO potentially playing a role in integrating monocular and binocular cues. Copyright © 2016 the American Physiological Society.

  10. Out of sight, out of mind: racial retrieval cues increase the accessibility of social justice concepts.

    PubMed

    Salter, Phia S; Kelley, Nicholas J; Molina, Ludwin E; Thai, Luyen T

    2017-09-01

    Photographs provide critical retrieval cues for personal remembering, but few studies have considered this phenomenon at the collective level. In this research, we examined the psychological consequences of visual attention to the presence (or absence) of racially charged retrieval cues within American racial segregation photographs. We hypothesised that attention to racial retrieval cues embedded in historical photographs would increase social justice concept accessibility. In Study 1, we recorded gaze patterns with an eye-tracker among participants viewing images that contained racial retrieval cues or were digitally manipulated to remove them. In Study 2, we manipulated participants' gaze behaviour by either directing visual attention toward racial retrieval cues, away from racial retrieval cues, or directing attention within photographs where racial retrieval cues were missing. Across Studies 1 and 2, visual attention to racial retrieval cues in photographs documenting historical segregation predicted social justice concept accessibility.

  11. Capture of Xylosandrus crassiusculus and other Scolytinae (Coleoptera, Curculionidae) in response to visual and volatile cues

    USDA-ARS?s Scientific Manuscript database

    In June and July 2011, traps were deployed in Tuskegee National Forest, Macon County, Alabama, to test the influence of chemical and visual cues on the capture of bark and ambrosia beetles (Coleoptera: Curculionidae: Scolytinae). The first experiment investigated t...

  12. Impact of Visual, Vocal, and Lexical Cues on Judgments of Counselor Qualities

    ERIC Educational Resources Information Center

    Strahan, Carole; Zytowski, Donald G.

    1976-01-01

    Undergraduate students (N=130) rated Carl Rogers via visual, lexical, vocal, or vocal-lexical communication channels. Lexical cues were more important in creating favorable impressions among females. Subsequent exposure to combined visual-vocal-lexical cues resulted in warmer and less distant ratings, but not on a consistent basis. (Author)

  13. Visual and auditory cue integration for the generation of saccadic eye movements in monkeys and lever pressing in humans.

    PubMed

    Schiller, Peter H; Kwak, Michelle C; Slocum, Warren M

    2012-08-01

    This study examined how effectively visual and auditory cues can be integrated in the brain for the generation of motor responses. The latencies with which saccadic eye movements are produced in humans and monkeys form, under certain conditions, a bimodal distribution, the first mode of which has been termed express saccades. In humans, a much higher percentage of express saccades is generated when both visual and auditory cues are provided compared with the single presentation of these cues [H. C. Hughes et al. (1994) J. Exp. Psychol. Hum. Percept. Perform., 20, 131-153]. In this study, we addressed two questions: first, do monkeys also integrate visual and auditory cues for express saccade generation as do humans and second, does such integration take place in humans when, instead of eye movements, the task is to press levers with fingers? Our results show that (i) in monkeys, as in humans, the combined visual and auditory cues generate a much higher percentage of express saccades than do singly presented cues and (ii) the latencies with which levers are pressed by humans are shorter when both visual and auditory cues are provided compared with the presentation of single cues, but the distribution in all cases is unimodal; response latencies in the express range seen in the execution of saccadic eye movements are not obtained with lever pressing. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  14. Audio–visual interactions for motion perception in depth modulate activity in visual area V3A

    PubMed Central

    Ogawa, Akitoshi; Macaluso, Emiliano

    2013-01-01

    Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414

  15. Ocean acidification and responses to predators: can sensory redundancy reduce the apparent impacts of elevated CO2 on fish?

    PubMed

    Lönnstedt, Oona M; Munday, Philip L; McCormick, Mark I; Ferrari, Maud C O; Chivers, Douglas P

    2013-09-01

    Carbon dioxide (CO2) levels in the atmosphere and surface ocean are rising at an unprecedented rate due to sustained and accelerating anthropogenic CO2 emissions. Previous studies have documented that exposure to elevated CO2 causes impaired antipredator behavior by coral reef fish in response to chemical cues associated with predation. However, whether ocean acidification will impair visual recognition of common predators is currently unknown. This study examined whether sensory compensation in the presence of multiple sensory cues could reduce the impacts of ocean acidification on antipredator responses. When exposed to seawater enriched with levels of CO2 predicted for the end of this century (880 μatm CO2), prey fish completely lost their response to conspecific alarm cues. While the visual response to a predator was also affected by high CO2, it was not entirely lost. Fish exposed to elevated CO2 spent less time in shelter than current-day controls and did not exhibit antipredator signaling behavior (bobbing) when multiple predator cues were present. They did, however, reduce feeding rate and activity levels to the same level as controls. The results suggest that the response of fish to visual cues may partially compensate for the lack of response to chemical cues. Fish subjected to elevated CO2 levels and exposed to chemical and visual predation cues simultaneously responded with the same intensity as controls exposed to visual cues alone. However, these responses were still weaker than those of control fish simultaneously exposed to chemical and visual predation cues. Consequently, visual cues improve the antipredator behavior of CO2-exposed fish but do not fully compensate for the loss of response to chemical cues. The reduced ability to respond correctly to a predator will have ramifications for survival in encounters with predators in the field, which could have repercussions for population replenishment in acidified oceans.

  16. Contextual Cueing Effect in Spatial Layout Defined by Binocular Disparity

    PubMed Central

    Zhao, Guang; Zhuang, Qian; Ma, Jie; Tu, Shen; Liu, Qiang; Sun, Hong-jin

    2017-01-01

    Repeated visual context induces higher search efficiency, revealing a contextual cueing effect, which depends on the association between the target and its visual context. In this study, participants performed a visual search task in which search items were presented with depth information defined by binocular disparity. When the 3-dimensional (3D) configurations were repeated over blocks, the contextual cueing effect was obtained (Experiment 1). When depth information varied chaotically over repeated configurations, visual search was not facilitated and the contextual cueing effect was largely abolished (Experiment 2). However, when we displaced the search items by a tiny random amount in the 2-dimensional (2D) plane while keeping the depth information constant, the contextual cueing was preserved (Experiment 3). We conclude that the contextual cueing effect is robust in contexts provided by 3D space with stereoscopic information and, more importantly, that the visual system prioritizes stereoscopic information in the learning of spatial information when depth information is available. PMID:28912739

  17. Contextual Cueing Effect in Spatial Layout Defined by Binocular Disparity.

    PubMed

    Zhao, Guang; Zhuang, Qian; Ma, Jie; Tu, Shen; Liu, Qiang; Sun, Hong-Jin

    2017-01-01

    Repeated visual context induces higher search efficiency, revealing a contextual cueing effect, which depends on the association between the target and its visual context. In this study, participants performed a visual search task in which search items were presented with depth information defined by binocular disparity. When the 3-dimensional (3D) configurations were repeated over blocks, the contextual cueing effect was obtained (Experiment 1). When depth information varied chaotically over repeated configurations, visual search was not facilitated and the contextual cueing effect was largely abolished (Experiment 2). However, when we displaced the search items by a tiny random amount in the 2-dimensional (2D) plane while keeping the depth information constant, the contextual cueing was preserved (Experiment 3). We conclude that the contextual cueing effect is robust in contexts provided by 3D space with stereoscopic information and, more importantly, that the visual system prioritizes stereoscopic information in the learning of spatial information when depth information is available.

  18. A magnetoencephalography study of visual processing of pain anticipation.

    PubMed

    Machado, Andre G; Gopalakrishnan, Raghavan; Plow, Ela B; Burgess, Richard C; Mosher, John C

    2014-07-15

    Anticipating pain is important for avoiding injury; however, in chronic pain patients, anticipatory behavior can become maladaptive, leading to sensitization and limiting function. Knowledge of the networks involved in pain anticipation and conditioning over time could help devise novel, better-targeted therapies. Using magnetoencephalography, we evaluated the neural processing of pain anticipation in 10 healthy subjects. Anticipatory cortical activity elicited by consecutive visual cues signifying an imminent painful stimulus was compared with that elicited by cues signifying a nonpainful stimulus or no stimulus. We found that the neural processing of visually evoked pain anticipation involves the primary visual cortex along with cingulate and frontal regions. The visual cortex could quickly and independently encode and discriminate between visual cues associated with pain anticipation and no pain during preconscious phases following object presentation. When evaluating the effect of task repetition on participating cortical areas, we found that activity of prefrontal and cingulate regions was most prominent early on, when subjects were still naive to a cue's contextual meaning. Visual cortical activity remained significant throughout later phases. Although the visual cortex may decode cues anticipating pain or no pain quickly and precisely, prefrontal areas establish the context associated with each cue. These findings have important implications for the processes involved in pain anticipation and maladaptive pain conditioning. Copyright © 2014 the American Physiological Society.

  19. Unconscious cues bias first saccades in a free-saccade task.

    PubMed

    Huang, Yu-Feng; Tan, Edlyn Gui Fang; Soon, Chun Siong; Hsieh, Po-Jang

    2014-10-01

    Visual-spatial attention can be biased towards salient visual information without visual awareness. It is unclear, however, whether such bias can further influence free-choices such as saccades in a free viewing task. In our experiment, we presented visual cues below awareness threshold immediately before people made free saccades. Our results showed that masked cues could influence the direction and latency of the first free saccade, suggesting that salient visual information can unconsciously influence free actions. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Neurocognitive mechanisms of gaze-expression interactions in face processing and social attention

    PubMed Central

    Graham, Reiko; LaBar, Kevin S.

    2012-01-01

    The face conveys a rich source of non-verbal information used during social communication. While research has revealed how specific facial channels such as emotional expression are processed, little is known about the prioritization and integration of multiple cues in the face during dyadic exchanges. Classic models of face perception have emphasized the segregation of dynamic versus static facial features along independent information processing pathways. Here we review recent behavioral and neuroscientific evidence suggesting that within the dynamic stream, concurrent changes in eye gaze and emotional expression can yield early independent effects on face judgments and covert shifts of visuospatial attention. These effects are partially segregated within initial visual afferent processing volleys, but are subsequently integrated in limbic regions such as the amygdala or via reentrant visual processing volleys. This spatiotemporal pattern may help to resolve otherwise perplexing discrepancies across behavioral studies of emotional influences on gaze-directed attentional cueing. Theoretical explanations of gaze-expression interactions are discussed, with special consideration of speed-of-processing (discriminability) and contextual (ambiguity) accounts. Future research in this area promises to reveal the mental chronometry of face processing and interpersonal attention, with implications for understanding how social referencing develops in infancy and is impaired in autism and other disorders of social cognition. PMID:22285906

  1. Suggested Interactivity: Seeking Perceived Affordances for Information Visualization.

    PubMed

    Boy, Jeremy; Eveillard, Louis; Detienne, Françoise; Fekete, Jean-Daniel

    2016-01-01

    In this article, we investigate methods for suggesting the interactivity of online visualizations embedded with text. We first assess the need for such methods by conducting three initial experiments on Amazon's Mechanical Turk. We then present a design space for Suggested Interactivity (i.e., visual cues used as perceived affordances; SI), based on a survey of 382 HTML5 and visualization websites. Finally, we assess the effectiveness of three SI cues we designed for suggesting the interactivity of bar charts embedded with text. Our results show that only one cue (SI3) was successful in inciting participants to interact with the visualizations, and we hypothesize that this is because this particular cue provided feedforward.

  2. Retrospective cues based on object features improve visual working memory performance in older adults.

    PubMed

    Gilchrist, Amanda L; Duarte, Audrey; Verhaeghen, Paul

    2016-01-01

    Research with younger adults has shown that retrospective cues can be used to orient top-down attention toward relevant items in working memory. We examined whether older adults could take advantage of these cues to improve memory performance. Younger and older adults were presented with visual arrays of five colored shapes; during maintenance, participants were presented either with an informative cue based on an object feature (here, object shape or color) that would be probed, or with an uninformative, neutral cue. Although older adults were less accurate overall, both age groups benefited from the presentation of an informative, feature-based cue relative to a neutral cue. Surprisingly, we also observed differences in the effectiveness of shape versus color cues and their effects upon post-cue memory load. These results suggest that older adults can use top-down attention to remove irrelevant items from visual working memory, provided that task-relevant features function as cues.

  3. Chemical and visual communication during mate searching in rock shrimp.

    PubMed

    Díaz, Eliecer R; Thiel, Martin

    2004-06-01

    Mate searching in crustaceans depends on different communicational cues, of which chemical and visual cues are most important. Herein we examined the role of chemical and visual communication during mate searching and assessment in the rock shrimp Rhynchocinetes typus. Adult male rock shrimp experience major ontogenetic changes. The terminal molt stages (named "robustus") are dominant and capable of monopolizing females during the mating process. Previous studies had shown that most females preferably mate with robustus males, but how these dominant males and receptive females find each other is uncertain, and is the question we examined herein. In a Y-maze designed to test for the importance of waterborne chemical cues, we observed that females approached the robustus male significantly more often than the typus male. Robustus males, however, were unable to locate receptive females via chemical signals. Using an experimental set-up that allowed testing for the importance of visual cues, we demonstrated that receptive females do not use visual cues to select robustus males, but robustus males use visual cues to find receptive females. Visual cues used by the robustus males were the tumults created by agitated aggregations of subordinate typus males around the receptive females. These results indicate a strong link between sexual communication and the mating system of rock shrimp in which dominant males monopolize receptive females. We found that females and males use different (sex-specific) communicational cues during mate searching and assessment, and that the sexual communication of rock shrimp is similar to that of the American lobster, where females are first attracted to the dominant males by chemical cues emitted by these males. A brief comparison between these two species shows that female behaviors during sexual communication contribute strongly to the outcome of mate searching and assessment.

  4. Can Short Duration Visual Cues Influence Students' Reasoning and Eye Movements in Physics Problems?

    ERIC Educational Resources Information Center

    Madsen, Adrian; Rouinfar, Amy; Larson, Adam M.; Loschky, Lester C.; Rebello, N. Sanjay

    2013-01-01

    We investigate the effects of visual cueing on students' eye movements and reasoning on introductory physics problems with diagrams. Participants in our study were randomly assigned to either the cued or noncued conditions, which differed by whether the participants saw conceptual physics problems overlaid with dynamic visual cues. Students in the…

  5. Sensitivity to Visual Prosodic Cues in Signers and Nonsigners

    ERIC Educational Resources Information Center

    Brentari, Diane; Gonzalez, Carolina; Seidl, Amanda; Wilbur, Ronnie

    2011-01-01

    Three studies are presented in this paper that address how nonsigners perceive the visual prosodic cues in a sign language. In Study 1, adult American nonsigners and users of American Sign Language (ASL) were compared on their sensitivity to the visual cues in ASL Intonational Phrases. In Study 2, hearing, nonsigning American infants were tested…

  6. Con-Text: Text Detection for Fine-grained Object Classification.

    PubMed

    Karaoglu, Sezer; Tao, Ran; van Gemert, Jan C; Gevers, Theo

    2017-05-24

    This work focuses on fine-grained object classification using recognized scene text in natural images. While the state of the art relies on visual cues only, this paper is the first to propose combining textual and visual cues. Another novelty is the textual cue extraction: unlike state-of-the-art text detection methods, we focus more on the background than on the text regions. Once text regions are detected, they are further processed by two methods to perform text recognition, i.e., the ABBYY commercial OCR engine and a state-of-the-art character recognition algorithm. Then, to perform textual cue encoding, bigrams and trigrams are formed between the recognized characters under the proposed spatial pairwise constraints. Finally, the extracted visual and textual cues are combined for fine-grained classification. The proposed method is validated on four publicly available datasets: ICDAR03, ICDAR13, Con-Text, and Flickr-logo. We improve the state-of-the-art end-to-end character recognition by a large margin of 15% on ICDAR03. We show that textual cues are useful in addition to visual cues for fine-grained classification, and that they are also useful for logo retrieval. Adding textual cues outperforms visual-only and textual-only approaches in fine-grained classification (70.7% vs. 60.3%) and logo retrieval (57.4% vs. 54.8%).
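
    As a rough illustration of the bigram/trigram encoding idea, the sketch below hashes character n-grams from a recognized string into a fixed-size histogram and concatenates it with a visual feature vector. The hashed encoding, the histogram size, and fusion by simple concatenation are assumptions of this sketch, not details taken from the paper (which additionally applies spatial pairwise constraints between recognized characters).

    ```python
    import zlib
    import numpy as np

    ALPHABET = set("abcdefghijklmnopqrstuvwxyz0123456789")
    DIM = 2048  # hashed histogram size, chosen arbitrarily for the sketch

    def char_ngrams(text, n):
        """Character n-grams over the recognized characters we keep."""
        chars = [c for c in text.lower() if c in ALPHABET]
        return ["".join(chars[i:i + n]) for i in range(len(chars) - n + 1)]

    def ngram_histogram(text, n):
        """Hash character n-grams into a fixed-size, L1-normalized histogram."""
        hist = np.zeros(DIM)
        for g in char_ngrams(text, n):
            hist[zlib.crc32(g.encode()) % DIM] += 1.0
        total = hist.sum()
        return hist / total if total else hist

    def fuse_cues(visual_feat, recognized_text):
        """Late fusion by concatenation; the fused vector would feed a linear classifier."""
        textual = np.concatenate([ngram_histogram(recognized_text, 2),
                                  ngram_histogram(recognized_text, 3)])
        return np.concatenate([visual_feat, textual])

    fused = fuse_cues(np.random.rand(128), "pizzeria")
    print(fused.shape)  # (128 + 2 * 2048,) = (4224,)
    ```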

  7. Head Stability and Head-Trunk Coordination in Horseback Riders: The Contribution of Visual Information According to Expertise

    PubMed Central

    Olivier, Agnès; Faugloire, Elise; Lejeune, Laure; Biau, Sophie; Isableu, Brice

    2017-01-01

    Maintaining equilibrium while riding a horse is a challenging task that involves complex sensorimotor processes. We evaluated the relative contribution of visual information (static or dynamic) to horseback riders' postural stability (measured from the variability of segment position in space) and the coordination modes they adopted to regulate balance according to their level of expertise. Riders' perceptual typologies and their possible relation to postural stability were also assessed. Our main assumption was that the contribution of visual information to postural control would be reduced among expert riders in favor of vestibular and somesthetic reliance. Twelve Professional riders and 13 Club riders rode an equestrian simulator at a gallop under four visual conditions: (1) with the projection of a simulated scene reproducing what a rider sees in the real context of a ride in an outdoor arena, (2) under stroboscopic illumination, preventing access to dynamic visual cues, (3) in normal lighting but without the projected scene (i.e., without the visual consequences of displacement) and (4) with no visual cues. The variability of the position of the head, upper trunk and lower trunk was measured along the anteroposterior (AP), mediolateral (ML), and vertical (V) axes. We computed discrete relative phase to assess the coordination between pairs of segments in the anteroposterior axis. Visual field dependence-independence was evaluated using the Rod and Frame Test (RFT). The results showed that the Professional riders exhibited greater overall postural stability than the Club riders, revealed mainly in the AP axis. In particular, head variability was lower in the Professional riders than in the Club riders in visually altered conditions, suggesting a greater ability to use vestibular and somesthetic information according to task constraints with expertise. In accordance with this result, RFT perceptual scores revealed that the Professional riders were less dependent on the visual field than were the Club riders. Finally, the Professional riders exhibited specific coordination modes that, unlike the Club riders, departed from pure in-phase and anti-phase patterns and depended on visual conditions. The present findings provide evidence of major differences in the sensorimotor processes contributing to postural control with expertise in horseback riding. PMID:28194100

  8. Location cue validity affects inhibition of return of visual processing.

    PubMed

    Wright, R D; Richard, C M

    2000-01-01

    Inhibition-of-return is the process by which visual search for an object positioned among others is biased toward novel rather than previously inspected items. It is thought to occur automatically and to increase search efficiency. We examined this phenomenon by studying the facilitative and inhibitory effects of location cueing on target-detection response times in a search task. The results indicated that facilitation was a reflexive consequence of cueing whereas inhibition appeared to depend on cue informativeness. More specifically, the inhibition-of-return effect occurred only when the cue provided no information about the impending target's location. We suggest that the results are consistent with the notion of two levels of visual processing. The first involves rapid and reflexive operations that underlie the facilitative effects of location cueing on target detection. The second involves a rapid but goal-driven inhibition procedure that the perceiver can invoke if doing so will enhance visual search performance.

  9. Soldier-Robot Team Communication: An Investigation of Exogenous Orienting Visual Display Cues and Robot Reporting Preferences

    DTIC Science & Technology

    2018-02-12

    usability preference. Results under the second focus showed that the frequency with which participants expected status updates differed depending upon the...assistance requests for both navigational route and building selection depending on the type of exogenous visual cues displayed? 3) Is there a difference...in response time to visual reports for both navigational route and building selection depending on the type of exogenous visual cues displayed? 4

  10. Do you see what I hear? Vantage point preference and visual dominance in a time-space synaesthete.

    PubMed

    Jarick, Michelle; Stewart, Mark T; Smilek, Daniel; Dixon, Michael J

    2013-01-01

    Time-space synaesthetes "see" time units organized in a spatial form. While the structure might be invariant for most synaesthetes, the perspective by which some view their calendar is somewhat flexible. One well-studied synaesthete L adopts different viewpoints for months seen vs. heard. Interestingly, L claims to prefer her auditory perspective, even though the month names are represented visually upside down. To verify this, we used a spatial-cueing task that included audiovisual month cues. These cues were either congruent with L's preferred "auditory" viewpoint (auditory-only and auditory + month inverted) or incongruent (upright visual-only and auditory + month upright). Our prediction was that L would show enhanced cueing effects (larger response time difference between valid and invalid targets) following the audiovisual congruent cues since both elicit the "preferred" auditory perspective. Also, when faced with conflicting cues, we predicted L would choose the preferred auditory perspective over the visual perspective. As we expected, L did show enhanced cueing effects following the audiovisual congruent cues that corresponded with her preferred auditory perspective, but the visual perspective dominated when L was faced with both viewpoints simultaneously. The results are discussed in relation to the reification hypothesis of sequence space synaesthesia (Eagleman, 2009).

  11. Do you see what I hear? Vantage point preference and visual dominance in a time-space synaesthete

    PubMed Central

    Jarick, Michelle; Stewart, Mark T.; Smilek, Daniel; Dixon, Michael J.

    2013-01-01

    Time-space synaesthetes “see” time units organized in a spatial form. While the structure might be invariant for most synaesthetes, the perspective by which some view their calendar is somewhat flexible. One well-studied synaesthete L adopts different viewpoints for months seen vs. heard. Interestingly, L claims to prefer her auditory perspective, even though the month names are represented visually upside down. To verify this, we used a spatial-cueing task that included audiovisual month cues. These cues were either congruent with L's preferred “auditory” viewpoint (auditory-only and auditory + month inverted) or incongruent (upright visual-only and auditory + month upright). Our prediction was that L would show enhanced cueing effects (larger response time difference between valid and invalid targets) following the audiovisual congruent cues since both elicit the “preferred” auditory perspective. Also, when faced with conflicting cues, we predicted L would choose the preferred auditory perspective over the visual perspective. As we expected, L did show enhanced cueing effects following the audiovisual congruent cues that corresponded with her preferred auditory perspective, but the visual perspective dominated when L was faced with both viewpoints simultaneously. The results are discussed in relation to the reification hypothesis of sequence space synaesthesia (Eagleman, 2009). PMID:24137140

  12. Does Vaping in E-Cigarette Advertisements Affect Tobacco Smoking Urge, Intentions, and Perceptions in Daily, Intermittent, and Former Smokers?

    PubMed

    Maloney, Erin K; Cappella, Joseph N

    2016-01-01

    Visual depictions of vaping in electronic cigarette advertisements may serve as smoking cues to smokers and former smokers, increasing urge to smoke and smoking behavior, and decreasing self-efficacy, attitudes, and intentions to quit or abstain. After assessing baseline urge to smoke, 301 daily smokers, 272 intermittent smokers, and 311 former smokers were randomly assigned to view three e-cigarette commercials with vaping visuals (the cue condition) or without vaping visuals (the no-cue condition), or to answer unrelated media use questions (the no-ad condition). Participants then answered a posttest questionnaire assessing the outcome variables of interest. Relative to other conditions, in the cue condition, daily smokers reported greater urge to smoke a tobacco cigarette and a marginally significant increase in the incidence of actually smoking a tobacco cigarette during the experiment. Former smokers in the cue condition reported lower intentions to abstain from smoking than former smokers in other conditions. No significant differences emerged among intermittent smokers across conditions. These data suggest that visual depictions of vaping in e-cigarette commercials increase daily smokers' urge to smoke cigarettes and may lead to more actual smoking behavior. For former smokers, these cues in advertising may undermine abstinence efforts. Intermittent smokers did not appear to be reactive to these cues. The absence of significant differences between the no-cue and no-ad conditions, together with the effects observed in the cue condition, suggests that visual depictions of e-cigarettes and vaping function as smoking cues and that cue reactivity is the mechanism through which these effects were obtained.

  13. First-Pass Processing of Value Cues in the Ventral Visual Pathway.

    PubMed

    Sasikumar, Dennis; Emeric, Erik; Stuphorn, Veit; Connor, Charles E

    2018-02-19

    Real-world value often depends on subtle, continuously variable visual cues specific to particular object categories, like the tailoring of a suit, the condition of an automobile, or the construction of a house. Here, we used microelectrode recording in behaving monkeys to test two possible mechanisms for category-specific value-cue processing: (1) previous findings suggest that prefrontal cortex (PFC) identifies object categories, and based on category identity, PFC could use top-down attentional modulation to enhance visual processing of category-specific value cues, providing signals to PFC for calculating value, and (2) a faster mechanism would be first-pass visual processing of category-specific value cues, immediately providing the necessary visual information to PFC. This, however, would require learned mechanisms for processing the appropriate cues in a given object category. To test these hypotheses, we trained monkeys to discriminate value in four letter-like stimulus categories. Each category had a different, continuously variable shape cue that signified value (liquid reward amount) as well as other cues that were irrelevant. Monkeys chose between stimuli of different reward values. Consistent with the first-pass hypothesis, we found early signals for category-specific value cues in area TE (the final stage in the monkey ventral visual pathway) beginning 81 ms after stimulus onset, essentially at the start of TE responses. Task-related activity emerged in lateral PFC approximately 40 ms later and consisted mainly of category-invariant value tuning. Our results show that, for familiar, behaviorally relevant object categories, high-level ventral pathway cortex can implement rapid, first-pass processing of category-specific value cues.
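
    Onset latencies like the 81 ms figure are commonly read off a peristimulus time histogram as the first sustained excursion above the pre-stimulus baseline. The Python sketch below shows one such estimator; the threshold rule (baseline mean + 3 SD over several consecutive bins) and all names are illustrative assumptions, not the authors' exact procedure.

        import numpy as np

        def onset_latency(psth, time_ms, k=3.0, min_bins=3):
            """First post-stimulus time (ms) at which `min_bins` consecutive
            PSTH bins exceed baseline mean + k * SD (baseline: t < 0)."""
            base = psth[time_ms < 0]
            above = psth > base.mean() + k * base.std()
            for i in np.flatnonzero(time_ms >= 0):
                if i + min_bins <= above.size and above[i:i + min_bins].all():
                    return time_ms[i]
            return np.nan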

  14. Sight or Scent: Lemur Sensory Reliance in Detecting Food Quality Varies with Feeding Ecology

    PubMed Central

    Rushmore, Julie; Leonhardt, Sara D.; Drea, Christine M.

    2012-01-01

    Visual and olfactory cues provide important information to foragers, yet we know little about species differences in sensory reliance during food selection. In a series of experimental foraging studies, we examined the relative reliance on vision versus olfaction in three diurnal, primate species with diverse feeding ecologies, including folivorous Coquerel's sifakas (Propithecus coquereli), frugivorous ruffed lemurs (Varecia variegata spp), and generalist ring-tailed lemurs (Lemur catta). We used animals with known color-vision status and foods for which different maturation stages (and hence quality) produce distinct visual and olfactory cues (the latter determined chemically). We first showed that lemurs preferentially selected high-quality foods over low-quality foods when visual and olfactory cues were simultaneously available for both food types. Next, using a novel apparatus in a series of discrimination trials, we either manipulated food quality (while holding sensory cues constant) or manipulated sensory cues (while holding food quality constant). Among our study subjects that showed relatively strong preferences for high-quality foods, folivores required both sensory cues combined to reliably identify their preferred foods, whereas generalists could identify their preferred foods using either cue alone, and frugivores could identify their preferred foods using olfactory, but not visual, cues alone. Moreover, when only high-quality foods were available, folivores and generalists used visual rather than olfactory cues to select food, whereas frugivores used both cue types equally. Lastly, individuals in all three of the study species predominantly relied on sight when choosing between low-quality foods, but species differed in the strength of their sensory biases. Our results generally emphasize visual over olfactory reliance in foraging lemurs, but we suggest that the relative sensory reliance of animals may vary with their feeding ecology. PMID:22870229

  15. The time course of protecting a visual memory representation from perceptual interference

    PubMed Central

    van Moorselaar, Dirk; Gunseli, Eren; Theeuwes, Jan; Olivers, Christian N. L.

    2015-01-01

    Cueing a remembered item during the delay of a visual memory task leads to enhanced recall of the cued item compared to when an item is not cued. This cueing benefit has been proposed to reflect attention within visual memory being shifted from a distributed mode to a focused mode, thus protecting the cued item against perceptual interference. Here we investigated the dynamics of building up this mnemonic protection against visual interference by systematically varying the stimulus onset asynchrony (SOA) between cue onset and a subsequent visual mask in an orientation memory task. Experiment 1 showed that a cue counteracted the deteriorating effect of pattern masks. Experiment 2 demonstrated that building up this protection is a continuous process that is completed in approximately half a second after cue onset. The similarities between shifting attention in perceptual and remembered space are discussed. PMID:25628555
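
    The "approximately half a second" estimate is the kind of number obtained by fitting a saturating function of cue-mask SOA to the size of the cueing benefit and reading off its time constant. Below is a minimal Python sketch of such a fit; the exponential form, starting values, and data points are hypothetical illustrations, not the authors' analysis.

        import numpy as np
        from scipy.optimize import curve_fit

        def protection(soa_ms, amplitude, tau):
            # saturating build-up of the cueing benefit after cue onset
            return amplitude * (1.0 - np.exp(-soa_ms / tau))

        soa = np.array([50.0, 100, 200, 400, 800])      # cue-mask SOA (ms)
        benefit = np.array([1.1, 2.4, 4.0, 5.2, 5.5])   # hypothetical benefit
        (amp, tau), _ = curve_fit(protection, soa, benefit, p0=(5.0, 200.0))
        print(f"asymptote {amp:.1f}; ~95% reached by {3 * tau:.0f} ms")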

  16. Motion facilitates face perception across changes in viewpoint and expression in older adults.

    PubMed

    Maguinness, Corrina; Newell, Fiona N

    2014-12-01

    Faces are inherently dynamic stimuli. However, face perception in younger adults appears to be mediated by the ability to extract structural cues from static images, and a benefit of motion is inconsistent. In contrast, static face processing is poorer and more image-dependent in older adults. We therefore compared the role of facial motion in younger and older adults to assess whether motion can enhance perception when static cues are insufficient. In our studies, older and younger adults learned faces presented in motion or in a sequence of static images, containing rigid (viewpoint) or nonrigid (expression) changes. Immediately following learning, participants matched a static test image to the learned face which varied by viewpoint (Experiment 1) or expression (Experiment 2) and was either learned or novel. First, we found an age effect with better face matching performance in younger than in older adults. However, we observed that face matching performance improved in the older adult group, across changes in viewpoint and expression, when faces were learned in motion relative to static presentation. There was no benefit for facial (nonrigid) motion when the task involved matching inverted faces (Experiment 3), suggesting that the ability to use dynamic face information for the purpose of recognition reflects motion encoding which is specific to upright faces. Our results suggest that ageing may offer a unique insight into how dynamic cues support face processing, which may not be readily observed in younger adults' performance.

  17. Accessing long-term memory representations during visual change detection.

    PubMed

    Beck, Melissa R; van Lamsweerde, Amanda E

    2011-04-01

    In visual change detection tasks, providing a cue to the change location concurrent with the test image (post-cue) can improve performance, suggesting that, without a cue, not all encoded representations are automatically accessed. Our studies examined the possibility that post-cues can encourage the retrieval of representations stored in long-term memory (LTM). Participants detected changes in images composed of familiar objects. Performance was better when the cue directed attention to the post-change object. Supporting the role of LTM in the cue effect, the effect was similar regardless of whether the cue was presented during the inter-stimulus interval, concurrent with the onset of the test image, or after the onset of the test image. Furthermore, the post-cue effect and LTM performance were similarly influenced by encoding time. These findings demonstrate that monitoring the visual world for changes does not automatically engage LTM retrieval.

  18. Role of Self-Generated Odor Cues in Contextual Representation

    PubMed Central

    Aikath, Devdeep; Weible, Aldis P; Rowland, David C; Kentros, Clifford G

    2014-01-01

    As first demonstrated in the patient H.M., the hippocampus is critically involved in forming episodic memories, the recall of “what” happened “where” and “when.” In rodents, the clearest functional correlate of hippocampal primary neurons is the place field: a cell fires predominantly when the animal is in a specific part of the environment, typically defined relative to the available visuospatial cues. However, rodents have relatively poor visual acuity. Furthermore, they are highly adept at navigating in total darkness. This raises the question of how other sensory modalities might contribute to a hippocampal representation of an environment. Rodents have a highly developed olfactory system, suggesting that cues such as odor trails may be important. To test this, we familiarized mice to a visually cued environment over a number of days while maintaining odor cues. During familiarization, self-generated odor cues unique to each animal were collected by re-using absorbent paperboard flooring from one session to the next. Visual and odor cues were then put in conflict by counter-rotating the recording arena and the flooring. Perhaps surprisingly, place fields seemed to follow the visual cue rotation exclusively, raising the question of whether olfactory cues have any influence at all on a hippocampal spatial representation. However, subsequent removal of the familiar, self-generated odor cues severely disrupted both long-term stability and rotation to visual cues in a novel environment. Our data suggest that odor cues, in the absence of additional rule learning, do not provide a discriminative spatial signal that anchors place fields. Such cues do, however, become integral to the context over time and exert a powerful influence on the stability of its hippocampal representation. PMID:24753119
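
    Whether place fields "follow" a cue rotation is typically quantified by rotating one session's firing-rate map against another and taking the angle of maximal spatial correlation. The Python sketch below shows that analysis in minimal form; the rate maps are assumed to be NaN-free arrays here, and real pipelines add occupancy filtering and smoothing.

        import numpy as np
        from scipy.ndimage import rotate

        def best_rotation(map_a, map_b, step=15):
            """Rotation of map_b (deg) maximizing its Pearson correlation
            with map_a; both are 2-D firing-rate maps of one cell."""
            best_ang, best_r = 0, -np.inf
            for ang in range(0, 360, step):
                rot = rotate(map_b, ang, reshape=False, order=1)
                r = np.corrcoef(map_a.ravel(), rot.ravel())[0, 1]
                if r > best_r:
                    best_ang, best_r = ang, r
            return best_ang, best_r  # ~90 deg if fields follow a 90 deg cue rotation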

  19. Low-level visual attention and its relation to joint attention in autism spectrum disorder.

    PubMed

    Jaworski, Jessica L Bean; Eigsti, Inge-Marie

    2017-04-01

    Visual attention is integral to social interaction and is a critical building block for development in other domains (e.g., language). Furthermore, atypical attention (especially joint attention) is one of the earliest markers of autism spectrum disorder (ASD). The current study assesses low-level visual attention and its relation to social attentional processing in youth with ASD and typically developing (TD) youth, aged 7 to 18 years. The findings indicate difficulty overriding incorrect attentional cues in ASD, particularly with non-social (arrow) cues relative to social (face) cues. The findings also show reduced competition in ASD from cues that remain on-screen. Furthermore, social attention, autism severity, and age were all predictors of competing cue processing. The results suggest that individuals with ASD may be biased towards speeded rather than accurate responding, and further, that reduced engagement with visual information may impede responses to visual attentional cues. Once attention is engaged, individuals with ASD appear to interpret directional cues as meaningful. These findings from a controlled, experimental paradigm were mirrored in results from an ecologically valid measure of social attention. Attentional difficulties may be exacerbated during the complex and dynamic experience of actual social interaction. Implications for intervention are discussed.

  20. Phasic alertness cues modulate visual processing speed in healthy aging.

    PubMed

    Haupt, Marleen; Sorg, Christian; Napiórkowski, Natan; Finke, Kathrin

    2018-05-31

    Warning signals temporarily increase the rate of visual information uptake in younger participants and thus optimize perception in critical situations. It is unclear whether such important preparatory processes are preserved in healthy aging. We parametrically assessed the effects of auditory alertness cues on visual processing speed and their time course using a whole report paradigm based on the computational Theory of Visual Attention. We replicated prior findings of significant alerting benefits in younger adults. In conditions with short cue-target onset asynchronies, this effect was baseline-dependent. As younger participants with high baseline speed did not show a profit, an inverted U-shaped function of phasic alerting and visual processing speed was implied. Older adults also showed a significant cue-induced benefit. Bayesian analyses indicated that the cueing benefit on visual processing speed was comparably strong across age groups. Our results indicate that in aging individuals, comparable to younger ones, perception is active and increased expectancy of the appearance of a relevant stimulus can increase the rate of visual information uptake.
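
    In TVA-based whole report, visual processing speed is estimated by fitting the growth of report accuracy across exposure durations. The Python sketch below fits a deliberately simplified single-rate version (full TVA fitting models a parallel race between all display items); the data points and parameter names are hypothetical.

        import numpy as np
        from scipy.optimize import curve_fit

        def report_prob(t, C, t0):
            # no encoding before threshold t0 (s), then exponential growth
            # at processing rate C (items/s)
            return np.where(t > t0, 1.0 - np.exp(-C * (t - t0)), 0.0)

        t = np.array([0.01, 0.02, 0.05, 0.08, 0.14, 0.20])   # exposure (s)
        p = np.array([0.02, 0.10, 0.35, 0.55, 0.75, 0.85])   # prop. reported
        (C, t0), _ = curve_fit(report_prob, t, p, p0=(20.0, 0.01))
        print(f"C = {C:.1f} items/s, t0 = {1000 * t0:.0f} ms")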

  1. Invariant recognition drives neural representations of action sequences

    PubMed Central

    Poggio, Tomaso

    2017-01-01

    Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles underlying what representations of action sequences are constructed by human visual cortex. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, Convolutional Neural Networks (CNNs), that achieve human level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain. These results broaden the scope of the invariant recognition framework for understanding visual intelligence from perception of inanimate objects and faces in static images to the study of human perception of action sequences. PMID:29253864
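
    The spatiotemporal CNNs referenced here extend 2-D convolution into time so that filters span space and frames jointly before a classification layer. As a rough illustration of the architecture class, and not a reconstruction of the authors' models, here is a minimal PyTorch sketch; layer sizes, clip shape, and class count are arbitrary.

        import torch
        import torch.nn as nn

        class TinySpatiotemporalCNN(nn.Module):
            """Toy 3-D CNN; input clips shaped (batch, 3, frames, H, W)."""
            def __init__(self, n_actions):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(3, 16, kernel_size=3, padding=1),  # space-time filters
                    nn.ReLU(),
                    nn.MaxPool3d(2),
                    nn.Conv3d(16, 32, kernel_size=3, padding=1),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool3d(1),                     # global pooling
                )
                self.classifier = nn.Linear(32, n_actions)

            def forward(self, clips):
                return self.classifier(self.features(clips).flatten(1))

        model = TinySpatiotemporalCNN(n_actions=5)
        logits = model(torch.randn(2, 3, 16, 32, 32))  # two 16-frame RGB clips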

  2. A miniaturized bioreactor system for the evaluation of cell interaction with designed substrates in perfusion culture.

    PubMed

    Sun, T; Donoghue, P S; Higginson, J R; Gadegaard, N; Barnett, S C; Riehle, M O

    2012-12-01

    In tissue engineering, chemical and topographical cues are normally developed using static cell cultures but then applied directly to tissue cultures in three dimensions (3D) and under perfusion. As human cells are very sensitive to changes in the culture environment, it is essential to evaluate the performance of any such cues in a perfused environment before they are applied to tissue engineering. Thus, the aim of this research was to bridge the gap between static and perfusion cultures by addressing the effect of perfusion on cell cultures within 3D scaffolds. For this we developed a scaled-down bioreactor system, which allows evaluation of the effectiveness of various chemical and topographical cues incorporated into our previously developed tubular poly(ε-caprolactone) scaffold under perfused conditions. Investigation of two exemplary cell types (fibroblasts and cortical astrocytes) using the miniaturized bioreactor indicated that: (a) quick and firm cell adhesion in the 3D scaffold was critical for cell survival in perfusion culture compared with static culture; thus, cell-seeding procedures for static cultures might not be applicable, and it was therefore necessary to re-evaluate cell attachment on different surfaces under perfused conditions before a 3D scaffold was applied for tissue cultures; (b) continuous medium perfusion adversely influenced cell spread and survival, which could be balanced by intermittent perfusion; (c) micro-grooves still maintained their influences on cell alignment under perfused conditions, while medium perfusion demonstrated additional influence on fibroblast alignment but not on astrocyte alignment on grooved substrates. This research demonstrated that the mini-bioreactor system is crucial for the development of functional scaffolds with suitable chemical and topographical cues by bridging the gap between static culture and perfusion culture.

  3. The Effects of Spatial Endogenous Pre-cueing across Eccentricities

    PubMed Central

    Feng, Jing; Spence, Ian

    2017-01-01

    Frequently, we use expectations about likely locations of a target to guide the allocation of our attention. Despite the importance of this attentional process in everyday tasks, pre-cueing effects on attention, particularly endogenous pre-cueing effects, have been relatively little explored outside an eccentricity of 20°. Given that the visual field has functional subdivisions and that attentional processes can differ significantly among the foveal, perifoveal, and more peripheral areas, how endogenous pre-cues that carry spatial information of targets influence our allocation of attention across a large visual field (especially in the more peripheral areas) remains unclear. We present two experiments examining how the expectation of the location of the target shapes the distribution of attention across eccentricities in the visual field. We measured participants’ ability to pick out a target among distractors in the visual field after the presentation of a highly valid cue indicating the size of the area in which the target was likely to occur, or the likely direction of the target (left or right side of the display). Our first experiment showed that participants had a higher target detection rate with faster responses, particularly at eccentricities of 20° and 30°. There was also a marginal advantage of pre-cueing effects when trials of the same size cue were blocked compared to when trials were mixed. Experiment 2 demonstrated a higher target detection rate when the target occurred in the cued direction. This pre-cueing effect was greater at larger eccentricities and with a longer cue-target interval. Our findings on the endogenous pre-cueing effects across a large visual area were summarized using a simple model to assist in conceptualizing the modifications of the distribution of attention over the visual field. We discuss our findings in light of cognitive penetration of perception, and highlight the importance of examining attentional processes across a large area of the visual field. PMID:28638353

  4. The Effects of Spatial Endogenous Pre-cueing across Eccentricities.

    PubMed

    Feng, Jing; Spence, Ian

    2017-01-01

    Frequently, we use expectations about likely locations of a target to guide the allocation of our attention. Despite the importance of this attentional process in everyday tasks, pre-cueing effects on attention, particularly endogenous pre-cueing effects, have been relatively little explored outside an eccentricity of 20°. Given that the visual field has functional subdivisions and that attentional processes can differ significantly among the foveal, perifoveal, and more peripheral areas, how endogenous pre-cues that carry spatial information of targets influence our allocation of attention across a large visual field (especially in the more peripheral areas) remains unclear. We present two experiments examining how the expectation of the location of the target shapes the distribution of attention across eccentricities in the visual field. We measured participants' ability to pick out a target among distractors in the visual field after the presentation of a highly valid cue indicating the size of the area in which the target was likely to occur, or the likely direction of the target (left or right side of the display). Our first experiment showed that participants had a higher target detection rate with faster responses, particularly at eccentricities of 20° and 30°. There was also a marginal advantage of pre-cueing effects when trials of the same size cue were blocked compared to when trials were mixed. Experiment 2 demonstrated a higher target detection rate when the target occurred in the cued direction. This pre-cueing effect was greater at larger eccentricities and with a longer cue-target interval. Our findings on the endogenous pre-cueing effects across a large visual area were summarized using a simple model to assist in conceptualizing the modifications of the distribution of attention over the visual field. We discuss our findings in light of cognitive penetration of perception, and highlight the importance of examining attentional processes across a large area of the visual field.

  5. Haptic Cues Used for Outdoor Wayfinding by Individuals with Visual Impairments

    ERIC Educational Resources Information Center

    Koutsoklenis, Athanasios; Papadopoulos, Konstantinos

    2014-01-01

    Introduction: The study presented here examines which haptic cues individuals with visual impairments use more frequently and determines which of these cues are deemed by these individuals to be the most important for way-finding in urban environments. It also investigates the ways in which these haptic cues are used by individuals with visual…

  6. Visual/motion cue mismatch in a coordinated roll maneuver

    NASA Technical Reports Server (NTRS)

    Shirachi, D. K.; Shirley, R. S.

    1981-01-01

    The effects of bandwidth differences between visual and motion cueing systems on pilot performance for a coordinated roll task were investigated. Acceptable visual and motion cue configurations, together with the effects of reduced motion cue scaling on pilot performance, were studied to determine the scale reduction threshold at which pilot performance differed significantly from full-scale performance. It is concluded that: (1) the presence or absence of high frequency error information in the visual and/or motion display systems significantly affects pilot performance; and (2) attenuating motion scaling while holding other display dynamic characteristics constant affects pilot performance.

  7. Modulation of Neuronal Responses by Exogenous Attention in Macaque Primary Visual Cortex.

    PubMed

    Wang, Feng; Chen, Minggui; Yan, Yin; Zhaoping, Li; Li, Wu

    2015-09-30

    Visual perception is influenced by attention deployed voluntarily or triggered involuntarily by salient stimuli. Modulation of visual cortical processing by voluntary or endogenous attention has been extensively studied, but much less is known about how involuntary or exogenous attention affects responses of visual cortical neurons. Using implanted microelectrode arrays, we examined the effects of exogenous attention on neuronal responses in the primary visual cortex (V1) of awake monkeys. A bright annular cue was flashed either around the receptive fields of recorded neurons or in the opposite visual field to capture attention. A subsequent grating stimulus probed the cue-induced effects. In a fixation task, when the cue-to-probe stimulus onset asynchrony (SOA) was <240 ms, the cue induced a transient increase of neuronal responses to the probe at the cued location during 40-100 ms after the onset of neuronal responses to the probe. This facilitation diminished and disappeared after repeated presentations of the same cue but recurred for a new cue of a different color. In another task to detect the probe, relative shortening of monkey's reaction times for the validly cued probe depended on the SOA in a way similar to the cue-induced V1 facilitation, and the behavioral and physiological cueing effects remained after repeated practice. Flashing two cues simultaneously in the two opposite visual fields weakened or diminished both the physiological and behavioral cueing effects. Our findings indicate that exogenous attention significantly modulates V1 responses and that the modulation strength depends on both novelty and task relevance of the stimulus. Significance statement: Visual attention can be involuntarily captured by a sudden appearance of a conspicuous object, allowing rapid reactions to unexpected events of significance. The current study discovered a correlate of this effect in monkey primary visual cortex. An abrupt, salient flash enhanced neuronal responses, and shortened the animal's reaction time, to a subsequent visual probe stimulus at the same location. However, the enhancement of the neural responses diminished after repeated exposures to this flash if the animal was not required to react to the probe. Moreover, a second, simultaneous flash at another location weakened the neuronal and behavioral effects of the first one. These findings revealed, beyond the observations reported so far, the effects of exogenous attention in the brain.

  8. Audio-Visual and Meaningful Semantic Context Enhancements in Older and Younger Adults.

    PubMed

    Smayda, Kirsten E; Van Engen, Kristin J; Maddox, W Todd; Chandrasekaran, Bharath

    2016-01-01

    Speech perception is critical to everyday life. Oftentimes noise can degrade a speech signal; however, because of the cues available to the listener, such as visual and semantic cues, noise rarely prevents conversations from continuing. The interaction of visual and semantic cues in aiding speech perception has been studied in young adults, but the extent to which these two cues interact for older adults has not been studied. To investigate the effect of visual and semantic cues on speech perception in older and younger adults, we recruited forty-five young adults (ages 18-35) and thirty-three older adults (ages 60-90) to participate in a speech perception task. Participants were presented with semantically meaningful and anomalous sentences in audio-only and audio-visual conditions. We hypothesized that young adults would outperform older adults across SNRs, modalities, and semantic contexts. In addition, we hypothesized that both young and older adults would receive a greater benefit from a semantically meaningful context in the audio-visual relative to audio-only modality. We predicted that young adults would receive greater visual benefit in semantically meaningful contexts relative to anomalous contexts. However, we predicted that older adults could receive a greater visual benefit in either semantically meaningful or anomalous contexts. Results suggested that in the most supportive context, that is, semantically meaningful sentences presented in the audiovisual modality, older adults performed similarly to young adults. In addition, both groups received the same amount of visual and meaningful benefit. Lastly, across groups, a semantically meaningful context provided more benefit in the audio-visual modality relative to the audio-only modality, and the presence of visual cues provided more benefit in semantically meaningful contexts relative to anomalous contexts. These results suggest that older adults can perceive speech as well as younger adults when both semantic and visual cues are available to the listener.

  9. Audio-Visual and Meaningful Semantic Context Enhancements in Older and Younger Adults

    PubMed Central

    Smayda, Kirsten E.; Van Engen, Kristin J.; Maddox, W. Todd; Chandrasekaran, Bharath

    2016-01-01

    Speech perception is critical to everyday life. Oftentimes noise can degrade a speech signal; however, because of the cues available to the listener, such as visual and semantic cues, noise rarely prevents conversations from continuing. The interaction of visual and semantic cues in aiding speech perception has been studied in young adults, but the extent to which these two cues interact for older adults has not been studied. To investigate the effect of visual and semantic cues on speech perception in older and younger adults, we recruited forty-five young adults (ages 18–35) and thirty-three older adults (ages 60–90) to participate in a speech perception task. Participants were presented with semantically meaningful and anomalous sentences in audio-only and audio-visual conditions. We hypothesized that young adults would outperform older adults across SNRs, modalities, and semantic contexts. In addition, we hypothesized that both young and older adults would receive a greater benefit from a semantically meaningful context in the audio-visual relative to audio-only modality. We predicted that young adults would receive greater visual benefit in semantically meaningful contexts relative to anomalous contexts. However, we predicted that older adults could receive a greater visual benefit in either semantically meaningful or anomalous contexts. Results suggested that in the most supportive context, that is, semantically meaningful sentences presented in the audiovisual modality, older adults performed similarly to young adults. In addition, both groups received the same amount of visual and meaningful benefit. Lastly, across groups, a semantically meaningful context provided more benefit in the audio-visual modality relative to the audio-only modality, and the presence of visual cues provided more benefit in semantically meaningful contexts relative to anomalous contexts. These results suggest that older adults can perceive speech as well as younger adults when both semantic and visual cues are available to the listener. PMID:27031343

  10. Visual selective attention in amnestic mild cognitive impairment.

    PubMed

    McLaughlin, Paula M; Anderson, Nicole D; Rich, Jill B; Chertkow, Howard; Murtha, Susan J E

    2014-11-01

    Subtle deficits in visual selective attention have been found in amnestic mild cognitive impairment (aMCI). However, few studies have explored performance on visual search paradigms or the Simon task, which are known to be sensitive to disease severity in Alzheimer's patients. Furthermore, there is limited research investigating how deficiencies can be ameliorated with exogenous support (auditory cues). Sixteen individuals with aMCI and 14 control participants completed 3 experimental tasks that varied in demand and cue availability: visual search-alerting, visual search-orienting, and Simon task. Visual selective attention was influenced by aMCI, auditory cues, and task characteristics. Visual search abilities were relatively consistent across groups. The aMCI participants were impaired on the Simon task when working memory was required, but conflict resolution was similar to controls. Spatially informative orienting cues improved response times, whereas spatially neutral alerting cues did not influence performance. Finally, spatially informative auditory cues benefited the aMCI group more than controls in the visual search task, specifically at the largest array size where orienting demands were greatest. These findings suggest that individuals with aMCI have working memory deficits and subtle deficiencies in orienting attention and rely on exogenous information to guide attention.
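
    Search efficiency in such paradigms is commonly summarized by the slope of the response-time by set-size function; effective cue guidance should flatten that slope, especially at large arrays. A minimal Python sketch follows; the set sizes and mean RTs are invented for illustration.

        import numpy as np

        def search_slope(set_sizes, mean_rts):
            """Slope (ms/item) and intercept (ms) of the RT x set-size line."""
            slope, intercept = np.polyfit(set_sizes, mean_rts, 1)
            return slope, intercept

        print(search_slope([4, 8, 16], [620, 700, 860]))  # uncued: ~20 ms/item
        print(search_slope([4, 8, 16], [600, 615, 640]))  # cued: ~3 ms/item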

  11. Preschoolers Benefit From Visually Salient Speech Cues

    PubMed Central

    Holt, Rachael Frush

    2015-01-01

    Purpose This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that vary in perceptual difficulty and task demands. The authors also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. Method Twelve adults and 27 typically developing 3- and 4-year-old children completed 3 audiovisual (AV) speech integration tasks: matching, discrimination, and recognition. The authors compared AV benefit for visually salient and less visually salient speech discrimination contrasts and assessed the visual saliency of consonant confusions in auditory-only and AV word recognition. Results Four-year-olds and adults demonstrated visual influence on all measures. Three-year-olds demonstrated visual influence on speech discrimination and recognition measures. All groups demonstrated greater AV benefit for the visually salient discrimination contrasts. AV recognition benefit in 4-year-olds and adults depended on the visual saliency of speech sounds. Conclusions Preschoolers can demonstrate AV speech integration. Their AV benefit results from efficient use of visually salient speech cues. Four-year-olds, but not 3-year-olds, used visual phonological knowledge to take advantage of visually salient speech cues, suggesting possible developmental differences in the mechanisms of AV benefit. PMID:25322336

  12. Opposite Effects of Visual Cueing During Writing-Like Movements of Different Amplitudes in Parkinson's Disease.

    PubMed

    Nackaerts, Evelien; Nieuwboer, Alice; Broeder, Sanne; Smits-Engelsman, Bouwien C M; Swinnen, Stephan P; Vandenberghe, Wim; Heremans, Elke

    2016-06-01

    Handwriting is often impaired in Parkinson's disease (PD). Several studies have shown that writing in PD benefits from the use of cues. However, this was typically studied with writing and drawing sizes that are usually not used in daily life. This study examines the effect of visual cueing on a prewriting task at small amplitudes (≤1.0 cm) in PD patients and healthy controls to better understand the mechanism of action of cueing for writing. A total of 15 PD patients and 15 healthy, age-matched controls performed a prewriting task at 0.6 cm and 1.0 cm in the presence and absence of visual cues (target lines). Writing amplitude, variability of amplitude, and speed were chosen as dependent variables, measured using a newly developed touch-sensitive tablet. Cueing led to immediate improvements in writing size, variability of writing size, and speed in both groups in the 1.0 cm condition. However, when writing at 0.6 cm with cues, a decrease in writing size was apparent in both groups (P < .001) and the difference in variability of amplitude between cued and uncued writing disappeared. In addition, the writing speed of controls decreased when the cue was present. Visual target lines of 1.0 cm improved the writing of sequential loops in contrast to lines spaced at 0.6 cm. These results illustrate that, unlike for gait, visual cueing for fine-motor tasks requires a differentiated approach, taking into account the possible increases of accuracy constraints imposed by cueing.

  13. Usability of Three-dimensional Augmented Visual Cues Delivered by Smart Glasses on (Freezing of) Gait in Parkinson's Disease.

    PubMed

    Janssen, Sabine; Bolte, Benjamin; Nonnekes, Jorik; Bittner, Marian; Bloem, Bastiaan R; Heida, Tjitske; Zhao, Yan; van Wezel, Richard J A

    2017-01-01

    External cueing is a potentially effective strategy to reduce freezing of gait (FOG) in persons with Parkinson's disease (PD). Case reports suggest that three-dimensional (3D) cues might be more effective in reducing FOG than two-dimensional cues. We investigate the usability of 3D augmented reality visual cues delivered by smart glasses in comparison to conventional 3D transverse bars on the floor and auditory cueing via a metronome in reducing FOG and improving gait parameters. In laboratory experiments, 25 persons with PD and FOG performed walking tasks while wearing custom-made smart glasses under five conditions, at the end-of-dose. For two conditions, augmented visual cues (bars/staircase) were displayed via the smart glasses. The control conditions involved conventional 3D transverse bars on the floor, auditory cueing via a metronome, and no cueing. The number of FOG episodes and percentage of time spent on FOG were rated from video recordings. The stride length and its variability, cycle time and its variability, cadence, and speed were calculated from motion data collected with a motion capture suit equipped with 17 inertial measurement units. A total of 300 FOG episodes occurred in 19 out of 25 participants. There were no statistically significant differences in number of FOG episodes and percentage of time spent on FOG across the five conditions. The conventional bars increased stride length, cycle time, and stride length variability, while decreasing cadence and speed. No effects for the other conditions were found. Participants preferred the metronome most, and the augmented staircase least. They suggested to improve the comfort, esthetics, usability, field of view, and stability of the smart glasses on the head and to reduce their weight and size. In their current form, augmented visual cues delivered by smart glasses are not beneficial for persons with PD and FOG. This could be attributable to distraction, blockage of visual feedback, insufficient familiarization with the smart glasses, or display of the visual cues in the central rather than peripheral visual field. Future smart glasses are required to be more lightweight, comfortable, and user friendly to avoid distraction and blockage of sensory feedback, thus increasing usability.

  14. Analyzing the Role of Visual Cues in Developing Prediction-Making Skills of Third- and Ninth-Grade English Language Learners

    ERIC Educational Resources Information Center

    Campbell, Emily; Cuba, Melissa

    2015-01-01

    The goal of this action research is to increase student awareness of context clues, with an emphasis on student use of visual cues in making predictions. Visual cues in the classroom were used to differentiate according to the needs of student demographics (Herrera, Perez, & Escamilla, 2010). The purpose of this intervention was to improve…

  15. The Effects of Various Fidelity Factors on Simulated Helicopter Hover

    DTIC Science & Technology

    1981-01-01

    (table-of-contents fragment: VISUAL DISPLAY; AUDITORY CUES; SHIP MOTION MODEL) ... and DiCarlo, 1974), the evaluation of visual, auditory, and motion cues for helicopter simulation (Parrish, Houck, and Martin, 1977), and the ... supply the cue. As the tilt should be supplied subliminally, a forward/aft translation must be used to cue the acceleration's onset. If only rotation ...

  16. Learning from Instructional Animations: How Does Prior Knowledge Mediate the Effect of Visual Cues?

    ERIC Educational Resources Information Center

    Arslan-Ari, I.

    2018-01-01

    The purpose of this study was to investigate the effects of cueing and prior knowledge on learning and mental effort of students studying an animation with narration. This study employed a 2 (no cueing vs. visual cueing) × 2 (low vs. high prior knowledge) between-subjects factorial design. The results revealed a significant interaction effect…

  17. Anemonefishes rely on visual and chemical cues to correctly identify conspecifics

    NASA Astrophysics Data System (ADS)

    Johnston, Nicole K.; Dixson, Danielle L.

    2017-09-01

    Organisms rely on sensory cues to interpret their environment and make important life-history decisions. Accurate recognition is of particular importance in diverse reef environments. Most evidence on the use of sensory cues focuses on those used in predator avoidance or habitat recognition, with little information on their role in conspecific recognition. Yet conspecific recognition is essential for life-history decisions including settlement, mate choice, and dominance interactions. Using a sensory manipulated tank and a two-chamber choice flume, anemonefish conspecific response was measured in the presence and absence of chemical and/or visual cues. Experiments were then repeated in the presence or absence of two heterospecific species to evaluate whether a heterospecific fish altered the conspecific response. Anemonefishes responded to both the visual and chemical cues of conspecifics, but relied on the combination of the two cues to recognize conspecifics inside the sensory manipulated tank. These results contrast previous studies focusing on predator detection where anemonefishes were found to compensate for the loss of one sensory cue (chemical) by utilizing a second cue (visual). This lack of sensory compensation may impact the ability of anemonefishes to acclimate to changing reef environments in the future.

  18. Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs

    PubMed Central

    ten Oever, Sanne; Sack, Alexander T.; Wheat, Katherine L.; Bien, Nina; van Atteveldt, Nienke

    2013-01-01

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, which seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception. PMID:23805110
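
    The temporal window of integration is often characterized by fitting a bell-shaped function of SOA to the proportion of visually influenced responses and reading off its center and width. The Python sketch below shows such a fit in minimal form; the SOA grid, response proportions, and Gaussian parameterization are hypothetical, not the authors' analysis.

        import numpy as np
        from scipy.optimize import curve_fit

        def twi(soa, peak, center, width, base):
            # Gaussian-shaped window: visual influence is maximal near the
            # natural audio-visual onset difference (center)
            return base + peak * np.exp(-((soa - center) ** 2) / (2 * width ** 2))

        soa = np.array([-300.0, -150, 0, 150, 300, 450])        # ms, audio lag > 0
        p_vis = np.array([0.15, 0.35, 0.60, 0.55, 0.30, 0.18])  # hypothetical
        params, _ = curve_fit(twi, soa, p_vis, p0=(0.5, 50.0, 150.0, 0.1))
        print(dict(zip(["peak", "center", "width", "base"], params)))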

  19. Laserlight cues for gait freezing in Parkinson's disease: an open-label study.

    PubMed

    Donovan, S; Lim, C; Diaz, N; Browner, N; Rose, P; Sudarsky, L R; Tarsy, D; Fahn, S; Simon, D K

    2011-05-01

    Freezing of gait (FOG) and falls are major sources of disability for Parkinson's disease (PD) patients, and show limited responsiveness to medications. We assessed the efficacy of visual cues for overcoming FOG in an open-label study of 26 patients with PD. The change in the frequency of falls was a secondary outcome measure. Subjects underwent a 1-2 month baseline period of use of a cane or walker without visual cues, followed by 1 month using the same device with the laserlight visual cue. The laserlight visual cue was associated with a modest but significant mean reduction in FOG Questionnaire (FOGQ) scores of 1.25 ± 0.48 (p = 0.0152, two-tailed paired t-test), representing a 6.6% improvement compared to the mean baseline FOGQ scores of 18.8. The mean reduction in fall frequency was 39.5 ± 9.3% with the laserlight visual cue among subjects experiencing at least one fall during the baseline and subsequent study periods (p = 0.002; two-tailed one-sample t-test with hypothesized mean of 0). Though some individual subjects may have benefited, the overall mean performance on the timed gait test (TGT) across all subjects did not significantly change. However, among the 4 subjects who underwent repeated testing of the TGT, one showed a 50% mean improvement in TGT performance with the laserlight visual cue (p = 0.005; two-tailed paired t-test). This open-label study provides evidence for modest efficacy of a laserlight visual cue in overcoming FOG and reducing falls in PD patients.

  20. Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs.

    PubMed

    Ten Oever, Sanne; Sack, Alexander T; Wheat, Katherine L; Bien, Nina; van Atteveldt, Nienke

    2013-01-01

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, which seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception.

  1. Gaze-contingent reinforcement learning reveals incentive value of social signals in young children and adults.

    PubMed

    Vernetti, Angélina; Smith, Tim J; Senju, Atsushi

    2017-03-15

    While numerous studies have demonstrated that infants and adults preferentially orient to social stimuli, it remains unclear what drives such preferential orienting. It has been suggested that the learned association between social cues and subsequent reward delivery might shape such social orienting. Using a novel, spontaneous indication of reinforcement learning (with the use of a gaze contingent reward-learning task), we investigated whether children's and adults' orienting towards social and non-social visual cues can be elicited by the association between participants' visual attention and a rewarding outcome. Critically, we assessed whether the engaging nature of the social cues influences the process of reinforcement learning. Both children and adults learned to orient more often to the visual cues associated with reward delivery, demonstrating that cue-reward association reinforced visual orienting. More importantly, when the reward-predictive cue was social and engaging, both children and adults learned the cue-reward association faster and more efficiently than when the reward-predictive cue was social but non-engaging. These new findings indicate that social engaging cues have a positive incentive value. This could possibly be because they usually coincide with positive outcomes in real life, which could partly drive the development of social orienting.
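
    A standard way to formalize "faster and more efficient" cue-reward learning is a delta-rule model in which a learning rate governs how strongly each outcome updates the cue's value. The Python sketch below illustrates the idea; mapping cue engagingness onto a higher learning rate is an interpretive assumption of this example, not the authors' model.

        def rescorla_wagner(rewards, alpha):
            """Trial-by-trial cue value under a delta rule (0 <= alpha <= 1)."""
            v, history = 0.0, []
            for r in rewards:
                v += alpha * (r - v)      # prediction-error update
                history.append(round(v, 3))
            return history

        schedule = [1, 1, 0, 1, 1, 1, 0, 1]              # hypothetical rewards
        print(rescorla_wagner(schedule, alpha=0.4))      # engaging cue: fast
        print(rescorla_wagner(schedule, alpha=0.1))      # non-engaging: slow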

  2. Contextual cueing impairment in patients with age-related macular degeneration.

    PubMed

    Geringswald, Franziska; Herbik, Anne; Hoffmann, Michael B; Pollmann, Stefan

    2013-09-12

    Visual attention can be guided by past experience of regularities in our visual environment. In the contextual cueing paradigm, incidental learning of repeated distractor configurations speeds up search times compared to random search arrays. Concomitantly, fewer fixations and more direct scan paths indicate more efficient visual exploration in repeated search arrays. In previous work, we found that simulating a central scotoma in healthy observers eliminated this search facilitation. Here, we investigated contextual cueing in patients with age-related macular degeneration (AMD) who suffer from impaired foveal vision. AMD patients performed visual search using only their more severely impaired eye (n = 13) as well as under binocular viewing (n = 16). Normal-sighted controls developed a significant contextual cueing effect. In comparison, patients showed only a small nonsignificant advantage for repeated displays when searching with their worse eye. When searching binocularly, they profited from contextual cues, but still less than controls. Number of fixations and scan pattern ratios showed a comparable pattern as search times. Moreover, contextual cueing was significantly correlated with acuity in monocular search. Thus, foveal vision loss may lead to impaired guidance of attention by contextual memory cues.
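
    Contextual cueing is scored as the response-time advantage for repeated over novel configurations, typically tracked across blocks or epochs of the search session. A minimal pandas sketch follows; the trial table and its column names ('epoch', 'layout', 'rt', 'correct') are hypothetical.

        import pandas as pd

        def contextual_cueing(trials: pd.DataFrame) -> pd.Series:
            """Novel minus repeated mean search RT per epoch; positive values
            indicate a benefit from the learned distractor configurations."""
            ok = trials[trials['correct']]
            rt = ok.groupby(['epoch', 'layout'])['rt'].mean().unstack()
            return rt['novel'] - rt['repeated']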

  3. Using multisensory cues to facilitate air traffic management.

    PubMed

    Ngo, Mary K; Pierce, Russell S; Spence, Charles

    2012-12-01

    In the present study, we sought to investigate whether auditory and tactile cuing could be used to facilitate a complex, real-world air traffic management scenario. Auditory and tactile cuing provides an effective means of improving both the speed and accuracy of participants' performance in a variety of laboratory-based visual target detection and identification tasks. A low-fidelity air traffic simulation task was used in which participants monitored and controlled aircraft. The participants had to ensure that the aircraft landed or exited at the correct altitude, speed, and direction and that they maintained a safe separation from all other aircraft and boundaries. The performance measures recorded included en route time, handoff delay, and conflict resolution delay (the performance measure of interest). In a baseline condition, the aircraft in conflict was highlighted in red (visual cue), and in the experimental conditions, this standard visual cue was accompanied by a simultaneously presented auditory, vibrotactile, or audiotactile cue. Participants responded significantly more rapidly, but no less accurately, to conflicts when presented with an additional auditory or audiotactile cue than with either a vibrotactile or visual cue alone. Auditory and audiotactile cues have the potential for improving operator performance by reducing the time it takes to detect and respond to potential visual target events. These results have important implications for the design and use of multisensory cues in air traffic management.

  4. Visual speech segmentation: using facial cues to locate word boundaries in continuous speech

    PubMed Central

    Mitchel, Aaron D.; Weiss, Daniel J.

    2014-01-01

    Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contributions of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech and that learners can extract this information from talking faces. Thus, we created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative for identifying word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative about word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition. PMID:25018577

  5. Visual gate for brain-computer interfaces.

    PubMed

    Dias, N S; Jacinto, L R; Mendes, P M; Correia, J H

    2009-01-01

    Brain-Computer Interfaces (BCI) based on event related potentials (ERP) have been successfully developed for applications like virtual spellers and navigation systems. This study tests the use of visual stimuli unbalanced in the subject's field of view to simultaneously cue mental imagery tasks (left vs. right hand movement) and detect subject attention. The responses to unbalanced cues were compared with the responses to balanced cues in terms of classification accuracy. Subject-specific ERP spatial filters were calculated for optimal group separation. The unbalanced cues appear to enhance early ERPs related to visuospatial processing of the cue, improving the classification of ERPs in response to left vs. right cues (error rates as low as 6%) soon (150-200 ms) after cue presentation. This work suggests that such a visual interface may be of interest in BCI applications as a gate mechanism for attention estimation and validation of control decisions.

  6. The benefit of forgetting.

    PubMed

    Williams, Melonie; Hong, Sang W; Kang, Min-Suk; Carlisle, Nancy B; Woodman, Geoffrey F

    2013-04-01

    Recent research using change-detection tasks has shown that a directed-forgetting cue, indicating that a subset of the information stored in memory can be forgotten, significantly benefits the other information stored in visual working memory. How do these directed-forgetting cues aid the memory representations that are retained? We addressed this question in the present study by using a recall paradigm to measure the nature of the retained memory representations. Our results demonstrated that a directed-forgetting cue leads to higher-fidelity representations of the remaining items and a lower probability of dropping these representations from memory. Next, we showed that this is made possible by the to-be-forgotten item being expelled from visual working memory following the cue, allowing maintenance mechanisms to be focused on only the items that remain in visual working memory. Thus, the present findings show that cues to forget benefit the remaining information in visual working memory by fundamentally improving its quality relative to conditions in which just as many items are encoded but no cue is provided.
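
    The recall measures described here (representation fidelity and the probability of dropping an item) correspond naturally to the precision and guess-rate parameters of a two-component mixture model of recall errors, in the spirit of Zhang and Luck (2008). The sketch below fits such a model to simulated errors; it illustrates the style of analysis, not the authors' actual code.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

# Mixture model of recall errors: a von Mises "memory" component with
# precision kappa plus a uniform "guessing" component with weight g.
# The error data below are simulated, purely for illustration.

rng = np.random.default_rng(0)
errors = np.concatenate([rng.vonmises(0.0, 8.0, 180),     # remembered
                         rng.uniform(-np.pi, np.pi, 20)])  # guesses

def neg_log_lik(params):
    g, kappa = params
    pdf = (1 - g) * vonmises.pdf(errors, kappa) + g / (2 * np.pi)
    return -np.sum(np.log(pdf))

fit = minimize(neg_log_lik, x0=[0.1, 5.0],
               bounds=[(1e-3, 0.999), (0.1, 100.0)])
g_hat, kappa_hat = fit.x
print(f"guess rate ~ {g_hat:.2f}, precision kappa ~ {kappa_hat:.1f}")
```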

  7. Tactical decisions for changeable cuttlefish camouflage: visual cues for choosing masquerade are relevant from a greater distance than visual cues used for background matching.

    PubMed

    Buresch, Kendra C; Ulmer, Kimberly M; Cramer, Corinne; McAnulty, Sarah; Davison, William; Mäthger, Lydia M; Hanlon, Roger T

    2015-10-01

    Cuttlefish use multiple camouflage tactics to evade their predators. Two common tactics are background matching (resembling the background to hinder detection) and masquerade (resembling an uninteresting or inanimate object to impede detection or recognition). We investigated how the distance and orientation of visual stimuli affected the choice of these two camouflage tactics. In the current experiments, cuttlefish were presented with three visual cues: a 2D horizontal floor, a 2D vertical wall, and a 3D object. Each was placed at several distances: directly beneath (within a circle whose diameter was one body length, BL); at zero BL (directly beside, but not beneath, the cuttlefish); at 1 BL; and at 2 BL. Cuttlefish continued to respond to 3D visual cues from a greater distance than to a horizontal or vertical stimulus. It appears that background matching is chosen when visual cues are relevant only in the immediate benthic surroundings. However, for masquerade, objects located multiple body lengths away remained relevant for the choice of camouflage.

  8. Neural Representation of Motion-In-Depth in Area MT

    PubMed Central

    Sanada, Takahisa M.

    2014-01-01

    Neural processing of 2D visual motion has been studied extensively, but relatively little is known about how visual cortical neurons represent visual motion trajectories that include a component toward or away from the observer (motion in depth). Psychophysical studies have demonstrated that humans perceive motion in depth based on both changes in binocular disparity over time (CD cue) and interocular velocity differences (IOVD cue). However, evidence for neurons that represent motion in depth has been limited, especially in primates, and it is unknown whether such neurons make use of CD or IOVD cues. We show that approximately one-half of neurons in macaque area MT are selective for the direction of motion in depth, and that this selectivity is driven primarily by IOVD cues, with a small contribution from the CD cue. Our results establish that area MT, a central hub of the primate visual motion processing system, contains a 3D representation of visual motion. PMID:25411481
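
    The geometric relationship between the two cues can be written down directly. For small angles, with interocular separation I and viewing distance Z, disparity is roughly δ ≈ I/Z, so the CD and IOVD cues are two routes to the same physical quantity. The derivation below is a textbook illustration under these assumptions, not taken from the paper itself.

```latex
% Small-angle geometry linking the two motion-in-depth cues.
% I: interocular separation; Z: distance; v_L, v_R: monocular image
% velocities of the same point; delta = alpha_L - alpha_R.
\delta \approx \frac{I}{Z}
\;\Rightarrow\;
\frac{d\delta}{dt} \approx -\frac{I}{Z^{2}}\frac{dZ}{dt}
\quad \text{(CD cue)}
\qquad
\frac{d\delta}{dt} = v_L - v_R
\quad \text{(IOVD cue)}
```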

  9. Value associations of irrelevant stimuli modify rapid visual orienting.

    PubMed

    Rutherford, Helena J V; O'Brien, Jennifer L; Raymond, Jane E

    2010-08-01

    In familiar environments, goal-directed visual behavior is often performed in the presence of objects with strong, but task-irrelevant, reward or punishment associations that are acquired through prior, unrelated experience. In a two-phase experiment, we asked whether such stimuli could affect speeded visual orienting in a classic visual orienting paradigm. First, participants learned to associate faces with monetary gains, losses, or no outcomes. These faces then served as brief, peripheral, uninformative cues in an explicitly unrewarded, unpunished, speeded, target localization task. Cues preceded targets by either 100 or 1,500 msec and appeared at either the same or a different location. Regardless of interval, reward-associated cues slowed responding at cued locations, as compared with equally familiar punishment-associated or no-value cues, and had no effect when targets were presented at uncued locations. This localized effect of reward-associated cues is consistent with adaptive models of inhibition of return and suggests rapid, low-level effects of motivation on visual processing.

  10. Determinants of structural choice in visually situated sentence production.

    PubMed

    Myachykov, Andriy; Garrod, Simon; Scheepers, Christoph

    2012-11-01

    Three experiments investigated how perceptual, structural, and lexical cues affect structural choices during English transitive sentence production. Participants described transitive events under combinations of visual cueing of attention (toward either agent or patient) and structural priming with and without semantic match between the notional verb in the prime and the target event. Speakers had a stronger preference for passive-voice sentences (1) when their attention was directed to the patient, (2) upon reading a passive-voice prime, and (3) when the verb in the prime matched the target event. The verb-match effect was the by-product of an interaction between visual cueing and verb match: the increase in the proportion of passive-voice responses with matching verbs was limited to the agent-cued condition. Persistence of visual cueing effects in the presence of both structural and lexical cues suggests a strong coupling between referent-directed visual attention and Subject assignment in a spoken sentence.

  11. Selective maintenance in visual working memory does not require sustained visual attention.

    PubMed

    Hollingworth, Andrew; Maxcey-Richard, Ashleigh M

    2013-08-01

    In four experiments, we tested whether sustained visual attention is required for the selective maintenance of objects in visual working memory (VWM). Participants performed a color change-detection task. During the retention interval, a valid cue indicated the item that would be tested. Change-detection performance was higher in the valid-cue condition than in a neutral-cue control condition. To probe the role of visual attention in the cuing effect, on half of the trials, a difficult search task was inserted after the cue, precluding sustained attention on the cued item. The addition of the search task produced no observable decrement in the magnitude of the cuing effect. In a complementary test, search efficiency was not impaired by simultaneously prioritizing an object for retention in VWM. The results demonstrate that selective maintenance in VWM can be dissociated from the locus of visual attention.

  12. Dynamic cues for whisker-based object localization: An analytical solution to vibration during active whisker touch

    PubMed Central

    Vaxenburg, Roman; Wyche, Isis; Svoboda, Karel; Efros, Alexander L.

    2018-01-01

    Vibrations are important cues for tactile perception across species. Whisker-based sensation in mice is a powerful model system for investigating mechanisms of tactile perception. However, the role vibration plays in whisker-based sensation remains unsettled, in part due to difficulties in modeling the vibration of whiskers. Here, we develop an analytical approach to calculate the vibrations of whiskers striking objects. We use this approach to quantify vibration forces during active whisker touch at a range of locations along the whisker. The frequency and amplitude of vibrations evoked by contact are strongly dependent on the position of contact along the whisker. The magnitudes of the vibrational shear force and bending moment are comparable to those of the quasi-static forces. The fundamental vibration frequencies are within the detectable range of mechanoreceptors and below the maximum spike rates of primary sensory afferents. These results suggest two dynamic cues exist that rodents can use for object localization: vibration frequency and the comparison of vibrational to quasi-static force magnitude. These complement the use of quasi-static force angle as a distance cue, particularly for touches close to the follicle, where whiskers are stiff and force angles hardly change during touch. Our approach also provides a general solution to the calculation of whisker vibrations in other sensing tasks. PMID:29584719
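
    To make the "detectable range" claim concrete, the natural frequencies of a vibrating whisker can be estimated from Euler-Bernoulli beam theory. The sketch below uses the uniform (untapered) cantilever formula with rough keratin constants; the paper itself solves the harder tapered-whisker case, so treat these numbers as order-of-magnitude assumptions.

```python
import math

# First natural frequencies of a uniform Euler-Bernoulli cantilever:
# f_n = (lambda_n^2 / (2*pi)) * sqrt(E*I / (rho*A*L^4)).
# Real whiskers are tapered cones, which shifts these values; the
# material constants below are rough, illustrative assumptions.

E = 3e9        # Young's modulus of keratin, Pa (approximate)
rho = 1.2e3    # density, kg/m^3 (approximate)
L = 0.02       # whisker length, m
r = 50e-6      # radius, m (uniform-beam simplification)

A = math.pi * r**2        # cross-sectional area
I = math.pi * r**4 / 4    # second moment of area

for lam in (1.875, 4.694, 7.855):   # cantilever mode constants
    f = (lam**2 / (2 * math.pi)) * math.sqrt(E * I / (rho * A * L**4))
    print(f"mode at ~{f:.0f} Hz")
```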

  13. Interaction between visual and chemical cues in a Liolaemus lizard: a multimodal approach.

    PubMed

    Vicente, Natalin S; Halloy, Monique

    2017-12-01

    Multimodal communication involves the use of signals and cues across two or more sensory modalities. The genus Liolaemus (Iguania: Liolaemidae) offers great potential for studies on the ecology and evolution of multimodal communication, including visual and chemical signals. In this study, we analyzed the response of male and female Liolaemus pacha to chemical, visual, and combined (multimodal) stimuli. Using cue-isolation tests, we registered the number of tongue flicks and headbob displays elicited by signals in each modality. The number of tongue flicks was greater when a chemical stimulus was presented alone than in the presence of visual or multimodal stimuli. In contrast, headbob displays were fewer in number with visual and chemical stimuli alone, but significantly higher in number when the two were combined. Female signallers triggered significantly more tongue flicks than male signallers, suggesting that chemical cues are involved in sexual recognition. We did not find an inhibition between chemical and visual cues. On the contrary, we observed a dominance of the chemical modality: when presented with visual stimuli, lizards also responded with more tongue flicks than headbob displays. The total response produced by multimodal stimuli was similar to that produced by the chemical stimuli alone, possibly suggesting non-redundancy. We discuss whether the visual component of a multimodal signal could attract attention at a distance, increasing the effectiveness of transmission and reception of the information in chemical cues.

  14. The Effect of Visual Cueing and Control Design on Children's Reading Achievement of Audio E-Books with Tablet Computers

    ERIC Educational Resources Information Center

    Wang, Pei-Yu; Huang, Chung-Kai

    2015-01-01

    This study aims to explore the impact of learner grade, visual cueing, and control design on children's reading achievement of audio e-books with tablet computers. This research was a three-way factorial design where the first factor was learner grade (grade four and six), the second factor was e-book visual cueing (word-based, line-based, and…

  15. The Role of Global and Local Visual Information during Gaze-Cued Orienting of Attention.

    PubMed

    Munsters, Nicolette M; van den Boomen, Carlijn; Hooge, Ignace T C; Kemner, Chantal

    2016-01-01

    Gaze direction is an important social communication tool. Global and local visual information are known to play specific roles in processing socially relevant information from a face. The current study investigated whether global visual information has a primary role during gaze-cued orienting of attention and, as such, may influence the quality of interaction. Adults performed a gaze-cueing task in which a centrally presented face cued (validly or invalidly) the location of a peripheral target through a gaze shift. We measured brain activity (electroencephalography) towards the cue and target, and behavioral responses (manual and saccadic reaction times) towards the target. The faces contained global (i.e. lower spatial frequencies), local (i.e. higher spatial frequencies), or a selection of both global and local (i.e. mid-band spatial frequencies) visual information. We found a gaze cue-validity effect (i.e. valid versus invalid), but no interaction effects with spatial frequency content. Furthermore, across all cue conditions, behavioral responses towards the target were slower when lower spatial frequencies were absent from the gaze cue. These results suggest that whereas gaze-cued orienting of attention can be driven by both global and local visual information, global visual information determines the speed of behavioral responses towards other entities appearing in the surroundings of the gaze-cue stimuli.

  16. Automaticity of phasic alertness: Evidence for a three-component model of visual cueing.

    PubMed

    Lin, Zhicheng; Lu, Zhong-Lin

    2016-10-01

    The automaticity of phasic alertness is investigated using the attention network test. Results show that the cueing effect from the alerting cue (double cue) is strongly enhanced by the task relevance of visual cues, as determined by the informativeness of the orienting cue (single cue) with which it is mixed (80% vs. 50% valid in predicting where the target will appear). Counterintuitively, the cueing effect from the alerting cue can be negatively affected by its visibility, such that masking the cue from awareness can reveal a cueing effect that is otherwise absent when the cue is visible. Evidently, then, top-down influences, in the form of contextual relevance and cue awareness, can have opposite effects on the cueing effect from the alerting cue. These findings lead us to the view that a visual cue can engage three components of attention (orienting, alerting, and inhibition) to determine the behavioral cueing effect. We propose that phasic alertness, particularly in the form of specific response readiness, is regulated by both internal, top-down expectation and external, bottom-up stimulus properties. In contrast to some existing views, we advance the perspective that phasic alertness is strongly tied to temporal orienting, attentional capture, and spatial orienting. Finally, we discuss how translating attention research to clinical applications would benefit from an improved ability to measure attention. To this end, controlling the degree of intraindividual variability in the attentional components and improving the precision of the measurement tools may prove vital.

  17. Neural substrates of smoking cue reactivity: A meta-analysis of fMRI studies

    PubMed Central

    Engelmann, Jeffrey M.; Versace, Francesco; Robinson, Jason D.; Minnix, Jennifer A.; Lam, Cho Y.; Cui, Yong; Brown, Victoria L.; Cinciripini, Paul M.

    2012-01-01

    Reactivity to smoking-related cues may be an important factor that precipitates relapse in smokers who are trying to quit. The neurobiology of smoking cue reactivity has been investigated in several fMRI studies. We combined the results of these studies using activation likelihood estimation, a meta-analytic technique for fMRI data. Results of the meta-analysis indicated that smoking cues reliably evoke larger fMRI responses than neutral cues in the extended visual system, precuneus, posterior cingulate gyrus, anterior cingulate gyrus, dorsal and medial prefrontal cortex, insula, and dorsal striatum. Subtraction meta-analyses revealed that parts of the extended visual system and dorsal prefrontal cortex are more reliably responsive to smoking cues in deprived smokers than in non-deprived smokers, and that short-duration cues presented in event-related designs produce larger responses in the extended visual system than long-duration cues presented in blocked designs. The areas that were found to be responsive to smoking cues agree with theories of the neurobiology of cue reactivity, with two exceptions. First, there was a reliable cue reactivity effect in the precuneus, which is not typically considered a brain region important to addiction. Second, we found no significant effect in the nucleus accumbens, an area that plays a critical role in addiction; this null result may have been due to technical difficulties associated with measuring fMRI data in that region. The results of this meta-analysis suggest that the extended visual system should receive more attention in future studies of smoking cue reactivity. PMID:22206965
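
    Activation likelihood estimation itself is straightforward to sketch: each reported focus is blurred into a Gaussian "modeled activation" (MA) map, maps are combined within an experiment by a maximum (so nearby foci do not double-count), and across experiments by a probabilistic union. The 1-D toy below illustrates the algorithm with made-up coordinates and kernel width.

```python
import numpy as np

# Toy 1-D sketch of activation likelihood estimation (ALE).
# Coordinates and smoothing width are invented for illustration.

grid = np.linspace(0, 100, 1001)                # 1-D stand-in for brain space
studies = [[42.0, 55.0], [45.0], [44.0, 80.0]]  # reported foci per study (mm)
sigma = 5.0                                     # Gaussian kernel width (mm)

def ma_map(foci):
    # Per-experiment modeled activation: max over Gaussian blobs.
    blobs = [np.exp(-(grid - f) ** 2 / (2 * sigma ** 2)) for f in foci]
    return np.max(blobs, axis=0)

# Probabilistic union across experiments.
ale = 1.0 - np.prod([1.0 - ma_map(f) for f in studies], axis=0)
print(f"peak ALE {ale.max():.2f} at {grid[ale.argmax()]:.1f} mm")
```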

  18. Early, but not late visual distractors affect movement synchronization to a temporal-spatial visual cue.

    PubMed

    Booth, Ashley J; Elliott, Mark T

    2015-01-01

    The ease of synchronizing movements to a rhythmic cue depends on the modality of the cue presentation: timing accuracy is much higher when synchronizing with discrete auditory rhythms than with an equivalent visual stimulus presented through flashes. However, timing accuracy is improved if the visual cue presents spatial as well as temporal information (e.g., a dot following an oscillatory trajectory). Similarly, when synchronizing with an auditory target metronome in the presence of a second, distracting visual metronome, the distraction is stronger when the visual cue contains spatial-temporal information rather than temporal information only. The present study investigates individuals' ability to synchronize movements to a temporal-spatial visual cue in the presence of same-modality temporal-spatial distractors. Moreover, we investigated how increasing the number of distractor stimuli affected the maintenance of synchrony with the target cue. Participants made oscillatory vertical arm movements in time with a vertically oscillating white target dot centered on a large projection screen. The target dot was surrounded by 2, 8, or 14 distractor dots, which had a trajectory identical to the target's but at a phase lead or lag of 0, 100, or 200 ms. We found that participants' timing performance was affected only in the phase-lead conditions and only when large numbers of distractors were present (8 and 14). This asymmetry suggests participants still rely on salient events in the stimulus trajectory to synchronize movements. Consequently, distractions occurring in the window of attention surrounding those events have the maximum impact on timing performance.

  19. Determining the Effectiveness of Visual Input Enhancement across Multiple Linguistic Cues

    ERIC Educational Resources Information Center

    Comeaux, Ian; McDonald, Janet L.

    2018-01-01

    Visual input enhancement (VIE) increases the salience of grammatical forms, potentially facilitating acquisition through attention mechanisms. Native English speakers were exposed to an artificial language containing four linguistic cues (verb agreement, case marking, animacy, word order), with morphological cues either unmarked, marked in the…

  20. Preschoolers Benefit from Visually Salient Speech Cues

    ERIC Educational Resources Information Center

    Lalonde, Kaylah; Holt, Rachael Frush

    2015-01-01

    Purpose: This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that vary in perceptual difficulty and task demands. They also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. Method: Twelve adults and 27 typically developing 3-…

  1. Smelling directions: Olfaction modulates ambiguous visual motion perception

    PubMed Central

    Kuang, Shenbing; Zhang, Tao

    2014-01-01

    The sense of smell is often accompanied by simultaneous visual sensations. Previous studies have documented enhanced olfactory performance in the concurrent presence of congruent color- or shape-related visual cues, and facilitated visual object perception when congruent smells are simultaneously present. These visual object-olfaction interactions suggest the existence of couplings between the olfactory pathway and the visual ventral processing stream. However, it is not known whether olfaction can modulate visual motion perception, a function that is related to the visual dorsal stream. We tested this possibility by examining the influence of olfactory cues on the perception of ambiguous visual motion signals. We showed that, after introducing an association between motion directions and olfactory cues, olfaction could indeed bias ambiguous visual motion perception. Our result that olfaction modulates visual motion processing adds to the current knowledge of cross-modal interactions and implies a possible functional linkage between the olfactory system and the visual dorsal pathway. PMID:25052162

  2. Age-related changes in human posture control: Sensory organization tests

    NASA Technical Reports Server (NTRS)

    Peterka, R. J.; Black, F. O.

    1989-01-01

    Postural control was measured in 214 human subjects ranging in age from 7 to 81 years. Sensory organization tests measured the magnitude of anterior-posterior body sway during six 21 s trials in which visual and somatosensory orientation cues were altered (by rotating the visual surround and support surface in proportion to the subject's sway) or vision eliminated (eyes closed) in various combinations. No age-related increase in postural sway was found for subjects standing on a fixed support surface with eyes open or closed. However, age-related increases in sway were found for conditions involving altered visual or somatosensory cues. Subjects older than about 55 years showed the largest sway increases. Subjects younger than about 15 years were also sensitive to alteration of sensory cues. On average, the older subjects were more affected by altered visual cues whereas younger subjects had more difficulty with altered somatosensory cues.

  3. Functional interplay of top-down attention with affective codes during visual short-term memory maintenance.

    PubMed

    Kuo, Bo-Cheng; Lin, Szu-Hung; Yeh, Yei-Yu

    2018-06-01

    Visual short-term memory (VSTM) allows individuals to briefly maintain information over time for guiding behaviours. Because the contents of VSTM can be neutral or emotional, top-down influence in VSTM may vary with the affective codes of the maintained representations. Here we investigated the neural mechanisms underlying the functional interplay of top-down attention with affective codes in VSTM using functional magnetic resonance imaging. Participants were instructed to remember both threatening and neutral objects in a cued VSTM task. Retrospective cues (retro-cues) were presented to direct attention to the hemifield of a threatening object (i.e., cue-to-threat) or a neutral object (i.e., cue-to-neutral) during VSTM maintenance. We found stronger activity in the ventral occipitotemporal cortex and amygdala when participants attended to threatening relative to neutral representations. Using multivoxel pattern analysis, we found better classification performance for cue-to-threat versus cue-to-neutral objects in early visual areas and in the amygdala. Importantly, retro-cues modulated the strength of functional connectivity between the frontoparietal and early visual areas. Activity in the frontoparietal areas became strongly correlated with activity in V3a-V4 coding the threatening representations instructed to be relevant for the task. Together, these findings provide the first demonstration of top-down modulation of activation patterns in early visual areas, and of functional connectivity between the frontoparietal network and early visual areas, for regulating threatening representations during VSTM maintenance.

  4. Developmental changes in attention to faces and bodies in static and dynamic scenes.

    PubMed

    Stoesz, Brenda M; Jakobson, Lorna S

    2014-01-01

    Typically developing individuals show a strong visual preference for faces and face-like stimuli; however, this may come at the expense of attending to bodies or to other aspects of a scene. The primary goal of the present study was to provide additional insight into the development of the attentional mechanisms that underlie perception of real people in naturalistic scenes. We examined the looking behaviors of typical children, adolescents, and young adults as they viewed static and dynamic scenes depicting one or more people. Overall, participants showed a bias to attend to faces more than to other parts of the scenes. Adding motion cues led to a reduction in the number, but an increase in the average duration, of face fixations in single-character scenes. When multiple characters appeared in a scene, motion-related effects were attenuated and participants shifted their gaze from faces to bodies, or made off-screen glances. Children showed the largest effects related to the introduction of motion cues or additional characters, suggesting that they find dynamic faces difficult to process and are especially prone to look away from faces when viewing complex social scenes, a strategy that could reduce the cognitive and affective load imposed by having to divide one's attention between multiple faces. Our findings provide new insights into the typical development of social attention during natural scene viewing, and lay the foundation for future work examining gaze behaviors in typical and atypical development.

  5. The role of visuohaptic experience in visually perceived depth.

    PubMed

    Ho, Yun-Xian; Serwe, Sascha; Trommershäuser, Julia; Maloney, Laurence T; Landy, Michael S

    2009-06-01

    Berkeley suggested that "touch educates vision," that is, haptic input may be used to calibrate visual cues to improve visual estimation of properties of the world. Here, we test whether haptic input may be used to "miseducate" vision, causing observers to rely more heavily on misleading visual cues. Human subjects compared the depth of two cylindrical bumps illuminated by light sources located at different positions relative to the surface. As in previous work using judgments of surface roughness, we find that observers judge bumps to have greater depth when the light source is located eccentric to the surface normal (i.e., when shadows are more salient). Following several sessions of visual judgments of depth, subjects underwent visuohaptic training in which haptic feedback was artificially correlated with the "pseudocue" of shadow size and artificially decorrelated with disparity and texture. Although there were large individual differences, almost all observers demonstrated integration of haptic cues during visuohaptic training. For some observers, subsequent visual judgments of bump depth were unaffected by the training. However, for 5 of 12 observers, training significantly increased the weight given to pseudocues, causing subsequent visual estimates of shape to be less veridical. We conclude that haptic information can be used to reweight visual cues, putting more weight on misleading pseudocues, even when more trustworthy visual cues are available in the scene.
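
    The "reweighting" language here maps onto the standard reliability-weighted (maximum-likelihood) model of cue combination, in which each cue's weight is proportional to its inverse variance. The sketch below shows how treating the shadow pseudocue as more reliable pulls the fused depth estimate toward it; all numbers are illustrative assumptions, not the study's data.

```python
# Reliability-weighted (MLE-style) cue combination: weight_i ~ 1/var_i.
# "Miseducation" corresponds to assigning the pseudocue a lower variance.

def combine(estimates, variances):
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    weights = [w / total for w in weights]
    fused = sum(w * e for w, e in zip(weights, estimates))
    return fused, weights

# Depth estimates (mm) from disparity, texture, and the shadow pseudocue.
depth_before, _ = combine([5.0, 5.2, 7.0], [0.5, 0.8, 4.0])
# After visuohaptic training: the pseudocue is treated as more reliable.
depth_after, _ = combine([5.0, 5.2, 7.0], [0.5, 0.8, 1.0])
print(depth_before, depth_after)  # estimate shifts toward the pseudocue
```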

  6. Empirical comparison of a fixed-base and a moving-base simulation of a helicopter engaged in visually conducted slalom runs

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Houck, J. A.; Martin, D. J., Jr.

    1977-01-01

    Combined visual, motion, and aural cues for a helicopter engaged in visually conducted slalom runs at low altitude were studied. The evaluation of the visual and aural cues was subjective, whereas the motion cues were evaluated both subjectively and objectively. Subjective and objective results coincided in the area of control activity. Generally, less control activity was present under motion conditions than under fixed-base conditions, a difference that pilots subjectively attributed to the feeling of the realistic limitations of a machine (helicopter) conveyed by the addition of motion cues. The objective data also revealed that the slalom runs were conducted at significantly higher altitudes under motion conditions than under fixed-base conditions.

  7. Social Vision: Functional Forecasting and the Integration of Compound Social Cues

    PubMed Central

    Adams, Reginald B.; Kveraga, Kestutis

    2017-01-01

    For decades the study of social perception was largely compartmentalized by type of social cue: race, gender, emotion, eye gaze, body language, facial expression, etc. This was partly due to good scientific practice (e.g., controlling for extraneous variability), and partly due to assumptions that each type of social cue was functionally distinct from the others. Herein, we present a functional forecast approach to understanding compound social cue processing that emphasizes the importance of shared social affordances across various cues (see also Adams, Franklin, Nelson, & Stevenson, 2010; Adams & Nelson, 2011; Weisbuch & Adams, 2012). We review the traditional theories of emotion and face processing that argued for dissociable and noninteracting pathways (e.g., for specific emotional expressions, gaze, identity cues), as well as more recent evidence for combinatorial processing of social cues. We argue here that early, and presumably reflexive, visual integration of such cues is necessary for adaptive behavioral responding to others. In support of this claim, we review contemporary work that reveals a flexible visual system, one that readily incorporates meaningful contextual influences in even nonsocial visual processing, thereby establishing the functional and neuroanatomical bases necessary for compound social cue integration. Finally, we explicate three likely mechanisms driving such integration. Together, this work implicates a role for cognitive penetrability in visual perceptual abilities that have often been (and in some cases still are) ascribed to direct encapsulated perceptual processes. PMID:29242738

  8. The effect of monocular and binocular viewing on the accommodation response to real targets in emmetropia and myopia.

    PubMed

    Seidel, Dirk; Gray, Lyle S; Heron, Gordon

    2005-04-01

    Decreased blur-sensitivity found in myopia has been linked with reduced accommodation responses and myopigenesis. Although the mechanism for myopia progression remains unclear, it is commonly known that myopic patients rarely report near visual symptoms and are generally very sensitive to small changes in their distance prescription. This experiment investigated the effect of monocular and binocular viewing on static and dynamic accommodation in emmetropes and myopes for real targets to monitor whether inaccuracies in the myopic accommodation response are maintained when a full set of visual cues, including size and disparity, is available. Monocular and binocular steady-state accommodation responses were measured with a Canon R1 autorefractor for target vergences ranging from 0-5 D in emmetropes (EMM), late-onset myopes (LOM), and early-onset myopes (EOM). Dynamic closed-loop accommodation responses for a stationary target at 0.25 m and step stimuli of two different magnitudes were recorded for both monocular and binocular viewing. All refractive groups showed similar accommodation stimulus response curves consistent with previously published data. Viewing a stationary near target monocularly, LOMs demonstrated slightly larger accommodation microfluctuations compared with EMMs and EOMs; however, this difference was absent under binocular viewing conditions. Dynamic accommodation step responses revealed significantly (p < 0.05) longer response times for the myopic subject groups for a number of step stimuli. No significant difference in either reaction time or the number of correct responses for a given number of step-vergence changes was found between the myopic groups and EMMs. When viewing real targets with size and disparity cues available, no significant differences in the accuracy of static and dynamic accommodation responses were found among EMM, EOM, and LOM. The results suggest that corrected myopes do not experience dioptric blur levels that are substantially different from emmetropes when they view free space targets.

  9. Rhythmic Sampling within and between Objects despite Sustained Attention at a Cued Location

    PubMed Central

    Fiebelkorn, Ian C.; Saalmann, Yuri B.; Kastner, Sabine

    2013-01-01

    The brain directs its limited processing resources through various selection mechanisms, broadly referred to as attention. The present study investigated the temporal dynamics of two such selection mechanisms: space- and object-based selection. Previous evidence has demonstrated that preferential processing resulting from a spatial cue (i.e., space-based selection) spreads to uncued locations, if those locations are part of the same object (i.e., resulting in object-based selection). But little is known about the relationship between these fundamental selection mechanisms. Here, we used human behavioral data to determine how space- and object-based selection simultaneously evolve under conditions that promote sustained attention at a cued location, varying the cue-to-target interval from 300 to 1100 ms. We tracked visual-target detection at a cued location (i.e., space-based selection), at an uncued location that was part of the same object (i.e., object-based selection), and at an uncued location that was part of a different object (i.e., in the absence of space- and object-based selection). The data demonstrate that even under static conditions, there is a moment-to-moment reweighting of attentional priorities based on object properties. This reweighting is revealed through rhythmic patterns of visual-target detection both within (at 8 Hz) and between (at 4 Hz) objects. PMID:24316204
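
    The rhythmic-sampling result rests on a simple analysis idea: treat detection accuracy as a function of cue-target interval as a time series, detrend it, and look for spectral peaks. The sketch below injects a simulated 8 Hz modulation merely to show the readout; it is not the study's data or analysis pipeline.

```python
import numpy as np

# Detect a behavioral sampling rhythm: detrend the accuracy-by-interval
# time course and Fourier transform it. The 8 Hz oscillation is simulated.

isi = np.arange(0.3, 1.3, 0.01)                  # cue-target intervals (s)
acc = 0.75 + 0.05 * np.sin(2 * np.pi * 8 * isi)  # oscillating hit rate
acc += np.random.default_rng(1).normal(0, 0.01, isi.size)

detrended = acc - acc.mean()
spectrum = np.abs(np.fft.rfft(detrended))
freqs = np.fft.rfftfreq(isi.size, d=0.01)
print(f"spectral peak at ~{freqs[spectrum.argmax()]:.1f} Hz")
```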

  10. Translation and Rotation Trade Off in Human Visual Heading Estimation

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Perrone, John A.; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    We have previously shown that, during simulated curvilinear motion, humans can make reasonably accurate and precise heading judgments from optic flow without either oculomotor or static-depth cues about rotation. We now systematically investigate the effect of varying the parameters of self-motion. We visually simulated 400 ms of self-motion along curved paths (constant rotation and translation rates, fixed retinocentric heading) towards two planes of random dots at 10.3 m and 22.3 m at mid-trial. Retinocentric heading judgments of 4 observers (2 naive) were measured for 12 different combinations of translation (T between 4 and 16 m/s) and rotation (R either 8 or 16 deg/s). In the range tested, heading bias and uncertainty decrease quasilinearly with T/R, but the bias also appears to depend on R. If depth is held constant, the ratio T/R can account for much of the variation in the accuracy and precision of human visual heading estimation, although further experiments are needed to resolve whether absolute rotation rate, total flow rate, or some other factor can account for the observed -2 deg shift between the bias curves.
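
    The role of the T/R ratio follows from the standard retinal-flow equations (Longuet-Higgins and Prazdny): the rotational flow component is independent of depth, while the translational component scales with 1/Z, so the heading-relevant signal grows with T/(R·Z). A small sketch, reusing the two dot-plane depths from the abstract with example T and R values from the tested range:

```python
import numpy as np

# Retinal flow for combined translation T and rotation R at image point
# (x, y), focal length 1 (Longuet-Higgins & Prazdny equations).

def flow(x, y, Z, T, R):
    Tx, Ty, Tz = T
    Rx, Ry, Rz = R
    u = (x * Tz - Tx) / Z + Rx * x * y - Ry * (1 + x**2) + Rz * y
    v = (y * Tz - Ty) / Z + Rx * (1 + y**2) - Ry * x * y - Rz * x
    return u, v

T = (0.0, 0.0, 8.0)                  # 8 m/s forward translation
R = (0.0, np.deg2rad(16.0), 0.0)     # 16 deg/s yaw rotation
for Z in (10.3, 22.3):               # the two dot planes in the display
    u, v = flow(0.1, 0.0, Z, T, R)
    print(f"Z = {Z} m: flow = ({u:.3f}, {v:.3f}) rad/s")
```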

  11. Interaction of vestibular, echolocation, and visual modalities guiding flight by the big brown bat, Eptesicus fuscus.

    PubMed

    Horowitz, Seth S; Cheney, Cheryl A; Simmons, James A

    2004-01-01

    The big brown bat (Eptesicus fuscus) is an aerial-feeding insectivorous species that relies on echolocation to avoid obstacles and to detect flying insects. Spatial perception in the dark using echolocation challenges the vestibular system to function without substantial visual input for orientation. IR thermal video recordings show the complexity of bat flights in the field and suggest a highly dynamic role for the vestibular system in orientation and flight control. To examine this role, we carried out laboratory studies of flight behavior under illuminated and dark conditions, in both static and rotating obstacle tests, while administering heavy water (D2O) to impair vestibular inputs. Eptesicus carried out complex maneuvers through both fixed arrays of wires and a rotating obstacle array using both vision and echolocation, or when guided by echolocation alone. When treated with D2O and simultaneously deprived of visual cues, bats showed considerable decrements in performance. These data indicate that big brown bats use both vision and echolocation to provide spatial registration for head position information generated by the vestibular system.

  12. Usability of Three-dimensional Augmented Visual Cues Delivered by Smart Glasses on (Freezing of) Gait in Parkinson’s Disease

    PubMed Central

    Janssen, Sabine; Bolte, Benjamin; Nonnekes, Jorik; Bittner, Marian; Bloem, Bastiaan R.; Heida, Tjitske; Zhao, Yan; van Wezel, Richard J. A.

    2017-01-01

    External cueing is a potentially effective strategy to reduce freezing of gait (FOG) in persons with Parkinson's disease (PD). Case reports suggest that three-dimensional (3D) cues might be more effective in reducing FOG than two-dimensional cues. We investigated the usability of 3D augmented reality visual cues delivered by smart glasses, in comparison to conventional 3D transverse bars on the floor and auditory cueing via a metronome, in reducing FOG and improving gait parameters. In laboratory experiments, 25 persons with PD and FOG performed walking tasks while wearing custom-made smart glasses under five conditions, in the end-of-dose period. In two conditions, augmented visual cues (bars/staircase) were displayed via the smart glasses. The control conditions involved conventional 3D transverse bars on the floor, auditory cueing via a metronome, and no cueing. The number of FOG episodes and the percentage of time spent frozen were rated from video recordings. The stride length and its variability, cycle time and its variability, cadence, and speed were calculated from motion data collected with a motion capture suit equipped with 17 inertial measurement units. A total of 300 FOG episodes occurred in 19 out of 25 participants. There were no statistically significant differences in the number of FOG episodes or the percentage of time spent frozen across the five conditions. The conventional bars increased stride length, cycle time, and stride length variability, while decreasing cadence and speed. No effects were found for the other conditions. Participants preferred the metronome most and the augmented staircase least. They suggested improving the comfort, esthetics, usability, field of view, and stability of the smart glasses on the head, and reducing their weight and size. In their current form, augmented visual cues delivered by smart glasses are not beneficial for persons with PD and FOG. This could be attributable to distraction, blockage of visual feedback, insufficient familiarization with the smart glasses, or display of the visual cues in the central rather than peripheral visual field. Future smart glasses will need to be more lightweight, comfortable, and user friendly to avoid distraction and blockage of sensory feedback, thus increasing usability. PMID:28659862

  13. Are face representations depth cue invariant?

    PubMed

    Dehmoobadsharifabadi, Armita; Farivar, Reza

    2016-06-01

    The visual system can process three-dimensional depth cues defining the surfaces of objects, but it is unclear whether such information contributes to complex object recognition, including face recognition. The processing of different depth cues involves both the dorsal and ventral visual pathways. We investigated whether facial surfaces defined by individual depth cues resulted in meaningful face representations, that is, representations that maintain the relationships among faces as defined in a multidimensional face space. We measured face identity aftereffects for facial surfaces defined by individual depth cues (Experiments 1 and 2) and tested whether the aftereffect transfers across depth cues (Experiments 3 and 4). Facial surfaces and their morphs to the average face were defined purely by one of shading, texture, motion, or binocular disparity. We obtained identification thresholds for matched (matched identity between adapting and test stimuli), non-matched (non-matched identity between adapting and test stimuli), and no-adaptation (showing only the test stimuli) conditions for each cue and across different depth cues. We found robust face identity aftereffects in both experiments. Our results suggest that depth cues do contribute to forming meaningful face representations that are depth cue invariant. Depth cue invariance would require integration of information across different areas and different pathways for object recognition, and this in turn has important implications for cortical models of visual object recognition.

  14. Are multiple visual short-term memory storages necessary to explain the retro-cue effect?

    PubMed

    Makovski, Tal

    2012-06-01

    Recent research has shown that change detection performance is enhanced when, during the retention interval, attention is cued to the location of the upcoming test item. This retro-cue advantage has led some researchers to suggest that visual short-term memory (VSTM) is divided into a durable, limited-capacity storage and a more fragile, high-capacity storage. Consequently, performance is poor on the no-cue trials because fragile VSTM is overwritten by the test display and only durable VSTM is accessible under these conditions. In contrast, performance is improved in the retro-cue condition because attention keeps fragile VSTM accessible. The aim of the present study was to test the assumptions underlying this two-storage account. Participants were asked to encode an array of colors for a change detection task involving no-cue and retro-cue trials. A retro-cue advantage was found even when the cue was presented after a visual (Experiment 1) or a central (Experiment 2) interference. Furthermore, the magnitude of the interference was comparable between the no-cue and retro-cue trials. These data undermine the main empirical support for the two-storage account and suggest that the presence of a retro-cue benefit cannot be used to differentiate between different VSTM storages.

  15. Trading of dynamic interaural time and level difference cues and its effect on the auditory motion-onset response measured with electroencephalography.

    PubMed

    Altmann, Christian F; Ueda, Ryuhei; Bucher, Benoit; Furukawa, Shigeto; Ono, Kentaro; Kashino, Makio; Mima, Tatsuya; Fukuyama, Hidenao

    2017-10-01

    Interaural time (ITD) and level differences (ILD) constitute the two main cues for sound localization in the horizontal plane. Despite extensive research in animal models and humans, how these two cues are integrated into a unified percept remains far from clear. In this study, our aim was to test with human electroencephalography (EEG) whether integration of dynamic ITD and ILD cues is reflected in the so-called motion-onset response (MOR), an evoked potential elicited by moving sound sources. To this end, ITD and ILD trajectories were determined individually by cue-trading psychophysics. We then measured EEG while subjects were presented with either static click-trains or click-trains that contained a dynamic portion at the end. The dynamic part was created by combining ITD with ILD either congruently, to elicit the percept of a rightward or leftward moving sound, or incongruently, to elicit the percept of a static sound. In two experiments that differed in the method used to derive individual dynamic cue-trading stimuli, we observed an MOR with at least a change-N1 (cN1) component for both the congruent and incongruent conditions at about 160-190 ms after motion onset. A significant change-P2 (cP2) component for both the congruent and incongruent ITD/ILD combinations was found only in the second experiment, peaking at about 250 ms after motion onset. In sum, this study shows that a sound which, by a combination of counterbalanced ITD and ILD cues, induces a static percept can still elicit a motion-onset response, indicative of independent ITD and ILD processing at the level of the MOR, a component that has been proposed to be, at least partly, generated in non-primary auditory cortex.
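
    As a rough illustration of how counterbalanced ITD/ILD stimuli can be constructed, the sketch below computes ITD from the Woodworth spherical-head approximation and converts it to a perceptually equivalent ILD via a time-intensity trading ratio. The 40 us/dB ratio is an illustrative assumption; the study measured individual trading functions psychophysically.

```python
import math

# Woodworth spherical-head approximation: ITD = (a/c) * (theta + sin(theta)).
A = 0.0875   # head radius, m
C = 343.0    # speed of sound, m/s

def itd_us(azimuth_deg):
    th = math.radians(azimuth_deg)
    return 1e6 * (A / C) * (th + math.sin(th))

TRADING_US_PER_DB = 40.0   # assumed time-intensity trading ratio

for az in (10, 30, 60):
    itd = itd_us(az)
    # An ILD of this size, with opposite sign, would roughly cancel the
    # ITD perceptually (the "incongruent" static-percept combination).
    ild_equivalent = itd / TRADING_US_PER_DB
    print(f"{az} deg: ITD ~ {itd:.0f} us ~ ILD of {ild_equivalent:.1f} dB")
```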

  16. Helicopter flight simulation motion platform requirements

    NASA Astrophysics Data System (ADS)

    Schroeder, Jeffery Allyn

    Flight simulators attempt to reproduce in-flight pilot-vehicle behavior on the ground. This reproduction is challenging for helicopter simulators, as the pilot is often inextricably dependent on external cues for pilot-vehicle stabilization. One important simulator cue is platform motion; however, its required fidelity is unknown. To determine the required motion fidelity, several unique experiments were performed. A large displacement motion platform was used that allowed pilots to fly tasks with matched motion and visual cues. Then, the platform motion was modified to give cues varying from full motion to no motion. Several key results were found. First, lateral and vertical translational platform cues had significant effects on fidelity. Their presence improved performance and reduced pilot workload. Second, yaw and roll rotational platform cues were not as important as the translational platform cues. In particular, the yaw rotational motion platform cue did not appear at all useful in improving performance or reducing workload. Third, when the lateral translational platform cue was combined with visual yaw rotational cues, pilots believed the platform was rotating when it was not. Thus, simulator systems can be made more efficient by proper combination of platform and visual cues. Fourth, motion fidelity specifications were revised that now provide simulator users with a better prediction of motion fidelity based upon the frequency responses of their motion control laws. Fifth, vertical platform motion affected pilot estimates of steady-state altitude during altitude repositionings. This refutes the view that pilots estimate altitude and altitude rate in simulation solely from visual cues. Finally, the combined results led to a general method for configuring helicopter motion systems and for developing simulator tasks that more likely represent actual flight. The overall results can serve as a guide to future simulator designers and to today's operators.
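
    The "frequency responses of motion control laws" referred to here are typically those of high-pass washout filters, which return the platform toward neutral so that sustained accelerations are not reproduced. A minimal sketch, assuming a second-order high-pass washout with example parameters (not the thesis's recommended values), evaluates its gain and phase at 1 rad/s, the frequency at which such motion-fidelity criteria are commonly stated:

```python
import cmath

# Second-order high-pass washout filter:
# H(s) = K * s^2 / (s^2 + 2*z*wn*s + wn^2).
# K, z, wn below are example values only.

def washout_response(omega, K=0.6, z=0.7, wn=0.5):
    s = 1j * omega
    H = K * s**2 / (s**2 + 2 * z * wn * s + wn**2)
    return abs(H), cmath.phase(H)

gain, phase = washout_response(1.0)   # evaluate at 1 rad/s
print(f"gain {gain:.2f}, phase lead {phase * 180 / cmath.pi:.1f} deg")
```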

  17. Heuristics of Reasoning and Analogy in Children's Visual Perspective Taking.

    ERIC Educational Resources Information Center

    Yaniv, Ilan; Shatz, Marilyn

    1990-01-01

    In three experiments, children of three through six years of age were generally better able to reproduce a perceiver's perspective if a visual cue in the perceiver's line of sight was salient. Children had greater difficulty when the task hinged on attending to configural cues. Availability of distinctive cues affixed to objects facilitated…

  18. The Influence of Alertness on Spatial and Nonspatial Components of Visual Attention

    ERIC Educational Resources Information Center

    Matthias, Ellen; Bublak, Peter; Muller, Hermann J.; Schneider, Werner X.; Krummenacher, Joseph; Finke, Kathrin

    2010-01-01

    Three experiments investigated whether spatial and nonspatial components of visual attention would be influenced by changes in (healthy, young) subjects' level of alertness and whether such effects on separable components would occur independently of each other. The experiments used a no-cue/alerting-cue design with varying cue-target stimulus…

  19. Multisensory Cues Capture Spatial Attention Regardless of Perceptual Load

    ERIC Educational Resources Information Center

    Santangelo, Valerio; Spence, Charles

    2007-01-01

    We compared the ability of auditory, visual, and audiovisual (bimodal) exogenous cues to capture visuo-spatial attention under conditions of no load versus high perceptual load. Participants had to discriminate the elevation (up vs. down) of visual targets preceded by either unimodal or bimodal cues under conditions of high perceptual load (in…

  20. Learning to Match Auditory and Visual Speech Cues: Social Influences on Acquisition of Phonological Categories

    ERIC Educational Resources Information Center

    Altvater-Mackensen, Nicole; Grossmann, Tobias

    2015-01-01

    Infants' language exposure largely involves face-to-face interactions providing acoustic and visual speech cues but also social cues that might foster language learning. Yet, both audiovisual speech information and social information have so far received little attention in research on infants' early language development. Using a preferential…

  1. When they listen and when they watch: Pianists’ use of nonverbal audio and visual cues during duet performance

    PubMed Central

    Goebl, Werner

    2015-01-01

    Nonverbal auditory and visual communication helps ensemble musicians predict each other’s intentions and coordinate their actions. When structural characteristics of the music make predicting co-performers’ intentions difficult (e.g., following long pauses or during ritardandi), reliance on incoming auditory and visual signals may change. This study tested whether attention to visual cues during piano–piano and piano–violin duet performance increases in such situations. Pianists performed the secondo part to three duets, synchronizing with recordings of violinists or pianists playing the primo parts. Secondos’ access to incoming audio and visual signals and to their own auditory feedback was manipulated. Synchronization was most successful when primo audio was available, deteriorating when primo audio was removed and only cues from primo visual signals were available. Visual cues were used effectively following long pauses in the music, however, even in the absence of primo audio. Synchronization was unaffected by the removal of secondos’ own auditory feedback. Differences were observed in how successfully piano–piano and piano–violin duos synchronized, but these effects of instrument pairing were not consistent across pieces. Pianists’ success at synchronizing with violinists and other pianists is likely moderated by piece characteristics and individual differences in the clarity of cueing gestures used. PMID:26279610

  2. Using auditory-visual speech to probe the basis of noise-impaired consonant-vowel perception in dyslexia and auditory neuropathy

    NASA Astrophysics Data System (ADS)

    Ramirez, Joshua; Mann, Virginia

    2005-08-01

    Both dyslexics and auditory neuropathy (AN) subjects show inferior consonant-vowel (CV) perception in noise, relative to controls. To better understand these impairments, natural acoustic speech stimuli that were masked in speech-shaped noise at various intensities were presented to dyslexic, AN, and control subjects either in isolation or accompanied by visual articulatory cues. AN subjects were expected to benefit from the pairing of visual articulatory cues and auditory CV stimuli, provided that their speech perception impairment reflects a relatively peripheral auditory disorder. Assuming that dyslexia reflects a general impairment of speech processing rather than a disorder of audition, dyslexics were not expected to similarly benefit from an introduction of visual articulatory cues. The results revealed an increased effect of noise masking on the perception of isolated acoustic stimuli by both dyslexic and AN subjects. More importantly, dyslexics showed less effective use of visual articulatory cues in identifying masked speech stimuli and lower visual baseline performance relative to AN subjects and controls. Last, a significant positive correlation was found between reading ability and the ameliorating effect of visual articulatory cues on speech perception in noise. These results suggest that some reading impairments may stem from a central deficit of speech processing.

  3. Forgotten but Not Gone: Retro-Cue Costs and Benefits in a Double-Cueing Paradigm Suggest Multiple States in Visual Short-Term Memory

    ERIC Educational Resources Information Center

    van Moorselaar, Dirk; Olivers, Christian N. L.; Theeuwes, Jan; Lamme, Victor A. F.; Sligte, Ilja G.

    2015-01-01

    Visual short-term memory (VSTM) performance is enhanced when the to-be-tested item is cued after encoding. This so-called retro-cue benefit is typically accompanied by a cost for the noncued items, suggesting that information is lost from VSTM upon presentation of a retrospective cue. Here we assessed whether noncued items can be restored to VSTM…

  4. Task-relevant information is prioritized in spatiotemporal contextual cueing.

    PubMed

    Higuchi, Yoko; Ueda, Yoshiyuki; Ogawa, Hirokazu; Saiki, Jun

    2016-11-01

    Implicit learning of visual contexts facilitates search performance, a phenomenon known as contextual cueing; however, little is known about contextual cueing in situations in which multidimensional regularities exist simultaneously. In everyday vision, different kinds of information, such as object identity and location, appear simultaneously and interact with one another. We tested the hypothesis that, in contextual cueing, when multiple regularities are present, the regularities most relevant to our behavioral goals are prioritized. Previous studies of contextual cueing have commonly used the visual search paradigm. However, this paradigm is not suitable for directing participants' attention to a particular regularity. Therefore, we developed a new paradigm, the "spatiotemporal contextual cueing paradigm," and manipulated task-relevant and task-irrelevant regularities. In four experiments, we demonstrated that task-relevant regularities were more responsible for search facilitation than task-irrelevant regularities. This finding suggests that our visual behavior is focused on regularities that are relevant to our current goal.

  5. A model for the pilot's use of motion cues in roll-axis tracking tasks

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Junker, A. M.

    1977-01-01

    Simulated target-following and disturbance-regulation tasks were explored with subjects using visual-only and combined visual and motion cues. The effects of motion cues on task performance and pilot response behavior were appreciably different for the two task configurations and were consistent with data reported in earlier studies for similar task configurations. The optimal-control model for pilot/vehicle systems provided a task-independent framework for accounting for the pilot's use of motion cues. Specifically, the availability of motion cues was modeled by augmenting the set of perceptual variables to include position, rate, acceleration, and acceleration-rate of the motion simulator, and results were consistent with the hypothesis of attention-sharing between visual and motion variables. This straightforward informational model allowed accurate model predictions of the effects of motion cues on a variety of response measures for both the target-following and disturbance-regulation tasks.
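
    In Levison's attention-sharing treatment, the observation noise on each perceptual channel scales inversely with the fraction of attention allocated to it; this is how the informational model above accounts for sharing between visual and motion variables. A minimal sketch, with an assumed baseline noise/signal ratio and an assumed attention split:

```python
# Attention-sharing sketch (after Levison's observation-noise model):
# effective observation noise on a channel grows as attention to that
# channel shrinks. Baseline ratio and attention split are assumptions.

FULL_ATTENTION_NOISE_RATIO = 0.01   # ~ -20 dB noise/signal ratio

def observation_noise(signal_var, attention_fraction):
    return FULL_ATTENTION_NOISE_RATIO * signal_var / attention_fraction

# e.g., 60% of attention on the visual error signal, 40% on motion cues
for channel, f in (("visual error", 0.6), ("motion", 0.4)):
    print(channel, observation_noise(1.0, f))
```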

  6. Selective Maintenance in Visual Working Memory Does Not Require Sustained Visual Attention

    PubMed Central

    Hollingworth, Andrew; Maxcey-Richard, Ashleigh M.

    2012-01-01

    In four experiments, we tested whether sustained visual attention is required for the selective maintenance of objects in VWM. Participants performed a color change-detection task. During the retention interval, a valid cue indicated the item that would be tested. Change detection performance was higher in the valid-cue condition than in a neutral-cue control condition. To probe the role of visual attention in the cuing effect, on half of the trials, a difficult search task was inserted after the cue, precluding sustained attention on the cued item. The addition of the search task produced no observable decrement in the magnitude of the cuing effect. In a complementary test, search efficiency was not impaired by simultaneously prioritizing an object for retention in VWM. The results demonstrate that selective maintenance in VWM can be dissociated from the locus of visual attention. PMID:23067118

  7. Peripheral Visual Cues: Their Fate in Processing and Effects on Attention and Temporal-Order Perception.

    PubMed

    Tünnermann, Jan; Scharlau, Ingrid

    2016-01-01

    Peripheral visual cues lead to large shifts in psychometric distributions of temporal-order judgments. In one view, such shifts are attributed to attention speeding up processing of the cued stimulus, so-called prior entry. However, sometimes these shifts are so large that it is unlikely that they are caused by attention alone. Here we tested the prevalent alternative explanation that the cue is sometimes confused with the target on a perceptual level, bolstering the shift of the psychometric function. We applied a novel model of cued temporal-order judgments, derived from Bundesen's Theory of Visual Attention. We found that cue-target confusions indeed contribute to shifting psychometric functions. However, cue-induced changes in the processing rates of the target stimuli play an important role, too. At smaller cueing intervals, the cue increased the processing speed of the target. At larger intervals, inhibition of return was predominant. Earlier studies of cued TOJs were insensitive to these effects because in psychometric distributions they are concealed by the conjoint effects of cue-target confusions and processing rate changes.
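
    One way to make the model concrete: TVA-style accounts treat the two stimuli as independent exponential races, so the probability of perceiving the target first has a closed form in the processing rates and the onset asynchrony. The sketch below is a minimal illustration of that race with invented rates; it omits the cue-target confusion mixture that the paper also models.

        # Hedged sketch of an exponential race model for cued TOJs.
        import numpy as np

        def p_target_first(v_target, v_ref, soa):
            """P(target perceived first); soa > 0 means the target's
            onset leads the reference by soa seconds."""
            if soa >= 0:
                head = 1.0 - np.exp(-v_target * soa)   # wins during head start
                return head + (1.0 - head) * v_target / (v_target + v_ref)
            head = 1.0 - np.exp(-v_ref * (-soa))       # reference head start
            return (1.0 - head) * v_target / (v_target + v_ref)

        # A cue that raises the target's rate (e.g., 60 vs. 30 items/s)
        # shifts the whole psychometric function, mimicking prior entry.
        for soa in (-0.05, 0.0, 0.05):
            print(soa, round(p_target_first(60.0, 30.0, soa), 3))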

  8. Dissociating emotion-induced blindness and hypervision.

    PubMed

    Bocanegra, Bruno R; Zeelenberg, René

    2009-12-01

    Previous findings suggest that emotional stimuli sometimes improve (emotion-induced hypervision) and sometimes impair (emotion-induced blindness) the visual perception of subsequent neutral stimuli. We hypothesized that these differential carryover effects might be due to 2 distinct emotional influences in visual processing. On the one hand, emotional stimuli trigger a general enhancement in the efficiency of visual processing that can carry over onto other stimuli. On the other hand, emotional stimuli benefit from a stimulus-specific enhancement in later attentional processing at the expense of competing visual stimuli. We investigated whether detrimental (blindness) and beneficial (hypervision) carryover effects of emotion in perception can be dissociated within a single experimental paradigm. In 2 experiments, we manipulated the temporal competition for attention between an emotional cue word and a subsequent neutral target word by varying cue-target interstimulus interval (ISI) and cue visibility. Interestingly, emotional cues impaired target identification at short ISIs but improved target identification when competition was diminished by either increasing ISI or reducing cue visibility, suggesting that emotional significance of stimuli can improve and impair visual performance through distinct perceptual mechanisms.

  9. The disassociation of visual and acoustic conspecific cues decreases discrimination by female zebra finches (Taeniopygia guttata).

    PubMed

    Campbell, Dana L M; Hauber, Mark E

    2009-08-01

    Female zebra finches (Taeniopygia guttata) use visual and acoustic traits for accurate recognition of male conspecifics. Evidence from video playbacks confirms that both sensory modalities are important for conspecific and species discrimination, but experimental evidence of the individual roles of these cue types affecting live conspecific recognition is limited. In a spatial paradigm to test discrimination, the authors used live male zebra finch stimuli of 2 color morphs, wild-type (conspecific) and white with a painted black beak (foreign), producing 1 of 2 vocalization types: songs and calls learned from zebra finch parents (conspecific) or cross-fostered songs and calls learned from Bengalese finch (Lonchura striata vars. domestica) foster parents (foreign). The authors found that female zebra finches consistently preferred males with conspecific visual and acoustic cues over males with foreign cues, but did not discriminate when the conspecific and foreign visual and acoustic cues were mismatched. These results indicate the importance of both visual and acoustic features for female zebra finches when discriminating between live conspecific males. Copyright 2009 APA, all rights reserved.

  10. Virtual Reality for Enhanced Ecological Validity and Experimental Control in the Clinical, Affective and Social Neurosciences

    PubMed Central

    Parsons, Thomas D.

    2015-01-01

    An essential tension can be found between researchers interested in ecological validity and those concerned with maintaining experimental control. Research in the human neurosciences often involves the use of simple and static stimuli lacking many of the potentially important aspects of real world activities and interactions. While this research is valuable, there is a growing interest in the human neurosciences to use cues about target states in the real world via multimodal scenarios that involve visual, semantic, and prosodic information. These scenarios should include dynamic stimuli presented concurrently or serially in a manner that allows researchers to assess the integrative processes carried out by perceivers over time. Furthermore, there is growing interest in contextually embedded stimuli that can constrain participant interpretations of cues about a target’s internal states. Virtual reality environments proffer assessment paradigms that combine the experimental control of laboratory measures with emotionally engaging background narratives to enhance affective experience and social interactions. The present review highlights the potential of virtual reality environments for enhanced ecological validity in the clinical, affective, and social neurosciences. PMID:26696869

  11. Virtual Reality for Enhanced Ecological Validity and Experimental Control in the Clinical, Affective and Social Neurosciences.

    PubMed

    Parsons, Thomas D

    2015-01-01

    An essential tension can be found between researchers interested in ecological validity and those concerned with maintaining experimental control. Research in the human neurosciences often involves the use of simple and static stimuli lacking many of the potentially important aspects of real world activities and interactions. While this research is valuable, there is a growing interest in the human neurosciences to use cues about target states in the real world via multimodal scenarios that involve visual, semantic, and prosodic information. These scenarios should include dynamic stimuli presented concurrently or serially in a manner that allows researchers to assess the integrative processes carried out by perceivers over time. Furthermore, there is growing interest in contextually embedded stimuli that can constrain participant interpretations of cues about a target's internal states. Virtual reality environments proffer assessment paradigms that combine the experimental control of laboratory measures with emotionally engaging background narratives to enhance affective experience and social interactions. The present review highlights the potential of virtual reality environments for enhanced ecological validity in the clinical, affective, and social neurosciences.

  12. Neural substrates of resisting craving during cigarette cue exposure.

    PubMed

    Brody, Arthur L; Mandelkern, Mark A; Olmstead, Richard E; Jou, Jennifer; Tiongson, Emmanuelle; Allen, Valerie; Scheibal, David; London, Edythe D; Monterosso, John R; Tiffany, Stephen T; Korb, Alex; Gan, Joanna J; Cohen, Mark S

    2007-09-15

    In cigarette smokers, the most commonly reported areas of brain activation during visual cigarette cue exposure are the prefrontal, anterior cingulate, and visual cortices. We sought to determine changes in brain activity in response to cigarette cues when smokers actively resist craving. Forty-two tobacco-dependent smokers underwent functional magnetic resonance imaging, during which they were presented with videotaped cues. Three cue presentation conditions were tested: cigarette cues with subjects allowing themselves to crave (cigarette cue crave), cigarette cues with the instruction to resist craving (cigarette cue resist), and matched neutral cues. Activation was found in the cigarette cue resist (compared with the cigarette cue crave) condition in the left dorsal anterior cingulate cortex (ACC), posterior cingulate cortex (PCC), and precuneus. Lower magnetic resonance signal for the cigarette cue resist condition was found in the cuneus bilaterally, left lateral occipital gyrus, and right postcentral gyrus. These relative activations and deactivations were more robust when the cigarette cue resist condition was compared with the neutral cue condition. Suppressing craving during cigarette cue exposure involves activation of limbic (and related) brain regions and deactivation of primary sensory and motor cortices.

  13. Response of hatchling Komodo Dragons (Varanus komodoensis) at Denver Zoo to visual and chemical cues arising from prey.

    PubMed

    Chiszar, David; Krauss, Susan; Shipley, Bryon; Trout, Tim; Smith, Hobart M

    2009-01-01

    Five hatchling Komodo Dragons (Varanus komodoensis) at Denver Zoo were observed in two experiments that studied the effects of visual and chemical cues arising from prey. Rate of tongue flicking was recorded in Experiment 1, and the amount of time the lizards spent interacting with stimuli was recorded in Experiment 2. Our hypothesis was that young V. komodoensis would be more dependent upon vision than chemoreception, especially when dealing with live, moving prey. Although visual cues, including prey motion, had a significant effect, chemical cues had a far stronger effect. Implications of this falsification of our initial hypothesis are discussed.

  14. Visual spatial cue use for guiding orientation in two-to-three-year-old children

    PubMed Central

    van den Brink, Danielle; Janzen, Gabriele

    2013-01-01

    In spatial development, representations of the environment and the use of spatial cues change over time. To date, the influence of individual differences in skills relevant for orientation and navigation has not received much attention. The current study investigated orientation abilities on the basis of visual spatial cues in 2–3-year-old children, and assessed factors that possibly influence spatial task performance. Thirty-month-olds and 35-month-olds performed an on-screen Virtual Reality (VR) orientation task searching for an animated target in the presence of visual self-movement cues and landmark information. Results show that, in contrast to 30-month-old children, 35-month-olds were successful in using visual spatial cues for maintaining orientation. Neither age group benefited from landmarks present in the environment, suggesting that successful task performance relied on the use of optic flow cues, rather than object-to-object relations. Analysis of individual differences revealed that 2-year-olds who were relatively more independent in comparison to their peers, as measured by the daily living skills scale of the parental questionnaire Vineland-Screener, were most successful at the orientation task. These results support previous findings indicating that the use of various spatial cues gradually improves during early childhood. Our data show that a developmental transition in spatial cue use can be witnessed within a relatively short period of only 5 months. Furthermore, this study indicates that rather than chronological age, individual differences may play a role in successful use of visual cues for spatial updating in an orientation task. Future studies are necessary to assess the exact nature of these individual differences. PMID:24368903

  15. Visual spatial cue use for guiding orientation in two-to-three-year-old children.

    PubMed

    van den Brink, Danielle; Janzen, Gabriele

    2013-01-01

    In spatial development, representations of the environment and the use of spatial cues change over time. To date, the influence of individual differences in skills relevant for orientation and navigation has not received much attention. The current study investigated orientation abilities on the basis of visual spatial cues in 2-3-year-old children, and assessed factors that possibly influence spatial task performance. Thirty-month-olds and 35-month-olds performed an on-screen Virtual Reality (VR) orientation task searching for an animated target in the presence of visual self-movement cues and landmark information. Results show that, in contrast to 30-month-old children, 35-month-olds were successful in using visual spatial cues for maintaining orientation. Neither age group benefited from landmarks present in the environment, suggesting that successful task performance relied on the use of optic flow cues, rather than object-to-object relations. Analysis of individual differences revealed that 2-year-olds who were relatively more independent in comparison to their peers, as measured by the daily living skills scale of the parental questionnaire Vineland-Screener, were most successful at the orientation task. These results support previous findings indicating that the use of various spatial cues gradually improves during early childhood. Our data show that a developmental transition in spatial cue use can be witnessed within a relatively short period of only 5 months. Furthermore, this study indicates that rather than chronological age, individual differences may play a role in successful use of visual cues for spatial updating in an orientation task. Future studies are necessary to assess the exact nature of these individual differences.

  16. A treat for the eyes. An eye-tracking study on children's attention to unhealthy and healthy food cues in media content.

    PubMed

    Spielvogel, Ines; Matthes, Jörg; Naderer, Brigitte; Karsay, Kathrin

    2018-06-01

    Based on cue reactivity theory, food cues embedded in media content can lead to physiological and psychological responses in children. Research suggests that unhealthy food cues are represented more extensively and interactively in children's media environments than healthy ones. However, it is not yet clear whether children react differently to unhealthy compared with healthy food cues. In an experimental study with 56 children (55.4% girls; M age = 8.00, SD = 1.58), we used eye-tracking to determine children's attention to unhealthy and healthy food cues embedded in a narrative cartoon movie. Besides varying the food type (i.e., healthy vs. unhealthy), we also manipulated the integration levels of food cues with characters (i.e., level of food integration; no interaction vs. handling vs. consumption), and we assessed children's individual susceptibility factors by measuring the impact of their hunger level. Our results indicated that unhealthy food cues attract children's visual attention to a larger extent than healthy cues. However, their initial visual interest did not differ between unhealthy and healthy food cues. Furthermore, an increase in the level of food integration led to an increase in visual attention. Our findings showed no moderating impact of hunger. We conclude that especially unhealthy food cues with an interactive connection trigger cue reactivity in children. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Validating Visual Cues In Flight Simulator Visual Displays

    NASA Astrophysics Data System (ADS)

    Aronson, Moses

    1987-09-01

    Currently, evaluation of visual simulators is performed either by pilot opinion questionnaires or by comparison of aircraft terminal performance. The approach here is to compare pilot performance in the flight simulator with a visual display to his performance doing the same visual task in the aircraft, as an indication that the visual cues are identical. The A-7 Night Carrier Landing task was selected. Performance measures which had high pilot-performance prediction were used to compare two samples of existing pilot performance data to prove that the visual cues evoked the same performance. The performance of four pilots making 491 night landing approaches in an A-7 prototype part-task trainer was compared with the performance of 3 pilots performing 27 A-7E carrier landing qualification approaches on the CV-60 aircraft carrier. The results show that the pilots' performances were similar, supporting the conclusion that the visual cues provided in the simulator were identical to those provided in the real-world situation. Differences between the flight simulator's flight characteristics and the aircraft's have less of an effect than the pilots' individual performances. The measurement parameters used in the comparison can be used for validating the adequacy of the visual display for training.

  18. I can see what you are saying: Auditory labels reduce visual search times.

    PubMed

    Cho, Kit W

    2016-10-01

    The present study explored the self-directed-speech effect, the finding that relative to silent reading of a label (e.g., DOG), saying it aloud reduces visual search reaction times (RTs) for locating a target picture among distractors. Experiment 1 examined whether this effect is due to a confound in the differences in the number of cues in self-directed speech (two) vs. silent reading (one) and tested whether self-articulation is required for the effect. The results showed that self-articulation is not required and that merely hearing the auditory label reduces visual search RTs relative to silent reading. This finding also rules out the number-of-cues confound. Experiment 2 examined whether hearing an auditory label activates more prototypical features of the label's referent and whether the auditory-label benefit is moderated by the target's imagery concordance (the degree to which the target picture matches the mental picture that is activated by a written label for the target). When the target imagery concordance was high, RTs following the presentation of a high-prototypicality picture or auditory cue were comparable and shorter than RTs following a visual label or low-prototypicality picture cue. However, when the target imagery concordance was low, RTs following an auditory cue were shorter than the comparable RTs following the picture cues and visual-label cue. The results suggest that an auditory label activates both prototypical and atypical features of a concept and can facilitate visual search RTs even when compared to picture primes. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. A Model of Manual Control with Perspective Scene Viewing

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara Townsend

    2013-01-01

    A model of manual control during perspective scene viewing is presented, which combines the Crossover Model with a simplified model of perspective-scene viewing and visual-cue selection. The model is developed for a particular example task: an idealized constant-altitude task in which the operator controls longitudinal position in the presence of both longitudinal and pitch disturbances. An experiment is performed to develop and validate the model. The model corresponds closely with the experimental measurements, and identified model parameters are highly consistent with the visual cues available in the perspective scene. The modeling results indicate that operators used one visual cue for position control, and another visual cue for velocity control (lead generation). Additionally, operators responded more quickly to rotation (pitch) than translation (longitudinal).
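
    The Crossover Model referenced here has a compact frequency-domain form: near the crossover frequency, the combined pilot-plus-vehicle open loop behaves like an integrator with gain plus a pure time delay. The sketch below states that form; the parameter values are illustrative assumptions, not identified values from this report.

        # Hedged sketch of the Crossover Model's open-loop form:
        #   Yp(jw) * Yc(jw) ~= (w_c / jw) * exp(-jw * tau)
        import numpy as np

        def crossover_open_loop(w, w_c=4.0, tau=0.2):
            """Frequency response with illustrative crossover
            frequency w_c (rad/s) and effective delay tau (s)."""
            jw = 1j * w
            return (w_c / jw) * np.exp(-jw * tau)

        w = np.logspace(-1, 1, 5)              # rad/s
        print(np.abs(crossover_open_loop(w)))  # ~1/w magnitude roll-off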

  20. The language used in describing autobiographical memories prompted by life period visually presented verbal cues, event-specific visually presented verbal cues and short musical clips of popular music.

    PubMed

    Zator, Krysten; Katz, Albert N

    2017-07-01

    Here, we examined linguistic differences in the reports of memories produced by three cueing methods. Two groups of young adults were cued visually, either by words representing events or popular cultural phenomena that took place when they were 5, 10, or 16 years of age, or by a general lifetime-period word cue directing them to that period of their life. A third group heard 30-second-long musical clips of songs popular during the same three time periods. In each condition, participants typed a specific event memory evoked by the cue, and these typed memories were subjected to analysis by the Linguistic Inquiry and Word Count (LIWC) program. Differences in the reports produced indicated that listening to music evoked memories embodied in motor-perceptual systems more so than memories evoked by our word-cueing conditions. Additionally, relative to music cues, lifetime-period word cues produced memories with reliably more uses of personal pronouns, past-tense terms, and negative emotions. The findings provide evidence for the embodiment of autobiographical memories, and for how those differ when the cues emphasise different aspects of the encoded events.

  1. Effect of Visual Cues on the Resolution of Perceptual Ambiguity in Parkinson’s Disease and Normal Aging

    PubMed Central

    Díaz-Santos, Mirella; Cao, Bo; Mauro, Samantha A.; Yazdanbakhsh, Arash; Neargarder, Sandy; Cronin-Golomb, Alice

    2017-01-01

    Parkinson’s disease (PD) and normal aging have been associated with changes in visual perception, including reliance on external cues to guide behavior. This raises the question of the extent to which these groups use visual cues when disambiguating information. Twenty-seven individuals with PD, 23 normal control adults (NC), and 20 younger adults (YA) were presented a Necker cube in which one face was highlighted by thickening the lines defining the face. The hypothesis was that the visual cues would help PD and NC to exert better control over bistable perception. There were three conditions: passive viewing and two volitional-control conditions (hold: keep one percept in front; switch: speed up the alternation between the two percepts). In the Hold condition, the cue was either consistent or inconsistent with task instructions. Mean dominance durations (time spent on each percept) under passive viewing were comparable in PD and NC, and shorter in YA. PD and YA increased dominance durations in the Hold cue-consistent condition relative to NC, meaning that appropriate cues helped PD but not NC hold one perceptual interpretation. By contrast, in the Switch condition, NC and YA decreased dominance durations relative to PD, meaning that the use of cues helped NC but not PD in expediting the switch between percepts. Provision of low-level cues has effects on volitional control in PD that differ from those in normal aging, and only under task-specific conditions does the use of such cues facilitate the resolution of perceptual ambiguity. PMID:25765890

  2. Simon Effect with and without Awareness of the Accessory Stimulus

    ERIC Educational Resources Information Center

    Treccani, Barbara; Umilta, Carlo; Tagliabue, Mariaelena

    2006-01-01

    The authors investigated whether a Simon effect could be observed in an accessory-stimulus Simon task when participants were unaware of the task-irrelevant accessory cue. In Experiment 1A a central visual target was accompanied by a suprathreshold visual lateral cue. A regular Simon effect (i.e., faster cue-response corresponding reaction times…

  3. Infants' Selective Attention to Reliable Visual Cues in the Presence of Salient Distractors

    ERIC Educational Resources Information Center

    Tummeltshammer, Kristen Swan; Mareschal, Denis; Kirkham, Natasha Z.

    2014-01-01

    With many features competing for attention in their visual environment, infants must learn to deploy attention toward informative cues while ignoring distractions. Three eye tracking experiments were conducted to investigate whether 6- and 8-month-olds (total N = 102) would shift attention away from a distractor stimulus to learn a cue-reward…

  4. Atypical Visual Orienting to Gaze- and Arrow-Cues in Adults with High Functioning Autism

    ERIC Educational Resources Information Center

    Vlamings, Petra H. J. M.; Stauder, Johannes E. A.; van Son, Ilona A. M.; Mottron, Laurent

    2005-01-01

    The present study investigates visual orienting to directional cues (arrow or eyes) in adults with high functioning autism (n = 19) and age matched controls (n = 19). A choice reaction time paradigm is used in which eye-or arrow direction correctly (congruent) or incorrectly (incongruent) cues target location. In typically developing participants,…

  5. Integration of visual and motion cues for simulator requirements and ride quality investigation

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1976-01-01

    Practical tools which can extend the state of the art of moving-base flight simulation for research and training are developed. Main approaches to this research effort include: (1) application of the vestibular model for perception of orientation based on motion cues, and optimum simulator motion controls; and (2) visual cues in landing.

  6. Automaticity of phasic alertness: evidence for a three-component model of visual cueing

    PubMed Central

    Lin, Zhicheng; Lu, Zhong-Lin

    2017-01-01

    The automaticity of phasic alertness is investigated using the attention network test. Results show that the cueing effect from the alerting cue—double cue—is strongly enhanced by the task relevance of visual cues, as determined by the informativeness of the orienting cue—single cue—that is being mixed (80% vs. 50% valid in predicting where the target will appear). Counterintuitively, the cueing effect from the alerting cue can be negatively affected by its visibility, such that masking the cue from awareness can reveal a cueing effect that is otherwise absent when the cue is visible. Evidently, top-down influences—in the form of contextual relevance and cue awareness—can have opposite influences on the cueing effect by the alerting cue. These findings lead us to the view that a visual cue can engage three components of attention—orienting, alerting, and inhibition—to determine the behavioral cueing effect. We propose that phasic alertness, particularly in the form of specific response readiness, is regulated by both internal, top-down expectation and external, bottom-up stimulus properties. In contrast to some existing views, we advance the perspective that phasic alertness is strongly tied to temporal orienting, attentional capture, and spatial orienting. Finally, we discuss how translating attention research to clinical applications would benefit from an improved ability to measure attention. To this end, controlling the degree of intraindividual variability in the attentional components and improving the precision of the measurement tools may prove vital. PMID:27173487

  7. Memory for Drug Related Visual Stimuli in Young Adult, Cocaine Dependent Polydrug Users

    PubMed Central

    Ray, Suchismita; Pandina, Robert; Bates, Marsha E.

    2015-01-01

    Background and Objectives: Implicit (unconscious) and explicit (conscious) memory associations with drugs have been examined primarily using verbal cues. However, drug seeking, drug use behaviors, and relapse in chronic cocaine and other drug users are frequently triggered by viewing substance-related visual cues in the environment. We thus examined implicit and explicit memory for drug picture cues to understand the relative extent to which conscious and unconscious memory facilitation of visual drug cues occurs during cocaine dependence. Methods: Memory for drug-related and neutral picture cues was assessed in 14 inpatient cocaine-dependent polydrug users and a comparison group of 21 young adults with limited drug experience (N = 35). Participants completed picture cue exposure, free recall, and recognition tasks to assess explicit memory, and a repetition priming task to assess implicit memory. Results: Drug cues, compared to neutral cues, were better explicitly recalled and implicitly primed, especially so in the cocaine group. In contrast, neutral cues were better explicitly recognized, especially in the control group. Conclusion: Certain forms of explicit and implicit memory for drug cues were enhanced in cocaine users compared to controls when memory was tested a short time following cue exposure. Enhanced unconscious memory processing of drug cues in chronic cocaine users may be a behavioral manifestation of heightened drug cue salience that supports drug seeking and taking. There may be value in expanding intervention techniques to utilize cocaine users’ implicit memory system. PMID:24588421

  8. Food and conspecific chemical cues modify visual behavior of zebrafish, Danio rerio.

    PubMed

    Stephenson, Jessica F; Partridge, Julian C; Whitlock, Kathleen E

    2012-06-01

    Animals use the different qualities of olfactory and visual sensory information to make decisions. Ethological and electrophysiological evidence suggests that there is cross-modal priming between these sensory systems in fish. We present the first experimental study showing that ecologically relevant chemical mixtures alter visual behavior, using adult male and female zebrafish, Danio rerio. Neutral-density filters were used to attenuate the light reaching the tank to an initial light intensity of 2.3×10^16 photons/s/m^2. Fish were exposed to food cue and to alarm cue. The light intensity was then increased by the removal of one layer of filter (nominal absorbance 0.3) every minute until, after 10 minutes, the light level was 15.5×10^16 photons/s/m^2. Adult male and female zebrafish responded to a moving visual stimulus at lower light levels if they had first been exposed to food cue, or to conspecific alarm cue. These results suggest the need for more integrative studies of sensory biology.

  9. Visuo-vestibular interaction: predicting the position of a visual target during passive body rotation.

    PubMed

    Mackrous, I; Simoneau, M

    2011-11-10

    Following body rotation, optimal updating of the position of a memorized target is attained when retinal error is perceived and corrective saccade is performed. Thus, it appears that these processes may enable the calibration of the vestibular system by facilitating the sharing of information between both reference frames. Here, it is assessed whether having sensory information regarding body rotation in the target reference frame could enhance an individual's learning rate to predict the position of an earth-fixed target. During rotation, participants had to respond when they felt their body midline had crossed the position of the target and received knowledge of result. During practice blocks, for two groups, visual cues were displayed in the same reference frame of the target, whereas a third group relied on vestibular information (vestibular-only group) to predict the location of the target. Participants, unaware of the role of the visual cues (visual cues group), learned to predict the location of the target and spatial error decreased from 16.2 to 2.0°, reflecting a learning rate of 34.08 trials (determined from fitting a falling exponential model). In contrast, the group aware of the role of the visual cues (explicit visual cues group) showed a faster learning rate (i.e. 2.66 trials) but similar final spatial error 2.9°. For the vestibular-only group, similar accuracy was achieved (final spatial error of 2.3°), but their learning rate was much slower (i.e. 43.29 trials). Transferring to the Post-test (no visual cues and no knowledge of result) increased the spatial error of the explicit visual cues group (9.5°), but it did not change the performance of the vestibular group (1.2°). Overall, these results imply that cognition assists the brain in processing the sensory information within the target reference frame. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
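
    The "falling exponential model" used to estimate those learning rates can be reproduced with a three-parameter decay fit. The sketch below shows the general recipe on synthetic data; the generating values are loosely patterned on the reported numbers and are not the study's data.

        # Hedged sketch: fit error(t) = a * exp(-t / rate) + floor.
        import numpy as np
        from scipy.optimize import curve_fit

        def falling_exp(trial, a, rate, floor):
            # Error decays from (a + floor) toward an asymptotic floor;
            # `rate` is the decay constant, in trials.
            return a * np.exp(-trial / rate) + floor

        trials = np.arange(1, 61)
        errors = (14.0 * np.exp(-trials / 34.0) + 2.0
                  + np.random.default_rng(1).normal(0, 0.5, trials.size))
        (a, rate, floor), _ = curve_fit(falling_exp, trials, errors,
                                        p0=(15.0, 20.0, 2.0))
        print(rate)  # recovered learning-rate constant, in trials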

  10. The contribution of visual information to the perception of speech in noise with and without informative temporal fine structure

    PubMed Central

    Stacey, Paula C.; Kitterick, Pádraig T.; Morris, Saffron D.; Sumner, Christian J.

    2017-01-01

    Understanding what is said in demanding listening situations is assisted greatly by looking at the face of a talker. Previous studies have observed that normal-hearing listeners can benefit from this visual information when a talker's voice is presented in background noise. These benefits have also been observed in quiet listening conditions in cochlear-implant users, whose device does not convey the informative temporal fine structure cues in speech, and when normal-hearing individuals listen to speech processed to remove these informative temporal fine structure cues. The current study (1) characterised the benefits of visual information when listening in background noise; and (2) used sine-wave vocoding to compare the size of the visual benefit when speech is presented with or without informative temporal fine structure. The accuracy with which normal-hearing individuals reported words in spoken sentences was assessed across three experiments. The availability of visual information and informative temporal fine structure cues was varied within and across the experiments. The results showed that visual benefit was observed using open- and closed-set tests of speech perception. The size of the benefit increased when informative temporal fine structure cues were removed. This finding suggests that visual information may play an important role in the ability of cochlear-implant users to understand speech in many everyday situations. Models of audio-visual integration were able to account for the additional benefit of visual information when speech was degraded and suggested that auditory and visual information was being integrated in a similar way in all conditions. The modelling results were consistent with the notion that audio-visual benefit is derived from the optimal combination of auditory and visual sensory cues. PMID:27085797
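
    The "optimal combination" that the modelling points to is usually formalized as maximum-likelihood integration: each cue's estimate is weighted by its reliability (inverse variance). The sketch below states that standard rule; the example numbers are invented, and the paper's actual models may differ in detail.

        # Hedged sketch of inverse-variance (MLE) audio-visual fusion.
        def mle_combine(est_a, var_a, est_v, var_v):
            """Combine auditory and visual estimates, weighting each
            cue by its reliability (inverse variance)."""
            w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
            combined = w_a * est_a + (1.0 - w_a) * est_v
            combined_var = 1.0 / (1.0 / var_a + 1.0 / var_v)
            return combined, combined_var

        # Degrading audio (larger var_a) shifts weight toward vision,
        # consistent with a larger visual benefit for degraded speech.
        print(mle_combine(est_a=0.2, var_a=4.0, est_v=0.5, var_v=1.0))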

  11. Gaze-contingent reinforcement learning reveals incentive value of social signals in young children and adults

    PubMed Central

    Smith, Tim J.; Senju, Atsushi

    2017-01-01

    While numerous studies have demonstrated that infants and adults preferentially orient to social stimuli, it remains unclear as to what drives such preferential orienting. It has been suggested that the learned association between social cues and subsequent reward delivery might shape such social orienting. Using a novel, spontaneous indication of reinforcement learning (with the use of a gaze contingent reward-learning task), we investigated whether children and adults' orienting towards social and non-social visual cues can be elicited by the association between participants' visual attention and a rewarding outcome. Critically, we assessed whether the engaging nature of the social cues influences the process of reinforcement learning. Both children and adults learned to orient more often to the visual cues associated with reward delivery, demonstrating that cue–reward association reinforced visual orienting. More importantly, when the reward-predictive cue was social and engaging, both children and adults learned the cue–reward association faster and more efficiently than when the reward-predictive cue was social but non-engaging. These new findings indicate that social engaging cues have a positive incentive value. This could possibly be because they usually coincide with positive outcomes in real life, which could partly drive the development of social orienting. PMID:28250186

  12. Working memory enhances visual perception: evidence from signal detection analysis.

    PubMed

    Soto, David; Wriglesworth, Alice; Bahrami-Balani, Alex; Humphreys, Glyn W

    2010-03-01

    We show that perceptual sensitivity to visual stimuli can be modulated by matches between the contents of working memory (WM) and stimuli in the visual field. Observers were presented with an object cue (to hold in WM or to merely attend) and subsequently had to identify a brief target presented within a colored shape. The cue could be re-presented in the display, where it surrounded either the target (on valid trials) or a distractor (on invalid trials). Perceptual identification of the target, as indexed by A', was enhanced on valid relative to invalid trials but only when the cue was kept in WM. There was minimal effect of the cue when it was merely attended and not kept in WM. Verbal cues were as effective as visual cues at modulating perceptual identification, and the effects were independent of the effects of target saliency. Matches to the contents of WM influenced perceptual sensitivity even under conditions that minimized competition for selecting the target. WM cues were also effective when targets were less likely to fall in a repeated WM stimulus than in other stimuli in the search display. There were no effects of WM on decisional criteria, in contrast to sensitivity. The findings suggest that reentrant feedback from WM can affect early stages of perceptual processing.
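
    For reference, the nonparametric sensitivity index A' used here is computed directly from hit and false-alarm rates (Grier, 1971). The sketch below implements that formula; the rates shown are invented placeholders, not the study's data.

        # Hedged sketch: A' ranges from 0.5 (chance) to 1.0 (perfect).
        def a_prime(hit, fa):
            if hit >= fa:
                return 0.5 + ((hit - fa) * (1 + hit - fa)) / (4 * hit * (1 - fa))
            return 0.5 - ((fa - hit) * (1 + fa - hit)) / (4 * fa * (1 - hit))

        print(a_prime(0.82, 0.25))  # e.g., valid-cue trials
        print(a_prime(0.71, 0.25))  # e.g., invalid-cue trials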

  13. The association between reading abilities and visual-spatial attention in Hong Kong Chinese children.

    PubMed

    Liu, Sisi; Liu, Duo; Pan, Zhihui; Xu, Zhengye

    2018-03-25

    A growing body of research suggests that visual-spatial attention is important for reading achievement. However, few studies have been conducted in non-alphabetic orthographies. This study extended the current research to reading development in Chinese, a logographic writing system known for its visual complexity. Eighty Hong Kong Chinese children were selected and divided into poor-reader and typical-reader groups, based on their performance on measures of reading fluency, Chinese character reading, and reading comprehension. The poor and typical readers were matched on age and nonverbal intelligence. A Posner spatial cueing task was adopted to measure the exogenous and endogenous orienting of visual-spatial attention. Although the typical readers showed the cueing effect in the central cue condition (i.e., responses to targets following valid cues were faster than those to targets following invalid cues), the poor readers did not respond differently in valid and invalid conditions, suggesting an impairment of the endogenous orienting of attention. The two groups, however, showed a similar cueing effect in the peripheral cue condition, indicating intact exogenous orienting in the poor readers. These findings generally supported a link between the orienting of covert attention and Chinese reading, providing evidence for the attentional-deficit theory of dyslexia. Copyright © 2018 John Wiley & Sons, Ltd.

  14. Time-resolved neuroimaging of visual short term memory consolidation by post-perceptual attention shifts.

    PubMed

    Hecht, Marcus; Thiemann, Ulf; Freitag, Christine M; Bender, Stephan

    2016-01-15

    Post-perceptual cues can enhance visual short-term memory encoding even after the offset of the visual stimulus. However, both the mechanisms by which the sensory stimulus characteristics are buffered and the mechanisms by which post-perceptual selective attention enhances short-term memory encoding remain unclear. We analyzed late post-perceptual event-related potentials (ERPs) in visual change detection tasks (100 ms stimulus duration) by high-resolution ERP analysis to elucidate these mechanisms. The effects of early and late auditory post-cues (300 ms or 850 ms after visual stimulus onset) as well as the effects of a visual interference stimulus were examined in 27 healthy right-handed adults. Focusing attention with post-perceptual cues at both latencies significantly improved memory performance, i.e., sensory stimulus characteristics were available for up to 850 ms after stimulus presentation. Passive watching of the visual stimuli without auditory cue presentation evoked a slow negative wave (N700) over occipito-temporal visual areas. N700 was strongly reduced by a visual interference stimulus which impeded memory maintenance. In contrast, contralateral delay activity (CDA) still developed in this condition after the application of auditory post-cues and was thereby dissociated from N700. CDA and N700 seem to represent two different processes involved in short-term memory encoding. While N700 could reflect visual post-processing by automatic attention attraction, CDA may reflect the top-down process of searching selectively for the required information through post-perceptual attention. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Motivation and short-term memory in visual search: Attention's accelerator revisited.

    PubMed

    Schneider, Daniel; Bonmassar, Claudia; Hickey, Clayton

    2018-05-01

    A cue indicating the possibility of cash reward will cause participants to perform memory-based visual search more efficiently. A recent study has suggested that this performance benefit might reflect the use of multiple memory systems: when needed, participants may maintain the to-be-remembered object in both long-term and short-term visual memory, with this redundancy benefitting target identification during search (Reinhart, McClenahan & Woodman, 2016). Here we test this compelling hypothesis. We had participants complete a memory-based visual search task involving a reward cue that either preceded presentation of the to-be-remembered target (pre-cue) or followed it (retro-cue). Following earlier work, we tracked memory representation using two components of the event-related potential (ERP): the contralateral delay activity (CDA), reflecting short-term visual memory, and the anterior P170, reflecting long-term storage. We additionally tracked attentional preparation and deployment in the contingent negative variation (CNV) and N2pc, respectively. Results show that only the reward pre-cue impacted our ERP indices of memory. However, both types of cue elicited a robust CNV, reflecting an influence on task preparation, both had equivalent impact on deployment of attention to the target, as indexed in the N2pc, and both had equivalent impact on visual search behavior. Reward prospect thus has an influence on memory-guided visual search, but this does not appear to be necessarily mediated by a change in the visual memory representations indexed by CDA. Our results demonstrate that the impact of motivation on search is not a simple product of improved memory for target templates. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Rebalancing Spatial Attention: Endogenous Orienting May Partially Overcome the Left Visual Field Bias in Rapid Serial Visual Presentation.

    PubMed

    Śmigasiewicz, Kamila; Hasan, Gabriel Sami; Verleger, Rolf

    2017-01-01

    In dynamically changing environments, spatial attention is not equally distributed across the visual field. For instance, when two streams of stimuli are presented left and right, the second target (T2) is better identified in the left visual field (LVF) than in the right visual field (RVF). Recently, it has been shown that this bias is related to weaker stimulus-driven orienting of attention toward the RVF: the RVF disadvantage was reduced with salient task-irrelevant valid cues and increased with invalid cues. Here we studied whether endogenous orienting of attention may also compensate for this unequal distribution of stimulus-driven attention. Explicit information was provided about the location of T1 and T2. Effectiveness of the cue manipulation was confirmed by EEG measures: decreasing alpha power before stream onset with informative cues, earlier latencies of potentials evoked by T1-preceding distractors at the right than at the left hemisphere when T1 was cued left, and decreasing T1- and T2-evoked N2pc amplitudes with informative cues. Importantly, informative cues reduced (though did not completely abolish) the LVF advantage, as indicated by improved identification of right T2, and reflected by earlier N2pc latency evoked by right T2 and a larger decrease in alpha power after cues indicating right T2. Overall, these results suggest that endogenously driven attention facilitates stimulus-driven orienting of attention toward the RVF, thereby partially overcoming the basic LVF bias in spatial attention.

  17. A systematic comparison between visual cues for boundary detection.

    PubMed

    Mély, David A; Kim, Junkyung; McGill, Mason; Guo, Yuliang; Serre, Thomas

    2016-03-01

    The detection of object boundaries is a critical first step for many visual processing tasks. Multiple cues (we consider luminance, color, motion and binocular disparity) available in the early visual system may signal object boundaries but little is known about their relative diagnosticity and how to optimally combine them for boundary detection. This study thus aims at understanding how early visual processes inform boundary detection in natural scenes. We collected color binocular video sequences of natural scenes to construct a video database. Each scene was annotated with two full sets of ground-truth contours (one set limited to object boundaries and another set which included all edges). We implemented an integrated computational model of early vision that spans all considered cues, and then assessed their diagnosticity by training machine learning classifiers on individual channels. Color and luminance were found to be most diagnostic while stereo and motion were least. Combining all cues yielded a significant improvement in accuracy beyond that of any cue in isolation. Furthermore, the accuracy of individual cues was found to be a poor predictor of their unique contribution for the combination. This result suggested a complex interaction between cues, which we further quantified using regularization techniques. Our systematic assessment of the accuracy of early vision models for boundary detection together with the resulting annotated video dataset should provide a useful benchmark towards the development of higher-level models of visual processing. Copyright © 2016 Elsevier Ltd. All rights reserved.
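
    The evaluation strategy described (classifiers trained on each cue channel, then on the combination) can be sketched in a few lines. Everything below is a toy stand-in: random features replace the real luminance, color, motion, and disparity channel outputs, and the classifier choice is an assumption rather than the authors' exact method.

        # Hedged sketch: diagnosticity per cue vs. combined cues.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n = 1000
        labels = rng.integers(0, 2, n)   # boundary vs. non-boundary
        cues = {name: labels[:, None] * rng.normal(1.0, 1.0, (n, 8))
                      + rng.normal(0.0, 1.0, (n, 8))
                for name in ("luminance", "color", "motion", "disparity")}

        for name, X in cues.items():     # accuracy of each cue alone
            print(name, cross_val_score(LogisticRegression(), X, labels).mean())

        X_all = np.hstack(list(cues.values()))
        print("combined", cross_val_score(LogisticRegression(), X_all, labels).mean())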

  18. How Temporal and Spatial Aspects of Presenting Visualizations Affect Learning about Locomotion Patterns

    ERIC Educational Resources Information Center

    Imhof, Birgit; Scheiter, Katharina; Edelmann, Jorg; Gerjets, Peter

    2012-01-01

    Two studies investigated the effectiveness of dynamic and static visualizations for a perceptual learning task (locomotion pattern classification). In Study 1, seventy-five students viewed either dynamic, static-sequential, or static-simultaneous visualizations. For tasks of intermediate difficulty, dynamic visualizations led to better…

  19. Attentional Capture of Objects Referred to by Spoken Language

    ERIC Educational Resources Information Center

    Salverda, Anne Pier; Altmann, Gerry T. M.

    2011-01-01

    Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants…

  20. Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli

    PubMed Central

    Störmer, Viola S.; McDonald, John J.; Hillyard, Steven A.

    2009-01-01

    The question of whether attention makes sensory impressions appear more intense has been a matter of debate for over a century. Recent psychophysical studies have reported that attention increases apparent contrast of visual stimuli, but the issue continues to be debated. We obtained converging neurophysiological evidence from human observers as they judged the relative contrast of visual stimuli presented to the left and right visual fields following a lateralized auditory cue. Cross-modal cueing of attention boosted the apparent contrast of the visual target in association with an enlarged neural response in the contralateral visual cortex that began within 100 ms after target onset. The magnitude of the enhanced neural response was positively correlated with perceptual reports of the cued target being higher in contrast. The results suggest that attention increases the perceived contrast of visual stimuli by boosting early sensory processing in the visual cortex. PMID:20007778

  1. Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli.

    PubMed

    Störmer, Viola S; McDonald, John J; Hillyard, Steven A

    2009-12-29

    The question of whether attention makes sensory impressions appear more intense has been a matter of debate for over a century. Recent psychophysical studies have reported that attention increases apparent contrast of visual stimuli, but the issue continues to be debated. We obtained converging neurophysiological evidence from human observers as they judged the relative contrast of visual stimuli presented to the left and right visual fields following a lateralized auditory cue. Cross-modal cueing of attention boosted the apparent contrast of the visual target in association with an enlarged neural response in the contralateral visual cortex that began within 100 ms after target onset. The magnitude of the enhanced neural response was positively correlated with perceptual reports of the cued target being higher in contrast. The results suggest that attention increases the perceived contrast of visual stimuli by boosting early sensory processing in the visual cortex.

  2. The role of temporal synchrony as a binding cue for visual persistence in early visual areas: an fMRI study.

    PubMed

    Wong, Yvonne J; Aldcroft, Adrian J; Large, Mary-Ellen; Culham, Jody C; Vilis, Tutis

    2009-12-01

    We examined the role of temporal synchrony (the simultaneous appearance of visual features) in the perceptual and neural processes underlying object persistence. When a binding cue (such as color or motion) momentarily exposes an object from a background of similar elements, viewers remain aware of the object for several seconds before it perceptually fades into the background, a phenomenon known as object persistence. We showed that persistence from temporal stimulus synchrony, like that arising from motion and color, is associated with activation in the lateral occipital (LO) area, as measured by functional magnetic resonance imaging. We also compared the distribution of occipital cortex activity related to persistence to that of iconic visual memory. Although activation related to iconic memory was largely confined to LO, activation related to object persistence was present across V1 to LO, peaking in V3 and V4, regardless of the binding cue (temporal synchrony, motion, or color). Although persistence from motion cues was not associated with higher activation in the MT+ motion complex, persistence from color cues was associated with increased activation in V4. Taken together, these results demonstrate that although persistence is a form of visual memory, it relies on neural mechanisms different from those of iconic memory. That is, persistence not only activates LO in a cue-independent manner, it also recruits visual areas that may be necessary to maintain binding between object elements.

  3. The effects of auditory and visual cues on timing synchronicity for robotic rehabilitation.

    PubMed

    English, Brittney A; Howard, Ayanna M

    2017-07-01

    In this paper, we explore how the integration of auditory and visual cues can help teach the timing of motor skills for the purpose of motor function rehabilitation. We conducted a study using Amazon's Mechanical Turk in which 106 participants played a virtual therapy game requiring wrist movements. To validate that our results would translate to trends that could also be observed during robotic rehabilitation sessions, we recreated this experiment with 11 participants using a robotic wrist rehabilitation system as means to control the therapy game. During interaction with the therapy game, users were asked to learn and reconstruct a tapping sequence as defined by musical notes flashing on the screen. Participants were divided into 2 test groups: (1) control: participants only received visual cues to prompt them on the timing sequence, and (2) experimental: participants received both visual and auditory cues to prompt them on the timing sequence. To evaluate performance, the timing and length of the sequence were measured. Performance was determined by calculating the number of trials needed before the participant was able to master the specific aspect of the timing task. In the virtual experiment, the group that received visual and auditory cues was able to master all aspects of the timing task faster than the visual cue only group with p-values < 0.05. This trend was also verified for participants using the robotic arm exoskeleton in the physical experiment.

  4. Differentiating Visual from Response Sequencing during Long-term Skill Learning.

    PubMed

    Lynch, Brighid; Beukema, Patrick; Verstynen, Timothy

    2017-01-01

    The dual-system model of sequence learning posits that during early learning there is an advantage for encoding sequences in sensory frames; however, it remains unclear whether this advantage extends to long-term consolidation. Using the serial RT task, we set out to distinguish the dynamics of learning sequential orders of visual cues from learning sequential responses. On each day, most participants learned a new mapping between a set of symbolic cues and responses made with one of four fingers, after which they were exposed to trial blocks of either randomly ordered cues or deterministically ordered cues (a 12-item sequence). Participants were randomly assigned to one of four groups (n = 15 per group): Visual sequences (same sequence of visual cues across training days), Response sequences (same order of key presses across training days), Combined (same serial order of cues and responses on all training days), and a Control group (a novel sequence each training day). Across 5 days of training, sequence-specific measures of response speed and accuracy improved faster in the Visual group than in any of the other three groups, despite no group differences in explicit awareness of the sequence. The two groups that were exposed to the same visual sequence across days showed a marginal improvement in response binding that was not found in the other groups. These results indicate that there is an advantage, in terms of rate of consolidation across multiple days of training, for learning sequences of actions in a sensory representational space, rather than as motoric representations.

  5. Evaluation of Static vs. Dynamic Visualizations for Engineering Technology Students and Implications on Spatial Visualization Ability: A Quasi-Experimental Study

    ERIC Educational Resources Information Center

    Katsioloudis, Petros; Dickerson, Daniel; Jovanovic, Vukica; Jones, Mildred

    2015-01-01

    The benefit of using static versus dynamic visualizations is a controversial one. Few studies have explored the effectiveness of static visualizations to those of dynamic visualizations, and the current state of the literature remains somewhat unclear. During the last decade there has been a lengthy debate about the opportunities for using…

  6. Visual landmarks facilitate rodent spatial navigation in virtual reality environments

    PubMed Central

    Youngstrom, Isaac A.; Strowbridge, Ben W.

    2012-01-01

Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain areas. Virtual reality offers a unique approach to ask whether visual landmark cues alone are sufficient to improve performance in a spatial task. We found that mice could learn to navigate between two water reward locations along a virtual bidirectional linear track using a spherical treadmill. Mice exposed to a virtual environment with vivid visual cues rendered on a single monitor increased their performance over a 3-d training regimen. Training significantly increased the percentage of time avatars controlled by the mice spent near reward locations in probe trials without water rewards. Neither improvement during training nor spatial learning for reward locations occurred with mice operating a virtual environment without vivid landmarks or with mice deprived of all visual feedback. Mice operating the vivid environment developed stereotyped avatar turning behaviors when alternating between reward zones that were positively correlated with their performance on the probe trial. These results suggest that mice are able to learn to navigate to specific locations using only visual cues presented within a virtual environment rendered on a single computer monitor. PMID:22345484

  7. Making the invisible visible: verbal but not visual cues enhance visual detection.

    PubMed

    Lupyan, Gary; Spivey, Michael J

    2010-07-07

Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual cues. Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d'). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception.
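
    For reference, the d' (sensitivity) measure reported above is conventionally computed from hit and false-alarm rates under equal-variance signal detection theory. A minimal Python sketch follows; the log-linear correction and the trial counts are illustrative assumptions, not values taken from the paper.

        from scipy.stats import norm

        def d_prime(hits, misses, false_alarms, correct_rejections):
            # Log-linear correction avoids infinite z-scores at rates of 0 or 1.
            hit_rate = (hits + 0.5) / (hits + misses + 1.0)
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        print(d_prime(42, 8, 12, 38))  # hypothetical counts from one cue condition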

  8. Orienting attention within visual short-term memory: development and mechanisms.

    PubMed

    Shimi, Andria; Nobre, Anna C; Astle, Duncan; Scerif, Gaia

    2014-01-01

    How does developing attentional control operate within visual short-term memory (VSTM)? Seven-year-olds, 11-year-olds, and adults (total n = 205) were asked to report whether probe items were part of preceding visual arrays. In Experiment 1, central or peripheral cues oriented attention to the location of to-be-probed items either prior to encoding or during maintenance. Cues improved memory regardless of their position, but younger children benefited less from cues presented during maintenance, and these benefits related to VSTM span over and above basic memory in uncued trials. In Experiment 2, cues of low validity eliminated benefits, suggesting that even the youngest children use cues voluntarily, rather than automatically. These findings elucidate the close coupling between developing visuospatial attentional control and VSTM. © 2013 The Authors. Child Development © 2013 Society for Research in Child Development, Inc.

  9. Object Persistence Enhances Spatial Navigation: A Case Study in Smartphone Vision Science.

    PubMed

    Liverence, Brandon M; Scholl, Brian J

    2015-07-01

    Violations of spatiotemporal continuity disrupt performance in many tasks involving attention and working memory, but experiments on this topic have been limited to the study of moment-by-moment on-line perception, typically assessed by passive monitoring tasks. We tested whether persisting object representations also serve as underlying units of longer-term memory and active spatial navigation, using a novel paradigm inspired by the visual interfaces common to many smartphones. Participants used key presses to navigate through simple visual environments consisting of grids of icons (depicting real-world objects), only one of which was visible at a time through a static virtual window. Participants found target icons faster when navigation involved persistence cues (via sliding animations) than when persistence was disrupted (e.g., via temporally matched fading animations), with all transitions inspired by smartphone interfaces. Moreover, this difference occurred even after explicit memorization of the relevant information, which demonstrates that object persistence enhances spatial navigation in an automatic and irresistible fashion. © The Author(s) 2015.

  10. Motor excitability during visual perception of known and unknown spoken languages.

    PubMed

    Swaminathan, Swathi; MacSweeney, Mairéad; Boyles, Rowan; Waters, Dafydd; Watkins, Kate E; Möttönen, Riikka

    2013-07-01

    It is possible to comprehend speech and discriminate languages by viewing a speaker's articulatory movements. Transcranial magnetic stimulation studies have shown that viewing speech enhances excitability in the articulatory motor cortex. Here, we investigated the specificity of this enhanced motor excitability in native and non-native speakers of English. Both groups were able to discriminate between speech movements related to a known (i.e., English) and unknown (i.e., Hebrew) language. The motor excitability was higher during observation of a known language than an unknown language or non-speech mouth movements, suggesting that motor resonance is enhanced specifically during observation of mouth movements that convey linguistic information. Surprisingly, however, the excitability was equally high during observation of a static face. Moreover, the motor excitability did not differ between native and non-native speakers. These findings suggest that the articulatory motor cortex processes several kinds of visual cues during speech communication. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

  11. Multimodal retrieval of autobiographical memories: sensory information contributes differently to the recollection of events.

    PubMed

    Willander, Johan; Sikström, Sverker; Karlsson, Kristina

    2015-01-01

Previous studies on autobiographical memory have focused on unimodal retrieval cues (i.e., cues pertaining to one modality). However, from an ecological perspective multimodal cues (i.e., cues pertaining to several modalities) are highly important to investigate. In the present study we investigated age distributions and experiential ratings of autobiographical memories retrieved with unimodal and multimodal cues. Sixty-two participants were randomized to one of four cue-conditions: visual, olfactory, auditory, or multimodal. The results showed that the peak of the distributions depends on the modality of the retrieval cue. The results indicated that multimodal retrieval was driven largely by visual and auditory information, and to a lesser extent by olfactory information. Finally, no differences were observed in the number of retrieved memories or experiential ratings across the four cue-conditions.

  12. Are Distal and Proximal Visual Cues Equally Important during Spatial Learning in Mice? A Pilot Study of Overshadowing in the Spatial Domain

    PubMed Central

    Hébert, Marie; Bulla, Jan; Vivien, Denis; Agin, Véronique

    2017-01-01

Animals use distal and proximal visual cues to accurately navigate in their environment, with the possibility of the occurrence of associative mechanisms such as cue competition, as previously reported in honey-bees, rats, birds and humans. In this pilot study, we investigated one of the most common forms of cue competition, namely the overshadowing effect, between visual landmarks during spatial learning in mice. To this end, C57BL/6J × Sv129 mice were given a two-trial place recognition task in a T-maze, based on a novelty free-choice exploration paradigm previously developed to study spatial memory in rodents. As this procedure implies the use of different aspects of the environment to navigate (i.e., mice can perceive the available visual cues from each arm of the maze), we manipulated the distal and proximal visual landmarks during both the acquisition and retrieval phases. Our prospective findings provide a first set of clues in favor of the occurrence of an overshadowing between visual cues during a spatial learning task in mice when both types of cues are of the same modality but at varying distances from the goal. In addition, the observed overshadowing seems to be non-reciprocal, as distal visual cues tend to overshadow the proximal ones when competition occurs, but not vice versa. The results of the present study offer a first insight about the occurrence of associative mechanisms during spatial learning in mice, and may open the way to promising new investigations in this area of research. Furthermore, the methodology used in this study brings a new, useful and easy-to-use tool for the investigation of perceptive, cognitive and/or attentional deficits in rodents. PMID:28634446

  13. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams

    PubMed Central

    Rouinfar, Amy; Agra, Elise; Larson, Adam M.; Rebello, N. Sanjay; Loschky, Lester C.

    2014-01-01

This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants’ attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants’ verbal responses were used to determine their accuracy. This study produced two major findings. First, short duration visual cues which draw attention to solution-relevant information and aid in the organizing and integrating of it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers’ attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions. PMID:25324804

  14. Effects of visual cues of object density on perception and anticipatory control of dexterous manipulation.

    PubMed

    Crajé, Céline; Santello, Marco; Gordon, Andrew M

    2013-01-01

    Anticipatory force planning during grasping is based on visual cues about the object's physical properties and sensorimotor memories of previous actions with grasped objects. Vision can be used to estimate object mass based on the object size to identify and recall sensorimotor memories of previously manipulated objects. It is not known whether subjects can use density cues to identify the object's center of mass (CM) and create compensatory moments in an anticipatory fashion during initial object lifts to prevent tilt. We asked subjects (n = 8) to estimate CM location of visually symmetric objects of uniform densities (plastic or brass, symmetric CM) and non-uniform densities (mixture of plastic and brass, asymmetric CM). We then asked whether subjects can use density cues to scale fingertip forces when lifting the visually symmetric objects of uniform and non-uniform densities. Subjects were able to accurately estimate an object's center of mass based on visual density cues. When the mass distribution was uniform, subjects could scale their fingertip forces in an anticipatory fashion based on the estimation. However, despite their ability to explicitly estimate CM location when object density was non-uniform, subjects were unable to scale their fingertip forces to create a compensatory moment and prevent tilt on initial lifts. Hefting object parts in the hand before the experiment did not affect this ability. This suggests a dichotomy between the ability to accurately identify the object's CM location for objects with non-uniform density cues and the ability to utilize this information to correctly scale their fingertip forces. These results are discussed in the context of possible neural mechanisms underlying sensorimotor integration linking visual cues and anticipatory control of grasping.

  15. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams.

    PubMed

    Rouinfar, Amy; Agra, Elise; Larson, Adam M; Rebello, N Sanjay; Loschky, Lester C

    2014-01-01

This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants' attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants' verbal responses were used to determine their accuracy. This study produced two major findings. First, short duration visual cues which draw attention to solution-relevant information and aid in the organizing and integrating of it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers' attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions.

  16. Effects of Tactile, Visual, and Auditory Cues About Threat Location on Target Acquisition and Attention to Visual and Auditory Communications

    DTIC Science & Technology

    2006-08-01

[Abstract not available in the indexed record; only front-matter fragments were recovered. These reference the National Aeronautics and Space Administration (NASA) Task Load Index (TLX), a SITREP questionnaire, demographic and post-test questionnaires, and figures reporting mean ratings of physical and temporal demand by cue condition.]

  17. Enhancing visual search abilities of people with intellectual disabilities.

    PubMed

    Li-Tsang, Cecilia W P; Wong, Jackson K K

    2009-01-01

This study aimed to evaluate the effects of cueing in a visual search paradigm for people with and without intellectual disabilities (ID). A total of 36 subjects (18 persons with ID and 18 persons with normal intelligence) were recruited using a convenience sampling method. A series of experiments were conducted to compare guided cue strategies that added either motion contrast or an additional cue to a basic search task. Repeated measures ANOVA and post hoc multiple comparison tests were used to compare each cue strategy. Results showed that the use of guided strategies was able to capture focal attention in an automatic manner in the ID group (Pillai's Trace=5.99, p<0.0001). Both guided cue and guided motion search tasks demonstrated functionally similar effects that confirmed the non-specific character of salience. These findings suggested that the visual search efficiency of people with ID was greatly improved if the target was made salient using a cueing effect when the complexity of the display increased (i.e. set size increased). This study could have an important implication for the design of the visual search format of any computerized programs developed for people with ID in learning new tasks.

  18. Flight simulator with spaced visuals

    NASA Technical Reports Server (NTRS)

    Gilson, Richard D. (Inventor); Thurston, Marlin O. (Inventor); Olson, Karl W. (Inventor); Ventola, Ronald W. (Inventor)

    1980-01-01

A flight simulator arrangement wherein a conventional, movable base flight trainer is combined with a visual cue display surface spaced a predetermined distance from an eye position within the trainer. Thus, three degrees of motive freedom (roll, pitch and crab) are provided for a visual, proprioceptive, and vestibular cue system by the trainer while the remaining geometric visual cue image alterations are developed by a video system. A geometric approach to computing the runway image eliminates the need to electronically compute trigonometric functions, while utilization of a line generator and a designated vanishing point at the video system raster permits facile development of the images of the longitudinal edges of the runway.

  19. Working memory load and the retro-cue effect: A diffusion model account.

    PubMed

    Shepherdson, Peter; Oberauer, Klaus; Souza, Alessandra S

    2018-02-01

Retro-cues (i.e., cues presented between the offset of a memory array and the onset of a probe) have consistently been found to enhance performance in working memory tasks, sometimes ameliorating the deleterious effects of increased memory load. However, the mechanism by which retro-cues exert their influence remains a matter of debate. To inform this debate, we applied a hierarchical diffusion model to data from 4 change detection experiments using single item, location-specific probes (i.e., a local recognition task) with either visual or verbal memory stimuli. Results showed that retro-cues enhanced the quality of information entering the decision process (especially for visual stimuli) and decreased the time spent on nondecisional processes. Further, cues interacted with memory load primarily on nondecision time, decreasing or abolishing load effects. To explain these findings, we propose an account whereby retro-cues act primarily to reduce the time taken to access the relevant representation in memory upon probe presentation, and in addition protect cued representations from visual interference. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
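
    To make the modeling concrete, here is a minimal Python sketch of the two-boundary diffusion process that this class of models fits, with the parameters named in the abstract: drift rate (quality of evidence), boundary separation, and non-decision time. The parameter values are purely illustrative, not the authors' fitted estimates.

        import numpy as np

        def simulate_ddm(drift, bound, ndt, n_trials=2000, dt=0.001, sigma=1.0, seed=0):
            # Evidence starts at 0 and accumulates to +bound ("change") or
            # -bound ("no change"); RT = decision time + non-decision time.
            rng = np.random.default_rng(seed)
            rts = np.empty(n_trials)
            choices = np.empty(n_trials, dtype=int)
            for i in range(n_trials):
                x, t = 0.0, 0.0
                while abs(x) < bound:
                    x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
                    t += dt
                rts[i] = t + ndt
                choices[i] = int(x > 0)
            return rts, choices

        # Retro-cue vs. no-cue: higher drift and shorter non-decision time when cued.
        cued_rts, _ = simulate_ddm(drift=2.0, bound=1.0, ndt=0.25)
        uncued_rts, _ = simulate_ddm(drift=1.2, bound=1.0, ndt=0.35)
        print(cued_rts.mean(), uncued_rts.mean())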

  20. Cue-recruitment for extrinsic signals after training with low information stimuli.

    PubMed

    Jain, Anshul; Fuller, Stuart; Backus, Benjamin T

    2014-01-01

    Cue-recruitment occurs when a previously ineffective signal comes to affect the perceptual appearance of a target object, in a manner similar to the trusted cues with which the signal was put into correlation during training. Jain, Fuller and Backus reported that extrinsic signals, those not carried by the target object itself, were not recruited even after extensive training. However, recent studies have shown that training using weakened trusted cues can facilitate recruitment of intrinsic signals. The current study was designed to examine whether extrinsic signals can be recruited by putting them in correlation with weakened trusted cues. Specifically, we tested whether an extrinsic visual signal, the rotary motion direction of an annulus of random dots, and an extrinsic auditory signal, direction of an auditory pitch glide, can be recruited as cues for the rotation direction of a Necker cube. We found learning, albeit weak, for visual but not for auditory signals. These results extend the generality of the cue-recruitment phenomenon to an extrinsic signal and provide further evidence that the visual system learns to use new signals most quickly when other, long-trusted cues are unavailable or unreliable.

  1. Working Memory and Speech Recognition in Noise Under Ecologically Relevant Listening Conditions: Effects of Visual Cues and Noise Type Among Adults With Hearing Loss.

    PubMed

    Miller, Christi W; Stewart, Erin K; Wu, Yu-Hsiang; Bishop, Christopher; Bentler, Ruth A; Tremblay, Kelly

    2017-08-16

This study evaluated the relationship between working memory (WM) and speech recognition in noise with different noise types as well as in the presence of visual cues. Seventy-six adults with bilateral, mild to moderately severe sensorineural hearing loss (mean age: 69 years) participated. In a cross-sectional design, 2 measures of WM were taken: a reading span measure and the Word Auditory Recognition and Recall Measure (Smith, Pichora-Fuller, & Alexander, 2016). Speech recognition was measured with the Multi-Modal Lexical Sentence Test for Adults (Kirk et al., 2012) in steady-state noise and 4-talker babble, with and without visual cues. Testing was under unaided conditions. A linear mixed model revealed visual cues and pure-tone average as the only significant predictors of Multi-Modal Lexical Sentence Test outcomes. Neither WM measure nor noise type showed a significant effect. The contribution of WM in explaining unaided speech recognition in noise was negligible and not influenced by noise type or visual cues. We anticipate that with audibility partially restored by hearing aids, the effects of WM will increase. For clinical practice to be affected, more significant effect sizes are needed.

  2. Signal enhancement, not active suppression, follows the contingent capture of visual attention.

    PubMed

    Livingstone, Ashley C; Christie, Gregory J; Wright, Richard D; McDonald, John J

    2017-02-01

    Irrelevant visual cues capture attention when they possess a task-relevant feature. Electrophysiologically, this contingent capture of attention is evidenced by the N2pc component of the visual event-related potential (ERP) and an enlarged ERP positivity over the occipital hemisphere contralateral to the cued location. The N2pc reflects an early stage of attentional selection, but presently it is unclear what the contralateral ERP positivity reflects. One hypothesis is that it reflects the perceptual enhancement of the cued search-array item; another hypothesis is that it is time-locked to the preceding cue display and reflects active suppression of the cue itself. Here, we varied the time interval between a cue display and a subsequent target display to evaluate these competing hypotheses. The results demonstrated that the contralateral ERP positivity is tightly time-locked to the appearance of the search display rather than the cue display, thereby supporting the perceptual enhancement hypothesis and disconfirming the cue-suppression hypothesis. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. What do we perceive from motion pictures? A computational account.

    PubMed

    Cheong, Loong-Fah; Xiang, Xu

    2007-06-01

Cinema viewed from a location other than a canonical viewing point (CVP) presents distortions to the viewer in both its static and its dynamic aspects. Past works have investigated mainly the static aspect of this problem and attempted to explain why viewers still seem to perceive the scene very well. The dynamic aspect of depth perception, which is known as structure from motion, and its possible distortion, have not been well investigated. We derive the dynamic depth cues perceived by the viewer and use the so-called isodistortion framework to understand their distortion. The result is that viewers seated at a reasonably central position experience a shift in the intrinsic parameters of their visual systems. Despite this shift, the key properties of the perceived depths remain largely the same, being determined in the main by the accuracy to which extrinsic motion parameters can be recovered. For a viewer seated at a noncentral position and watching the movie screen at a slant angle, the view is related to the view at the CVP by a homography, resulting in various aberrations such as noncentral projection.
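
    For reference, the planar homography relating the view at the CVP to the view from an oblique seat can be written in the standard textbook form below (this is the generic form under the planarity assumption, not an equation quoted from the paper):

        \[
          \mathbf{x}' \sim H\,\mathbf{x}, \qquad
          H = K' \left( R - \frac{\mathbf{t}\,\mathbf{n}^{\top}}{d} \right) K^{-1}
        \]

        % x, x': homogeneous image coordinates at the CVP and at the oblique seat
        % K, K': intrinsic parameters of the two viewing positions
        % R, t : rotation and translation between the two viewpoints
        % n, d : unit normal of the screen plane and its distance from the CVP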

  4. Missing depth cues in virtual reality limit performance and quality of three dimensional reaching movements

    PubMed Central

    Mayo, Johnathan; Baur, Kilian; Wittmann, Frieder; Riener, Robert; Wolf, Peter

    2018-01-01

    Background Goal-directed reaching for real-world objects by humans is enabled through visual depth cues. In virtual environments, the number and quality of available visual depth cues is limited, which may affect reaching performance and quality of reaching movements. Methods We assessed three-dimensional reaching movements in five experimental groups each with ten healthy volunteers. Three groups used a two-dimensional computer screen and two groups used a head-mounted display. The first screen group received the typically recreated visual depth cues, such as aerial and linear perspective, occlusion, shadows, and texture gradients. The second screen group received an abstract minimal rendering lacking those. The third screen group received the cues of the first screen group and absolute depth cues enabled by retinal image size of a known object, which realized with visual renderings of the handheld device and a ghost handheld at the target location. The two head-mounted display groups received the same virtually recreated visual depth cues as the second or the third screen group respectively. Additionally, they could rely on stereopsis and motion parallax due to head-movements. Results and conclusion All groups using the screen performed significantly worse than both groups using the head-mounted display in terms of completion time normalized by the straight-line distance to the target. Both groups using the head-mounted display achieved the optimal minimum in number of speed peaks and in hand path ratio, indicating that our subjects performed natural movements when using a head-mounted display. Virtually recreated visual depth cues had a minor impact on reaching performance. Only the screen group with rendered handhelds could outperform the other screen groups. Thus, if reaching performance in virtual environments is in the main scope of a study, we suggest applying a head-mounted display. Otherwise, when two-dimensional screens are used, achievable performance is likely limited by the reduced depth perception and not just by subjects’ motor skills. PMID:29293512

  5. Cross-modal prediction changes the timing of conscious access during the motion-induced blindness.

    PubMed

    Chang, Acer Y C; Kanai, Ryota; Seth, Anil K

    2015-01-01

    Despite accumulating evidence that perceptual predictions influence perceptual content, the relations between these predictions and conscious contents remain unclear, especially for cross-modal predictions. We examined whether predictions of visual events by auditory cues can facilitate conscious access to the visual stimuli. We trained participants to learn associations between auditory cues and colour changes. We then asked whether congruency between auditory cues and target colours would speed access to consciousness. We did this by rendering a visual target subjectively invisible using motion-induced blindness and then gradually changing its colour while presenting congruent or incongruent auditory cues. Results showed that the visual target gained access to consciousness faster in congruent than in incongruent trials; control experiments excluded potentially confounding effects of attention and motor response. The expectation effect was gradually established over blocks suggesting a role for extensive training. Overall, our findings show that predictions learned through cross-modal training can facilitate conscious access to visual stimuli. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Role of visual and non-visual cues in constructing a rotation-invariant representation of heading in parietal cortex

    PubMed Central

    Sunkara, Adhira

    2015-01-01

    As we navigate through the world, eye and head movements add rotational velocity patterns to the retinal image. When such rotations accompany observer translation, the rotational velocity patterns must be discounted to accurately perceive heading. The conventional view holds that this computation requires efference copies of self-generated eye/head movements. Here we demonstrate that the brain implements an alternative solution in which retinal velocity patterns are themselves used to dissociate translations from rotations. These results reveal a novel role for visual cues in achieving a rotation-invariant representation of heading in the macaque ventral intraparietal area. Specifically, we show that the visual system utilizes both local motion parallax cues and global perspective distortions to estimate heading in the presence of rotations. These findings further suggest that the brain is capable of performing complex computations to infer eye movements and discount their sensory consequences based solely on visual cues. DOI: http://dx.doi.org/10.7554/eLife.04693.001 PMID:25693417

  7. Is Posner's "beam" the same as Treisman's "glue"?: On the relation between visual orienting and feature integration theory.

    PubMed

    Briand, K A; Klein, R M

    1987-05-01

    In the present study we investigated whether the visually allocated "beam" studied by Posner and others is the same visual attentional resource that performs the role of feature integration in Treisman's model. Subjects were cued to attend to a certain spatial location by a visual cue, and performance at expected and unexpected stimulus locations was compared. Subjects searched for a target letter (R) with distractor letters that either could give rise to illusory conjunctions (PQ) or could not (PB). Results from three separate experiments showed that orienting attention in response to central cues (endogenous orienting) showed similar effects for both conjunction and feature search. However, when attention was oriented with peripheral visual cues (exogenous orienting), conjunction search showed larger effects of attention than did feature search. It is suggested that the attentional systems that are oriented in response to central and peripheral cues may not be the same and that only the latter performs a role in feature integration. Possibilities for future research are discussed.
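
    A minimal sketch (Python) of the validity-effect computation implied by this cueing paradigm: mean RT on invalid trials minus mean RT on valid trials, here with a paired t-test across subjects. All RT values are hypothetical, for illustration only.

        import numpy as np
        from scipy.stats import ttest_rel

        # Hypothetical per-subject mean RTs (ms) for peripherally cued search.
        valid_rt = np.array([512, 498, 530, 505, 521, 495, 540, 510])
        invalid_rt = np.array([548, 531, 575, 539, 560, 528, 582, 544])

        validity_effect = invalid_rt - valid_rt  # positive = benefit of a valid cue
        t, p = ttest_rel(invalid_rt, valid_rt)
        print(validity_effect.mean(), t, p)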

  8. Modulation of auditory stimulus processing by visual spatial or temporal cue: an event-related potentials study.

    PubMed

    Tang, Xiaoyu; Li, Chunlin; Li, Qi; Gao, Yulin; Yang, Weiping; Yang, Jingjing; Ishikawa, Soushirou; Wu, Jinglong

    2013-10-11

Utilizing the high temporal resolution of event-related potentials (ERPs), we examined how visual spatial or temporal cues modulated auditory stimulus processing. The visual spatial cue (VSC) induces orienting of attention to spatial locations; the visual temporal cue (VTC) induces orienting of attention to temporal intervals. Participants were instructed to respond to auditory targets. Behavioral responses to auditory stimuli following VSC were faster and more accurate than those following VTC. VSC and VTC had the same effect on the auditory N1 (150-170 ms after stimulus onset). The mean amplitude of the auditory P1 (90-110 ms) in the VSC condition was larger than that in the VTC condition, and the mean amplitude of the late positivity (300-420 ms) in the VTC condition was larger than that in the VSC condition. These findings suggest that the modulations of auditory stimulus processing by visually induced spatial and temporal orienting of attention were different, but partially overlapping. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  9. Toward a New Theory for Selecting Instructional Visuals.

    ERIC Educational Resources Information Center

    Croft, Richard S.; Burton, John K.

    This paper provides a rationale for the selection of illustrations and visual aids for the classroom. The theories that describe the processing of visuals are dual coding theory and cue summation theory. Concept attainment theory offers a basis for selecting which cues are relevant for any learning task which includes a component of identification…

  10. Visual Landmarks Facilitate Rodent Spatial Navigation in Virtual Reality Environments

    ERIC Educational Resources Information Center

    Youngstrom, Isaac A.; Strowbridge, Ben W.

    2012-01-01

    Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain…

  11. Design Criteria for Visual Cues Used in Disruptive Learning Interventions within Sustainability Education

    ERIC Educational Resources Information Center

    Tillmanns, Tanja; Holland, Charlotte; Filho, Alfredo Salomão

    2017-01-01

    This paper presents the design criteria for Visual Cues--visual stimuli that are used in combination with other pedagogical processes and tools in Disruptive Learning interventions in sustainability education--to disrupt learners' existing frames of mind and help re-orient learners' mind-sets towards sustainability. The theory of Disruptive…

  12. Comparison of the visual perception of a runway model in pilots and nonpilots during simulated night landing approaches.

    DOT National Transportation Integrated Search

    1978-03-01

    At night, reduced visual cues may promote illusions and a dangerous tendency for pilots to fly low during approaches to landing. Relative motion parallax (a difference in rate of apparent movement of objects in the visual field), a cue that can contr...

  13. Man-systems evaluation of moving base vehicle simulation motion cues. [human acceleration perception involving visual feedback

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, M.; Brye, R. G.

    1974-01-01

A motion cue investigation program is reported that deals with human-factors aspects of high fidelity vehicle simulation. General data on non-visual motion thresholds and specific threshold values are established for use as washout parameters in vehicle simulation. A general purpose simulator is used to test the contradictory cue hypothesis that acceleration sensitivity is reduced during a vehicle control task involving visual feedback. The simulator provides varying acceleration levels. The method of forced choice is based on the theory of signal detectability.

  14. Stimulus homogeneity enhances implicit learning: evidence from contextual cueing.

    PubMed

    Feldmann-Wüstefeld, Tobias; Schubö, Anna

    2014-04-01

Visual search for a target object is faster if the target is embedded in a repeatedly presented invariant configuration of distractors ('contextual cueing'). It has also been shown that the homogeneity of a context affects the efficiency of visual search: targets receive prioritized processing when presented in a homogeneous context compared to a heterogeneous context, presumably due to grouping processes at early stages of visual processing. The present study investigated in three experiments whether context homogeneity also affects contextual cueing. In Experiment 1, context homogeneity varied on three levels of the task-relevant dimension (orientation) and contextual cueing was most pronounced for context configurations with high orientation homogeneity. When context homogeneity varied on three levels of the task-irrelevant dimension (color) and orientation homogeneity was fixed, no modulation of contextual cueing was observed: high orientation homogeneity led to large contextual cueing effects (Experiment 2) and low orientation homogeneity led to low contextual cueing effects (Experiment 3), irrespective of color homogeneity. Enhanced contextual cueing for homogeneous context configurations suggests that grouping processes do not only affect visual search but also implicit learning. We conclude that memory representations of context configurations are more easily acquired when context configurations can be processed as larger, grouped perceptual units. However, this form of implicit perceptual learning is only improved by stimulus homogeneity when stimulus homogeneity facilitates grouping processes on a dimension that is currently relevant in the task. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Non-conscious visual cues related to affect and action alter perception of effort and endurance performance

    PubMed Central

    Blanchfield, Anthony; Hardy, James; Marcora, Samuele

    2014-01-01

    The psychobiological model of endurance performance proposes that endurance performance is determined by a decision-making process based on perception of effort and potential motivation. Recent research has reported that effort-based decision-making during cognitive tasks can be altered by non-conscious visual cues relating to affect and action. The effects of these non-conscious visual cues on effort and performance during physical tasks are however unknown. We report two experiments investigating the effects of subliminal priming with visual cues related to affect and action on perception of effort and endurance performance. In Experiment 1 thirteen individuals were subliminally primed with happy or sad faces as they cycled to exhaustion in a counterbalanced and randomized crossover design. A paired t-test (happy vs. sad faces) revealed that individuals cycled significantly longer (178 s, p = 0.04) when subliminally primed with happy faces. A 2 × 5 (condition × iso-time) ANOVA also revealed a significant main effect of condition on rating of perceived exertion (RPE) during the time to exhaustion (TTE) test with lower RPE when subjects were subliminally primed with happy faces (p = 0.04). In Experiment 2, a single-subject randomization tests design found that subliminal priming with action words facilitated a significantly longer TTE (399 s, p = 0.04) in comparison to inaction words. Like Experiment 1, this greater TTE was accompanied by a significantly lower RPE (p = 0.03). These experiments are the first to show that subliminal visual cues relating to affect and action can alter perception of effort and endurance performance. Non-conscious visual cues may therefore influence the effort-based decision-making process that is proposed to determine endurance performance. Accordingly, the findings raise notable implications for individuals who may encounter such visual cues during endurance competitions, training, or health related exercise. PMID:25566014

  16. 'You see?' Teaching and learning how to interpret visual cues during surgery.

    PubMed

    Cope, Alexandra C; Bezemer, Jeff; Kneebone, Roger; Lingard, Lorelei

    2015-11-01

    The ability to interpret visual cues is important in many medical specialties, including surgery, in which poor outcomes are largely attributable to errors of perception rather than poor motor skills. However, we know little about how trainee surgeons learn to make judgements in the visual domain. We explored how trainees learn visual cue interpretation in the operating room. A multiple case study design was used. Participants were postgraduate surgical trainees and their trainers. Data included observer field notes, and integrated video- and audio-recordings from 12 cases representing more than 11 hours of observation. A constant comparative methodology was used to identify dominant themes. Visual cue interpretation was a recurrent feature of trainer-trainee interactions and was achieved largely through the pedagogic mechanism of co-construction. Co-construction was a dialogic sequence between trainer and trainee in which they explored what they were looking at together to identify and name structures or pathology. Co-construction took two forms: 'guided co-construction', in which the trainer steered the trainee to see what the trainer was seeing, and 'authentic co-construction', in which neither trainer nor trainee appeared certain of what they were seeing and pieced together the information collaboratively. Whether the co-construction activity was guided or authentic appeared to be influenced by case difficulty and trainee seniority. Co-construction was shown to occur verbally, through discussion, and also through non-verbal exchanges in which gestures made with laparoscopic instruments contributed to the co-construction discourse. In the training setting, learning visual cue interpretation occurs in part through co-construction. Co-construction is a pedagogic phenomenon that is well recognised in the context of learning to interpret verbal information. In articulating the features of co-construction in the visual domain, this work enables the development of explicit pedagogic strategies for maximising trainees' learning of visual cue interpretation. This is relevant to multiple medical specialties in which judgements must be based on visual information. © 2015 John Wiley & Sons Ltd.

  17. The role of lower peripheral visual cues in the visuomotor coordination of locomotion and prehension.

    PubMed

    Graci, Valentina

    2011-10-01

It has previously been suggested that coupled upper and lower limb movements require visuomotor coordination to be achieved. Previous studies have not investigated the role that visual cues may play in the coordination of locomotion and prehension. The aim of this study was to investigate whether lower peripheral visual cues provide online control of the coordination of locomotion and prehension, as they have been shown to do during adaptive gait and level walking. Twelve subjects reached for a semi-empty or a full glass with their dominant or non-dominant hand at gait termination. Two binocular visual conditions were investigated: normal vision and lower visual occlusion. Outcome measures were determined using 3D motion capture techniques. Results showed that although the subjects were able to successfully complete the task without spilling the water from the glass under lower visual occlusion, they increased the margin of safety between final foot placements and the glass. These findings suggest that lower visual cues are mainly used online to fine-tune the trajectory of the upper and lower limbs moving toward the target. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Slushy weightings for the optimal pilot model. [considering visual tracking task

    NASA Technical Reports Server (NTRS)

    Dillow, J. D.; Picha, D. G.; Anderson, R. O.

    1975-01-01

A pilot model is described which accounts for the effect of motion cues in a well defined visual tracking task. The effects of visual and motion cues are accounted for in the model in two ways. First, the observation matrix in the pilot model is structured to account for the visual and motion inputs presented to the pilot. Secondly, the weightings in the quadratic cost function associated with the pilot model are modified to account for the pilot's perception of the variables he considers important in the task. Analytic results obtained using the pilot model are compared to experimental results and in general good agreement is demonstrated. The analytic model yields small improvements in tracking performance with the addition of motion cues for easily controlled task dynamics and large improvements in tracking performance with the addition of motion cues for difficult task dynamics.
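
    For context, the quadratic cost function of the optimal-control pilot model is usually written in the standard form below, with weighting matrices Q, R, and G on observed outputs, control, and control rate. This is the generic textbook form; the record does not spell out how the paper's "slushy" weightings modify it.

        \[
          J = E\!\left\{ \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T}
          \left( \mathbf{y}^{\top} Q\,\mathbf{y} + \mathbf{u}^{\top} R\,\mathbf{u}
          + \dot{\mathbf{u}}^{\top} G\,\dot{\mathbf{u}} \right) dt \right\}
        \]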

  19. Comparing Auditory-Only and Audiovisual Word Learning for Children with Hearing Loss.

    PubMed

    McDaniel, Jena; Camarata, Stephen; Yoder, Paul

    2018-05-15

    Although reducing visual input to emphasize auditory cues is a common practice in pediatric auditory (re)habilitation, the extant literature offers minimal empirical evidence for whether unisensory auditory-only (AO) or multisensory audiovisual (AV) input is more beneficial to children with hearing loss for developing spoken language skills. Using an adapted alternating treatments single case research design, we evaluated the effectiveness and efficiency of a receptive word learning intervention with and without access to visual speechreading cues. Four preschool children with prelingual hearing loss participated. Based on probes without visual cues, three participants demonstrated strong evidence for learning in the AO and AV conditions relative to a control (no-teaching) condition. No participants demonstrated a differential rate of learning between AO and AV conditions. Neither an inhibitory effect predicted by a unisensory theory nor a beneficial effect predicted by a multisensory theory for providing visual cues was identified. Clinical implications are discussed.

  20. Learning and Discrimination of Audiovisual Events in Human Infants: The Hierarchical Relation between Intersensory Temporal Synchrony and Rhythmic Pattern Cues.

    ERIC Educational Resources Information Center

    Lewkowicz, David J.

    2003-01-01

    Three experiments examined 4- to 10-month-olds' perception of audio-visual (A-V) temporal synchrony cues in the presence or absence of rhythmic pattern cues. Results established that infants of all ages could discriminate between two different audio-visual rhythmic events. Only 10-month-olds detected a desynchronization of the auditory and visual…

  1. The redbay ambrosia beetle (Coleoptera: Curculionidae: Scolytinae) uses stem silhouette diameter as a visual host-finding cue

    Treesearch

    Albert (Bud) Mayfield; Cavell Brownie

    2013-01-01

The redbay ambrosia beetle (Xyleborus glabratus Eichhoff) is an invasive pest and vector of the pathogen that causes laurel wilt disease in lauraceous tree species in the eastern United States. This insect uses olfactory cues during host finding, but use of visual cues by X. glabratus has not been previously investigated and may help explain diameter...

  2. All I saw was the cake. Hunger effects on attentional capture by visual food cues.

    PubMed

    Piech, Richard M; Pastorino, Michael T; Zald, David H

    2010-06-01

While effects of hunger on motivation and food reward value are well-established, far less is known about the effects of hunger on cognitive processes. Here, we deployed the emotional blink of attention paradigm to investigate the impact of visual food cues on attentional capture under conditions of hunger and satiety. Participants were asked to detect targets which appeared in a rapid visual stream after different types of task-irrelevant distractors. We observed that food stimuli acquired increased power to capture attention and prevent target detection when participants were hungry. This occurred despite monetary incentives to perform well. Our findings suggest an attentional mechanism through which hunger heightens perception of food cues. As an objective behavioral marker of attentional sensitivity to food cues, the emotional attentional blink paradigm may provide a useful technique for studying individual differences and state manipulations in sensitivity to food cues. Published by Elsevier Ltd.

  3. Cues used by the black fly, Simulium annulus, for attraction to the common loon (Gavia immer).

    PubMed

    Weinandt, Meggin L; Meyer, Michael; Strand, Mac; Lindsay, Alec R

    2012-12-01

The parasitic relationship between a black fly, Simulium annulus, and the common loon (Gavia immer) has been considered one of the most exclusive relationships between any host species and a black fly species. To test the host specificity of this blood-feeding insect, we made a series of bird decoy presentations to black flies on loon-inhabited lakes in northern Wisconsin, U.S.A. To examine the importance of chemical and visual cues for black fly detection of and attraction to hosts, we made decoy presentations with and without chemical cues. Flies attracted to the decoys were collected, identified to species, and quantified. Results showed that S. annulus had a strong preference for common loon visual and chemical cues, although visual cues from Canada geese (Branta canadensis) and mallards (Anas platyrhynchos) did attract some flies in significantly smaller numbers. © 2012 The Society for Vector Ecology.

  4. Working memory can enhance unconscious visual perception.

    PubMed

    Pan, Yi; Cheng, Qiu-Ping; Luo, Qian-Ying

    2012-06-01

    We demonstrate that unconscious processing of a stimulus property can be enhanced when there is a match between the contents of working memory and the stimulus presented in the visual field. Participants first held a cue (a colored circle) in working memory and then searched for a brief masked target shape presented simultaneously with a distractor shape. When participants reported having no awareness of the target shape at all, search performance was more accurate in the valid condition, where the target matched the cue in color, than in the neutral condition, where the target mismatched the cue. This effect cannot be attributed to bottom-up perceptual priming from the presentation of a memory cue, because unconscious perception was not enhanced when the cue was merely perceptually identified but not actively held in working memory. These findings suggest that reentrant feedback from the contents of working memory modulates unconscious visual perception.

  5. The Effect of Eye Contact Is Contingent on Visual Awareness

    PubMed Central

    Xu, Shan; Zhang, Shen; Geng, Haiyan

    2018-01-01

    The present study explored how eye contact at different levels of visual awareness influences gaze-induced joint attention. We adopted a spatial-cueing paradigm, in which an averted gaze was used as an uninformative central cue for a joint-attention task. Prior to the onset of the averted-gaze cue, either supraliminal (Experiment 1) or subliminal (Experiment 2) eye contact was presented. The results revealed a larger subsequent gaze-cueing effect following supraliminal eye contact compared to a no-contact condition. In contrast, the gaze-cueing effect was smaller in the subliminal eye-contact condition than in the no-contact condition. These findings suggest that the facilitation effect of eye contact on coordinating social attention depends on visual awareness. Furthermore, subliminal eye contact might have an impact on subsequent social attention processes that differ from supraliminal eye contact. This study highlights the need to further investigate the role of eye contact in implicit social cognition. PMID:29467703

  6. Shape Perception and Navigation in Blind Adults

    PubMed Central

    Gori, Monica; Cappagli, Giulia; Baud-Bovy, Gabriel; Finocchietti, Sara

    2017-01-01

Different sensory systems interact to generate a representation of space and to navigate. Vision plays a critical role in the development of spatial representation. During navigation, vision is integrated with auditory and mobility cues. In blind individuals, visual experience is not available and navigation therefore lacks this important sensory signal. In blind individuals, compensatory mechanisms can be adopted to improve spatial and navigation skills. On the other hand, the limitations of these compensatory mechanisms are not completely clear. Both enhanced and impaired reliance on auditory cues in blind individuals have been reported. Here, we develop a new paradigm to test both auditory perception and navigation skills in blind and sighted individuals and to investigate the effect that visual experience has on the ability to reproduce simple and complex paths. During the navigation task, early blind, late blind and sighted individuals were required first to listen to an audio shape and then to recognize and reproduce it by walking. After each audio shape was presented, a static sound was played and the participants were asked to reach it. Movements were recorded with a motion tracking system. Our results show three main impairments specific to early blind individuals. The first is a tendency to compress the shapes reproduced during navigation. The second is difficulty in recognizing complex audio stimuli, and the third is difficulty in reproducing the desired shape: early blind participants occasionally reported perceiving a square but actually reproduced a circle during the navigation task. We discuss these results in terms of compromised spatial reference frames due to lack of visual input during the early period of development. PMID:28144226

  7. Food Avoidance Learning in Squirrel Monkeys and Common Marmosets

    PubMed Central

    Laska, Matthias; Metzker, Karin

    1998-01-01

    Using a conditioned food avoidance learning paradigm, six squirrel monkeys (Saimiri sciureus) and six common marmosets (Callithrix jacchus) were tested for their ability to (1) reliably form associations between visual or olfactory cues of a potential food and its palatability and (2) remember such associations over prolonged periods of time. We found (1) that at the group level both species showed one-trial learning with the visual cues color and shape, whereas only the marmosets were able to do so with the olfactory cue, (2) that all individuals from both species learned to reliably avoid the unpalatable food items within 10 trials, (3) a tendency in both species for quicker acquisition of the association with the visual cues compared with the olfactory cue, (4) a tendency for quicker acquisition and higher reliability of the aversion by the marmosets compared with the squirrel monkeys, and (5) that all individuals from both species were able to reliably remember the significance of the visual cues, color and shape, even after 4 months, whereas only the marmosets showed retention of the significance of the olfactory cues for up to 4 weeks. Furthermore, the results suggest that in both species tested, illness is not a necessary prerequisite for food avoidance learning but that the presumably innate rejection responses toward highly concentrated but nontoxic bitter and sour tastants are sufficient to induce robust learning and retention. PMID:10454364

  8. Searching for emotion or race: task-irrelevant facial cues have asymmetrical effects.

    PubMed

    Lipp, Ottmar V; Craig, Belinda M; Frost, Mareka J; Terry, Deborah J; Smith, Joanne R

    2014-01-01

    Facial cues of threat such as anger and other race membership are detected preferentially in visual search tasks. However, it remains unclear whether these facial cues interact in visual search. If both cues equally facilitate search, a symmetrical interaction would be predicted; anger cues should facilitate detection of other race faces and cues of other race membership should facilitate detection of anger. Past research investigating this race by emotional expression interaction in categorisation tasks revealed an asymmetrical interaction, suggesting that cues of other race membership may facilitate the detection of angry faces but not vice versa. Utilising the same stimuli and procedures across two search tasks, we asked participants to search for targets defined by either race or emotional expression. Contrary to the results from the categorisation paradigm, cues of anger facilitated detection of other race faces, whereas differences in race did not differentially influence detection of emotion targets.

  9. The role of visual representation in physics learning: dynamic versus static visualization

    NASA Astrophysics Data System (ADS)

    Suyatna, Agus; Anggraini, Dian; Agustina, Dina; Widyastuti, Dini

    2017-11-01

    This study aims to examine the role of visual representation in physics learning and to compare learning outcomes when using dynamic versus static visualization media. The study was conducted as a quasi-experiment with a pretest-posttest control group design. The sample comprised students from six classes at a state senior high school in Lampung Province. The experimental classes received instruction using dynamic visualization media, while the control classes used static visualization media. Both groups were given pre- and post-tests with the same instruments. Data were analysed with N-gain analysis, a normality test, a homogeneity test and a mean difference test. The results showed a significant increase in mean learning outcomes (N-gain; p < 0.05) in both the experimental and control classes, and average learning outcomes were significantly higher for students using dynamic visualization media than for those using static visualization media. This follows from the characteristics of visual representation: each type of visualization supports students' understanding differently. Dynamic visual media are more suitable for explaining material related to movement or for describing a process, whereas static visual media are appropriate for non-moving physical phenomena and for material that requires long-term observation.
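
    The N-gain measure used in studies of this kind is, on the usual reading, Hake's normalized gain: the fraction of the possible pre-to-post improvement actually achieved. A minimal sketch in Python, assuming percentage scores; the function name and example values are illustrative, not taken from the study.

        import numpy as np

        def normalized_gain(pre, post, max_score=100.0):
            """Hake's normalized gain: achieved improvement divided by the
            maximum possible improvement, g = (post - pre) / (max - pre)."""
            pre = np.asarray(pre, dtype=float)
            post = np.asarray(post, dtype=float)
            return (post - pre) / (max_score - pre)

        # Example: class means of 40 (pre) and 70 (post) on a 100-point test
        # give g = (70 - 40) / (100 - 40) = 0.5, conventionally a "medium" gain.
        print(normalized_gain(40, 70))  # 0.5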

  10. Making the Invisible Visible: Verbal but Not Visual Cues Enhance Visual Detection

    PubMed Central

    Lupyan, Gary; Spivey, Michael J.

    2010-01-01

    Background Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual cues. Methodology/Principal Findings Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d′). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Conclusions/Significance Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception. PMID:20628646
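
    The sensitivity index d′ reported here is the standard signal-detection measure, computed from hit and false-alarm rates. A minimal sketch, assuming rates strictly between 0 and 1; the example numbers are illustrative, not the study's data.

        from scipy.stats import norm

        def d_prime(hit_rate, false_alarm_rate):
            """Signal-detection sensitivity: separation of the signal and
            noise distributions in z-units, d' = z(H) - z(FA)."""
            return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

        # A cue that raises hits without raising false alarms raises d':
        print(d_prime(0.80, 0.20))  # ~1.68
        print(d_prime(0.70, 0.20))  # ~1.37, lower sensitivity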

  11. Integration of visual and motion cues for flight simulator requirements and ride quality investigation

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1976-01-01

    Investigations for the improvement of flight simulators are reported. Topics include: visual cues in landing, comparison of linear and nonlinear washout filters using a model of the vestibular system, and visual vestibular interactions (yaw axis). An abstract is given for a thesis on the applications of human dynamic orientation models to motion simulation.

  12. Smell or vision? The use of different sensory modalities in predator discrimination.

    PubMed

    Fischer, Stefan; Oberhummer, Evelyne; Cunha-Saraiva, Filipa; Gerber, Nina; Taborsky, Barbara

    2017-01-01

    Theory predicts that animals should adjust their escape responses to the perceived predation risk. The information animals obtain about potential predation risk may differ qualitatively depending on the sensory modality through which a cue is perceived. For instance, olfactory cues may reveal better information about the presence or absence of threats, whereas visual information can reliably transmit the position and potential attack distance of a predator. While this suggests a differential use of information perceived through the two sensory channels, the relative importance of visual vs. olfactory cues when distinguishing between different predation threats is still poorly understood. Therefore, we exposed individuals of the cooperatively breeding cichlid Neolamprologus pulcher to a standardized threat stimulus combined with either predator or non-predator cues presented either visually or chemically. We predicted that flight responses towards a threat stimulus are more pronounced if cues of dangerous rather than harmless heterospecifics are presented, and that N. pulcher, being an aquatic species, relies more on olfaction when discriminating between dangerous and harmless heterospecifics. N. pulcher responded faster to the threat stimulus, reached a refuge faster and was more likely to enter a refuge when predator cues were perceived. Unexpectedly, the sensory modality used to perceive the cues affected neither the escape response nor the duration of the recovery phase. This suggests that N. pulcher can discriminate heterospecific cues with similar acuity using vision or olfaction. We discuss how this ability may be advantageous in aquatic environments where visibility conditions vary strongly over time. The ability to rapidly discriminate between dangerous predators and harmless heterospecifics is crucial for the survival of prey animals. In seasonally fluctuating environments, sensory conditions may change over the year, making the use of multiple sensory modalities for heterospecific discrimination highly beneficial. Here we compared the efficacy of the visual and olfactory senses in the discrimination ability of the cooperatively breeding cichlid Neolamprologus pulcher. We presented individual fish with visual or olfactory cues of predators or of harmless heterospecifics and recorded their flight response. When exposed to predator cues, individuals responded faster, reached a refuge faster and were more likely to enter the refuge. Unexpectedly, the olfactory and visual senses seemed to be equally efficient in this discrimination task, suggesting that the seasonal variation of water conditions experienced by N. pulcher may necessitate the use of multiple sensory channels for the same task.

  13. Reliability and relative weighting of visual and nonvisual information for perceiving direction of self-motion during walking

    PubMed Central

    Saunders, Jeffrey A.

    2014-01-01

    Direction of self-motion during walking is indicated by multiple cues, including optic flow, nonvisual sensory cues, and motor prediction. I measured the reliability of perceived heading from visual and nonvisual cues during walking, and whether cues are weighted in an optimal manner. I used a heading alignment task to measure perceived heading during walking. Observers walked toward a target in a virtual environment with and without global optic flow. The target was simulated to be infinitely far away, so that it did not provide direct feedback about direction of self-motion. Variability in heading direction was low even without optic flow, with average RMS error of 2.4°. Global optic flow reduced variability to 1.9°–2.1°, depending on the structure of the environment. The small amount of variance reduction was consistent with optimal use of visual information. The relative contribution of visual and nonvisual information was also measured using cue conflict conditions. Optic flow specified a conflicting heading direction (±5°), and bias in walking direction was used to infer relative weighting. Visual feedback influenced heading direction by 16%–34% depending on scene structure, with more effect with dense motion parallax. The weighting of visual feedback was close to the predictions of an optimal integration model given the observed variability measures. PMID:24648194
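
    The "optimal integration model" referenced here is the standard inverse-variance (reliability-weighted) cue-combination rule. A minimal sketch using the variability figures from the abstract; the derived visual-noise value and predicted weight are back-of-the-envelope inferences, not reported numbers.

        import numpy as np

        def optimal_weights(sigmas):
            """Inverse-variance weights for cue combination:
            w_i = (1 / sigma_i**2) / sum_j (1 / sigma_j**2)."""
            reliability = 1.0 / np.asarray(sigmas, dtype=float) ** 2
            return reliability / reliability.sum()

        # Nonvisual-only heading noise ~2.4 deg RMS; combined noise ~2.1 deg
        # (the upper end of the reported 1.9-2.1 deg range). Under optimal
        # integration 1/s_comb**2 = 1/s_vis**2 + 1/s_nonvis**2, implying a
        # visual weight of ~23%, within the observed 16%-34% range.
        s_nonvis, s_comb = 2.4, 2.1
        s_vis = (1 / s_comb**2 - 1 / s_nonvis**2) ** -0.5
        print(round(s_vis, 2))                     # ~4.34 deg
        print(optimal_weights([s_vis, s_nonvis]))  # ~[0.23, 0.77]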

  14. Priming and the guidance by visual and categorical templates in visual search.

    PubMed

    Wilschut, Anna; Theeuwes, Jan; Olivers, Christian N L

    2014-01-01

    Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity toward the target feature, i.e., the extent to which observers searched selectively among items of the cued vs. uncued color, was relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in either overall performance or selectivity. Altogether the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, when priming is controlled for, categorical and visual templates enhance search guidance to a similar degree.

  15. A comparison of visuomotor cue integration strategies for object placement and prehension.

    PubMed

    Greenwald, Hal S; Knill, David C

    2009-01-01

    Visual cue integration strategies are known to depend on cue reliability and how rapidly the visual system processes incoming information. We investigated whether these strategies also depend on differences in the information demands for different natural tasks. Using two common goal-oriented tasks, prehension and object placement, we determined whether monocular and binocular information influence estimates of three-dimensional (3D) orientation differently depending on task demands. Both tasks rely on accurate 3D orientation estimates, but 3D position is potentially more important for grasping. Subjects placed an object on or picked up a disc in a virtual environment. On some trials, the monocular cues (aspect ratio and texture compression) and binocular cues (e.g., binocular disparity) suggested slightly different 3D orientations for the disc; these conflicts either were present upon initial stimulus presentation or were introduced after movement initiation, which allowed us to quantify how information from the cues accumulated over time. We analyzed the time-varying orientations of subjects' fingers in the grasping task and those of the object in the object placement task to quantify how different visual cues influenced motor control. In the first experiment, different subjects performed each task, and those performing the grasping task relied on binocular information more when orienting their hands than those performing the object placement task. When subjects in the second experiment performed both tasks in interleaved sessions, binocular cues were still more influential during grasping than object placement, and the different cue integration strategies observed for each task in isolation were maintained. In both experiments, the temporal analyses showed that subjects processed binocular information faster than monocular information, but task demands did not affect the time course of cue processing. How one uses visual cues for motor control depends on the task being performed, although how quickly the information is processed appears to be task invariant.

  16. AH-64 IHADSS aviator vision experiences in Operation Iraqi Freedom

    NASA Astrophysics Data System (ADS)

    Hiatt, Keith L.; Rash, Clarence E.; Harris, Eric S.; McGilberry, William H.

    2004-09-01

    Forty AH-64 Apache aviators representing a total of 8564 flight hours and 2260 combat hours during Operation Iraqi Freedom and its aftermath were surveyed for their visual experiences with the AH-64's monocular Integrated Helmet and Display Sighting System (IHADSS) helmet-mounted display in a combat environment. A major objective of this study was to determine if the frequencies of reports of visual complaints and illusions reported in the previous studies, addressing mostly benign training environments, differ in the more stressful combat environments. The most frequently reported visual complaints, both while and after flying, were visual discomfort and headache, which is consistent with previous studies. Frequencies of complaints after flying in the current study were numerically lower for all complaint types, but differences from previous studies are statistically significant only for visual discomfort and disorientation (vertigo). With the exception of "brownout/whiteout," reports of degraded visual cues in the current study were numerically lower for all types, but statistically significant only for impaired depth perception, decreased field of view, and inadvertent instrumental meteorological conditions. This study also found statistically lower reports of all static and dynamic illusions (with one exception, disorientation). This important finding is attributed to the generally flat and featureless geography present in a large portion of the Iraqi theater and to the shift in the way that the aviators use the two disparate visual inputs presented by the IHADSS monocular design (i.e., greater use of both eyes as opposed to concentrating primarily on display imagery).

  17. Distraction Effects of Smoking Cues in Antismoking Messages: Examining Resource Allocation to Message Processing as a Function of Smoking Cues and Argument Strength

    PubMed Central

    Lee, Sungkyoung; Cappella, Joseph N.

    2014-01-01

    Findings from previous studies on smoking cues and argument strength in antismoking messages have shown that the presence of smoking cues undermines the persuasiveness of antismoking public service announcements (PSAs) with weak arguments. This study conceptualized smoking cues (i.e., scenes showing smoking-related objects and behaviors) as stimuli motivationally relevant to the former smoker population and examined how smoking cues influence former smokers’ processing of antismoking PSAs. Specifically, by defining smoking cues and the strength of antismoking arguments in terms of resource allocation, this study examined former smokers’ recognition accuracy, memory strength, and memory judgment of visual (i.e., scenes excluding smoking cues) and audio information from antismoking PSAs. In line with previous findings, the results of the study showed that the presence of smoking cues undermined former smokers’ encoding of antismoking arguments, which includes the visual and audio information that compose the main content of antismoking messages. PMID:25477766

  18. Developments in the Use of Proximity and Ratio Cues in Velocity Judgments.

    ERIC Educational Resources Information Center

    Shire, Beatrice; Durkin, Kevin

    Young children's responses to a velocity inference task based on static pictorial stimuli giving cues of proximity and ratio were examined. Subjects (N=65) in preschool through second grade viewed pictures of snails moving horizontally or spiders suspended vertically and were asked to estimate which competitor would reach its destination first.…

  19. Audio-visual speech perception: a developmental ERP investigation

    PubMed Central

    Knowland, Victoria CP; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael SC

    2014-01-01

    Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development. PMID:24176002

  20. Do cattle (Bos taurus) retain an association of a visual cue with a food reward for a year?

    PubMed

    Hirata, Masahiko; Takeno, Nozomi

    2014-06-01

    Use of visual cues to locate specific food resources from a distance is a critical ability of animals foraging in a spatially heterogeneous environment. However, relatively little is known about how long animals can retain the learned cue-reward association without reinforcement. We compared feeding behavior of experienced and naive Japanese Black cows (Bos taurus) in discovering food locations in a pasture. Experienced animals had been trained to respond to a visual cue (plastic washtub) for a preferred food (grain-based concentrate) 1 year prior to the experiment, while naive animals had no exposure to the cue. Cows were tested individually in a test arena including tubs filled with the concentrate on three successive days (Days 1-3). Experienced cows located the first tub more quickly and visited more tubs than naive cows on Day 1 (usually P < 0.05), but these differences disappeared on Days 2 and 3. The performance of experienced cows tended to increase from Day 1 to Day 2 and level off thereafter. Our results suggest that Japanese Black cows can associate a visual cue with a food reward within a day and retain the association for 1 year despite a slight decay. © 2014 Japanese Society of Animal Science.

  1. Exogenous temporal cues enhance recognition memory in an object-based manner.

    PubMed

    Ohyama, Junji; Watanabe, Katsumi

    2010-11-01

    Exogenous attention enhances the perception of attended items in both a space-based and an object-based manner. Exogenous attention also improves recognition memory for attended items in the space-based mode. However, it has not been examined whether object-based exogenous attention enhances recognition memory. To address this issue, we examined whether a sudden visual change in a task-irrelevant stimulus (an exogenous cue) would affect participants' recognition memory for items that were serially presented around a cued time. The results showed that recognition accuracy for an item was strongly enhanced when the visual cue occurred at the same location and time as the item (Experiments 1 and 2). The memory enhancement effect occurred when the exogenous visual cue and an item belonged to the same object (Experiments 3 and 4) and even when the cue was counterpredictive of the timing of an item to be asked about (Experiment 5). The present study suggests that an exogenous temporal cue automatically enhances the recognition accuracy for an item that is presented at close temporal proximity to the cue and that recognition memory enhancement occurs in an object-based manner.

  2. Object based implicit contextual learning: a study of eye movements.

    PubMed

    van Asselen, Marieke; Sampaio, Joana; Pina, Ana; Castelo-Branco, Miguel

    2011-02-01

    Implicit contextual cueing refers to a top-down mechanism in which visual search is facilitated by learned contextual features. In the current study we aimed to investigate the mechanism underlying implicit contextual learning using object information as a contextual cue. Therefore, we measured eye movements during an object-based contextual cueing task. We demonstrated that visual search is facilitated by repeated object information and that this reduction in response times is associated with shorter fixation durations. This indicates that by memorizing associations between objects in our environment we can recognize objects faster, thereby facilitating visual search.

  3. Working Memory and Speech Recognition in Noise Under Ecologically Relevant Listening Conditions: Effects of Visual Cues and Noise Type Among Adults With Hearing Loss

    PubMed Central

    Stewart, Erin K.; Wu, Yu-Hsiang; Bishop, Christopher; Bentler, Ruth A.; Tremblay, Kelly

    2017-01-01

    Purpose This study evaluated the relationship between working memory (WM) and speech recognition in noise with different noise types as well as in the presence of visual cues. Method Seventy-six adults with bilateral, mild to moderately severe sensorineural hearing loss (mean age: 69 years) participated. Using a cross-sectional design, 2 measures of WM were taken: a reading span measure, and Word Auditory Recognition and Recall Measure (Smith, Pichora-Fuller, & Alexander, 2016). Speech recognition was measured with the Multi-Modal Lexical Sentence Test for Adults (Kirk et al., 2012) in steady-state noise and 4-talker babble, with and without visual cues. Testing was under unaided conditions. Results A linear mixed model revealed visual cues and pure-tone average as the only significant predictors of Multi-Modal Lexical Sentence Test outcomes. Neither WM measure nor noise type showed a significant effect. Conclusion The contribution of WM in explaining unaided speech recognition in noise was negligible and not influenced by noise type or visual cues. We anticipate that with audibility partially restored by hearing aids, the effects of WM will increase. For clinical practice to be affected, more significant effect sizes are needed. PMID:28744550
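
    The analysis named here is a linear mixed model fitted to repeated measures per listener. A hedged sketch of such a model in Python's statsmodels, using simulated data; all column names and effect sizes are hypothetical placeholders, not the study's dataset.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Simulated long-format data: one row per listener x condition.
        rng = np.random.default_rng(1)
        rows = []
        for subj in range(76):
            pta = rng.uniform(25, 60)  # pure-tone average, dB HL
            for modality in ("audio_only", "audiovisual"):
                for noise in ("steady_state", "babble"):
                    score = (70 - 0.5 * pta
                             + (10 if modality == "audiovisual" else 0)
                             + rng.normal(scale=5))
                    rows.append(dict(subject=subj, modality=modality,
                                     noise=noise, pta=pta, score=score))
        df = pd.DataFrame(rows)

        # Random intercept per listener; fixed effects for the predictors
        # the abstract names (visual cues, noise type, pure-tone average).
        result = smf.mixedlm("score ~ modality + noise + pta",
                             data=df, groups=df["subject"]).fit()
        print(result.summary())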

  4. Tailored information for cancer patients on the Internet: effects of visual cues and language complexity on information recall and satisfaction.

    PubMed

    van Weert, Julia C M; van Noort, Guda; Bol, Nadine; van Dijk, Liset; Tates, Kiek; Jansen, Jesse

    2011-09-01

    This study was designed to investigate the effects of visual cues and language complexity on satisfaction and information recall using a personalised website for lung cancer patients. In addition, age effects were investigated. An experiment using a 2 (complex vs. non-complex language) × 3 (text only vs. photograph vs. drawing) factorial design was conducted. In total, 200 respondents without cancer were exposed to one of the six conditions. Respondents were more satisfied with the comprehensibility of both websites when they were presented with a visual cue. A significant interaction effect was found between language complexity and photograph use, such that satisfaction with comprehensibility improved when a photograph was added to the complex language condition. Next, an interaction effect between age and the presence of a visual cue was found for satisfaction, indicating that adding a visual cue is more important for older adults than for younger adults. Finally, respondents who were exposed to a website with less complex language showed higher recall scores. The use of visual cues enhances satisfaction with the information presented on the website, and the use of non-complex language improves recall. The results of the current study can be used to improve computer-based information systems for patients. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  5. Working memory dependence of spatial contextual cueing for visual search.

    PubMed

    Pollmann, Stefan

    2018-05-10

    When spatial stimulus configurations repeat in visual search, a search facilitation, resulting in shorter search times, can be observed that is due to incidental learning. This contextual cueing effect appears to be rather implicit, uncorrelated with observers' explicit memory of display configurations. Nevertheless, as I review here, this search facilitation due to contextual cueing depends on visuospatial working memory resources, and it disappears when visuospatial working memory is loaded by a concurrent delayed match to sample task. However, the search facilitation immediately recovers for displays learnt under visuospatial working memory load when this load is removed in a subsequent test phase. Thus, latent learning of visuospatial configurations does not depend on visuospatial working memory, but the expression of learning, as memory-guided search in repeated displays, does. This working memory dependence has also consequences for visual search with foveal vision loss, where top-down controlled visual exploration strategies pose high demands on visuospatial working memory, in this way interfering with memory-guided search in repeated displays. Converging evidence for the contribution of working memory to contextual cueing comes from neuroimaging data demonstrating that distinct cortical areas along the intraparietal sulcus as well as more ventral parieto-occipital cortex are jointly activated by visual working memory and contextual cueing. © 2018 The British Psychological Society.

  6. Awareness in contextual cueing of visual search as measured with concurrent access- and phenomenal-consciousness tasks.

    PubMed

    Schlagbauer, Bernhard; Müller, Hermann J; Zehetleitner, Michael; Geyer, Thomas

    2012-10-25

    In visual search, context information can serve as a cue to guide attention to the target location. When observers repeatedly encounter displays with identical target-distractor arrangements, reaction times (RTs) are faster for repeated relative to nonrepeated displays, the latter containing novel configurations. This effect has been termed "contextual cueing." The present study asked whether information about the target location in repeated displays is "explicit" (or "conscious") in nature. To examine this issue, observers performed a test session (after an initial training phase in which RTs to repeated and nonrepeated displays were measured) in which the search stimuli were presented briefly and terminated by visual masks; following this, observers had to make a target localization response (with accuracy as the dependent measure) and indicate their visual experience and confidence associated with the localization response. The data were examined at the level of individual displays, i.e., in terms of whether or not a repeated display actually produced contextual cueing. The results were that (a) contextual cueing was driven by only a very small number of about four actually learned configurations; (b) localization accuracy was increased for learned relative to nonrepeated displays; and (c) both consciousness measures were enhanced for learned compared to nonrepeated displays. It is concluded that contextual cueing is driven by only a few repeated displays and the ability to locate the target in these displays is associated with increased visual experience.

  7. Psychological diversity in chimpanzees and humans: new longitudinal assessments of chimpanzees' understanding of attention.

    PubMed

    Povinelli, Daniel J; Dunphy-Lelii, Sarah; Reaux, James E; Mazza, Michael P

    2002-01-01

    We present the results of 5 experiments that assessed 7 chimpanzees' understanding of the visual experiences of others. The research was conducted when the animals were adolescents (7-8 years of age) and adults (12 years of age). The experiments examined their ability to recognize the equivalence between visual and tactile modes of gaining the attention of others (Exp. 1), their understanding that the vision of others can be impeded by opaque barriers (Exps. 2 and 5), and their ability to distinguish between postural cues which are and are not specifically relevant to visual attention (Exps. 3 and 4). The results suggest that although chimpanzees are excellent at exploiting the observable contingencies that exist between the facial and bodily postures of other agents on the one hand, and events in the world on the other, these animals may not construe others as possessing psychological states related to 'seeing' or 'attention.' Humans and chimpanzees share homologous suites of psychological systems that detect and process information about both the static and dynamic aspects of social life, but humans alone may possess systems which interpret behavior in terms of abstract, unobservable mental states such as seeing and attention. Copyright 2002 S. Karger AG, Basel

  8. Working Memory Enhances Visual Perception: Evidence from Signal Detection Analysis

    ERIC Educational Resources Information Center

    Soto, David; Wriglesworth, Alice; Bahrami-Balani, Alex; Humphreys, Glyn W.

    2010-01-01

    We show that perceptual sensitivity to visual stimuli can be modulated by matches between the contents of working memory (WM) and stimuli in the visual field. Observers were presented with an object cue (to hold in WM or to merely attend) and subsequently had to identify a brief target presented within a colored shape. The cue could be…

  9. Visual cues and listening effort: individual variability.

    PubMed

    Picou, Erin M; Ricketts, Todd A; Hornsby, Benjamin W Y

    2011-10-01

    To investigate the effect of visual cues on listening effort as well as whether predictive variables such as working memory capacity (WMC) and lipreading ability affect the magnitude of listening effort. Twenty participants with normal hearing were tested using a paired-associates recall task in 2 conditions (quiet and noise) and 2 presentation modalities (audio only [AO] and auditory-visual [AV]). Signal-to-noise ratios were adjusted to provide matched speech recognition across audio-only and AV noise conditions. Also measured were subjective perceptions of listening effort and 2 predictive variables: (a) lipreading ability and (b) WMC. Objective and subjective results indicated that listening effort increased in the presence of noise, but on average the addition of visual cues did not significantly affect the magnitude of listening effort. Although there was substantial individual variability, on average participants who were better lipreaders or had larger WMCs demonstrated reduced listening effort in noise in AV conditions. Overall, the results support the hypothesis that integrating auditory and visual cues requires cognitive resources in some participants. The data indicate that low lipreading ability or low WMC is associated with relatively effortful integration of auditory and visual information in noise.

  10. Feasibility of external rhythmic cueing with the Google Glass for improving gait in people with Parkinson's disease.

    PubMed

    Zhao, Yan; Nonnekes, Jorik; Storcken, Erik J M; Janssen, Sabine; van Wegen, Erwin E H; Bloem, Bastiaan R; Dorresteijn, Lucille D A; van Vugt, Jeroen P P; Heida, Tjitske; van Wezel, Richard J A

    2016-06-01

    New mobile technologies like smartglasses can deliver external cues that may improve gait in people with Parkinson's disease in their natural environment. However, the potential of these devices must first be assessed in controlled experiments. Therefore, we evaluated rhythmic visual and auditory cueing in a laboratory setting with a custom-made application for the Google Glass. Twelve participants (mean age = 66.8; mean disease duration = 13.6 years) were tested at end of dose. We compared several key gait parameters (walking speed, cadence, stride length, and stride length variability) and freezing of gait for three types of external cues (metronome, flashing light, and optic flow) and a control condition (no-cue). For all cueing conditions, the subjects completed several walking tasks of varying complexity. Seven inertial sensors attached to the feet, legs and pelvis captured motion data for gait analysis. Two experienced raters scored the presence and severity of freezing of gait using video recordings. User experience was evaluated through a semi-open interview. During cueing, a more stable gait pattern emerged, particularly on complicated walking courses; however, freezing of gait did not significantly decrease. The metronome was more effective than rhythmic visual cues and most preferred by the participants. Participants were overall positive about the usability of the Google Glass and willing to use it at home. Thus, smartglasses like the Google Glass could be used to provide personalized mobile cueing to support gait; however, in its current form, auditory cues seemed more effective than rhythmic visual cues.

  11. Floral reward, advertisement and attractiveness to honey bees in dioecious Salix caprea.

    PubMed

    Dötterl, Stefan; Glück, Ulrike; Jürgens, Andreas; Woodring, Joseph; Aas, Gregor

    2014-01-01

    In dioecious, zoophilous plants potential pollinators have to be attracted to both sexes, and pollinators must switch between individuals of both sexes for pollination to occur. It has often been suggested that males and females require different numbers of visits for maximum reproductive success because male fertility is more likely limited by access to mates, whereas female fertility is rather limited by resource availability. According to sexual selection theory, males therefore should invest more in pollinator attraction (advertisement, reward) than females. However, our knowledge of the sex-specific investment in floral rewards and advertisement, and its effects on pollinator behaviour, is limited. Here, we use an approach that includes chemical, spectrophotometric, and behavioural studies i) to elucidate differences in floral nectar reward and advertisement (visual, olfactory cues) in dioecious sallow, Salix caprea, ii) to determine the relative importance of visual and olfactory floral cues in attracting honey bee pollinators, and iii) to test for differential attractiveness of female and male inflorescence cues to honey bees. Nectar amount and sugar concentration are comparable between the sexes, but sugar composition varies. Olfactory sallow cues are more attractive to honey bees than visual cues; however, a combination of both cues elicits the strongest behavioural responses in bees. Male flowers, owing to their yellow pollen, are more colourful and emit a greater amount of scent than female flowers. Honey bees prefer the visual but not the olfactory display of males over those of females. In all, the data of our multifaceted study are consistent with sexual selection theory and provide novel insights into how the model organism honey bee uses visual and olfactory floral cues for locating host plants.

  12. Field Assessment of the Predation Risk - Food Availability Trade-Off in Crab Megalopae Settlement

    PubMed Central

    Tapia-Lewin, Sebastián; Pardo, Luis Miguel

    2014-01-01

    Settlement is a key process for meroplanktonic organisms as it determines the distribution of adult populations. Starvation and predation are two of the main causes of mortality during this period; therefore, settlement tends to be optimized in microhabitats with high food availability and low predator density. Furthermore, brachyuran megalopae actively select favorable habitats for settlement via chemical, visual and/or tactile cues. The main objective of this study was to assess the settlement of Metacarcinus edwardsii and Cancer plebejus under different combinations of food availability levels and predator presence. We determined, in the field, which factor is of greater relative importance when choosing a suitable microhabitat for settling. Passive larval collectors were deployed, crossing different scenarios of food availability and predator presence. We also explored whether megalopae actively choose predator-free substrates in response to visual and/or chemical cues, testing the response to combined visual and chemical cues and to each cue individually. Data were tested using a two-way factorial ANOVA. In both species, food availability had no significant effect on settlement success, but predator presence did; there was therefore no trade-off in this case, and megalopae responded strongly to predation risk by active aversion. Larvae of M. edwardsii responded to chemical and visual cues simultaneously, but there was no response to either cue by itself. Statistically, C. plebejus did not exhibit a differential response to cues, but showed a strong tendency similar to that of M. edwardsii. We conclude that crab megalopae actively select predator-free microhabitat, independently of food availability, using chemical and visual cues combined. The findings in this study highlight the great relevance of predation to the settlement process and recruitment of marine invertebrates with complex life cycles. PMID:24748151
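
    A two-way factorial ANOVA of the kind described tests the main effects of food and predator presence plus their interaction. A minimal sketch in Python's statsmodels with made-up settlement counts; the numbers are illustrative only.

        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        # Hypothetical settlers per collector under crossed factors:
        # food (low/high) x predator (absent/present).
        df = pd.DataFrame({
            "settlers": [12, 14, 11, 13, 5, 4, 6, 5],
            "food":     ["low", "low", "high", "high"] * 2,
            "predator": ["absent"] * 4 + ["present"] * 4,
        })

        # Two-way factorial ANOVA with interaction term.
        model = smf.ols("settlers ~ C(food) * C(predator)", data=df).fit()
        print(sm.stats.anova_lm(model, typ=2))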

  14. Visual cues that are effective for contextual saccade adaptation

    PubMed Central

    Azadi, Reza

    2014-01-01

    The accuracy of saccades, as maintained by saccade adaptation, has been shown to be context dependent: able to have different amplitude movements to the same retinal displacement dependent on motor contexts such as orbital starting location. There is conflicting evidence as to whether purely visual cues also effect contextual saccade adaptation and, if so, what function this might serve. We tested what visual cues might evoke contextual adaptation. Over 5 experiments, 78 naive subjects made saccades to circularly moving targets, which stepped outward or inward during the saccade depending on target movement direction, speed, or color and shape. To test if the movement or context postsaccade were critical, we stopped the postsaccade target motion (experiment 4) or neutralized the contexts by equating postsaccade target speed to an intermediate value (experiment 5). We found contextual adaptation in all conditions except those defined by color and shape. We conclude that some, but not all, visual cues before the saccade are sufficient for contextual adaptation. We conjecture that this visual contextuality functions to allow for different motor states for different coordinated movement patterns, such as coordinated saccade and pursuit motor planning. PMID:24647429

  15. Retro-dimension-cue benefit in visual working memory.

    PubMed

    Ye, Chaoxiong; Hu, Zhonghua; Ristaniemi, Tapani; Gendron, Maria; Liu, Qiang

    2016-10-24

    In visual working memory (VWM) tasks, participants' performance can be improved by a retro-object-cue. However, previous studies have not investigated whether participants' performance can also be improved by a retro-dimension-cue. Three experiments investigated this issue. We used a recall task with a retro-dimension-cue in all experiments. In Experiment 1, we found benefits from retro-dimension-cues compared to neutral cues. This retro-dimension-cue benefit is reflected in an increased probability of reporting the target, but not in the probability of reporting the non-target, as well as increased precision with which this item is remembered. Experiment 2 replicated the retro-dimension-cue benefit and showed that the length of the blank interval after the cue disappeared did not influence recall performance. Experiment 3 replicated the results of Experiment 2 with a lower memory load. Our studies provide evidence that there is a robust retro-dimension-cue benefit in VWM. Participants can use internal attention to flexibly allocate cognitive resources to a particular dimension of memory representations. The results also support the feature-based storing hypothesis.
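
    The quantities reported here (probability of reporting the target, probability of reporting a non-target, and the precision of memory) are, on the standard reading, parameters of a mixture ("swap") model fitted to circular recall errors. A minimal sketch of such a model's density, assuming von Mises response noise; this illustrates the general technique, not the authors' code.

        import numpy as np
        from scipy.stats import vonmises

        def mixture_pdf(error, p_target, p_nontarget, kappa, nontarget_offsets):
            """Continuous-recall mixture: a response is a noisy report of the
            target (precision kappa), a noisy report of a non-target, or a
            uniform random guess."""
            p_guess = 1.0 - p_target - p_nontarget
            density = p_target * vonmises.pdf(error, kappa)
            for offset in nontarget_offsets:  # target-to-nontarget offsets
                density += (p_nontarget / len(nontarget_offsets)
                            * vonmises.pdf(error - offset, kappa))
            return density + p_guess / (2 * np.pi)

        # Evaluate the model at zero error for a three-item display:
        print(mixture_pdf(0.0, p_target=0.7, p_nontarget=0.1, kappa=8.0,
                          nontarget_offsets=[1.2, -2.0]))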

  17. Neurons in the pigeon caudolateral nidopallium differentiate Pavlovian conditioned stimuli but not their associated reward value in a sign-tracking paradigm

    PubMed Central

    Kasties, Nils; Starosta, Sarah; Güntürkün, Onur; Stüttgen, Maik C.

    2016-01-01

    Animals exploit visual information to identify objects, form stimulus-reward associations, and prepare appropriate behavioral responses. The nidopallium caudolaterale (NCL), an associative region of the avian endbrain, contains neurons exhibiting prominent response modulation during presentation of reward-predicting visual stimuli, but it is unclear whether neural activity represents valuation signals, stimulus properties, or sensorimotor contingencies. To test the hypothesis that NCL neurons represent stimulus value, we subjected pigeons to a Pavlovian sign-tracking paradigm in which visual cues predicted rewards differing in magnitude (large vs. small) and delay to presentation (short vs. long). Subjects’ strength of conditioned responding to visual cues reliably differentiated between predicted reward types and thus indexed valuation. The majority of NCL neurons discriminated between visual cues, with discriminability peaking shortly after stimulus onset and being maintained at lower levels throughout the stimulus presentation period. However, while some cells’ firing rates correlated with reward value, such neurons were not more frequent than expected by chance. Instead, neurons formed discernible clusters which differed in their preferred visual cue. We propose that this activity pattern constitutes a prerequisite for using visual information in more complex situations e.g. requiring value-based choices. PMID:27762287

  18. Examining the Effect of Age on Visual-Vestibular Self-Motion Perception Using a Driving Paradigm.

    PubMed

    Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L

    2017-05-01

    Previous psychophysical research has examined how younger adults and non-human primates integrate visual and vestibular cues to perceive self-motion. However, there is much to be learned about how multisensory self-motion perception changes with age, and how these changes affect performance on everyday tasks involving self-motion. Evidence suggests that older adults display heightened multisensory integration compared with younger adults; however, few previous studies have examined this for visual-vestibular integration. To explore age differences in the way that visual and vestibular cues contribute to self-motion perception, we had younger and older participants complete a basic driving task containing visual and vestibular cues. We compared their performance against a previously established control group that experienced visual cues alone. Performance measures included speed, speed variability, and lateral position. Vestibular inputs resulted in more precise speed control among older adults, but not younger adults, when traversing curves. Older adults demonstrated more variability in lateral position when vestibular inputs were available versus when they were absent. These observations align with previous evidence of age-related differences in multisensory integration and demonstrate that they may extend to visual-vestibular integration. These findings may have implications for vehicle and simulator design when considering older users.

  19. Retrospective attention enhances visual working memory in the young but not the old: an ERP study

    PubMed Central

    Duarte, Audrey; Hearons, Patricia; Jiang, Yashu; Delvin, Mary Courtney; Newsome, Rachel N.; Verhaeghen, Paul

    2013-01-01

    Behavioral evidence from the young suggests spatial cues that orient attention toward task relevant items in visual working memory (VWM) enhance memory capacity. Whether older adults can also use retrospective cues (“retro-cues”) to enhance VWM capacity is unknown. In the current event-related potential (ERP) study, young and old adults performed a VWM task in which spatially informative retro-cues were presented during maintenance. Young but not older adults’ VWM capacity benefitted from retro-cueing. The contralateral delay activity (CDA) ERP index of VWM maintenance was attenuated after the retro-cue, which effectively reduced the impact of memory load. CDA amplitudes were reduced prior to retro-cue onset in the old only. Despite a preserved ability to delete items from VWM, older adults may be less able to use retrospective attention to enhance memory capacity when expectancy of impending spatial cues disrupts effective VWM maintenance. PMID:23445536

  20. The effects of stereo disparity on the behavioural and electrophysiological correlates of perception of audio-visual motion in depth.

    PubMed

    Harrison, Neil R; Witheridge, Sian; Makin, Alexis; Wuerger, Sophie M; Pegna, Alan J; Meyer, Georg F

    2015-11-01

    Motion is represented by low-level signals, such as size-expansion in vision or loudness changes in the auditory modality. The visual and auditory signals from the same object or event may be integrated and facilitate detection. We explored behavioural and electrophysiological correlates of congruent and incongruent audio-visual depth motion in conditions where auditory level changes, visual expansion, and visual disparity cues were manipulated. In Experiment 1 participants discriminated auditory motion direction whilst viewing looming or receding, 2D or 3D, visual stimuli. Responses were faster and more accurate for congruent than for incongruent audio-visual cues, and the congruency effect (i.e., difference between incongruent and congruent conditions) was larger for visual 3D cues compared to 2D cues. In Experiment 2, event-related potentials (ERPs) were collected during presentation of the 2D and 3D, looming and receding, audio-visual stimuli, while participants detected an infrequent deviant sound. Our main finding was that audio-visual congruity was affected by retinal disparity at an early processing stage (135-160ms) over occipito-parietal scalp. Topographic analyses suggested that similar brain networks were activated for the 2D and 3D congruity effects, but that cortical responses were stronger in the 3D condition. Differences between congruent and incongruent conditions were observed between 140-200ms, 220-280ms, and 350-500ms after stimulus onset. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Specific and nonspecific neural activity during selective processing of visual representations in working memory.

    PubMed

    Oh, Hwamee; Leung, Hoi-Chung

    2010-02-01

    In this fMRI study, we investigated prefrontal cortex (PFC) and visual association regions during selective information processing. We recorded behavioral responses and neural activity during a delayed recognition task with a cue presented during the delay period. A specific cue ("Face" or "Scene") was used to indicate which one of the two initially viewed pictures of a face and a scene would be tested at the end of a trial, whereas a nonspecific cue ("Both") was used as control. As expected, the specific cues facilitated behavioral performance (faster response times) compared to the nonspecific cue. A postexperiment memory test showed that the items cued to remember were better recognized than those not cued. The fMRI results showed largely overlapped activations across the three cue conditions in dorsolateral and ventrolateral PFC, dorsomedial PFC, posterior parietal cortex, ventral occipito-temporal cortex, dorsal striatum, and pulvinar nucleus. Among those regions, dorsomedial PFC and inferior occipital gyrus remained active during the entire postcue delay period. Differential activity was mainly found in the association cortices. In particular, the parahippocampal area and posterior superior parietal lobe showed significantly enhanced activity during the postcue period of the scene condition relative to the Face and Both conditions. No regions showed differentially greater responses to the face cue. Our findings suggest that a better representation of visual information in working memory may depend on enhancing the more specialized visual association areas or their interaction with PFC.

  2. Cueing listeners to attend to a target talker progressively improves word report as the duration of the cue-target interval lengthens to 2,000 ms.

    PubMed

    Holmes, Emma; Kitterick, Padraig T; Summerfield, A Quentin

    2018-04-25

    Endogenous attention is typically studied by presenting instructive cues in advance of a target stimulus array. For endogenous visual attention, task performance improves as the duration of the cue-target interval increases up to 800 ms. Less is known about how endogenous auditory attention unfolds over time or the mechanisms by which an instructive cue presented in advance of an auditory array improves performance. The current experiment used five cue-target intervals (0, 250, 500, 1,000, and 2,000 ms) to compare four hypotheses for how preparatory attention develops over time in a multi-talker listening task. Young adults were cued to attend to a target talker who spoke in a mixture of three talkers. Visual cues indicated the target talker's spatial location or their gender. Participants directed attention to location and gender simultaneously ("objects") at all cue-target intervals. Participants were consistently faster and more accurate at reporting words spoken by the target talker when the cue-target interval was 2,000 ms than 0 ms. In addition, the latency of correct responses progressively shortened as the duration of the cue-target interval increased from 0 to 2,000 ms. These findings suggest that the mechanisms involved in preparatory auditory attention develop gradually over time, taking at least 2,000 ms to reach optimal configuration, yet providing cumulative improvements in speech intelligibility as the duration of the cue-target interval increases from 0 to 2,000 ms. These results demonstrate an improvement in performance for cue-target intervals longer than those that have been reported previously in the visual or auditory modalities.

  3. Visual cues for data mining

    NASA Astrophysics Data System (ADS)

    Rogowitz, Bernice E.; Rabenhorst, David A.; Gerth, John A.; Kalin, Edward B.

    1996-04-01

    This paper describes a set of visual techniques, based on principles of human perception and cognition, which can help users analyze and develop intuitions about tabular data. Collections of tabular data are widely available, including, for example, multivariate time series data, customer satisfaction data, stock market performance data, multivariate profiles of companies and individuals, and scientific measurements. In our approach, we show how visual cues can help users perform a number of data mining tasks, including identifying correlations and interaction effects, finding clusters and understanding the semantics of cluster membership, identifying anomalies and outliers, and discovering multivariate relationships among variables. These cues are derived from psychological studies on perceptual organization, visual search, perceptual scaling, and color perception. These visual techniques are presented as a complement to the statistical and algorithmic methods more commonly associated with these tasks, and provide an interactive interface for the human analyst.

  4. Contextual cueing: implicit learning and memory of visual context guides spatial attention.

    PubMed

    Chun, M M; Jiang, Y

    1998-06-01

    Global context plays an important, but poorly understood, role in visual tasks. This study demonstrates that a robust memory for visual context exists to guide spatial attention. Global context was operationalized as the spatial layout of objects in visual search displays. Half of the configurations were repeated across blocks throughout the entire session, and targets appeared within consistent locations in these arrays. Targets appearing in learned configurations were detected more quickly. This newly discovered form of search facilitation is termed contextual cueing. Contextual cueing is driven by incidentally learned associations between spatial configurations (context) and target locations. This benefit was obtained despite chance performance for recognizing the configurations, suggesting that the memory for context was implicit. The results show how implicit learning and memory of visual context can guide spatial attention towards task-relevant aspects of a scene.

  5. Demonstrating the Potential for Dynamic Auditory Stimulation to Contribute to Motion Sickness

    PubMed Central

    Keshavarz, Behrang; Hettinger, Lawrence J.; Kennedy, Robert S.; Campos, Jennifer L.

    2014-01-01

    Auditory cues can create the illusion of self-motion (vection) in the absence of visual or physical stimulation. The present study aimed to determine whether auditory cues alone can also elicit motion sickness and how auditory cues contribute to motion sickness when added to visual motion stimuli. Twenty participants were seated in front of a curved projection display and were exposed to a virtual scene that constantly rotated around the participant's vertical axis. The virtual scene contained either visual-only, auditory-only, or a combination of corresponding visual and auditory cues. All participants performed all three conditions in a counterbalanced order. Participants tilted their heads alternately towards the right or left shoulder in all conditions during stimulus exposure in order to create pseudo-Coriolis effects and to maximize the likelihood for motion sickness. Measurements of motion sickness (onset, severity), vection (latency, strength, duration), and postural steadiness (center of pressure) were recorded. Results showed that adding auditory cues to the visual stimuli did not, on average, affect motion sickness and postural steadiness, but it did reduce vection onset times and increase vection strength compared to pure visual or pure auditory stimulation. Eighteen of the 20 participants reported at least slight motion sickness in the two conditions including visual stimuli. More interestingly, six participants also reported slight motion sickness during pure auditory stimulation and two of the six participants stopped the pure auditory test session due to motion sickness. The present study is the first to demonstrate that motion sickness may be caused by pure auditory stimulation, which we refer to as “auditorily induced motion sickness”. PMID:24983752

  6. Understanding Clinicians' Use of Cues When Assessing the Future Risk of Violence: A Clinical Judgement Analysis in the Psychiatric Setting.

    PubMed

    Brown, Barbara; Rakow, Tim

    2016-01-01

    Research is sparse on how clinicians' judgement informs their violence risk assessments. Yet, determining preferences for which risk factors are used, and how they are weighted and combined, is important to understanding such assessments. This study investigated clinicians' use of static and dynamic cues when assessing risk in individual patients and for dynamic cues considered in the recent and distant past. Clinicians provided three violence risk assessments for 41 separate hypothetical cases of hospitalized patients, each defined by eight cues (e.g., psychopathy and past violence severity/frequency). A clinical judgement analysis, using regression analysis of judgements for multiple cases, created linear models reflecting the major influences on each individual clinician's judgement. Risk assessments could be successfully predicted by between one and four cues, and there was close agreement between different clinicians' models regarding which cues were relevant for a given assessment. However, which cues were used varied between assessments: history of recent violence predicted assessments of in-hospital risk, whereas violence in the distant past predicted the assessed risk in the community. Crucially, several factors included in actuarial/structured risk assessment tools had little influence on clinicians' assessments. Our findings point to the adaptivity in clinicians' violence risk assessments, with a preference for relying on information consistent with the setting for which the assessment applies. The implication is that clinicians are open to using different structured assessment tools for different kinds of risk assessment, although they may seek greater flexibility in their assessments than some structured risk assessment tools afford (e.g., discounting static risk factors). Across three separate violence risk assessments, clinicians' risk assessments were more strongly influenced by dynamic cues that can vary over time (e.g., level of violence) than by static cues that are fixed for a given individual (e.g., a diagnosis of psychopathy). The variation in the factors affecting risk assessments for different settings (i.e., in hospital versus in the community) was greater than the variability between clinicians for such judgements. The findings imply a preference for risk assessment strategies that offer flexibility: either using different risk assessment tools for different purposes and settings or employing a single tool that allows for different inputs into the risk assessment depending upon the nature of the assessment. The appropriateness of these clinical intuitions about violence risk that are implied by our findings warrants further investigation. Copyright © 2015 John Wiley & Sons, Ltd.
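
    The regression-based judgement analysis described above is straightforward to sketch. Below is a minimal, purely illustrative Python example: the cue names, weights, and data are hypothetical stand-ins rather than the study's materials, and ordinary least squares stands in for whatever regression variant the authors used.

        import numpy as np

        # Judgement-analysis sketch: regress one clinician's risk ratings for
        # many hypothetical cases on the cue values defining those cases; the
        # fitted coefficients describe that clinician's judgement policy.
        rng = np.random.default_rng(0)
        cues = ["recent_violence", "distant_violence", "psychopathy", "substance_use"]

        # 41 hypothetical cases (matching the study's case count), each
        # defined by standardized cue values.
        X = rng.normal(size=(41, len(cues)))

        # Simulated clinician who weights recent violence heavily, as the
        # study reports for in-hospital risk assessments.
        true_weights = np.array([0.8, 0.1, 0.05, 0.2])
        ratings = X @ true_weights + rng.normal(scale=0.3, size=41)

        # Fit the linear judgement model (least squares with an intercept).
        design = np.column_stack([np.ones(41), X])
        coefs, *_ = np.linalg.lstsq(design, ratings, rcond=None)
        for name, w in zip(cues, coefs[1:]):
            print(f"{name:>18}: {w:+.2f}")

    In this toy setup, a near-zero recovered weight for a cue (e.g., psychopathy) would mirror the paper's finding that some factors from structured tools had little influence on clinicians' assessments.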

  7. Two (or three) is one too many: testing the flexibility of contextual cueing with multiple target locations.

    PubMed

    Zellin, Martina; Conci, Markus; von Mühlenen, Adrian; Müller, Hermann J

    2011-10-01

    Visual search for a target object is facilitated when the object is repeatedly presented within an invariant context of surrounding items ("contextual cueing"; Chun & Jiang, Cognitive Psychology, 36, 28-71, 1998). The present study investigated whether such invariant contexts can cue more than one target location. In a series of three experiments, we showed that contextual cueing is significantly reduced when invariant contexts are paired with two rather than one possible target location, whereas no contextual cueing occurs with three distinct target locations. Closer data inspection revealed that one "dominant" target always exhibited substantially more contextual cueing than did the other, "minor" target(s), which caused negative contextual-cueing effects. However, minor targets could benefit from the invariant context when they were spatially close to the dominant target. In sum, our experiments suggest that contextual cueing can guide visual attention to a spatially limited region of the display, only enhancing the detection of targets presented inside that region.

  8. Perceiving Prosody from the Face and Voice: Distinguishing Statements from Echoic Questions in English.

    ERIC Educational Resources Information Center

    Srinivasan, Ravindra J.; Massaro, Dominic W.

    2003-01-01

    Examined the processing of potential auditory and visual cues that differentiate statements from echoic questions. Found that both auditory and visual cues reliably conveyed statement and question intonation, were successfully synthesized, and generalized to other utterances. (Author/VWL)

  9. Vestibular-visual interactions in flight simulators

    NASA Technical Reports Server (NTRS)

    Clark, B.

    1977-01-01

    The following research work is reported: (1) vestibular-visual interactions; (2) flight management and crew system interactions; (3) peripheral cue utilization in simulation technology; (4) control of signs and symptoms of motion sickness; (5) auditory cue utilization in flight simulators, and (6) vestibular function: Animal experiments.

  10. Cue-induced brain activity in pathological gamblers.

    PubMed

    Crockford, David N; Goodyear, Bradley; Edwards, Jodi; Quickfall, Jeremy; el-Guebaly, Nady

    2005-11-15

    Previous studies using functional magnetic resonance imaging (fMRI) have identified differential brain activity in healthy subjects performing gambling tasks and in pathological gambling (PG) subjects when exposed to motivational and emotional predecessors for gambling as well as during gambling or response inhibition tasks. The goal of the present study was to determine if PG subjects exhibit differential brain activity when exposed to visual gambling cues. Ten male DSM-IV-TR PG subjects and 10 matched healthy control subjects underwent fMRI during visual presentations of gambling-related video alternating with video of nature scenes. Pathological gambling subjects and control subjects exhibited overlap in areas of brain activity in response to the visual gambling cues; however, compared with control subjects, PG subjects exhibited significantly greater activity in the right dorsolateral prefrontal cortex (DLPFC), including the inferior and medial frontal gyri, the right parahippocampal gyrus, and left occipital cortex, including the fusiform gyrus. Pathological gambling subjects also reported a significant increase in mean craving for gambling after the study. Post hoc analyses revealed a dissociation in visual processing stream (dorsal vs. ventral) activation by subject group and cue type. These findings may represent a component of cue-induced craving for gambling or conditioned behavior that could underlie pathological gambling.

  11. Depth reversals in stereoscopic displays driven by apparent size

    NASA Astrophysics Data System (ADS)

    Sacher, Gunnar; Hayes, Amy; Thornton, Ian M.; Sereno, Margaret E.; Malony, Allen D.

    1998-04-01

    In visual scenes, depth information is derived from a variety of monocular and binocular cues. When in conflict, a monocular cue is sometimes able to override the binocular information. We examined the accuracy of relative depth judgments in orthographic, stereoscopic displays and found that perceived relative size can override binocular disparity as a depth cue in a situation where the relative size information is itself generated from disparity information, not from retinal size difference. A size discrimination task confirmed the assumption that disparity information was perceived and used to generate apparent size differences. The tendency for the apparent size cue to override disparity information can be modulated by varying the strength of the apparent size cue. In addition, an analysis of reaction times provides supporting evidence for this novel depth reversal effect. We believe that human perception must be regarded as an important component of stereoscopic applications. Hence, if applications are to be effective and accurate, it is necessary to take into account the richness and complexity of the human visual perceptual system that interacts with them. We discuss implications of this and similar research for human performance in virtual environments, the design of visual presentations for virtual worlds, and the design of visualization tools.

  12. Sexual selection in the squirrel treefrog Hyla squirella: the role of multimodal cue assessment in female choice

    USGS Publications Warehouse

    Taylor, Ryan C.; Buchanan, Bryant W.; Doherty, Jessie L.

    2007-01-01

    Anuran amphibians have provided an excellent system for the study of animal communication and sexual selection. Studies of female mate choice in anurans, however, have focused almost exclusively on the role of auditory signals. In this study, we examined the effect of both auditory and visual cues on female choice in the squirrel treefrog. Our experiments used a two-choice protocol in which we varied male vocalization properties, visual cues, or both, to assess female preferences for the different cues. Females discriminated against high-frequency calls and expressed a strong preference for calls that contained more energy per unit time (faster call rate). Females expressed a preference for the visual stimulus of a model of a calling male when call properties at the two speakers were held the same. They also showed a significant attraction to a model possessing a relatively large lateral body stripe. These data indicate that visual cues do play a role in mate attraction in this nocturnal frog species. Furthermore, this study adds to a growing body of evidence that suggests that multimodal signals play an important role in sexual selection.

  13. Modeling human perception and estimation of kinematic responses during aircraft landing

    NASA Technical Reports Server (NTRS)

    Schmidt, David K.; Silk, Anthony B.

    1988-01-01

    The thrust of this research is to determine estimation accuracy of aircraft responses based on observed cues. By developing the geometric relationships between the outside visual scene and the kinematics during landing, visual and kinesthetic cues available to the pilot were modeled. Both foveal and peripheral vision were examined. The objective was to first determine estimation accuracy in a variety of flight conditions, and second to ascertain which parameters are most important and lead to the best achievable accuracy in estimating the actual vehicle response. It was found that altitude estimation was very sensitive to the field of view (FOV). For this model the motion cue of perceived vertical acceleration was shown to be less important than the visual cues. The inclusion of runway geometry in the visual scene increased estimation accuracy in most cases. Finally, it was shown that for this model, if the pilot has an incorrect internal model of the system kinematics, the choice of observations thought to be 'optimal' may in fact be suboptimal.
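
    The kind of observer model this abstract describes can be illustrated with a one-dimensional Kalman-style estimator. The snippet below is an assumption-laden toy, not the paper's actual model: the state is altitude during a constant-rate descent, the observation is a single noisy visual altitude cue, and all noise values are invented.

        import numpy as np

        # Minimal 1-D Kalman observer: predict from an internal model of the
        # kinematics, then correct with a noisy visual observation.
        rng = np.random.default_rng(2)
        dt, sink_rate = 0.1, -2.0   # time step (s) and descent rate (m/s)
        q, r = 0.05, 25.0           # process and observation noise variances

        alt_true, alt_est, p = 100.0, 100.0, 10.0
        for _ in range(50):
            alt_true += sink_rate * dt                   # true kinematics
            obs = alt_true + rng.normal(scale=r ** 0.5)  # noisy visual cue
            alt_est += sink_rate * dt                    # predict (internal model)
            p += q
            k = p / (p + r)                              # Kalman gain
            alt_est += k * (obs - alt_est)               # update
            p *= 1 - k

        print(f"true altitude {alt_true:.1f} m, estimate {alt_est:.1f} m")

    Mis-specifying sink_rate in the predict step plays the role of the pilot's incorrect internal model of the kinematics: the supposedly 'optimal' weighting of observations then becomes suboptimal, as the abstract notes.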

  14. Age-related changes in event-cued visual and auditory prospective memory proper.

    PubMed

    Uttl, Bob

    2006-06-01

    We rely upon prospective memory proper (ProMP) to bring back to awareness previously formed plans and intentions at the right place and time, and to enable us to act upon those plans and intentions. To examine age-related changes in ProMP, younger and older participants made decisions about simple stimuli (ongoing task) and at the same time were required to respond to a ProM cue, either a picture (visually cued ProM test) or a sound (auditorily cued ProM test), embedded in a simultaneously presented series of similar stimuli (either pictures or sounds). The cue display size or loudness increased across trials until a response was made. The cue size and cue loudness at the time of response indexed ProMP. The main results showed that both visual and auditory ProMP declined with age, and that such declines were mediated by age declines in sensory functions (visual acuity and hearing level), processing resources, working memory, intelligence, and ongoing task resource allocation.

  15. Deployment of spatial attention to words in central and peripheral vision.

    PubMed

    Ducrot, Stéphanie; Grainger, Jonathan

    2007-05-01

    Four perceptual identification experiments examined the influence of spatial cues on the recognition of words presented in central vision (with fixation on either the first or last letter of the target word) and in peripheral vision (displaced left or right of a central fixation point). Stimulus location had a strong effect on word identification accuracy in both central and peripheral vision, showing a strong right visual field superiority that did not depend on eccentricity. Valid spatial cues improved word identification for peripherally presented targets but were largely ineffective for centrally presented targets. Effects of spatial cuing interacted with visual field effects in Experiment 1, with valid cues reducing the right visual field superiority for peripherally located targets, but this interaction was shown to depend on the type of neutral cue. These results provide further support for the role of attentional factors in visual field asymmetries obtained with targets in peripheral vision but not with centrally presented targets.

  16. Setting and changing feature priorities in visual short-term memory.

    PubMed

    Kalogeropoulou, Zampeta; Jagadeesh, Akshay V; Ohl, Sven; Rolfs, Martin

    2017-04-01

    Many everyday tasks require prioritizing some visual features over competing ones, both during the selection from the rich sensory input and while maintaining information in visual short-term memory (VSTM). Here, we show that observers can change priorities in VSTM when, initially, they attended to a different feature. Observers reported from memory the orientation of one of two spatially interspersed groups of black and white gratings. Using colored pre-cues (presented before stimulus onset) and retro-cues (presented after stimulus offset) predicting the to-be-reported group, we manipulated observers' feature priorities independently during stimulus encoding and maintenance, respectively. Valid pre-cues reliably increased observers' performance (reduced guessing, increased report precision) as compared to neutral ones; invalid pre-cues had the opposite effect. Valid retro-cues also consistently improved performance (by reducing random guesses), even if the unexpected group suddenly became relevant (invalid-valid condition). Thus, feature-based attention can reshape priorities in VSTM protecting information that would otherwise be forgotten.
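
    The "reduced guessing, increased report precision" decomposition reported above is typically obtained by fitting a mixture model to report errors. The sketch below shows the generic Zhang-and-Luck-style mixture (a von Mises distribution around the true value plus a uniform guessing component); it illustrates the analysis idea rather than the authors' exact fitting procedure, and the data and grid search are invented.

        import numpy as np
        from scipy.stats import vonmises

        # Mixture model for report errors: with probability (1 - g) a report
        # is centered on the true orientation with concentration kappa; with
        # probability g it is a uniform random guess.
        def neg_log_likelihood(params, errors):
            g, kappa = params
            p = (1 - g) * vonmises.pdf(errors, kappa) + g / (2 * np.pi)
            return -np.sum(np.log(p))

        rng = np.random.default_rng(3)
        errors = np.concatenate([vonmises.rvs(8.0, size=80, random_state=rng),
                                 rng.uniform(-np.pi, np.pi, 20)])

        # A coarse grid search stands in for a proper optimizer.
        best = min(((g, k) for g in np.linspace(0.01, 0.6, 30)
                           for k in np.linspace(1.0, 20.0, 40)),
                   key=lambda prm: neg_log_likelihood(prm, errors))
        print(f"estimated guess rate {best[0]:.2f}, concentration {best[1]:.1f}")

    On data like this, a valid retro-cue condition would show up as a lower fitted guess rate g, while a valid pre-cue condition could show both a lower g and a higher concentration kappa.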

  17. Can You Hear That Peak? Utilization of Auditory and Visual Feedback at Peak Limb Velocity.

    PubMed

    Loria, Tristan; de Grosbois, John; Tremblay, Luc

    2016-09-01

    At rest, the central nervous system combines and integrates multisensory cues to yield an optimal percept. When engaging in action, the relative weighing of sensory modalities has been shown to be altered. Because the timing of peak velocity is the critical moment in some goal-directed movements (e.g., overarm throwing), the current study sought to test whether visual and auditory cues are optimally integrated at that specific kinematic marker when it is the critical part of the trajectory. Participants performed an upper-limb movement in which they were required to reach their peak limb velocity when the right index finger intersected a virtual target (i.e., a flinging movement). Brief auditory, visual, or audiovisual feedback (i.e., 20 ms in duration) was provided to participants at peak limb velocity. Performance was assessed primarily through the resultant position of peak limb velocity and the variability of that position. Relative to when no feedback was provided, auditory feedback significantly reduced the resultant endpoint variability of the finger position at peak limb velocity. However, no such reductions were found for the visual or audiovisual feedback conditions. Further, providing both auditory and visual cues concurrently also failed to yield the theoretically predicted improvements in endpoint variability. Overall, the central nervous system can make significant use of an auditory cue but may not optimally integrate a visual and auditory cue at peak limb velocity, when peak velocity is the critical part of the trajectory.

  18. What You Don't Notice Can Harm You: Age-Related Differences in Detecting Concurrent Visual, Auditory, and Tactile Cues.

    PubMed

    Pitts, Brandon J; Sarter, Nadine

    2018-06-01

    Objective: This research sought to determine whether people can perceive and process three nonredundant (and unrelated) signals in vision, hearing, and touch at the same time and how aging and concurrent task demands affect this ability. Background: Multimodal displays have been shown to improve multitasking and attention management; however, their potential limitations are not well understood. The majority of studies on multimodal information presentation have focused on the processing of only two concurrent and, most often, redundant cues by younger participants. Method: Two experiments were conducted in which younger and older adults detected and responded to a series of singles, pairs, and triplets of visual, auditory, and tactile cues in the absence (Experiment 1) and presence (Experiment 2) of an ongoing simulated driving task. Detection rates, response times, and driving task performance were measured. Results: Compared to younger participants, older adults showed longer response times and higher error rates in response to cues/cue combinations. Older participants often missed the tactile cue when three cues were combined. They sometimes falsely reported the presence of a visual cue when presented with a pair of auditory and tactile signals. Driving performance suffered most in the presence of cue triplets. Conclusion: People are more likely to miss information if more than two concurrent nonredundant signals are presented to different sensory channels. Application: The findings from this work help inform the design of multimodal displays and ensure their usefulness across different age groups and in various application domains.

  19. Motor (but not auditory) attention affects syntactic choice.

    PubMed

    Pokhoday, Mikhail; Scheepers, Christoph; Shtyrov, Yury; Myachykov, Andriy

    2018-01-01

    Understanding the determinants of syntactic choice in sentence production is a salient topic in psycholinguistics. Existing evidence suggests that syntactic choice results from an interplay between linguistic and non-linguistic factors, and a speaker's attention to the elements of a described event represents one such factor. Whereas multimodal accounts of attention suggest a role for different modalities in this process, existing studies examining attention effects in syntactic choice are primarily based on visual cueing paradigms. Hence, it remains unclear whether attentional effects on syntactic choice are limited to the visual modality or are indeed more general. This issue is addressed by the current study. Native English participants viewed and described line drawings of simple transitive events while their attention was directed to the location of the agent or the patient of the depicted event by means of either an auditory (monaural beep) or a motor (unilateral key press) lateral cue. Our results show an effect of cue location, with participants producing more passive-voice descriptions in the patient-cued conditions. Crucially, this cue location effect emerged in the motor-cue but not (or substantially less so) in the auditory-cue condition, as confirmed by a reliable interaction between cue location (agent vs. patient) and cue type (auditory vs. motor). Our data suggest that attentional effects on the speaker's syntactic choices are modality-specific and limited to the visual and motor, but not the auditory, domain.

  20. Neural coding underlying the cue preference for celestial orientation

    PubMed Central

    el Jundi, Basil; Warrant, Eric J.; Byrne, Marcus J.; Khaldy, Lana; Baird, Emily; Smolka, Jochen; Dacke, Marie

    2015-01-01

    Diurnal and nocturnal African dung beetles use celestial cues, such as the sun, the moon, and the polarization pattern, to roll dung balls along straight paths across the savanna. Although nocturnal beetles move in the same manner through the same environment as their diurnal relatives, they do so when light conditions are at least 1 million-fold dimmer. Here, we show, for the first time to our knowledge, that the celestial cue preference differs between nocturnal and diurnal beetles in a manner that reflects their contrasting visual ecologies. We also demonstrate how these cue preferences are reflected in the activity of compass neurons in the brain. At night, polarized skylight is the dominant orientation cue for nocturnal beetles. However, if we coerce them to roll during the day, they instead use a celestial body (the sun) as their primary orientation cue. Diurnal beetles, however, persist in using a celestial body for their compass, day or night. Compass neurons in the central complex of diurnal beetles are tuned only to the sun, whereas the same neurons in the nocturnal species switch exclusively to polarized light at lunar light intensities. Thus, these neurons encode the preferences for particular celestial cues and alter their weighting according to ambient light conditions. This flexible encoding of celestial cue preferences relative to the prevailing visual scenery provides a simple, yet effective, mechanism for enabling visual orientation at any light intensity. PMID:26305929

  1. Neural coding underlying the cue preference for celestial orientation.

    PubMed

    el Jundi, Basil; Warrant, Eric J; Byrne, Marcus J; Khaldy, Lana; Baird, Emily; Smolka, Jochen; Dacke, Marie

    2015-09-08

    Diurnal and nocturnal African dung beetles use celestial cues, such as the sun, the moon, and the polarization pattern, to roll dung balls along straight paths across the savanna. Although nocturnal beetles move in the same manner through the same environment as their diurnal relatives, they do so when light conditions are at least 1 million-fold dimmer. Here, we show, for the first time to our knowledge, that the celestial cue preference differs between nocturnal and diurnal beetles in a manner that reflects their contrasting visual ecologies. We also demonstrate how these cue preferences are reflected in the activity of compass neurons in the brain. At night, polarized skylight is the dominant orientation cue for nocturnal beetles. However, if we coerce them to roll during the day, they instead use a celestial body (the sun) as their primary orientation cue. Diurnal beetles, however, persist in using a celestial body for their compass, day or night. Compass neurons in the central complex of diurnal beetles are tuned only to the sun, whereas the same neurons in the nocturnal species switch exclusively to polarized light at lunar light intensities. Thus, these neurons encode the preferences for particular celestial cues and alter their weighting according to ambient light conditions. This flexible encoding of celestial cue preferences relative to the prevailing visual scenery provides a simple, yet effective, mechanism for enabling visual orientation at any light intensity.

  2. Cognitive processes facilitated by contextual cueing: evidence from event-related brain potentials.

    PubMed

    Schankin, Andrea; Schubö, Anna

    2009-05-01

    Finding a target in repeated search displays is faster than finding the same target in novel ones (contextual cueing). It is assumed that the visual context (the arrangement of the distracting objects) is used to guide attention efficiently to the target location. Alternatively, other factors, e.g., facilitation in early visual processing or in response selection, may play a role as well. In a contextual cueing experiment, participants' electrophysiological brain activity was recorded. Participants identified the target faster and more accurately in repeatedly presented displays. In this condition, the N2pc, a component reflecting the allocation of visual-spatial attention, was enhanced, indicating that attention was allocated more efficiently to those targets. However, response-related processes, reflected by the lateralized readiness potential (LRP), were also facilitated, indicating that guidance of attention cannot account for the entire contextual cueing benefit.

  3. Central and peripheral vision loss differentially affects contextual cueing in visual search.

    PubMed

    Geringswald, Franziska; Pollmann, Stefan

    2015-09-01

    Visual search for targets in repeated displays is more efficient than search for the same targets in random distractor layouts. Previous work has shown that this contextual cueing is severely impaired under central vision loss. Here, we investigated whether central vision loss, simulated with gaze-contingent displays, prevents the incidental learning of contextual cues or the expression of learning, that is, the guidance of search by learned target-distractor configurations. Visual search with a central scotoma reduced contextual cueing both with respect to search times and gaze parameters. However, when the scotoma was subsequently removed, contextual cueing was observed in a comparable magnitude as for controls who had searched without scotoma simulation throughout the experiment. This indicated that search with a central scotoma did not prevent incidental context learning, but interfered with search guidance by learned contexts. We discuss the role of visuospatial working memory load as source of this interference. In contrast to central vision loss, peripheral vision loss was expected to prevent spatial configuration learning itself, because the restricted search window did not allow the integration of invariant local configurations with the global display layout. This expectation was confirmed in that visual search with a simulated peripheral scotoma eliminated contextual cueing not only in the initial learning phase with scotoma, but also in the subsequent test phase without scotoma. (c) 2015 APA, all rights reserved.

  4. Modeling the Development of Audiovisual Cue Integration in Speech Perception

    PubMed Central

    Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.

    2017-01-01

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558

  5. Modeling the Development of Audiovisual Cue Integration in Speech Perception.

    PubMed

    Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C

    2017-03-21

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
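
    The GMM-based distributional learning described in the two records above can be sketched in a few lines. The toy below fits a two-component Gaussian mixture to bivariate auditory-visual tokens so that phonological categories, and implicit cue weights, emerge from distributional statistics alone; the cue dimensions and all numbers are invented for illustration and are not the authors' stimuli.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Unsupervised distributional learning over audiovisual cue pairs:
        # each token is (auditory cue, visual cue), e.g., voice onset time
        # and degree of lip closure, and two categories are learned.
        rng = np.random.default_rng(1)
        cat_a = rng.normal(loc=[10.0, 0.2], scale=[5.0, 0.1], size=(500, 2))
        cat_b = rng.normal(loc=[60.0, 0.8], scale=[10.0, 0.1], size=(500, 2))
        tokens = np.vstack([cat_a, cat_b])

        gmm = GaussianMixture(n_components=2, covariance_type="diag").fit(tokens)

        # The fitted parameters act as learned cue weights: a cue dimension
        # with smaller within-category variance is the more reliable cue.
        print("category means:\n", gmm.means_)
        print("within-category variances:\n", gmm.covariances_)

    Mismatched audiovisual input, as in the adult perception simulations, could then be modeled by evaluating gmm.predict_proba on tokens whose auditory and visual values are drawn from different categories.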

  6. The Role of Color in Search Templates for Real-world Target Objects.

    PubMed

    Nako, Rebecca; Smith, Tim J; Eimer, Martin

    2016-11-01

    During visual search, target representations (attentional templates) control the allocation of attention to template-matching objects. The activation of new attentional templates can be prompted by verbal or pictorial target specifications. We measured the N2pc component of the ERP as a temporal marker of attentional target selection to determine the role of color signals in search templates for real-world search target objects that are set up in response to word or picture cues. On each trial run, a word cue (e.g., "apple") was followed by three search displays that contained the cued target object among three distractors. The selection of the first target was based on the word cue only, whereas selection of the two subsequent targets could be controlled by templates set up after the first visual presentation of the target (picture cue). In different trial runs, search displays either contained objects in their natural colors or monochromatic objects. These two display types were presented in different blocks (Experiment 1) or in random order within each block (Experiment 2). RTs were faster, and target N2pc components emerged earlier for the second and third display of each trial run relative to the first display, demonstrating that pictures are more effective than word cues in guiding search. N2pc components were triggered more rapidly for targets in the second and third display in trial runs with colored displays. This demonstrates that when visual target attributes are fully specified by picture cues, the additional presence of color signals in target templates facilitates the speed with which attention is allocated to template-matching objects. No such selection benefits for colored targets were found when search templates were set up in response to word cues. Experiment 2 showed that color templates activated by word cues can even impair the attentional selection of noncolored targets. Results provide new insights into the status of color during the guidance of visual search for real-world target objects. Color is a powerful guiding feature when the precise visual properties of these objects are known but seems to be less important when search targets are specified by word cues.

  7. Lower region: a new cue for figure-ground assignment.

    PubMed

    Vecera, Shaun P; Vogel, Edward K; Woodman, Geoffrey F

    2002-06-01

    Figure-ground assignment is an important visual process; humans recognize, attend to, and act on figures, not backgrounds. There are many visual cues for figure-ground assignment. A new cue to figure-ground assignment, called lower region, is presented: Regions in the lower portion of a stimulus array appear more figurelike than regions in the upper portion of the display. This phenomenon was explored, and it was demonstrated that the lower-region preference is not influenced by contrast, eye movements, or voluntary spatial attention. It was found that the lower region is defined relative to the stimulus display, linking the lower-region preference to pictorial depth perception cues. The results are discussed in terms of the environmental regularities that this new figure-ground cue may reflect.

  8. The Gaze-Cueing Effect in the United States and Japan: Influence of Cultural Differences in Cognitive Strategies on Control of Attention

    PubMed Central

    Takao, Saki; Yamani, Yusuke; Ariga, Atsunori

    2018-01-01

    The direction of gaze automatically and exogenously guides visual spatial attention, a phenomenon termed the gaze-cueing effect. Although this effect arises when the duration of stimulus onset asynchrony (SOA) between a non-predictive gaze cue and the target is relatively long, no empirical research has examined the factors underlying this extended cueing effect. Two experiments compared the gaze-cueing effect at longer SOAs (700 ms) in Japanese and American participants. Cross-cultural studies on cognition suggest that Westerners tend to use a context-independent analytical strategy to process visual environments, whereas Asians use a context-dependent holistic approach. We hypothesized that Japanese participants would not demonstrate the gaze-cueing effect at longer SOAs because they are more sensitive to contextual information, such as the knowledge that the direction of a gaze is not predictive. Furthermore, we hypothesized that American participants would demonstrate the gaze-cueing effect at the long SOAs because they tend to follow gaze direction whether it is predictive or not. In Experiment 1, American participants demonstrated the gaze-cueing effect at the long SOA, indicating that their attention was driven by the central non-predictive gaze direction regardless of the SOAs. In Experiment 2, Japanese participants demonstrated no gaze-cueing effect at the long SOA, suggesting that the Japanese participants exercised voluntary control of their attention, which inhibited the gaze-cueing effect with the long SOA. Our findings suggest that the control of visual spatial attention elicited by social stimuli systematically differs between American and Japanese individuals. PMID:29379457

  9. The Gaze-Cueing Effect in the United States and Japan: Influence of Cultural Differences in Cognitive Strategies on Control of Attention.

    PubMed

    Takao, Saki; Yamani, Yusuke; Ariga, Atsunori

    2017-01-01

    The direction of gaze automatically and exogenously guides visual spatial attention, a phenomenon termed the gaze-cueing effect. Although this effect arises when the duration of stimulus onset asynchrony (SOA) between a non-predictive gaze cue and the target is relatively long, no empirical research has examined the factors underlying this extended cueing effect. Two experiments compared the gaze-cueing effect at longer SOAs (700 ms) in Japanese and American participants. Cross-cultural studies on cognition suggest that Westerners tend to use a context-independent analytical strategy to process visual environments, whereas Asians use a context-dependent holistic approach. We hypothesized that Japanese participants would not demonstrate the gaze-cueing effect at longer SOAs because they are more sensitive to contextual information, such as the knowledge that the direction of a gaze is not predictive. Furthermore, we hypothesized that American participants would demonstrate the gaze-cueing effect at the long SOAs because they tend to follow gaze direction whether it is predictive or not. In Experiment 1, American participants demonstrated the gaze-cueing effect at the long SOA, indicating that their attention was driven by the central non-predictive gaze direction regardless of the SOAs. In Experiment 2, Japanese participants demonstrated no gaze-cueing effect at the long SOA, suggesting that the Japanese participants exercised voluntary control of their attention, which inhibited the gaze-cueing effect with the long SOA. Our findings suggest that the control of visual spatial attention elicited by social stimuli systematically differs between American and Japanese individuals.

  10. Sensory modality of smoking cues modulates neural cue reactivity.

    PubMed

    Yalachkov, Yavor; Kaiser, Jochen; Görres, Andreas; Seehaus, Arne; Naumer, Marcus J

    2013-01-01

    Behavioral experiments have demonstrated that the sensory modality of presentation modulates drug cue reactivity. The present study on nicotine addiction tested whether neural responses to smoking cues are modulated by the sensory modality of stimulus presentation. We measured brain activation using functional magnetic resonance imaging (fMRI) in 15 smokers and 15 nonsmokers while they viewed images of smoking paraphernalia and control objects and while they touched the same objects without seeing them. Haptically presented smoking-related stimuli induced more pronounced neural cue reactivity than visual cues in the left dorsal striatum in smokers compared to nonsmokers. The severity of nicotine dependence correlated positively with the preference for haptically explored smoking cues in the left inferior parietal lobule/somatosensory cortex, right fusiform gyrus/inferior temporal cortex/cerebellum, hippocampus/parahippocampal gyrus, posterior cingulate cortex, and supplementary motor area. These observations are in line with the hypothesized role of the dorsal striatum for the expression of drug habits and the well-established concept of drug-related automatized schemata, since haptic perception is more closely linked to the corresponding object-specific action pattern than visual perception. Moreover, our findings demonstrate that with the growing severity of nicotine dependence, brain regions involved in object perception, memory, self-processing, and motor control exhibit an increasing preference for haptic over visual smoking cues. This difference was not found for control stimuli. Considering the sensory modality of the presented cues could serve to develop more reliable fMRI-specific biomarkers, more ecologically valid experimental designs, and more effective cue-exposure therapies of addiction.

  11. Neural Responses to Visual Food Cues According to Weight Status: A Systematic Review of Functional Magnetic Resonance Imaging Studies

    PubMed Central

    Pursey, Kirrilly M.; Stanwell, Peter; Callister, Robert J.; Brain, Katherine; Collins, Clare E.; Burrows, Tracy L.

    2014-01-01

    Emerging evidence from recent neuroimaging studies suggests that specific food-related behaviors contribute to the development of obesity. The aim of this review was to report the neural responses to visual food cues, as assessed by functional magnetic resonance imaging (fMRI), in humans of differing weight status. Published studies to 2014 were retrieved and included if they used visual food cues, studied humans >18 years old, reported weight status, and included fMRI outcomes. Sixty studies were identified that investigated the neural responses of healthy weight participants (n = 26), healthy weight compared to obese participants (n = 17), and weight-loss interventions (n = 12). High-calorie food images were used in the majority of studies (n = 36); however, image selection justification was provided in only 19 studies. Obese individuals had increased activation of reward-related brain areas including the insula and orbitofrontal cortex in response to visual food cues compared to healthy weight individuals, and this was particularly evident in response to energy dense cues. Additionally, obese individuals were more responsive to food images when satiated. Meta-analysis of changes in neural activation post-weight loss revealed small areas of convergence across studies in brain areas related to emotion, memory, and learning, including the cingulate gyrus, lentiform nucleus, and precuneus. Differential activation patterns to visual food cues were observed between obese, healthy weight, and weight-loss populations. Future studies require standardization of nutrition variables and fMRI outcomes to enable more direct comparisons between studies. PMID:25988110

  12. Neural responses to visual food cues according to weight status: a systematic review of functional magnetic resonance imaging studies.

    PubMed

    Pursey, Kirrilly M; Stanwell, Peter; Callister, Robert J; Brain, Katherine; Collins, Clare E; Burrows, Tracy L

    2014-01-01

    Emerging evidence from recent neuroimaging studies suggests that specific food-related behaviors contribute to the development of obesity. The aim of this review was to report the neural responses to visual food cues, as assessed by functional magnetic resonance imaging (fMRI), in humans of differing weight status. Published studies to 2014 were retrieved and included if they used visual food cues, studied humans >18 years old, reported weight status, and included fMRI outcomes. Sixty studies were identified that investigated the neural responses of healthy weight participants (n = 26), healthy weight compared to obese participants (n = 17), and weight-loss interventions (n = 12). High-calorie food images were used in the majority of studies (n = 36); however, image selection justification was provided in only 19 studies. Obese individuals had increased activation of reward-related brain areas including the insula and orbitofrontal cortex in response to visual food cues compared to healthy weight individuals, and this was particularly evident in response to energy dense cues. Additionally, obese individuals were more responsive to food images when satiated. Meta-analysis of changes in neural activation post-weight loss revealed small areas of convergence across studies in brain areas related to emotion, memory, and learning, including the cingulate gyrus, lentiform nucleus, and precuneus. Differential activation patterns to visual food cues were observed between obese, healthy weight, and weight-loss populations. Future studies require standardization of nutrition variables and fMRI outcomes to enable more direct comparisons between studies.

  13. Visual-vestibular cue integration for heading perception: applications of optimal cue integration theory.

    PubMed

    Fetsch, Christopher R; Deangelis, Gregory C; Angelaki, Dora E

    2010-05-01

    The perception of self-motion is crucial for navigation, spatial orientation and motor control. In particular, estimation of one's direction of translation, or heading, relies heavily on multisensory integration in most natural situations. Visual and nonvisual (e.g., vestibular) information can be used to judge heading, but each modality alone is often insufficient for accurate performance. It is not surprising, then, that visual and vestibular signals converge frequently in the nervous system, and that these signals interact in powerful ways at the level of behavior and perception. Early behavioral studies of visual-vestibular interactions consisted mainly of descriptive accounts of perceptual illusions and qualitative estimation tasks, often with conflicting results. In contrast, cue integration research in other modalities has benefited from the application of rigorous psychophysical techniques, guided by normative models that rest on the foundation of ideal-observer analysis and Bayesian decision theory. Here we review recent experiments that have attempted to harness these so-called optimal cue integration models for the study of self-motion perception. Some of these studies used nonhuman primate subjects, enabling direct comparisons between behavioral performance and simultaneously recorded neuronal activity. The results indicate that humans and monkeys can integrate visual and vestibular heading cues in a manner consistent with optimal integration theory, and that single neurons in the dorsal medial superior temporal area show striking correlates of the behavioral effects. This line of research and other applications of normative cue combination models should continue to shed light on mechanisms of self-motion perception and the neuronal basis of multisensory integration.
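
    The optimal cue integration rule referenced here has a compact closed form: each cue's estimate is weighted by its reliability (inverse variance), and the fused estimate is more reliable than either cue alone. The following sketch uses invented numbers purely to illustrate the arithmetic.

        # Reliability-weighted (maximum-likelihood) fusion of two heading cues.
        def fuse(est_vis, var_vis, est_vest, var_vest):
            w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_vest)
            combined = w_vis * est_vis + (1 - w_vis) * est_vest
            combined_var = 1 / (1 / var_vis + 1 / var_vest)
            return combined, combined_var

        # Visual heading 10 deg (sd 2 deg); vestibular heading 14 deg (sd 4 deg).
        est, var = fuse(10.0, 2.0 ** 2, 14.0, 4.0 ** 2)
        print(f"combined heading: {est:.1f} deg, sd: {var ** 0.5:.2f} deg")
        # -> 10.8 deg with sd 1.79 deg, better than either cue alone

    The behavioral signature of optimal integration is exactly this variance reduction in the bimodal condition relative to the best single-cue condition, which is what the reviewed human and monkey experiments test for.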

  14. Cross-modal links among vision, audition, and touch in complex environments.

    PubMed

    Ferris, Thomas K; Sarter, Nadine B

    2008-02-01

    This study sought to determine whether performance effects of cross-modal spatial links that were observed in earlier laboratory studies scale to more complex environments and need to be considered in multimodal interface design. It also revisits the unresolved issue of cross-modal cuing asymmetries. Previous laboratory studies employing simple cues, tasks, and/or targets have demonstrated that the efficiency of processing visual, auditory, and tactile stimuli is affected by the modality, lateralization, and timing of surrounding cues. Very few studies have investigated these cross-modal constraints in the context of more complex environments to determine whether they scale and how complexity affects the nature of cross-modal cuing asymmetries. A microworld simulation of battlefield operations with a complex task set and meaningful visual, auditory, and tactile stimuli was used to investigate cuing effects for all cross-modal pairings. Significant asymmetric performance effects of cross-modal spatial links were observed. Auditory cues shortened response latencies for collocated visual targets but visual cues did not do the same for collocated auditory targets. Responses to contralateral (rather than ipsilateral) targets were faster for tactually cued auditory targets and each visual-tactile cue-target combination, suggesting an inhibition-of-return effect. The spatial relationships between multimodal cues and targets significantly affect target response times in complex environments. The performance effects of cross-modal links and the observed cross-modal cuing asymmetries need to be examined in more detail and considered in future interface design. The findings from this study have implications for the design of multimodal and adaptive interfaces and for supporting attention management in complex, data-rich domains.

  15. Short-term visual memory for location in depth: A U-shaped function of time.

    PubMed

    Reeves, Adam; Lei, Quan

    2017-10-01

    Short-term visual memory was studied by displaying arrays of four or five numerals, each numeral in its own depth plane, followed after various delays by an arrow cue shown in one of the depth planes. Subjects reported the numeral at the depth cued by the arrow. Accuracy fell with increasing cue delay for the first 500 ms or so, and then recovered almost fully. This dipping pattern contrasts with the usual iconic decay observed for memory traces. The dip occurred with or without a verbal or color-shape retention load on working memory. In contrast, accuracy did not change with delay when a tonal cue replaced the arrow cue. We hypothesized that information concerning the depths of the numerals decays over time in sensory memory, but that cued recall is aided later on by transfer to a visual memory specialized for depth. This transfer is sufficiently rapid with a tonal cue to compensate for the sensory decay, but it is slowed by the need to tag the arrow cue's depth relative to the depths of the numerals, exposing a dip when sensation has decayed and transfer is not yet complete. A model with a fixed rate of sensory decay and varied transfer rates across individuals captures the dip as well as the cue modality effect.
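
    The two-store account proposed above (sensory decay plus slower transfer into a durable depth-specific store) naturally produces a U-shaped accuracy curve. The toy model below is our own illustration with invented parameter values; it reproduces the qualitative dip-and-recovery pattern, not the authors' fitted model.

        import numpy as np

        # Accuracy = floor + decaying sensory trace + slowly accumulating
        # durable trace; the dip appears where the first has faded but the
        # second is not yet complete.
        def accuracy(t_ms, decay_tau=300.0, transfer_tau=700.0,
                     sensory_gain=0.55, durable_gain=0.5, floor=0.25):
            sensory = sensory_gain * np.exp(-t_ms / decay_tau)
            durable = durable_gain * (1 - np.exp(-t_ms / transfer_tau))
            return floor + sensory + durable

        for t in (0, 250, 500, 1000, 2000):
            print(f"{t:>5} ms: {accuracy(t):.2f}")
        # 0 ms: 0.80, 250 ms: 0.64, 500 ms: 0.61, 1000 ms: 0.65, 2000 ms: 0.72

    A much shorter transfer_tau, as hypothesized for the tonal cue, flattens the dip, because the durable trace compensates before the sensory trace has fully decayed.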

  16. Eye Contact Is Crucial for Referential Communication in Pet Dogs.

    PubMed

    Savalli, Carine; Resende, Briseida; Gaunet, Florence

    2016-01-01

    Dogs discriminate human direction of attention cues, such as body, gaze, head and eye orientation, in several circumstances. Eye contact particularly seems to provide information on human readiness to communicate; when there is such an ostensive cue, dogs tend to follow human communicative gestures more often. However, little is known about how such cues influence the production of communicative signals (e.g. gaze alternation and sustained gaze) in dogs. In the current study, in order to get an unreachable food, dogs needed to communicate with their owners in several conditions that differed according to the direction of owners' visual cues, namely gaze, head, eyes, and availability to make eye contact. Results provided evidence that pet dogs did not rely on details of owners' direction of visual attention. Instead, they relied on the whole combination of visual cues and especially on the owners' availability to make eye contact. Dogs increased visual communicative behaviors when they established eye contact with their owners, a different strategy from that of apes and baboons, which intensify vocalizations and gestures when the human is not visually attending. The difference in strategy is possibly due to their distinct status: domesticated vs. wild. Results are discussed taking into account the ecological relevance of the task, since pet dogs live in a human environment and face similar situations on a daily basis during their lives.

  17. Social Beliefs and Visual Attention: How the Social Relevance of a Cue Influences Spatial Orienting.

    PubMed

    Gobel, Matthias S; Tufft, Miles R A; Richardson, Daniel C

    2018-05-01

    We are highly tuned to each other's visual attention. Perceiving the eye or hand movements of another person can influence the timing of a saccade or the reach of our own. However, the explanation for such spatial orienting in interpersonal contexts remains disputed. Is it due to the social appearance of the cue (a hand or an eye) or due to its social relevance (a cue that is connected to another person with attentional and intentional states)? We developed an interpersonal version of the Posner spatial cueing paradigm. Participants saw a cue and detected a target at the same or a different location, while interacting with an unseen partner. Participants were led to believe that the cue was either connected to the gaze location of their partner or was generated randomly by a computer (Experiment 1), and that their partner had higher or lower social rank while engaged in the same or a different task (Experiment 2). We found that spatial cue-target compatibility effects were greater when the cue related to a partner's gaze. This effect was amplified by the partner's social rank, but only when participants believed their partner was engaged in the same task. Taken together, this is strong evidence in support of the idea that spatial orienting is interpersonally attuned to the social relevance of the cue (whether the cue is connected to another person, who this person is, and what this person is doing) and does not exclusively rely on the social appearance of the cue. Visual attention is not only guided by the physical salience of one's environment but also by the mental representation of its social relevance. © 2017 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.

  18. Magnitude and duration of cue-induced craving for marijuana in volunteers with cannabis use disorder

    PubMed Central

    Lundahl, Leslie H.; Greenwald, Mark K.

    2016-01-01

    Aims: Evaluate magnitude and duration of subjective and physiologic responses to neutral and marijuana (MJ)-related cues in cannabis dependent volunteers. Methods: 33 volunteers (17 male) who met DSM-IV criteria for Cannabis Abuse or Dependence were exposed to neutral (first) then MJ-related visual, auditory, olfactory and tactile cues. Mood, drug craving and physiology were assessed at baseline, post-neutral, post-MJ and 15-min post MJ cue exposure to determine magnitude of cue responses. For a subset of participants (n=15; 9 male), measures of craving and physiology were collected also at 30-, 90-, and 150-min post-MJ cue to examine duration of cue effects. Results: In cue-response magnitude analyses, visual analog scale (VAS) items craving for, urge to use, and desire to smoke MJ, Total and Compulsivity subscale scores of the Marijuana Craving Questionnaire, anxiety ratings, and diastolic blood pressure (BP) were significantly elevated following MJ vs. neutral cue exposure. In cue-response duration analyses, desire and urge to use MJ remained significantly elevated at 30-, 90- and 150-min post MJ-cue exposure, relative to baseline and neutral cues. Conclusions: Presentation of polysensory MJ cues increased MJ craving, anxiety and diastolic BP relative to baseline and neutral cues. MJ craving remained elevated up to 150-min after MJ cue presentation. This finding confirms that carry-over effects from drug cue presentation must be considered in cue reactivity studies. PMID:27436749

  19. Magnitude and duration of cue-induced craving for marijuana in volunteers with cannabis use disorder.

    PubMed

    Lundahl, Leslie H; Greenwald, Mark K

    2016-09-01

    The aim was to evaluate the magnitude and duration of subjective and physiologic responses to neutral and marijuana (MJ)-related cues in cannabis-dependent volunteers. 33 volunteers (17 male) who met DSM-IV criteria for Cannabis Abuse or Dependence were exposed to neutral (first) and then MJ-related visual, auditory, olfactory and tactile cues. Mood, drug craving and physiology were assessed at baseline, post-neutral, post-MJ and 15 min post-MJ cue exposure to determine the magnitude of cue responses. For a subset of participants (n=15; 9 male), measures of craving and physiology were also collected at 30, 90, and 150 min post-MJ cue to examine the duration of cue effects. In cue-response magnitude analyses, the visual analog scale (VAS) items craving for, urge to use, and desire to smoke MJ, the Total and Compulsivity subscale scores of the Marijuana Craving Questionnaire, anxiety ratings, and diastolic blood pressure (BP) were significantly elevated following MJ vs. neutral cue exposure. In cue-response duration analyses, desire and urge to use MJ remained significantly elevated at 30, 90 and 150 min post-MJ cue exposure, relative to baseline and neutral cues. Presentation of polysensory MJ cues increased MJ craving, anxiety and diastolic BP relative to baseline and neutral cues. MJ craving remained elevated up to 150 min after MJ cue presentation. This finding confirms that carry-over effects from drug cue presentation must be considered in cue reactivity studies. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  20. Freezing of Gait in Parkinson's Disease: An Overload Problem?

    PubMed

    Beck, Eric N; Ehgoetz Martens, Kaylena A; Almeida, Quincy J

    2015-01-01

    Freezing of gait (FOG) is arguably the most severe symptom associated with Parkinson's disease (PD), and often occurs while performing dual tasks or approaching narrowed and cluttered spaces. While it is well known that visual cues alleviate FOG, it is not clear if this effect may be the result of cognitive or sensorimotor mechanisms. Nevertheless, the role of vision may be a critical link that might allow us to disentangle this question. Gaze behaviour has yet to be carefully investigated while freezers approach narrow spaces, thus the overall objective of this study was to explore the interaction between cognitive and sensory-perceptual influences on FOG. In experiment #1, if cognitive load is the underlying factor leading to FOG, then one might expect that a dual-task would elicit FOG episodes even in the presence of visual cues, since the load on attention would interfere with utilization of visual cues. Alternatively, if visual cues alleviate gait despite performance of a dual-task, then it may be more probable that sensory mechanisms are at play. In complement to this, the aim of experiment #2 was to further challenge the sensory systems, by removing vision of the lower limbs and thereby forcing participants to rely on other forms of sensory feedback rather than vision while walking toward the narrow space. Spatiotemporal aspects of gait, percentage of gaze fixation frequency and duration, as well as skin conductance levels were measured in freezers and non-freezers across both experiments. Results from experiment #1 indicated that although freezers and non-freezers both walked with worse gait while performing the dual-task, in freezers, gait was relieved by visual cues regardless of whether the cognitive demands of the dual-task were present. At baseline and while dual-tasking, freezers demonstrated a gaze behaviour that neglected the doorway and instead focused primarily on the pathway, a strategy that non-freezers adopted only when performing the dual-task. Interestingly, with the combination of visual cues and dual-task, freezers increased the frequency and duration of fixations toward the doorway, compared to non-freezers. These results suggest that although increasing demand on attention does significantly deteriorate gait in freezers, an increase in cognitive demand is not exclusively responsible for freezing (since visual cues were able to overcome any interference elicited by the dual-task). When vision of the lower limbs was removed in experiment #2, only the freezers' gait was affected. However, when visual cues were present, freezers' gait improved regardless of the dual-task. This gait behaviour was accompanied by a greater amount of time spent looking at the visual cues irrespective of the dual-task. Since removing vision of the lower limbs hindered gait even under low attentional demand, restricted sensory feedback may be an important factor in the mechanisms underlying FOG.

  1. Freezing of Gait in Parkinson’s Disease: An Overload Problem?

    PubMed Central

    Beck, Eric N.; Ehgoetz Martens, Kaylena A.; Almeida, Quincy J.

    2015-01-01

    Freezing of gait (FOG) is arguably the most severe symptom associated with Parkinson’s disease (PD), and often occurs while performing dual tasks or approaching narrowed and cluttered spaces. While it is well known that visual cues alleviate FOG, it is not clear if this effect may be the result of cognitive or sensorimotor mechanisms. Nevertheless, the role of vision may be a critical link that might allow us to disentangle this question. Gaze behaviour has yet to be carefully investigated while freezers approach narrow spaces, thus the overall objective of this study was to explore the interaction between cognitive and sensory-perceptual influences on FOG. In experiment #1, if cognitive load is the underlying factor leading to FOG, then one might expect that a dual-task would elicit FOG episodes even in the presence of visual cues, since the load on attention would interfere with utilization of visual cues. Alternatively, if visual cues alleviate gait despite performance of a dual-task, then it may be more probable that sensory mechanisms are at play. In complement to this, the aim of experiment #2 was to further challenge the sensory systems, by removing vision of the lower limbs and thereby forcing participants to rely on other forms of sensory feedback rather than vision while walking toward the narrow space. Spatiotemporal aspects of gait, percentage of gaze fixation frequency and duration, as well as skin conductance levels were measured in freezers and non-freezers across both experiments. Results from experiment #1 indicated that although freezers and non-freezers both walked with worse gait while performing the dual-task, in freezers, gait was relieved by visual cues regardless of whether the cognitive demands of the dual-task were present. At baseline and while dual-tasking, freezers demonstrated a gaze behaviour that neglected the doorway and instead focused primarily on the pathway, a strategy that non-freezers adopted only when performing the dual-task. Interestingly, with the combination of visual cues and dual-task, freezers increased the frequency and duration of fixations toward the doorway, compared to non-freezers. These results suggest that although increasing demand on attention does significantly deteriorate gait in freezers, an increase in cognitive demand is not exclusively responsible for freezing (since visual cues were able to overcome any interference elicited by the dual-task). When vision of the lower limbs was removed in experiment #2, only the freezers’ gait was affected. However, when visual cues were present, freezers’ gait improved regardless of the dual-task. This gait behaviour was accompanied by a greater amount of time spent looking at the visual cues irrespective of the dual-task. Since removing vision of the lower limbs hindered gait even under low attentional demand, restricted sensory feedback may be an important factor in the mechanisms underlying FOG. PMID:26678262

  2. Relevance of visual cues for orientation at familiar sites by homing pigeons: an experiment in a circular arena.

    PubMed Central

    Gagliardo, A.; Odetti, F.; Ioalè, P.

    2001-01-01

    Whether pigeons use visual landmarks for orientation from familiar locations has been a subject of debate. By recording the directional choices of both anosmic and control pigeons while exiting from a circular arena we were able to assess the relevance of olfactory and visual cues for orientation from familiar sites. When the birds could see the surroundings, both anosmic and control pigeons were homeward oriented. When the view of the landscape was prevented by screens that surrounded the arena, the control pigeons exited from the arena approximately in the home direction, while the anosmic pigeons' distribution was not different from random. Our data suggest that olfactory and visual cues play a critical, but interchangeable, role for orientation at familiar sites. PMID:11571054
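    Directional choices of this kind are usually summarized with circular statistics: a mean resultant vector and a Rayleigh test against uniformity. The sketch below is a generic illustration (an assumed analysis approach, not the authors' code) on hypothetical exit bearings:

    ```python
    # Sketch: mean vector and Rayleigh test for circular (bearing) data.
    # Bearings are hypothetical exit directions in degrees.
    import math

    bearings_deg = [10, 355, 20, 5, 340, 15, 30, 0]

    n = len(bearings_deg)
    x = sum(math.cos(math.radians(b)) for b in bearings_deg) / n
    y = sum(math.sin(math.radians(b)) for b in bearings_deg) / n
    R = math.hypot(x, y)                     # mean resultant length, 0..1
    mean_dir = math.degrees(math.atan2(y, x)) % 360

    # Wilkie (1983) approximation to the Rayleigh test p-value
    Rn = n * R
    p = math.exp(math.sqrt(1 + 4 * n + 4 * (n * n - Rn * Rn)) - (1 + 2 * n))
    print(f"mean direction = {mean_dir:.1f} deg, R = {R:.2f}, Rayleigh p ~ {p:.4f}")
    ```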

  3. Alcohol-cue exposure effects on craving and attentional bias in underage college-student drinkers.

    PubMed

    Ramirez, Jason J; Monti, Peter M; Colwill, Ruth M

    2015-06-01

    The ability of alcohol-cue exposure to elicit craving has been well documented, and numerous theoretical models assert that craving is a clinically significant construct central to the motivation and maintenance of alcohol-seeking behavior. Furthermore, some theories propose a relationship between craving and attention, such that cue-induced increases in craving bias attention toward alcohol cues, which, in turn, perpetuates craving. This study examined the extent to which alcohol cues induce craving and bias attention toward alcohol cues among underage college-student drinkers. We designed within-subject cue-reactivity and visual-probe tasks to assess in vivo alcohol-cue exposure effects on craving and attentional bias in 39 undergraduate college drinkers (ages 18-20). Participants expressed greater subjective craving to drink alcohol following in vivo cue exposure to a commonly consumed beer compared with water exposure. Furthermore, following alcohol-cue exposure, participants exhibited greater attentional biases toward alcohol cues as measured by a visual-probe task. In addition to the cue-exposure effects on craving and attentional bias, within-subject differences in craving across sessions marginally predicted within-subject differences in attentional bias. Implications for both theory and practice are discussed. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
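    Attentional bias in a visual-probe task is typically scored as the RT difference between probes replacing neutral images and probes replacing alcohol images. A minimal sketch with hypothetical trial data:

    ```python
    # Sketch: visual-probe attentional-bias score. Positive values mean
    # faster RTs when the probe replaced an alcohol image, i.e., attention
    # was biased toward alcohol cues. Trial data are hypothetical.
    from statistics import mean

    trials = [
        # (probe_replaces, rt_ms)
        ("alcohol", 402), ("neutral", 431), ("alcohol", 395), ("neutral", 420),
    ]

    rt_alcohol = mean(rt for loc, rt in trials if loc == "alcohol")
    rt_neutral = mean(rt for loc, rt in trials if loc == "neutral")
    print(f"attentional bias = {rt_neutral - rt_alcohol:.1f} ms")
    ```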

  4. Spatial constraints of stereopsis in video displays

    NASA Technical Reports Server (NTRS)

    Schor, Clifton

    1989-01-01

    Recent developments in video technology, such as liquid crystal displays and shutters, have made it feasible to incorporate stereoscopic depth into the 3-D representations on 2-D displays. However, depth has already been vividly portrayed in video displays without stereopsis using the classical artists' depth cues described by Helmholtz (1866) and the dynamic depth cues described in detail by Ittelson (1952). Successful static depth cues include overlap, size, linear perspective, texture gradients, and shading. Effective dynamic cues include looming (Regan and Beverly, 1979) and motion parallax (Rogers and Graham, 1982). Stereoscopic depth is superior to the monocular distance cues under certain circumstances. It is most useful for portraying depth intervals corresponding to disparities as small as 5 to 10 arcsec. For this reason it is extremely useful in user-video interactions such as telepresence: objects can be manipulated in 3-D space, for example, while the person controlling the operations views a virtual image of the manipulated object on a remote 2-D video display. Stereopsis also provides structure and form information in camouflaged surfaces such as tree foliage. Motion parallax also reveals form; however, without other monocular cues such as overlap, motion parallax can yield an ambiguous percept. For example, a turning sphere portrayed as solid by parallax can appear to rotate either leftward or rightward, but only one direction of rotation is perceived when stereo-depth is included. If the scene is static, then stereopsis is the principal cue for revealing camouflaged surface structure. Finally, dynamic stereopsis provides information about the direction of motion in depth (Regan and Beverly, 1979). Clearly there are many spatial constraints, including spatial frequency content, retinal eccentricity, exposure duration, target spacing, and disparity gradient, which, when properly adjusted, can greatly enhance stereo-depth in video displays.
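    The 5-10 arcsec figure is a disparity; the linear depth interval it corresponds to follows from the standard small-angle relation dz ~ eta * z^2 / I, where eta is the disparity in radians, z the viewing distance, and I the interocular separation. A worked example with illustrative values (the distances below are assumptions, not from the source):

    ```python
    # Worked example (values assumed, not from the source): converting a
    # disparity threshold into a linear depth interval via dz ~ eta * z**2 / I.
    import math

    eta = math.radians(10 / 3600)   # 10 arcsec disparity, in radians
    z = 0.57                        # viewing distance in metres (assumed)
    I = 0.065                       # interocular separation in metres (typical)

    dz = eta * z ** 2 / I
    print(f"10 arcsec at {z} m viewing distance ~ {dz * 1000:.2f} mm of depth")
    ```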

  5. A bilateral advantage for maintaining objects in visual short term memory.

    PubMed

    Holt, Jessica L; Delvenne, Jean-François

    2015-01-01

    Research has shown that attentional pre-cues can subsequently influence the transfer of information into visual short term memory (VSTM) (Schmidt, B., Vogel, E., Woodman, G., & Luck, S. (2002). Voluntary and automatic attentional control of visual working memory. Perception & Psychophysics, 64(5), 754-763). However, studies also suggest that those effects are constrained by the hemifield alignment of the pre-cues (Holt, J. L., & Delvenne, J.-F. (2014). A bilateral advantage in controlling access to visual short-term memory. Experimental Psychology, 61(2), 127-133), revealing better recall when distributed across hemifields relative to within a single hemifield (otherwise known as a bilateral field advantage). By manipulating the duration of the retention interval in a colour change detection task (1s, 3s), we investigated whether selective pre-cues can also influence how information is later maintained in VSTM. The results revealed that the pre-cues influenced the maintenance of the colours in VSTM, promoting consistent performance across retention intervals (Experiments 1 & 4). However, those effects were only shown when the pre-cues were directed to stimuli displayed across hemifields relative to stimuli within a single hemifield. Importantly, the results were not replicated when participants were required to memorise colours (Experiment 2) or locations (Experiment 3) in the absence of spatial pre-cues. Those findings strongly suggest that attentional pre-cues have a strong influence on both the transfer of information in VSTM and its subsequent maintenance, allowing bilateral items to better survive decay. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Visual Cues Given by Humans Are Not Sufficient for Asian Elephants (Elephas maximus) to Find Hidden Food

    PubMed Central

    Plotnik, Joshua M.; Pokorny, Jennifer J.; Keratimanochaya, Titiporn; Webb, Christine; Beronja, Hana F.; Hennessy, Alice; Hill, James; Hill, Virginia J.; Kiss, Rebecca; Maguire, Caitlin; Melville, Beckett L.; Morrison, Violet M. B.; Seecoomar, Dannah; Singer, Benjamin; Ukehaxhaj, Jehona; Vlahakis, Sophia K.; Ylli, Dora; Clayton, Nicola S.; Roberts, John; Fure, Emilie L.; Duchatelier, Alicia P.; Getz, David

    2013-01-01

    Recent research suggests that domesticated species – due to artificial selection by humans for specific, preferred behavioral traits – are better than wild animals at responding to visual cues given by humans about the location of hidden food. Although this seems to be supported by studies on a range of domesticated (including dogs, goats and horses) and wild (including wolves and chimpanzees) animals, there is also evidence that exposure to humans positively influences the ability of both wild and domesticated animals to follow these same cues. Here, we test the performance of Asian elephants (Elephas maximus) on an object choice task that provides them with visual-only cues given by humans about the location of hidden food. Captive elephants are interesting candidates for investigating how both domestication and human exposure may impact cue-following as they represent a non-domesticated species with almost constant human interaction. As a group, the elephants (n = 7) in our study were unable to follow pointing, body orientation or a combination of both as honest signals of food location. They were, however, able to follow vocal commands with which they were already familiar in a novel context, suggesting the elephants are able to follow cues if they are sufficiently salient. Although the elephants’ inability to follow the visual cues provides partial support for the domestication hypothesis, an alternative explanation is that elephants may rely more heavily on other sensory modalities, specifically olfaction and audition. Further research will be needed to rule out this alternative explanation. PMID:23613804

  7. Visual cues given by humans are not sufficient for Asian elephants (Elephas maximus) to find hidden food.

    PubMed

    Plotnik, Joshua M; Pokorny, Jennifer J; Keratimanochaya, Titiporn; Webb, Christine; Beronja, Hana F; Hennessy, Alice; Hill, James; Hill, Virginia J; Kiss, Rebecca; Maguire, Caitlin; Melville, Beckett L; Morrison, Violet M B; Seecoomar, Dannah; Singer, Benjamin; Ukehaxhaj, Jehona; Vlahakis, Sophia K; Ylli, Dora; Clayton, Nicola S; Roberts, John; Fure, Emilie L; Duchatelier, Alicia P; Getz, David

    2013-01-01

    Recent research suggests that domesticated species--due to artificial selection by humans for specific, preferred behavioral traits--are better than wild animals at responding to visual cues given by humans about the location of hidden food. Although this seems to be supported by studies on a range of domesticated (including dogs, goats and horses) and wild (including wolves and chimpanzees) animals, there is also evidence that exposure to humans positively influences the ability of both wild and domesticated animals to follow these same cues. Here, we test the performance of Asian elephants (Elephas maximus) on an object choice task that provides them with visual-only cues given by humans about the location of hidden food. Captive elephants are interesting candidates for investigating how both domestication and human exposure may impact cue-following as they represent a non-domesticated species with almost constant human interaction. As a group, the elephants (n = 7) in our study were unable to follow pointing, body orientation or a combination of both as honest signals of food location. They were, however, able to follow vocal commands with which they were already familiar in a novel context, suggesting the elephants are able to follow cues if they are sufficiently salient. Although the elephants' inability to follow the visual cues provides partial support for the domestication hypothesis, an alternative explanation is that elephants may rely more heavily on other sensory modalities, specifically olfaction and audition. Further research will be needed to rule out this alternative explanation.

  8. Dynamics of the spatial scale of visual attention revealed by brain event-related potentials

    NASA Technical Reports Server (NTRS)

    Luo, Y. J.; Greenwood, P. M.; Parasuraman, R.

    2001-01-01

    The temporal dynamics of the spatial scaling of attention during visual search were examined by recording event-related potentials (ERPs). A total of 16 young participants performed a search task in which the search array was preceded by valid cues that varied in size and hence in precision of target localization. The effects of cue size on short-latency (P1 and N1) ERP components, and the time course of these effects with variation in cue-target stimulus onset asynchrony (SOA), were examined. Reaction time (RT) to discriminate a target was prolonged as cue size increased. The amplitudes of the posterior P1 and N1 components of the ERP evoked by the search array were affected in opposite ways by the size of the precue: P1 amplitude increased whereas N1 amplitude decreased as cue size increased, particularly following the shortest SOA. The results show that when top-down information about the region to be searched is less precise (larger cues), RT is slowed and the neural generators of P1 become more active, reflecting the additional computations required in changing the spatial scale of attention to the appropriate element size to facilitate target discrimination. In contrast, the decrease in N1 amplitude with cue size may reflect a broadening of the spatial gradient of attention. The results provide electrophysiological evidence that changes in the spatial scale of attention modulate neural activity in early visual cortical areas and activate at least two temporally overlapping component processes during visual search.
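    P1 and N1 amplitudes of the kind compared here are commonly quantified as the mean voltage within fixed latency windows of the ERP waveform. The sketch below illustrates that measurement on a synthetic waveform; the window bounds are assumptions, not the authors' exact choices:

    ```python
    # Sketch: mean-amplitude measurement of P1/N1 in fixed latency windows,
    # demonstrated on a synthetic ERP (two Gaussian bumps).
    import numpy as np

    fs = 500                                  # sampling rate, Hz
    t = np.arange(-0.1, 0.5, 1 / fs)          # epoch from -100 to 500 ms
    erp = (2e-6 * np.exp(-(((t - 0.10) / 0.020) ** 2))     # toy P1 near 100 ms
           - 3e-6 * np.exp(-(((t - 0.17) / 0.025) ** 2)))  # toy N1 near 170 ms

    def mean_amplitude(lo, hi):
        """Mean voltage between lo and hi seconds post-stimulus."""
        mask = (t >= lo) & (t <= hi)
        return erp[mask].mean()

    print(f"P1 = {mean_amplitude(0.08, 0.12) * 1e6:.2f} uV")
    print(f"N1 = {mean_amplitude(0.15, 0.20) * 1e6:.2f} uV")
    ```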

  9. Proximal versus distal cue utilization in spatial navigation: the role of visual acuity?

    PubMed

    Carman, Heidi M; Mactutus, Charles F

    2002-09-01

    Proximal versus distal cue use in the Morris water maze is a widely accepted strategy for the dissociation of various problems affecting spatial navigation in rats such as aging, head trauma, lesions, and pharmacological or hormonal agents. Of the limited number of ontogenetic rat studies conducted, the majority have approached the problem of preweanling spatial navigation through a similar proximal-distal dissociation. An implicit assumption among all of these studies has been that the animal's visual system is sufficient to permit robust spatial navigation. We challenged this assumption and have addressed the role of visual acuity in spatial navigation in the preweanling Fischer 344-N rat by training animals to locate a visible (proximal) or hidden (distal) platform using double or null extramaze cues within the testing environment. All pups demonstrated improved performance across training, but animals presented with a visible platform, regardless of extramaze cues, simultaneously reached asymptotic performance levels; animals presented with a hidden platform, dependent upon location of extramaze cues, differentially reached asymptotic performance levels. Probe trial performance, defined by quadrant time and platform crossings, revealed that distal-double-cue pups demonstrated spatial navigational ability superior to that of the remaining groups. These results suggest that a pup's ability to spatially navigate a hidden platform is dependent on not only its response repertoire and task parameters, but also its visual acuity, as determined by the extramaze cue location within the testing environment. The standard hidden versus visible platform dissociation may not be a satisfactory strategy for the control of potential sensory deficits.

  10. Early and Late Inhibitions Elicited by a Peripheral Visual Cue on Manual Response to a Visual Target: Are They Based on Cartesian Coordinates?

    ERIC Educational Resources Information Center

    Gawryszewski, Luiz G.; Carreiro, Luiz Renato R.; Magalhaes, Fabio V.

    2005-01-01

    A non-informative cue (C) elicits an inhibition of manual reaction time (MRT) to a visual target (T). We report an experiment to examine if the spatial distribution of this inhibitory effect follows Polar or Cartesian coordinate systems. C appeared at one out of 8 isoeccentric (7[degrees]) positions, the C-T angular distances (in polar…

  11. Visual attention to food cues in obesity: an eye-tracking study.

    PubMed

    Doolan, Katy J; Breslin, Gavin; Hanna, Donncha; Murphy, Kate; Gallagher, Alison M

    2014-12-01

    Based on the theory of incentive sensitization, the aim of this study was to investigate differences in attentional processing of food-related visual cues between normal-weight and overweight/obese males and females. Twenty-six normal-weight (14M, 12F) and 26 overweight/obese (14M, 12F) adults completed a visual probe task and an eye-tracking paradigm. Reaction times and eye movements to food and control images were collected during both a fasted and fed condition in a counterbalanced design. Participants had greater visual attention towards high-energy-density food images compared to low-energy-density food images regardless of hunger condition. This was most pronounced in overweight/obese males, who had significantly greater maintained attention towards high-energy-density food images when compared with their normal-weight counterparts; however, no between-weight-group differences were observed for female participants. High-energy-density food images appear to capture visual attention more readily than low-energy-density food images. Results also suggest the possibility of an altered visual food cue-associated reward system in overweight/obese males. Attentional processing of food cues may play a role in eating behaviors and thus should be taken into consideration as part of an integrated approach to curbing obesity. © 2014 The Obesity Society.
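    Maintained attention in such eye-tracking paradigms is often expressed as a dwell-time bias: the share of total fixation time spent on one image category. A minimal sketch with hypothetical fixation records:

    ```python
    # Sketch: dwell-time bias as the share of total fixation time on
    # high-energy-density food images. Fixation records are hypothetical.
    fixations = [
        # (image_type, duration_ms)
        ("high_ed", 620), ("low_ed", 310), ("high_ed", 540), ("low_ed", 280),
    ]

    dwell = {}
    for img, dur in fixations:
        dwell[img] = dwell.get(img, 0) + dur

    share = dwell["high_ed"] / sum(dwell.values())  # > 0.5 = maintained bias
    print(f"high-energy-density dwell share = {share:.2f}")
    ```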

  12. Nonlinear Y-Like Receptive Fields in the Early Visual Cortex: An Intermediate Stage for Building Cue-Invariant Receptive Fields from Subcortical Y Cells.

    PubMed

    Gharat, Amol; Baker, Curtis L

    2017-01-25

    Many of the neurons in early visual cortex are selective for the orientation of boundaries defined by first-order cues (luminance) as well as second-order cues (contrast, texture). The neural circuit mechanism underlying this selectivity is still unclear, but some studies have proposed that it emerges from spatial nonlinearities of subcortical Y cells. To understand how inputs from the Y-cell pathway might be pooled to generate cue-invariant receptive fields, we recorded visual responses from single neurons in cat Area 18 using linear multielectrode arrays. We measured responses to drifting and contrast-reversing luminance gratings as well as contrast modulation gratings. We found that a large fraction of these neurons have nonoriented responses to gratings, similar to those of subcortical Y cells: they respond at the second harmonic (F2) to high-spatial frequency contrast-reversing gratings and at the first harmonic (F1) to low-spatial frequency drifting gratings ("Y-cell signature"). For a given neuron, spatial frequency tuning for linear (F1) and nonlinear (F2) responses is quite distinct, similar to orientation-selective cue-invariant neurons. Also, these neurons respond to contrast modulation gratings with selectivity for the carrier (texture) spatial frequency and, in some cases, orientation. Their receptive field properties suggest that they could serve as building blocks for orientation-selective cue-invariant neurons. We propose a circuit model that combines ON- and OFF-center cortical Y-like cells in an unbalanced push-pull manner to generate orientation-selective, cue-invariant receptive fields. A significant fraction of neurons in early visual cortex have specialized receptive fields that allow them to selectively respond to the orientation of boundaries that are invariant to the cue (luminance, contrast, texture, motion) that defines them. However, the neural mechanism to construct such versatile receptive fields remains unclear. Using multielectrode recording, we found a large fraction of neurons in early visual cortex with receptive fields not selective for orientation that have spatial nonlinearities like those of subcortical Y cells. These are strong candidates for building cue-invariant orientation-selective neurons; we present a neural circuit model that pools such neurons in an imbalanced "push-pull" manner, to generate orientation-selective cue-invariant receptive fields. Copyright © 2017 the authors.

  13. Crossmodal and Incremental Perception of Audiovisual Cues to Emotional Speech

    ERIC Educational Resources Information Center

    Barkhuysen, Pashiera; Krahmer, Emiel; Swerts, Marc

    2010-01-01

    In this article we report on two experiments about the perception of audiovisual cues to emotional speech. The article addresses two questions: (1) how do visual cues from a speaker's face to emotion relate to auditory cues, and (2) what is the recognition speed for various facial cues to emotion? Both experiments reported below are based on tests…

  14. Visual cues that are effective for contextual saccade adaptation.

    PubMed

    Azadi, Reza; Harwood, Mark R

    2014-06-01

    The accuracy of saccades, as maintained by saccade adaptation, has been shown to be context dependent: different amplitude movements can be made to the same retinal displacement depending on motor contexts such as orbital starting location. There is conflicting evidence as to whether purely visual cues also affect contextual saccade adaptation and, if so, what function this might serve. We tested which visual cues might evoke contextual adaptation. Over 5 experiments, 78 naive subjects made saccades to circularly moving targets, which stepped outward or inward during the saccade depending on target movement direction, speed, or color and shape. To test whether the postsaccadic movement or context was critical, we stopped the postsaccade target motion (experiment 4) or neutralized the contexts by equating postsaccade target speed to an intermediate value (experiment 5). We found contextual adaptation in all conditions except those defined by color and shape. We conclude that some, but not all, visual cues before the saccade are sufficient for contextual adaptation. We conjecture that this visual contextuality functions to allow for different motor states for different coordinated movement patterns, such as coordinated saccade and pursuit motor planning. Copyright © 2014 the American Physiological Society.

  15. Modelling effects on grid cells of sensory input during self‐motion

    PubMed Central

    Raudies, Florian; Hinman, James R.

    2016-01-01

    The neural coding of spatial location for memory function may involve grid cells in the medial entorhinal cortex, but the mechanism of generating the spatial responses of grid cells remains unclear. This review describes some current theories and experimental data concerning the role of sensory input in generating the regular spatial firing patterns of grid cells, and changes in grid cell firing fields with movement of environmental barriers. As described here, the influence of visual features on spatial firing could involve either computations of self-motion based on optic flow, or computations of absolute position based on the angle and distance of static visual cues. Due to anatomical selectivity of retinotopic processing, the sensory features on the walls of an environment may have a stronger effect on ventral grid cells that have wider spaced firing fields, whereas the sensory features on the ground plane may influence the firing of dorsal grid cells with narrower spacing between firing fields. These sensory influences could contribute to the potential functional role of grid cells in guiding goal-directed navigation. PMID:27094096

  16. Motion cue effects on human pilot dynamics in manual control

    NASA Technical Reports Server (NTRS)

    Washizu, K.; Tanaka, K.; Endo, S.; Itoko, T.

    1977-01-01

    Two experiments were conducted to study the effects of motion cues on human pilots during tracking tasks. The moving-base simulator of the National Aerospace Laboratory served as the motion cue device, and an attitude director indicator or a projected visual field served as the visual cue device. The controlled elements were second-order unstable systems. The experiments confirmed that motion cues lessened pilot workload and consequently enlarged the limits of human controllability. To clarify the mechanism of these effects, the describing functions of the human pilots were identified using spectral and time-domain analyses. The results suggest that the motion-sensing system can effectively extract rate (derivative) information from the signal, consistent with existing physiological knowledge.
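    Describing-function identification of this sort can be illustrated with a standard cross-spectral estimate, H(f) = S_xy(f) / S_xx(f), where x is the tracking error the pilot sees and y the control output. The sketch below uses synthetic signals and is a generic reconstruction, not the study's actual procedure:

    ```python
    # Sketch: describing-function estimate H(f) = S_xy(f) / S_xx(f) from
    # tracking records, with synthetic stand-ins for error and stick output.
    import numpy as np
    from scipy.signal import csd, welch

    fs = 100.0
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(0)
    e = rng.standard_normal(t.size)                 # tracking error (input)
    y = np.convolve(e, np.ones(10) / 10, "same")    # toy "pilot" response

    f, S_xy = csd(e, y, fs=fs, nperseg=1024)        # cross-spectrum
    _, S_xx = welch(e, fs=fs, nperseg=1024)         # input power spectrum
    H = S_xy / S_xx                                 # describing function

    gain_db = 20 * np.log10(np.abs(H))
    phase_deg = np.degrees(np.angle(H))
    print(gain_db[:3], phase_deg[:3])
    ```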

  17. Analysis procedures and subjective flight results of a simulator validation and cue fidelity experiment

    NASA Technical Reports Server (NTRS)

    Carr, Peter C.; Mckissick, Burnell T.

    1988-01-01

    A joint experiment to investigate simulator validation and cue fidelity was conducted by the Dryden Flight Research Facility of NASA Ames Research Center (Ames-Dryden) and NASA Langley Research Center. The primary objective was to validate the use of a closed-loop pilot-vehicle mathematical model as an analytical tool for optimizing the tradeoff between simulator fidelity requirements and simulator cost. The validation process includes comparing model predictions with simulation and flight test results to evaluate various hypotheses for differences in motion and visual cues and information transfer. A group of five pilots flew air-to-air tracking maneuvers in the Langley differential maneuvering simulator and visual motion simulator and in an F-14 aircraft at Ames-Dryden. The simulators used motion and visual cueing devices including a g-seat, a helmet loader, wide field-of-view horizon, and a motion base platform.

  18. Stream specificity and asymmetries in feature binding and content-addressable access in visual encoding and memory.

    PubMed

    Huynh, Duong L; Tripathy, Srimant P; Bedell, Harold E; Ögmen, Haluk

    2015-01-01

    Human memory is content addressable; i.e., contents of the memory can be accessed using partial information about the bound features of a stored item. In this study, we used a cross-feature cuing technique to examine how the human visual system encodes, binds, and retains information about multiple stimulus features within a set of moving objects. We sought to characterize the roles of three different features (position, color, and direction of motion, the latter two of which are processed preferentially within the ventral and dorsal visual streams, respectively) in the construction and maintenance of object representations. We investigated the extent to which these features are bound together across the following processing stages: during stimulus encoding, sensory (iconic) memory, and visual short-term memory. Whereas all features examined here can serve as cues for addressing content, their effectiveness shows asymmetries and varies according to cue-report pairings and the stage of information processing and storage. Position-based indexing theories predict that position should be more effective as a cue compared to other features. While we found a privileged role for position as a cue at the stimulus-encoding stage, position was not the privileged cue at the sensory and visual short-term memory stages. Instead, the pattern that emerged from our findings is one that mirrors the parallel processing streams in the visual system. This stream-specific binding and cuing effectiveness manifests itself in all three stages of information processing examined here. Finally, we find that the Leaky Flask model proposed in our previous study is applicable to all three features.

  19. Feasibility and Preliminary Efficacy of Visual Cue Training to Improve Adaptability of Walking after Stroke: Multi-Centre, Single-Blind Randomised Control Pilot Trial.

    PubMed

    Hollands, Kristen L; Pelton, Trudy A; Wimperis, Andrew; Whitham, Diane; Tan, Wei; Jowett, Sue; Sackley, Catherine M; Wing, Alan M; Tyson, Sarah F; Mathias, Jonathan; Hensman, Marianne; van Vliet, Paulette M

    2015-01-01

    Given the importance of vision in the control of walking, and evidence indicating that varied practice of walking improves mobility outcomes, this study sought to examine the feasibility and preliminary efficacy of varied walking practice in response to visual cues for the rehabilitation of walking following stroke. This 3-arm parallel, multi-centre, assessor-blind, randomised controlled pilot trial was conducted within outpatient neurorehabilitation services. Participants were community-dwelling stroke survivors with walking speed <0.8 m/s, lower-limb paresis and no severe visual impairments. The interventions were over-ground visual cue training (O-VCT), treadmill-based visual cue training (T-VCT), and usual care (UC), each delivered by physiotherapists twice weekly for 8 weeks. Participants were randomised using computer-generated, randomly permuted balanced blocks of randomly varying size. Recruitment, retention, adherence, adverse events, and mobility and balance were measured before randomisation, post-intervention and at four-week follow-up. Fifty-six participants took part (18 T-VCT, 19 O-VCT, 19 UC); thirty-four completed treatment and follow-up assessments. Among those who completed, adherence was good, with 16 treatments provided over a median of 8.4, 7.5 and 9 weeks for T-VCT, O-VCT and UC respectively. No adverse events were reported. Post-treatment improvements in walking speed, symmetry, balance and functional mobility were seen in all treatment arms. Outpatient-based treadmill and over-ground walking adaptability practice using visual cues is feasible and may improve mobility and balance. Future studies should continue a carefully phased approach, using the methods identified here to improve retention. Clinicaltrials.gov NCT01600391.

  20. Developmental Role of Static, Dynamic, and Contextual Cues in Speech Perception

    ERIC Educational Resources Information Center

    Hicks, Candace Bourland; Ohde, Ralph N.

    2005-01-01

    The purpose of the current study was to examine the role of syllable duration context as well as static and dynamic acoustic properties in child and adult speech perception. Ten adults and eleven 4- to 5-year-old children identified a syllable as [ba] or [wa] (stop-glide contrast) in 3 conditions differing in synthetic continua. The 1st condition…

  1. Forgotten but not gone: Retro-cue costs and benefits in a double-cueing paradigm suggest multiple states in visual short-term memory.

    PubMed

    van Moorselaar, Dirk; Olivers, Christian N L; Theeuwes, Jan; Lamme, Victor A F; Sligte, Ilja G

    2015-11-01

    Visual short-term memory (VSTM) performance is enhanced when the to-be-tested item is cued after encoding. This so-called retro-cue benefit is typically accompanied by a cost for the noncued items, suggesting that information is lost from VSTM upon presentation of a retrospective cue. Here we assessed whether noncued items can be restored to VSTM when made relevant again by a subsequent second cue. We presented either 1 or 2 consecutive retro-cues (80% valid) during the retention interval of a change-detection task. Relative to no cue, a valid cue increased VSTM capacity by 2 items, while an invalid cue decreased capacity by 2. Importantly, when a second, valid cue followed an invalid cue, capacity regained 2 items, so that performance was back on par. In addition, when the second cue was also invalid, there was no extra loss of information from VSTM, suggesting that those items that survived a first invalid cue automatically also survived a second. We conclude that these results are in support of a very versatile VSTM system, in which memoranda adopt different representational states depending on whether they are deemed relevant now, in the future, or not at all. We discuss a neural model that is consistent with this conclusion. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
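    Capacity in change-detection tasks of this kind is commonly estimated with Cowan's K, K = N * (hit rate - false-alarm rate). A minimal sketch (the set size and rates below are hypothetical, chosen to mimic shifts of about 2 items):

    ```python
    # Sketch: Cowan's K capacity estimate, K = N * (hits - false alarms).
    # Set size and rates are hypothetical, chosen to mimic ~2-item shifts.
    def cowans_k(set_size, hit_rate, fa_rate):
        return set_size * (hit_rate - fa_rate)

    N = 8  # memory array size (assumed)
    conditions = {
        "no cue": (0.65, 0.20),
        "valid retro-cue": (0.85, 0.10),
        "invalid retro-cue": (0.45, 0.30),
    }
    for name, (h, fa) in conditions.items():
        print(f"{name}: K = {cowans_k(N, h, fa):.1f} items")
    ```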

  2. Stimulus Dependency of Object-Evoked Responses in Human Visual Cortex: An Inverse Problem for Category Specificity

    PubMed Central

    Graewe, Britta; De Weerd, Peter; Farivar, Reza; Castelo-Branco, Miguel

    2012-01-01

    Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200–250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components. PMID:22363479

  3. Stimulus dependency of object-evoked responses in human visual cortex: an inverse problem for category specificity.

    PubMed

    Graewe, Britta; De Weerd, Peter; Farivar, Reza; Castelo-Branco, Miguel

    2012-01-01

    Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200-250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components.

  4. Heuristics of reasoning and analogy in children's visual perspective taking.

    PubMed

    Yaniv, I; Shatz, M

    1990-10-01

    We propose that children's reasoning about others' visual perspectives is guided by simple heuristics based on a perceiver's line of sight and salient features of the object met by that line. In 3 experiments employing a 2-perceiver analogy task, children aged 3-6 were generally better able to reproduce a perceiver's perspective if a visual cue in the perceiver's line of sight sufficed to distinguish it from alternatives. Children had greater difficulty when the task hinged on attending to configural cues. Availability of distinctive cues affixed on the objects' sides facilitated solution of the symmetrical orientations. These and several other related findings reported in the literature are traced to children's reliance on heuristics of reasoning.

  5. Inhibition of return shortens perceived duration of a brief visual event.

    PubMed

    Osugi, Takayuki; Takeda, Yuji; Murakami, Ikuya

    2016-11-01

    We investigated the influence of attentional inhibition on the perceived duration of a brief visual event. Although attentional capture by an exogenous cue is known to prolong the perceived duration of an attended visual event, it remains unclear whether time perception is also affected by subsequent attentional inhibition at the location previously cued by an exogenous cue, an attentional phenomenon known as inhibition of return. In this study, we combined spatial cuing and duration judgment. One second after the appearance of an uninformative peripheral cue either to the left or to the right, a target appeared at the cued side in one-third of the trials (a design that indeed yielded inhibition of return) and at the opposite side in another one-third of the trials. In the remaining trials, the cue appeared at a central box and, one second later, a target appeared at either the left or the right side. The target at the previously cued location was perceived as lasting a shorter time than the target presented at the opposite location, and shorter than the target presented after the central cue presentation. Therefore, attentional inhibition produced by a classical paradigm of inhibition of return decreased the perceived duration of a brief visual event. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Audiovisual speech perception in infancy: The influence of vowel identity and infants' productive abilities on sensitivity to (mis)matches between auditory and visual speech cues.

    PubMed

    Altvater-Mackensen, Nicole; Mani, Nivedita; Grossmann, Tobias

    2016-02-01

    Recent studies suggest that infants' audiovisual speech perception is influenced by articulatory experience (Mugitani et al., 2008; Yeung & Werker, 2013). The current study extends these findings by testing if infants' emerging ability to produce native sounds in babbling impacts their audiovisual speech perception. We tested 44 6-month-olds on their ability to detect mismatches between concurrently presented auditory and visual vowels and related their performance to their productive abilities and later vocabulary size. Results show that infants' ability to detect mismatches between auditory and visually presented vowels differs depending on the vowels involved. Furthermore, infants' sensitivity to mismatches is modulated by their current articulatory knowledge and correlates with their vocabulary size at 12 months of age. This suggests that, aside from infants' ability to match nonnative audiovisual cues (Pons et al., 2009), their ability to match native auditory and visual cues continues to develop during the first year of life. Our findings point to a potential role of salient vowel cues and productive abilities in the development of audiovisual speech perception, and further indicate a relation between infants' early sensitivity to audiovisual speech cues and their later language development. PsycINFO Database Record (c) 2016 APA, all rights reserved.

  7. Age Differences in Visual-Auditory Self-Motion Perception during a Simulated Driving Task

    PubMed Central

    Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L.

    2016-01-01

    Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion. PMID:27199829

  8. Multimodal cuing of autobiographical memory in semantic dementia.

    PubMed

    Greenberg, Daniel L; Ogar, Jennifer M; Viskontas, Indre V; Gorno Tempini, Maria Luisa; Miller, Bruce; Knowlton, Barbara J

    2011-01-01

    Individuals with semantic dementia (SD) have impaired autobiographical memory (AM), but the extent of the impairment has been controversial. According to one report (Westmacott, Leach, Freedman, & Moscovitch, 2001), patient performance was better when visual cues were used instead of verbal cues; however, the visual cues used in that study (family photographs) provided more retrieval support than do the word cues that are typically used in AM studies. In the present study, we sought to disentangle the effects of retrieval support and cue modality. We cued AMs of 5 patients with SD and 5 controls with words, simple pictures, and odors. Memories were elicited from childhood, early adulthood, and recent adulthood; they were scored for level of detail and episodic specificity. The patients were impaired across all time periods and stimulus modalities. Within the patient group, words and pictures were equally effective as cues (Friedman test; χ² = 0.25, p = .61), whereas odors were less effective than both words and pictures (for words vs. odors, χ² = 7.83, p = .005; for pictures vs. odors, χ² = 6.18, p = .01). There was no evidence of a temporal gradient in either group (for patients with SD, χ² = 0.24, p = .89; for controls, χ² < 2.07, p = .35). Once the effect of retrieval support is equated across stimulus modalities, there is no evidence for an advantage of visual cues over verbal cues. The greater impairment for olfactory cues presumably reflects degeneration of anterior temporal regions that support olfactory memory. (c) 2010 APA, all rights reserved.

  9. Different effects of color-based and location-based selection on visual working memory.

    PubMed

    Li, Qi; Saiki, Jun

    2015-02-01

    In the present study, we investigated how feature- and location-based selection influences visual working memory (VWM) encoding and maintenance. In Experiment 1, cue type (color, location) and cue timing (precue, retro-cue) were manipulated in a change detection task. The stimuli were color-location conjunction objects, and binding memory was tested. We found a significantly greater effect for color precues than for either color retro-cues or location precues, but no difference between location pre- and retro-cues, consistent with previous studies (e.g., Griffin & Nobre in Journal of Cognitive Neuroscience, 15, 1176-1194, 2003). We also found no difference between location and color retro-cues. Experiment 2 replicated the color precue advantage with more complex color-shape-location conjunction objects. Only one retro-cue effect was different from that in Experiment 1: Color retro-cues were significantly less effective than location retro-cues in Experiment 2, which may relate to a structural property of multidimensional VWM representations. In Experiment 3, a visual search task was used, and the result of a greater location than color precue effect suggests that the color precue advantage in a memory task is related to the modulation of VWM encoding rather than of sensation and perception. Experiment 4, using a task that required only memory for individual features but not for feature bindings, further confirmed that the color precue advantage is specific to binding memory. Together, these findings reveal new aspects of the interaction between attention and VWM and provide potentially important implications for the structural properties of VWM representations.

  10. The invisible cues that guide king penguin chicks home: use of magnetic and acoustic cues during orientation and short-range navigation.

    PubMed

    Nesterova, Anna P; Chiffard, Jules; Couchoux, Charline; Bonadonna, Francesco

    2013-04-15

    King penguins (Aptenodytes patagonicus) live in large and densely populated colonies, where navigation can be challenging because of the presence of many conspecifics that could obstruct locally available cues. Our previous experiments demonstrated that visual cues were important but not essential for king penguin chicks' homing. The main objective of this study was to investigate the importance of non-visual cues, such as magnetic and acoustic cues, for chicks' orientation and short-range navigation. In a series of experiments, the chicks were individually displaced from the colony to an experimental arena where they were released under different conditions. In the magnetic experiments, a strong magnet was attached to the chicks' heads. Trials were conducted in daylight and at night to test the relative importance of visual and magnetic cues. Our results showed that when the geomagnetic field around the chicks was modified, their orientation in the arena and the overall ability to home was not affected. In a low sound experiment we limited the acoustic cues available to the chicks by putting ear pads over their ears, and in a loud sound experiment we provided additional acoustic cues by broadcasting colony sounds on the opposite side of the arena to the real colony. In the low sound experiment, the behavior of the chicks was not affected by the limited sound input. In the loud sound experiment, the chicks reacted strongly to the colony sound. These results suggest that king penguin chicks may use the sound of the colony while orienting towards their home.

  11. A designated odor-language integration system in the human brain.

    PubMed

    Olofsson, Jonas K; Hurley, Robert S; Bowman, Nicholas E; Bao, Xiaojun; Mesulam, M-Marsel; Gottfried, Jay A

    2014-11-05

    Odors are surprisingly difficult to name, but the mechanism underlying this phenomenon is poorly understood. In experiments using event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI), we investigated the physiological basis of odor naming with a paradigm where olfactory and visual object cues were followed by target words that either matched or mismatched the cue. We hypothesized that word processing would not only be affected by its semantic congruency with the preceding cue, but would also depend on the cue modality (olfactory or visual). Performance was slower and less precise when linking a word to its corresponding odor than to its picture. The ERP index of semantic incongruity (N400), reflected in the comparison of nonmatching versus matching target words, was more constrained to posterior electrode sites and lasted longer on odor-cue (vs picture-cue) trials. In parallel, fMRI cross-adaptation in the right orbitofrontal cortex (OFC) and the left anterior temporal lobe (ATL) was observed in response to words when preceded by matching olfactory cues, but not by matching visual cues. Time-series plots demonstrated increased fMRI activity in OFC and ATL at the onset of the odor cue itself, followed by response habituation after processing of a matching (vs nonmatching) target word, suggesting that predictive perceptual representations in these regions are already established before delivery and deliberation of the target word. Together, our findings underscore the modality-specific anatomy and physiology of object identification in the human brain. Copyright © 2014 the authors.

  12. "Tunnel Vision": A Possible Keystone Stimulus Control Deficit in Autistic Children.

    ERIC Educational Resources Information Center

    Rincover, Arnold; And Others

    1986-01-01

    Three autistic boys (ages 9-13) were trained to select a card containing a stimulus array comprised of three visual cues. Decreased distance between cues resulted in responses to more cues, increased distance to fewer cues. Distances did not affect the responding of children matched for mental and chronological age. (Author/JW)

  13. Cue competition affects temporal dynamics of edge-assignment in human visual cortex.

    PubMed

    Brooks, Joseph L; Palmer, Stephen E

    2011-03-01

    Edge-assignment determines the perception of relative depth across an edge and the shape of the closer side. Many cues determine edge-assignment, but relatively little is known about the neural mechanisms involved in combining these cues. Here, we manipulated extremal edge and attention cues to bias edge-assignment such that these two cues either cooperated or competed. To index their neural representations, we flickered figure and ground regions at different frequencies and measured the corresponding steady-state visual-evoked potentials (SSVEPs). Figural regions had stronger SSVEP responses than ground regions, independent of whether they were attended or unattended. In addition, competition and cooperation between the two edge-assignment cues significantly affected the temporal dynamics of edge-assignment processes. The figural SSVEP response peaked earlier when the cues causing it cooperated than when they competed, but sustained edge-assignment effects were equivalent for cooperating and competing cues, consistent with a winner-take-all outcome. These results provide physiological evidence that figure-ground organization involves competitive processes that can affect the latency of figural assignment.
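    The frequency-tagging logic described here can be sketched in a few lines: flicker the figure and ground regions at distinct rates, then read out each region's SSVEP as the spectral amplitude at its tag frequency. The "EEG" signal and tag frequencies below are synthetic assumptions:

    ```python
    # Sketch: frequency tagging. Figure and ground flicker at different
    # rates; each region's SSVEP is the amplitude at its tag frequency.
    import numpy as np

    fs = 500.0
    t = np.arange(0, 10, 1 / fs)                 # 10 s epoch
    f_fig, f_gnd = 7.2, 8.4                      # hypothetical tag rates, Hz
    eeg = (1.5 * np.sin(2 * np.pi * f_fig * t)   # stronger figure response
           + 0.6 * np.sin(2 * np.pi * f_gnd * t)
           + np.random.default_rng(1).standard_normal(t.size))

    spec = np.abs(np.fft.rfft(eeg)) * 2 / t.size  # single-sided amplitude
    freqs = np.fft.rfftfreq(t.size, 1 / fs)

    def amp_at(f_hz):
        return spec[np.argmin(np.abs(freqs - f_hz))]

    print(f"figure tag: {amp_at(f_fig):.2f}, ground tag: {amp_at(f_gnd):.2f}")
    ```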

  14. Late development of cue integration is linked to sensory fusion in cortex.

    PubMed

    Dekker, Tessa M; Ban, Hiroshi; van der Velde, Bauke; Sereno, Martin I; Welchman, Andrew E; Nardini, Marko

    2015-11-02

    Adults optimize perceptual judgements by integrating different types of sensory information [1, 2]. This engages specialized neural circuits that fuse signals from the same [3-5] or different [6] modalities. Whereas young children can use sensory cues independently, adult-like precision gains from cue combination only emerge around ages 10 to 11 years [7-9]. Why does it take so long to make best use of sensory information? Existing data cannot distinguish whether this (1) reflects surprisingly late changes in sensory processing (sensory integration mechanisms in the brain are still developing) or (2) depends on post-perceptual changes (integration in sensory cortex is adult-like, but higher-level decision processes do not access the information) [10]. We tested visual depth cue integration in the developing brain to distinguish these possibilities. We presented children aged 6-12 years with displays depicting depth from binocular disparity and relative motion and made measurements using psychophysics, retinotopic mapping, and pattern classification fMRI. Older children (>10.5 years) showed clear evidence for sensory fusion in V3B, a visual area thought to integrate depth cues in the adult brain [3-5]. By contrast, in younger children (<10.5 years), there was no evidence for sensory fusion in any visual area. This significant age difference was paired with a shift in perceptual performance around ages 10 to 11 years and could not be explained by motion artifacts, visual attention, or signal quality differences. Thus, whereas many basic visual processes mature early in childhood [11, 12], the brain circuits that fuse cues take a very long time to develop.
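
    The "precision gains from cue combination" referred to here are the standard maximum-likelihood prediction: the combined estimate is a reliability-weighted average whose variance is lower than that of either cue alone. A short illustration with made-up numbers:

        def mle_combine(est_a, var_a, est_b, var_b):
            """Reliability-weighted (MLE) combination of two cue estimates."""
            w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
            combined = w_a * est_a + (1 - w_a) * est_b
            combined_var = (var_a * var_b) / (var_a + var_b)  # always < min(var_a, var_b)
            return combined, combined_var

        # Disparity-defined depth (variance 4) and motion-defined depth (variance 9):
        print(mle_combine(10.0, 4.0, 14.0, 9.0))  # -> (~11.23, ~2.77)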

  15. Late Development of Cue Integration Is Linked to Sensory Fusion in Cortex

    PubMed Central

    Dekker, Tessa M.; Ban, Hiroshi; van der Velde, Bauke; Sereno, Martin I.; Welchman, Andrew E.; Nardini, Marko

    2015-01-01

    Adults optimize perceptual judgements by integrating different types of sensory information [1, 2]. This engages specialized neural circuits that fuse signals from the same [3, 4, 5] or different [6] modalities. Whereas young children can use sensory cues independently, adult-like precision gains from cue combination only emerge around ages 10 to 11 years [7, 8, 9]. Why does it take so long to make best use of sensory information? Existing data cannot distinguish whether this (1) reflects surprisingly late changes in sensory processing (sensory integration mechanisms in the brain are still developing) or (2) depends on post-perceptual changes (integration in sensory cortex is adult-like, but higher-level decision processes do not access the information) [10]. We tested visual depth cue integration in the developing brain to distinguish these possibilities. We presented children aged 6–12 years with displays depicting depth from binocular disparity and relative motion and made measurements using psychophysics, retinotopic mapping, and pattern classification fMRI. Older children (>10.5 years) showed clear evidence for sensory fusion in V3B, a visual area thought to integrate depth cues in the adult brain [3, 4, 5]. By contrast, in younger children (<10.5 years), there was no evidence for sensory fusion in any visual area. This significant age difference was paired with a shift in perceptual performance around ages 10 to 11 years and could not be explained by motion artifacts, visual attention, or signal quality differences. Thus, whereas many basic visual processes mature early in childhood [11, 12], the brain circuits that fuse cues take a very long time to develop. PMID:26480841

  16. Visual and Olfactory Floral Cues of Campanula (Campanulaceae) and Their Significance for Host Recognition by an Oligolectic Bee Pollinator

    PubMed Central

    Milet-Pinheiro, Paulo; Ayasse, Manfred; Dötterl, Stefan

    2015-01-01

    Oligolectic bees collect pollen from a few plants within a genus or family to rear their offspring, and are known to rely on visual and olfactory floral cues to recognize host plants. However, studies investigating whether oligolectic bees recognize distinct host plants by using shared floral cues are scarce. In the present study, we used a comparative approach to investigate the visual and olfactory floral cues of six Campanula species, of which only Campanula lactiflora has never been reported as a pollen source of the oligolectic bee Chelostoma rapunculi. We hypothesized that the flowers of Campanula species visited by Ch. rapunculi share visual (i.e. color) and/or olfactory cues (scents) that give them a host-specific signature. To test this hypothesis, floral color and scent were studied by spectrophotometric and chemical analyses, respectively. Additionally, we performed bioassays within a flight cage to test the innate color preference of Ch. rapunculi. Our results show that Campanula flowers reflect light predominantly in the UV-blue/blue bee-color space and that Ch. rapunculi displays a strong innate preference for these two colors. Furthermore, we recorded spiroacetals in the floral scent of all Campanula species except Ca. lactiflora. Spiroacetals, rarely found as floral scent constituents but quite common among Campanula species, were recently shown to play a key role in host-flower recognition by Ch. rapunculi. We conclude that Campanula species share some visual and olfactory floral cues, and that neurological adaptations (i.e. vision and olfaction) of Ch. rapunculi innately drive their foraging flights toward host flowers. The significance of our findings for the evolution of pollen diet breadth in bees is discussed. PMID:26060994

  17. Adult-child differences in acoustic cue weighting are influenced by segmental context: Children are not always perceptually biased toward transitions

    NASA Astrophysics Data System (ADS)

    Mayo, Catherine; Turk, Alice

    2004-06-01

    It has been proposed that young children may have a perceptual preference for transitional cues [Nittrouer, S. (2002). J. Acoust. Soc. Am. 112, 711-719]. According to this proposal, this preference can manifest itself either as heavier weighting of transitional cues by children than by adults, or as heavier weighting of transitional cues than of other, more static, cues by children. This study tested this hypothesis by examining adults' and children's cue weighting for the contrasts /saɪ/-/ʃaɪ/, /de/-/be/, /ta/-/da/, and /ti/-/di/. Children were found to weight transitions more heavily than did adults for the fricative contrast /saɪ/-/ʃaɪ/, and were found to weight transitional cues more heavily than nontransitional cues for the voice-onset-time contrast /ta/-/da/. However, these two patterns of cue weighting were not found to hold for the contrasts /de/-/be/ and /ti/-/di/. Consistent with several studies in the literature, the results suggest that children do not always show a bias towards vowel-formant transitions, but that cue weighting can differ according to segmental context, and possibly with the physical distinctiveness of available acoustic cues.
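
    One common way to quantify cue weighting in experiments of this kind is to regress listeners' binary categorization responses on the standardized values of the two cues; the relative size of the fitted coefficients indexes the relative weights. A sketch on simulated data, not the study's actual analysis:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 400
        transition = rng.normal(0, 1, n)     # z-scored transition cue
        fric_spectrum = rng.normal(0, 1, n)  # z-scored static (spectral) cue
        # Simulated listener who weights transitions twice as heavily:
        p = 1 / (1 + np.exp(-(2.0 * transition + 1.0 * fric_spectrum)))
        responses = rng.random(n) < p

        X = np.column_stack([transition, fric_spectrum])
        model = LogisticRegression().fit(X, responses)
        w_trans, w_static = model.coef_[0]
        print(f"relative transition weight: {w_trans / (w_trans + w_static):.2f}")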

  18. Multimedia instructions and cognitive load theory: effects of modality and cueing.

    PubMed

    Tabbers, Huib K; Martens, Rob L; van Merriënboer, Jeroen J G

    2004-03-01

    Recent research on the influence of presentation format on the effectiveness of multimedia instructions has yielded some interesting results. According to cognitive load theory (Sweller, Van Merriënboer, & Paas, 1998) and Mayer's theory of multimedia learning (Mayer, 2001), replacing visual text with spoken text (the modality effect) and adding visual cues relating elements of a picture to the text (the cueing effect) both increase the effectiveness of multimedia instructions in terms of better learning results or less mental effort. The aim of this study was to test the generalisability of the modality and cueing effects in a classroom setting. The participants were 111 second-year students from the Department of Education at the University of Gent in Belgium (aged between 19 and 25 years). The participants studied a web-based multimedia lesson on instructional design for about one hour. Afterwards they completed a retention and a transfer test. During both the instruction and the tests, self-report measures of mental effort were administered. Adding visual cues to the pictures resulted in higher retention scores, while replacing visual text with spoken text resulted in lower retention and transfer scores. Thus only a weak cueing effect and even a reverse modality effect were found, indicating that neither effect generalises easily to non-laboratory settings. A possible explanation for the reversed modality effect is that the multimedia instructions in this study were learner-paced, as opposed to the system-paced instructions used in earlier research.

  19. Differential effect of glucose ingestion on the neural processing of food stimuli in lean and overweight adults.

    PubMed

    Heni, Martin; Kullmann, Stephanie; Ketterer, Caroline; Guthoff, Martina; Bayer, Margarete; Staiger, Harald; Machicao, Fausto; Häring, Hans-Ulrich; Preissl, Hubert; Veit, Ralf; Fritsche, Andreas

    2014-03-01

    Eating behavior is crucial in the development of obesity and Type 2 diabetes. To further investigate its regulation, we studied the effects of glucose versus water ingestion on the neural processing of visual high- and low-caloric food cues in 12 lean and 12 overweight subjects by functional magnetic resonance imaging. We found body weight to substantially impact the brain's response to visual food cues after glucose versus water ingestion. Specifically, there was a significant interaction between body weight, condition (water versus glucose), and caloric content of food cues. Although overweight subjects showed a generalized reduced response to food objects in the fusiform gyrus and precuneus, the lean group showed a differential pattern to high- versus low-caloric foods depending on glucose versus water ingestion. Furthermore, we observed plasma insulin- and glucose-associated effects. The hypothalamic response to high-caloric food cues negatively correlated with changes in blood glucose 30 min after glucose ingestion, while brain regions in the prefrontal cortex in particular showed a significant negative relationship with increases in plasma insulin 120 min after glucose ingestion. We conclude that the postprandial neural processing of food cues is highly influenced by body weight, especially in visual areas, potentially altering visual attention to food. Furthermore, our results underline that insulin markedly influences prefrontal activity to high-caloric food cues after a meal, indicating that postprandial hormones may be potential players in modulating executive control.

  20. Contextual Cueing in Multiconjunction Visual Search Is Dependent on Color- and Configuration-Based Intertrial Contingencies

    ERIC Educational Resources Information Center

    Geyer, Thomas; Shi, Zhuanghua; Muller, Hermann J.

    2010-01-01

    Three experiments examined memory-based guidance of visual search using a modified version of the contextual-cueing paradigm (Jiang & Chun, 2001). The target, if present, was a conjunction of color and orientation, with target (and distractor) features randomly varying across trials (multiconjunction search). Under these conditions, reaction times…

  1. Wingbeat frequency-sweep and visual stimuli for trapping male Aedes aegypti (Diptera: Culicidae)

    USDA-ARS?s Scientific Manuscript database

    Combinations of female wingbeat acoustic cues and visual cues were evaluated to determine their potential for use in male Aedes aegypti (L.) traps in peridomestic environments. A modified Centers for Disease Control (CDC) light trap using a 350-500 Hz frequency-sweep broadcast from a speaker as an a...

  2. Potential for using visual, auditory, and olfactory cues to manage foraging behaviour and spatial distribution of rangeland livestock

    USDA-ARS?s Scientific Manuscript database

    This paper reviews the literature and reports on the current state of knowledge regarding the potential for managers to use visual (VC), auditory (AC), and olfactory (OC) cues to manage foraging behavior and spatial distribution of rangeland livestock. We present evidence that free-ranging livestock...

  3. Visual Cues Generated during Action Facilitate 14-Month-Old Infants' Mental Rotation

    ERIC Educational Resources Information Center

    Antrilli, Nick K.; Wang, Su-hua

    2016-01-01

    Although action experience has been shown to enhance the development of spatial cognition, the mechanism underlying the effects of action is still unclear. The present research examined the role of visual cues generated during action in promoting infants' mental rotation. We sought to clarify the underlying mechanism by decoupling different…

  4. Categorically Defined Targets Trigger Spatiotemporal Visual Attention

    ERIC Educational Resources Information Center

    Wyble, Brad; Bowman, Howard; Potter, Mary C.

    2009-01-01

    Transient attention to a visually salient cue enhances processing of a subsequent target in the same spatial location between 50 to 150 ms after cue onset (K. Nakayama & M. Mackeben, 1989). Do stimuli from a categorically defined target set, such as letters or digits, also generate transient attention? Participants reported digit targets among…

  5. Visual Cues and Listening Effort: Individual Variability

    ERIC Educational Resources Information Center

    Picou, Erin M.; Ricketts, Todd A; Hornsby, Benjamin W. Y.

    2011-01-01

    Purpose: To investigate the effect of visual cues on listening effort as well as whether predictive variables such as working memory capacity (WMC) and lipreading ability affect the magnitude of listening effort. Method: Twenty participants with normal hearing were tested using a paired-associates recall task in 2 conditions (quiet and noise) and…

  6. Visual Sonority Modulates Infants' Attraction to Sign Language

    ERIC Educational Resources Information Center

    Stone, Adam; Petitto, Laura-Ann; Bosworth, Rain

    2018-01-01

    The infant brain may be predisposed to identify perceptually salient cues that are common to both signed and spoken languages. Recent theory based on spoken languages has advanced sonority as one of these potential language acquisition cues. Using a preferential looking paradigm with an infrared eye tracker, we explored visual attention of hearing…

  7. Impaired Visual Attention in Children with Dyslexia.

    ERIC Educational Resources Information Center

    Heiervang, Einar; Hugdahl, Kenneth

    2003-01-01

    A cue-target visual attention task was administered to 25 children (ages 10-12) with dyslexia. Results showed a general pattern of slower responses in the children with dyslexia compared to controls. Subjects also had longer reaction times in the short and long cue-target interval conditions (covert and overt shift of attention). (Contains…

  8. Visual Cues, Student Sex, Material Taught, and the Magnitude of Teacher Expectancy Effects.

    ERIC Educational Resources Information Center

    Badini, Aldo A.; Rosenthal, Robert

    1989-01-01

    Conducts an experiment on teacher expectancy effects to investigate the simultaneous effects of student gender, communication channel, and type of material taught (vocabulary and reasoning). Finds that the magnitude of teacher expectation effects was greater when students had access to visual cues, especially when the students were female. (MS)

  9. Central and Peripheral Vision Loss Differentially Affects Contextual Cueing in Visual Search

    ERIC Educational Resources Information Center

    Geringswald, Franziska; Pollmann, Stefan

    2015-01-01

    Visual search for targets in repeated displays is more efficient than search for the same targets in random distractor layouts. Previous work has shown that this contextual cueing is severely impaired under central vision loss. Here, we investigated whether central vision loss, simulated with gaze-contingent displays, prevents the incidental…

  10. Speaker Identity Supports Phonetic Category Learning

    ERIC Educational Resources Information Center

    Mani, Nivedita; Schneider, Signe

    2013-01-01

    Visual cues from the speaker's face, such as the discriminable mouth movements used to produce speech sounds, improve discrimination of these sounds by adults. The speaker's face, however, provides more information than just the mouth movements used to produce speech--it also provides a visual indexical cue of the identity of the speaker. The…

  11. Audio-Visual Speech Perception: A Developmental ERP Investigation

    ERIC Educational Resources Information Center

    Knowland, Victoria C. P.; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael S. C.

    2014-01-01

    Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language…

  12. Superior Temporal Activation in Response to Dynamic Audio-Visual Emotional Cues

    ERIC Educational Resources Information Center

    Robins, Diana L.; Hunyadi, Elinora; Schultz, Robert T.

    2009-01-01

    Perception of emotion is critical for successful social interaction, yet the neural mechanisms underlying the perception of dynamic, audio-visual emotional cues are poorly understood. Evidence from language and sensory paradigms suggests that the superior temporal sulcus and gyrus (STS/STG) play a key role in the integration of auditory and visual…

  13. Enhancing Visual Search Abilities of People with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Li-Tsang, Cecilia W. P.; Wong, Jackson K. K.

    2009-01-01

    This study aimed to evaluate the effects of cueing in a visual search paradigm for people with and without intellectual disabilities (ID). A total of 36 subjects (18 persons with ID and 18 persons with normal intelligence) were recruited using a convenience sampling method. A series of experiments were conducted to compare guided cue strategies using…

  14. Heightened attentional capture by visual food stimuli in anorexia nervosa.

    PubMed

    Neimeijer, Renate A M; Roefs, Anne; de Jong, Peter J

    2017-08-01

    The present study was designed to test the hypothesis that anorexia nervosa (AN) patients are relatively insensitive to the attentional capture of visual food stimuli. Attentional avoidance of food might help AN patients to prevent more elaborate processing of food stimuli and the subsequent generation of craving, which might enable AN patients to maintain their strict diet. Participants were 66 restrictive AN spectrum patients and 55 healthy controls. A single-target rapid serial visual presentation task was used with food and disorder-neutral cues as critical distracter stimuli and disorder-neutral pictures as target stimuli. AN spectrum patients showed diminished task performance when visual food cues were presented in close temporal proximity of the to-be-identified target. In contrast to our hypothesis, results indicate that food cues automatically capture AN spectrum patients' attention. One explanation could be that the enhanced attentional capture of food cues in AN is driven by the relatively high threat value of food items in AN. Implications and suggestions for future research are discussed.

  15. Right hemispheric dominance and interhemispheric cooperation in gaze-triggered reflexive shift of attention.

    PubMed

    Okada, Takashi; Sato, Wataru; Kubota, Yasutaka; Toichi, Motomi; Murai, Toshiya

    2012-03-01

    The neural substrate for the processing of gaze remains unknown. The aim of the present study was to clarify which hemisphere dominantly processes gaze cues, and whether the two hemispheres cooperate, in gaze-triggered reflexive shifts of attention. Twenty-eight normal subjects were tested. Non-predictive gaze cues were presented in either unilateral or bilateral visual fields, and subjects localized the target as quickly as possible. Reaction times (RT) were shorter when gaze cues pointed toward rather than away from targets, whichever visual field the cues were presented in. RT were shorter for left than for right visual field presentations. RT for mono-directional bilateral presentations were shorter than for either unilateral presentation. When bi-directional bilateral cues were presented, RT were faster when the valid cue appeared in the left rather than the right visual field. The right hemisphere thus appears to be dominant, and the two hemispheres cooperate, in gaze-triggered reflexive shifts of attention.

  16. Subjective scaling of spatial room acoustic parameters influenced by visual environmental cues

    PubMed Central

    Valente, Daniel L.; Braasch, Jonas

    2010-01-01

    Although there have been numerous studies investigating subjective spatial impression in rooms, only a few of those studies have addressed the influence of visual cues on the judgment of auditory measures. In the psychophysical study presented here, video footage of five solo music/speech performers was shown for four different listening positions within a general-purpose space. The videos were presented in addition to the acoustic signals, which were auralized using binaural room impulse responses (BRIR) that were recorded in the same general-purpose space. The participants were asked to adjust the direct-to-reverberant energy ratio (D/R ratio) of the BRIR according to their expectation considering the visual cues. They were also directed to rate the apparent source width (ASW) and listener envelopment (LEV) for each condition. Visual cues generated by changing the sound-source position in the multi-purpose space, as well as the makeup of the sound stimuli, affected the judgment of spatial impression. Participants also scaled the direct-to-reverberant energy ratio with greater direct sound energy than was measured in the acoustical environment. PMID:20968367
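
    The D/R ratio adjusted by the participants has a simple operational definition: the energy of the impulse response within a short window around the direct sound, relative to everything after it. A sketch, assuming a conventional 2.5 ms direct-sound window rather than the study's exact convention:

        import numpy as np

        def dr_ratio_db(rir, fs, direct_window_ms=2.5):
            """Direct-to-reverberant energy ratio (dB) of a room impulse response."""
            onset = int(np.argmax(np.abs(rir)))               # direct-sound arrival
            split = onset + int(fs * direct_window_ms / 1e3)  # end of direct window
            direct = np.sum(rir[:split] ** 2)
            reverberant = np.sum(rir[split:] ** 2)
            return 10 * np.log10(direct / reverberant)

        # Toy RIR: unit direct impulse followed by exponentially decaying noise.
        fs = 48000
        rir = np.zeros(fs)
        rir[100] = 1.0
        t = np.arange(fs - 200) / fs
        rir[200:] = 0.1 * np.random.randn(fs - 200) * np.exp(-t / 0.3)
        print(f"D/R = {dr_ratio_db(rir, fs):.1f} dB")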

  17. Discourse intervention strategies in Alzheimer's disease: Eye-tracking and the effect of visual cues in conversation.

    PubMed

    Brandão, Lenisa; Monção, Ana Maria; Andersson, Richard; Holmqvist, Kenneth

    2014-01-01

    The goal of this study was to investigate whether on-topic visual cues can serve as aids for the maintenance of discourse coherence and informativeness in autobiographical narratives of persons with Alzheimer's disease (AD). The experiment consisted of three randomized conversation conditions: one without prompts, showing a blank computer screen; an on-topic condition, showing a picture and a sentence about the conversation; and an off-topic condition, showing a picture and a sentence which were unrelated to the conversation. Speech was recorded while visual attention was examined using eye tracking to measure how long participants looked at cues and the face of the listener. Results suggest that interventions using visual cues in the form of images and written information are useful to improve discourse informativeness in AD. This study demonstrated the potential of using images and short written messages as means of compensating for the cognitive deficits which underlie uninformative discourse in AD. Future studies should further investigate the efficacy of language interventions based in the use of these compensation strategies for AD patients and their family members and friends.

  18. Visually-guided attention enhances target identification in a complex auditory scene.

    PubMed

    Best, Virginia; Ozmeral, Erol J; Shinn-Cunningham, Barbara G

    2007-06-01

    In auditory scenes containing many similar sound sources, sorting of acoustic information into streams becomes difficult, which can lead to disruptions in the identification of behaviorally relevant targets. This study investigated the benefit of providing simple visual cues for when and/or where a target would occur in a complex acoustic mixture. Importantly, the visual cues provided no information about the target content. In separate experiments, human subjects either identified learned birdsongs in the presence of a chorus of unlearned songs or recalled strings of spoken digits in the presence of speech maskers. A visual cue indicating which loudspeaker (from an array of five) would contain the target improved accuracy for both kinds of stimuli. A cue indicating which time segment (out of a possible five) would contain the target also improved accuracy, but much more for birdsong than for speech. These results suggest that in real world situations, information about where a target of interest is located can enhance its identification, while information about when to listen can also be helpful when targets are unfamiliar or extremely similar to their competitors.

  19. Visually-guided Attention Enhances Target Identification in a Complex Auditory Scene

    PubMed Central

    Best, Virginia; Ozmeral, Erol J.; Shinn-Cunningham, Barbara G.

    2007-01-01

    In auditory scenes containing many similar sound sources, sorting of acoustic information into streams becomes difficult, which can lead to disruptions in the identification of behaviorally relevant targets. This study investigated the benefit of providing simple visual cues for when and/or where a target would occur in a complex acoustic mixture. Importantly, the visual cues provided no information about the target content. In separate experiments, human subjects either identified learned birdsongs in the presence of a chorus of unlearned songs or recalled strings of spoken digits in the presence of speech maskers. A visual cue indicating which loudspeaker (from an array of five) would contain the target improved accuracy for both kinds of stimuli. A cue indicating which time segment (out of a possible five) would contain the target also improved accuracy, but much more for birdsong than for speech. These results suggest that in real world situations, information about where a target of interest is located can enhance its identification, while information about when to listen can also be helpful when targets are unfamiliar or extremely similar to their competitors. PMID:17453308

  20. Coherence of structural visual cues and pictorial gravity paves the way for interceptive actions.

    PubMed

    Zago, Myrka; La Scaleia, Barbara; Miller, William L; Lacquaniti, Francesco

    2011-09-20

    Dealing with upside-down objects is difficult and takes time. Among the cues that are critical for defining object orientation, the visible influence of gravity on the object's motion has received limited attention. Here, we manipulated the alignment of visible gravity and structural visual cues between each other and relative to the orientation of the observer and physical gravity. Participants pressed a button triggering a hitter to intercept a target accelerated by a virtual gravity. A factorial design assessed the effects of scene orientation (normal or inverted) and target gravity (normal or inverted). We found that interception was significantly more successful when scene direction was concordant with target gravity direction, irrespective of whether both were upright or inverted. This held irrespective of hitter type, both when performance feedback was available to participants (Experiment 1) and when it was unavailable (Experiment 2). These results show that the combined influence of visible gravity and structural visual cues can outweigh both physical gravity and viewer-centered cues, leading observers to rely instead on the congruence of the apparent physical forces acting on people and objects in the scene.

  1. Interaction of color and geometric cues in depth perception: when does "red" mean "near"?

    PubMed

    Guibal, Christophe R C; Dresp, Birgitta

    2004-12-01

    Luminance and color are strong and self-sufficient cues to pictorial depth in visual scenes and images. The present study investigates the conditions under which luminance or color either strengthens or overrides geometric depth cues. We investigated how luminance contrast associated with the color red and color contrast interact with relative height in the visual field, partial occlusion, and interposition to determine the probability that a given figure presented in a pair is perceived as "nearer" than the other. Latencies of "near" responses were analyzed to test for effects of attentional selection. Figures in a pair were supported by luminance contrast (Experiment 1) or isoluminant color contrast (Experiment 2) and combined with one of the three geometric cues. The results of Experiment 1 show that the luminance contrast of a color (here red), when it does not interact with other colors, produces the same effects as achromatic luminance contrasts. The probability of "near" increases with the luminance contrast of the color stimulus, and the latencies of "near" responses decrease with increasing luminance contrast. Partial occlusion is found to be a strong enough pictorial cue to support a weaker red luminance contrast. Interposition cues lose out against cues of spatial position and partial occlusion. The results of Experiment 2, with isoluminant displays of varying color contrast, reveal that red color contrast on a light background supported by any of the three geometric cues wins over green or white supported by any of the three geometric cues. On a dark background, red color contrast supported by the interposition cue loses out against green or white color contrast supported by partial occlusion. These findings reveal that color is not an independent depth cue, but is strongly influenced by luminance contrast and stimulus geometry. Systematically shorter response latencies for stronger "near" percepts demonstrate that selective visual attention reliably detects the most likely depth cue combination in a given configuration.

  2. Integration of visual and motion cues for simulator requirements and ride quality investigation. [computerized simulation of aircraft landing, visual perception of aircraft pilots

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1975-01-01

    Preliminary tests and evaluations of pilot performance during landing (flight paths) using computer-generated images (video tapes) are presented. Psychophysiological factors affecting pilot visual perception were measured. A turning flight maneuver (pitch and roll) was specifically studied using a training device, and the scaling laws involved were determined. Also presented are medical studies (abstracts) on human response to gravity variations without visual cues, the effects of acceleration stimuli on the semicircular canals, neurons affecting eye movements, and vestibular tests.

  3. Web Video Event Recognition by Semantic Analysis From Ubiquitous Documents.

    PubMed

    Yu, Litao; Yang, Yang; Huang, Zi; Wang, Peng; Song, Jingkuan; Shen, Heng Tao

    2016-12-01

    In recent years, the task of event recognition from videos has attracted increasing interest in the multimedia area. Most existing research has focused on exploring visual cues to handle relatively small-granular events, but it is difficult to analyze video content directly without any prior knowledge. Therefore, synthesizing both visual and semantic analysis is a natural way toward video event understanding. In this paper, we study the problem of Web video event recognition, where Web videos often describe large-granular events and carry limited textual information. Key challenges include how to accurately represent event semantics from incomplete textual information and how to effectively explore the correlation between visual and textual cues for video event understanding. We propose a novel framework to perform complex event recognition from Web videos. To compensate for the insufficient expressive power of visual cues, we construct an event knowledge base by deeply mining semantic information from ubiquitous Web documents. This event knowledge base is capable of describing each event with comprehensive semantics. By utilizing this base, the textual cues for a video can be significantly enriched. Furthermore, we introduce a two-view adaptive regression model, which explores the intrinsic correlation between the visual and textual cues of videos to learn reliable classifiers. Extensive experiments on two real-world video data sets show the effectiveness of our proposed framework and prove that the event knowledge base indeed helps improve the performance of Web video event recognition.
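
    As a highly simplified stand-in for the two-view idea (the paper's adaptive-regression formulation is more elaborate), one classifier can be trained per view (visual features and knowledge-base-enriched textual features) toward the same event labels, with their scores fused at decision time. All names below are hypothetical:

        import numpy as np
        from sklearn.linear_model import RidgeClassifier

        def train_two_view(X_visual, X_textual, y):
            # One ridge classifier per view, fitted to the same binary event labels.
            return (RidgeClassifier().fit(X_visual, y),
                    RidgeClassifier().fit(X_textual, y))

        def predict_two_view(models, X_visual, X_textual):
            # Late fusion: sum the per-view decision scores, then threshold.
            m_vis, m_txt = models
            scores = m_vis.decision_function(X_visual) + m_txt.decision_function(X_textual)
            return (scores > 0).astype(int)  # 1 = event present, 0 = absent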

  4. Gait parameter control timing with dynamic manual contact or visual cues

    PubMed Central

    Shi, Peter; Werner, William

    2016-01-01

    We investigated the timing of gait parameter changes (stride length, peak toe velocity, and double-support, single-support, and complete step duration) used to control gait speed. Eleven healthy participants adjusted their gait speed on a treadmill to maintain a constant distance between themselves and a fore-aft oscillating cue (a place on a conveyor belt surface). The experimental design balanced conditions of cue modality (vision: eyes open; manual contact: eyes closed while touching the cue); treadmill speed (0.2, 0.4, 0.85, and 1.3 m/s); and cue motion (none, ±10 cm at 0.09, 0.11, and 0.18 Hz). Correlation analyses revealed a number of temporal relationships between gait parameters and cue speed. The results suggest that neural control ranged from feedforward to feedback. Specifically, step length preceded cue velocity during double-support duration, suggesting anticipatory control. Peak toe velocity nearly coincided with its most-correlated cue velocity during single-support duration. The toe-off concluding step and double-support durations followed their most-correlated cue velocity, suggesting feedback control. Cue-tracking accuracy and cue velocity correlations with timing parameters were higher with the manual contact cue than with the visual cue. The cue/gait timing relationships generalized across cue modalities, albeit with greater delays of step-cycle events relative to manual contact cue velocity. We conclude that individual kinematic parameters of gait are controlled to achieve a desired velocity at different specific times during the gait cycle. The overall timing pattern of instantaneous cue velocities associated with different gait parameters is conserved across cues that afford different performance accuracies. This timing pattern may be temporally shifted to optimize control. Different cue/gait parameter latencies in our nonadaptation paradigm provide general-case evidence of the independent control of gait parameters previously demonstrated in gait adaptation paradigms. PMID:26936979
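
    Lead/lag relationships of the kind reported above can be estimated by cross-correlating a gait-parameter time series with cue velocity and locating the peak. A sketch on toy data (a negative lag means the gait change precedes cue velocity):

        import numpy as np

        def best_lag(gait, cue_vel, fs):
            """Lag (s) at which the gait parameter best correlates with cue velocity."""
            g = (gait - gait.mean()) / gait.std()
            c = (cue_vel - cue_vel.mean()) / cue_vel.std()
            xcorr = np.correlate(g, c, mode="full") / len(g)
            lags = np.arange(-len(g) + 1, len(g)) / fs
            return lags[np.argmax(xcorr)]

        # Toy data: stride length leads the 0.11 Hz cue velocity by ~0.25 s.
        fs = 100.0
        t = np.arange(0, 60, 1 / fs)
        cue_vel = np.sin(2 * np.pi * 0.11 * t)
        stride = np.sin(2 * np.pi * 0.11 * (t + 0.25)) + 0.2 * np.random.randn(len(t))
        print(f"lag = {best_lag(stride, cue_vel, fs):+.2f} s")  # about -0.25 s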

  5. Optical methods for enabling focus cues in head-mounted displays for virtual and augmented reality

    NASA Astrophysics Data System (ADS)

    Hua, Hong

    2017-05-01

    Developing head-mounted displays (HMDs) that offer uncompromised optical pathways to both digital and physical worlds without encumbrance and discomfort confronts many grand challenges, both technological and human-factors-related. Among these, minimizing visual discomfort is one of the key obstacles. A key contributing factor to visual discomfort is the inability to render proper focus cues in HMDs to stimulate natural eye accommodation responses, which leads to the well-known accommodation-convergence cue discrepancy problem. In this paper, I provide a summary of the various optical methods and approaches for enabling focus cues in HMDs for both virtual reality (VR) and augmented reality (AR).
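
    The accommodation-convergence discrepancy is commonly quantified in diopters as the difference between the reciprocals of the display's focal distance and the simulated object distance; a conventional stereo HMD holds the former fixed while vergence tracks the latter. For example:

        def va_conflict_diopters(focal_distance_m, object_distance_m):
            """Vergence-accommodation conflict in diopters (1/m)."""
            return abs(1.0 / focal_distance_m - 1.0 / object_distance_m)

        # Display focused at 2 m, virtual object rendered at 0.4 m:
        print(f"{va_conflict_diopters(2.0, 0.4):.2f} D conflict")  # 2.00 D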

  6. Cue reliability and a landmark stability heuristic determine relative weighting between egocentric and allocentric visual information in memory-guided reach.

    PubMed

    Byrne, Patrick A; Crawford, J Douglas

    2010-06-01

    It is not known how egocentric visual information (location of a target relative to the self) and allocentric visual information (location of a target relative to external landmarks) are integrated to form reach plans. Based on behavioral data from rodents and humans, we hypothesized that the degree of stability in visual landmarks would influence the relative weighting. Furthermore, based on numerous cue-combination studies, we hypothesized that the reach system would act like a maximum-likelihood estimator (MLE), where the reliability of both cues determines their relative weighting. To predict how these factors might interact, we developed an MLE model that weights egocentric and allocentric information based on their respective reliabilities, and also on an additional stability heuristic. We tested the predictions of this model in 10 human subjects by manipulating landmark stability and reliability (via variable-amplitude vibration of the landmarks and variable-amplitude gaze shifts) in three reach-to-touch tasks: an egocentric control (reaching without landmarks), an allocentric control (reaching relative to landmarks), and a cue-conflict task (involving a subtle landmark "shift" during the memory interval). Variability from all three experiments was used to derive parameters for the MLE model, which was then used to simulate egocentric-allocentric weighting in the cue-conflict experiment. As predicted by the model, landmark vibration, despite its lack of influence on pointing variability (and thus allocentric reliability) in the control experiment, had a strong influence on egocentric-allocentric weighting. A reduced model without the stability heuristic was unable to reproduce this effect. These results suggest that heuristics for extrinsic cue stability are at least as important as reliability for determining cue weighting in memory-guided reaching.
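
    A sketch of the weighting scheme described above: start from reliability-based MLE weights, then discount the allocentric weight by a landmark-stability term. The multiplicative form used here is an illustrative assumption; the paper derives its model parameters from the data.

        def ego_allo_weights(var_ego, var_allo, stability):
            """stability in [0, 1]: 1 = stable landmarks, 0 = unreliable landmarks."""
            w_allo_mle = (1 / var_allo) / (1 / var_ego + 1 / var_allo)
            w_allo = stability * w_allo_mle   # heuristic down-weighting of landmarks
            return 1 - w_allo, w_allo         # (egocentric, allocentric)

        # Equally reliable cues: vibrating landmarks halve the allocentric weight.
        print(ego_allo_weights(1.0, 1.0, stability=1.0))  # (0.5, 0.5)
        print(ego_allo_weights(1.0, 1.0, stability=0.5))  # (0.75, 0.25)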

  7. Do preschool children learn to read words from environmental prints?

    PubMed

    Zhao, Jing; Zhao, Pei; Weng, Xuchu; Li, Su

    2014-01-01

    Parents and teachers worldwide believe that a visual environment rich with print can contribute to young children's literacy. Children seem to recognize words in familiar logos at an early age. However, most previous studies were carried out with alphabetic scripts. Alphabetic letters regularly correspond to phonological segments in a word and provide strong cues about the identity of the whole word. Thus it was not clear whether children can learn to read words by extracting visual word form information from environmental prints. To exclude the phonological-cue confound, this study tested children's knowledge of Chinese words embedded in familiar logos. Four environmental logos were employed and transformed into four versions with the contextual cues (i.e., aspects apart from the presentation of the words themselves in logo format, such as the color, logo, and font type cues) gradually minimized. Children aged 3 to 5 were tested. We observed that children of different ages all performed better when words were presented in highly familiar logos than when they were presented in a plain fashion, devoid of context. This advantage for familiar logos was also present when the contextual information was only partial. However, the role of various cues in learning words changed with age. The color and logo cues had a larger effect in 3- and 4-year-olds than in 5-year-olds, while the font type cue played a greater role in 5-year-olds than in the other two groups. Our findings demonstrated that young children did not easily learn words by extracting their visual form information, even from familiar environmental prints. However, children aged 5 began to pay more attention to the visual form information of words in highly familiar logos than those aged 3 and 4.

  8. Do Preschool Children Learn to Read Words from Environmental Prints?

    PubMed Central

    Zhao, Jing; Zhao, Pei; Weng, Xuchu; Li, Su

    2014-01-01

    Parents and teachers worldwide believe that a visual environment rich with print can contribute to young children's literacy. Children seem to recognize words in familiar logos at an early age. However, most previous studies were carried out with alphabetic scripts. Alphabetic letters regularly correspond to phonological segments in a word and provide strong cues about the identity of the whole word. Thus it was not clear whether children can learn to read words by extracting visual word form information from environmental prints. To exclude the phonological-cue confound, this study tested children's knowledge of Chinese words embedded in familiar logos. Four environmental logos were employed and transformed into four versions with the contextual cues (i.e., aspects apart from the presentation of the words themselves in logo format, such as the color, logo, and font type cues) gradually minimized. Children aged 3 to 5 were tested. We observed that children of different ages all performed better when words were presented in highly familiar logos than when they were presented in a plain fashion, devoid of context. This advantage for familiar logos was also present when the contextual information was only partial. However, the role of various cues in learning words changed with age. The color and logo cues had a larger effect in 3- and 4-year-olds than in 5-year-olds, while the font type cue played a greater role in 5-year-olds than in the other two groups. Our findings demonstrated that young children did not easily learn words by extracting their visual form information, even from familiar environmental prints. However, children aged 5 began to pay more attention to the visual form information of words in highly familiar logos than those aged 3 and 4. PMID:24465677

  9. Starvation period and age affect the response of female Frankliniella occidentalis (Pergande) (Thysanoptera: Thripidae) to odor and visual cues.

    PubMed

    Davidson, Melanie M; Butler, Ruth C; Teulon, David A J

    2006-07-01

    The effects of starvation or age on the walking or flying response of female Frankliniella occidentalis to visual and/or odor cues in two types of olfactometer were examined in the laboratory. The response of walking thrips starved for 0, 1, 4, or 24 h to an odor cue (1 μl of 10% p-anisaldehyde) was examined in a Y-tube olfactometer. The take-off and landing response of thrips (unknown age) starved for 0, 1, 4, 24, 48, or 72 h, or of thrips of different ages (2-3 days or 10-13 days post-adult emergence) starved for 24 h, to a visual cue (98 cm² yellow sticky trap) and/or an odor cue (0.5 or 1.0 ml p-anisaldehyde) was examined in a wind tunnel. More thrips starved for at least 4 h (76%) walked up the odor-laden arm of the Y-tube than satiated thrips (58.7%) or those starved for 1 h (62.7%; P<0.05). In the wind tunnel experiments, the percentage of thrips that flew or landed on the sticky trap increased between satiated thrips (7.3% to fly, 3.3% on trap) and those starved for 4 h (81.2% to fly, 29% on trap) and decreased between thrips starved for 48 h (74.5% to fly, 23% on trap) and 72 h (56.5% to fly, 15.5% on trap; P<0.05). Of the thrips that flew, fewer younger thrips (38.8%) than older thrips (70.4%) landed on a sticky trap containing a yellow visual cue (P<0.05), although a similar percentage of thrips flew regardless of age or type of cue present in the wind tunnel (average 44%; P>0.05).

  10. The Role of Visual Cues in Microgravity Spatial Orientation

    NASA Technical Reports Server (NTRS)

    Oman, Charles M.; Howard, Ian P.; Smith, Theodore; Beall, Andrew C.; Natapoff, Alan; Zacher, James E.; Jenkin, Heather L.

    2003-01-01

    In weightlessness, astronauts must rely on vision to remain spatially oriented. Although gravitational down cues are missing, most astronauts maintain a subjective vertical, a subjective sense of which way is up. This is evidenced by anecdotal reports of crewmembers feeling upside down (inversion illusions) or feeling that a floor has become a ceiling and vice versa (visual reorientation illusions). Instability in the subjective vertical direction can trigger disorientation and space motion sickness. On Neurolab, a virtual environment display system was used to conduct five interrelated experiments, which quantified: (a) how the direction of each person's subjective vertical depends on the orientation of the surrounding visual environment, (b) whether rolling the virtual visual environment produces stronger illusions of circular self-motion (circular vection) and more visual reorientation illusions than on Earth, (c) whether a virtual scene moving past the subject produces a stronger linear self-motion illusion (linear vection), and (d) whether deliberate manipulation of the subjective vertical changes a crewmember's interpretation of shading or the ability to recognize objects. None of the crew's subjective vertical indications became more independent of environmental cues in weightlessness. Three who were either strongly dependent on or independent of stationary visual cues in preflight tests remained so inflight. One other became more visually dependent inflight, but recovered postflight. Susceptibility to illusions of circular self-motion increased in flight. The time to the onset of linear self-motion illusions decreased and the illusion magnitude significantly increased for most subjects while free floating in weightlessness. These decreased toward one-G levels when the subject 'stood up' in weightlessness by wearing constant-force springs. For several subjects, changing the relative direction of the subjective vertical in weightlessness, either by body rotation or by simply cognitively initiating a visual reorientation, altered the illusion of convexity produced when viewing a flat, shaded disc. It changed at least one person's ability to recognize previously presented two-dimensional shapes. Overall, the results show that most astronauts become more dependent on dynamic visual motion cues and some become responsive to stationary orientation cues. The direction of the subjective vertical is labile in the absence of gravity. This can interfere with the ability to properly interpret shading, or to recognize complex objects in different orientations.

  11. Whether or not to eat: A controlled laboratory study of discriminative cueing effects on food intake in humans.

    PubMed

    Ridley-Siegert, Thomas L; Crombag, Hans S; Yeomans, Martin R

    2015-12-01

    There is a wealth of data showing a large impact of food cues on human ingestion, yet most studies use pictures of food where the precise nature of the associations between the cue and food is unclear. To test whether novel cues associated with the opportunity to win access to food images could also affect ingestion, 63 participants played a game in which novel visual cues signalled whether responding on a keyboard would win (a picture of) chocolate, crisps, or nothing. Thirty minutes later, participants were given an ad libitum snack-intake test during which the chocolate-paired cue, the crisp-paired cue, the non-winning cue, and no cue were presented as labels on the food containers. The presence of these cues significantly altered overall intake of the snack foods; participants presented with food labelled with the cue that had been associated with winning chocolate ate significantly more than participants who had been given the same products labelled with the cue associated with winning nothing, and in the presence of the cue signalling the absence of food reward, participants tended to eat less than in all other conditions. Surprisingly, cue-dependent changes in food consumption were unaffected by participants' level of contingency awareness. These results suggest that visual cues that have been pre-associated with winning, but not consuming, a liked food reward modify food intake, consistent with current ideas that the abundance of food-associated cues may be one factor underlying the 'obesogenic environment'.

  12. Driver hand activity analysis in naturalistic driving studies: challenges, algorithms, and experimental studies

    NASA Astrophysics Data System (ADS)

    Ohn-Bar, Eshed; Martin, Sujitha; Trivedi, Mohan Manubhai

    2013-10-01

    We focus on vision-based hand activity analysis in the vehicular domain. The study is motivated by the overarching goal of understanding driver behavior, in particular as it relates to attentiveness and risk. First, the unique advantages and challenges for a nonintrusive, vision-based solution are reviewed. Next, two approaches for hand activity analysis, one relying on static (appearance only) cues and another on dynamic (motion) cues, are compared. The motion-cue-based hand detection uses temporally accumulated edges in order to maintain the most reliable and relevant motion information. The accumulated image is fitted with ellipses in order to produce the location of the hands. The method is used to identify three hand activity classes: (1) two hands on the wheel, (2) hand on the instrument panel, (3) hand on the gear shift. The static-cue-based method extracts features in each frame in order to learn a hand presence model for each of the three regions. A second-stage classifier (linear support vector machine) produces the final activity classification. Experimental evaluation with different users and environmental variations under real-world driving shows the promise of applying the proposed systems for both postanalysis of captured driving data as well as for real-time driver assistance.
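
    A rough sketch of the motion-cue pipeline as described, assuming OpenCV: accumulate edge maps over a sliding window of frames, keep pixels with repeated edge activity, and fit ellipses to the largest blobs as candidate hand locations. The window length and thresholds are illustrative guesses, not the paper's values:

        import cv2
        import numpy as np

        def detect_hands(frames, n_accum=15):
            """Candidate hand locations from temporally accumulated edges."""
            acc = np.zeros(frames[0].shape[:2], dtype=np.float32)
            for frame in frames[-n_accum:]:
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                acc += cv2.Canny(gray, 50, 150).astype(np.float32) / 255.0
            mask = (acc >= 3).astype(np.uint8) * 255  # repeated edge activity only
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            biggest = sorted(contours, key=cv2.contourArea, reverse=True)[:2]
            # cv2.fitEllipse needs at least 5 contour points:
            return [cv2.fitEllipse(c) for c in biggest if len(c) >= 5]

    The returned (center, axes, angle) ellipses could then be assigned to regions such as wheel, instrument panel, or gear shift to yield the three activity classes.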

  13. Rethinking human visual attention: spatial cueing effects and optimality of decisions by honeybees, monkeys and humans.

    PubMed

    Eckstein, Miguel P; Mack, Stephen C; Liston, Dorion B; Bogush, Lisa; Menzel, Randolf; Krauzlis, Richard J

    2013-06-07

    Visual attention is commonly studied by using visuo-spatial cues indicating probable locations of a target and assessing the effect of the validity of the cue on perceptual performance and its neural correlates. Here, we adapt a cueing task to measure spatial cueing effects on the decisions of honeybees and compare their behavior to that of humans and monkeys in a similarly structured two-alternative forced-choice perceptual task. Unlike the typical cueing paradigm in which the stimulus strength remains unchanged within a block of trials, for the monkey and human studies we randomized the contrast of the signal to simulate more real-world conditions in which the organism is uncertain about the strength of the signal. A Bayesian ideal observer that weights sensory evidence from cued and uncued locations based on the cue validity to maximize overall performance is used as a benchmark of comparison against the three animals and other suboptimal models: probability matching, ignore the cue, always follow the cue, and an additive bias/single decision threshold model. We find that the cueing effect is pervasive across all three species but is smaller in size than that shown by the Bayesian ideal observer. Humans show a larger cueing effect than monkeys, and bees show the smallest effect. The cueing effect and overall performance of the honeybees allow rejection of the models in which the bees ignore the cue, follow the cue while disregarding the stimuli to be discriminated, or adopt a probability-matching strategy. Stimulus strength uncertainty also reduces the theoretically predicted variation in cueing effect with stimulus strength of an optimal Bayesian observer and diminishes the size of the cueing effect when stimulus strength is low. A more biologically plausible model that includes an additive bias to the sensory response from the cued location, although not mathematically equivalent to the optimal observer for the case of stimulus-strength uncertainty, can approximate the benefits of the more computationally complex optimal Bayesian model. We discuss the implications of our findings for the field's common conceptualization of covert visual attention in the cueing task and what aspects, if any, might be unique to humans.
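
    The Bayesian ideal observer used as the benchmark can be simulated in a few lines: on each trial the target appears at the cued location with probability equal to the cue validity, and the observer adds the log prior odds implied by that validity to the sensory log-likelihood ratio. Parameter values below are illustrative:

        import numpy as np

        rng = np.random.default_rng(1)
        VALIDITY, SIGNAL, N = 0.8, 1.0, 100_000

        target_at_cued = rng.random(N) < VALIDITY
        # Gaussian responses at the cued and uncued locations (noise sd = 1):
        x_cued = rng.normal(SIGNAL * target_at_cued, 1.0)
        x_uncued = rng.normal(SIGNAL * ~target_at_cued, 1.0)

        # log posterior odds = log likelihood ratio + log prior odds
        llr = SIGNAL * (x_cued - x_uncued)  # equal-variance Gaussian case
        log_prior_odds = np.log(VALIDITY / (1 - VALIDITY))
        choose_cued = llr + log_prior_odds > 0

        correct = choose_cued == target_at_cued
        print("valid-trial accuracy:  ", correct[target_at_cued].mean())
        print("invalid-trial accuracy:", correct[~target_at_cued].mean())

    The gap between valid- and invalid-trial accuracy is the (optimal) cueing effect against which the species' behavior is compared.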

  14. An Investigation of Visual, Aural, Motion and Control Movement Cues.

    ERIC Educational Resources Information Center

    Matheny, W. G.; And Others

    A study was conducted to determine the ways in which multi-sensory cues can be simulated and effectively used in the training of pilots. Two analytical bases, one called the stimulus environment approach and the other an information array approach, are developed along with a cue taxonomy. Cues are postulated on the basis of information gained from…

  15. A novel experimental method for measuring vergence and accommodation responses to the main near visual cues in typical and atypical groups.

    PubMed

    Horwood, Anna M; Riddell, Patricia M

    2009-01-01

    Binocular disparity, blur, and proximal cues drive convergence and accommodation. Disparity is considered to be the main vergence cue and blur the main accommodation cue. We have developed a remote haploscopic photorefractor to measure simultaneous vergence and accommodation objectively in a wide range of participants of all ages while fixating targets at between 0.3 and 2 m. By separating the three main near cues, we can explore their relative weighting in three-, two-, one-, and zero-cue conditions. Disparity can be manipulated by remote occlusion; blur cues manipulated by using either a Gabor patch or a detailed picture target; looming cues by either scaling or not scaling target size with distance. In normal orthophoric, emmetropic, symptom-free, naive visually mature participants, disparity was by far the most significant cue to both vergence and accommodation. Accommodation responses dropped dramatically if disparity was not available. Blur only had a clinically significant effect when disparity was absent. Proximity had very little effect. There was considerable interparticipant variation. We predict that relative weighting of near cue use is likely to vary between clinical groups and present some individual cases as examples. We are using this naturalistic tool to research strabismus, vergence and accommodation development, and emmetropization.

  16. A Novel Experimental Method for Measuring Vergence and Accommodation Responses to the Main Near Visual Cues in Typical and Atypical Groups

    PubMed Central

    Horwood, Anna M; Riddell, Patricia M

    2015-01-01

    Binocular disparity, blur and proximal cues drive convergence and accommodation. Disparity is considered to be the main vergence cue and blur the main accommodation cue. We have developed a remote haploscopic photorefractor to measure simultaneous vergence and accommodation objectively in a wide range of participants of all ages while fixating targets at between 0.3m and 2m. By separating the three main near cues we can explore their relative weighting in three, two, one and zero cue conditions. Disparity can be manipulated by remote occlusion; blur cues manipulated by using either a Gabor patch or a detailed picture target; looming cues by either scaling or not scaling target size with distance. In normal orthophoric, emmetropic, symptom-free, naive visually mature participants, disparity was by far the most significant cue to both vergence and accommodation. Accommodation responses dropped dramatically if disparity was not available. Blur only had a clinically significant effect when disparity was absent. Proximity had very little effect. There was considerable inter-participant variation. We predict that relative weighting of near cue use is likely to vary between clinical groups and present some individual cases as examples. We are using this naturalistic tool to research strabismus, vergence and accommodation development and emmetropisation. PMID:19301186

  17. Research on integration of visual and motion cues for flight simulation and ride quality investigation

    NASA Technical Reports Server (NTRS)

    Young, L. R.; Oman, C. M.; Curry, R. E.

    1977-01-01

    Vestibular perception and the integration of several sensory inputs in simulation were studied. The relationship between tilt sensations induced by moving visual fields and those produced by actual body tilt is discussed. Linearvection studies were included, and the application of the vestibular model to the perception of orientation based on motion cues is presented. Other areas of examination include visual cues in the approach to landing, and a comparison of linear and nonlinear washout filters using a model of the human vestibular system is given.

  18. The role of visual and mechanosensory cues in structuring forward flight in Drosophila melanogaster.

    PubMed

    Budick, Seth A; Reiser, Michael B; Dickinson, Michael H

    2007-12-01

    It has long been known that many flying insects use visual cues to orient with respect to the wind and to control their groundspeed in the face of varying wind conditions. Much less explored has been the role of mechanosensory cues in orienting insects relative to the ambient air. Here we show that Drosophila melanogaster, magnetically tethered so as to be able to rotate about their yaw axis, are able to detect and orient into a wind, as would be experienced during forward flight. Further, this behavior is velocity dependent and is likely subserved, at least in part, by Johnston's organs, the chordotonal organs in the antennae that are also involved in near-field sound detection. These wind-mediated responses may help to explain how flies are able to fly forward despite visual responses that might otherwise inhibit this behavior. Expanding visual stimuli, such as are encountered during forward flight, are the most potent aversive visual cues known for D. melanogaster flying in a tethered paradigm. Accordingly, tethered flies strongly orient towards a focus of contraction, a problematic situation for any animal attempting to fly forward. We show in this study that wind stimuli, transduced via mechanosensory means, can compensate for the aversion to visual expansion and thus may help to explain how these animals are indeed able to maintain forward flight.

  19. Reward processing in the value-driven attention network: reward signals tracking cue identity and location.

    PubMed

    Anderson, Brian A

    2017-03-01

    Through associative reward learning, arbitrary cues acquire the ability to automatically capture visual attention. Previous studies have examined the neural correlates of value-driven attentional orienting, revealing elevated activity within a network of brain regions encompassing the visual corticostriatal loop [caudate tail, lateral occipital complex (LOC) and early visual cortex] and intraparietal sulcus (IPS). Such attentional priority signals raise a broader question concerning how visual signals are combined with reward signals during learning to create a representation that is sensitive to the confluence of the two. This study examines reward signals during the cued reward training phase commonly used to generate value-driven attentional biases. High, compared with low, reward feedback preferentially activated the value-driven attention network, in addition to regions typically implicated in reward processing. Further examination of these reward signals within the visual system revealed information about the identity of the preceding cue in the caudate tail and LOC, and information about the location of the preceding cue in IPS, while early visual cortex represented both location and identity. The results reveal teaching signals within the value-driven attention network during associative reward learning, and further suggest functional specialization within different regions of this network during the acquisition of an integrated representation of stimulus value.

  20. Lack of visual field asymmetries for spatial cueing in reading parafoveal Chinese characters.

    PubMed

    Luo, Chunming; Dell'Acqua, Roberto; Proctor, Robert W; Li, Xingshan

    2015-12-01

    In two experiments, we investigated whether visual field (VF) asymmetries of spatial cueing are involved in reading parafoveal Chinese characters. These characters are different from linearly arranged alphabetic words in that they are logograms that are confined to a constant, square-shaped area and are composed of only a few radicals. We observed a cueing effect, but it did not vary with the VF in which the Chinese character was presented, regardless of whether the cue validity (the ratio of validly to invalidly cued targets) was 1:1 or 7:3. These results suggest that VF asymmetries of spatial cueing do not affect the reading of parafoveal Chinese characters, contrary to the reading of alphabetic words. The mechanisms of spatial attention in reading parafoveal English-like words and Chinese characters are discussed.
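
    For readers unfamiliar with the paradigm, the cueing effect such records report is the mean reaction-time cost of invalid relative to valid cues, and cue validity (1:1 or 7:3 here) is simply a property of the trial schedule. A small sketch with invented reaction times:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical reaction times (ms) from one observer
rt_valid = np.array([412.0, 398.0, 405.0, 420.0, 391.0])    # cue at target location
rt_invalid = np.array([455.0, 462.0, 440.0, 471.0, 458.0])  # cue at opposite location

# the cueing (validity) effect: RT cost of invalid relative to valid cues
print(rt_invalid.mean() - rt_valid.mean(), "ms")

# cue validity is a design parameter: with 7:3 validity, 70% of trials are valid
is_valid = rng.random(100) < 0.7  # schedule for 100 trials
```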

  1. Response-specifying cue for action interferes with perception of feature-sharing stimuli.

    PubMed

    Nishimura, Akio; Yokosawa, Kazuhiko

    2010-06-01

    Perceiving a visual stimulus is more difficult when a to-be-executed action is compatible with that stimulus, which is known as blindness to response-compatible stimuli. The present study explored how the factors constituting the action event (i.e., response-specifying cue, response intention, and response feature) affect the occurrence of this blindness effect. The response-specifying cue varied along the horizontal and vertical dimensions, while the response buttons were arranged diagonally. Participants responded based on one dimension randomly determined in a trial-by-trial manner. The response intention varied along a single dimension, whereas the response location and the response-specifying cue varied within both vertical and horizontal dimensions simultaneously. Moreover, the compatibility between the visual stimulus and the response location and the compatibility between that stimulus and the response-specifying cue were separately determined. The blindness effect emerged exclusively based on the feature correspondence between the response-specifying cue of the action task and the visual target of the perceptual task. The size of this stimulus-stimulus (S-S) blindness effect did not differ significantly across conditions, showing no effect of response intention and response location. This finding emphasizes the effect of stimulus factors, rather than response factors, of the action event as a source of the blindness to response-compatible stimuli.

  2. Matching cue size and task properties in exogenous attention.

    PubMed

    Burnett, Katherine E; d'Avossa, Giovanni; Sapir, Ayelet

    2013-01-01

    Exogenous attention is an involuntary, reflexive orienting response that results in enhanced processing at the attended location. The standard view is that this enhancement generalizes across visual properties of a stimulus. We test whether the size of an exogenous cue sets the attentional field and whether this leads to different effects on stimuli with different visual properties. In a dual task with a random-dot kinematogram (RDK) in each quadrant of the screen, participants discriminated the direction of moving dots in one RDK and localized one red dot. Precues were uninformative and consisted of either a large or a small luminance-change frame. The motion discrimination task showed attentional effects following both large and small exogenous cues. The red dot probe localization task showed attentional effects following a small cue, but not a large cue. Two additional experiments showed that the different effects on localization were not due to reduced spatial uncertainty or suppression of RDK dots in the surround. These results indicate that the effects of exogenous attention depend on the size of the cue and the properties of the task, suggesting the involvement of receptive fields with different sizes in different tasks. These attentional effects are likely to be driven by bottom-up mechanisms in early visual areas.

  3. Biasing spatial attention with semantic information: an event coding approach.

    PubMed

    Amer, Tarek; Gozli, Davood G; Pratt, Jay

    2017-04-21

    We investigated the influence of conceptual processing on visual attention from the standpoint of Theory of Event Coding (TEC). The theory makes two predictions: first, an important factor in determining the influence of event 1 on processing event 2 is whether features of event 1 are bound into a unified representation (i.e., selection or retrieval of event 1). Second, whether processing the two events facilitates or interferes with each other should depend on the extent to which their constituent features overlap. In two experiments, participants performed a visual-attention cueing task, in which the visual target (event 2) was preceded by a relevant or irrelevant explicit (e.g., "UP") or implicit (e.g., "HAPPY") spatial-conceptual cue (event 1). Consistent with TEC, we found that relevant explicit cues (which overlap featurally with the target to a greater extent) facilitated, whereas relevant implicit cues (which overlap to a lesser extent) interfered with, target processing at compatible locations. Irrelevant explicit and implicit cues, on the other hand, both facilitated target processing, presumably because they were less likely to be selected or retrieved as an integrated and unified event file. We argue that such effects, often described as "attentional cueing", are better accounted for within the event coding framework.

  4. Contextual cueing in 3D visual search depends on representations in planar-, not depth-defined space.

    PubMed

    Zang, Xuelian; Shi, Zhuanghua; Müller, Hermann J; Conci, Markus

    2017-05-01

    Learning of spatial inter-item associations can speed up visual search in everyday life, an effect referred to as contextual cueing (Chun & Jiang, 1998). Whereas previous studies investigated contextual cueing primarily using 2D layouts, the current study examined how 3D depth influences contextual learning in visual search. In two experiments, the search items were presented evenly distributed across front and back planes in an initial training session. In the subsequent test session, the search items were either swapped between the front and back planes (Experiment 1) or between the left and right halves (Experiment 2) of the displays. The results showed that repeated spatial contexts were learned efficiently under 3D viewing conditions, facilitating search in the training sessions, in both experiments. Importantly, contextual cueing remained robust and virtually unaffected following the swap of depth planes in Experiment 1, but it was substantially reduced (to nonsignificant levels) following the left-right side swap in Experiment 2. This result pattern indicates that spatial, but not depth, inter-item variations limit effective contextual guidance. Restated, contextual cueing (even under 3D viewing conditions) is primarily based on 2D inter-item associations, while depth-defined spatial regularities are probably not encoded during contextual learning. Hence, changing the depth relations does not impact the cueing effect.
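
    The two test-session manipulations are easy to make concrete. A small sketch with hypothetical item coordinates; mirroring the x-axis is one way (an assumption, not necessarily the authors' exact procedure) to swap items between the left and right display halves:

```python
import numpy as np

rng = np.random.default_rng(1)

# a learned search display: planar positions plus a front/back plane label
xy = rng.uniform(-10.0, 10.0, size=(12, 2))  # (x, y) positions of 12 items
plane = np.array([0, 1] * 6)                 # 0 = front plane, 1 = back plane

# Experiment 1 test displays: swap the depth planes, keep (x, y) intact
plane_swapped = 1 - plane

# Experiment 2 test displays: swap left and right halves (here via x mirroring),
# keeping each item's depth plane unchanged
xy_swapped = xy * np.array([-1.0, 1.0])
```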

  5. Spectrotemporal weighting of binaural cues: Effects of a diotic interferer on discrimination of dynamic interaural differences

    PubMed Central

    Bibee, Jacqueline M.; Stecker, G. Christopher

    2016-01-01

    Spatial judgments are often dominated by low-frequency binaural cues and onset cues when binaural cues vary across the spectrum and duration, respectively, of a brief sound. This study combined these dimensions to assess the spectrotemporal weighting of binaural information. Listeners discriminated target interaural time difference (ITD) and interaural level difference (ILD) carried by the onset, offset, or full duration of a 4-kHz Gabor click train with a 2-ms period in the presence or absence of a diotic 500-Hz interferer tone. ITD and ILD thresholds were significantly elevated by the interferer in all conditions and by a similar amount to previous reports for static cues. Binaural interference was dramatically greater for ITD targets lacking onset cues compared to onset and full-duration conditions. Binaural interference for ILD targets was similar across dynamic-cue conditions. These effects mirror the baseline discriminability of dynamic ITD and ILD cues [Stecker and Brown. (2010). J. Acoust. Soc. Am. 127, 3092–3103], consistent with stronger interference for less-robust/higher-variance cues. The results support the view that binaural cue integration occurs simultaneously across multiple variance-weighted dimensions, including time and frequency. PMID:27794286

  6. Spectrotemporal weighting of binaural cues: Effects of a diotic interferer on discrimination of dynamic interaural differences.

    PubMed

    Bibee, Jacqueline M; Stecker, G Christopher

    2016-10-01

    Spatial judgments are often dominated by low-frequency binaural cues and onset cues when binaural cues vary across the spectrum and duration, respectively, of a brief sound. This study combined these dimensions to assess the spectrotemporal weighting of binaural information. Listeners discriminated target interaural time difference (ITD) and interaural level difference (ILD) carried by the onset, offset, or full duration of a 4-kHz Gabor click train with a 2-ms period in the presence or absence of a diotic 500-Hz interferer tone. ITD and ILD thresholds were significantly elevated by the interferer in all conditions and by a similar amount to previous reports for static cues. Binaural interference was dramatically greater for ITD targets lacking onset cues compared to onset and full-duration conditions. Binaural interference for ILD targets was similar across dynamic-cue conditions. These effects mirror the baseline discriminability of dynamic ITD and ILD cues [Stecker and Brown. (2010). J. Acoust. Soc. Am. 127, 3092-3103], consistent with stronger interference for less-robust/higher-variance cues. The results support the view that binaural cue integration occurs simultaneously across multiple variance-weighted dimensions, including time and frequency.
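
    As a rough illustration of the stimulus described in the two records above, the sketch below synthesizes a 4-kHz Gabor click train with a 2-ms period and imposes interaural time and level differences. The envelope width, ITD, and ILD values are assumed for the example and are not taken from the study:

```python
import numpy as np

fs = 48000       # sample rate (Hz)
f0 = 4000.0      # carrier frequency (Hz)
period = 0.002   # 2-ms inter-click interval
n_clicks = 16
sigma = 0.0002   # Gaussian envelope width in seconds (assumed value)

# one Gabor click: a Gaussian-windowed 4-kHz tone burst
t = np.arange(-0.001, 0.001, 1.0 / fs)
click = np.cos(2 * np.pi * f0 * t) * np.exp(-t**2 / (2 * sigma**2))

# assemble the click train
train = np.zeros(int(fs * period * n_clicks) + click.size)
for k in range(n_clicks):
    start = int(round(k * period * fs))
    train[start:start + click.size] += click

# impose interaural differences: delay the right ear (ITD) and attenuate it (ILD)
itd = 200e-6    # assumed 200-microsecond ITD
ild_db = 2.0    # assumed 2-dB ILD
d = int(round(itd * fs))                  # ITD in samples (about 10 at 48 kHz)
left = np.pad(train, (0, d))
right = np.pad(train, (d, 0)) * 10 ** (-ild_db / 20)
stereo = np.stack([left, right], axis=1)  # two-channel signal
```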

  7. The Effect of Retrieval Cues on Visual Preferences and Memory in Infancy: Evidence for a Four-Phase Attention Function.

    ERIC Educational Resources Information Center

    Bahrick, Lorraine E.; Hernandez-Reif, Maria; Pickens, Jeffrey N.

    1997-01-01

    Tested hypothesis from Bahrick and Pickens' infant attention model that retrieval cues increase memory accessibility and shift visual preferences toward greater novelty to resemble recent memories. Found that after retention intervals associated with remote or intermediate memory, previous familiarity preferences shifted to null or novelty…

  8. Analysis of Parallel and Transverse Visual Cues on the Gait of Individuals with Idiopathic Parkinson's Disease

    ERIC Educational Resources Information Center

    de Melo Roiz, Roberta; Azevedo Cacho, Enio Walker; Cliquet, Alberto, Jr.; Barasnevicius Quagliato, Elizabeth Maria Aparecida

    2011-01-01

    Idiopathic Parkinson's disease (IPD) has been defined as a chronic progressive neurological disorder with characteristics that generate changes in gait pattern. Several studies have reported that appropriate external influences, such as visual or auditory cues may improve the gait pattern of patients with IPD. Therefore, the objective of this…

  9. Specific and Nonspecific Neural Activity during Selective Processing of Visual Representations in Working Memory

    ERIC Educational Resources Information Center

    Oh, Hwamee; Leung, Hoi-Chung

    2010-01-01

    In this fMRI study, we investigated prefrontal cortex (PFC) and visual association regions during selective information processing. We recorded behavioral responses and neural activity during a delayed recognition task with a cue presented during the delay period. A specific cue ("Face" or "Scene") was used to indicate which one of the two…

  10. The Effects of Visual Cues and Learners' Field Dependence in Multiple External Representations Environment for Novice Program Comprehension

    ERIC Educational Resources Information Center

    Wei, Liew Tze; Sazilah, Salam

    2012-01-01

    This study investigated the effects of visual cues in multiple external representations (MER) environment on the learning performance of novices' program comprehension. Program codes and flowchart diagrams were used as dual representations in multimedia environment to deliver lessons on C-Programming. 17 field independent participants and 16 field…

  11. Sound Affects the Speed of Visual Processing

    ERIC Educational Resources Information Center

    Keetels, Mirjam; Vroomen, Jean

    2011-01-01

    The authors examined the effects of a task-irrelevant sound on visual processing. Participants were presented with revolving clocks at or around central fixation and reported the hand position of a target clock at the time an exogenous cue (1 clock turning red) or an endogenous cue (a line pointing toward 1 of the clocks) was presented. A…

  12. Effects of Visual Cues and Self-Explanation Prompts: Empirical Evidence in a Multimedia Environment

    ERIC Educational Resources Information Center

    Lin, Lijia; Atkinson, Robert K.; Savenye, Wilhelmina C.; Nelson, Brian C.

    2016-01-01

    The purpose of this study was to investigate the impacts of visual cues and different types of self-explanation prompts on learning, cognitive load, and intrinsic motivation in an interactive multimedia environment that was designed to deliver a computer-based lesson about the human cardiovascular system. A total of 126 college students were…

  13. Young Children's Visual Attention to Environmental Print as Measured by Eye Tracker Analysis

    ERIC Educational Resources Information Center

    Neumann, Michelle M.; Acosta, Camillia; Neumann, David L.

    2014-01-01

    Environmental print, such as signs and product labels, consist of both print and contextual cues designed to attract the visual attention of the reader. However, contextual cues may draw young children's attention away from the print, thus questioning the value of environmental print in early reading development. Eye tracker technology was used to…

  14. Orienting of Visual Attention among Persons with Autism Spectrum Disorders: Reading versus Responding to Symbolic Cues

    ERIC Educational Resources Information Center

    Landry, Oriane; Mitchell, Peter L.; Burack, Jacob A.

    2009-01-01

    Background: Are persons with autism spectrum disorders (ASD) slower than typically developing individuals to read the meaning of a symbolic cue in a visual orienting paradigm? Methods: Participants with ASD (n = 18) and performance mental age (PMA) matched typically developing children (n = 16) completed two endogenous orienting conditions in…

  15. Effects and Interactions of Auditory and Visual Cues in Oral Communication.

    ERIC Educational Resources Information Center

    Keys, John W.; And Others

    Visual and auditory cues were tested, separately and jointly, to determine the degree of their contribution to improving overall speech skills of the aurally handicapped. Eight sound intensity levels (from 6 to 15 decibels) were used in presenting phonetically balanced word lists and multiple-choice intelligibility lists to a sample of 24…

  16. Tachistoscopic exposure and masking of real three-dimensional scenes

    PubMed Central

    Pothier, Stephen; Philbeck, John; Chichka, David; Gajewski, Daniel A.

    2010-01-01

    Although there are many well-known forms of visual cues specifying absolute and relative distance, little is known about how visual space perception develops at small temporal scales. How much time does the visual system require to extract the information in the various absolute and relative distance cues? In this article, we describe a system that may be used to address this issue by presenting brief exposures of real, three-dimensional scenes, followed by a masking stimulus. The system is composed of an electronic shutter (a liquid crystal smart window) for exposing the stimulus scene, and a liquid crystal projector coupled with an electromechanical shutter for presenting the masking stimulus. This system can be used in both full- and reduced-cue viewing conditions, under monocular and binocular viewing, and at distances limited only by the testing space. We describe a configuration that may be used for studying the microgenesis of visual space perception in the context of visually directed walking. PMID:19182129

  17. Investigation of outside visual cues required for low speed and hover

    NASA Technical Reports Server (NTRS)

    Hoh, R. H.

    1985-01-01

    Knowledge of the visual cues required in the performance of stabilized hover in VTOL aircraft is a prerequisite for the development of both cockpit displays and ground-based simulation systems. Attention is presently given to the viability of experimental test flight techniques as the bases for the identification of essential external cues in aggressive and precise low speed and hovering tasks. The analysis and flight test program conducted employed a helicopter and a pilot wearing lenses that could be electronically fogged, where the primary variables were field-of-view, large object 'macrotexture', and fine detail 'microtexture', in six different fields-of-view. Fundamental metrics are proposed for the quantification of the visual field, to allow comparisons between tests, simulations, and aircraft displays.

  18. Increasing Hand Washing Compliance With a Simple Visual Cue

    PubMed Central

    Boyer, Brian T.; Menachemi, Nir; Huerta, Timothy R.

    2014-01-01

    We tested the efficacy of a simple, visual cue to increase hand washing with soap and water. Automated towel dispensers in 8 public bathrooms were set to present a towel either with or without activation by users. We set the 2 modes to operate alternately for 10 weeks. Wireless sensors were used to record entry into bathrooms. Towel and soap consumption rates were checked weekly. There were 97 351 hand-washing opportunities across all restrooms. Towel use was 22.6% higher (P = .05) and soap use was 13.3% higher (P = .003) when the dispenser presented the towel without user activation than when activation was required. Results showed that a visual cue can increase hand-washing compliance in public facilities. PMID:24228670

  19. Feasibility and Preliminary Efficacy of Visual Cue Training to Improve Adaptability of Walking after Stroke: Multi-Centre, Single-Blind Randomised Control Pilot Trial

    PubMed Central

    Hollands, Kristen L.; Pelton, Trudy A.; Wimperis, Andrew; Whitham, Diane; Tan, Wei; Jowett, Sue; Sackley, Catherine M.; Wing, Alan M.; Tyson, Sarah F.; Mathias, Jonathan; Hensman, Marianne; van Vliet, Paulette M.

    2015-01-01

    Objectives: Given the importance of vision in the control of walking and evidence indicating that varied practice of walking improves mobility outcomes, this study sought to examine the feasibility and preliminary efficacy of varied walking practice in response to visual cues for the rehabilitation of walking following stroke. Design: This 3-arm parallel, multi-centre, assessor-blind, randomised control trial was conducted within outpatient neurorehabilitation services. Participants: Community-dwelling stroke survivors with walking speed <0.8 m/s, lower-limb paresis, and no severe visual impairments. Intervention: Over-ground visual cue training (O-VCT), treadmill-based visual cue training (T-VCT), and usual care (UC), delivered by physiotherapists twice weekly for 8 weeks. Main outcome measures: Participants were randomised using computer-generated random permuted balanced blocks of randomly varying size. Recruitment, retention, adherence, adverse events, and mobility and balance were measured before randomisation, post-intervention, and at four weeks' follow-up. Results: Fifty-six participants took part (18 T-VCT, 19 O-VCT, 19 UC). Thirty-four completed treatment and follow-up assessments. Among those who completed, adherence was good, with 16 treatments provided over a median of 8.4, 7.5, and 9 weeks for T-VCT, O-VCT, and UC, respectively. No adverse events were reported. Post-treatment improvements in walking speed, symmetry, balance, and functional mobility were seen in all treatment arms. Conclusions: Outpatient-based treadmill and over-ground walking adaptability practice using visual cues is feasible and may improve mobility and balance. Future studies should continue a carefully phased approach using identified methods to improve retention. Trial registration: ClinicalTrials.gov NCT01600391. PMID:26445137

  20. Children Use Nonverbal Cues to Make Inferences About Social Power

    PubMed Central

    Brey, Elizabeth; Shutts, Kristin

    2016-01-01

    Four studies (N=192) tested whether young children use nonverbal information to make inferences about differences in social power. Five- and 6-year-old children were able to determine which of two adults was “in charge” in dynamic videotaped conversations (Study 1) and in static photographs (Study 4) using only nonverbal cues. Younger children (3–4 years) were not successful in Study 1 or Study 4. Removing irrelevant linguistic information from conversations did not improve the performance of 3–4-year-old children (Study 3), but including relevant linguistic cues did (Study 2). Thus, at least by 5 years of age, children show sensitivity to some of the same nonverbal cues adults use to determine other people’s social roles. PMID:25521913

  1. Spatiotemporal gait changes with use of an arm swing cueing device in people with Parkinson's disease.

    PubMed

    Thompson, Elizabeth; Agada, Peter; Wright, W Geoffrey; Reimann, Hendrik; Jeka, John

    2017-10-01

    Impaired arm swing is a common motor symptom of Parkinson's disease (PD), and correlates with other gait impairments and increased risk of falls. Studies suggest that arm swing is not merely a passive consequence of trunk rotation during walking, but an active component of gait. Thus, techniques to enhance arm swing may improve gait characteristics. There is currently no portable device to measure arm swing and deliver immediate cues for larger movement. Here we report pilot testing of such a device, ArmSense (patented), using a crossover repeated-measures design. Twelve people with PD walked in a video-recorded gym space at self-selected comfortable and fast speeds. After baseline, cues were given either visually, using taped targets on the floor to increase step length, or through vibrations at the wrist, using ArmSense to increase arm swing amplitude. Uncued walking then followed, to assess retention. Subjects successfully reached cueing targets on >95% of steps. At a comfortable pace, step length increased during both visual cueing and ArmSense cueing. However, we observed increased medial-lateral trunk sway with visual cueing, possibly suggesting decreased gait stability. In contrast, no statistically significant changes in trunk sway were observed with ArmSense cues compared to baseline walking. At a fast pace, changes in gait parameters were less systematic. Even though ArmSense cues only specified changes in arm swing amplitude, we observed changes in multiple gait parameters, reflecting the active role arm swing plays in gait and suggesting a new therapeutic path to improve mobility in people with PD.

  2. Two cloud-based cues for estimating scene structure and camera calibration.

    PubMed

    Jacobs, Nathan; Abrams, Austin; Pless, Robert

    2013-10-01

    We describe algorithms that use cloud shadows as a form of stochastically structured light to support 3D scene geometry estimation. Taking video captured from a static outdoor camera as input, we use the relationship of the time series of intensity values between pairs of pixels as the primary input to our algorithms. We describe two cues that relate the 3D distance between a pair of points to the pair of intensity time series. The first cue results from the fact that two pixels that are nearby in the world are more likely to be under a cloud at the same time than two distant points. We describe methods for using this cue to estimate focal length and scene structure. The second cue is based on the motion of cloud shadows across the scene; this cue results in a set of linear constraints on scene structure. These constraints have an inherent ambiguity, which we show how to overcome by combining the cloud motion cue with the spatial cue. We evaluate our method on several time lapses of real outdoor scenes.
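
    The paper's first cue lends itself to a compact illustration: if two scene points are close together, their pixels tend to darken under the same cloud shadows at the same times, so the correlation of their intensity time series serves as a proximity measure. A minimal sketch (not the authors' implementation):

```python
import numpy as np

def temporal_affinity(video, p, q):
    """Normalized correlation between the intensity time series of two pixels.

    video: array of shape (T, H, W), grayscale frames from a static camera.
    p, q:  (row, col) pixel coordinates.
    Nearby scene points tend to be shadowed by the same cloud at the same
    times, so their time series correlate more strongly than distant ones.
    """
    a = video[:, p[0], p[1]].astype(float)
    b = video[:, q[0], q[1]].astype(float)
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))

# usage sketch: affinities from one reference pixel to all others give a rough,
# monotonic proxy for 3D proximity, which the paper combines with cloud-motion
# constraints to estimate focal length and scene structure
video = np.random.default_rng(2).random((500, 4, 4))  # stand-in time lapse
print(temporal_affinity(video, (0, 0), (3, 3)))
```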

  3. Auditory and visual capture during focused visual attention.

    PubMed

    Koelewijn, Thomas; Bronkhorst, Adelbert; Theeuwes, Jan

    2009-10-01

    It is well known that auditory and visual onsets presented at a particular location can capture a person's visual attention. However, the question of whether such attentional capture disappears when attention is focused endogenously beforehand has not yet been answered. Moreover, previous studies have not differentiated between capture by onsets presented at a nontarget (invalid) location and possible performance benefits occurring when the target location is (validly) cued. In this study, the authors modulated the degree of attentional focus by presenting endogenous cues with varying reliability and by displaying placeholders indicating the precise areas where the target stimuli could occur. By using not only valid and invalid exogenous cues but also neutral cues that provide temporal but no spatial information, they found performance benefits as well as costs when attention is not strongly focused. The benefits disappear when the attentional focus is increased. These results indicate that there is bottom-up capture of visual attention by irrelevant auditory and visual stimuli that cannot be suppressed by top-down attentional control.

  4. Top-down influences on visual attention during listening are modulated by observer sex.

    PubMed

    Shen, John; Itti, Laurent

    2012-07-15

    In conversation, women have a small advantage in decoding non-verbal communication compared to men. In light of these findings, we sought to determine whether sex differences also existed in visual attention during a related listening task, and if so, if the differences existed among attention to high-level aspects of the scene or to conspicuous visual features. Using eye-tracking and computational techniques, we present direct evidence that men and women orient attention differently during conversational listening. We tracked the eyes of 15 men and 19 women who watched and listened to 84 clips featuring 12 different speakers in various outdoor settings. At the fixation following each saccadic eye movement, we analyzed the type of object that was fixated. Men gazed more often at the mouth and women at the eyes of the speaker. Women more often exhibited "distracted" saccades directed away from the speaker and towards a background scene element. Examining the multi-scale center-surround variation in low-level visual features (static: color, intensity, orientation, and dynamic: motion energy), we found that men consistently selected regions which expressed more variation in dynamic features, which can be attributed to a male preference for motion and a female preference for areas that may contain nonverbal information about the speaker. In sum, significant differences were observed, which we speculate arise from different integration strategies of visual cues in selecting the final target of attention. Our findings have implications for studies of sex in nonverbal communication, as well as for more predictive models of visual attention.
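
    The "multi-scale center-surround variation" in low-level features mentioned here is, in saliency models of this kind, typically computed as a difference of Gaussians between fine and coarse smoothings of a feature map. A schematic single-scale sketch (the multi-scale pyramid and the specific feature extraction are omitted):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(feature_map, sigma_center=2.0, sigma_surround=8.0):
    # difference-of-Gaussians response: large where a feature (intensity,
    # color, orientation energy, motion energy) differs from its surround
    center = gaussian_filter(feature_map, sigma_center)
    surround = gaussian_filter(feature_map, sigma_surround)
    return np.abs(center - surround)

# stand-in "motion energy" frame; real models repeat this across many scales
frame = np.random.default_rng(3).random((240, 320))
conspicuity = center_surround(frame)
```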

  5. Visual and cross-modal cues increase the identification of overlapping visual stimuli in Balint's syndrome.

    PubMed

    D'Imperio, Daniela; Scandola, Michele; Gobbetto, Valeria; Bulgarelli, Cristina; Salgarello, Matteo; Avesani, Renato; Moro, Valentina

    2017-10-01

    Cross-modal interactions improve the processing of external stimuli, particularly when an isolated sensory modality is impaired. When information from different modalities is integrated, object recognition is facilitated probably as a result of bottom-up and top-down processes. The aim of this study was to investigate the potential effects of cross-modal stimulation in a case of simultanagnosia. We report a detailed analysis of clinical symptoms and an 18F-fluorodeoxyglucose (FDG) brain positron emission tomography/computed tomography (PET/CT) study of a patient affected by Balint's syndrome, a rare and invasive visual-spatial disorder following bilateral parieto-occipital lesions. An experiment was conducted to investigate the effects of visual and nonvisual cues on performance in tasks involving the recognition of overlapping pictures. Four modalities of sensory cues were used: visual, tactile, olfactory, and auditory. Data from neuropsychological tests showed the presence of ocular apraxia, optic ataxia, and simultanagnosia. The results of the experiment indicate a positive effect of the cues on the recognition of overlapping pictures, not only in the identification of the congruent valid-cued stimulus (target) but also in the identification of the other, noncued stimuli. All the sensory modalities analyzed (except the auditory stimulus) were efficacious in terms of increasing visual recognition. Cross-modal integration improved the patient's ability to recognize overlapping figures. However, while in the visual unimodal modality both bottom-up (priming, familiarity effect, disengagement of attention) and top-down processes (mental representation and short-term memory, the endogenous orientation of attention) are involved, in the cross-modal integration it is semantic representations that mainly activate visual recognition processes. These results are potentially useful for the design of rehabilitation training for attentional and visual-perceptual deficits.

  6. [Visual cues as a therapeutic tool in Parkinson's disease. A systematic review].

    PubMed

    Muñoz-Hellín, Elena; Cano-de-la-Cuerda, Roberto; Miangolarra-Page, Juan Carlos

    2013-01-01

    Sensory stimuli or sensory cues are being used as a therapeutic tool for improving gait disorders in Parkinson's disease patients, but most studies seem to focus on auditory stimuli. The aim of this study was to conduct a systematic review regarding the use of visual cues for gait disorders, dual tasks during gait, freezing and the incidence of falls in patients with Parkinson's disease, in order to derive therapeutic implications. We conducted a systematic review in the main databases, including the Cochrane Database of Systematic Reviews, TripDataBase, PubMed, Ovid MEDLINE, Ovid EMBASE and the Physiotherapy Evidence Database, from 2005 to 2012, according to the recommendations of the Consolidated Standards of Reporting Trials, evaluating the quality of the included papers with the Downs & Black Quality Index. 21 articles (with a total of 892 participants) were finally included in this systematic review, with variable methodological quality: an average of 17.27 points on the Downs and Black Quality Index (range: 11-21). Visual cues produce improvements in temporal-spatial gait parameters and in turning execution, and reduce the appearance of freezing and falls in Parkinson's disease patients. Visual cues also appear to benefit dual tasks during gait, reducing the interference of the second task. Further studies are needed to determine the preferred type of stimuli for each stage of the disease.

  7. Iconic memory for the gist of natural scenes.

    PubMed

    Clarke, Jason; Mack, Arien

    2014-11-01

    Does iconic memory contain the gist of multiple scenes? Three experiments were conducted. In the first, four scenes from different basic-level categories were briefly presented in one of two conditions: a cue or a no-cue condition. The cue condition was designed to provide an index of the contents of iconic memory of the display. Subjects were more sensitive to scene gist in the cue condition than in the no-cue condition. In the second, the scenes came from the same basic-level category. We found no difference in sensitivity between the two conditions. In the third, six scenes from different basic level categories were presented in the visual periphery. Subjects were more sensitive to scene gist in the cue condition. These results suggest that scene gist is contained in iconic memory even in the visual periphery; however, iconic representations are not sufficiently detailed to distinguish between scenes coming from the same category.

  8. Hunger-Dependent Enhancement of Food Cue Responses in Mouse Postrhinal Cortex and Lateral Amygdala.

    PubMed

    Burgess, Christian R; Ramesh, Rohan N; Sugden, Arthur U; Levandowski, Kirsten M; Minnig, Margaret A; Fenselau, Henning; Lowell, Bradford B; Andermann, Mark L

    2016-09-07

    The needs of the body can direct behavioral and neural processing toward motivationally relevant sensory cues. For example, human imaging studies have consistently found specific cortical areas with biased responses to food-associated visual cues in hungry subjects, but not in sated subjects. To obtain a cellular-level understanding of these hunger-dependent cortical response biases, we performed chronic two-photon calcium imaging in postrhinal association cortex (POR) and primary visual cortex (V1) of behaving mice. As in humans, neurons in mouse POR, but not V1, exhibited biases toward food-associated cues that were abolished by satiety. This emergent bias was mirrored by the innervation pattern of amygdalo-cortical feedback axons. Strikingly, these axons exhibited even stronger food cue biases and sensitivity to hunger state and trial history. These findings highlight a direct pathway by which the lateral amygdala may contribute to state-dependent cortical processing of motivationally relevant sensory cues.

  9. Differential effects of visual-spatial attention on response latency and temporal-order judgment.

    PubMed

    Neumann, O; Esselmann, U; Klotz, W

    1993-01-01

    Theorists from both classical structuralism and modern attention research have claimed that attention to a sensory stimulus enhances processing speed. However, they have used different operations to measure this effect, viz., temporal-order judgment (TOJ) and reaction-time (RT) measurement. We report two experiments that compared the effect of a spatial cue on RT and TOJ. Experiment 1 demonstrated that a nonmasked, peripheral cue (the brief brightening of a box) affected both RT and TOJ. However, the former effect was significantly larger than the latter. A masked cue had a smaller, but reliable, effect on TOJ. In Experiment 2, the effects of a masked cue on RT and TOJ were compared under identical stimulus conditions. While the cue had a strong effect on RT, it left TOJ unaffected. These results suggest that a spatial cue may have dissociable effects on response processes and the processes that lead to a conscious percept. Implications for the concept of direct parameter specification and for theories of visual attention are discussed.

  10. An investigation of motion base cueing and G-seat cueing on pilot performance in a simulator

    NASA Technical Reports Server (NTRS)

    Mckissick, B. T.; Ashworth, B. R.; Parrish, R. V.

    1983-01-01

    The effect of G-seat cueing (GSC) and motion-base cueing (MBC) on performance of a pursuit-tracking task is studied using the visual motion simulator (VMS) at Langley Research Center. The G-seat, the six-degree-of-freedom synergistic platform motion system, the visual display, the cockpit hardware, and the F-16 aircraft mathematical model are characterized. Each of 8 active F-15 pilots performed the 2-min-43-sec task 10 times for each experimental mode: no cue, GSC, MBC, and GSC + MBC; the results were analyzed statistically in terms of the RMS values of vertical and lateral tracking error. It is shown that lateral error is significantly reduced by either GSC or MBC, and that the combination of cues produces a further, significant decrease. Vertical error is significantly decreased by GSC with or without MBC, whereas MBC effects vary for different pilots. The pattern of these findings is roughly duplicated in measurements of stick force applied for roll and pitch correction.
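
    The performance measure here is the root-mean-square of the tracking-error time series, one number per run and condition. A minimal sketch with invented error traces:

```python
import numpy as np

def rms(error):
    # root-mean-square of a tracking-error time series
    error = np.asarray(error, dtype=float)
    return float(np.sqrt(np.mean(error**2)))

# invented per-sample vertical tracking errors (deg) for two cueing modes
err_no_cue = [1.8, -2.1, 2.4, -1.2, 1.6]
err_gseat = [1.1, -1.4, 1.3, -0.9, 1.0]
print(rms(err_no_cue), rms(err_gseat))  # lower RMS = tighter tracking
```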

  11. The contribution of dynamic visual cues to audiovisual speech perception.

    PubMed

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise using unimodal auditory stimuli, and with audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech.

  12. Social learning of predators in the dark: understanding the role of visual, chemical and mechanical information.

    PubMed

    Manassa, R P; McCormick, M I; Chivers, D P; Ferrari, M C O

    2013-08-22

    The ability of prey to observe and learn to recognize potential predators from the behaviour of nearby individuals can dramatically increase survival and, not surprisingly, is widespread across animal taxa. A range of sensory modalities are available for this learning, with visual and chemical cues being well-established modes of transmission in aquatic systems. The use of other sensory cues in mediating social learning in fishes, including mechano-sensory cues, remains unexplored. Here, we examine the role of different sensory cues in social learning of predator recognition, using juvenile damselfish (Amphiprion percula). Specifically, we show that a predator-naive observer can socially learn to recognize a novel predator when paired with a predator-experienced conspecific in total darkness. Furthermore, this study demonstrates that when threatened, individuals release chemical cues (known as disturbance cues) into the water. These cues induce an anti-predator response in nearby individuals; however, they do not facilitate learnt recognition of the predator. As such, another sensory modality, probably mechano-sensory in origin, is responsible for information transfer in the dark. This study highlights the diversity of sensory cues used by coral reef fishes in a social learning context.

  13. Global Repetition Influences Contextual Cueing

    PubMed Central

    Zang, Xuelian; Zinchenko, Artyom; Jia, Lina; Li, Hong

    2018-01-01

    Our visual system has a striking ability to improve visual search based on the learning of repeated ambient regularities, an effect named contextual cueing. Whereas most previous studies investigated the contextual cueing effect with the same number of repeated and non-repeated search displays per block, the current study focused on whether a global repetition frequency, formed by different presentation ratios between the repeated and non-repeated configurations, influences the contextual cueing effect. Specifically, the number of repeated and non-repeated displays presented in each block was manipulated: 12:12, 20:4, 4:20, and 4:4 in Experiments 1–4, respectively. The results revealed a significant contextual cueing effect when the global repetition frequency was high (≥1:1 ratio), in Experiments 1, 2, and 4, given that processing of repeated displays was expedited relative to non-repeated displays. Nevertheless, the contextual cueing effect was reduced to a non-significant level when the repetition frequency dropped to 4:20 in Experiment 3. These results suggest that the presentation frequency of repeated relative to non-repeated displays can influence the strength of contextual cueing. In other words, global repetition statistics could be a crucial factor mediating the contextual cueing effect. PMID:29636716

  14. Action Experience Changes Attention to Kinematic Cues

    PubMed Central

    Filippi, Courtney A.; Woodward, Amanda L.

    2016-01-01

    The current study used remote corneal reflection eye-tracking to examine the relationship between motor experience and action anticipation in 13-month-old infants. To measure online anticipation of actions, infants watched videos where the actor's hand provided kinematic information (in its orientation) about the type of object that the actor was going to reach for. The actor's hand orientation either matched the orientation of a rod (congruent cue) or did not match the orientation of the rod (incongruent cue). To examine relations between motor experience and action anticipation, we used a 2 (reach first vs. observe first) × 2 (congruent kinematic cue vs. incongruent kinematic cue) between-subjects design. We show that 13-month-old infants in the observe first condition spontaneously generate rapid online visual predictions to congruent hand orientation cues and do not visually anticipate when presented with incongruent cues. We further demonstrate that the speed with which these infants generate predictions to congruent motor cues is correlated with their own ability to pre-shape their hands. Finally, we demonstrate that following reaching experience, infants generate rapid predictions to both congruent and incongruent hand shape cues, suggesting that short-term experience changes attention to kinematics. PMID:26913012

  15. Global Repetition Influences Contextual Cueing.

    PubMed

    Zang, Xuelian; Zinchenko, Artyom; Jia, Lina; Assumpção, Leonardo; Li, Hong

    2018-01-01

    Our visual system has a striking ability to improve visual search based on the learning of repeated ambient regularities, an effect named contextual cueing. Whereas most previous studies investigated the contextual cueing effect with the same number of repeated and non-repeated search displays per block, the current study focused on whether a global repetition frequency, formed by different presentation ratios between the repeated and non-repeated configurations, influences the contextual cueing effect. Specifically, the number of repeated and non-repeated displays presented in each block was manipulated: 12:12, 20:4, 4:20, and 4:4 in Experiments 1-4, respectively. The results revealed a significant contextual cueing effect when the global repetition frequency was high (≥1:1 ratio), in Experiments 1, 2, and 4, given that processing of repeated displays was expedited relative to non-repeated displays. Nevertheless, the contextual cueing effect was reduced to a non-significant level when the repetition frequency dropped to 4:20 in Experiment 3. These results suggest that the presentation frequency of repeated relative to non-repeated displays can influence the strength of contextual cueing. In other words, global repetition statistics could be a crucial factor mediating the contextual cueing effect.
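
    In both versions of this record (entries 13 and 15), the cueing effect is the reaction-time benefit for repeated over non-repeated displays. A toy computation with invented per-epoch means, illustrating how the effect can shrink at a low repetition ratio:

```python
import numpy as np

def cueing_effect(rt_repeated, rt_novel):
    # contextual cueing effect: RT benefit (ms) for repeated displays
    return float(np.mean(rt_novel) - np.mean(rt_repeated))

# invented per-epoch mean RTs (ms) under high vs. low repetition ratios
print(cueing_effect([870, 845, 820], [905, 900, 895]))  # e.g., 20:4 blocks
print(cueing_effect([890, 885, 880], [895, 892, 889]))  # e.g., 4:20 blocks
```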

  16. Exogenous attention influences visual short-term memory in infants.

    PubMed

    Ross-Sheehy, Shannon; Oakes, Lisa M; Luck, Steven J

    2011-05-01

    Two experiments examined the hypothesis that developing visual attentional mechanisms influence infants' Visual Short-Term Memory (VSTM) in the context of multiple items. Five- and 10-month-old infants (N = 76) received a change detection task in which arrays of three differently colored squares appeared and disappeared. On each trial one square changed color and one square was cued; sometimes the cued item was the changing item, and sometimes the changing item was not the cued item. Ten-month-old infants exhibited enhanced memory for the cued item when the cue was a spatial pre-cue (Experiment 1) and 5-month-old infants exhibited enhanced memory for the cued item when the cue was relative motion (Experiment 2). These results demonstrate for the first time that infants younger than 6 months can encode information in VSTM about individual items in multiple-object arrays, and that attention-directing cues influence both perceptual and VSTM encoding of stimuli in infants as in adults.

  17. Visual search and contextual cueing: differential effects in 10-year-old children and adults.

    PubMed

    Couperus, Jane W; Hunt, Ruskin H; Nelson, Charles A; Thomas, Kathleen M

    2011-02-01

    The development of contextual cueing specifically in relation to attention was examined in two experiments. Adult and 10-year-old participants completed a contextual cueing visual search task (Jiang & Chun, The Quarterly Journal of Experimental Psychology, 54A(4), 1105-1124, 2001) containing stimuli presented in an attended (e.g., red) and an unattended (e.g., green) color. When the spatial configuration of stimuli in the attended and unattended colors was invariant and consistently paired with the target location, adult reaction times improved, demonstrating learning. Learning also occurred if only the attended stimuli's configuration remained fixed. In contrast, while 10-year-olds, like adults, showed incrementally slower reaction times as the number of attended stimuli increased, they did not show learning in the standard paradigm. However, they did show learning when the ratio of attended to unattended stimuli was high, irrespective of the total number of attended stimuli. Findings suggest children show efficient attentional guidance by color in visual search but differences in contextual cueing.

  18. Effects of sensorineural hearing loss on visually guided attention in a multitalker environment.

    PubMed

    Best, Virginia; Marrone, Nicole; Mason, Christine R; Kidd, Gerald; Shinn-Cunningham, Barbara G

    2009-03-01

    This study asked whether or not listeners with sensorineural hearing loss have an impaired ability to use top-down attention to enhance speech intelligibility in the presence of interfering talkers. Listeners were presented with a target string of spoken digits embedded in a mixture of five spatially separated speech streams. The benefit of providing simple visual cues indicating when and/or where the target would occur was measured in listeners with hearing loss, listeners with normal hearing, and a control group of listeners with normal hearing who were tested at a lower target-to-masker ratio to equate their baseline (no cue) performance with the hearing-loss group. All groups received robust benefits from the visual cues. The magnitude of the spatial-cue benefit, however, was significantly smaller in listeners with hearing loss. Results suggest that reduced utility of selective attention for resolving competition between simultaneous sounds contributes to the communication difficulties experienced by listeners with hearing loss in everyday listening situations.

  19. How Ants Use Vision When Homing Backward.

    PubMed

    Schwarz, Sebastian; Mangan, Michael; Zeil, Jochen; Webb, Barbara; Wystrach, Antoine

    2017-02-06

    Ants can navigate over long distances between their nest and food sites using visual cues [1, 2]. Recent studies show that this capacity is undiminished when walking backward while dragging a heavy food item [3-5]. This challenges the idea that ants use egocentric visual memories of the scene for guidance [1, 2, 6]. Can ants use their visual memories of the terrestrial cues when going backward? Our results suggest that ants do not adjust their direction of travel based on the perceived scene while going backward. Instead, they maintain a straight direction using their celestial compass. This direction can be dictated by their path integrator [5] but can also be set using terrestrial visual cues after a forward peek. If the food item is too heavy to enable body rotations, ants moving backward drop their food on occasion, rotate and walk a few steps forward, return to the food, and drag it backward in a now-corrected direction defined by terrestrial cues. Furthermore, we show that ants can maintain their direction of travel independently of their body orientation. It thus appears that egocentric retinal alignment is required for visual scene recognition, but ants can translate this acquired directional information into a holonomic frame of reference, which enables them to decouple their travel direction from their body orientation and hence navigate backward. This reveals substantial flexibility and communication between different types of navigational information: from terrestrial to celestial cues and from egocentric to holonomic directional memories.

  20. Memory under pressure: secondary-task effects on contextual cueing of visual search.

    PubMed

    Annac, Efsun; Manginelli, Angela A; Pollmann, Stefan; Shi, Zhuanghua; Müller, Hermann J; Geyer, Thomas

    2013-11-04

    Repeated display configurations improve visual search. Recently, the question has arisen whether this contextual cueing effect (Chun & Jiang, 1998) is itself mediated by attention, both in terms of selectivity and processing resources deployed. While it is accepted that selective attention modulates contextual cueing (Jiang & Leung, 2005), there is an ongoing debate whether the cueing effect is affected by a secondary working memory (WM) task, specifically at which stage WM influences the cueing effect: the acquisition of configural associations (e.g., Travis, Mattingley, & Dux, 2013) versus the expression of learned associations (e.g., Manginelli, Langer, Klose, & Pollmann, 2013). The present study re-investigated this issue. Observers performed a visual search in combination with a spatial WM task. The latter was applied on either early or late search trials--so as to examine whether WM load hampers the acquisition of or retrieval from contextual memory. Additionally, the WM and search tasks were performed either temporally in parallel or in succession--so as to permit the effects of spatial WM load to be dissociated from those of executive load. The secondary WM task was found to affect cueing in late, but not early, experimental trials--though only when the search and WM tasks were performed in parallel. This pattern suggests that contextual cueing involves a spatial WM resource, with spatial WM providing a workspace linking the current search array with configural long-term memory; as a result, occupying this workspace by a secondary WM task hampers the expression of learned configural associations.
