Sample records for external visual cues

  1. The Effects of Visual Cues and Learners' Field Dependence in Multiple External Representations Environment for Novice Program Comprehension

    ERIC Educational Resources Information Center

    Wei, Liew Tze; Sazilah, Salam

    2012-01-01

    This study investigated the effects of visual cues in a multiple external representations (MER) environment on the learning performance of novices' program comprehension. Program codes and flowchart diagrams were used as dual representations in a multimedia environment to deliver lessons on C programming. 17 field independent participants and 16 field…

  2. Direction of attentional focus in biofeedback treatment for /r/ misarticulation.

    PubMed

    McAllister Byun, Tara; Swartz, Michelle T; Halpin, Peter F; Szeredi, Daniel; Maas, Edwin

    2016-07-01

    Maintaining an external direction of focus during practice is reported to facilitate acquisition of non-speech motor skills, but it is not known whether these findings also apply to treatment for speech errors. This question has particular relevance for treatment incorporating visual biofeedback, where clinician cueing can direct the learner's attention either internally (i.e., to the movements of the articulators) or externally (i.e., to the visual biofeedback display). This study addressed two objectives. First, it aimed to use single-subject experimental methods to collect additional evidence regarding the efficacy of visual-acoustic biofeedback treatment for children with /r/ misarticulation. Second, it compared the efficacy of this biofeedback intervention under two cueing conditions. In the external focus (EF) condition, participants' attention was directed exclusively to the external biofeedback display. In the internal focus (IF) condition, participants viewed a biofeedback display, but they also received articulatory cues encouraging an internal direction of attentional focus. Nine school-aged children were pseudo-randomly assigned to receive either IF or EF cues during 8 weeks of visual-acoustic biofeedback intervention. Accuracy in /r/ production at the word level was probed in three to five pre-treatment baseline sessions and in three post-treatment maintenance sessions. Outcomes were assessed using visual inspection and calculation of effect sizes for individual treatment trajectories. In addition, a mixed logistic model was used to examine across-subjects effects including phase (pre/post-treatment), /r/ variant (treated/untreated), and focus cue condition (internal/external). Six out of nine participants showed sustained improvement on at least one treated /r/ variant; these six participants were evenly divided across EF and IF treatment groups. 
Regression results indicated that /r/ productions were significantly more likely to be rated accurate post- than pre-treatment. Internal versus external direction of focus cues was not a significant predictor of accuracy, nor did it interact significantly with other predictors. The results are consistent with previous literature reporting that visual-acoustic biofeedback can produce measurable treatment gains in children who have not responded to previous intervention. These findings are also in keeping with previous research suggesting that biofeedback may be sufficient to establish an external attentional focus, independent of verbal cues provided. The finding that explicit articulator placement cues were not necessary for progress in treatment has implications for intervention practices for speech-sound disorders in children. © 2016 Royal College of Speech and Language Therapists.

  3. Feasibility of external rhythmic cueing with the Google Glass for improving gait in people with Parkinson's disease.

    PubMed

    Zhao, Yan; Nonnekes, Jorik; Storcken, Erik J M; Janssen, Sabine; van Wegen, Erwin E H; Bloem, Bastiaan R; Dorresteijn, Lucille D A; van Vugt, Jeroen P P; Heida, Tjitske; van Wezel, Richard J A

    2016-06-01

    New mobile technologies like smartglasses can deliver external cues that may improve gait in people with Parkinson's disease in their natural environment. However, the potential of these devices must first be assessed in controlled experiments. Therefore, we evaluated rhythmic visual and auditory cueing in a laboratory setting with a custom-made application for the Google Glass. Twelve participants (mean age = 66.8; mean disease duration = 13.6 years) were tested at end of dose. We compared several key gait parameters (walking speed, cadence, stride length, and stride length variability) and freezing of gait for three types of external cues (metronome, flashing light, and optic flow) and a control condition (no-cue). For all cueing conditions, the subjects completed several walking tasks of varying complexity. Seven inertial sensors attached to the feet, legs and pelvis captured motion data for gait analysis. Two experienced raters scored the presence and severity of freezing of gait using video recordings. User experience was evaluated through a semi-open interview. During cueing, a more stable gait pattern emerged, particularly on complicated walking courses; however, freezing of gait did not significantly decrease. The metronome was more effective than rhythmic visual cues and most preferred by the participants. Participants were overall positive about the usability of the Google Glass and willing to use it at home. Thus, smartglasses like the Google Glass could be used to provide personalized mobile cueing to support gait; however, in its current form, auditory cues seemed more effective than rhythmic visual cues.
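The gait parameters compared across cueing conditions (walking speed, cadence, stride length, and stride length variability) all derive from per-stride measurements. As a rough illustration only, not the study's actual analysis pipeline, here is a sketch with hypothetical per-stride values, using the common coefficient-of-variation convention for variability:

```python
from statistics import mean, stdev

def gait_parameters(stride_lengths_m, stride_times_s):
    """Derive four gait parameters from per-stride data
    (hypothetical input format: one value per stride)."""
    stride_length = mean(stride_lengths_m)   # metres per stride
    cycle_time = mean(stride_times_s)        # seconds per stride
    cadence = 2 * 60.0 / cycle_time          # steps/min: two steps per stride
    speed = stride_length / cycle_time       # metres per second
    # Variability expressed as a coefficient of variation (%).
    variability = 100.0 * stdev(stride_lengths_m) / stride_length
    return {"stride_length": stride_length, "cadence": cadence,
            "speed": speed, "stride_length_cv": variability}

# Four hypothetical strides:
params = gait_parameters([1.20, 1.10, 1.15, 1.25], [1.0, 1.1, 1.0, 0.9])
```

In practice these values would come from the inertial-sensor gait analysis described above rather than hand-entered lists.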

  4. Visual Attention in Flies-Dopamine in the Mushroom Bodies Mediates the After-Effect of Cueing.

    PubMed

    Koenig, Sebastian; Wolf, Reinhard; Heisenberg, Martin

    2016-01-01

    Visual environments may simultaneously comprise stimuli of different significance. Often such stimuli require incompatible responses. Selective visual attention allows an animal to respond exclusively to the stimuli at a certain location in the visual field. In the process of establishing its focus of attention the animal can be influenced by external cues. Here we characterize the behavioral properties and neural mechanism of cueing in the fly Drosophila melanogaster. A cue can be attractive, repulsive or ineffective depending upon, for example, its visual properties and its location in the visual field. Dopamine signaling in the brain is required to maintain the effect of cueing once the cue has disappeared. Raising or lowering dopamine at the synapse abolishes this after-effect. Specifically, dopamine is necessary and sufficient in the αβ-lobes of the mushroom bodies. Evidence is provided for an involvement of the αβ-posterior Kenyon cells.

  5. Stimulus onset predictability modulates proactive action control in a Go/No-go task

    PubMed Central

    Berchicci, Marika; Lucci, Giuliana; Spinelli, Donatella; Di Russo, Francesco

    2015-01-01

    The aim of the study was to evaluate whether the presence/absence of visual cues specifying the onset of an upcoming, action-related stimulus modulates pre-stimulus brain activity, associated with the proactive control of goal-directed actions. To this aim we asked 12 subjects to perform an equal probability Go/No-go task with four stimulus configurations in two conditions: (1) uncued, i.e., without any external information about the timing of stimulus onset; and (2) cued, i.e., with external visual cues providing precise information about the timing of stimulus onset. During the task, both behavioral performance and event-related potentials (ERPs) were recorded. Behavioral results showed faster response times in the cued than the uncued condition, confirming existing literature. ERPs showed novel results in the proactive control stage, which started about 1 s before the motor response. We observed a slow rising prefrontal positive activity, more pronounced in the cued than the uncued condition. Further, pre-stimulus activity of premotor areas was also larger in the cued than the uncued condition. In the post-stimulus period, the P3 amplitude was enhanced when the time of stimulus onset was externally driven, confirming that external cueing enhances processing of stimulus evaluation and response monitoring. Our results suggest that different pre-stimulus processes come into play in the two conditions. We hypothesize that the large prefrontal and premotor activities recorded with external visual cues index the monitoring of the external stimuli in order to finely regulate the action. PMID:25964751

  6. Experience-Dependency of Reliance on Local Visual and Idiothetic Cues for Spatial Representations Created in the Absence of Distal Information.

    PubMed

    Draht, Fabian; Zhang, Sijie; Rayan, Abdelrahman; Schönfeld, Fabian; Wiskott, Laurenz; Manahan-Vaughan, Denise

    2017-01-01

    Spatial encoding in the hippocampus is based on a range of different input sources. To generate spatial representations, reliable sensory cues from the external environment are integrated with idiothetic cues, derived from self-movement, that enable path integration and directional perception. In this study, we examined to what extent idiothetic cues significantly contribute to spatial representations and navigation: we recorded place cells while rodents navigated towards two visually identical chambers in 180° orientation via two different paths in darkness and in the absence of reliable auditory or olfactory cues. Our goal was to generate a conflict between local visual and direction-specific information, and then to assess which strategy was prioritized in different learning phases. We observed that, in the absence of distal cues, place fields are initially controlled by local visual cues that override idiothetic cues, but that with multiple exposures to the paradigm, spaced at intervals of days, idiothetic cues become increasingly implemented in generating an accurate spatial representation. Taken together, these data support that, in the absence of distal cues, local visual cues are prioritized in the generation of context-specific spatial representations through place cells, whereby idiothetic cues are deemed unreliable. With cumulative exposures to the environments, the animal learns to attend to subtle idiothetic cues to resolve the conflict between visual and direction-specific information.

  7. Experience-Dependency of Reliance on Local Visual and Idiothetic Cues for Spatial Representations Created in the Absence of Distal Information

    PubMed Central

    Draht, Fabian; Zhang, Sijie; Rayan, Abdelrahman; Schönfeld, Fabian; Wiskott, Laurenz; Manahan-Vaughan, Denise

    2017-01-01

    Spatial encoding in the hippocampus is based on a range of different input sources. To generate spatial representations, reliable sensory cues from the external environment are integrated with idiothetic cues, derived from self-movement, that enable path integration and directional perception. In this study, we examined to what extent idiothetic cues significantly contribute to spatial representations and navigation: we recorded place cells while rodents navigated towards two visually identical chambers in 180° orientation via two different paths in darkness and in the absence of reliable auditory or olfactory cues. Our goal was to generate a conflict between local visual and direction-specific information, and then to assess which strategy was prioritized in different learning phases. We observed that, in the absence of distal cues, place fields are initially controlled by local visual cues that override idiothetic cues, but that with multiple exposures to the paradigm, spaced at intervals of days, idiothetic cues become increasingly implemented in generating an accurate spatial representation. Taken together, these data support that, in the absence of distal cues, local visual cues are prioritized in the generation of context-specific spatial representations through place cells, whereby idiothetic cues are deemed unreliable. With cumulative exposures to the environments, the animal learns to attend to subtle idiothetic cues to resolve the conflict between visual and direction-specific information. PMID:28634444

  8. Analysis of Parallel and Transverse Visual Cues on the Gait of Individuals with Idiopathic Parkinson's Disease

    ERIC Educational Resources Information Center

    de Melo Roiz, Roberta; Azevedo Cacho, Enio Walker; Cliquet, Alberto, Jr.; Barasnevicius Quagliato, Elizabeth Maria Aparecida

    2011-01-01

    Idiopathic Parkinson's disease (IPD) has been defined as a chronic progressive neurological disorder with characteristics that generate changes in gait pattern. Several studies have reported that appropriate external influences, such as visual or auditory cues may improve the gait pattern of patients with IPD. Therefore, the objective of this…

  9. Spatial Hearing with Incongruent Visual or Auditory Room Cues

    PubMed Central

    Gil-Carvajal, Juan C.; Cubick, Jens; Santurette, Sébastien; Dau, Torsten

    2016-01-01

    In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener’s eardrums. This requires the acoustical features of the recording environment and listener’s anatomy to be recorded at the listener’s ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli. PMID:27853290
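The headphone reproduction described here amounts to convolving the source signal with impulse responses measured at the listener's ear canals (binaural room impulse responses). A minimal sketch of that operation, with made-up three-tap signals standing in for real recordings:

```python
def convolve(signal, impulse_response):
    # Direct-form FIR convolution: y[n] = sum_k h[k] * x[n - k].
    n_out = len(signal) + len(impulse_response) - 1
    out = [0.0] * n_out
    for i, x in enumerate(signal):
        for k, h in enumerate(impulse_response):
            out[i + k] += x * h
    return out

# Binaural rendering: one impulse response per ear (hypothetical values).
source = [1.0, 0.5, 0.25]
left = convolve(source, [1.0, 0.3])   # direct path plus one reflection
right = convolve(source, [0.6, 0.2])  # attenuated at the far ear
```

Real binaural impulse responses are thousands of taps long and are convolved per ear with FFT-based methods; the direct form above only illustrates the principle.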

  10. Visual Cues of Motion That Trigger Animacy Perception at Birth: The Case of Self-Propulsion

    ERIC Educational Resources Information Center

    Di Giorgio, Elisa; Lunghi, Marco; Simion, Francesca; Vallortigara, Giorgio

    2017-01-01

    Self-propelled motion is a powerful cue that conveys information that an object is animate. In this case, animate refers to an entity's capacity to initiate motion without an applied external force. Sensitivity to this motion cue is present in infants that are a few months old, but whether this sensitivity is experience-dependent or is already…

  11. Investigation of outside visual cues required for low speed and hover

    NASA Technical Reports Server (NTRS)

    Hoh, R. H.

    1985-01-01

    Knowledge of the visual cues required in the performance of stabilized hover in VTOL aircraft is a prerequisite for the development of both cockpit displays and ground-based simulation systems. Attention is presently given to the viability of experimental test flight techniques as the bases for the identification of essential external cues in aggressive and precise low speed and hovering tasks. The analysis and flight test program conducted employed a helicopter and a pilot wearing lenses that could be electronically fogged, where the primary variables were field-of-view, large object 'macrotexture', and fine detail 'microtexture', in six different fields-of-view. Fundamental metrics are proposed for the quantification of the visual field, to allow comparisons between tests, simulations, and aircraft displays.

  12. Automaticity of phasic alertness: Evidence for a three-component model of visual cueing.

    PubMed

    Lin, Zhicheng; Lu, Zhong-Lin

    2016-10-01

    The automaticity of phasic alertness is investigated using the attention network test. Results show that the cueing effect from the alerting cue (double cue) is strongly enhanced by the task relevance of visual cues, as determined by the informativeness of the orienting cue (single cue) that is being mixed (80% vs. 50% valid in predicting where the target will appear). Counterintuitively, the cueing effect from the alerting cue can be negatively affected by its visibility, such that masking the cue from awareness can reveal a cueing effect that is otherwise absent when the cue is visible. Evidently, then, top-down influences (in the form of contextual relevance and cue awareness) can have opposite influences on the cueing effect from the alerting cue. These findings lead us to the view that a visual cue can engage three components of attention (orienting, alerting, and inhibition) to determine the behavioral cueing effect. We propose that phasic alertness, particularly in the form of specific response readiness, is regulated by both internal, top-down expectation and external, bottom-up stimulus properties. In contrast to some existing views, we advance the perspective that phasic alertness is strongly tied to temporal orienting, attentional capture, and spatial orienting. Finally, we discuss how translating attention research to clinical applications would benefit from an improved ability to measure attention. To this end, controlling the degree of intraindividual variability in the attentional components and improving the precision of the measurement tools may prove vital.
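The behavioral cueing effect discussed in this record is conventionally the response-time difference between uncued and cued trials; a minimal sketch with hypothetical reaction times in milliseconds:

```python
from statistics import mean

def cueing_effect_ms(rt_uncued_ms, rt_cued_ms):
    # Positive values mean the cue speeded responses;
    # negative values mean the cue slowed them.
    return mean(rt_uncued_ms) - mean(rt_cued_ms)

# Hypothetical per-trial reaction times:
effect = cueing_effect_ms([520, 510, 530], [480, 470, 490])
```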

  13. Automaticity of phasic alertness: evidence for a three-component model of visual cueing

    PubMed Central

    Lin, Zhicheng; Lu, Zhong-Lin

    2017-01-01

    The automaticity of phasic alertness is investigated using the attention network test. Results show that the cueing effect from the alerting cue—double cue—is strongly enhanced by the task relevance of visual cues, as determined by the informativeness of the orienting cue—single cue—that is being mixed (80% vs. 50% valid in predicting where the target will appear). Counterintuitively, the cueing effect from the alerting cue can be negatively affected by its visibility, such that masking the cue from awareness can reveal a cueing effect that is otherwise absent when the cue is visible. Evidently, top-down influences—in the form of contextual relevance and cue awareness—can have opposite influences on the cueing effect by the alerting cue. These findings lead us to the view that a visual cue can engage three components of attention—orienting, alerting, and inhibition—to determine the behavioral cueing effect. We propose that phasic alertness, particularly in the form of specific response readiness, is regulated by both internal, top-down expectation and external, bottom-up stimulus properties. In contrast to some existing views, we advance the perspective that phasic alertness is strongly tied to temporal orienting, attentional capture, and spatial orienting. Finally, we discuss how translating attention research to clinical applications would benefit from an improved ability to measure attention. To this end, controlling the degree of intraindividual variability in the attentional components and improving the precision of the measurement tools may prove vital. PMID:27173487

  14. Effect of Visual Cues on the Resolution of Perceptual Ambiguity in Parkinson’s Disease and Normal Aging

    PubMed Central

    Díaz-Santos, Mirella; Cao, Bo; Mauro, Samantha A.; Yazdanbakhsh, Arash; Neargarder, Sandy; Cronin-Golomb, Alice

    2017-01-01

    Parkinson’s disease (PD) and normal aging have been associated with changes in visual perception, including reliance on external cues to guide behavior. This raises the question of the extent to which these groups use visual cues when disambiguating information. Twenty-seven individuals with PD, 23 normal control adults (NC), and 20 younger adults (YA) were presented a Necker cube in which one face was highlighted by thickening the lines defining the face. The hypothesis was that the visual cues would help PD and NC to exert better control over bistable perception. There were three conditions, including passive viewing and two volitional-control conditions (hold one percept in front; and switch: speed up the alternation between the two). In the Hold condition, the cue was either consistent or inconsistent with task instructions. Mean dominance durations (time spent on each percept) under passive viewing were comparable in PD and NC, and shorter in YA. PD and YA increased dominance durations in the Hold cue-consistent condition relative to NC, meaning that appropriate cues helped PD but not NC hold one perceptual interpretation. By contrast, in the Switch condition, NC and YA decreased dominance durations relative to PD, meaning that the use of cues helped NC but not PD in expediting the switch between percepts. Provision of low-level cues affects volitional control in PD differently than it does in normal aging, and only under task-specific conditions does the use of such cues facilitate the resolution of perceptual ambiguity. PMID:25765890

  15. Helicopter flight simulation motion platform requirements

    NASA Astrophysics Data System (ADS)

    Schroeder, Jeffery Allyn

    Flight simulators attempt to reproduce in-flight pilot-vehicle behavior on the ground. This reproduction is challenging for helicopter simulators, as the pilot is often inextricably dependent on external cues for pilot-vehicle stabilization. One important simulator cue is platform motion; however, its required fidelity is unknown. To determine the required motion fidelity, several unique experiments were performed. A large displacement motion platform was used that allowed pilots to fly tasks with matched motion and visual cues. Then, the platform motion was modified to give cues varying from full motion to no motion. Several key results were found. First, lateral and vertical translational platform cues had significant effects on fidelity. Their presence improved performance and reduced pilot workload. Second, yaw and roll rotational platform cues were not as important as the translational platform cues. In particular, the yaw rotational motion platform cue did not appear at all useful in improving performance or reducing workload. Third, when the lateral translational platform cue was combined with visual yaw rotational cues, pilots believed the platform was rotating when it was not. Thus, simulator systems can be made more efficient by proper combination of platform and visual cues. Fourth, motion fidelity specifications were revised that now provide simulator users with a better prediction of motion fidelity based upon the frequency responses of their motion control laws. Fifth, vertical platform motion affected pilot estimates of steady-state altitude during altitude repositionings. This refutes the view that pilots estimate altitude and altitude rate in simulation solely from visual cues. Finally, the combined results led to a general method for configuring helicopter motion systems and for developing simulator tasks that more likely represent actual flight. The overall results can serve as a guide to future simulator designers and to today's operators.

  16. Usability of Three-dimensional Augmented Visual Cues Delivered by Smart Glasses on (Freezing of) Gait in Parkinson's Disease.

    PubMed

    Janssen, Sabine; Bolte, Benjamin; Nonnekes, Jorik; Bittner, Marian; Bloem, Bastiaan R; Heida, Tjitske; Zhao, Yan; van Wezel, Richard J A

    2017-01-01

    External cueing is a potentially effective strategy to reduce freezing of gait (FOG) in persons with Parkinson's disease (PD). Case reports suggest that three-dimensional (3D) cues might be more effective in reducing FOG than two-dimensional cues. We investigate the usability of 3D augmented reality visual cues delivered by smart glasses in comparison to conventional 3D transverse bars on the floor and auditory cueing via a metronome in reducing FOG and improving gait parameters. In laboratory experiments, 25 persons with PD and FOG performed walking tasks while wearing custom-made smart glasses under five conditions, at the end of dose. For two conditions, augmented visual cues (bars/staircase) were displayed via the smart glasses. The control conditions involved conventional 3D transverse bars on the floor, auditory cueing via a metronome, and no cueing. The number of FOG episodes and percentage of time spent on FOG were rated from video recordings. The stride length and its variability, cycle time and its variability, cadence, and speed were calculated from motion data collected with a motion capture suit equipped with 17 inertial measurement units. A total of 300 FOG episodes occurred in 19 out of 25 participants. There were no statistically significant differences in number of FOG episodes and percentage of time spent on FOG across the five conditions. The conventional bars increased stride length, cycle time, and stride length variability, while decreasing cadence and speed. No effects for the other conditions were found. Participants preferred the metronome most, and the augmented staircase least. They suggested improving the comfort, esthetics, usability, field of view, and stability of the smart glasses on the head, and reducing their weight and size. In their current form, augmented visual cues delivered by smart glasses are not beneficial for persons with PD and FOG.
This could be attributable to distraction, blockage of visual feedback, insufficient familiarization with the smart glasses, or display of the visual cues in the central rather than peripheral visual field. Future smart glasses are required to be more lightweight, comfortable, and user friendly to avoid distraction and blockage of sensory feedback, thus increasing usability.

  17. The moderating role of food cue sensitivity in the behavioral response of children to their neighborhood food environment: a cross-sectional study.

    PubMed

    Paquet, Catherine; de Montigny, Luc; Labban, Alice; Buckeridge, David; Ma, Yu; Arora, Narendra; Dubé, Laurette

    2017-07-05

    Neighborhood food cues have been inconsistently related to residents' health, possibly due to variations in residents' sensitivity to such cues. This study sought to investigate the degree to which children's predisposition to eat upon exposure to the food environment and food cues (external eating) could explain differences in the strength of associations between their food consumption and the type of food outlets and marketing strategies present in their neighborhood. Data were obtained from 616 children aged 6-12 years recruited into a population-based cross-sectional study in which food consumption was measured through a 24-h food recall and responsiveness to food cues was measured using the external eating scale. The proportion of food retailers within 3 km of residence considered as "healthful" was calculated using a Geographical Information System. Neighborhood exposure to food marketing strategies (displays, discount frequency, variety, and price) for vegetables and soft drinks was derived from a geocoded digital marketing database. Adjusted mixed models with spatial covariance tested interaction effects of food environment indicators and external eating on food consumption. In children with higher external eating scores, healthful food consumption was more positively related to vegetable displays, and more negatively to the display and variety of soft drinks. No interactions were observed for unhealthful food consumption, and no main effects of food environment indicators were found on food consumption. Children differ in their responsiveness to marketing-related visual food cues on the basis of their external eating phenotype. Strategies aiming to increase the promotion of healthful relative to unhealthful food products in stores may be particularly beneficial for children identified as being more responsive to food cues.
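The neighborhood exposure measure here, the proportion of food retailers within 3 km of a residence classified as healthful, can be sketched with a great-circle distance filter. The coordinates, outlet tuple format, and classifications below are hypothetical, not the study's GIS procedure:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points, in km.
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def healthful_proportion(home, outlets, radius_km=3.0):
    """Proportion of outlets within radius_km of `home` flagged healthful.
    `outlets` is a list of (lat, lon, is_healthful) tuples (hypothetical format)."""
    nearby = [h for lat, lon, h in outlets
              if haversine_km(home[0], home[1], lat, lon) <= radius_km]
    return sum(nearby) / len(nearby) if nearby else None

# One healthful outlet nearby, one unhealthful nearby, one healthful far away:
share = healthful_proportion((45.50, -73.57),
                             [(45.50, -73.57, True),
                              (45.51, -73.57, False),
                              (46.50, -73.57, True)])
```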

  18. Usability of Three-dimensional Augmented Visual Cues Delivered by Smart Glasses on (Freezing of) Gait in Parkinson’s Disease

    PubMed Central

    Janssen, Sabine; Bolte, Benjamin; Nonnekes, Jorik; Bittner, Marian; Bloem, Bastiaan R.; Heida, Tjitske; Zhao, Yan; van Wezel, Richard J. A.

    2017-01-01

    External cueing is a potentially effective strategy to reduce freezing of gait (FOG) in persons with Parkinson’s disease (PD). Case reports suggest that three-dimensional (3D) cues might be more effective in reducing FOG than two-dimensional cues. We investigate the usability of 3D augmented reality visual cues delivered by smart glasses in comparison to conventional 3D transverse bars on the floor and auditory cueing via a metronome in reducing FOG and improving gait parameters. In laboratory experiments, 25 persons with PD and FOG performed walking tasks while wearing custom-made smart glasses under five conditions, at the end of dose. For two conditions, augmented visual cues (bars/staircase) were displayed via the smart glasses. The control conditions involved conventional 3D transverse bars on the floor, auditory cueing via a metronome, and no cueing. The number of FOG episodes and percentage of time spent on FOG were rated from video recordings. The stride length and its variability, cycle time and its variability, cadence, and speed were calculated from motion data collected with a motion capture suit equipped with 17 inertial measurement units. A total of 300 FOG episodes occurred in 19 out of 25 participants. There were no statistically significant differences in number of FOG episodes and percentage of time spent on FOG across the five conditions. The conventional bars increased stride length, cycle time, and stride length variability, while decreasing cadence and speed. No effects for the other conditions were found. Participants preferred the metronome most, and the augmented staircase least. They suggested improving the comfort, esthetics, usability, field of view, and stability of the smart glasses on the head, and reducing their weight and size. In their current form, augmented visual cues delivered by smart glasses are not beneficial for persons with PD and FOG.
This could be attributable to distraction, blockage of visual feedback, insufficient familiarization with the smart glasses, or display of the visual cues in the central rather than peripheral visual field. Future smart glasses are required to be more lightweight, comfortable, and user friendly to avoid distraction and blockage of sensory feedback, thus increasing usability. PMID:28659862

  19. The Influence of Audio-Visual Cueing (Traffic Light) on Dual Task Walking in Healthy Older Adults and Older Adults with Balance Impairments.

    PubMed

    Kaewkaen, Kitchana; Wongsamud, Phongphat; Ngaothanyaphat, Jiratchaya; Supawarapong, Papawarin; Uthama, Suraphong; Ruengsirarak, Worasak; Chanabun, Suthin; Kaewkaen, Pratchaya

    2018-02-01

    The walking gait of older adults with balance impairment is affected by dual tasking. Several studies have shown that external cues can stimulate improvement in older adults' performance. There is, however, no current evidence to support the usefulness of external cues, such as audio-visual cueing, in dual task walking in older adults. Thus, the aim of this study was to investigate the influence of an audio-visual cue (simulated traffic light) on dual task walking in healthy older adults and in older adults with balance impairments. A two-way repeated measures study was conducted on 14 healthy older adults and 14 older adults with balance impairment, who were recruited from the community in Chiang Rai, Thailand. Their walking performance was assessed using a four-metre walking test at their preferred gait speed and while walking under two further gait conditions, in randomised order: dual task walking and dual task walking with a simulated traffic light. Each participant was tested individually, with the testing taking between 15 and 20 minutes to perform, including two-minute rest periods between walking conditions. Two Kinect cameras recorded the spatio-temporal parameters using MFU gait analysis software. Each participant was tested for each condition twice. The mean parameters for each condition were analysed using a two-way repeated measures analysis of variance (ANOVA) with participant group and gait condition as factors. There was no significant between-group effect for walking speed, stride length and cadence. There were also no significant effects between gait condition and stride length or cadence. However, the effect between gait condition and walking speed was found to be significant [F(1.557, 40.485) = 4.568, P = 0.024, [Formula: see text

  20. Shuttle vehicle and mission simulation requirements report, volume 1

    NASA Technical Reports Server (NTRS)

    Burke, J. F.

    1972-01-01

    The requirements for the space shuttle vehicle and mission simulation are developed to analyze the systems, mission, operations, and interfaces. The requirements are developed according to the following subject areas: (1) mission envelope, (2) orbit flight dynamics, (3) shuttle vehicle systems, (4) external interfaces, (5) crew procedures, (6) crew station, (7) visual cues, and (8) aural cues. Line drawings and diagrams of the space shuttle are included to explain the various systems and components.

  1. Resource-sharing between internal maintenance and external selection modulates attentional capture by working memory content.

    PubMed

    Kiyonaga, Anastasia; Egner, Tobias

    2014-01-01

    It is unclear why and under what circumstances working memory (WM) and attention interact. Here, we apply the logic of the time-based resource-sharing (TBRS) model of WM (e.g., Barrouillet et al., 2004) to explore the mixed findings of a separate, but related, literature that studies the guidance of visual attention by WM contents. Specifically, we hypothesize that the linkage between WM representations and visual attention is governed by a time-shared cognitive resource that alternately refreshes internal (WM) and selects external (visual attention) information. If this were the case, WM content should guide visual attention (involuntarily), but only when there is time for it to be refreshed in an internal focus of attention. To provide an initial test for this hypothesis, we examined whether the amount of unoccupied time during a WM delay could impact the magnitude of attentional capture by WM contents. Participants were presented with a series of visual search trials while they maintained a WM cue for a delayed-recognition test. WM cues could coincide with the search target, a distracter, or neither. We varied both the number of searches to be performed, and the amount of available time to perform them. Slowing of visual search by a WM matching distracter-and facilitation by a matching target-were curtailed when the delay was filled with fast-paced (refreshing-preventing) search trials, as was subsequent memory probe accuracy. WM content may, therefore, only capture visual attention when it can be refreshed, suggesting that internal (WM) and external attention demands reciprocally impact one another because they share a limited resource. The TBRS rationale can thus be applied in a novel context to explain why WM contents capture attention, and under what conditions that effect should be observed.
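The TBRS logic invoked above reduces to a simple ratio: the fraction of the retention interval occupied by search trials determines how much time remains to refresh WM content. A minimal sketch of that ratio, with function names and numbers that are purely illustrative rather than taken from the study:

```python
# Time-based resource-sharing (TBRS) sketch: cognitive load is the
# proportion of a WM retention interval occupied by concurrent
# processing; whatever remains is available for attentional refreshing.
# All parameter values are illustrative.

def cognitive_load(n_tasks, time_per_task, delay):
    """Fraction of the delay consumed by processing (capped at 1)."""
    return min(1.0, n_tasks * time_per_task / delay)

def refresh_time(n_tasks, time_per_task, delay):
    """Time left over for refreshing WM content."""
    return delay * (1.0 - cognitive_load(n_tasks, time_per_task, delay))

# Slow-paced delay: 2 searches of 1 s in a 6 s delay leaves 4 s to refresh.
slow = refresh_time(2, 1.0, 6.0)
# Fast-paced delay: 6 searches of 1 s fill the entire 6 s delay, so WM
# content should neither capture attention nor be well remembered.
fast = refresh_time(6, 1.0, 6.0)
```

On this toy account, attentional capture by WM content should scale with the leftover refresh time, which is the qualitative pattern the study reports.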

  3. Common mechanisms of spatial attention in memory and perception: a tactile dual-task study.

    PubMed

    Katus, Tobias; Andersen, Søren K; Müller, Matthias M

    2014-03-01

    Orienting attention to locations in mnemonic representations engages processes that functionally and anatomically overlap the neural circuitry guiding prospective shifts of spatial attention. The attention-based rehearsal account predicts that the requirement to withdraw attention from a memorized location impairs memory accuracy. In a dual-task study, we simultaneously presented retro-cues and pre-cues to guide spatial attention in short-term memory (STM) and perception, respectively. The spatial direction of each cue was independent of the other. The locations indicated by the combined cues could be compatible (same hand) or incompatible (opposite hands). Incompatible directional cues decreased lateralized activity in brain potentials evoked by visual cues, indicating interference in the generation of prospective attention shifts. The detection of external stimuli at the prospectively cued location was impaired when the memorized location was part of the perceptually ignored hand. The disruption of attention-based rehearsal by means of incompatible pre-cues reduced memory accuracy and affected encoding of tactile test stimuli at the retrospectively cued hand. These findings highlight the functional significance of spatial attention for spatial STM. The bidirectional interactions between both tasks demonstrate that spatial attention is a shared neural resource of a capacity-limited system that regulates information processing in internal and external stimulus representations.

  4. Cue reliability and a landmark stability heuristic determine relative weighting between egocentric and allocentric visual information in memory-guided reach.

    PubMed

    Byrne, Patrick A; Crawford, J Douglas

    2010-06-01

It is not known how egocentric visual information (location of a target relative to the self) and allocentric visual information (location of a target relative to external landmarks) are integrated to form reach plans. Based on behavioral data from rodents and humans, we hypothesized that the degree of stability in visual landmarks would influence the relative weighting. Furthermore, based on numerous cue-combination studies, we hypothesized that the reach system would act like a maximum-likelihood estimator (MLE), where the reliability of both cues determines their relative weighting. To predict how these factors might interact, we developed an MLE model that weighs egocentric and allocentric information based on their respective reliabilities, and also on an additional stability heuristic. We tested the predictions of this model in 10 human subjects by manipulating landmark stability and reliability (via variable amplitude vibration of the landmarks and variable amplitude gaze shifts) in three reach-to-touch tasks: an egocentric control (reaching without landmarks), an allocentric control (reaching relative to landmarks), and a cue-conflict task (involving a subtle landmark "shift" during the memory interval). Variability from all three experiments was used to derive parameters for the MLE model, which was then used to simulate egocentric-allocentric weighting in the cue-conflict experiment. As predicted by the model, landmark vibration, despite its lack of influence on pointing variability (and thus allocentric reliability) in the control experiment, had a strong influence on egocentric-allocentric weighting. A reduced model without the stability heuristic was unable to reproduce this effect. These results suggest that heuristics for extrinsic cue stability are at least as important as reliability for determining cue weighting in memory-guided reaching.
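The reliability-weighted combination described in this record can be sketched numerically. The sketch below assumes inverse-variance (MLE) weighting with a multiplicative stability factor on the allocentric cue; the cue values, variances, and gain are illustrative, not the paper's fitted parameters:

```python
# MLE (inverse-variance) weighting of egocentric and allocentric position
# estimates, with an extra "stability" scaling on the allocentric cue.
# All numbers are illustrative.

def combine_cues(x_ego, var_ego, x_allo, var_allo, stability=1.0):
    """Return the reliability-weighted position estimate.

    stability in [0, 1] down-weights the allocentric cue when landmarks
    appear unstable (e.g., vibrating), beyond what its measured variance
    alone would predict.
    """
    w_allo = stability * (1.0 / var_allo) / (1.0 / var_allo + 1.0 / var_ego)
    return w_allo * x_allo + (1.0 - w_allo) * x_ego

# A landmark "shift" of 2 units during the memory delay, equal variances:
stable   = combine_cues(0.0, 1.0, 2.0, 1.0, stability=1.0)  # splits 50/50
vibrated = combine_cues(0.0, 1.0, 2.0, 1.0, stability=0.5)  # leans egocentric
```

Without the stability term the model reduces to plain MLE weighting, which (per the abstract) cannot reproduce the vibration effect, since vibration left the measured allocentric variance unchanged.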

  5. Visual and cross-modal cues increase the identification of overlapping visual stimuli in Balint's syndrome.

    PubMed

    D'Imperio, Daniela; Scandola, Michele; Gobbetto, Valeria; Bulgarelli, Cristina; Salgarello, Matteo; Avesani, Renato; Moro, Valentina

    2017-10-01

Cross-modal interactions improve the processing of external stimuli, particularly when an isolated sensory modality is impaired. When information from different modalities is integrated, object recognition is facilitated, probably as a result of bottom-up and top-down processes. The aim of this study was to investigate the potential effects of cross-modal stimulation in a case of simultanagnosia. We report a detailed analysis of clinical symptoms and an 18F-fluorodeoxyglucose (FDG) brain positron emission tomography/computed tomography (PET/CT) study of a patient affected by Balint's syndrome, a rare and invasive visual-spatial disorder following bilateral parieto-occipital lesions. An experiment was conducted to investigate the effects of visual and nonvisual cues on performance in tasks involving the recognition of overlapping pictures. Four modalities of sensory cues were used: visual, tactile, olfactory, and auditory. Data from neuropsychological tests showed the presence of ocular apraxia, optic ataxia, and simultanagnosia. The results of the experiment indicate a positive effect of the cues on the recognition of overlapping pictures, not only in the identification of the congruent valid-cued stimulus (target) but also in the identification of the other, noncued stimuli. All the sensory modalities analyzed (except the auditory stimulus) were efficacious in terms of increasing visual recognition. Cross-modal integration improved the patient's ability to recognize overlapping figures. However, while in the visual unimodal modality both bottom-up (priming, familiarity effect, disengagement of attention) and top-down processes (mental representation and short-term memory, the endogenous orientation of attention) are involved, in the cross-modal integration it is semantic representations that mainly activate visual recognition processes. These results are potentially useful for the design of rehabilitation training for attentional and visual-perceptual deficits.

  6. Effectiveness of external cues to facilitate task performance in people with neurological disorders: a systematic review and meta-analysis.

    PubMed

    Harrison, Stephanie L; Laver, Kate E; Ninnis, Kayla; Rowett, Cherie; Lannin, Natasha A; Crotty, Maria

    2018-03-09

To examine which methods of providing external cues to improve task performance are most effective in people with neurological disorders. Medline, EMBASE, and PsycINFO were systematically searched. Two reviewers independently screened, extracted data, and assessed the quality of the evidence using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach. Twenty-six studies were included. Studies examined a wide range of cues including visual, tactile, auditory, verbal, and multi-component cues. Cueing (any type) improved walking speed when comparing cues to no cues (mean difference (95% confidence interval): 0.08 m/s (0.06-0.10), I² = 68%, low quality of evidence). Remaining evidence was analysed narratively; evidence that cueing improves activity-related outcomes was inconsistent and rated as very low quality. It was not possible to determine which form of cueing may be more effective than others. Providing cues to encourage successful task performance is a core component of rehabilitation; however, there is limited evidence on the type of cueing or which tasks benefit most from external cueing. Low-quality evidence suggests there may be a beneficial effect of cueing (any type) on walking speed. Sufficiently powered randomised controlled trials are needed to inform therapists of the most effective cueing strategies to improve activity performance in populations with a neurological disorder. Implications for rehabilitation: Providing cues is a core component of rehabilitation and may improve successful task performance and activities in people with neurological conditions including stroke, Parkinson's disease, Alzheimer's disease, traumatic brain injury, and multiple sclerosis, but evidence is limited for most neurological conditions, with much research focusing on stroke and Parkinson's disease. Therapists should consider using a range of different types of cues depending on the aims of treatment and the neurological condition.
There is currently insufficient evidence to suggest one form of cueing is superior to other forms. Therapists should appreciate that responding optimally to cues may take many sessions to have an effect on activities such as walking. Further studies should be conducted over a longer timeframe to examine the effects of different types of cues towards task performance and activities in people with neurological conditions.
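The pooled result quoted in this record (a mean difference with a 95% CI and an I² heterogeneity statistic) is standard inverse-variance meta-analysis output. A minimal fixed-effect sketch; the trial effects and standard errors below are invented for illustration and are not the review's data:

```python
import math

# Fixed-effect inverse-variance pooling of a mean difference, with a
# 95% CI and Cochran's Q / I^2 heterogeneity. Inputs are illustrative.

def pool_fixed_effect(effects, ses):
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    # I^2: share of between-trial variability beyond chance.
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
    dof = len(effects) - 1
    i_squared = 100.0 * max(0.0, (q - dof) / q) if q > 0 else 0.0
    return pooled, ci, i_squared

# Three hypothetical trials of cued vs. un-cued walking speed (m/s):
pooled, ci, i2 = pool_fixed_effect([0.06, 0.10, 0.08], [0.02, 0.03, 0.01])
```

With I² as high as the 68% reported, a review would typically prefer a random-effects model, which additionally estimates a between-trial variance before weighting.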

  7. Encoding strategies in self-initiated visual working memory.

    PubMed

    Magen, Hagit; Berger-Mandelbaum, Anat

    2018-06-11

    During a typical day, visual working memory (VWM) is recruited to temporarily maintain visual information. Although individuals often memorize external visual information provided to them, on many other occasions they memorize information they have constructed themselves. The latter aspect of memory, which we term self-initiated WM, is prevalent in everyday behavior but has largely been overlooked in the research literature. In the present study we employed a modified change detection task in which participants constructed the displays they memorized, by selecting three or four abstract shapes or real-world objects and placing them at three or four locations in a circular display of eight locations. Half of the trials included identical targets that participants could select. The results demonstrated consistent strategies across participants. To enhance memory performance, participants reported selecting abstract shapes they could verbalize, but they preferred real-world objects with distinct visual features. Furthermore, participants constructed structured memory displays, most frequently based on the Gestalt organization cue of symmetry, and to a lesser extent on cues of proximity and similarity. When identical items were selected, participants mostly placed them in close proximity, demonstrating the construction of configurations based on the interaction between several Gestalt cues. The present results are consistent with recent findings in VWM, showing that memory for visual displays based on Gestalt organization cues can benefit VWM, suggesting that individuals have access to metacognitive knowledge on the benefit of structure in VWM. More generally, this study demonstrates how individuals interact with the world by actively structuring their surroundings to enhance performance.

  8. Direction of Attentional Focus in Biofeedback Treatment for /R/ Misarticulation

    ERIC Educational Resources Information Center

    McAllister Byun, Tara; Swartz, Michelle T.; Halpin, Peter F.; Szeredi, Daniel; Maas, Edwin

    2016-01-01

    Background: Maintaining an external direction of focus during practice is reported to facilitate acquisition of non-speech motor skills, but it is not known whether these findings also apply to treatment for speech errors. This question has particular relevance for treatment incorporating visual biofeedback, where clinician cueing can direct the…

  9. The use of visual cues for vehicle control and navigation

    NASA Technical Reports Server (NTRS)

    Hart, Sandra G.; Battiste, Vernol

    1991-01-01

    At least three levels of control are required to operate most vehicles: (1) inner-loop control to counteract the momentary effects of disturbances on vehicle position; (2) intermittent maneuvers to avoid obstacles, and (3) outer-loop control to maintain a planned route. Operators monitor dynamic optical relationships in their immediate surroundings to estimate momentary changes in forward, lateral, and vertical position, rates of change in speed and direction of motion, and distance from obstacles. The process of searching the external scene to find landmarks (for navigation) is intermittent and deliberate, while monitoring and responding to subtle changes in the visual scene (for vehicle control) is relatively continuous and 'automatic'. However, since operators may perform both tasks simultaneously, the dynamic optical cues available for a vehicle control task may be determined by the operator's direction of gaze for wayfinding. An attempt to relate the visual processes involved in vehicle control and wayfinding is presented. The frames of reference and information used by different operators (e.g., automobile drivers, airline pilots, and helicopter pilots) are reviewed with particular emphasis on the special problems encountered by helicopter pilots flying nap of the earth (NOE). The goal of this overview is to describe the context within which different vehicle control tasks are performed and to suggest ways in which the use of visual cues for geographical orientation might influence visually guided control activities.

  10. Allothetic and idiothetic sensor fusion in rat-inspired robot localization

    NASA Astrophysics Data System (ADS)

    Weitzenfeld, Alfredo; Fellous, Jean-Marc; Barrera, Alejandra; Tejera, Gonzalo

    2012-06-01

    We describe a spatial cognition model based on the rat's brain neurophysiology as a basis for new robotic navigation architectures. The model integrates allothetic (external visual landmarks) and idiothetic (internal kinesthetic information) cues to train either rat or robot to learn a path enabling it to reach a goal from multiple starting positions. It stands in contrast to most robotic architectures based on SLAM, where a map of the environment is built to provide probabilistic localization information computed from robot odometry and landmark perception. Allothetic cues suffer in general from perceptual ambiguity when trying to distinguish between places with equivalent visual patterns, while idiothetic cues suffer from imprecise motions and limited memory recalls. We experiment with both types of cues in different maze configurations by training rats and robots to find the goal starting from a fixed location, and then testing them to reach the same target from new starting locations. We show that the robot, after having pre-explored a maze, can find a goal with improved efficiency, and is able to (1) learn the correct route to reach the goal, (2) recognize places already visited, and (3) exploit allothetic and idiothetic cues to improve on its performance. We finally contrast our biologically-inspired approach to more traditional robotic approaches and discuss current work in progress.
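The allothetic/idiothetic fusion described above can be caricatured as a complementary filter: dead-reckoned position (idiothetic) drifts, and is pulled toward a landmark-derived fix (allothetic) whenever one is available. The gain and positions below are illustrative, not the paper's model:

```python
# Minimal complementary-filter sketch of allothetic/idiothetic fusion
# (1-D for brevity). A recognized landmark partially corrects the
# drift-prone odometry estimate; with no landmark in view, the robot
# falls back on pure odometry. Gain and values are illustrative.

def fuse(pos_idiothetic, pos_allothetic=None, gain=0.3):
    """Blend odometry with an optional landmark-derived position fix."""
    if pos_allothetic is None:  # no landmark visible: pure dead reckoning
        return pos_idiothetic
    return pos_idiothetic + gain * (pos_allothetic - pos_idiothetic)

# Odometry says 10.0 m travelled, but a recognized landmark implies 9.0 m:
corrected = fuse(10.0, 9.0)   # pulled partway toward the landmark fix
no_fix = fuse(5.0)            # ambiguous scene: odometry unchanged
```

A low gain expresses distrust of ambiguous visual landmarks (perceptual aliasing), while a high gain expresses distrust of imprecise motion estimates, mirroring the trade-off the abstract describes.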

  11. Integration of visual and non-visual self-motion cues during voluntary head movements in the human brain.

    PubMed

    Schindler, Andreas; Bartels, Andreas

    2018-05-15

Our phenomenological experience of the stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head-motion. Here we circumvented these limitations and let participants move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire neural signals related to head motion after the observer's head was stabilized by inflatable aircushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head-motion that was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head-rotation. By comparing congruent with incongruent conditions we found evidence consistent with the multi-modal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human regions MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv) and a region in precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent versus incongruent visual and non-visual cues during physical head-movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. Internal and external spatial attention examined with lateralized EEG power spectra.

    PubMed

    Van der Lubbe, Rob H J; Bundt, Carsten; Abrahamse, Elger L

    2014-10-02

Several authors argued that retrieval of an item from visual short-term memory (internal spatial attention) and focusing attention on an externally presented item (external spatial attention) are similar. Part of the neuroimaging support for this view may be due to the employed experimental procedures. Furthermore, as internal spatial attention may have a more induced than evoked nature, some effects may not have been visible in event-related analyses of the electroencephalogram (EEG), which limits the possibility of demonstrating differences. In the current study, a colored frame cued which stimulus, one out of four presented in separate quadrants, required a response, which depended on the form of the cued stimulus (circle or square). Importantly, the frame occurred either before (precue), simultaneously with (simultaneous cue), or after the stimuli (postcue). The precue and simultaneous cue conditions both concern external attention, while the postcue condition implies the involvement of internal spatial attention. Event-related lateralizations (ERLs), reflecting evoked effects, and lateralized power spectra (LPS), reflecting both evoked and induced effects, were determined. ERLs revealed a posterior contralateral negativity (PCN) only in the precue condition. LPS analyses on the raw EEG showed early increased contralateral theta power at posterior sites and later increased ipsilateral alpha power at occipito-temporal sites in all cue conditions. Responses were faster when the internally or externally attended location corresponded with the required response side than when it did not. These findings provide further support for the view that internal and external spatial attention share their underlying mechanism. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Targeting dopa-sensitive and dopa-resistant gait dysfunction in Parkinson's disease: selective responses to internal and external cues.

    PubMed

    Rochester, Lynn; Baker, Katherine; Nieuwboer, Alice; Burn, David

    2011-02-15

Independence of certain gait characteristics from dopamine replacement therapies highlights the complex pathophysiology of gait in Parkinson's disease (PD). We explored the effect of two different cue strategies on gait characteristics in relation to their response to dopaminergic medications. Fifty people with PD (age 69.22 ± 6.6 years) were studied. Participants walked with and without cues presented in a randomized order. Cue strategies were: (1) internal cue (attention to increase step length) and (2) external cue (auditory cue with instruction to take a large step to the beat). Testing was carried out twice at home (on and off medication). Gait was measured using a Stride Analyzer (B&L Engineering). Gait outcomes were walking speed, stride length, step frequency, and coefficient of variation (CV) of stride time and double limb support duration (DLS). Walking speed, stride length, and stride time CV improved on dopaminergic medications, whereas step frequency and DLS CV did not. Internal and external cues increased stride length and walking speed (on and off dopaminergic medications). Only the external cue significantly improved stride time CV and DLS CV, whereas the internal cue had no effect (on and off dopaminergic medications). Internal and external cues selectively modify gait characteristics in relation to the type of gait disturbance and its dopa-responsiveness. Although internal (attention) and external cues target dopaminergic gait dysfunction (stride length), only external cues target stride-to-stride fluctuations in gait. Despite an overlap with dopaminergic pathways, external cues may effectively address nondopaminergic gait dysfunction and potentially increase mobility and reduce gait instability and falls. Copyright © 2010 Movement Disorder Society.

  14. The role of reverberation-related binaural cues in the externalization of speech.

    PubMed

    Catic, Jasmina; Santurette, Sébastien; Dau, Torsten

    2015-08-01

    The perception of externalization of speech sounds was investigated with respect to the monaural and binaural cues available at the listeners' ears in a reverberant environment. Individualized binaural room impulse responses (BRIRs) were used to simulate externalized sound sources via headphones. The measured BRIRs were subsequently modified such that the proportion of the response containing binaural vs monaural information was varied. Normal-hearing listeners were presented with speech sounds convolved with such modified BRIRs. Monaural reverberation cues were found to be sufficient for the externalization of a lateral sound source. In contrast, for a frontal source, an increased amount of binaural cues from reflections was required in order to obtain well externalized sound images. It was demonstrated that the interaction between the interaural cues of the direct sound and the reverberation strongly affects the perception of externalization. An analysis of the short-term binaural cues showed that the amount of fluctuations of the binaural cues corresponded well to the externalization ratings obtained in the listening tests. The results further suggested that the precedence effect is involved in the auditory processing of the dynamic binaural cues that are utilized for externalization perception.
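The short-term binaural cues analysed in studies like this one are typically interaural level differences (ILD) and interaural time differences (ITD). A rough sketch of extracting both from a stereo snippet; the tone, delay, and sampling rate are illustrative, and this broadband estimate is far simpler than the frequency-band analysis such papers actually use:

```python
import numpy as np

def interaural_cues(left, right, fs):
    """Estimate a broadband ILD (dB) and ITD (s) from one stereo snippet.

    ILD: energy ratio between ears. ITD: lag of the cross-correlation
    peak between the ear signals.
    """
    ild = 10.0 * np.log10(np.sum(left ** 2) / np.sum(right ** 2))
    lags = np.arange(-len(right) + 1, len(left))
    xcorr = np.correlate(left, right, mode="full")
    itd = lags[np.argmax(xcorr)] / fs
    return ild, itd

# Illustrative snippet: a 500 Hz tone reaching the right ear 24 samples
# (~0.5 ms) after the left ear, at equal level (a source toward the left).
fs = 48000
t = np.arange(0, 0.01, 1.0 / fs)
tone = np.sin(2 * np.pi * 500 * t)
delay = 24
left = np.concatenate([tone, np.zeros(delay)])
right = np.concatenate([np.zeros(delay), tone])
ild, itd = interaural_cues(left, right, fs)  # ild ~ 0 dB, |itd| ~ 0.5 ms
```

Tracking how such cues fluctuate across short windows of a reverberant signal is the kind of analysis the abstract relates to externalization ratings.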

  15. An Experimental Study of the Effect of Out-of-the-Window Cues on Training Novice Pilots on a Flight Simulator

    NASA Technical Reports Server (NTRS)

    Khan, M. Javed; Rossi, Marcia; Heath, Bruce; Ali, Syed F.; Ward, Marcus

    2006-01-01

The effects of out-of-the-window cues on learning a straight-in landing approach and a level 360deg turn by novice pilots on a flight simulator have been investigated. The treatments consisted of training with and without visual cues, as well as varying the density of visual cues. The performance of the participants was then evaluated through similar but more challenging tasks. It was observed that the participants in the landing study who trained with visual cues performed more poorly than those who trained without the cues. However, those who trained with a faded-cue sequence performed slightly better than those who trained without visual cues. In the level turn study, it was observed that those who trained with the visual cues performed better than those who trained without visual cues. The study also showed that those participants who trained with a lower density of cues performed better than those who trained with a higher density of visual cues.

  16. Electrophysiological indices of visual food cue-reactivity. Differences in obese, overweight and normal weight women.

    PubMed

    Hume, David John; Howells, Fleur Margaret; Rauch, H G Laurie; Kroff, Jacolene; Lambert, Estelle Victoria

    2015-02-01

    Heightened food cue-reactivity in overweight and obese individuals has been related to aberrant functioning of neural circuitry implicated in motivational behaviours and reward-seeking. Here we explore the neurophysiology of visual food cue-reactivity in overweight and obese women, as compared with normal weight women, by assessing differences in cortical arousal and attentional processing elicited by food and neutral image inserts in a Stroop task with record of EEG spectral band power and ERP responses. Results show excess right frontal (F8) and left central (C3) relative beta band activity in overweight women during food task performance (indicative of pronounced early visual cue-reactivity) and blunted prefrontal (Fp1 and Fp2) theta band activity in obese women during office task performance (suggestive of executive dysfunction). Moreover, as compared to normal weight women, food images elicited greater right parietal (P4) ERP P200 amplitude in overweight women (denoting pronounced early attentional processing) and shorter right parietal (P4) ERP P300 latency in obese women (signifying enhanced and efficient maintained attentional processing). Differential measures of cortical arousal and attentional processing showed significant correlations with self-reported eating behaviour and body shape dissatisfaction, as well as with objectively assessed percent fat mass. The findings of the present study suggest that heightened food cue-reactivity can be neurophysiologically measured, that different neural circuits are implicated in the pathogenesis of overweight and obesity, and that EEG techniques may serve useful in the identification of endophenotypic markers associated with an increased risk of externally mediated food consumption. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Cognitive-behavioral and electrophysiological evidence of the affective consequences of ignoring stimulus representations in working memory.

    PubMed

    De Vito, David; Ferrey, Anne E; Fenske, Mark J; Al-Aidroos, Naseem

    2018-06-01

Ignoring visual stimuli in the external environment leads to decreased liking of those items, a phenomenon attributed to the affective consequences of attentional inhibition. Here we investigated the generality of this "distractor devaluation" phenomenon by asking whether ignoring stimuli represented internally within visual working memory has the same affective consequences. In two experiments we presented participants with two or three visual stimuli and then, after the stimuli were no longer visible, provided an attentional cue indicating which item in memory was the target they would have to later recall, and which were task-irrelevant distractors. Participants subsequently judged how much they liked these stimuli. Previously ignored distractors were consistently rated less favorably than targets, replicating prior findings of distractor devaluation. To gain converging evidence, in Experiment 2 we also examined the electrophysiological processes associated with devaluation by measuring individual differences in attention (N2pc) and working memory (CDA) event-related potentials following the attention cue. Larger amplitude of an N2pc-like component was associated with greater devaluation, suggesting that individuals displaying more effective selection of memory targets (an act aided by distractor inhibition) displayed greater levels of distractor devaluation. Individuals showing a larger post-cue CDA amplitude (but not pre-cue CDA amplitude) also showed greater distractor devaluation, supporting prior evidence that visual working-memory resources have a functional role in effecting devaluation. Together, these findings demonstrate that ignoring working-memory representations has affective consequences, and add to the growing evidence that the contribution of selective-attention mechanisms to a wide range of human thoughts and behaviors leads to devaluation.

  18. [Visual cuing effect for haptic angle judgment].

    PubMed

    Era, Ataru; Yokosawa, Kazuhiko

    2009-08-01

We investigated whether visual cues are useful for judging haptic angles. Participants explored three-dimensional angles with a virtual haptic feedback device. For visual cues, we used a location cue, which synchronized with haptic exploration, and a space cue, which specified the haptic space. In Experiment 1, angles were judged more accurately with both cues, but were overestimated with a location cue only. In Experiment 2, the visual cues emphasized depth; overestimation with location cues occurred, but space cues had no influence. The results showed that (a) when both cues are presented, haptic angles are judged more accurately; (b) location cues facilitate only motion information, not depth information; and (c) haptic angles are apt to be overestimated when both haptic and visual information are present.

  19. Modulation of auditory spatial attention by visual emotional cues: differential effects of attentional engagement and disengagement for pleasant and unpleasant cues.

    PubMed

    Harrison, Neil R; Woodhouse, Rob

    2016-05-01

Previous research has demonstrated that threatening, compared to neutral, pictures can bias attention towards non-emotional auditory targets. Here we investigated which subcomponents of attention contributed to the influence of emotional visual stimuli on auditory spatial attention. Participants indicated the location of an auditory target after brief (250 ms) presentation of a spatially non-predictive peripheral visual cue. Responses to targets were faster at the location of the preceding visual cue than at the opposite location (cue validity effect). The cue validity effect was larger for targets following pleasant and unpleasant cues compared to neutral cues, for right-sided targets. For unpleasant cues, the crossmodal cue validity effect was driven by delayed attentional disengagement, and for pleasant cues, it was driven by enhanced engagement. We conclude that both pleasant and unpleasant visual cues influence the distribution of attention across modalities and that the associated attentional mechanisms depend on the valence of the visual cue.
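The cue validity effect described in this record is typically computed as the difference between mean reaction times on invalidly and validly cued trials. A minimal sketch of that computation; the reaction-time values below are made up for illustration, not data from the study:

```python
# Cue validity effect: mean reaction time (RT) on invalidly cued trials
# minus mean RT on validly cued trials. A positive value means targets
# at the cued location were responded to faster. RTs (ms) are illustrative.
from statistics import mean

valid_rts = [412, 398, 405, 420, 391]    # target at the cued location
invalid_rts = [455, 462, 448, 470, 459]  # target at the opposite location

validity_effect_ms = mean(invalid_rts) - mean(valid_rts)
```

Comparing this difference score across cue types (pleasant, unpleasant, neutral) is what lets the authors attribute the effect to engagement versus disengagement.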

  20. Context cue-dependent saccadic adaptation in rhesus macaques cannot be elicited using color

    PubMed Central

    Smalianchuk, Ivan; Khanna, Sanjeev B.; Smith, Matthew A.; Gandhi, Neeraj J.

    2015-01-01

    When the head does not move, rapid movements of the eyes called saccades are used to redirect the line of sight. Saccades are defined by a series of metrical and kinematic (evolution of a movement as a function of time) relationships. For example, the amplitude of a saccade made from one visual target to another is roughly 90% of the distance between the initial fixation point (T0) and the peripheral target (T1). However, this stereotypical relationship between saccade amplitude and initial retinal error (T1-T0) may be altered, either increased or decreased, by surreptitiously displacing a visual target during an ongoing saccade. This form of motor learning (called saccadic adaptation) has been described in both humans and monkeys. Recent experiments in humans and monkeys have suggested that internal (proprioceptive) and external (target shape, color, and/or motion) cues may be used to produce context-dependent adaptation. We tested the hypothesis that an external contextual cue (target color) could be used to evoke differential gain (actual saccade/initial retinal error) states in rhesus monkeys. We did not observe differential gain states correlated with target color regardless of whether targets were displaced along the same vector as the primary saccade or perpendicular to it. Furthermore, this observation held true regardless of whether adaptation trials using various colors and intrasaccade target displacements were randomly intermixed or presented in short or long blocks of trials. These results are consistent with hypotheses that state that color cannot be used as a contextual cue and are interpreted in light of previous studies of saccadic adaptation in both humans and monkeys. PMID:25995353
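The gain measure in this record is the ratio of actual saccade amplitude to initial retinal error (T1-T0), and adaptation shifts that gain over trials in response to intrasaccadic target steps. A minimal sketch with illustrative numbers; the error-correction rule, its rate constant k, and the trial count are assumptions for illustration, not values from the study:

```python
# Saccadic gain as defined in the record: actual saccade amplitude
# divided by initial retinal error (T1 - T0).
def saccade_gain(amplitude_deg, retinal_error_deg):
    return amplitude_deg / retinal_error_deg

# A typical unadapted saccade lands at roughly 90% of a 10 deg target step.
g_baseline = saccade_gain(9.0, 10.0)

# Hypothetical adaptation rule: on each trial the intrasaccadic step
# shifts the target, and a fraction k of the resulting post-saccadic
# error updates the gain.
def adapt(gain, target_deg, step_deg, k=0.05, trials=100):
    for _ in range(trials):
        amplitude = gain * target_deg
        post_saccadic_error = (target_deg + step_deg) - amplitude
        gain += k * post_saccadic_error / target_deg
    return gain

# A consistent 2 deg backward step drives the gain from 1.0 toward 0.8.
g_adapted = adapt(1.0, 10.0, -2.0)
```

Context-dependent adaptation would require two such gain states, each retrieved by its cue; the record's finding is that target color alone fails to separate them.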

  1. Should visual speech cues (speechreading) be considered when fitting hearing aids?

    NASA Astrophysics Data System (ADS)

    Grant, Ken

    2002-05-01

    When talker and listener are face-to-face, visual speech cues become an important part of the communication environment, and yet, these cues are seldom considered when designing hearing aids. Models of auditory-visual speech recognition highlight the importance of complementary versus redundant speech information for predicting auditory-visual recognition performance. Thus, for hearing aids to work optimally when visual speech cues are present, it is important to know whether the cues provided by amplification and the cues provided by speechreading complement each other. In this talk, data will be reviewed that show nonmonotonicity between auditory-alone speech recognition and auditory-visual speech recognition, suggesting that efforts designed solely to improve auditory-alone recognition may not always result in improved auditory-visual recognition. Data will also be presented showing that one of the most important speech cues for enhancing auditory-visual speech recognition performance, voicing, is often the cue that benefits least from amplification.

  2. External cues challenging the internal appetite control system-Overview and practical implications.

    PubMed

    Bilman, Els; van Kleef, Ellen; van Trijp, Hans

    2017-09-02

Inadequate regulation of food intake plays an important role in the development of overweight and obesity, and is under the influence of both the internal appetite control system and external environmental cues. Especially in environments where food is overly available, external cues seem to override and/or undermine internal signals, posing serious challenges to the accurate regulation of food intake. By structuring these external cues around five different phases in the food consumption process, this paper aims to provide an overview of the wide range of external cues that potentially facilitate or hamper internal signals and thereby influence food intake. For each of the five phases of the food consumption process (meal initiation, meal planning, consumption, end of the eating episode, and time until the next meal), the most relevant internal signals are discussed, and it is explained how specific external cues exert their influence.

  3. Hierarchical acquisition of visual specificity in spatial contextual cueing.

    PubMed

    Lie, Kin-Pou

    2015-01-01

Spatial contextual cueing refers to the improvement of visual search performance when invariant associations between target locations and distractor spatial configurations are learned incidentally. Using the instance theory of automatization and the reverse hierarchy theory of visual perceptual learning, this study explores the acquisition of visual specificity in spatial contextual cueing. Two experiments in which detailed visual features were irrelevant for distinguishing between spatial contexts found that spatial contextual cueing was visually generic in difficult trials when the trials were not preceded by easy trials (Experiment 1) but that spatial contextual cueing progressed to visual specificity when difficult trials were preceded by easy trials (Experiment 2). These findings support reverse hierarchy theory, which predicts that even when detailed visual features are irrelevant for distinguishing between spatial contexts, spatial contextual cueing can progress to visual specificity if the stimuli remain constant, the task is difficult, and difficult trials are preceded by easy trials. However, these findings are inconsistent with instance theory, which predicts that when detailed visual features are irrelevant for distinguishing between spatial contexts, spatial contextual cueing will not progress to visual specificity. This study concludes that the acquisition of visual specificity in spatial contextual cueing is more plausibly hierarchical, rather than instance-based.

  4. Speech identification in noise: Contribution of temporal, spectral, and visual speech cues.

    PubMed

    Kim, Jeesun; Davis, Chris; Groot, Christopher

    2009-12-01

    This study investigated the degree to which two types of reduced auditory signals (cochlear implant simulations) and visual speech cues combined for speech identification. The auditory speech stimuli were filtered to have only amplitude envelope cues or both amplitude envelope and spectral cues and were presented with/without visual speech. In Experiment 1, IEEE sentences were presented in quiet and noise. For in-quiet presentation, speech identification was enhanced by the addition of both spectral and visual speech cues. Due to a ceiling effect, the degree to which these effects combined could not be determined. In noise, these facilitation effects were more marked and were additive. Experiment 2 examined consonant and vowel identification in the context of CVC or VCV syllables presented in noise. For consonants, both spectral and visual speech cues facilitated identification and these effects were additive. For vowels, the effect of combined cues was underadditive, with the effect of spectral cues reduced when presented with visual speech cues. Analysis indicated that without visual speech, spectral cues facilitated the transmission of place information and vowel height, whereas with visual speech, they facilitated lip rounding, with little impact on the transmission of place information.

  5. Functions of external cues in prospective memory.

    DOT National Transportation Integrated Search

    1995-02-01

A simulation of an air traffic control task was the setting for an investigation of the functions of external cues in prospective memory. External cues can support the triggering of an action or memory for the content of the action. We focused on m...

  6. Absence of Visual Input Results in the Disruption of Grid Cell Firing in the Mouse.

    PubMed

    Chen, Guifen; Manson, Daniel; Cacucci, Francesca; Wills, Thomas Joseph

    2016-09-12

Grid cells are spatially modulated neurons within the medial entorhinal cortex whose firing fields are arranged at the vertices of tessellating equilateral triangles [1]. The exquisite periodicity of their firing has led to the suggestion that they represent a path integration signal, tracking the organism's position by integrating speed and direction of movement [2-10]. External sensory inputs are required to reset any errors that the path integrator would inevitably accumulate. Here we probe the nature of the external sensory inputs required to sustain grid firing, by recording grid cells as mice explore familiar environments in complete darkness. The absence of visual cues results in a significant disruption of grid cell firing patterns, even when the quality of the directional information provided by head direction cells is largely preserved. Darkness alters the expression of velocity signaling within the entorhinal cortex, with changes evident in grid cell firing rate and the local field potential theta frequency. Short-term (<1.5 s) spike timing relationships between grid cell pairs are preserved in the dark, indicating that network patterns of excitatory and inhibitory coupling between grid cells exist independently of visual input and of spatially periodic firing. However, we find no evidence of preserved hexagonal symmetry in the spatial firing of single grid cells at comparable short timescales. Taken together, these results demonstrate that visual input is required to sustain grid cell periodicity and stability in mice and suggest that grid cells in mice cannot perform accurate path integration in the absence of reliable visual cues. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.

  7. Tiger salamanders' (Ambystoma tigrinum) response learning and usage of visual cues.

    PubMed

    Kundey, Shannon M A; Millar, Roberto; McPherson, Justin; Gonzalez, Maya; Fitz, Aleyna; Allen, Chadbourne

    2016-05-01

We explored tiger salamanders' (Ambystoma tigrinum) learning to execute a response within a maze as proximal visual cue conditions varied. In Experiment 1, salamanders learned to turn consistently in a T-maze for reinforcement before the maze was rotated. All learned the initial task and executed the trained turn during test, suggesting that they learned to demonstrate the reinforced response during training and continued to perform it during test. In a second experiment utilizing a similar procedure, two visual cues were placed consistently at the maze junction. Salamanders were reinforced for turning towards one cue. Cue placement was reversed during test. All learned the initial task, but executed the trained turn rather than turning towards the visual cue during test, evidencing response learning. In Experiment 3, we investigated whether a compound visual cue could control salamanders' behaviour when it was the only cue predictive of reinforcement in a cross-maze by varying start position and cue placement. All learned to turn in the direction indicated by the compound visual cue, indicating that visual cues can come to control their behaviour. Following training, testing revealed that salamanders attended to foreground stimuli over background features. Overall, these results suggest that salamanders learn to execute responses over learning to use visual cues but can use visual cues if required. Our success with this paradigm offers the potential in future studies to explore salamanders' cognition further, as well as to shed light on how features of the tiger salamanders' life history (e.g. hibernation and metamorphosis) impact cognition.

  8. Recovering faces from memory: the distracting influence of external facial features.

    PubMed

    Frowd, Charlie D; Skelton, Faye; Atherton, Chris; Pitchford, Melanie; Hepton, Gemma; Holden, Laura; McIntyre, Alex H; Hancock, Peter J B

    2012-06-01

    Recognition memory for unfamiliar faces is facilitated when contextual cues (e.g., head pose, background environment, hair and clothing) are consistent between study and test. By contrast, inconsistencies in external features, especially hair, promote errors in unfamiliar face-matching tasks. For the construction of facial composites, as carried out by witnesses and victims of crime, the role of external features (hair, ears, and neck) is less clear, although research does suggest their involvement. Here, over three experiments, we investigate the impact of external features for recovering facial memories using a modern, recognition-based composite system, EvoFIT. Participant-constructors inspected an unfamiliar target face and, one day later, repeatedly selected items from arrays of whole faces, with "breeding," to "evolve" a composite with EvoFIT; further participants (evaluators) named the resulting composites. In Experiment 1, the important internal-features (eyes, brows, nose, and mouth) were constructed more identifiably when the visual presence of external features was decreased by Gaussian blur during construction: higher blur yielded more identifiable internal-features. In Experiment 2, increasing the visible extent of external features (to match the target's) in the presented face-arrays also improved internal-features quality, although less so than when external features were masked throughout construction. Experiment 3 demonstrated that masking external-features promoted substantially more identifiable images than using the previous method of blurring external-features. Overall, the research indicates that external features are a distractive rather than a beneficial cue for face construction; the results also provide a much better method to construct composites, one that should dramatically increase identification of offenders.

  9. Effects of Presentation Type and Visual Control in Numerosity Discrimination: Implications for Number Processing?

    PubMed Central

    Smets, Karolien; Moors, Pieter; Reynvoet, Bert

    2016-01-01

    Performance in a non-symbolic comparison task in which participants are asked to indicate the larger numerosity of two dot arrays, is assumed to be supported by the Approximate Number System (ANS). This system allows participants to judge numerosity independently from other visual cues. Supporting this idea, previous studies indicated that numerosity can be processed when visual cues are controlled for. Consequently, distinct types of visual cue control are assumed to be interchangeable. However, a previous study showed that the type of visual cue control affected performance using a simultaneous presentation of the stimuli in numerosity comparison. In the current study, we explored whether the influence of the type of visual cue control on performance disappeared when sequentially presenting each stimulus in numerosity comparison. While the influence of the applied type of visual cue control was significantly more evident in the simultaneous condition, sequentially presenting the stimuli did not completely exclude the influence of distinct types of visual cue control. Altogether, these results indicate that the implicit assumption that it is possible to compare performances across studies with a differential visual cue control is unwarranted and that the influence of the type of visual cue control partly depends on the presentation format of the stimuli. PMID:26869967

  10. Visual Cues, Verbal Cues and Child Development

    ERIC Educational Resources Information Center

    Valentini, Nadia

    2004-01-01

    In this article, the author discusses two strategies--visual cues (modeling) and verbal cues (short, accurate phrases) which are related to teaching motor skills in maximizing learning in physical education classes. Both visual and verbal cues are strong influences in facilitating and promoting day-to-day learning. Both strategies reinforce…

  11. Estimating the relative weights of visual and auditory tau versus heuristic-based cues for time-to-contact judgments in realistic, familiar scenes by older and younger adults.

    PubMed

    Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel

    2017-04-01

    Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we evaluated TTC estimates by using a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
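The τ variable described in this record is the ratio of an object's instantaneous optical size to its instantaneous rate of change in optical size, which approximates time to contact under a constant closing speed. A minimal sketch of a τ-based TTC estimate; the vehicle width, distance, speed, and sampling interval are hypothetical values for illustration:

```python
import math

def optical_angle(width_m, distance_m):
    """Visual angle (radians) subtended by an object of a given width."""
    return 2.0 * math.atan(width_m / (2.0 * distance_m))

def tau_estimate(theta_now, theta_prev, dt_s):
    """TTC estimate tau = theta / (d theta / dt), using a discrete
    backward difference for the rate of optical expansion."""
    dtheta_dt = (theta_now - theta_prev) / dt_s
    return theta_now / dtheta_dt

# Hypothetical scenario: a 1.8 m wide vehicle, 50 m away, approaching at
# 10 m/s, with the optical angle sampled 0.1 s apart. True TTC at the
# second sample is about 4.9 s; tau recovers it closely.
speed_mps, dt_s = 10.0, 0.1
th0 = optical_angle(1.8, 50.0)
th1 = optical_angle(1.8, 50.0 - speed_mps * dt_s)
ttc_s = tau_estimate(th1, th0, dt_s)
```

A heuristic observer, by contrast, would respond to th1 itself (final optical size) rather than to the ratio, which is the contrast the cue-weighting analysis exploits.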

  12. Saccade frequency response to visual cues during gait in Parkinson's disease: the selective role of attention.

    PubMed

    Stuart, Samuel; Lord, Sue; Galna, Brook; Rochester, Lynn

    2018-04-01

Gait impairment is a core feature of Parkinson's disease (PD) with implications for falls risk. Visual cues improve gait in PD, but the underlying mechanisms are unclear. Evidence suggests that attention and vision play an important role; however, the relative contribution of each is unclear. Measurement of visual exploration (specifically saccade frequency) during gait allows for real-time measurement of attention and vision. Understanding how visual cues influence visual exploration may allow inferences about the underlying mechanisms of response, which could help to develop effective therapeutics. This study aimed to examine saccade frequency during gait in response to a visual cue in PD and older adults and to investigate the roles of attention and vision in visual cue response in PD. A mobile eye-tracker measured saccade frequency during gait in 55 people with PD and 32 age-matched controls. Participants walked in a straight line with and without a visual cue (50 cm transverse lines) presented under single-task and dual-task (concurrent digit span recall) conditions. Saccade frequency was reduced when walking in PD compared to controls; however, visual cues ameliorated this saccadic deficit, significantly increasing saccade frequency in both PD and controls under both single-task and dual-task conditions. Attention, rather than visual function, was central to saccade frequency and gait response to visual cues in PD. In conclusion, this study highlights the impact of visual cues on visual exploration when walking and the important role of attention in PD. Understanding these complex features will help inform intervention development. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  13. Circadian timed episodic-like memory - a bee knows what to do when, and also where.

    PubMed

    Pahl, Mario; Zhu, Hong; Pix, Waltraud; Tautz, Juergen; Zhang, Shaowu

    2007-10-01

This study investigates how the colour, shape and location of patterns could be memorized within a time frame. Bees were trained to visit two Y-mazes, one of which presented yellow vertical (rewarded) versus horizontal (non-rewarded) gratings at one site in the morning, while another presented blue horizontal (rewarded) versus vertical (non-rewarded) gratings at another site in the afternoon. The bees could perform well in the learning tests and various transfer tests, in which (i) all contextual cues from the learning test were present; (ii) the colour cues of the visual patterns were removed, but the location cue, the orientation of the visual patterns and the temporal cue still existed; (iii) the location cue was removed, but other contextual cues, i.e. the colour and orientation of the visual patterns and the temporal cue, still existed; (iv) the location cue and the orientation cue of the visual patterns were removed, but the colour cue and temporal cue still existed; (v) the location cue and the colour cue of the visual patterns were removed, but the orientation cue and the temporal cue still existed. The results reveal that the honeybee can recall the memory of the correct visual patterns by using spatial and/or temporal information. The relative importance of different contextual cues is compared and discussed. The bees' ability to integrate elements of circadian time, place and visual stimuli is akin to episodic-like memory; we have therefore named this kind of memory circadian timed episodic-like memory.

  14. Impact of External Cue Validity on Driving Performance in Parkinson's Disease

    PubMed Central

    Scally, Karen; Charlton, Judith L.; Iansek, Robert; Bradshaw, John L.; Moss, Simon; Georgiou-Karistianis, Nellie

    2011-01-01

    This study sought to investigate the impact of external cue validity on simulated driving performance in 19 Parkinson's disease (PD) patients and 19 healthy age-matched controls. Braking points and distance between deceleration point and braking point were analysed for red traffic signals preceded either by Valid Cues (correctly predicting signal), Invalid Cues (incorrectly predicting signal), and No Cues. Results showed that PD drivers braked significantly later and travelled significantly further between deceleration and braking points compared with controls for Invalid and No-Cue conditions. No significant group differences were observed for driving performance in response to Valid Cues. The benefit of Valid Cues relative to Invalid Cues and No Cues was significantly greater for PD drivers compared with controls. Trail Making Test (B-A) scores correlated with driving performance for PDs only. These results highlight the importance of external cues and higher cognitive functioning for driving performance in mild to moderate PD. PMID:21789275

  15. Improving visual spatial working memory in younger and older adults: effects of cross-modal cues.

    PubMed

    Curtis, Ashley F; Turner, Gary R; Park, Norman W; Murtha, Susan J E

    2017-11-06

    Spatially informative auditory and vibrotactile (cross-modal) cues can facilitate attention but little is known about how similar cues influence visual spatial working memory (WM) across the adult lifespan. We investigated the effects of cues (spatially informative or alerting pre-cues vs. no cues), cue modality (auditory vs. vibrotactile vs. visual), memory array size (four vs. six items), and maintenance delay (900 vs. 1800 ms) on visual spatial location WM recognition accuracy in younger adults (YA) and older adults (OA). We observed a significant interaction between spatially informative pre-cue type, array size, and delay. OA and YA benefitted equally from spatially informative pre-cues, suggesting that attentional orienting prior to WM encoding, regardless of cue modality, is preserved with age.  Contrary to predictions, alerting pre-cues generally impaired performance in both age groups, suggesting that maintaining a vigilant state of arousal by facilitating the alerting attention system does not help visual spatial location WM.

  16. Western Diet and the Weakening of the Interoceptive Stimulus Control of Appetitive Behavior

    PubMed Central

    Sample, Camille H.; Jones, Sabrina; Hargrave, Sara L.; Jarrard, Leonard E.; Davidson, Terry L.

    2017-01-01

In obesogenic environments food-related external cues are thought to overwhelm internal cues that normally regulate energy intake. We investigated how this shift from internal to external stimulus control might occur. Experiment 1 showed that rats could use stimuli arising from 0 and 4 h food deprivation to predict sucrose delivery. Experiment 2 then examined (a) the ability of these deprivation cues to compete with external cues and (b) how consuming a Western-style diet (WD) affects that competition. Rats were trained to use both their deprivation cues and external cues as compound discriminative stimuli. Half of the rats were then placed on WD while the others remained on chow, and external cues were removed to assess learning about deprivation state cues. When tested with external cues removed, chow-fed rats continued to discriminate using only deprivation cues, while WD-fed rats did not. The WD-fed group performed similarly to control groups trained with a noncontingent relationship between deprivation cues and sucrose reinforcement. Previous studies provided evidence that discrimination based on interoceptive deprivation cues depends on the hippocampus and that WD intake could interfere with hippocampal functioning. A third experiment assessed the effects of neurotoxic hippocampal lesions on weight gain and on sensitivity to the appetite-suppressing effects of the satiety hormone cholecystokinin (CCK). Relative to controls, hippocampal-lesioned rats gained more weight and showed reduced sensitivity to a 1.0 μg, but not a 2.0 or 4.0 μg, dose of CCK. These findings suggest that WD intake reduces utilization of interoceptive energy state signals to regulate appetitive behavior via a mechanism that involves the hippocampus. PMID:27312269

  17. Multimodal communication, mismatched messages and the effects of turbidity on the antipredator behavior of the Barton Springs salamander, Eurycea sosorum.

    PubMed

    Zabierek, Kristina C; Gabor, Caitlin R

    2016-09-01

Prey may use multiple sensory channels to detect predators, whose cues may differ in altered sensory environments, such as turbid conditions. Depending on the environment, prey may use cues in an additive/complementary manner or in a compensatory manner. First, to determine whether the purely aquatic Barton Springs salamander, Eurycea sosorum, shows an antipredator response to visual cues, we examined their activity when exposed to either visual cues of a predatory fish (Lepomis cyanellus) or a non-predatory fish (Etheostoma lepidum). Salamanders decreased activity in response to predator visual cues only. Then, we examined the antipredator response of these salamanders to all matched and mismatched combinations of chemical and visual cues of the same predatory and non-predatory fish in clear and low-turbidity conditions. Salamanders decreased activity in response to predator chemical cues matched with predator visual cues or mismatched with non-predator visual cues. Salamanders also increased latency to first move to predator chemical cues mismatched with non-predator visual cues. Salamanders decreased activity and increased latency to first move more in clear as opposed to turbid conditions in all treatment combinations. Our results indicate that salamanders under all conditions and treatments preferentially rely on chemical cues to determine antipredator behavior, although visual cues are potentially utilized in conjunction for latency to first move. Our results also have potential conservation implications, as decreased antipredator behavior was seen in turbid conditions. These results reveal the complexity of antipredator behavior in response to multiple cues under different environmental conditions, which is especially important when considering endangered species. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Western-style diet impairs stimulus control by food deprivation state cues. Implications for obesogenic environments☆

    PubMed Central

    Sample, Camille H.; Martin, Ashley A.; Jones, Sabrina; Hargrave, Sara L.; Davidson, Terry L.

    2015-01-01

In western and westernized societies, large portions of the population live in what are considered to be "obesogenic" environments. Among other things, obesogenic environments are characterized by a high prevalence of external cues that are associated with highly palatable, energy-dense foods. One prominent hypothesis suggests that these external cues become such powerful conditioned elicitors of appetitive and eating behavior that they overwhelm the internal, physiological mechanisms that serve to maintain energy balance. The present research investigated a learning mechanism that may underlie this loss of internal relative to external control. In Experiment 1, rats were provided with both auditory cues (external stimuli) and varying levels of food deprivation (internal stimuli) that they could use to solve a simple discrimination task. Despite having access to clearly discriminable external cues, we found that the deprivation cues gained substantial discriminative control over conditioned responding. Experiment 2 found that, compared to standard chow, maintenance on a "western-style" diet high in saturated fat and sugar weakened discriminative control by food deprivation cues, but did not impair learning when external cues were also trained as relevant discriminative signals for sucrose. Thus, eating a western-style diet contributed to a loss of internal control over appetitive behavior relative to external cues. We discuss how this relative loss of control by food deprivation signals may result from interference with hippocampal-dependent learning and memory processes, forming the basis of a vicious cycle of excessive intake, body weight gain, and progressive cognitive decline that may begin very early in life. PMID:26002280

  19. Dual Learning Processes in Interactive Skill Acquisition

    ERIC Educational Resources Information Center

    Fu, Wai-Tat; Anderson, John R.

    2008-01-01

    Acquisition of interactive skills involves the use of internal and external cues. Experiment 1 showed that when actions were interdependent, learning was effective with and without external cues in the single-task condition but was effective only with the presence of external cues in the dual-task condition. In the dual-task condition, actions…

  20. Visual search and the aging brain: discerning the effects of age-related brain volume shrinkage on alertness, feature binding, and attentional control.

    PubMed

    Müller-Oehring, Eva M; Schulte, Tilman; Rohlfing, Torsten; Pfefferbaum, Adolf; Sullivan, Edith V

    2013-01-01

    Decline in visuospatial abilities with advancing age has been attributed to a demise of bottom-up and top-down functions involving sensory processing, selective attention, and executive control. These functions may be differentially affected by age-related volume shrinkage of subcortical and cortical nodes subserving the dorsal and ventral processing streams and the corpus callosum mediating interhemispheric information exchange. Fifty-five healthy adults (25-84 years) underwent structural MRI and performed a visual search task to test perceptual and attentional demands by combining feature-conjunction searches with "gestalt" grouping and attentional cueing paradigms. Poorer conjunction, but not feature, search performance was related to older age and volume shrinkage of nodes in the dorsolateral processing stream. When displays allowed perceptual grouping through distractor homogeneity, poorer conjunction-search performance correlated with smaller ventrolateral prefrontal cortical and callosal volumes. An alerting cue attenuated age effects on conjunction search, and the alertness benefit was associated with thalamic, callosal, and temporal cortex volumes. Our results indicate that older adults can capitalize on early parallel stages of visual information processing, whereas age-related limitations arise at later serial processing stages requiring self-guided selective attention and executive control. These limitations are explained in part by age-related brain volume shrinkage and can be mitigated by external cues.

  1. Visual cue-specific craving is diminished in stressed smokers.

    PubMed

    Cochran, Justinn R; Consedine, Nathan S; Lee, John M J; Pandit, Chinmay; Sollers, John J; Kydd, Robert R

    2017-09-01

    Craving among smokers is increased by stress and by exposure to smoking-related visual cues. However, few experimental studies have tested both elicitors concurrently or considered how the exposures may interact to influence craving. The current study examined craving in response to stress and visual cue exposure, separately and in succession, in order to better understand the relationship between craving elicitation and the elicitor. Thirty-nine smokers (21 males) who had abstained from smoking for 30 minutes were randomized to complete a stress task and a visual cue task in counterbalanced order (creating the experimental groups); for the cue task, counterbalanced blocks of neutral, motivational control, and smoking images were presented. Self-reported craving was assessed after each block of visual stimuli, after the stress task, and after a recovery period following each task. As expected, the stress task and smoking images generated greater craving than neutral or motivational control images (p < .001). Interactions indicated that craving in those who completed the stress task first differed from craving in those who completed the visual cue task first (p < .05), such that stress-task craving was greater than craving for all image types (all p's < .05) only if the visual cue task was completed first. Conversely, craving was stable across image types when the stress task was completed first. Findings indicate that when smokers are stressed, visual cues have little additive effect on craving, and different types of visual cues elicit comparable craving. These findings may imply that once stressed, smokers will crave cigarettes comparably regardless of whether they are exposed to smoking image cues.

  2. Getting more from visual working memory: Retro-cues enhance retrieval and protect from visual interference.

    PubMed

    Souza, Alessandra S; Rerko, Laura; Oberauer, Klaus

    2016-06-01

    Visual working memory (VWM) has a limited capacity. This limitation can be mitigated by the use of focused attention: if attention is drawn to the relevant working memory content before test, performance improves (the so-called retro-cue benefit). This study tests 2 explanations of the retro-cue benefit: (a) Focused attention protects memory representations from interference by visual input at test, and (b) focusing attention enhances retrieval. Across 6 experiments using color recognition and color reproduction tasks, we varied the amount of color interference at test, and the delay between a retrieval cue (i.e., the retro-cue) and the memory test. Retro-cue benefits were larger when the memory test introduced interfering visual stimuli, showing that the retro-cue effect is due in part to protection from visual interference. However, when visual interference was held constant, retro-cue benefits were still obtained whenever the retro-cue enabled retrieval of an object from VWM but delayed response selection. Our results show that accessible information in VWM can be lost in the process of testing memory, owing to visual interference and incomplete retrieval. This is not an inevitable state of affairs, though: Focused attention can be used to get the most out of VWM. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  3. Perceptual upright: the relative effectiveness of dynamic and static images under different gravity states.

    PubMed

    Jenkin, Michael R; Dyde, Richard T; Jenkin, Heather L; Zacher, James E; Harris, Laurence R

    2011-01-01

    The perceived direction of up depends on both gravity and visual cues to orientation. Static visual cues to orientation have been shown to be less effective in influencing the perception of upright (PU) under microgravity conditions than they are on earth (Dyde et al., 2009). Here we introduce dynamic orientation cues into the visual background to ascertain whether they might increase the effectiveness of visual cues in defining the PU under different gravity conditions. Brief periods of microgravity and hypergravity were created using parabolic flight. Observers viewed a polarized, natural scene presented at various orientations on a laptop viewed through a hood which occluded all other visual cues. The visual background was either an animated video clip in which actors moved along the visual ground plane or an individual static frame taken from the same clip. We measured the perceptual upright using the oriented character recognition test (OCHART). Dynamic visual cues significantly enhance the effectiveness of vision in determining the perceptual upright under normal gravity conditions. Strong trends were found for dynamic visual cues to produce an increase in the visual effect under both microgravity and hypergravity conditions.

  4. The effect of contextual sound cues on visual fidelity perception.

    PubMed

    Rojas, David; Cowan, Brent; Kapralos, Bill; Collins, Karen; Dubrowski, Adam

    2014-01-01

    Previous work has shown that sound can affect the perception of visual fidelity. Here we build upon this previous work by examining the effect of contextual sound cues (i.e., sounds that are related to the visuals) on visual fidelity perception. Results suggest that contextual sound cues do influence visual fidelity perception and, more specifically, our perception of visual fidelity increases with contextual sound cues. These results have implications for designers of multimodal virtual worlds and serious games that, with the appropriate use of contextual sounds, can reduce visual rendering requirements without a corresponding decrease in the perception of visual fidelity.

  5. Auditory and visual cueing modulate cycling speed of older adults and persons with Parkinson's disease in a Virtual Cycling (V-Cycle) system.

    PubMed

    Gallagher, Rosemary; Damodaran, Harish; Werner, William G; Powell, Wendy; Deutsch, Judith E

    2016-08-19

    Evidence-based virtual environments (VEs) that incorporate compensatory strategies such as cueing may change motor behavior and increase exercise intensity while also being engaging and motivating. The purpose of this study was to determine if persons with Parkinson's disease and age-matched healthy adults responded to auditory and visual cueing embedded in a bicycling VE as a method to increase exercise intensity. We tested two groups of participants, persons with Parkinson's disease (PD) (n = 15) and age-matched healthy adults (n = 13), as they cycled on a stationary bicycle while interacting with a VE. Participants cycled under two conditions: auditory cueing (provided by a metronome) and visual cueing (represented as central road markers in the VE). The auditory condition had four trials in which auditory cues or the VE were presented alone or in combination. The visual condition had five trials in which the VE and visual cue rate presentation were manipulated. Data were analyzed by condition using factorial RM-ANOVAs with planned t-tests corrected for multiple comparisons. There were no differences in pedaling rates between groups for either the auditory or the visual cueing condition. Persons with PD increased their pedaling rate in the auditory (F = 4.78, p = 0.029) and visual cueing (F = 26.48, p < 0.001) conditions. Age-matched healthy adults also increased their pedaling rate in the auditory (F = 24.72, p < 0.001) and visual cueing (F = 40.69, p < 0.001) conditions. Trial-to-trial comparisons in the visual condition in age-matched healthy adults showed a step-wise increase in pedaling rate (p = 0.003 to p < 0.001). In contrast, persons with PD increased their pedaling rate only when explicitly instructed to attend to the visual cues (p < 0.001). An evidence-based cycling VE can modify pedaling rate in persons with PD and age-matched healthy adults. Persons with PD required attention directed to the visual cues in order to obtain an increase in cycling intensity. The combination of the VE and auditory cues was neither additive nor interfering. These data serve as preliminary evidence that embedding auditory and visual cues in a VE can alter cycling speed and thereby increase exercise intensity, which may promote fitness.

  6. Effects of visual focus and gait speed on walking balance in the frontal plane.

    PubMed

    Goodworth, Adam; Perrone, Kathryn; Pillsbury, Mark; Yargeau, Michelle

    2015-08-01

    We investigated how head position and gait speed influenced frontal plane balance responses to external perturbations during gait. Thirteen healthy participants walked on a treadmill at three different gait speeds. Visual conditions included either focus downward on the lower extremities and walking surface only, or focus forward on a stationary scene with horizontal and vertical lines. The treadmill was positioned on a platform that was stationary (non-perturbed) or moving in a pattern that appeared random to the subjects (perturbed). In non-perturbed walking, medial-lateral upper body motion was very similar between visual conditions. However, in perturbed walking, there was significantly less body motion when focus was on the stationary visual scene, suggesting that visual feedback from stationary vertical and horizontal cues is particularly important when balance is challenged. Sensitivity of body motion to perturbations was significantly decreased by increasing gait speed, suggesting that faster walking was less sensitive to frontal plane perturbations. Finally, our use of external perturbations supported the idea that certain differences in balance control mechanisms can only be detected in more challenging situations, which is an important consideration for approaches to investigating sensory contributions to balance during gait. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Obese adults have visual attention bias for food cue images: evidence for altered reward system function.

    PubMed

    Castellanos, E H; Charboneau, E; Dietrich, M S; Park, S; Bradley, B P; Mogg, K; Cowan, R L

    2009-09-01

    The major aim of this study was to investigate whether the motivational salience of food cues (as reflected by their attention-grabbing properties) differs between obese and normal-weight subjects in a manner consistent with altered reward system function in obesity. A total of 18 obese and 18 normal-weight, otherwise healthy, adult women between the ages of 18 and 35 participated in an eye-tracking paradigm in combination with a visual probe task. Eye movements and reaction time to food and non-food images were recorded during both fasted and fed conditions in a counterbalanced design. Eating behavior and hunger level were assessed by self-report measures. Obese individuals had higher scores than normal-weight individuals on self-report measures of responsiveness to external food cues and vulnerability to disruptions in control of eating behavior. Both obese and normal-weight individuals demonstrated increased gaze duration for food compared to non-food images in the fasted condition. In the fed condition, however, despite reduced hunger in both groups, obese individuals maintained the increased attention to food images, whereas normal-weight individuals had similar gaze duration for food and non-food images. Additionally, obese individuals had preferential orienting toward food images at the onset of each image. Obese and normal-weight individuals did not differ in reaction time measures in the fasted or fed condition. Food cue incentive salience is elevated equally in normal-weight and obese individuals during fasting. Obese individuals retain incentive salience for food cues despite feeding and decreased self-report of hunger. Sensitization to food cues in the environment and their dysregulation in obese individuals may play a role in the development and/or maintenance of obesity.

  8. Directed Forgetting and Directed Remembering in Visual Working Memory

    PubMed Central

    Williams, Melonie; Woodman, Geoffrey F.

    2013-01-01

    A defining characteristic of visual working memory is its limited capacity. This means that it is crucial to maintain only the most relevant information in visual working memory. However, empirical research is mixed as to whether it is possible to selectively maintain a subset of the information previously encoded into visual working memory. Here we examined the ability of subjects to use cues to either forget or remember a subset of the information already stored in visual working memory. In Experiment 1, participants were cued to either forget or remember one of two groups of colored squares during a change-detection task. We found that both types of cues aided performance in the visual working memory task, but that observers benefited more from a cue to remember than a cue to forget a subset of the objects. In Experiment 2, we show that the previous findings, which indicated that directed-forgetting cues are ineffective, were likely due to the presence of invalid cues that appear to cause observers to disregard such cues as unreliable. In Experiment 3, we recorded event-related potentials (ERPs) and show that an electrophysiological index of focused maintenance is elicited by cues that indicate which subset of information in visual working memory needs to be remembered, ruling out alternative explanations of the behavioral effects of retention-interval cues. The present findings demonstrate that observers can focus maintenance mechanisms on specific objects in visual working memory based on cues indicating future task relevance. PMID:22409182

  9. Interplay of Gravicentric, Egocentric, and Visual Cues About the Vertical in the Control of Arm Movement Direction.

    PubMed

    Bock, Otmar; Bury, Nils

    2018-03-01

    Our perception of the vertical corresponds to the weighted sum of gravicentric, egocentric, and visual cues. Here we evaluate the interplay of those cues not for the perceived but rather for the motor vertical. Participants were asked to flip an omnidirectional switch down while their egocentric vertical was dissociated from their visual-gravicentric vertical. Responses were directed mid-between the two verticals; specifically, the data suggest that the relative weight of congruent visual-gravicentric cues averages 0.62, and correspondingly, the relative weight of egocentric cues averages 0.38. We conclude that the interplay of visual-gravicentric cues with egocentric cues is similar for the motor and for the perceived vertical. Unexpectedly, we observed a consistent dependence of the motor vertical on hand position, possibly mediated by hand orientation or by spatial selective attention.
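
    The weighted-sum rule reported above lends itself to a short worked example. The following is a minimal sketch (our own illustration, not the authors' analysis code) that combines two cue directions as a weighted vector sum, using the reported average weights of 0.62 (visual-gravicentric) and 0.38 (egocentric):

```python
import math

# Hypothetical sketch of weighted-sum cue combination. The function name,
# inputs, and defaults are illustrative; only the 0.62/0.38 weights come
# from the abstract above.
def combine_verticals(visual_gravicentric_deg, egocentric_deg,
                      w_vis_grav=0.62, w_ego=0.38):
    """Direction (degrees) of the weighted vector sum of two vertical cues."""
    # Sum weighted unit vectors and take the resultant angle, so that
    # directions near the 0/360 wrap-around (e.g., 350 and 10) combine
    # to 0 rather than to the meaningless arithmetic mean of 180.
    x = (w_vis_grav * math.cos(math.radians(visual_gravicentric_deg)) +
         w_ego * math.cos(math.radians(egocentric_deg)))
    y = (w_vis_grav * math.sin(math.radians(visual_gravicentric_deg)) +
         w_ego * math.sin(math.radians(egocentric_deg)))
    return math.degrees(math.atan2(y, x)) % 360.0

# With the two cues dissociated by 40 degrees, the combined direction falls
# between them, closer to the more heavily weighted visual-gravicentric cue.
response = combine_verticals(0.0, 40.0)
```

    Vector averaging is used rather than a plain weighted mean of the angles so that the combination behaves sensibly for all cue orientations, matching the "responses directed mid-between the two verticals" pattern described above.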

  10. Attentional reorienting triggers spatial asymmetries in a search task with cross-modal spatial cueing

    PubMed Central

    Paladini, Rebecca E.; Diana, Lorenzo; Zito, Giuseppe A.; Nyffeler, Thomas; Wyss, Patric; Mosimann, Urs P.; Müri, René M.; Nef, Tobias

    2018-01-01

    Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention. Specifically, a facilitation has been observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and the right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained no auditory cues (i.e., unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants’ accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an additional incongruent, as well as a spatially non-informative, auditory cue resulted in lateral asymmetries: search times increased for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition. However, participants’ performance in the congruent condition was modulated by their tone localisation accuracy. 
The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues, and occur under specific attentional conditions, i.e., when visual attention has to be reoriented towards the left hemifield. PMID:29293637

  11. Testing the influence of external and internal cues on smoking motivation using a community sample.

    PubMed

    Litvin, Erika B; Brandon, Thomas H

    2010-02-01

    Exposing smokers to either external cues (e.g., pictures of cigarettes) or internal cues (e.g., negative affect induction) can induce urge to smoke and other behavioral and physiological responses. However, little is known about whether the two types of cues interact when presented in close proximity, as is likely the case in the real world. Additionally, potential moderators of cue reactivity have rarely been examined. Finally, few cue-reactivity studies have used representative samples of smokers. In a randomized 2 x 2 crossed factorial between-subjects design, the current study tested the effects of a negative affect cue intended to produce anxiety (speech preparation task) and an external smoking cue on urge and behavioral reactivity in a community sample of adult smokers (N = 175), and whether trait impulsivity moderated the effects. Both types of cues produced main effects on urges to smoke, despite the speech task failing to increase anxiety significantly. The speech task increased smoking urge related to anticipation of negative affect relief, whereas the external smoking cues increased urges related to anticipation of pleasure; however, the cues did not interact. Impulsivity measures predicted urge and other smoking-related variables, but did not moderate cue-reactivity. Results suggest independent rather than synergistic effects of these contributors to smoking motivation. (PsycINFO Database Record (c) 2010 APA, all rights reserved).

  12. How visual cues for when to listen aid selective auditory attention.

    PubMed

    Varghese, Lenny A; Ozmeral, Erol J; Best, Virginia; Shinn-Cunningham, Barbara G

    2012-06-01

    Visual cues are known to aid auditory processing when they provide direct information about signal content, as in lip reading. However, some studies hint that visual cues also aid auditory perception by guiding attention to the target in a mixture of similar sounds. The current study directly tests this idea for complex, nonspeech auditory signals, using a visual cue providing only timing information about the target. Listeners were asked to identify a target zebra finch bird song played at a random time within a longer, competing masker. Two different maskers were used: noise and a chorus of competing bird songs. On half of all trials, a visual cue indicated the timing of the target within the masker. For the noise masker, the visual cue did not affect performance when target and masker were from the same location, but improved performance when target and masker were in different locations. In contrast, for the chorus masker, visual cues improved performance only when target and masker were perceived as coming from the same direction. These results suggest that simple visual cues for when to listen improve target identification by enhancing sounds near the threshold of audibility when the target is energetically masked and by enhancing segregation when it is difficult to direct selective attention to the target. Visual cues help little when target and masker already differ in attributes that enable listeners to engage selective auditory attention effectively, including differences in spectrotemporal structure and in perceived location.

  13. Temporal and peripheral extraction of contextual cues from scenes during visual search.

    PubMed

    Koehler, Kathryn; Eckstein, Miguel P

    2017-02-01

    Scene context is known to facilitate object recognition and guide visual search, but little work has focused on isolating image-based cues and evaluating their contributions to eye movement guidance and search performance. Here, we explore three types of contextual cues (a co-occurring object, the configuration of other objects, and the superordinate category of background elements) and assess their joint contributions to search performance in the framework of cue-combination and the temporal unfolding of their extraction. We also assess whether observers' ability to extract each contextual cue in the visual periphery is a bottleneck that determines the utilization and contribution of each cue to search guidance and decision accuracy. We find that during the first four fixations of a visual search task observers first utilize the configuration of objects for coarse eye movement guidance and later use co-occurring object information for finer guidance. In the absence of contextual cues, observers were suboptimally biased to report the target object as being absent. The presence of the co-occurring object was the only contextual cue that had a significant effect in reducing decision bias. The early influence of object-based cues on eye movements is corroborated by a clear demonstration of observers' ability to extract object cues up to 16° into the visual periphery. The joint contributions of the cues to decision accuracy approximate those expected from optimal combination of statistically independent cues. Finally, the lack of utilization and contribution of the background-based contextual cue to search guidance cannot be explained by the availability of the contextual cue in the visual periphery; instead it is related to background cues providing the least inherent information about the precise location of the target in the scene.
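
    The optimal cue-combination benchmark mentioned above is conventionally formalized as inverse-variance weighting of independent cues. A generic sketch (our own illustration with hypothetical values, not the study's analysis):

```python
# Maximum-likelihood ("optimal") combination of statistically independent
# cues: each cue's estimate is weighted by the inverse of its variance.
# Function name and example values are illustrative, not from the study.
def combine_cues(estimates, variances):
    """Inverse-variance weighted fusion of independent cue estimates."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused_estimate = sum(w * e for w, e in zip(weights, estimates)) / total
    fused_variance = 1.0 / total  # never larger than the smallest input variance
    return fused_estimate, fused_variance

# Two cues giving slightly different target-location estimates: the fused
# estimate lies closer to the more reliable (lower-variance) cue.
est, var = combine_cues([10.0, 14.0], [1.0, 4.0])
```

    The fused variance is always at or below the smallest single-cue variance, which is what makes this combination "optimal" in the maximum-likelihood sense and is the standard yardstick against which empirical cue-combination performance is compared.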

  14. Effects of visual and motion simulation cueing systems on pilot performance during takeoffs with engine failures

    NASA Technical Reports Server (NTRS)

    Parris, B. L.; Cook, A. M.

    1978-01-01

    Data are presented that show the effects of visual and motion cueing on pilot performance during takeoffs with engine failures. Four groups of USAF pilots flew a simulated KC-135 using four different cueing systems. The most basic of these systems was of the instrument-only type. Visual scene simulation and/or motion simulation was added to produce the other systems. Learning curves, mean performance, and subjective data are examined. The results show that the addition of visual cueing results in significant improvement in pilot performance, and the combined use of visual and motion cueing results in far better performance.

  15. Manual control of yaw motion with combined visual and vestibular cues

    NASA Technical Reports Server (NTRS)

    Zacharias, G. L.; Young, L. R.

    1977-01-01

    Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation was modelled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A correction to the frequency responses is provided by a separate measurement of manual control performance in an analogous visual pursuit nulling task. The resulting dual-input describing function for motion perception dependence on combined cue presentation supports the complementary model, in which vestibular cues dominate sensation at frequencies above 0.05 Hz. The describing function model is extended by the proposal of a non-linear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues.
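
    The parallel-channel model described above, in which low-frequency visual and high-frequency vestibular pathways sum in a complementary manner, is essentially a complementary filter. A minimal first-order sketch, with the crossover placed at the 0.05 Hz figure reported above (function names and parameters are our own, not the paper's implementation):

```python
import math

# Illustrative complementary filter: low-pass the visual-field velocity cue,
# high-pass the vestibular cue, and sum. With a shared crossover frequency
# the two channels cover complementary frequency bands.
def complementary_estimate(visual, vestibular, dt, f_c=0.05):
    """Fuse two equal-length velocity traces sampled every dt seconds."""
    tau = 1.0 / (2.0 * math.pi * f_c)  # shared filter time constant (s)
    alpha = tau / (tau + dt)           # per-step smoothing factor
    low, high, prev_vest = 0.0, 0.0, 0.0
    fused = []
    for v_vis, v_vest in zip(visual, vestibular):
        low = alpha * low + (1 - alpha) * v_vis        # low-pass visual channel
        high = alpha * (high + v_vest - prev_vest)     # high-pass vestibular channel
        prev_vest = v_vest
        fused.append(low + high)                       # complementary sum
    return fused
```

    Because both channels share the same time constant, their transfer functions sum to unity: a signal present identically in both channels passes through unchanged, while each channel alone contributes only its dominant band, with the vestibular channel dominating above the crossover frequency.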

  16. Motion/visual cueing requirements for vortex encounters during simulated transport visual approach and landing

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Bowles, R. L.

    1983-01-01

    This paper addresses the issues of motion/visual cueing fidelity requirements for vortex encounters during simulated transport visual approaches and landings. Four simulator configurations were utilized to provide objective performance measures during simulated vortex penetrations, and subjective comments from pilots were collected. The configurations used were as follows: fixed base with visual degradation (delay), fixed base with no visual degradation, moving base with visual degradation (delay), and moving base with no visual degradation. The statistical comparisons of the objective measures and the subjective pilot opinions indicated that although both minimum visual delay and motion cueing are recommended for the vortex penetration task, the visual-scene delay characteristics were not as significant a fidelity factor as was the presence of motion cues. However, this indication was applicable to a restricted task, and to transport aircraft. Although they were statistically significant, the effects of visual delay and motion cueing on the touchdown-related measures were considered to be of no practical consequence.

  17. Media/Device Configurations for Platoon Leader Tactical Training

    DTIC Science & Technology

    1985-02-01

    The device should simulate the real-time receipt by the Platoon Leader of all tactical voice communication, audio and visual battlefield cues, and visual communication signals. [Table 4 (continued), Functional Capability Categories: receipt of limited tactical voice communication, plus audio and visual battlefield cues, and visual communication signals.]

  18. Influence of combined visual and vestibular cues on human perception and control of horizontal rotation

    NASA Technical Reports Server (NTRS)

    Zacharias, G. L.; Young, L. R.

    1981-01-01

    Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation is modeled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A dual-input describing function analysis supports the complementary model; vestibular cues dominate sensation at higher frequencies. The describing function model is extended by the proposal of a nonlinear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues.

  19. Exposure to visual cues of pathogen contagion changes preferences for masculinity and symmetry in opposite-sex faces.

    PubMed

    Little, Anthony C; DeBruine, Lisa M; Jones, Benedict C

    2011-07-07

    Evolutionary approaches to human attractiveness have documented several traits that are proposed to be attractive across individuals and cultures, although both cross-individual and cross-cultural variations are also often found. Previous studies show that parasite prevalence and mortality/health are related to cultural variation in preferences for attractive traits. Visual experience of pathogen cues may mediate such variable preferences. Here we showed individuals slideshows of images with cues to low and high pathogen prevalence and measured their visual preferences for face traits. We found that both men and women moderated their preferences for facial masculinity and symmetry according to recent experience of visual cues to environmental pathogens. Change in preferences was seen mainly for opposite-sex faces, with women preferring more masculine and more symmetric male faces and men preferring more feminine and more symmetric female faces after exposure to pathogen cues than when not exposed to such cues. Cues to environmental pathogens had no significant effects on preferences for same-sex faces. These data complement studies of cross-cultural differences in preferences by suggesting a mechanism for variation in mate preferences. Similar visual experience could lead to within-cultural agreement and differing visual experience could lead to cross-cultural variation. Overall, our data demonstrate that preferences can be strategically flexible according to recent visual experience with pathogen cues. Given that cues to pathogens may signal an increase in contagion/mortality risk, it may be adaptive to shift visual preferences in favour of proposed good-gene markers in environments where such cues are more evident.

  1. Dissociable Fronto-Operculum-Insula Control Signals for Anticipation and Detection of Inhibitory Sensory Cue.

    PubMed

    Cai, Weidong; Chen, Tianwen; Ide, Jaime S; Li, Chiang-Shan R; Menon, Vinod

    2017-08-01

    The ability to anticipate and detect behaviorally salient stimuli is important for virtually all adaptive behaviors, including inhibitory control that requires the withholding of prepotent responses when instructed by external cues. Although right fronto-operculum-insula (FOI), encompassing the anterior insular cortex (rAI) and inferior frontal cortex (rIFC), involvement in inhibitory control is well established, little is known about signaling mechanisms underlying their differential roles in detection and anticipation of salient inhibitory cues. Here we use 2 independent functional magnetic resonance imaging data sets to investigate dynamic causal interactions of the rAI and rIFC, with sensory cortex during detection and anticipation of inhibitory cues. Across 2 different experiments involving auditory and visual inhibitory cues, we demonstrate that primary sensory cortex has a stronger causal influence on rAI than on rIFC, suggesting a greater role for the rAI in detection of salient inhibitory cues. Crucially, a Bayesian prediction model of subjective trial-by-trial changes in inhibitory cue anticipation revealed that the strength of causal influences from rIFC to rAI increased significantly on trials in which participants had higher anticipation of inhibitory cues. Together, these results demonstrate the dissociable bottom-up and top-down roles of distinct FOI regions in detection and anticipation of behaviorally salient cues across multiple sensory modalities. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
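
The abstract does not specify the Bayesian prediction model used to estimate trial-by-trial anticipation; a minimal stand-in for that kind of model is a Beta-Bernoulli estimator of the probability that the next trial carries an inhibitory cue. The names and priors below are assumptions, not the authors' implementation.

```python
def anticipation_trace(trials, a=1.0, b=1.0):
    """Predicted probability of an inhibitory cue before each trial.

    trials: sequence of 0/1 outcomes (1 = inhibitory cue occurred).
    a, b:   pseudo-counts of a Beta prior over the cue probability.
    """
    predictions = []
    for t in trials:
        predictions.append(a / (a + b))  # belief held going into this trial
        a, b = a + t, b + (1 - t)        # conjugate (Beta-Bernoulli) update
    return predictions
```

A trace like this is the kind of subjective anticipation signal that could be regressed against trial-by-trial changes in the strength of rIFC-to-rAI influence.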

  2. Davida Teller Award Lecture 2013: the importance of prediction and anticipation in the control of smooth pursuit eye movements.

    PubMed

    Kowler, Eileen; Aitkin, Cordelia D; Ross, Nicholas M; Santos, Elio M; Zhao, Min

    2014-05-16

    The ability of smooth pursuit eye movements to anticipate the future motion of targets has been known since the pioneering work of Dodge, Travis, and Fox (1930) and Westheimer (1954). This article reviews aspects of anticipatory smooth eye movements, focusing on the roles of the different internal or external cues that initiate anticipatory pursuit. We present new results showing that the anticipatory smooth eye movements evoked by different cues differ substantially, even when the cues are equivalent in the information conveyed about the direction of future target motion. Cues that convey an easily interpretable visualization of the motion path produce faster anticipatory smooth eye movements than the other cues tested, including symbols associated arbitrarily with the path, and the same target motion tested repeatedly over a block of trials. The differences among the cues may be understood within a common predictive framework in which the cues differ in the level of subjective certainty they provide about the future path. Pursuit may be driven by a combined signal in which immediate sensory motion, and the predictions about future motion generated by sets of cues, are weighted according to their respective levels of certainty. Anticipatory smooth eye movements, an overt indicator of expectations and predictions, may not be operating in isolation, but may be part of a global process in which the brain analyzes available cues, formulates predictions, and uses them to control perceptual, motor, and cognitive processes. © 2014 ARVO.
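
The proposed combination rule, in which immediate sensory motion and cue-generated predictions are weighted "according to their respective levels of certainty", reduces to a normalized certainty-weighted average. This sketch is an illustrative reduction of that framework, not the authors' model; all names are assumptions.

```python
def certainty_weighted(values, certainties):
    """Certainty-weighted average of motion signals.

    values:      motion estimates (immediate sensory motion plus
                 predictions generated by different cues)
    certainties: non-negative weight for each estimate
    """
    total = sum(certainties)
    return sum(v * c for v, c in zip(values, certainties)) / total
```

Under this scheme, a prediction from an easily interpreted visualization of the path would carry a larger weight than one from an arbitrary symbol, pulling the combined pursuit drive toward the anticipated motion.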

  3. The effect of visual context on manual localization of remembered targets

    NASA Technical Reports Server (NTRS)

    Barry, S. R.; Bloomberg, J. J.; Huebner, W. P.

    1997-01-01

    This paper examines the contribution of egocentric cues and visual context to manual localization of remembered targets. Subjects pointed in the dark to the remembered position of a target previously viewed without or within a structured visual scene. Without a remembered visual context, subjects pointed to within 2 degrees of the target. The presence of a visual context with cues of straight ahead enhanced pointing performance to the remembered location of central but not off-center targets. Thus, visual context provides strong visual cues of target position and the relationship of body position to target location. Without a visual context, egocentric cues provide sufficient input for accurate pointing to remembered targets.

  4. The fMRI BOLD response to unisensory and multisensory smoking cues in nicotine-dependent adults

    PubMed Central

    Cortese, Bernadette M.; Uhde, Thomas W.; Brady, Kathleen T.; McClernon, F. Joseph; Yang, Qing X.; Collins, Heather R.; LeMatty, Todd; Hartwell, Karen J.

    2015-01-01

    Given that the vast majority of functional magnetic resonance imaging (fMRI) studies of drug cue reactivity use unisensory visual cues, but that multisensory cues may elicit greater craving-related brain responses, the current study sought to compare the fMRI BOLD response to unisensory visual and multisensory, visual plus odor, smoking cues in 17 nicotine-dependent adult cigarette smokers. Brain activation to smoking-related, compared to neutral, pictures was assessed under cigarette smoke and odorless odor conditions. While smoking pictures elicited a pattern of activation consistent with the addiction literature, the multisensory (odor + picture) smoking cues elicited significantly greater and more widespread activation in mainly frontal and temporal regions. BOLD signal elicited by the multi-sensory, but not unisensory cues, was significantly related to participants’ level of control over craving as well. Results demonstrated that the co-presentation of cigarette smoke odor with smoking-related visual cues, compared to the visual cues alone, elicited greater levels of craving-related brain activation in key regions implicated in reward. These preliminary findings support future research aimed at a better understanding of multisensory integration of drug cues and craving. PMID:26475784

  6. Cue-Reactive Rationality, Visual Imagery and Volitional Control Predict Cue-Reactive Urge to Gamble in Poker-Machine Gamblers.

    PubMed

    Clark, Gavin I; Rock, Adam J; McKeith, Charles F A; Coventry, William L

    2017-09-01

    Poker-machine gamblers have been demonstrated to report increases in the urge to gamble following exposure to salient gambling cues. However, the processes which contribute to this urge to gamble remain to be understood. The present study aimed to investigate whether changes in the conscious experience of visual imagery, rationality and volitional control (over one's thoughts, images and attention) predicted changes in the urge to gamble following exposure to a gambling cue. Thirty-one regular poker-machine gamblers who reported at least low levels of problem gambling on the Problem Gambling Severity Index (PGSI) were recruited to complete an online cue-reactivity experiment. Participants completed the PGSI, the visual imagery, rationality and volitional control subscales of the Phenomenology of Consciousness Inventory (PCI), and a visual analogue scale (VAS) assessing urge to gamble. Participants completed the PCI subscales and VAS at baseline, following a neutral video cue and following a gambling video cue. Urge to gamble was found to significantly increase from neutral cue to gambling cue (while controlling for baseline urge) and this increase was predicted by PGSI score. After accounting for the effects of problem-gambling severity, cue-reactive visual imagery, rationality and volitional control significantly improved the prediction of cue-reactive urge to gamble. The small sample size and limited participant characteristic data restrict the generalizability of the findings. Nevertheless, this is the first study to demonstrate that changes in the subjective experience of visual imagery, volitional control and rationality predict changes in the urge to gamble from neutral to gambling cue. The results suggest that visual imagery, rationality and volitional control may play an important role in the experience of the urge to gamble in poker-machine gamblers.

  7. The Effects of Visual Beats on Prosodic Prominence: Acoustic Analyses, Auditory Perception and Visual Perception

    ERIC Educational Resources Information Center

    Krahmer, Emiel; Swerts, Marc

    2007-01-01

    Speakers employ acoustic cues (pitch accents) to indicate that a word is important, but may also use visual cues (beat gestures, head nods, eyebrow movements) for this purpose. Even though these acoustic and visual cues are related, the exact nature of this relationship is far from well understood. We investigate whether producing a visual beat…

  8. Visual Features Involving Motion Seen from Airport Control Towers

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.; Liston, Dorion

    2010-01-01

    Visual motion cues are used by tower controllers to support both visual and anticipated separation. Some of these cues are tabulated as part of the overall set of visual features used in towers to separate aircraft. An initial analysis of one motion cue, landing deceleration, is provided as a basis for evaluating how controllers detect and use it for spacing aircraft on or near the surface. Understanding such cues will help determine if they can be safely used in a remote/virtual tower in which their presentation may be visually degraded.

  9. Self- and other-agency in people with passivity (first rank) symptoms in schizophrenia.

    PubMed

    Graham-Schmidt, Kyran T; Martin-Iverson, Mathew T; Waters, Flavie A V

    2018-02-01

    Individuals with passivity (first-rank) symptoms report that their actions, thoughts and sensations are influenced or controlled by an external (non-self) agent. Passivity symptoms are closely linked to schizophrenia and related disorders yet they remain poorly understood. One dominant framework posits a role for deficits in the sense of agency. An important question is whether deficits in self-agency can be differentiated from other-agency in schizophrenia and passivity symptoms. This study aimed to evaluate self- and other-agency in 51 people with schizophrenia (n=20 current, 10 past, 21 no history of passivity symptoms), and 48 healthy controls. Participants completed the projected hand illusion (PHI) with active and passive movements, as well as immediate and delayed visual feedback. Experiences of agency and loss of agency over the participant's hand and the image ('the other hand') were assessed with a self-report questionnaire. Those with passivity symptoms (current and past) reported less difference in agency between active and passive movements on items assessing agency over their own hand (but not agency over the other hand). Relative to the healthy controls, the current and never groups continued to experience the illusion with delayed visual feedback suggesting impaired timing mechanisms regardless of symptom profile. These findings are consistent with a reduced contribution of proprioceptive predictive cues to agency judgements specific to self representations in people with passivity symptoms, and a subsequent reliance on external visual cues in these judgements. Altogether, these findings emphasise the multifactorial nature of agency and the contribution of multiple impairments to passivity symptoms. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Sensory convergence in the parieto-insular vestibular cortex

    PubMed Central

    Shinder, Michael E.

    2014-01-01

    Vestibular signals are pervasive throughout the central nervous system, including the cortex, where they likely play different roles than they do in the better studied brainstem. Little is known about the parieto-insular vestibular cortex (PIVC), an area of the cortex with prominent vestibular inputs. Neural activity was recorded in the PIVC of rhesus macaques during combinations of head, body, and visual target rotations. Activity of many PIVC neurons was correlated with the motion of the head in space (vestibular), the twist of the neck (proprioceptive), and the motion of a visual target, but was not associated with eye movement. PIVC neurons responded most commonly to more than one stimulus, and responses to combined movements could often be approximated by a combination of the individual sensitivities to head, neck, and target motion. The pattern of visual, vestibular, and somatic sensitivities on PIVC neurons displayed a continuous range, with some cells strongly responding to one or two of the stimulus modalities while other cells responded to any type of motion equivalently. The PIVC contains multisensory convergence of self-motion cues with external visual object motion information, such that neurons do not represent a specific transformation of any one sensory input. Instead, the PIVC neuron population may define the movement of head, body, and external visual objects in space and relative to one another. This comparison of self and external movement is consistent with insular cortex functions related to monitoring and explains many disparate findings of previous studies. PMID:24671533

  11. Application of Visual Cues on 3D Dynamic Visualizations for Engineering Technology Students and Effects on Spatial Visualization Ability: A Quasi-Experimental Study

    ERIC Educational Resources Information Center

    Katsioloudis, Petros; Jovanovic, Vukica; Jones, Mildred

    2016-01-01

    Several theorists believe that different types of visual cues influence cognition and behavior through learned associations; however, research provides inconsistent results. Considering this, a quasi-experimental study was done to determine if there are significant positive effects of visual cues (color blue) and to identify if a positive increase…

  12. Visual form predictions facilitate auditory processing at the N1.

    PubMed

    Paris, Tim; Kim, Jeesun; Davis, Chris

    2017-02-20

    Auditory-visual (AV) events often involve a leading visual cue (e.g. auditory-visual speech) that allows the perceiver to generate predictions about the upcoming auditory event. Electrophysiological evidence suggests that when an auditory event is predicted, processing is sped up, i.e., the N1 component of the ERP occurs earlier (N1 facilitation). However, it is not clear (1) whether N1 facilitation is based specifically on predictive rather than multisensory integration and (2) which particular properties of the visual cue it is based on. The current experiment used artificial AV stimuli in which visual cues predicted but did not co-occur with auditory cues. Visual form cues (high and low salience) and the auditory-visual pairing were manipulated so that auditory predictions could be based on form and timing or on timing only. The results showed that N1 facilitation occurred only for combined form and temporal predictions. These results suggest that faster auditory processing (as indicated by N1 facilitation) is based on predictive processing generated by a visual cue that clearly predicts both what and when the auditory stimulus will occur. Copyright © 2016. Published by Elsevier Ltd.

  13. Shifting Attention within Memory Representations Involves Early Visual Areas

    PubMed Central

    Munneke, Jaap; Belopolsky, Artem V.; Theeuwes, Jan

    2012-01-01

    Prior studies have shown that spatial attention modulates early visual cortex retinotopically, resulting in enhanced processing of external perceptual representations. However, it is not clear whether the same visual areas are modulated when attention is focused on, and shifted within a working memory representation. In the current fMRI study participants were asked to memorize an array containing four stimuli. After a delay, participants were presented with a verbal cue instructing them to actively maintain the location of one of the stimuli in working memory. Additionally, on a number of trials a second verbal cue instructed participants to switch attention to the location of another stimulus within the memorized representation. Results of the study showed that changes in the BOLD pattern closely followed the locus of attention within the working memory representation. A decrease in BOLD-activity (V1–V3) was observed at ROIs coding a memory location when participants switched away from this location, whereas an increase was observed when participants switched towards this location. Continuous increased activity was obtained at the memorized location when participants did not switch. This study shows that shifting attention within memory representations activates the earliest parts of visual cortex (including V1) in a retinotopic fashion. We conclude that even in the absence of visual stimulation, early visual areas support shifting of attention within memorized representations, similar to when attention is shifted in the outside world. The relationship between visual working memory and visual mental imagery is discussed in light of the current findings. PMID:22558165

  14. Role of somatosensory and vestibular cues in attenuating visually induced human postural sway

    NASA Technical Reports Server (NTRS)

    Peterka, Robert J.; Benolken, Martha S.

    1993-01-01

    The purpose was to determine the contribution of visual, vestibular, and somatosensory cues to the maintenance of stance in humans. Postural sway was induced by full field, sinusoidal visual surround rotations about an axis at the level of the ankle joints. The influences of vestibular and somatosensory cues were characterized by comparing postural sway in normal and bilateral vestibular absent subjects in conditions that provided either accurate or inaccurate somatosensory orientation information. In normal subjects, the amplitude of visually induced sway reached a saturation level as stimulus amplitude increased. The saturation amplitude decreased with increasing stimulus frequency. No saturation phenomenon was observed in subjects with vestibular loss, implying that vestibular cues were responsible for the saturation phenomenon. For visually induced sways below the saturation level, the stimulus-response curves for both normal and vestibular loss subjects were nearly identical, implying that (1) normal subjects were not using vestibular information to attenuate their visually induced sway, possibly because sway was below a vestibular-related threshold level, and (2) vestibular loss subjects did not utilize visual cues to a greater extent than normal subjects; that is, a fundamental change in visual system 'gain' was not used to compensate for a vestibular deficit. An unexpected finding was that the amplitude of body sway induced by visual surround motion could be almost three times greater than the amplitude of the visual stimulus in normals and vestibular loss subjects. This occurred in conditions where somatosensory cues were inaccurate and at low stimulus amplitudes. A control system model of visually induced postural sway was developed to explain this finding. For both subject groups, the amplitude of visually induced sway was smaller by a factor of about four in tests where somatosensory cues provided accurate versus inaccurate orientation information. 
This implied that (1) the vestibular loss subjects did not utilize somatosensory cues to a greater extent than normal subjects; that is, changes in somatosensory system 'gain' were not used to compensate for a vestibular deficit, and (2) the threshold for the use of vestibular cues in normals was apparently lower in test conditions where somatosensory cues were providing accurate orientation information.
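
The saturation findings can be caricatured as a linear visual response capped by a vestibular-related ceiling that shrinks with stimulus frequency. The constants below are invented for illustration (the gain of 2.8 echoes the "almost three times" observation; the 1/f fall-off of the ceiling is an assumption), and this is not the paper's actual control-system model.

```python
def sway_amplitude(stim_amp, freq_hz, gain=2.8, ceiling=2.0):
    """Predicted sway amplitude for a sinusoidal visual surround rotation.

    Linear visual response (gain * stimulus amplitude), saturated at a
    vestibular-imposed ceiling that decreases with stimulus frequency.
    """
    saturation = ceiling / freq_hz         # assumed fall-off with frequency
    return min(gain * stim_amp, saturation)
```

For a vestibular-loss subject the ceiling would simply be removed, reproducing the unsaturated stimulus-response curve reported above.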

  15. The effect of contextual cues on the encoding of motor memories.

    PubMed

    Howard, Ian S; Wolpert, Daniel M; Franklin, David W

    2013-05-01

    Several studies have shown that sensory contextual cues can reduce the interference observed during learning of opposing force fields. However, because each study examined a small set of cues, often in a unique paradigm, the relative efficacy of different sensory contextual cues is unclear. In the present study we quantify how seven contextual cues, some investigated previously and some novel, affect the formation and recall of motor memories. Subjects made movements in a velocity-dependent curl field, with direction varying randomly from trial to trial but always associated with a unique contextual cue. Linking field direction to the cursor or background color, or to peripheral visual motion cues, did not reduce interference. In contrast, the orientation of a visual object attached to the hand cursor significantly reduced interference, albeit by a small amount. When the fields were associated with movement in different locations in the workspace, a substantial reduction in interference was observed. We tested whether this reduction in interference was due to the different locations of the visual feedback (targets and cursor) or the movements (proprioceptive). When the fields were associated only with changes in visual display location (movements always made centrally) or only with changes in the movement location (visual feedback always displayed centrally), a substantial reduction in interference was observed. These results show that although some visual cues can lead to the formation and recall of distinct representations in motor memory, changes in spatial visual and proprioceptive states of the movement are far more effective than changes in simple visual contextual cues.
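
The contrast between interference under a shared motor memory and separation under an effective contextual cue can be illustrated with a minimal delta-rule learner that keeps one adaptation state per context. This is a didactic sketch under assumed parameters, not the study's analysis.

```python
def adapt(field_seq, context_seq, rate=0.2):
    """Error-driven adaptation with one state per contextual cue.

    field_seq:   perturbation on each trial (+1 / -1 for opposing curl fields)
    context_seq: context label on each trial (e.g. movement location)
    Returns the final adaptation state for each context.
    """
    states = {}
    for field, context in zip(field_seq, context_seq):
        x = states.get(context, 0.0)
        states[context] = x + rate * (field - x)  # delta-rule update
    return states
```

With an effective cue (one context per field direction) the two states converge toward the opposing fields; with a single shared context the alternating errors largely cancel and little is retained, mimicking interference.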

  16. Domain general learning: Infants use social and non-social cues when learning object statistics

    PubMed Central

    Barry, Ryan A.; Graf Estes, Katharine; Rivera, Susan M.

    2015-01-01

    Previous research has shown that infants can learn from social cues. But is a social cue more effective at directing learning than a non-social cue? This study investigated whether 9-month-old infants (N = 55) could learn a visual statistical regularity in the presence of a distracting visual sequence when attention was directed by either a social cue (a person) or a non-social cue (a rectangle). The results show that both social and non-social cues can guide infants’ attention to a visual shape sequence (and away from a distracting sequence). The social cue more effectively directed attention than the non-social cue during the familiarization phase, but the social cue did not result in significantly stronger learning than the non-social cue. The findings suggest that domain general attention mechanisms allow for the comparable learning seen in both conditions. PMID:25999879

  17. Visual Orientation in Unfamiliar Gravito-Inertial Environments

    NASA Technical Reports Server (NTRS)

    Oman, Charles M.

    1999-01-01

    The goal of this project is to better understand the process of spatial orientation and navigation in unfamiliar gravito-inertial environments, and ultimately to use this new information to develop effective countermeasures against the orientation and navigation problems experienced by astronauts. How do we know our location, orientation, and motion of our body with respect to the external environment? On earth, gravity provides a convenient "down" cue. Large body rotations normally occur only in a horizontal plane. In space, the gravitational down cue is absent. When astronauts roll or pitch upside down, they must recognize where things are around them by a process of mental rotation which involves three dimensions, rather than just one. While working in unfamiliar situations they occasionally misinterpret visual cues and experience striking "visual reorientation illusions" (VRIs), in which the walls, ceiling, and floors of the spacecraft exchange subjective identities. VRIs cause disorientation, reaching errors, trigger attacks of space motion sickness, and potentially complicate emergency escape. MIR crewmembers report that 3D relationships between modules - particularly those with different visual verticals - are difficult to visualize, and so navigating through the node that connects them is not instinctive. Crew members learn routes, but their apparent lack of survey knowledge is a concern should fire, power loss, or depressurization limit visibility. Anecdotally, experience in mockups, parabolic flight, neutral buoyancy and virtual reality (VR) simulators helps. However, no techniques have been developed to quantify individual differences in orientation and navigation abilities, or the effectiveness of preflight visual orientation training. Our understanding of the underlying physiology - for example how our sense of place and orientation is neurally coded in three dimensions in the limbic system of the brain - is incomplete. 
During the 16 months that this human and animal research project has been underway, we have obtained several results that are not only of basic research interest, but which have practical implications for the architecture and layout of spacecraft interiors and for the development of astronaut spatial orientation training countermeasures.

  18. Cross-Sensory Transfer of Reference Frames in Spatial Memory

    ERIC Educational Resources Information Center

    Kelly, Jonathan W.; Avraamides, Marios N.

    2011-01-01

    Two experiments investigated whether visual cues influence spatial reference frame selection for locations learned through touch. Participants experienced visual cues emphasizing specific environmental axes and later learned objects through touch. Visual cues were manipulated and haptic learning conditions were held constant. Imagined perspective…

  19. Attentional bias to food-related visual cues: is there a role in obesity?

    PubMed

    Doolan, K J; Breslin, G; Hanna, D; Gallagher, A M

    2015-02-01

    The incentive sensitisation model of obesity suggests that modification of the dopaminergic associated reward systems in the brain may result in increased awareness of food-related visual cues present in the current food environment. Having a heightened awareness of these visual food cues may impact on food choices and eating behaviours with those being most aware of or demonstrating greater attention to food-related stimuli potentially being at greater risk of overeating and subsequent weight gain. To date, research related to attentional responses to visual food cues has been both limited and conflicting. Such inconsistent findings may in part be explained by the use of different methodological approaches to measure attentional bias and the impact of other factors such as hunger levels, energy density of visual food cues and individual eating style traits that may influence visual attention to food-related cues outside of weight status alone. This review examines the various methodologies employed to measure attentional bias with a particular focus on the role that attentional processing of food-related visual cues may have in obesity. Based on the findings of this review, it appears that it may be too early to clarify the role visual attention to food-related cues may have in obesity. Results however highlight the importance of considering the most appropriate methodology to use when measuring attentional bias and the characteristics of the study populations targeted while interpreting results to date and in designing future studies.
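
Among the methodologies this review compares, dot-probe style tasks quantify attentional bias as a reaction-time difference. A minimal bias score, included here only to illustrate the measure (not a method endorsed by the review; all names are assumptions):

```python
def attentional_bias(rt_food_probe, rt_neutral_probe):
    """Dot-probe attentional bias score in milliseconds.

    rt_food_probe:    RTs on trials where the probe replaces the food image
    rt_neutral_probe: RTs on trials where the probe replaces the neutral image
    Positive scores indicate attention drawn toward food cues.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rt_neutral_probe) - mean(rt_food_probe)
```

Differences in how such scores are computed and aggregated across studies are one source of the inconsistent findings discussed above.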

  20. Subconscious Visual Cues during Movement Execution Allow Correct Online Choice Reactions

    PubMed Central

    Leukel, Christian; Lundbye-Jensen, Jesper; Christensen, Mark Schram; Gollhofer, Albert; Nielsen, Jens Bo; Taube, Wolfgang

    2012-01-01

    Part of the sensory information is processed by our central nervous system without conscious perception. Subconscious processing has been shown to be capable of triggering motor reactions. In the present study, we asked the question whether visual information, which is not consciously perceived, could influence decision-making in a choice reaction task. Ten healthy subjects (28±5 years) executed two different experimental protocols. In the Motor reaction protocol, a visual target cue was shown on a computer screen. Depending on the displayed cue, subjects had to either complete a reaching movement (go-condition) or had to abort the movement (stop-condition). The cue was presented with different display durations (20–160 ms). In the second Verbalization protocol, subjects verbalized what they experienced on the screen. Again, the cue was presented with different display durations. This second protocol tested for conscious perception of the visual cue. The results of this study show that subjects achieved significantly more correct responses in the Motor reaction protocol than in the Verbalization protocol. This difference was only observed at the very short display durations of the visual cue. Since correct responses in the Verbalization protocol required conscious perception of the visual information, our findings imply that the subjects performed correct motor responses to visual cues, which they were not conscious about. It is therefore concluded that humans may reach decisions based on subconscious visual information in a choice reaction task. PMID:23049749

  1. Capuchin monkeys (Cebus apella) use positive, but not negative, auditory cues to infer food location.

    PubMed

    Heimbauer, Lisa A; Antworth, Rebecca L; Owren, Michael J

    2012-01-01

    Nonhuman primates appear to capitalize more effectively on visual cues than corresponding auditory versions. For example, studies of inferential reasoning have shown that monkeys and apes readily respond to seeing that food is present ("positive" cuing) or absent ("negative" cuing). Performance is markedly less effective with auditory cues, with many subjects failing to use this input. Extending recent work, we tested eight captive tufted capuchins (Cebus apella) in locating food using positive and negative cues in visual and auditory domains. The monkeys chose between two opaque cups to receive food contained in one of them. Cup contents were either shown or shaken, providing location cues from both cups, positive cues only from the baited cup, or negative cues from the empty cup. As in previous work, subjects readily used both positive and negative visual cues to secure reward. However, auditory outcomes were both similar to and different from those of earlier studies. Specifically, all subjects came to exploit positive auditory cues, but none responded to negative versions. The animals were also clearly different in visual versus auditory performance. Results indicate that a significant proportion of capuchins may be able to use positive auditory cues, with experience and learning likely playing a critical role. These findings raise the possibility that experience may be significant in visually based performance in this task as well, and highlight that coming to grips with evident differences between visual versus auditory processing may be important for understanding primate cognition more generally.

  2. Differential processing of binocular and monocular gloss cues in human visual cortex

    PubMed Central

    Di Luca, Massimiliano; Ban, Hiroshi; Muryy, Alexander; Fleming, Roland W.

    2016-01-01

    The visual impression of an object's surface reflectance (“gloss”) relies on a range of visual cues, both monocular and binocular. Whereas previous imaging work has identified processing within ventral visual areas as important for monocular cues, little is known about cortical areas involved in processing binocular cues. Here, we used human functional MRI (fMRI) to test for brain areas selectively involved in the processing of binocular cues. We manipulated stereoscopic information to create four conditions that differed in their disparity structure and in the impression of surface gloss that they evoked. We performed multivoxel pattern analysis to find areas whose fMRI responses allow classes of stimuli to be distinguished based on their depth structure vs. material appearance. We show that higher dorsal areas play a role in processing binocular gloss information, in addition to known ventral areas involved in material processing, with ventral area lateral occipital responding to both object shape and surface material properties. Moreover, we tested for similarities between the representation of gloss from binocular cues and monocular cues. Specifically, we tested for transfer in the decoding performance of an algorithm trained on glossy vs. matte objects defined by either binocular or by monocular cues. We found transfer effects from monocular to binocular cues in dorsal visual area V3B/kinetic occipital (KO), suggesting a shared representation of the two cues in this area. These results indicate the involvement of mid- to high-level visual circuitry in the estimation of surface material properties, with V3B/KO potentially playing a role in integrating monocular and binocular cues. PMID:26912596

  3. Auditory Emotional Cues Enhance Visual Perception

    ERIC Educational Resources Information Center

    Zeelenberg, Rene; Bocanegra, Bruno R.

    2010-01-01

    Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…

  4. Cue Integration in Categorical Tasks: Insights from Audio-Visual Speech Perception

    PubMed Central

    Bejjanki, Vikranth Rao; Clayards, Meghan; Knill, David C.; Aslin, Richard N.

    2011-01-01

    Previous cue integration studies have examined continuous perceptual dimensions (e.g., size) and have shown that human cue integration is well described by a normative model in which cues are weighted in proportion to their sensory reliability, as estimated from single-cue performance. However, this normative model may not be applicable to categorical perceptual dimensions (e.g., phonemes). In tasks defined over categorical perceptual dimensions, optimal cue weights should depend not only on the sensory variance affecting the perception of each cue but also on the environmental variance inherent in each task-relevant category. Here, we present a computational and experimental investigation of cue integration in a categorical audio-visual (articulatory) speech perception task. Our results show that human performance during audio-visual phonemic labeling is qualitatively consistent with the behavior of a Bayes-optimal observer. Specifically, we show that the participants in our task are sensitive, on a trial-by-trial basis, to the sensory uncertainty associated with the auditory and visual cues, during phonemic categorization. In addition, we show that while sensory uncertainty is a significant factor in determining cue weights, it is not the only one and participants' performance is consistent with an optimal model in which environmental, within category variability also plays a role in determining cue weights. Furthermore, we show that in our task, the sensory variability affecting the visual modality during cue-combination is not well estimated from single-cue performance, but can be estimated from multi-cue performance. The findings and computational principles described here represent a principled first step towards characterizing the mechanisms underlying human cue integration in categorical tasks. PMID:21637344
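    The normative reliability-weighted model that this abstract takes as its baseline can be sketched as follows. This is a generic illustration of inverse-variance cue weighting for continuous dimensions, not the authors' implementation; as the abstract notes, for categorical dimensions the optimal weights must also reflect within-category environmental variance, which this sketch omits:

    ```python
    def integrate_cues(estimates, variances):
        """Reliability-weighted (inverse-variance) cue combination.

        Each cue's weight is proportional to 1/variance, so more reliable
        cues dominate; the fused estimate is the weighted mean and the
        fused variance is 1 / sum(1/variance_i), i.e. never larger than
        the best single cue's variance."""
        reliabilities = [1.0 / v for v in variances]
        total = sum(reliabilities)
        weights = [r / total for r in reliabilities]
        fused = sum(w * e for w, e in zip(weights, estimates))
        fused_var = 1.0 / total
        return fused, fused_var
    ```

    For example, combining a noisy auditory cue (estimate 1.0, variance 4.0) with a more reliable visual cue (estimate 3.0, variance 1.0) yields a fused estimate of 2.6, pulled toward the visual cue.
    
    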

  5. Auditory emotional cues enhance visual perception.

    PubMed

    Zeelenberg, René; Bocanegra, Bruno R

    2010-04-01

    Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by emotional cues as compared to neutral cues. When the cue was presented visually we replicated the emotion-induced impairment found in other studies. Our results suggest emotional stimuli have a twofold effect on perception. They impair perception by reflexively attracting attention at the expense of competing stimuli. However, emotional stimuli also induce a nonspecific perceptual enhancement that carries over onto other stimuli when competition is reduced, for example, by presenting stimuli in different modalities. Copyright 2009 Elsevier B.V. All rights reserved.

  6. Cues Resulting in Desire for Sexual Activity in Women

    PubMed Central

    McCall, Katie; Meston, Cindy

    2010-01-01

    Introduction A number of questionnaires have been created to assess levels of sexual desire in women, but to our knowledge, there are currently no validated measures for assessing cues that result in sexual desire. A questionnaire of this nature could be useful for both clinicians and researchers, because it considers the contextual nature of sexual desire and it draws attention to individual differences in factors that can contribute to sexual desire. Aim The aim of the present study was to create a multidimensional assessment tool of cues for sexual desire in women that is validated in women with and without hypoactive sexual desire disorder (HSDD). Methods Factor analyses conducted on both an initial sample (N = 874) and a community sample (N = 138) resulted in the Cues for Sexual Desire Scale (CSDS) which included four factors: (i) Emotional Bonding Cues; (ii) Erotic/ Explicit Cues; (iii) Visual/Proximity Cues; and (iv) Implicit/Romantic Cues. Main Outcome Measures Scale construction of cues associated with sexual desire and differences between women with and without sexual dysfunction. Results The CSDS demonstrated good reliability and validity and was able to detect significant differences between women with and without HSDD. Results from regression analyses indicated that both marital status and level of sexual functioning predicted scores on the CSDS. The CSDS provided predictive validity for the Female Sexual Function Index desire and arousal domain scores, and increased cues were related to a higher reported frequency of sexual activity in women. Conclusions The findings from the present study provide valuable information regarding both internal and external triggers that can result in sexual desire for women. We believe that the CSDS could be beneficial in therapeutic settings to help identify cues that do and do not facilitate sexual desire in women with clinically diagnosed desire difficulties. PMID:16942529

  7. The Effects of Explicit Visual Cues in Reading Biological Diagrams

    ERIC Educational Resources Information Center

    Ge, Yun-Ping; Unsworth, Len; Wang, Kuo-Hua

    2017-01-01

    Drawing on cognitive theories, this study intends to investigate the effects of explicit visual cues which have been proposed as a critical factor in facilitating understanding of biological images. Three diagrams from Taiwanese textbooks with implicit visual cues, involving the concepts of biological classification systems, fish taxonomy, and…

  8. Visual Navigation during Colony Emigration by the Ant Temnothorax rugatulus

    PubMed Central

    Bowens, Sean R.; Glatt, Daniel P.; Pratt, Stephen C.

    2013-01-01

    Many ants rely on both visual cues and self-generated chemical signals for navigation, but their relative importance varies across species and context. We evaluated the roles of both modalities during colony emigration by Temnothorax rugatulus. Colonies were induced to move from an old nest in the center of an arena to a new nest at the arena edge. In the midst of the emigration the arena floor was rotated 60° around the old nest entrance, thus displacing any substrate-bound odor cues while leaving visual cues unchanged. This manipulation had no effect on orientation, suggesting little influence of substrate cues on navigation. When this rotation was accompanied by the blocking of most visual cues, the ants became highly disoriented, suggesting that they did not fall back on substrate cues even when deprived of visual information. Finally, when the substrate was left in place but the visual surround was rotated, the ants' subsequent headings were strongly rotated in the same direction, showing a clear role for visual navigation. Combined with earlier studies, these results suggest that chemical signals deposited by Temnothorax ants serve more for marking of familiar territory than for orientation. The ants instead navigate visually, showing the importance of this modality even for species with small eyes and coarse visual acuity. PMID:23671713

  9. Cannabis cue-induced brain activation correlates with drug craving in limbic and visual salience regions: Preliminary results

    PubMed Central

    Charboneau, Evonne J.; Dietrich, Mary S.; Park, Sohee; Cao, Aize; Watkins, Tristan J; Blackford, Jennifer U; Benningfield, Margaret M.; Martin, Peter R.; Buchowski, Maciej S.; Cowan, Ronald L.

    2013-01-01

    Craving is a major motivator underlying drug use and relapse but the neural correlates of cannabis craving are not well understood. This study sought to determine whether visual cannabis cues increase cannabis craving and whether cue-induced craving is associated with regional brain activation in cannabis-dependent individuals. Cannabis craving was assessed in 16 cannabis-dependent adult volunteers while they viewed cannabis cues during a functional MRI (fMRI) scan. The Marijuana Craving Questionnaire was administered immediately before and after each of three cannabis cue-exposure fMRI runs. FMRI blood-oxygenation-level-dependent (BOLD) signal intensity was determined in regions activated by cannabis cues to examine the relationship of regional brain activation to cannabis craving. Craving scores increased significantly following exposure to visual cannabis cues. Visual cues activated multiple brain regions, including inferior orbital frontal cortex, posterior cingulate gyrus, parahippocampal gyrus, hippocampus, amygdala, superior temporal pole, and occipital cortex. Craving scores at baseline and at the end of all three runs were significantly correlated with brain activation during the first fMRI run only, in the limbic system (including amygdala and hippocampus) and paralimbic system (superior temporal pole), and visual regions (occipital cortex). Cannabis cues increased craving in cannabis-dependent individuals and this increase was associated with activation in the limbic, paralimbic, and visual systems during the first fMRI run, but not subsequent fMRI runs. These results suggest that these regions may mediate visually cued aspects of drug craving. This study provides preliminary evidence for the neural basis of cue-induced cannabis craving and suggests possible neural targets for interventions targeted at treating cannabis dependence. PMID:24035535

  10. Toward semantic-based retrieval of visual information: a model-based approach

    NASA Astrophysics Data System (ADS)

    Park, Youngchoon; Golshani, Forouzan; Panchanathan, Sethuraman

    2002-07-01

    This paper centers on the problem of automated visual content classification. To enable classification-based image or visual object retrieval, we propose a new image representation scheme called the visual context descriptor (VCD), a multidimensional vector in which each element represents the frequency of a unique visual property of an image or a region. VCD utilizes predetermined quality dimensions (i.e., types of features and quantization levels) and semantic model templates mined a priori. Not only observed visual cues but also contextually relevant visual features are proportionally incorporated into the VCD. The contextual relevance of a visual cue to a semantic class is determined by correlation analysis of ground-truth samples. Such co-occurrence analysis of visual cues requires transforming a real-valued visual feature vector (e.g., color histogram, Gabor texture) into discrete events (e.g., terms in text). Good-features-to-track, the rule of thirds, iterative k-means clustering and TSVQ are involved in the transformation of feature vectors into unified symbolic representations called visual terms. Similarity-based visual cue frequency estimation is also proposed and used to ensure correct model learning and matching, since sparse sample data makes frequency estimation of visual cues unstable. The proposed method naturally allows integration of heterogeneous visual, temporal or spatial cues in a single classification or matching framework, and can easily be integrated into a semantic knowledge base such as a thesaurus or ontology. Robust semantic visual model template creation and object-based image retrieval are demonstrated based on the proposed content description scheme.
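    The core quantization step described above (clustering real-valued feature vectors into discrete "visual terms" and counting their frequencies) can be sketched minimally as follows. This is a toy illustration under stated assumptions, not the paper's VCD implementation: it uses a bare k-means codebook and a plain frequency histogram, and omits the contextual-relevance weighting, rule-of-thirds and TSVQ components; all function names are hypothetical:

    ```python
    import numpy as np

    def build_codebook(features, k=8, iters=20, seed=0):
        """Toy k-means: quantize real-valued feature vectors into k 'visual terms'."""
        rng = np.random.default_rng(seed)
        centers = features[rng.choice(len(features), size=k, replace=False)]
        for _ in range(iters):
            # assign each feature vector to its nearest codebook center
            dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = features[labels == j].mean(axis=0)
        return centers

    def descriptor(region_features, centers):
        """Normalized histogram of visual-term frequencies over an image's regions."""
        dists = np.linalg.norm(region_features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        counts = np.bincount(labels, minlength=len(centers)).astype(float)
        return counts / counts.sum()
    ```

    Images (or regions) quantized this way can then be compared or classified with standard text-retrieval machinery, which is the analogy the paper draws with "terms in text".
    
    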

  11. Seeing is believing: information content and behavioural response to visual and chemical cues

    PubMed Central

    Gonzálvez, Francisco G.; Rodríguez-Gironés, Miguel A.

    2013-01-01

    Predator avoidance and foraging often pose conflicting demands. Animals can decrease mortality risk by searching for predators, but searching decreases foraging time and hence intake. We used this principle to investigate how prey should use information to detect, assess and respond to predation risk from an optimal foraging perspective. A mathematical model showed that solitary bees should increase flower examination time in response to predator cues and that the rate of false alarms should be negatively correlated with the relative value of the flower explored. The predatory ant, Oecophylla smaragdina, and the harmless ant, Polyrhachis dives, differ in the profile of volatiles they emit and in their visual appearance. As predicted, the solitary bee Nomia strigata spent more time examining virgin flowers in the presence of predator cues than in their absence. Furthermore, the proportion of flowers rejected decreased from morning to noon, as the relative value of virgin flowers increased. In addition, bees responded differently to visual and chemical cues. While chemical cues induced bees to search around flowers, bees detecting visual cues hovered in front of them. These strategies may allow prey to identify the nature of visual cues and to locate the source of chemical cues. PMID:23698013

  12. Visual cues in low-level flight - Implications for pilotage, training, simulation, and enhanced/synthetic vision systems

    NASA Technical Reports Server (NTRS)

    Foyle, David C.; Kaiser, Mary K.; Johnson, Walter W.

    1992-01-01

    This paper reviews some of the sources of visual information that are available in the out-the-window scene and describes how these visual cues are important for routine pilotage and training, as well as the development of simulator visual systems and enhanced or synthetic vision systems for aircraft cockpits. It is shown how these visual cues may change or disappear under environmental or sensor conditions, and how the visual scene can be augmented by advanced displays to capitalize on the pilot's excellent ability to extract visual information from the visual scene.

  13. Role of somatosensory and vestibular cues in attenuating visually induced human postural sway

    NASA Technical Reports Server (NTRS)

    Peterka, R. J.; Benolken, M. S.

    1995-01-01

    The purpose of this study was to determine the contribution of visual, vestibular, and somatosensory cues to the maintenance of stance in humans. Postural sway was induced by full-field, sinusoidal visual surround rotations about an axis at the level of the ankle joints. The influences of vestibular and somatosensory cues were characterized by comparing postural sway in normal and bilateral vestibular absent subjects in conditions that provided either accurate or inaccurate somatosensory orientation information. In normal subjects, the amplitude of visually induced sway reached a saturation level as stimulus amplitude increased. The saturation amplitude decreased with increasing stimulus frequency. No saturation phenomena were observed in subjects with vestibular loss, implying that vestibular cues were responsible for the saturation phenomenon. For visually induced sways below the saturation level, the stimulus-response curves for both normal subjects and subjects experiencing vestibular loss were nearly identical, implying (1) that normal subjects were not using vestibular information to attenuate their visually induced sway, possibly because sway was below a vestibular-related threshold level, and (2) that subjects with vestibular loss did not utilize visual cues to a greater extent than normal subjects; that is, a fundamental change in visual system "gain" was not used to compensate for a vestibular deficit. An unexpected finding was that the amplitude of body sway induced by visual surround motion could be almost 3 times greater than the amplitude of the visual stimulus in normal subjects and subjects with vestibular loss. This occurred in conditions where somatosensory cues were inaccurate and at low stimulus amplitudes. A control system model of visually induced postural sway was developed to explain this finding. 
For both subject groups, the amplitude of visually induced sway was smaller by a factor of about 4 in tests where somatosensory cues provided accurate versus inaccurate orientation information. This implied (1) that the subjects experiencing vestibular loss did not utilize somatosensory cues to a greater extent than normal subjects; that is, changes in somatosensory system "gain" were not used to compensate for a vestibular deficit, and (2) that the threshold for the use of vestibular cues in normal subjects was apparently lower in test conditions where somatosensory cues were providing accurate orientation information.
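    The saturating stimulus-response relationship described in this record can be sketched minimally as follows. This is a toy illustration only, not the authors' control-system model; the gain and saturation values are hypothetical, chosen to reproduce the two qualitative findings (sway up to about 3 times the stimulus amplitude below saturation, and a vestibular-related cap above it):

    ```python
    def visually_induced_sway(stim_amp, gain=3.0, saturation=2.0):
        """Toy saturating response: sway grows linearly with visual stimulus
        amplitude (gain > 1 lets sway exceed the stimulus, as observed at low
        amplitudes with inaccurate somatosensory cues) until a vestibular-related
        saturation level caps it."""
        return min(gain * stim_amp, saturation)
    ```

    In subjects with vestibular loss the cap would be absent, which this sketch would model by setting `saturation` to infinity.
    
    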

  14. Differential processing of binocular and monocular gloss cues in human visual cortex.

    PubMed

    Sun, Hua-Chun; Di Luca, Massimiliano; Ban, Hiroshi; Muryy, Alexander; Fleming, Roland W; Welchman, Andrew E

    2016-06-01

    The visual impression of an object's surface reflectance ("gloss") relies on a range of visual cues, both monocular and binocular. Whereas previous imaging work has identified processing within ventral visual areas as important for monocular cues, little is known about cortical areas involved in processing binocular cues. Here, we used human functional MRI (fMRI) to test for brain areas selectively involved in the processing of binocular cues. We manipulated stereoscopic information to create four conditions that differed in their disparity structure and in the impression of surface gloss that they evoked. We performed multivoxel pattern analysis to find areas whose fMRI responses allow classes of stimuli to be distinguished based on their depth structure vs. material appearance. We show that higher dorsal areas play a role in processing binocular gloss information, in addition to known ventral areas involved in material processing, with ventral area lateral occipital responding to both object shape and surface material properties. Moreover, we tested for similarities between the representation of gloss from binocular cues and monocular cues. Specifically, we tested for transfer in the decoding performance of an algorithm trained on glossy vs. matte objects defined by either binocular or by monocular cues. We found transfer effects from monocular to binocular cues in dorsal visual area V3B/kinetic occipital (KO), suggesting a shared representation of the two cues in this area. These results indicate the involvement of mid- to high-level visual circuitry in the estimation of surface material properties, with V3B/KO potentially playing a role in integrating monocular and binocular cues. Copyright © 2016 the American Physiological Society.

  15. Out of sight, out of mind: racial retrieval cues increase the accessibility of social justice concepts.

    PubMed

    Salter, Phia S; Kelley, Nicholas J; Molina, Ludwin E; Thai, Luyen T

    2017-09-01

    Photographs provide critical retrieval cues for personal remembering, but few studies have considered this phenomenon at the collective level. In this research, we examined the psychological consequences of visual attention to the presence (or absence) of racially charged retrieval cues within American racial segregation photographs. We hypothesised that attention to racial retrieval cues embedded in historical photographs would increase social justice concept accessibility. In Study 1, we recorded gaze patterns with an eye-tracker among participants viewing images that contained racial retrieval cues or were digitally manipulated to remove them. In Study 2, we manipulated participants' gaze behaviour by either directing visual attention toward racial retrieval cues, away from racial retrieval cues, or directing attention within photographs where racial retrieval cues were missing. Across Studies 1 and 2, visual attention to racial retrieval cues in photographs documenting historical segregation predicted social justice concept accessibility.

  16. Capture of Xylosandrus crassiusculus and other Scolytinae (Coleoptera, Curculionidae) in response to visual and volatile cues

    USDA-ARS?s Scientific Manuscript database

    In June and July 2011, traps were deployed in Tuskegee National Forest, Macon County, Alabama to test the influence of chemical and visual cues on the capture of bark and ambrosia beetles (Coleoptera: Curculionidae: Scolytinae). The first experiment investigated t...

  17. Impact of Visual, Vocal, and Lexical Cues on Judgments of Counselor Qualities

    ERIC Educational Resources Information Center

    Strahan, Carole; Zytowski, Donald G.

    1976-01-01

    Undergraduate students (N=130) rated Carl Rogers via visual, lexical, vocal, or vocal-lexical communication channels. Lexical cues were more important in creating favorable impressions among females. Subsequent exposure to combined visual-vocal-lexical cues resulted in warmer and less distant ratings, but not on a consistent basis. (Author)

  18. Visual and auditory cue integration for the generation of saccadic eye movements in monkeys and lever pressing in humans.

    PubMed

    Schiller, Peter H; Kwak, Michelle C; Slocum, Warren M

    2012-08-01

    This study examined how effectively visual and auditory cues can be integrated in the brain for the generation of motor responses. The latencies with which saccadic eye movements are produced in humans and monkeys form, under certain conditions, a bimodal distribution, the first mode of which has been termed express saccades. In humans, a much higher percentage of express saccades is generated when both visual and auditory cues are provided compared with the single presentation of these cues [H. C. Hughes et al. (1994) J. Exp. Psychol. Hum. Percept. Perform., 20, 131-153]. In this study, we addressed two questions: first, do monkeys also integrate visual and auditory cues for express saccade generation as do humans and second, does such integration take place in humans when, instead of eye movements, the task is to press levers with fingers? Our results show that (i) in monkeys, as in humans, the combined visual and auditory cues generate a much higher percentage of express saccades than do singly presented cues and (ii) the latencies with which levers are pressed by humans are shorter when both visual and auditory cues are provided compared with the presentation of single cues, but the distribution in all cases is unimodal; response latencies in the express range seen in the execution of saccadic eye movements are not obtained with lever pressing. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  19. Negative mood increases selective attention to food cues and subjective appetite.

    PubMed

    Hepworth, Rebecca; Mogg, Karin; Brignell, Catherine; Bradley, Brendan P

    2010-02-01

    Following negative reinforcement and affect-regulation models of dysfunctional appetitive motivation, this study examined the effect of negative mood on objective and subjective cognitive indices of motivation for food; i.e., attentional bias for food cues and self-reported hunger/urge to eat, respectively. The study extended previous research on the effect of mood on food motivation by using (i) an experimental mood manipulation, (ii) an established index of attentional bias from the visual-probe task and (iii) pictorial food cues, which have greater ecological validity than word stimuli. Young female adults (n=80) were randomly allocated to a neutral or negative mood induction procedure. Attentional biases were assessed at two cue exposure durations (500 and 2000ms). Results showed that negative mood increased both attentional bias for food cues and subjective appetite. Attentional bias and subjective appetite were positively inter-correlated, suggesting a common mechanism, i.e. activation of the food-reward system. Attentional bias was also associated with trait eating style, such as external and restrained eating. Thus, current mood and trait eating style each influenced motivation for food (as reflected by subjective appetite and attentional bias). Findings relate to models of cognitive mechanisms underlying normal and dysfunctional appetitive motivation and eating behaviour. 2009 Elsevier Ltd. All rights reserved.

  20. Audio–visual interactions for motion perception in depth modulate activity in visual area V3A

    PubMed Central

    Ogawa, Akitoshi; Macaluso, Emiliano

    2013-01-01

    Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414

  1. Ocean acidification and responses to predators: can sensory redundancy reduce the apparent impacts of elevated CO2 on fish?

    PubMed

    Lönnstedt, Oona M; Munday, Philip L; McCormick, Mark I; Ferrari, Maud C O; Chivers, Douglas P

    2013-09-01

    Carbon dioxide (CO2) levels in the atmosphere and surface ocean are rising at an unprecedented rate due to sustained and accelerating anthropogenic CO2 emissions. Previous studies have documented that exposure to elevated CO2 causes impaired antipredator behavior by coral reef fish in response to chemical cues associated with predation. However, whether ocean acidification will impair visual recognition of common predators is currently unknown. This study examined whether sensory compensation in the presence of multiple sensory cues could reduce the impacts of ocean acidification on antipredator responses. When exposed to seawater enriched with levels of CO2 predicted for the end of this century (880 μatm CO2), prey fish completely lost their response to conspecific alarm cues. While the visual response to a predator was also affected by high CO2, it was not entirely lost. Fish exposed to elevated CO2, spent less time in shelter than current-day controls and did not exhibit antipredator signaling behavior (bobbing) when multiple predator cues were present. They did, however, reduce feeding rate and activity levels to the same level as controls. The results suggest that the response of fish to visual cues may partially compensate for the lack of response to chemical cues. Fish subjected to elevated CO2 levels, and exposed to chemical and visual predation cues simultaneously, responded with the same intensity as controls exposed to visual cues alone. However, these responses were still less than control fish simultaneously exposed to chemical and visual predation cues. Consequently, visual cues improve antipredator behavior of CO2 exposed fish, but do not fully compensate for the loss of response to chemical cues. The reduced ability to correctly respond to a predator will have ramifications for survival in encounters with predators in the field, which could have repercussions for population replenishment in acidified oceans.

  2. Contextual Cueing Effect in Spatial Layout Defined by Binocular Disparity

    PubMed Central

    Zhao, Guang; Zhuang, Qian; Ma, Jie; Tu, Shen; Liu, Qiang; Sun, Hong-jin

    2017-01-01

    Repeated visual context induces higher search efficiency, revealing a contextual cueing effect, which depends on the association between the target and its visual context. In this study, participants performed a visual search task in which search items were presented with depth information defined by binocular disparity. When the 3-dimensional (3D) configurations were repeated over blocks, the contextual cueing effect was obtained (Experiment 1). When depth information varied randomly over repeated configurations, visual search was not facilitated and the contextual cueing effect was largely eliminated (Experiment 2). However, when the search items were given a small random displacement in the 2-dimensional (2D) plane while the depth information was held constant, the contextual cueing effect was preserved (Experiment 3). We concluded that the contextual cueing effect is robust in contexts defined by 3D space with stereoscopic information and, more importantly, that the visual system prioritizes stereoscopic information when learning spatial layouts in which depth information is available. PMID:28912739
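    The contextual cueing effect described above is conventionally measured as a response-time advantage for repeated over novel configurations. A minimal illustrative sketch (function name and RT values are hypothetical, not from the study):

    ```python
    # Hypothetical sketch: quantifying a contextual cueing effect as the mean
    # search-RT advantage (in ms) for repeated over novel configurations.
    from statistics import mean

    def contextual_cueing_effect(rts_repeated, rts_novel):
        """Positive values indicate faster search in repeated contexts."""
        return mean(rts_novel) - mean(rts_repeated)

    # Made-up example: repeated 3D configurations searched faster on average.
    repeated = [820, 790, 805, 770]
    novel = [900, 880, 910, 875]
    print(contextual_cueing_effect(repeated, novel))  # → 95.0
    ```

    In the study's terms, Experiment 2 corresponds to this difference shrinking toward zero when depth information is randomized, and Experiment 3 to it surviving small 2D displacements.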

  4. The influence of imagery vividness on cognitive and perceptual cues in circular auditorily-induced vection.

    PubMed

    Väljamäe, Aleksander; Sell, Sara

    2014-01-01

    In the absence of other congruent multisensory motion cues, the contribution of sound to illusions of self-motion (vection) is relatively weak and often attributed to purely cognitive, top-down processes. The present study addressed the influence of cognitive and perceptual factors in the experience of circular, yaw auditorily-induced vection (AIV), focusing on participants' imagery vividness scores. We used different rotating sound sources (acoustic landmark vs. movable types) and their filtered versions that provided different binaural cues (interaural time or level differences, ITD vs. ILD) when delivered via a loudspeaker array. The significant differences in circular vection intensity showed that (1) AIV was stronger for rotating sound fields containing auditory landmarks as compared to movable sound objects; (2) ITD-based acoustic cues were more instrumental than ILD-based ones for horizontal AIV; and (3) individual differences in imagery vividness significantly influenced the effects of contextual and perceptual cues. While participants with high scores of kinesthetic and visual imagery were helped by vection "rich" cues, i.e., acoustic landmarks and ITD cues, the participants from the low-vivid imagery group did not benefit from these cues automatically. Only when specifically asked to use their imagination intentionally did these external cues start influencing vection sensation in a similar way to high-vivid imagers. These findings are in line with recent fMRI work suggesting that high-vivid imagers employ automatic, almost unconscious mechanisms in imagery generation, while low-vivid imagers rely on a more schematic and conscious framework. Consequently, our results provide additional insight into the interaction between perceptual and contextual cues when experiencing purely auditorily or multisensory induced vection.

  6. A magnetoencephalography study of visual processing of pain anticipation.

    PubMed

    Machado, Andre G; Gopalakrishnan, Raghavan; Plow, Ela B; Burgess, Richard C; Mosher, John C

    2014-07-15

    Anticipating pain is important for avoiding injury; however, in chronic pain patients, anticipatory behavior can become maladaptive, leading to sensitization and limiting function. Knowledge of the networks involved in pain anticipation and conditioning over time could help devise novel, better-targeted therapies. Using magnetoencephalography, we evaluated the neural processing of pain anticipation in 10 healthy subjects. Anticipatory cortical activity elicited by consecutive visual cues signifying an imminent painful stimulus was compared with that elicited by cues signifying a nonpainful stimulus or no stimulus. We found that the neural processing of visually evoked pain anticipation involves the primary visual cortex along with cingulate and frontal regions. Visual cortex could quickly and independently encode and discriminate between visual cues associated with pain anticipation and no pain during preconscious phases following object presentation. When evaluating the effect of task repetition on participating cortical areas, we found that activity of prefrontal and cingulate regions was most prominent early on, when subjects were still naive to a cue's contextual meaning. Visual cortical activity was significant throughout later phases. Although visual cortex may precisely and time-efficiently decode cues anticipating pain or no pain, prefrontal areas establish the context associated with each cue. These findings have important implications for the processes involved in pain anticipation and maladaptive pain conditioning.

  7. Visual influences on auditory spatial learning

    PubMed Central

    King, Andrew J.

    2008-01-01

    The visual and auditory systems frequently work together to facilitate the identification and localization of objects and events in the external world. Experience plays a critical role in establishing and maintaining congruent visual–auditory associations, so that the different sensory cues associated with targets that can be both seen and heard are synthesized appropriately. For stimulus location, visual information is normally more accurate and reliable and provides a reference for calibrating the perception of auditory space. During development, vision plays a key role in aligning neural representations of space in the brain, as revealed by the dramatic changes produced in auditory responses when visual inputs are altered, and is used throughout life to resolve short-term spatial conflicts between these modalities. However, accurate, and even supra-normal, auditory localization abilities can be achieved in the absence of vision, and the capacity of the mature brain to relearn to localize sound in the presence of substantially altered auditory spatial cues does not require visuomotor feedback. Thus, while vision is normally used to coordinate information across the senses, the neural circuits responsible for spatial hearing can be recalibrated in a vision-independent fashion. Nevertheless, early multisensory experience appears to be crucial for the emergence of an ability to match signals from different sensory modalities and therefore for the outcome of audiovisual-based rehabilitation of deaf patients in whom hearing has been restored by cochlear implantation. PMID:18986967

  8. Unconscious cues bias first saccades in a free-saccade task.

    PubMed

    Huang, Yu-Feng; Tan, Edlyn Gui Fang; Soon, Chun Siong; Hsieh, Po-Jang

    2014-10-01

    Visual-spatial attention can be biased towards salient visual information without visual awareness. It is unclear, however, whether such bias can further influence free choices, such as saccades in a free-viewing task. In our experiment, we presented visual cues below the awareness threshold immediately before people made free saccades. Our results showed that masked cues could influence the direction and latency of the first free saccade, suggesting that salient visual information can unconsciously influence free actions.

  9. Suggested Interactivity: Seeking Perceived Affordances for Information Visualization.

    PubMed

    Boy, Jeremy; Eveillard, Louis; Detienne, Françoise; Fekete, Jean-Daniel

    2016-01-01

    In this article, we investigate methods for suggesting the interactivity of online visualizations embedded with text. We first assess the need for such methods by conducting three initial experiments on Amazon's Mechanical Turk. We then present a design space for Suggested Interactivity (i.e., visual cues used as perceived affordances; SI), based on a survey of 382 HTML5 and visualization websites. Finally, we assess the effectiveness of three SI cues we designed for suggesting the interactivity of bar charts embedded with text. Our results show that only one cue (SI3) was successful in inciting participants to interact with the visualizations, and we hypothesize that this is because this particular cue provided feedforward.

  10. External and internal facial features modulate processing of vertical but not horizontal spatial relations.

    PubMed

    Meinhardt, Günter; Kurbel, David; Meinhardt-Injac, Bozana; Persike, Malte

    2018-03-22

    Some years ago, an asymmetry was reported in the inversion effect for horizontal (H) and vertical (V) relational face manipulations (Goffaux & Rossion, 2007). Subsequent research examined whether a specific disruption of long-range relations underlies the H/V inversion asymmetry (Sekunova & Barton, 2008). Here, we tested how detection of changes in interocular distance (H) and eye height (V) depends on cardinal internal features and the external feature surround. Results replicated the H/V inversion asymmetry. Moreover, we found very different face cue dependencies for the two change types. Performance and inversion effects did not depend on the presence of other face cues for detecting H changes. In contrast, accuracy for detecting V changes strongly depended on internal and external features, showing cumulative improvement when more cues were added. Inversion effects were generally large, and larger with external feature surround. The cue independence in detecting H relational changes indicates specialized local processing tightly tuned to the eye region, while the strong cue dependency in detecting V relational changes indicates a global mechanism of cue integration across different face regions. These findings suggest that the H/V asymmetry of the inversion effect rests on an H/V anisotropy of face cue dependency, since only the global V mechanism suffers from disruption of cue integration as the major effect of face inversion.

  11. Visual speech influences speech perception immediately but not automatically.

    PubMed

    Mitterer, Holger; Reinisch, Eva

    2017-02-01

    Two experiments examined the time course of the use of auditory and visual speech cues to spoken word recognition using an eye-tracking paradigm. Results of the first experiment showed that the use of visual speech cues from lipreading is reduced if concurrently presented pictures require a division of attentional resources. This reduction was evident even when listeners' eye gaze was on the speaker rather than the (static) pictures. Experiment 2 used a deictic hand gesture to foster attention to the speaker. At the same time, the visual processing load was reduced by keeping the visual display constant over a fixed number of successive trials. Under these conditions, the visual speech cues from lipreading were used. Moreover, the eye-tracking data indicated that visual information was used immediately and even earlier than auditory information. In combination, these data indicate that visual speech cues are not used automatically, but if they are used, they are used immediately.

  12. Retrospective cues based on object features improve visual working memory performance in older adults.

    PubMed

    Gilchrist, Amanda L; Duarte, Audrey; Verhaeghen, Paul

    2016-01-01

    Research with younger adults has shown that retrospective cues can be used to orient top-down attention toward relevant items in working memory. We examined whether older adults could take advantage of these cues to improve memory performance. Younger and older adults were presented with visual arrays of five colored shapes; during maintenance, participants were presented either with an informative cue based on an object feature (here, object shape or color) that would be probed, or with an uninformative, neutral cue. Although older adults were less accurate overall, both age groups benefited from the presentation of an informative, feature-based cue relative to a neutral cue. Surprisingly, we also observed differences in the effectiveness of shape versus color cues and their effects upon post-cue memory load. These results suggest that older adults can use top-down attention to remove irrelevant items from visual working memory, provided that task-relevant features function as cues.

  13. Chemical and visual communication during mate searching in rock shrimp.

    PubMed

    Díaz, Eliecer R; Thiel, Martin

    2004-06-01

    Mate searching in crustaceans depends on different communicational cues, of which chemical and visual cues are most important. Herein we examined the role of chemical and visual communication during mate searching and assessment in the rock shrimp Rhynchocinetes typus. Adult male rock shrimp experience major ontogenetic changes. The terminal molt stages (named "robustus") are dominant and capable of monopolizing females during the mating process. Previous studies had shown that most females preferably mate with robustus males, but how these dominant males and receptive females find each other is uncertain, and is the question we examined herein. In a Y-maze designed to test for the importance of waterborne chemical cues, we observed that females approached the robustus male significantly more often than the typus male. Robustus males, however, were unable to locate receptive females via chemical signals. Using an experimental set-up that allowed testing for the importance of visual cues, we demonstrated that receptive females do not use visual cues to select robustus males, but robustus males use visual cues to find receptive females. Visual cues used by the robustus males were the tumults created by agitated aggregations of subordinate typus males around the receptive females. These results indicate a strong link between sexual communication and the mating system of rock shrimp in which dominant males monopolize receptive females. We found that females and males use different (sex-specific) communicational cues during mate searching and assessment, and that the sexual communication of rock shrimp is similar to that of the American lobster, where females are first attracted to the dominant males by chemical cues emitted by these males. A brief comparison between these two species shows that female behaviors during sexual communication contribute strongly to the outcome of mate searching and assessment.

  14. Can Short Duration Visual Cues Influence Students' Reasoning and Eye Movements in Physics Problems?

    ERIC Educational Resources Information Center

    Madsen, Adrian; Rouinfar, Amy; Larson, Adam M.; Loschky, Lester C.; Rebello, N. Sanjay

    2013-01-01

    We investigate the effects of visual cueing on students' eye movements and reasoning on introductory physics problems with diagrams. Participants in our study were randomly assigned to either the cued or noncued conditions, which differed by whether the participants saw conceptual physics problems overlaid with dynamic visual cues. Students in the…

  15. Sensitivity to Visual Prosodic Cues in Signers and Nonsigners

    ERIC Educational Resources Information Center

    Brentari, Diane; Gonzalez, Carolina; Seidl, Amanda; Wilbur, Ronnie

    2011-01-01

    Three studies are presented in this paper that address how nonsigners perceive the visual prosodic cues in a sign language. In Study 1, adult American nonsigners and users of American Sign Language (ASL) were compared on their sensitivity to the visual cues in ASL Intonational Phrases. In Study 2, hearing, nonsigning American infants were tested…

  16. Enhancing Learning from Dynamic and Static Visualizations by Means of Cueing

    ERIC Educational Resources Information Center

    Kuhl, Tim; Scheiter, Katharina; Gerjets, Peter

    2012-01-01

    The current study investigated whether learning from dynamic and two presentation formats for static visualizations can be enhanced by means of cueing. One hundred and fifty university students were randomly assigned to six conditions, resulting from a 2x3-design, with cueing (with/without) and type of visualization (dynamic, static-sequential,…

  17. Motivating contributions to online forums: can locus of control moderate the effects of interface cues?

    PubMed

    Kim, Hyang-Sook; Sundar, S Shyam

    2016-01-01

    In an effort to encourage users to participate rather than lurk, online health forums provide authority badges (e.g., guru) to frequent contributors and popularity indicators (e.g., number of views) to their postings. Studies have shown the latter to be more effective, implying that bulletin-board users are motivated by external validation of their contributions. However, no consideration has yet been given to individual differences in the influence of such popularity indicators. Personality psychology suggests that individuals with external, rather than internal, locus of control are more likely to be other-directed and therefore more likely to be motivated by interface cues showing the bandwagon effect of their online posts. We investigate this hypothesis by analyzing data from a 2 (high vs. low authority cue) × 2 (strong vs. weak bandwagon cue) experiment with an online health community. Results show that strong bandwagon cues promote sense of community among users with internal, rather than external, locus of control. When bandwagon cues are weak, bestowal of high authority serves to heighten their sense of agency. Contrary to prediction, weak bandwagon cues appear to promote sense of community and sense of agency among those with external locus of control. Theoretical and practical implications are discussed.

  18. Con-Text: Text Detection for Fine-grained Object Classification.

    PubMed

    Karaoglu, Sezer; Tao, Ran; van Gemert, Jan C; Gevers, Theo

    2017-05-24

    This work focuses on fine-grained object classification using recognized scene text in natural images. While the state-of-the-art relies on visual cues only, this paper is the first work to propose combining textual and visual cues. Another novelty is the textual cue extraction. Unlike state-of-the-art text detection methods, we focus more on the background than on text regions. Once text regions are detected, they are further processed by two methods to perform text recognition, i.e., the ABBYY commercial OCR engine and a state-of-the-art character recognition algorithm. Then, to perform textual cue encoding, bi- and trigrams are formed between the recognized characters by considering the proposed spatial pairwise constraints. Finally, the extracted visual and textual cues are combined for fine-grained classification. The proposed method is validated on four publicly available datasets: ICDAR03, ICDAR13, Con-Text and Flickr-logo. We improve the state-of-the-art end-to-end character recognition by a large margin of 15% on ICDAR03. We show that textual cues are useful in addition to visual cues for fine-grained classification, and that textual cues are also useful for logo retrieval. Adding textual cues to visual cues outperforms visual-only and textual-only approaches in fine-grained classification (70.7% vs. 60.3%) and logo retrieval (57.4% vs. 54.8%).
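    The n-gram encoding step in this abstract can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: the function name, the single x-coordinate per character, and the pixel threshold are all simplifying assumptions for the paper's spatial pairwise constraints.

    ```python
    # Illustrative sketch (not the paper's code): form bigrams and trigrams from
    # recognized characters, keeping only n-grams whose successive characters lie
    # close together, as a stand-in for spatial pairwise constraints.
    from itertools import combinations

    def ngrams_with_spatial_constraint(chars, max_gap=50):
        """chars: list of (character, x_position) pairs from a text recognizer.
        Returns bi-/trigrams whose successive characters are within max_gap
        pixels of each other along x."""
        out = []
        ordered = sorted(chars, key=lambda c: c[1])  # left-to-right reading order
        for n in (2, 3):
            for combo in combinations(ordered, n):
                # keep the n-gram only if every successive gap is small enough
                if all(combo[i + 1][1] - combo[i][1] <= max_gap for i in range(n - 1)):
                    out.append("".join(c[0] for c in combo))
        return out

    chars = [("c", 0), ("a", 20), ("b", 35), ("z", 300)]
    print(ngrams_with_spatial_constraint(chars))  # "z" is too far away to pair
    ```

    The resulting n-gram strings would then be hashed or histogrammed into a textual feature vector and concatenated with visual features for the classifier.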

  19. Location cue validity affects inhibition of return of visual processing.

    PubMed

    Wright, R D; Richard, C M

    2000-01-01

    Inhibition-of-return is the process by which visual search for an object positioned among others is biased toward novel rather than previously inspected items. It is thought to occur automatically and to increase search efficiency. We examined this phenomenon by studying the facilitative and inhibitory effects of location cueing on target-detection response times in a search task. The results indicated that facilitation was a reflexive consequence of cueing whereas inhibition appeared to depend on cue informativeness. More specifically, the inhibition-of-return effect occurred only when the cue provided no information about the impending target's location. We suggest that the results are consistent with the notion of two levels of visual processing. The first involves rapid and reflexive operations that underlie the facilitative effects of location cueing on target detection. The second involves a rapid but goal-driven inhibition procedure that the perceiver can invoke if doing so will enhance visual search performance.

  20. Soldier-Robot Team Communication: An Investigation of Exogenous Orienting Visual Display Cues and Robot Reporting Preferences

    DTIC Science & Technology

    2018-02-12

    usability preference. Results under the second focus showed that the frequency with which participants expected status updates differed depending upon the...assistance requests for both navigational route and building selection depending on the type of exogenous visual cues displayed? 3) Is there a difference...in response time to visual reports for both navigational route and building selection depending on the type of exogenous visual cues displayed? 4

  1. Do you see what I hear? Vantage point preference and visual dominance in a time-space synaesthete.

    PubMed

    Jarick, Michelle; Stewart, Mark T; Smilek, Daniel; Dixon, Michael J

    2013-01-01

    Time-space synaesthetes "see" time units organized in a spatial form. While the structure might be invariant for most synaesthetes, the perspective by which some view their calendar is somewhat flexible. One well-studied synaesthete L adopts different viewpoints for months seen vs. heard. Interestingly, L claims to prefer her auditory perspective, even though the month names are represented visually upside down. To verify this, we used a spatial-cueing task that included audiovisual month cues. These cues were either congruent with L's preferred "auditory" viewpoint (auditory-only and auditory + month inverted) or incongruent (upright visual-only and auditory + month upright). Our prediction was that L would show enhanced cueing effects (larger response time difference between valid and invalid targets) following the audiovisual congruent cues since both elicit the "preferred" auditory perspective. Also, when faced with conflicting cues, we predicted L would choose the preferred auditory perspective over the visual perspective. As we expected, L did show enhanced cueing effects following the audiovisual congruent cues that corresponded with her preferred auditory perspective, but that the visual perspective dominated when L was faced with both viewpoints simultaneously. The results are discussed with relation to the reification hypothesis of sequence space synaesthesia (Eagleman, 2009).

  3. Does Vaping in E-Cigarette Advertisements Affect Tobacco Smoking Urge, Intentions, and Perceptions in Daily, Intermittent, and Former Smokers?

    PubMed

    Maloney, Erin K; Cappella, Joseph N

    2016-01-01

    Visual depictions of vaping in electronic cigarette advertisements may serve as smoking cues to smokers and former smokers, increasing urge to smoke and smoking behavior, and decreasing self-efficacy, attitudes, and intentions to quit or abstain. After assessing baseline urge to smoke, 301 daily smokers, 272 intermittent smokers, and 311 former smokers were randomly assigned to view three e-cigarette commercials with vaping visuals (the cue condition) or without vaping visuals (the no-cue condition), or to answer unrelated media use questions (the no-ad condition). Participants then answered a posttest questionnaire assessing the outcome variables of interest. Relative to the other conditions, daily smokers in the cue condition reported greater urge to smoke a tobacco cigarette and showed a marginally significant increase in the incidence of actually smoking a tobacco cigarette during the experiment. Former smokers in the cue condition reported lower intentions to abstain from smoking than former smokers in the other conditions. No significant differences emerged among intermittent smokers across conditions. These data suggest that visual depictions of vaping in e-cigarette commercials increase daily smokers' urge to smoke cigarettes and may lead to more actual smoking behavior. For former smokers, these cues in advertising may undermine abstinence efforts. Intermittent smokers did not appear to be reactive to these cues. The lack of significant differences between the no-cue and no-ad conditions suggests that visual depictions of e-cigarettes and vaping function as smoking cues and that cue reactivity is the mechanism through which these effects were obtained.

  4. First-Pass Processing of Value Cues in the Ventral Visual Pathway.

    PubMed

    Sasikumar, Dennis; Emeric, Erik; Stuphorn, Veit; Connor, Charles E

    2018-02-19

    Real-world value often depends on subtle, continuously variable visual cues specific to particular object categories, like the tailoring of a suit, the condition of an automobile, or the construction of a house. Here, we used microelectrode recording in behaving monkeys to test two possible mechanisms for category-specific value-cue processing: (1) previous findings suggest that prefrontal cortex (PFC) identifies object categories, and based on category identity, PFC could use top-down attentional modulation to enhance visual processing of category-specific value cues, providing signals to PFC for calculating value; and (2) a faster mechanism would be first-pass visual processing of category-specific value cues, immediately providing the necessary visual information to PFC. This, however, would require learned mechanisms for processing the appropriate cues in a given object category. To test these hypotheses, we trained monkeys to discriminate value in four letter-like stimulus categories. Each category had a different, continuously variable shape cue that signified value (liquid reward amount) as well as other cues that were irrelevant. Monkeys chose between stimuli of different reward values. Consistent with the first-pass hypothesis, we found early signals for category-specific value cues in area TE (the final stage in the monkey ventral visual pathway) beginning 81 ms after stimulus onset, essentially at the start of TE responses. Task-related activity emerged in lateral PFC approximately 40 ms later and consisted mainly of category-invariant value tuning. Our results show that, for familiar, behaviorally relevant object categories, high-level ventral pathway cortex can implement rapid, first-pass processing of category-specific value cues.

  5. Sight or Scent: Lemur Sensory Reliance in Detecting Food Quality Varies with Feeding Ecology

    PubMed Central

    Rushmore, Julie; Leonhardt, Sara D.; Drea, Christine M.

    2012-01-01

    Visual and olfactory cues provide important information to foragers, yet we know little about species differences in sensory reliance during food selection. In a series of experimental foraging studies, we examined the relative reliance on vision versus olfaction in three diurnal primate species with diverse feeding ecologies, including folivorous Coquerel's sifakas (Propithecus coquereli), frugivorous ruffed lemurs (Varecia variegata spp.), and generalist ring-tailed lemurs (Lemur catta). We used animals with known color-vision status and foods for which different maturation stages (and hence quality) produce distinct visual and olfactory cues (the latter determined chemically). We first showed that lemurs preferentially selected high-quality foods over low-quality foods when visual and olfactory cues were simultaneously available for both food types. Next, using a novel apparatus in a series of discrimination trials, we either manipulated food quality (while holding sensory cues constant) or manipulated sensory cues (while holding food quality constant). Among our study subjects that showed relatively strong preferences for high-quality foods, folivores required both sensory cues combined to reliably identify their preferred foods, whereas generalists could identify their preferred foods using either cue alone, and frugivores could identify their preferred foods using olfactory, but not visual, cues alone. Moreover, when only high-quality foods were available, folivores and generalists used visual rather than olfactory cues to select food, whereas frugivores used both cue types equally. Lastly, individuals in all three of the study species predominantly relied on sight when choosing between low-quality foods, but species differed in the strength of their sensory biases. Our results generally emphasize visual over olfactory reliance in foraging lemurs, but we suggest that the relative sensory reliance of animals may vary with their feeding ecology.
PMID:22870229

  6. The time course of protecting a visual memory representation from perceptual interference

    PubMed Central

    van Moorselaar, Dirk; Gunseli, Eren; Theeuwes, Jan; Olivers, Christian N. L.

    2015-01-01

    Cueing a remembered item during the delay of a visual memory task leads to enhanced recall of the cued item compared to when an item is not cued. This cueing benefit has been proposed to reflect attention within visual memory being shifted from a distributed mode to a focused mode, thus protecting the cued item against perceptual interference. Here we investigated the dynamics of building up this mnemonic protection against visual interference by systematically varying the stimulus onset asynchrony (SOA) between cue onset and a subsequent visual mask in an orientation memory task. Experiment 1 showed that a cue counteracted the deteriorating effect of pattern masks. Experiment 2 demonstrated that building up this protection is a continuous process that is completed in approximately half a second after cue onset. The similarities between shifting attention in perceptual and remembered space are discussed. PMID:25628555

  7. Accessing long-term memory representations during visual change detection.

    PubMed

    Beck, Melissa R; van Lamsweerde, Amanda E

    2011-04-01

    In visual change detection tasks, providing a cue to the change location concurrent with the test image (post-cue) can improve performance, suggesting that, without a cue, not all encoded representations are automatically accessed. Our studies examined the possibility that post-cues can encourage the retrieval of representations stored in long-term memory (LTM). Participants detected changes in images composed of familiar objects. Performance was better when the cue directed attention to the post-change object. Supporting the role of LTM in the cue effect, the effect was similar regardless of whether the cue was presented during the inter-stimulus interval, concurrent with the onset of the test image, or after the onset of the test image. Furthermore, the post-cue effect and LTM performance were similarly influenced by encoding time. These findings demonstrate that monitoring the visual world for changes does not automatically engage LTM retrieval.

  8. Role of Self-Generated Odor Cues in Contextual Representation

    PubMed Central

    Aikath, Devdeep; Weible, Aldis P; Rowland, David C; Kentros, Clifford G

    2014-01-01

    As first demonstrated in the patient H.M., the hippocampus is critically involved in forming episodic memories, the recall of “what” happened “where” and “when.” In rodents, the clearest functional correlate of hippocampal principal neurons is the place field: a cell fires predominantly when the animal is in a specific part of the environment, typically defined relative to the available visuospatial cues. However, rodents have relatively poor visual acuity. Furthermore, they are highly adept at navigating in total darkness. This raises the question of how other sensory modalities might contribute to a hippocampal representation of an environment. Rodents have a highly developed olfactory system, suggesting that cues such as odor trails may be important. To test this, we familiarized mice to a visually cued environment over a number of days while maintaining odor cues. During familiarization, self-generated odor cues unique to each animal were collected by re-using absorbent paperboard flooring from one session to the next. Visual and odor cues were then put in conflict by counter-rotating the recording arena and the flooring. Perhaps surprisingly, place fields seemed to follow the visual cue rotation exclusively, raising the question of whether olfactory cues have any influence at all on a hippocampal spatial representation. However, subsequent removal of the familiar, self-generated odor cues severely disrupted both long-term stability and rotation to visual cues in a novel environment. Our data suggest that odor cues, in the absence of additional rule learning, do not provide a discriminative spatial signal that anchors place fields. Such cues do, however, become integral to the context over time and exert a powerful influence on the stability of its hippocampal representation. © 2014 The Authors. Hippocampus Published by Wiley Periodicals, Inc. PMID:24753119

  9. Low-level visual attention and its relation to joint attention in autism spectrum disorder.

    PubMed

    Jaworski, Jessica L Bean; Eigsti, Inge-Marie

    2017-04-01

    Visual attention is integral to social interaction and is a critical building block for development in other domains (e.g., language). Furthermore, atypical attention (especially joint attention) is one of the earliest markers of autism spectrum disorder (ASD). The current study assesses low-level visual attention and its relation to social attentional processing in youth with ASD and typically developing (TD) youth, aged 7 to 18 years. The findings indicate difficulty overriding incorrect attentional cues in ASD, particularly with non-social (arrow) cues relative to social (face) cues. The findings also show reduced competition in ASD from cues that remain on-screen. Furthermore, social attention, autism severity, and age were all predictors of competing cue processing. The results suggest that individuals with ASD may be biased towards speeded rather than accurate responding, and further, that reduced engagement with visual information may impede responses to visual attentional cues. Once attention is engaged, individuals with ASD appear to interpret directional cues as meaningful. These findings from a controlled, experimental paradigm were mirrored in results from an ecologically valid measure of social attention. Attentional difficulties may be exacerbated during the complex and dynamic experience of actual social interaction. Implications for intervention are discussed.

  10. Phasic alertness cues modulate visual processing speed in healthy aging.

    PubMed

    Haupt, Marleen; Sorg, Christian; Napiórkowski, Natan; Finke, Kathrin

    2018-05-31

    Warning signals temporarily increase the rate of visual information uptake in younger participants and thus optimize perception in critical situations. It is unclear whether such important preparatory processes are preserved in healthy aging. We parametrically assessed the effects of auditory alertness cues on visual processing speed and their time course using a whole report paradigm based on the computational Theory of Visual Attention. We replicated prior findings of significant alerting benefits in younger adults. In conditions with short cue-target onset asynchronies, this effect was baseline-dependent. As younger participants with high baseline speed did not show a benefit, an inverted U-shaped function of phasic alerting and visual processing speed was implied. Older adults also showed a significant cue-induced benefit. Bayesian analyses indicated that the cueing benefit on visual processing speed was comparably strong across age groups. Our results indicate that in aging individuals, comparable to younger ones, perception is active and increased expectancy of the appearance of a relevant stimulus can increase the rate of visual information uptake. Copyright © 2018 Elsevier Inc. All rights reserved.
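
    The computational Theory of Visual Attention (TVA) underlying this whole-report paradigm models encoding as an exponential race into visual short-term memory. A minimal sketch of that race follows; the parameter values and function names are illustrative assumptions of ours (not estimates from this study), and the sketch ignores TVA's storage-capacity parameter K for brevity:

```python
import math

def p_encoded(v, exposure, t0):
    """Probability that one display element wins its exponential race into
    visual short-term memory: rate v (items/s), which only accrues once the
    exposure duration exceeds the perceptual threshold t0 (s)."""
    effective = max(exposure - t0, 0.0)
    return 1.0 - math.exp(-v * effective)

def expected_report(n_items, C, t0, exposure):
    """Expected number of reported items when total visual processing speed C
    (items/s) is divided evenly across n_items equally attended elements.
    Simplification: omits the VSTM capacity limit K of full TVA."""
    v = C / n_items
    return n_items * p_encoded(v, exposure, t0)

# Illustrative values: C = 60 items/s, t0 = 20 ms, 4-item display.
# Expected report is zero below t0, grows with exposure duration, and
# saturates near n_items.
for exposure in (0.01, 0.05, 0.10, 0.20):
    print(round(expected_report(4, 60.0, 0.02, exposure), 2))
```

    Under this sketch, the phasic-alerting benefit described above would correspond to a higher fitted C on cued trials relative to uncued trials.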

  11. The Effects of Spatial Endogenous Pre-cueing across Eccentricities

    PubMed Central

    Feng, Jing; Spence, Ian

    2017-01-01

    Frequently, we use expectations about likely locations of a target to guide the allocation of our attention. Despite the importance of this attentional process in everyday tasks, pre-cueing effects on attention, particularly endogenous pre-cueing effects, have been relatively little explored beyond an eccentricity of 20°. Given that the visual field has functional subdivisions, and that attentional processes can differ significantly among the foveal, perifoveal, and more peripheral areas, how endogenous pre-cues that carry spatial information about targets influence our allocation of attention across a large visual field (especially in the more peripheral areas) remains unclear. We present two experiments examining how the expectation of the location of the target shapes the distribution of attention across eccentricities in the visual field. We measured participants’ ability to pick out a target among distractors in the visual field after the presentation of a highly valid cue indicating the size of the area in which the target was likely to occur, or the likely direction of the target (left or right side of the display). Our first experiment showed that participants had a higher target detection rate with faster responses, particularly at eccentricities of 20° and 30°. There was also a marginal advantage of pre-cueing effects when trials of the same size cue were blocked compared to when trials were mixed. Experiment 2 demonstrated a higher target detection rate when the target occurred in the cued direction. This pre-cueing effect was greater at larger eccentricities and with a longer cue-target interval. Our findings on the endogenous pre-cueing effects across a large visual area were summarized using a simple model to assist in conceptualizing the modifications of the distribution of attention over the visual field. We discuss our findings in light of cognitive penetration of perception, and highlight the importance of examining attentional processes across a large area of the visual field. PMID:28638353

  13. Haptic Cues Used for Outdoor Wayfinding by Individuals with Visual Impairments

    ERIC Educational Resources Information Center

    Koutsoklenis, Athanasios; Papadopoulos, Konstantinos

    2014-01-01

    Introduction: The study presented here examines which haptic cues individuals with visual impairments use more frequently and determines which of these cues are deemed by these individuals to be the most important for way-finding in urban environments. It also investigates the ways in which these haptic cues are used by individuals with visual…

  14. Visual/motion cue mismatch in a coordinated roll maneuver

    NASA Technical Reports Server (NTRS)

    Shirachi, D. K.; Shirley, R. S.

    1981-01-01

    The effects of bandwidth differences between visual and motion cueing systems on pilot performance for a coordinated roll task were investigated. Acceptable visual and motion cue configurations were identified, and the effects of reduced motion cue scaling on pilot performance were studied to determine the scale reduction threshold at which pilot performance differed significantly from full-scale performance. It is concluded that: (1) the presence or absence of high-frequency error information in the visual and/or motion display systems significantly affects pilot performance; and (2) the attenuation of motion scaling, while other display dynamic characteristics are held constant, affects pilot performance.

  15. Modulation of Neuronal Responses by Exogenous Attention in Macaque Primary Visual Cortex.

    PubMed

    Wang, Feng; Chen, Minggui; Yan, Yin; Zhaoping, Li; Li, Wu

    2015-09-30

    Visual perception is influenced by attention deployed voluntarily or triggered involuntarily by salient stimuli. Modulation of visual cortical processing by voluntary or endogenous attention has been extensively studied, but much less is known about how involuntary or exogenous attention affects responses of visual cortical neurons. Using implanted microelectrode arrays, we examined the effects of exogenous attention on neuronal responses in the primary visual cortex (V1) of awake monkeys. A bright annular cue was flashed either around the receptive fields of recorded neurons or in the opposite visual field to capture attention. A subsequent grating stimulus probed the cue-induced effects. In a fixation task, when the cue-to-probe stimulus onset asynchrony (SOA) was <240 ms, the cue induced a transient increase of neuronal responses to the probe at the cued location during 40-100 ms after the onset of neuronal responses to the probe. This facilitation diminished and disappeared after repeated presentations of the same cue but recurred for a new cue of a different color. In another task to detect the probe, relative shortening of monkey's reaction times for the validly cued probe depended on the SOA in a way similar to the cue-induced V1 facilitation, and the behavioral and physiological cueing effects remained after repeated practice. Flashing two cues simultaneously in the two opposite visual fields weakened or diminished both the physiological and behavioral cueing effects. Our findings indicate that exogenous attention significantly modulates V1 responses and that the modulation strength depends on both novelty and task relevance of the stimulus. Significance statement: Visual attention can be involuntarily captured by a sudden appearance of a conspicuous object, allowing rapid reactions to unexpected events of significance. The current study discovered a correlate of this effect in monkey primary visual cortex. 
An abrupt, salient flash enhanced neuronal responses to, and shortened the animal's reaction time to, a subsequent visual probe stimulus at the same location. However, the enhancement of the neural responses diminished after repeated exposures to this flash if the animal was not required to react to the probe. Moreover, a second, simultaneous flash at another location weakened the neuronal and behavioral effects of the first one. These findings revealed, beyond the observations reported so far, the effects of exogenous attention in the brain. Copyright © 2015 the authors 0270-6474/15/3513419-11$15.00/0.

  16. Exposure to food cues moderates the indirect effect of reward sensitivity and external eating via implicit eating expectancies.

    PubMed

    Maxwell, Aimee L; Loxton, Natalie J; Hennegan, Julie M

    2017-04-01

    Previous research has suggested that the expectancy "eating is rewarding" is one pathway driving the relationship between trait reward sensitivity and externally-driven eating. The aim of the current study was to extend previous research by examining the conditions under which the indirect effect of reward sensitivity and external eating via this eating expectancy occurs. Using a conditional indirect effects approach we tested the moderating effect of exposure to food cues (e.g., images) relative to non-food cues on the association between reward sensitivity and external eating, via eating expectancies. Participants (N = 119, M = 18.67 years of age, SD = 2.40) were university women who completed a computerised food expectancies task (E-TASK) in which they were randomly assigned to either an appetitive food cue condition or non-food cue condition and then responded to a series of eating expectancy statements or self-description personality statements. Participants also completed self-report trait measures of reward sensitivity in addition to measures of eating expectancies (i.e., endorsement of the belief that eating is a rewarding experience). Results revealed higher reward sensitivity was associated with faster reaction times to the eating expectancies statement. This was moderated by cue-condition such that the association between reward sensitivity and faster reaction time was only found in the food cue condition. Faster endorsement of this belief (i.e., reaction time) was also associated with greater external eating. These results provide additional support for the proposal that individuals high in reward sensitivity form implicit associations with positive beliefs about eating when exposed to food cues. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Audio-Visual and Meaningful Semantic Context Enhancements in Older and Younger Adults.

    PubMed

    Smayda, Kirsten E; Van Engen, Kristin J; Maddox, W Todd; Chandrasekaran, Bharath

    2016-01-01

    Speech perception is critical to everyday life. Oftentimes noise can degrade a speech signal; however, because of the cues available to the listener, such as visual and semantic cues, noise rarely prevents conversations from continuing. The interaction of visual and semantic cues in aiding speech perception has been studied in young adults, but the extent to which these two cues interact for older adults has not been studied. To investigate the effect of visual and semantic cues on speech perception in older and younger adults, we recruited forty-five young adults (ages 18-35) and thirty-three older adults (ages 60-90) to participate in a speech perception task. Participants were presented with semantically meaningful and anomalous sentences in audio-only and audio-visual conditions. We hypothesized that young adults would outperform older adults across SNRs, modalities, and semantic contexts. In addition, we hypothesized that both young and older adults would receive a greater benefit from a semantically meaningful context in the audio-visual relative to audio-only modality. We predicted that young adults would receive greater visual benefit in semantically meaningful contexts relative to anomalous contexts. However, we predicted that older adults could receive a greater visual benefit in either semantically meaningful or anomalous contexts. Results suggested that in the most supportive context, that is, semantically meaningful sentences presented in the audiovisual modality, older adults performed similarly to young adults. In addition, both groups received the same amount of visual and meaningful benefit. Lastly, across groups, a semantically meaningful context provided more benefit in the audio-visual modality relative to the audio-only modality, and the presence of visual cues provided more benefit in semantically meaningful contexts relative to anomalous contexts. 
These results suggest that older adults can perceive speech as well as younger adults when both semantic and visual cues are available to the listener.

  19. Visual selective attention in amnestic mild cognitive impairment.

    PubMed

    McLaughlin, Paula M; Anderson, Nicole D; Rich, Jill B; Chertkow, Howard; Murtha, Susan J E

    2014-11-01

    Subtle deficits in visual selective attention have been found in amnestic mild cognitive impairment (aMCI). However, few studies have explored performance on visual search paradigms or the Simon task, which are known to be sensitive to disease severity in Alzheimer's patients. Furthermore, there is limited research investigating how deficiencies can be ameliorated with exogenous support (auditory cues). Sixteen individuals with aMCI and 14 control participants completed 3 experimental tasks that varied in demand and cue availability: visual search-alerting, visual search-orienting, and Simon task. Visual selective attention was influenced by aMCI, auditory cues, and task characteristics. Visual search abilities were relatively consistent across groups. The aMCI participants were impaired on the Simon task when working memory was required, but conflict resolution was similar to controls. Spatially informative orienting cues improved response times, whereas spatially neutral alerting cues did not influence performance. Finally, spatially informative auditory cues benefited the aMCI group more than controls in the visual search task, specifically at the largest array size where orienting demands were greatest. These findings suggest that individuals with aMCI have working memory deficits and subtle deficiencies in orienting attention and rely on exogenous information to guide attention. © The Author 2013. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  20. Preschoolers Benefit From Visually Salient Speech Cues

    PubMed Central

    Holt, Rachael Frush

    2015-01-01

    Purpose This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that vary in perceptual difficulty and task demands. The authors also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. Method Twelve adults and 27 typically developing 3- and 4-year-old children completed 3 audiovisual (AV) speech integration tasks: matching, discrimination, and recognition. The authors compared AV benefit for visually salient and less visually salient speech discrimination contrasts and assessed the visual saliency of consonant confusions in auditory-only and AV word recognition. Results Four-year-olds and adults demonstrated visual influence on all measures. Three-year-olds demonstrated visual influence on speech discrimination and recognition measures. All groups demonstrated greater AV benefit for the visually salient discrimination contrasts. AV recognition benefit in 4-year-olds and adults depended on the visual saliency of speech sounds. Conclusions Preschoolers can demonstrate AV speech integration. Their AV benefit results from efficient use of visually salient speech cues. Four-year-olds, but not 3-year-olds, used visual phonological knowledge to take advantage of visually salient speech cues, suggesting possible developmental differences in the mechanisms of AV benefit. PMID:25322336

  1. Opposite Effects of Visual Cueing During Writing-Like Movements of Different Amplitudes in Parkinson's Disease.

    PubMed

    Nackaerts, Evelien; Nieuwboer, Alice; Broeder, Sanne; Smits-Engelsman, Bouwien C M; Swinnen, Stephan P; Vandenberghe, Wim; Heremans, Elke

    2016-06-01

    Handwriting is often impaired in Parkinson's disease (PD). Several studies have shown that writing in PD benefits from the use of cues. However, this was typically studied with writing and drawing sizes that are usually not used in daily life. This study examines the effect of visual cueing on a prewriting task at small amplitudes (≤1.0 cm) in PD patients and healthy controls to better understand the working action of cueing for writing. A total of 15 PD patients and 15 healthy, age-matched controls performed a prewriting task at 0.6 cm and 1.0 cm in the presence and absence of visual cues (target lines). Writing amplitude, variability of amplitude, and speed were chosen as dependent variables, measured using a newly developed touch-sensitive tablet. Cueing led to immediate improvements in writing size, variability of writing size, and speed in both groups in the 1.0 cm condition. However, when writing at 0.6 cm with cues, a decrease in writing size was apparent in both groups (P < .001) and the difference in variability of amplitude between cued and uncued writing disappeared. In addition, the writing speed of controls decreased when the cue was present. Visual target lines of 1.0 cm improved the writing of sequential loops in contrast to lines spaced at 0.6 cm. These results illustrate that, unlike for gait, visual cueing for fine-motor tasks requires a differentiated approach, taking into account the possible increases of accuracy constraints imposed by cueing. © The Author(s) 2015.

  2. Analyzing the Role of Visual Cues in Developing Prediction-Making Skills of Third- and Ninth-Grade English Language Learners

    ERIC Educational Resources Information Center

    Campbell, Emily; Cuba, Melissa

    2015-01-01

    The goal of this action research is to increase student awareness of context clues, with an emphasis on student use of visual cues in making predictions. Visual cues in the classroom were used to differentiate according to the needs of student demographics (Herrera, Perez, & Escamilla, 2010). The purpose of this intervention was to improve…

  3. The Effects of Various Fidelity Factors on Simulated Helicopter Hover

    DTIC Science & Technology

    1981-01-01

    Only fragments of this record are recoverable. Topics include a visual display, auditory cues, and a ship motion model; the evaluation of visual, auditory, and motion cues for helicopter simulation (Parrish, Houck, and Martin, 1977); and motion cueing in which, because tilt should be supplied subliminally, a forward/aft translation must be used to cue the onset of acceleration.

  4. Learning from Instructional Animations: How Does Prior Knowledge Mediate the Effect of Visual Cues?

    ERIC Educational Resources Information Center

    Arslan-Ari, I.

    2018-01-01

    The purpose of this study was to investigate the effects of cueing and prior knowledge on learning and mental effort of students studying an animation with narration. This study employed a 2 (no cueing vs. visual cueing) × 2 (low vs. high prior knowledge) between-subjects factorial design. The results revealed a significant interaction effect…

  5. Anemonefishes rely on visual and chemical cues to correctly identify conspecifics

    NASA Astrophysics Data System (ADS)

    Johnston, Nicole K.; Dixson, Danielle L.

    2017-09-01

    Organisms rely on sensory cues to interpret their environment and make important life-history decisions. Accurate recognition is of particular importance in diverse reef environments. Most evidence on the use of sensory cues focuses on those used in predator avoidance or habitat recognition, with little information on their role in conspecific recognition. Yet conspecific recognition is essential for life-history decisions including settlement, mate choice, and dominance interactions. Using a sensory manipulated tank and a two-chamber choice flume, anemonefish conspecific response was measured in the presence and absence of chemical and/or visual cues. Experiments were then repeated in the presence or absence of two heterospecific species to evaluate whether a heterospecific fish altered the conspecific response. Anemonefishes responded to both the visual and chemical cues of conspecifics, but relied on the combination of the two cues to recognize conspecifics inside the sensory manipulated tank. These results contrast previous studies focusing on predator detection where anemonefishes were found to compensate for the loss of one sensory cue (chemical) by utilizing a second cue (visual). This lack of sensory compensation may impact the ability of anemonefishes to acclimate to changing reef environments in the future.

  6. Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs

    PubMed Central

    ten Oever, Sanne; Sack, Alexander T.; Wheat, Katherine L.; Bien, Nina; van Atteveldt, Nienke

    2013-01-01

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception. PMID:23805110

  7. Laserlight cues for gait freezing in Parkinson's disease: an open-label study.

    PubMed

    Donovan, S; Lim, C; Diaz, N; Browner, N; Rose, P; Sudarsky, L R; Tarsy, D; Fahn, S; Simon, D K

    2011-05-01

    Freezing of gait (FOG) and falls are major sources of disability for Parkinson's disease (PD) patients, and show limited responsiveness to medications. We assessed the efficacy of visual cues for overcoming FOG in an open-label study of 26 patients with PD. The change in the frequency of falls was a secondary outcome measure. Subjects underwent a 1-2 month baseline period of use of a cane or walker without visual cues, followed by 1 month using the same device with the laserlight visual cue. The laserlight visual cue was associated with a modest but significant mean reduction in FOG Questionnaire (FOGQ) scores of 1.25 ± 0.48 (p = 0.0152, two-tailed paired t-test), representing a 6.6% improvement compared to the mean baseline FOGQ score of 18.8. The mean reduction in fall frequency was 39.5 ± 9.3% with the laserlight visual cue among subjects experiencing at least one fall during the baseline and subsequent study periods (p = 0.002; two-tailed one-sample t-test with hypothesized mean of 0). Though some individual subjects may have benefited, the overall mean performance on the timed gait test (TGT) across all subjects did not significantly change. However, among the 4 subjects who underwent repeated testing of the TGT, one showed a 50% mean improvement in TGT performance with the laserlight visual cue (p = 0.005; two-tailed paired t-test). This open-label study provides evidence for modest efficacy of a laserlight visual cue in overcoming FOG and reducing falls in PD patients. Copyright © 2010 Elsevier Ltd. All rights reserved.
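The paired comparisons reported above can be sketched in code. The following is a minimal two-tailed paired t statistic; the FOGQ scores used here are hypothetical illustrations, not the study's data:

```python
from statistics import mean, stdev
from math import sqrt

def paired_t(pre, post):
    """Paired t statistic for pre- vs. post-treatment scores.

    Returns the mean difference and t; a two-tailed p-value would be
    read off the t distribution with n - 1 degrees of freedom.
    """
    diffs = [a - b for a, b in zip(pre, post)]
    n = len(diffs)
    md = mean(diffs)                 # mean pre-post difference
    se = stdev(diffs) / sqrt(n)      # standard error of the difference
    return md, md / se

# Hypothetical FOGQ scores: baseline vs. laserlight-cue period
pre = [20, 18, 19, 17, 21, 18]
post = [19, 17, 17, 16, 20, 17]
md, t = paired_t(pre, post)
```

A lower post score means less freezing, so a positive mean difference indicates improvement under the cue condition.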

  9. Gaze-contingent reinforcement learning reveals incentive value of social signals in young children and adults.

    PubMed

    Vernetti, Angélina; Smith, Tim J; Senju, Atsushi

    2017-03-15

    While numerous studies have demonstrated that infants and adults preferentially orient to social stimuli, it remains unclear what drives such preferential orienting. It has been suggested that the learned association between social cues and subsequent reward delivery might shape such social orienting. Using a novel, spontaneous index of reinforcement learning (a gaze-contingent reward-learning task), we investigated whether children's and adults' orienting towards social and non-social visual cues can be elicited by the association between participants' visual attention and a rewarding outcome. Critically, we assessed whether the engaging nature of the social cues influences the process of reinforcement learning. Both children and adults learned to orient more often to the visual cues associated with reward delivery, demonstrating that cue-reward association reinforced visual orienting. More importantly, when the reward-predictive cue was social and engaging, both children and adults learned the cue-reward association faster and more efficiently than when the reward-predictive cue was social but non-engaging. These new findings indicate that engaging social cues have a positive incentive value. This could possibly be because they usually coincide with positive outcomes in real life, which could partly drive the development of social orienting. © 2017 The Authors.

  10. Contextual cueing impairment in patients with age-related macular degeneration.

    PubMed

    Geringswald, Franziska; Herbik, Anne; Hoffmann, Michael B; Pollmann, Stefan

    2013-09-12

    Visual attention can be guided by past experience of regularities in our visual environment. In the contextual cueing paradigm, incidental learning of repeated distractor configurations speeds up search times compared to random search arrays. Concomitantly, fewer fixations and more direct scan paths indicate more efficient visual exploration in repeated search arrays. In previous work, we found that simulating a central scotoma in healthy observers eliminated this search facilitation. Here, we investigated contextual cueing in patients with age-related macular degeneration (AMD) who suffer from impaired foveal vision. AMD patients performed visual search using only their more severely impaired eye (n = 13) as well as under binocular viewing (n = 16). Normal-sighted controls developed a significant contextual cueing effect. In comparison, patients showed only a small nonsignificant advantage for repeated displays when searching with their worse eye. When searching binocularly, they profited from contextual cues, but still less than controls. Number of fixations and scan pattern ratios showed a comparable pattern as search times. Moreover, contextual cueing was significantly correlated with acuity in monocular search. Thus, foveal vision loss may lead to impaired guidance of attention by contextual memory cues.
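The contextual cueing effect described above is, at its simplest, the mean search-time advantage for repeated over random displays. A minimal sketch with made-up reaction times (illustrative only, not the study's data):

```python
def contextual_cueing_effect(rt_repeated, rt_random):
    """Mean search-time benefit (ms) for repeated over random displays.

    A positive value indicates contextual cueing: repeated distractor
    configurations are searched faster than novel ones.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rt_random) - mean(rt_repeated)

# Hypothetical per-condition mean RTs in ms
control = contextual_cueing_effect([820, 790, 805], [900, 880, 895])
amd_worse_eye = contextual_cueing_effect([1180, 1150], [1195, 1160])
```

On this logic, an intact cueing effect yields a clearly positive difference (as in the control group), while a small or near-zero difference corresponds to the reduced facilitation the patients showed.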

  11. Using multisensory cues to facilitate air traffic management.

    PubMed

    Ngo, Mary K; Pierce, Russell S; Spence, Charles

    2012-12-01

    In the present study, we sought to investigate whether auditory and tactile cuing could be used to facilitate a complex, real-world air traffic management scenario. Auditory and tactile cuing provides an effective means of improving both the speed and accuracy of participants' performance in a variety of laboratory-based visual target detection and identification tasks. A low-fidelity air traffic simulation task was used in which participants monitored and controlled aircraft. The participants had to ensure that the aircraft landed or exited at the correct altitude, speed, and direction and that they maintained a safe separation from all other aircraft and boundaries. The performance measures recorded included en route time, handoff delay, and conflict resolution delay (the performance measure of interest). In a baseline condition, the aircraft in conflict was highlighted in red (visual cue), and in the experimental conditions, this standard visual cue was accompanied by a simultaneously presented auditory, vibrotactile, or audiotactile cue. Participants responded significantly more rapidly, but no less accurately, to conflicts when presented with an additional auditory or audiotactile cue than with either a vibrotactile or visual cue alone. Auditory and audiotactile cues have the potential for improving operator performance by reducing the time it takes to detect and respond to potential visual target events. These results have important implications for the design and use of multisensory cues in air traffic management.

  12. The Development of Knowledge of an External Retrieval Cue Strategy.

    ERIC Educational Resources Information Center

    Ritter, Kenneth

    1978-01-01

    Investigated preschool and third grade children's metamnemonic knowledge that in order to serve as an efficient retrieval cue of the location of a hidden object, an external marker sign must differentiate it from other locations. (JMB)

  13. Visual speech segmentation: using facial cues to locate word boundaries in continuous speech

    PubMed Central

    Mitchel, Aaron D.; Weiss, Daniel J.

    2014-01-01

    Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contributions of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech and that learners can extract this information from talking faces. Thus, we created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative for identifying word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative to word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition. PMID:25018577

  14. Visual gate for brain-computer interfaces.

    PubMed

    Dias, N S; Jacinto, L R; Mendes, P M; Correia, J H

    2009-01-01

    Brain-computer interfaces (BCIs) based on event-related potentials (ERPs) have been successfully developed for applications like virtual spellers and navigation systems. This study tests the use of visual stimuli unbalanced in the subject's field of view to simultaneously cue mental imagery tasks (left vs. right hand movement) and detect subject attention. The responses to unbalanced cues were compared with the responses to balanced cues in terms of classification accuracy. Subject-specific ERP spatial filters were calculated for optimal group separation. The unbalanced cues appear to enhance early ERPs related to visuospatial processing of the cue, which improved the classification accuracy (as low as 6%) of ERPs in response to left vs. right cues soon (150-200 ms) after cue presentation. This work suggests that such a visual interface may be of interest in BCI applications as a gate mechanism for attention estimation and validation of control decisions.

  15. The benefit of forgetting.

    PubMed

    Williams, Melonie; Hong, Sang W; Kang, Min-Suk; Carlisle, Nancy B; Woodman, Geoffrey F

    2013-04-01

    Recent research using change-detection tasks has shown that a directed-forgetting cue, indicating that a subset of the information stored in memory can be forgotten, significantly benefits the other information stored in visual working memory. How do these directed-forgetting cues aid the memory representations that are retained? We addressed this question in the present study by using a recall paradigm to measure the nature of the retained memory representations. Our results demonstrated that a directed-forgetting cue leads to higher-fidelity representations of the remaining items and a lower probability of dropping these representations from memory. Next, we showed that this is made possible by the to-be-forgotten item being expelled from visual working memory following the cue, allowing maintenance mechanisms to be focused on only the items that remain in visual working memory. Thus, the present findings show that cues to forget benefit the remaining information in visual working memory by fundamentally improving their quality relative to conditions in which just as many items are encoded but no cue is provided.

  16. Tactical decisions for changeable cuttlefish camouflage: visual cues for choosing masquerade are relevant from a greater distance than visual cues used for background matching.

    PubMed

    Buresch, Kendra C; Ulmer, Kimberly M; Cramer, Corinne; McAnulty, Sarah; Davison, William; Mäthger, Lydia M; Hanlon, Roger T

    2015-10-01

    Cuttlefish use multiple camouflage tactics to evade their predators. Two common tactics are background matching (resembling the background to hinder detection) and masquerade (resembling an uninteresting or inanimate object to impede detection or recognition). We investigated how the distance and orientation of visual stimuli affected the choice of these two camouflage tactics. In the current experiments, cuttlefish were presented with three visual cues: a 2D horizontal floor, a 2D vertical wall, and a 3D object. Each was placed at several distances: directly beneath (within a circle whose diameter was one body length, BL); at zero BL (i.e., directly beside, but not beneath, the cuttlefish); at 1 BL; and at 2 BL. Cuttlefish continued to respond to 3D visual cues from a greater distance than to a horizontal or vertical stimulus. It appears that background matching is chosen when visual cues are relevant only in the immediate benthic surroundings. However, for masquerade, objects located multiple body lengths away remained relevant for choice of camouflage. © 2015 Marine Biological Laboratory.

  17. Neural Representation of Motion-In-Depth in Area MT

    PubMed Central

    Sanada, Takahisa M.

    2014-01-01

    Neural processing of 2D visual motion has been studied extensively, but relatively little is known about how visual cortical neurons represent visual motion trajectories that include a component toward or away from the observer (motion in depth). Psychophysical studies have demonstrated that humans perceive motion in depth based on both changes in binocular disparity over time (CD cue) and interocular velocity differences (IOVD cue). However, evidence for neurons that represent motion in depth has been limited, especially in primates, and it is unknown whether such neurons make use of CD or IOVD cues. We show that approximately one-half of neurons in macaque area MT are selective for the direction of motion in depth, and that this selectivity is driven primarily by IOVD cues, with a small contribution from the CD cue. Our results establish that area MT, a central hub of the primate visual motion processing system, contains a 3D representation of visual motion. PMID:25411481

  18. Value associations of irrelevant stimuli modify rapid visual orienting.

    PubMed

    Rutherford, Helena J V; O'Brien, Jennifer L; Raymond, Jane E

    2010-08-01

    In familiar environments, goal-directed visual behavior is often performed in the presence of objects with strong, but task-irrelevant, reward or punishment associations that are acquired through prior, unrelated experience. In a two-phase experiment, we asked whether such stimuli could affect speeded visual orienting in a classic visual orienting paradigm. First, participants learned to associate faces with monetary gains, losses, or no outcomes. These faces then served as brief, peripheral, uninformative cues in an explicitly unrewarded, unpunished, speeded, target localization task. Cues preceded targets by either 100 or 1,500 msec and appeared at either the same or a different location. Regardless of interval, reward-associated cues slowed responding at cued locations, as compared with equally familiar punishment-associated or no-value cues, and had no effect when targets were presented at uncued locations. This localized effect of reward-associated cues is consistent with adaptive models of inhibition of return and suggests rapid, low-level effects of motivation on visual processing.

  19. Determinants of structural choice in visually situated sentence production.

    PubMed

    Myachykov, Andriy; Garrod, Simon; Scheepers, Christoph

    2012-11-01

    Three experiments investigated how perceptual, structural, and lexical cues affect structural choices during English transitive sentence production. Participants described transitive events under combinations of visual cueing of attention (toward either agent or patient) and structural priming with and without semantic match between the notional verb in the prime and the target event. Speakers had a stronger preference for passive-voice sentences (1) when their attention was directed to the patient, (2) upon reading a passive-voice prime, and (3) when the verb in the prime matched the target event. The verb-match effect was the by-product of an interaction between visual cueing and verb match: the increase in the proportion of passive-voice responses with matching verbs was limited to the agent-cued condition. Persistence of visual cueing effects in the presence of both structural and lexical cues suggests a strong coupling between referent-directed visual attention and Subject assignment in a spoken sentence. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. Selective maintenance in visual working memory does not require sustained visual attention.

    PubMed

    Hollingworth, Andrew; Maxcey-Richard, Ashleigh M

    2013-08-01

    In four experiments, we tested whether sustained visual attention is required for the selective maintenance of objects in visual working memory (VWM). Participants performed a color change-detection task. During the retention interval, a valid cue indicated the item that would be tested. Change-detection performance was higher in the valid-cue condition than in a neutral-cue control condition. To probe the role of visual attention in the cuing effect, on half of the trials, a difficult search task was inserted after the cue, precluding sustained attention on the cued item. The addition of the search task produced no observable decrement in the magnitude of the cuing effect. In a complementary test, search efficiency was not impaired by simultaneously prioritizing an object for retention in VWM. The results demonstrate that selective maintenance in VWM can be dissociated from the locus of visual attention. © 2013 APA, all rights reserved.

  1. Interaction between visual and chemical cues in a Liolaemus lizard: a multimodal approach.

    PubMed

    Vicente, Natalin S; Halloy, Monique

    2017-12-01

    Multimodal communication involves the use of signals and cues across two or more sensory modalities. The genus Liolaemus (Iguania: Liolaemidae) offers a great potential for studies on the ecology and evolution of multimodal communication, including visual and chemical signals. In this study, we analyzed the response of male and female Liolaemus pacha to chemical, visual and combined (multimodal) stimuli. Using cue-isolation tests, we registered the number of tongue flicks and headbob displays from exposure to signals in each modality. Number of tongue flicks was greater when a chemical stimulus was presented alone than in the presence of visual or multimodal stimuli. In contrast, headbob displays were fewer in number with visual and chemical stimuli alone, but significantly higher in number when combined. Female signallers triggered significantly more tongue flicks than male signallers, suggesting that chemical cues are involved in sexual recognition. We did not find an inhibition between chemical and visual cues. On the contrary, we observed a dominance of the chemical modality, because when presented with visual stimuli, lizards also responded with more tongue flicks than headbob displays. The total response produced by multimodal stimuli was similar to that of the chemical stimuli alone, possibly suggesting non-redundancy. We discuss whether the visual component of a multimodal signal could attract attention at a distance, increasing the effectiveness of transmission and reception of the information in chemical cues. Copyright © 2017 Elsevier GmbH. All rights reserved.

  2. The Effect of Visual Cueing and Control Design on Children's Reading Achievement of Audio E-Books with Tablet Computers

    ERIC Educational Resources Information Center

    Wang, Pei-Yu; Huang, Chung-Kai

    2015-01-01

    This study aims to explore the impact of learner grade, visual cueing, and control design on children's reading achievement of audio e-books with tablet computers. This research was a three-way factorial design where the first factor was learner grade (grade four and six), the second factor was e-book visual cueing (word-based, line-based, and…

  3. The Role of Global and Local Visual Information during Gaze-Cued Orienting of Attention.

    PubMed

    Munsters, Nicolette M; van den Boomen, Carlijn; Hooge, Ignace T C; Kemner, Chantal

    2016-01-01

    Gaze direction is an important social communication tool. Global and local visual information are known to play specific roles in processing socially relevant information from a face. The current study investigated whether global visual information has a primary role during gaze-cued orienting of attention and, as such, may influence quality of interaction. Adults performed a gaze-cueing task in which a centrally presented face cued (validly or invalidly) the location of a peripheral target through a gaze shift. We measured brain activity (electroencephalography) towards the cue and target and behavioral responses (manual and saccadic reaction times) towards the target. The faces contained global (i.e. lower spatial frequencies), local (i.e. higher spatial frequencies), or a selection of both global and local (i.e. mid-band spatial frequencies) visual information. We found a gaze cue-validity effect (i.e. valid versus invalid), but no interaction effects with spatial frequency content. Furthermore, behavioral responses towards the target were slower in all cue conditions when lower spatial frequencies were not present in the gaze cue. These results suggest that whereas gaze-cued orienting of attention can be driven by both global and local visual information, global visual information determines the speed of behavioral responses towards other entities appearing in the surroundings of gaze cue stimuli.

  4. Neural substrates of smoking cue reactivity: A meta-analysis of fMRI studies

    PubMed Central

    Engelmann, Jeffrey M.; Versace, Francesco; Robinson, Jason D.; Minnix, Jennifer A.; Lam, Cho Y.; Cui, Yong; Brown, Victoria L.; Cinciripini, Paul M.

    2012-01-01

    Reactivity to smoking-related cues may be an important factor that precipitates relapse in smokers who are trying to quit. The neurobiology of smoking cue reactivity has been investigated in several fMRI studies. We combined the results of these studies using activation likelihood estimation, a meta-analytic technique for fMRI data. Results of the meta-analysis indicated that smoking cues reliably evoke larger fMRI responses than neutral cues in the extended visual system, precuneus, posterior cingulate gyrus, anterior cingulate gyrus, dorsal and medial prefrontal cortex, insula, and dorsal striatum. Subtraction meta-analyses revealed that parts of the extended visual system and dorsal prefrontal cortex are more reliably responsive to smoking cues in deprived smokers than in non-deprived smokers, and that short-duration cues presented in event-related designs produce larger responses in the extended visual system than long-duration cues presented in blocked designs. The areas that were found to be responsive to smoking cues agree with theories of the neurobiology of cue reactivity, with two exceptions. First, there was a reliable cue reactivity effect in the precuneus, which is not typically considered a brain region important to addiction. Second, we found no significant effect in the nucleus accumbens, an area that plays a critical role in addiction, but this effect may have been due to technical difficulties associated with measuring fMRI data in that region. The results of this meta-analysis suggest that the extended visual system should receive more attention in future studies of smoking cue reactivity. PMID:22206965
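Activation likelihood estimation, the meta-analytic technique named above, models each reported activation focus as a spatial probability distribution and combines the per-focus probabilities at each voxel as a probabilistic union, 1 − ∏(1 − p_i). The sketch below is a one-dimensional illustration with made-up coordinates and an arbitrary kernel width, not the actual 3D pipeline used in the meta-analysis:

```python
from math import exp, sqrt, pi

def gaussian(x, mu, sigma):
    """Gaussian density: probability that a focus at mu 'activates' x."""
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

def ale(x, foci, sigma=2.0):
    """ALE value at x: probabilistic union of per-focus probabilities,
    i.e. 1 - prod(1 - p_i) over all reported foci."""
    union_complement = 1.0
    for mu in foci:
        p = min(1.0, gaussian(x, mu, sigma))
        union_complement *= 1.0 - p
    return 1.0 - union_complement

# Illustrative 1D foci (not real stereotaxic coordinates)
near = ale(0.0, [0.0, 1.0])
far = ale(10.0, [0.0, 1.0])
```

Voxels where foci from many studies cluster accumulate high ALE values, which are then tested against a null distribution of randomly placed foci.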

  5. Early, but not late visual distractors affect movement synchronization to a temporal-spatial visual cue.

    PubMed

    Booth, Ashley J; Elliott, Mark T

    2015-01-01

    The ease of synchronizing movements to a rhythmic cue is dependent on the modality of the cue presentation: timing accuracy is much higher when synchronizing with discrete auditory rhythms than an equivalent visual stimulus presented through flashes. However, timing accuracy is improved if the visual cue presents spatial as well as temporal information (e.g., a dot following an oscillatory trajectory). Similarly, when synchronizing with an auditory target metronome in the presence of a second visual distracting metronome, the distraction is stronger when the visual cue contains spatial-temporal information rather than temporal only. The present study investigates individuals' ability to synchronize movements to a temporal-spatial visual cue in the presence of same-modality temporal-spatial distractors. Moreover, we investigated how increasing the number of distractor stimuli impacted on maintaining synchrony with the target cue. Participants made oscillatory vertical arm movements in time with a vertically oscillating white target dot centered on a large projection screen. The target dot was surrounded by 2, 8, or 14 distractor dots, which had an identical trajectory to the target but at a phase lead or lag of 0, 100, or 200 ms. We found participants' timing performance was only affected in the phase-lead conditions and when there were large numbers of distractors present (8 and 14). This asymmetry suggests participants still rely on salient events in the stimulus trajectory to synchronize movements. Subsequently, distractions occurring in the window of attention surrounding those events have the maximum impact on timing performance.

  6. Determining the Effectiveness of Visual Input Enhancement across Multiple Linguistic Cues

    ERIC Educational Resources Information Center

    Comeaux, Ian; McDonald, Janet L.

    2018-01-01

    Visual input enhancement (VIE) increases the salience of grammatical forms, potentially facilitating acquisition through attention mechanisms. Native English speakers were exposed to an artificial language containing four linguistic cues (verb agreement, case marking, animacy, word order), with morphological cues either unmarked, marked in the…

  7. NPSNET: Aural cues for virtual world immersion

    NASA Astrophysics Data System (ADS)

    Dahl, Leif A.

    1992-09-01

    NPSNET is a low-cost visual and aural simulation system designed and implemented at the Naval Postgraduate School. NPSNET is an example of a virtual world simulation environment that incorporates real-time aural cues through software-hardware interaction. In the current implementation of NPSNET, a graphics workstation functions in the sound-server role, sending and receiving networked sound message packets across a Local Area Network composed of multiple graphics workstations. The network messages contain sound file identification information, which the sound server transmits across an RS-422 communication line to a serial-to-MIDI (Musical Instrument Digital Interface) converter. The MIDI converter, in turn, relays the sound byte to a sampler, an electronic recording and playback device. The sampler maps the hexadecimal input to a specific note or stored sound and sends it as an audio signal to speakers via an amplifier. The realism of a simulation is improved by engaging multiple participant senses and removing external distractions. This thesis describes the incorporation of sound as aural cues and the enhancement they provide in the virtual simulation environment of NPSNET.
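The server flow described above (networked sound-ID packet, serial link, MIDI note to the sampler) can be sketched in miniature. Every packet field, the sound-to-note table, and the three-byte MIDI framing below are assumptions for illustration; the abstract does not specify NPSNET's actual wire formats:

```python
import struct

# Hypothetical table mapping network sound IDs to sampler MIDI notes
SOUND_TO_NOTE = {1: 60, 2: 62, 3: 64}

def unpack_sound_packet(packet: bytes) -> int:
    """Extract the sound ID from a (hypothetical) 4-byte network message."""
    (sound_id,) = struct.unpack("!I", packet)  # network byte order
    return sound_id

def midi_note_on(sound_id: int, velocity: int = 100, channel: int = 0) -> bytes:
    """Build the 3-byte MIDI note-on message the serial converter would
    forward to the sampler: status byte, note number, velocity."""
    note = SOUND_TO_NOTE[sound_id]
    status = 0x90 | channel  # 0x9n = note-on, channel n
    return bytes([status, note, velocity])

packet = struct.pack("!I", 2)                 # server receives sound ID 2
msg = midi_note_on(unpack_sound_packet(packet))
```

The real system would additionally handle note-off, packet validation, and the RS-422 serial framing, none of which are described in the abstract.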

  8. Preschoolers Benefit from Visually Salient Speech Cues

    ERIC Educational Resources Information Center

    Lalonde, Kaylah; Holt, Rachael Frush

    2015-01-01

    Purpose: This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that vary in perceptual difficulty and task demands. They also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. Method: Twelve adults and 27 typically developing 3-…

  9. Directional responding of C57BL/6J mice in the Morris water maze is influenced by visual and vestibular cues and is dependent upon the anterior thalamic nuclei

    PubMed Central

    Stackman, Robert W.; Lora, Joan C.; Williams, Sidney B.

    2012-01-01

    Recent findings indicate that rats navigate in spatial tasks such as the Morris water maze (MWM) using a local cue-based reference frame rather than a distal cue-based reference frame. Specifically, rats swim in a particular direction to a location relative to pool-based cues, rather than to an absolute location defined by room-based cues. Neural mechanisms supporting this bias in rodents for relative responding in spatial tasks are not yet understood. Anterior thalamic neurons discharge according to the current directional heading of the animal. The contribution of head direction (HD) cell activity to navigation has been difficult to elucidate. We found that male C57BL/6J mice trained for 4 or 7 days in the MWM exhibited an overwhelming preference for swimming in a direction relative to pool-based cues over absolute responding during a platform-less probe test. Rotation of extra-maze cues caused a corresponding rotation of the direction mice swam during probe test, suggesting that both pool- and room-based reference frames guide platform search. However, disorienting the mice before the probe test disturbed relative responding. Therefore, relative responding is guided by both internal and external cue sources. Selective inactivation of anterior thalamic nuclei (ATN) by microinfusion of muscimol or fluorophore-conjugated muscimol caused a near complete shift in preference from relative to absolute responding. Interestingly, inactivation of the dorsal CA1 region of the hippocampus did not affect relative responding. These data suggest that ATN, and HD cells therein, may guide relative responding in the MWM, a task considered by most to reflect hippocampal processing. PMID:22836256

  10. Investigating the Prospective Sense of Agency: Effects of Processing Fluency, Stimulus Ambiguity, and Response Conflict.

    PubMed

    Sidarus, Nura; Vuorre, Matti; Metcalfe, Janet; Haggard, Patrick

    2017-01-01

    How do we know how much control we have over our environment? The sense of agency refers to the feeling that we are in control of our actions, and that, through them, we can control our external environment. Thus, agency clearly involves matching intentions, actions, and outcomes. The present studies investigated the possibility that processes of action selection, i.e., choosing what action to make, contribute to the sense of agency. Since selection of action necessarily precedes execution of action, such effects must be prospective. In contrast, most literature on sense of agency has focussed on the retrospective computation of whether an outcome fits the action performed or intended. This hypothesis was tested in an ecologically rich, dynamic task based on a computer game. Across three experiments, we manipulated three different aspects of action selection processing: visual processing fluency, categorization ambiguity, and response conflict. Additionally, we measured the relative contributions of prospective, action selection-based cues, and retrospective, outcome-based cues to the sense of agency. Manipulations of action selection were orthogonally combined with discrepancy of visual feedback of action. Fluency of action selection had a small but reliable effect on the sense of agency. Additionally, as expected, sense of agency was strongly reduced when visual feedback was discrepant with the action performed. The effects of discrepant feedback were larger than the effects of action selection fluency, and sometimes suppressed them. The sense of agency is highly sensitive to disruptions of action-outcome relations. However, when motor control is successful, and action-outcome relations are as predicted, fluency or dysfluency of action selection provides an important prospective cue to the sense of agency.

  12. Smelling directions: Olfaction modulates ambiguous visual motion perception

    PubMed Central

    Kuang, Shenbing; Zhang, Tao

    2014-01-01

    The sense of smell is often accompanied by simultaneous visual sensations. Previous studies have documented enhanced olfactory performance in the concurrent presence of congruent color- or shape-related visual cues, and facilitated visual object perception when congruent smells are simultaneously present. These visual object-olfaction interactions suggest the existence of couplings between the olfactory pathway and the visual ventral processing stream. However, it is not known whether olfaction can modulate visual motion perception, a function related to the visual dorsal stream. We tested this possibility by examining the influence of olfactory cues on the perception of ambiguous visual motion signals. We showed that, after introducing an association between motion directions and olfactory cues, olfaction could indeed bias ambiguous visual motion perception. Our result that olfaction modulates visual motion processing adds to the current knowledge of cross-modal interactions and implies a possible functional linkage between the olfactory system and the visual dorsal pathway. PMID:25052162

  13. Age-related changes in human posture control: Sensory organization tests

    NASA Technical Reports Server (NTRS)

    Peterka, R. J.; Black, F. O.

    1989-01-01

    Postural control was measured in 214 human subjects ranging in age from 7 to 81 years. Sensory organization tests measured the magnitude of anterior-posterior body sway during six 21 s trials in which visual and somatosensory orientation cues were altered (by rotating the visual surround and support surface in proportion to the subject's sway) or vision eliminated (eyes closed) in various combinations. No age-related increase in postural sway was found for subjects standing on a fixed support surface with eyes open or closed. However, age-related increases in sway were found for conditions involving altered visual or somatosensory cues. Subjects older than about 55 years showed the largest sway increases. Subjects younger than about 15 years were also sensitive to alteration of sensory cues. On average, the older subjects were more affected by altered visual cues whereas younger subjects had more difficulty with altered somatosensory cues.

  14. Functional interplay of top-down attention with affective codes during visual short-term memory maintenance.

    PubMed

    Kuo, Bo-Cheng; Lin, Szu-Hung; Yeh, Yei-Yu

    2018-06-01

    Visual short-term memory (VSTM) allows individuals to briefly maintain information over time for guiding behaviours. Because the contents of VSTM can be neutral or emotional, top-down influence in VSTM may vary with the affective codes of maintained representations. Here we investigated the neural mechanisms underlying the functional interplay of top-down attention with affective codes in VSTM using functional magnetic resonance imaging. Participants were instructed to remember both threatening and neutral objects in a cued VSTM task. Retrospective cues (retro-cues) were presented to direct attention to the hemifield of a threatening object (i.e., cue-to-threat) or a neutral object (i.e., cue-to-neutral) during VSTM maintenance. We showed stronger activity in the ventral occipitotemporal cortex and amygdala for attending threatening relative to neutral representations. Using multivoxel pattern analysis, we found better classification performance for cue-to-threat versus cue-to-neutral objects in early visual areas and in the amygdala. Importantly, retro-cues modulated the strength of functional connectivity between the frontoparietal and early visual areas. Activity in the frontoparietal areas became strongly correlated with the activity in V3a-V4 coding the threatening representations instructed to be relevant for the task. Together, these findings provide the first demonstration of top-down modulation of activation patterns in early visual areas and functional connectivity between the frontoparietal network and early visual areas for regulating threatening representations during VSTM maintenance.

  15. Cognitive control over visual food cue saliency is greater in reduced-overweight/obese but not in weight relapsed women: An EEG study.

    PubMed

    Hume, David John; Howells, Fleur Margaret; Karpul, David; Rauch, H G Laurie; Kroff, Jacolene; Lambert, Estelle Victoria

    2015-12-01

    Poor weight management may relate to a reduction in neurobehavioural control over food intake and heightened reactivity of the brain's neural reward pathways. Here we explore the neurophysiology of food-related visual cue processing in weight-reduced and weight-relapsed women by assessing differences in cortical arousal and attentional processing using a food-Stroop paradigm. Fifty-one women were recruited into 4 groups: reduced-weight participants (RED, n=14) compared to BMI-matched low-weight controls (LW-CTL, n=18); and weight-relapsed participants (REL, n=10) compared to BMI-matched high-weight controls (HW-CTL, n=9). Eating behaviour and body image questionnaires were completed. Two Stroop tasks (one containing food images, the other containing neutral images) were completed while electroencephalography (EEG) was recorded. Differences in cortical arousal were found in RED versus LW-CTL women, and were seen during food task execution only. Compared to their controls, RED women exhibited lower relative delta band power (p=0.01) and higher relative beta band power (p=0.01) over the right frontal cortex (F4). Within the RED group, delta band oscillations correlated positively with self-reported habitual fat intake and with body shape dissatisfaction. As compared to women matched for phenotype but with no history of weight reduction, reduced-overweight/obese women show increased neurobehavioural control over external food cues and the inhibition of reward-orientated feeding responses. Insight into these self-regulatory mechanisms which attenuate food cue saliency may aid in the development of cognitive remediation therapies which facilitate long-term weight loss.
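    The relative band power measure reported above can be illustrated with a short sketch. This is a simplified single-periodogram estimate in Python/NumPy; the function name, the synthetic 10 Hz test signal, and the exact band edges are our assumptions, and the study's actual EEG pipeline (referencing, artifact rejection, tapered spectral estimation) is not reproduced here.

```python
import numpy as np

def relative_band_power(signal, fs, band, total=(0.5, 30.0)):
    """Fraction of spectral power in `band` (Hz) relative to `total` (Hz).

    Illustrative periodogram estimate (assumed band edges, no tapering).
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    in_band = (freqs >= band[0]) & (freqs < band[1])
    in_total = (freqs >= total[0]) & (freqs < total[1])
    return power[in_band].sum() / power[in_total].sum()

# Example: a synthetic 10 Hz (alpha-range) oscillation sampled at 250 Hz.
fs = 250
t = np.arange(0, 4, 1.0 / fs)
x = np.sin(2 * np.pi * 10 * t)

delta = relative_band_power(x, fs, band=(0.5, 4.0))   # delta: 0.5-4 Hz
alpha = relative_band_power(x, fs, band=(8.0, 13.0))  # alpha: 8-13 Hz
beta = relative_band_power(x, fs, band=(13.0, 30.0))  # beta: 13-30 Hz
```

    With the pure 10 Hz test signal, nearly all power falls in the alpha band and the delta and beta fractions are near zero; on real EEG, fractions like these are what the group comparisons above operate on.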

  16. The role of visuohaptic experience in visually perceived depth.

    PubMed

    Ho, Yun-Xian; Serwe, Sascha; Trommershäuser, Julia; Maloney, Laurence T; Landy, Michael S

    2009-06-01

    Berkeley suggested that "touch educates vision," that is, haptic input may be used to calibrate visual cues to improve visual estimation of properties of the world. Here, we test whether haptic input may be used to "miseducate" vision, causing observers to rely more heavily on misleading visual cues. Human subjects compared the depth of two cylindrical bumps illuminated by light sources located at different positions relative to the surface. As in previous work using judgments of surface roughness, we find that observers judge bumps to have greater depth when the light source is located eccentric to the surface normal (i.e., when shadows are more salient). Following several sessions of visual judgments of depth, subjects then underwent visuohaptic training in which haptic feedback was artificially correlated with the "pseudocue" of shadow size and artificially decorrelated with disparity and texture. Although there were large individual differences, almost all observers demonstrated integration of haptic cues during visuohaptic training. For some observers, subsequent visual judgments of bump depth were unaffected by the training. However, for 5 of 12 observers, training significantly increased the weight given to pseudocues, causing subsequent visual estimates of shape to be less veridical. We conclude that haptic information can be used to reweight visual cues, putting more weight on misleading pseudocues, even when more trustworthy visual cues are available in the scene.
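    The reweighting described above is usually framed in terms of reliability-weighted (maximum-likelihood) cue combination, which can be sketched in a few lines. The depth estimates and variances below are hypothetical illustrations, not values from the study, which measured cue weights empirically.

```python
# Reliability-weighted cue combination: each cue's weight is its inverse
# variance, normalized across cues. "Training" that makes a pseudocue seem
# more reliable raises its weight and pulls the combined estimate toward it.

def mle_weights(variances):
    """Inverse-variance weights, normalized to sum to 1."""
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    return [r / total for r in reliabilities]

def combined_estimate(estimates, variances):
    """Weighted average of per-cue estimates."""
    return sum(w * e for w, e in zip(mle_weights(variances), estimates))

# Hypothetical depth estimates from disparity, texture, and shadow size.
# Before training: the shadow pseudocue is treated as noisy (variance 4).
before = combined_estimate([10.0, 10.0, 14.0], [1.0, 1.0, 4.0])
# After visuohaptic training correlates touch with the pseudocue, its
# effective reliability rises (modeled here as variance 1), so the
# misleading cue pulls the percept further from the trustworthy cues.
after = combined_estimate([10.0, 10.0, 14.0], [1.0, 1.0, 1.0])
```

    The sketch shows the qualitative effect the abstract describes: increasing the weight on a misleading pseudocue makes the combined visual estimate less veridical.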

  17. Empirical comparison of a fixed-base and a moving-base simulation of a helicopter engaged in visually conducted slalom runs

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Houck, J. A.; Martin, D. J., Jr.

    1977-01-01

    Combined visual, motion, and aural cues for a helicopter engaged in visually conducted slalom runs at low altitude were studied. The evaluation of the visual and aural cues was subjective, whereas the motion cues were evaluated both subjectively and objectively. Subjective and objective results coincided in the area of control activity. Generally, less control activity was present under motion conditions than under fixed-base conditions, a difference the pilots subjectively attributed to the feeling of a machine's (helicopter's) realistic limitations conveyed by the addition of motion cues. The objective data also revealed that the slalom runs were conducted at significantly higher altitudes under motion conditions than under fixed-base conditions.

  18. Social Vision: Functional Forecasting and the Integration of Compound Social Cues

    PubMed Central

    Adams, Reginald B.; Kveraga, Kestutis

    2017-01-01

    For decades the study of social perception was largely compartmentalized by type of social cue: race, gender, emotion, eye gaze, body language, facial expression etc. This was partly due to good scientific practice (e.g., controlling for extraneous variability), and partly due to assumptions that each type of social cue was functionally distinct from others. Herein, we present a functional forecast approach to understanding compound social cue processing that emphasizes the importance of shared social affordances across various cues (see too Adams, Franklin, Nelson, & Stevenson, 2010; Adams & Nelson, 2011; Weisbuch & Adams, 2012). We review the traditional theories of emotion and face processing that argued for dissociable and noninteracting pathways (e.g., for specific emotional expressions, gaze, identity cues), as well as more recent evidence for combinatorial processing of social cues. We argue here that early, and presumably reflexive, visual integration of such cues is necessary for adaptive behavioral responding to others. In support of this claim, we review contemporary work that reveals a flexible visual system, one that readily incorporates meaningful contextual influences in even nonsocial visual processing, thereby establishing the functional and neuroanatomical bases necessary for compound social cue integration. Finally, we explicate three likely mechanisms driving such integration. Together, this work implicates a role for cognitive penetrability in visual perceptual abilities that have often been (and in some cases still are) ascribed to direct encapsulated perceptual processes. PMID:29242738

  19. Re-Design and Beta Testing of the Man-Machine Integration Design and Analysis System: MIDAS

    NASA Technical Reports Server (NTRS)

    Shively, R. Jay; Rutkowski, Michael (Technical Monitor)

    1999-01-01

    The Man-machine Integration Design and Analysis System (MIDAS) is a human factors design and analysis system that combines human cognitive models with 3D CAD models and rapid prototyping and simulation techniques. MIDAS allows designers to ask 'what if' types of questions early in concept exploration and development, prior to actual hardware development. The system outputs predictions of operator workload, situational awareness, and system performance, as well as graphical visualization of the cockpit designs interacting with models of the human in a mission scenario. Recently, MIDAS was re-designed to enhance functionality and usability. The goals driving the redesign included more efficient processing, a GUI interface, advances in the memory structures, and the implementation of external vision models and audition. These changes were detailed in an earlier paper. Two Beta test sites with diverse applications have been chosen. One Beta test site is investigating the development of a new airframe and its interaction with the air traffic management system. The second Beta test effort will investigate 3D auditory cueing in conjunction with traditional visual cueing strategies, including panel-mounted and head-up displays. The progress and lessons learned on each of these projects will be discussed.

  20. Are face representations depth cue invariant?

    PubMed

    Dehmoobadsharifabadi, Armita; Farivar, Reza

    2016-06-01

    The visual system can process three-dimensional depth cues defining surfaces of objects, but it is unclear whether such information contributes to complex object recognition, including face recognition. The processing of different depth cues involves both dorsal and ventral visual pathways. We investigated whether facial surfaces defined by individual depth cues resulted in meaningful face representations-representations that maintain the relationship between the population of faces as defined in a multidimensional face space. We measured face identity aftereffects for facial surfaces defined by individual depth cues (Experiments 1 and 2) and tested whether the aftereffect transfers across depth cues (Experiments 3 and 4). Facial surfaces and their morphs to the average face were defined purely by one of shading, texture, motion, or binocular disparity. We obtained identification thresholds for matched (matched identity between adapting and test stimuli), non-matched (non-matched identity between adapting and test stimuli), and no-adaptation (showing only the test stimuli) conditions for each cue and across different depth cues. We found a robust face identity aftereffect in both experiments. Our results suggest that depth cues do contribute to forming meaningful face representations that are depth cue invariant. Depth cue invariance would require integration of information across different areas and different pathways for object recognition, and this in turn has important implications for cortical models of visual object recognition.

  1. Are multiple visual short-term memory storages necessary to explain the retro-cue effect?

    PubMed

    Makovski, Tal

    2012-06-01

    Recent research has shown that change detection performance is enhanced when, during the retention interval, attention is cued to the location of the upcoming test item. This retro-cue advantage has led some researchers to suggest that visual short-term memory (VSTM) is divided into a durable, limited-capacity storage and a more fragile, high-capacity storage. Consequently, performance is poor on the no-cue trials because fragile VSTM is overwritten by the test display and only durable VSTM is accessible under these conditions. In contrast, performance is improved in the retro-cue condition because attention keeps fragile VSTM accessible. The aim of the present study was to test the assumptions underlying this two-storage account. Participants were asked to encode an array of colors for a change detection task involving no-cue and retro-cue trials. A retro-cue advantage was found even when the cue was presented after a visual (Experiment 1) or a central (Experiment 2) interference. Furthermore, the magnitude of the interference was comparable between the no-cue and retro-cue trials. These data undermine the main empirical support for the two-storage account and suggest that the presence of a retro-cue benefit cannot be used to differentiate between different VSTM storages.

  2. Static and Motion-Based Visual Features Used by Airport Tower Controllers: Some Implications for the Design of Remote or Virtual Towers

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.; Liston, Dorion B.

    2011-01-01

    Visual motion and other visual cues are used by tower controllers to provide important support for their control tasks at and near airports. These cues are particularly important for anticipated separation. Some of them, which we call visual features, have been identified from structured interviews and discussions with 24 active air traffic controllers or supervisors. The visual information that these features provide has been analyzed with respect to possible ways it could be presented at a remote tower that does not allow a direct view of the airport. Two types of remote towers are possible. One could be based on a plan-view, map-like computer-generated display of the airport and its immediate surroundings. An alternative would present a composite perspective view of the airport and its surroundings, possibly provided by an array of radially mounted cameras positioned at the airport in lieu of a tower. An initial, more detailed analysis of one of the specific landing cues identified by the controllers, landing deceleration, is provided as a basis for evaluating how controllers might detect and use it. Understanding other such cues will help identify the information that may be degraded or lost in a remote or virtual tower not located at the airport. Some initial suggestions for how some of the lost visual information might be presented in displays are mentioned. Many of the cues considered involve visual motion, though some important static cues are also discussed.

  3. Heuristics of Reasoning and Analogy in Children's Visual Perspective Taking.

    ERIC Educational Resources Information Center

    Yaniv, Ilan; Shatz, Marilyn

    1990-01-01

    In three experiments, children of three through six years of age were generally better able to reproduce a perceiver's perspective if a visual cue in the perceiver's line of sight was salient. Children had greater difficulty when the task hinged on attending to configural cues. Availability of distinctive cues affixed to objects facilitated…

  4. Enhancing Interactive Tutorial Effectiveness through Visual Cueing

    ERIC Educational Resources Information Center

    Jamet, Eric; Fernandez, Jonathan

    2016-01-01

    The present study investigated whether learning how to use a web service with an interactive tutorial can be enhanced by cueing. We expected the attentional guidance provided by visual cues to facilitate the selection of information in static screen displays that corresponded to spoken explanations. Unlike most previous studies in this area, we…

  5. The Influence of Alertness on Spatial and Nonspatial Components of Visual Attention

    ERIC Educational Resources Information Center

    Matthias, Ellen; Bublak, Peter; Muller, Hermann J.; Schneider, Werner X.; Krummenacher, Joseph; Finke, Kathrin

    2010-01-01

    Three experiments investigated whether spatial and nonspatial components of visual attention would be influenced by changes in (healthy, young) subjects' level of alertness and whether such effects on separable components would occur independently of each other. The experiments used a no-cue/alerting-cue design with varying cue-target stimulus…

  6. Multisensory Cues Capture Spatial Attention Regardless of Perceptual Load

    ERIC Educational Resources Information Center

    Santangelo, Valerio; Spence, Charles

    2007-01-01

    We compared the ability of auditory, visual, and audiovisual (bimodal) exogenous cues to capture visuo-spatial attention under conditions of no load versus high perceptual load. Participants had to discriminate the elevation (up vs. down) of visual targets preceded by either unimodal or bimodal cues under conditions of high perceptual load (in…

  7. Learning to Match Auditory and Visual Speech Cues: Social Influences on Acquisition of Phonological Categories

    ERIC Educational Resources Information Center

    Altvater-Mackensen, Nicole; Grossmann, Tobias

    2015-01-01

    Infants' language exposure largely involves face-to-face interactions providing acoustic and visual speech cues but also social cues that might foster language learning. Yet, both audiovisual speech information and social information have so far received little attention in research on infants' early language development. Using a preferential…

  8. When they listen and when they watch: Pianists’ use of nonverbal audio and visual cues during duet performance

    PubMed Central

    Goebl, Werner

    2015-01-01

    Nonverbal auditory and visual communication helps ensemble musicians predict each other’s intentions and coordinate their actions. When structural characteristics of the music make predicting co-performers’ intentions difficult (e.g., following long pauses or during ritardandi), reliance on incoming auditory and visual signals may change. This study tested whether attention to visual cues during piano–piano and piano–violin duet performance increases in such situations. Pianists performed the secondo part to three duets, synchronizing with recordings of violinists or pianists playing the primo parts. Secondos’ access to incoming audio and visual signals and to their own auditory feedback was manipulated. Synchronization was most successful when primo audio was available, deteriorating when primo audio was removed and only cues from primo visual signals were available. Visual cues were used effectively following long pauses in the music, however, even in the absence of primo audio. Synchronization was unaffected by the removal of secondos’ own auditory feedback. Differences were observed in how successfully piano–piano and piano–violin duos synchronized, but these effects of instrument pairing were not consistent across pieces. Pianists’ success at synchronizing with violinists and other pianists is likely moderated by piece characteristics and individual differences in the clarity of cueing gestures used. PMID:26279610

  9. Using auditory-visual speech to probe the basis of noise-impaired consonant-vowel perception in dyslexia and auditory neuropathy

    NASA Astrophysics Data System (ADS)

    Ramirez, Joshua; Mann, Virginia

    2005-08-01

    Both dyslexics and auditory neuropathy (AN) subjects show inferior consonant-vowel (CV) perception in noise, relative to controls. To better understand these impairments, natural acoustic speech stimuli that were masked in speech-shaped noise at various intensities were presented to dyslexic, AN, and control subjects either in isolation or accompanied by visual articulatory cues. AN subjects were expected to benefit from the pairing of visual articulatory cues and auditory CV stimuli, provided that their speech perception impairment reflects a relatively peripheral auditory disorder. Assuming that dyslexia reflects a general impairment of speech processing rather than a disorder of audition, dyslexics were not expected to similarly benefit from an introduction of visual articulatory cues. The results revealed an increased effect of noise masking on the perception of isolated acoustic stimuli by both dyslexic and AN subjects. More importantly, dyslexics showed less effective use of visual articulatory cues in identifying masked speech stimuli and lower visual baseline performance relative to AN subjects and controls. Last, a significant positive correlation was found between reading ability and the ameliorating effect of visual articulatory cues on speech perception in noise. These results suggest that some reading impairments may stem from a central deficit of speech processing.

  10. Task demands determine comparison strategy in whole probe change detection.

    PubMed

    Udale, Rob; Farrell, Simon; Kent, Chris

    2018-05-01

    Detecting a change in our visual world requires a process that compares the external environment (test display) with the contents of memory (study display). We addressed the question of whether people strategically adapt the comparison process in response to different decision loads. Study displays of 3 colored items were presented, followed by 'whole-display' probes containing 3 colored shapes. Participants were asked to decide whether any probed items contained a new feature. In Experiments 1-4, irrelevant changes to the probed items' locations or feature bindings influenced memory performance, suggesting that participants employed a comparison process that relied on spatial locations. This finding occurred irrespective of whether participants were asked to decide about the whole display, or only a single cued item within the display. In Experiment 5, when the base-rate of changes in the nonprobed items increased (increasing the incentive to use the cue effectively), participants were not influenced by irrelevant changes in location or feature bindings. In addition, we observed individual differences in the use of spatial cues. These results suggest that participants can flexibly switch between spatial and nonspatial comparison strategies, depending on interactions between individual differences and task demand factors. These findings have implications for models of visual working memory that assume that the comparison between study and test obligatorily relies on accessing visual features via their binding to location.
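    The two comparison strategies contrasted above can be caricatured in a few lines. This is our hypothetical sketch, not the authors' model: a spatial strategy accesses features through their location bindings, so a task-irrelevant location swap registers as a change, while a nonspatial strategy compares unordered feature sets and is immune to such swaps.

```python
# Spatial vs. nonspatial comparison of a study display with a test probe.
# Displays are modeled as dicts mapping location -> colour (hypothetical).

def spatial_change(study, test):
    """Change detected if any probed location's colour differs from the
    colour stored at that location (comparison via spatial bindings)."""
    return any(study.get(loc) != colour for loc, colour in test.items())

def nonspatial_change(study, test):
    """Change detected if the multiset of colours differs, ignoring
    where each colour appears (location-free comparison)."""
    return sorted(study.values()) != sorted(test.values())

study = {(0, 0): "red", (1, 0): "blue", (2, 0): "green"}
# Irrelevant change: the same colours, but two swap locations.
swapped = {(0, 0): "blue", (1, 0): "red", (2, 0): "green"}
```

    On the swapped probe the spatial strategy reports a change while the nonspatial strategy correctly reports that no new feature is present, mirroring why irrelevant location changes hurt performance only when a spatial comparison is used.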

  11. An Eye Tracking Comparison of External Pointing Cues and Internal Continuous Cues in Learning with Complex Animations

    ERIC Educational Resources Information Center

    Boucheix, Jean-Michel; Lowe, Richard K.

    2010-01-01

    Two experiments used eye tracking to investigate a novel cueing approach for directing learner attention to low salience, high relevance aspects of a complex animation. In the first experiment, comprehension of a piano mechanism animation containing spreading-colour cues was compared with comprehension obtained with arrow cues or no cues. Eye…

  12. Forgotten but Not Gone: Retro-Cue Costs and Benefits in a Double-Cueing Paradigm Suggest Multiple States in Visual Short-Term Memory

    ERIC Educational Resources Information Center

    van Moorselaar, Dirk; Olivers, Christian N. L.; Theeuwes, Jan; Lamme, Victor A. F.; Sligte, Ilja G.

    2015-01-01

    Visual short-term memory (VSTM) performance is enhanced when the to-be-tested item is cued after encoding. This so-called retro-cue benefit is typically accompanied by a cost for the noncued items, suggesting that information is lost from VSTM upon presentation of a retrospective cue. Here we assessed whether noncued items can be restored to VSTM…

  13. Task-relevant information is prioritized in spatiotemporal contextual cueing.

    PubMed

    Higuchi, Yoko; Ueda, Yoshiyuki; Ogawa, Hirokazu; Saiki, Jun

    2016-11-01

    Implicit learning of visual contexts facilitates search performance, a phenomenon known as contextual cueing; however, little is known about contextual cueing in situations in which multidimensional regularities exist simultaneously. In everyday vision, different kinds of information, such as object identity and location, appear simultaneously and interact with each other. We tested the hypothesis that, in contextual cueing, when multiple regularities are present, the regularities that are most relevant to our behavioral goals would be prioritized. Previous studies of contextual cueing have commonly used the visual search paradigm. However, this paradigm is not suitable for directing participants' attention to a particular regularity. We therefore developed a new paradigm, the "spatiotemporal contextual cueing paradigm," and manipulated task-relevant and task-irrelevant regularities. In four experiments, we demonstrated that task-relevant regularities were more responsible for search facilitation than task-irrelevant regularities. This finding suggests that our visual behavior is focused on regularities that are relevant to our current goal.

  14. A model for the pilot's use of motion cues in roll-axis tracking tasks

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Junker, A. M.

    1977-01-01

    Simulated target-following and disturbance-regulation tasks were explored with subjects using visual-only and combined visual and motion cues. The effects of motion cues on task performance and pilot response behavior were appreciably different for the two task configurations and were consistent with data reported in earlier studies for similar task configurations. The optimal-control model for pilot/vehicle systems provided a task-independent framework for accounting for the pilot's use of motion cues. Specifically, the availability of motion cues was modeled by augmenting the set of perceptual variables to include position, rate, acceleration, and acceleration-rate of the motion simulator, and results were consistent with the hypothesis of attention-sharing between visual and motion variables. This straightforward informational model allowed accurate model predictions of the effects of motion cues on a variety of response measures for both the target-following and disturbance-regulation tasks.

  15. Selective Maintenance in Visual Working Memory Does Not Require Sustained Visual Attention

    PubMed Central

    Hollingworth, Andrew; Maxcey-Richard, Ashleigh M.

    2012-01-01

    In four experiments, we tested whether sustained visual attention is required for the selective maintenance of objects in visual working memory (VWM). Participants performed a color change-detection task. During the retention interval, a valid cue indicated the item that would be tested. Change detection performance was higher in the valid-cue condition than in a neutral-cue control condition. To probe the role of visual attention in the cuing effect, on half of the trials, a difficult search task was inserted after the cue, precluding sustained attention on the cued item. The addition of the search task produced no observable decrement in the magnitude of the cuing effect. In a complementary test, search efficiency was not impaired by simultaneously prioritizing an object for retention in VWM. The results demonstrate that selective maintenance in VWM can be dissociated from the locus of visual attention. PMID:23067118

  16. Peripheral Visual Cues: Their Fate in Processing and Effects on Attention and Temporal-Order Perception.

    PubMed

    Tünnermann, Jan; Scharlau, Ingrid

    2016-01-01

    Peripheral visual cues lead to large shifts in psychometric distributions of temporal-order judgments. In one view, such shifts are attributed to attention speeding up processing of the cued stimulus, so-called prior entry. However, sometimes these shifts are so large that it is unlikely that they are caused by attention alone. Here we tested the prevalent alternative explanation that the cue is sometimes confused with the target on a perceptual level, bolstering the shift of the psychometric function. We applied a novel model of cued temporal-order judgments, derived from Bundesen's Theory of Visual Attention. We found that cue-target confusions indeed contribute to shifting psychometric functions. However, cue-induced changes in the processing rates of the target stimuli play an important role, too. At smaller cueing intervals, the cue increased the processing speed of the target. At larger intervals, inhibition of return was predominant. Earlier studies of cued temporal-order judgments were insensitive to these effects because in psychometric distributions they are concealed by the conjoint effects of cue-target confusions and processing rate changes.

  17. Dissociating emotion-induced blindness and hypervision.

    PubMed

    Bocanegra, Bruno R; Zeelenberg, René

    2009-12-01

    Previous findings suggest that emotional stimuli sometimes improve (emotion-induced hypervision) and sometimes impair (emotion-induced blindness) the visual perception of subsequent neutral stimuli. We hypothesized that these differential carryover effects might be due to 2 distinct emotional influences in visual processing. On the one hand, emotional stimuli trigger a general enhancement in the efficiency of visual processing that can carry over onto other stimuli. On the other hand, emotional stimuli benefit from a stimulus-specific enhancement in later attentional processing at the expense of competing visual stimuli. We investigated whether detrimental (blindness) and beneficial (hypervision) carryover effects of emotion in perception can be dissociated within a single experimental paradigm. In 2 experiments, we manipulated the temporal competition for attention between an emotional cue word and a subsequent neutral target word by varying cue-target interstimulus interval (ISI) and cue visibility. Interestingly, emotional cues impaired target identification at short ISIs but improved target identification when competition was diminished by either increasing ISI or reducing cue visibility, suggesting that emotional significance of stimuli can improve and impair visual performance through distinct perceptual mechanisms.

  18. The disassociation of visual and acoustic conspecific cues decreases discrimination by female zebra finches (Taeniopygia guttata).

    PubMed

    Campbell, Dana L M; Hauber, Mark E

    2009-08-01

    Female zebra finches (Taeniopygia guttata) use visual and acoustic traits for accurate recognition of male conspecifics. Evidence from video playbacks confirms that both sensory modalities are important for conspecific and species discrimination, but experimental evidence of the individual roles of these cue types affecting live conspecific recognition is limited. In a spatial paradigm to test discrimination, the authors used live male zebra finch stimuli of 2 color morphs, wild-type (conspecific) and white with a painted black beak (foreign), producing 1 of 2 vocalization types: songs and calls learned from zebra finch parents (conspecific) or cross-fostered songs and calls learned from Bengalese finch (Lonchura striata vars. domestica) foster parents (foreign). The authors found that female zebra finches consistently preferred males with conspecific visual and acoustic cues over males with foreign cues, but did not discriminate when the conspecific and foreign visual and acoustic cues were mismatched. These results indicate the importance of both visual and acoustic features for female zebra finches when discriminating between live conspecific males. Copyright 2009 APA, all rights reserved.

  19. Neural substrates of resisting craving during cigarette cue exposure.

    PubMed

    Brody, Arthur L; Mandelkern, Mark A; Olmstead, Richard E; Jou, Jennifer; Tiongson, Emmanuelle; Allen, Valerie; Scheibal, David; London, Edythe D; Monterosso, John R; Tiffany, Stephen T; Korb, Alex; Gan, Joanna J; Cohen, Mark S

    2007-09-15

    In cigarette smokers, the most commonly reported areas of brain activation during visual cigarette cue exposure are the prefrontal, anterior cingulate, and visual cortices. We sought to determine changes in brain activity in response to cigarette cues when smokers actively resist craving. Forty-two tobacco-dependent smokers underwent functional magnetic resonance imaging, during which they were presented with videotaped cues. Three cue presentation conditions were tested: cigarette cues with subjects allowing themselves to crave (cigarette cue crave), cigarette cues with the instruction to resist craving (cigarette cue resist), and matched neutral cues. Activation was found in the cigarette cue resist (compared with the cigarette cue crave) condition in the left dorsal anterior cingulate cortex (ACC), posterior cingulate cortex (PCC), and precuneus. Lower magnetic resonance signal for the cigarette cue resist condition was found in the cuneus bilaterally, left lateral occipital gyrus, and right postcentral gyrus. These relative activations and deactivations were more robust when the cigarette cue resist condition was compared with the neutral cue condition. Suppressing craving during cigarette cue exposure involves activation of limbic (and related) brain regions and deactivation of primary sensory and motor cortices.

  20. Response of hatchling Komodo Dragons (Varanus komodoensis) at Denver Zoo to visual and chemical cues arising from prey.

    PubMed

    Chiszar, David; Krauss, Susan; Shipley, Bryon; Trout, Tim; Smith, Hobart M

    2009-01-01

    Five hatchling Komodo Dragons (Varanus komodoensis) at Denver Zoo were observed in two experiments that studied the effects of visual and chemical cues arising from prey. Rate of tongue flicking was recorded in Experiment 1, and amount of time the lizards spent interacting with stimuli was recorded in Experiment 2. Our hypothesis was that young V. komodoensis would be more dependent upon vision than chemoreception, especially when dealing with live, moving prey. Although visual cues, including prey motion, had a significant effect, chemical cues had a far stronger effect. Implications of this falsification of our initial hypothesis are discussed.

  1. Visual spatial cue use for guiding orientation in two-to-three-year-old children

    PubMed Central

    van den Brink, Danielle; Janzen, Gabriele

    2013-01-01

    In spatial development, representations of the environment and the use of spatial cues change over time. To date, the influence of individual differences in skills relevant for orientation and navigation has not received much attention. The current study investigated orientation abilities on the basis of visual spatial cues in 2–3-year-old children, and assessed factors that possibly influence spatial task performance. Thirty-month and 35-month-olds performed an on-screen Virtual Reality (VR) orientation task searching for an animated target in the presence of visual self-movement cues and landmark information. Results show that, in contrast to 30-month-old children, 35-month-olds were successful in using visual spatial cues for maintaining orientation. Neither age group benefited from landmarks present in the environment, suggesting that successful task performance relied on the use of optic flow cues, rather than object-to-object relations. Analysis of individual differences revealed that 2-year-olds who were relatively more independent in comparison to their peers, as measured by the daily living skills scale of the Vineland Screener parental questionnaire, were most successful at the orientation task. These results support previous findings indicating that the use of various spatial cues gradually improves during early childhood. Our data show that a developmental transition in spatial cue use can be witnessed within a relatively short period of only 5 months. Furthermore, this study indicates that rather than chronological age, individual differences may play a role in successful use of visual cues for spatial updating in an orientation task. Future studies are necessary to assess the exact nature of these individual differences. PMID:24368903

  3. A treat for the eyes. An eye-tracking study on children's attention to unhealthy and healthy food cues in media content.

    PubMed

    Spielvogel, Ines; Matthes, Jörg; Naderer, Brigitte; Karsay, Kathrin

    2018-06-01

    Based on cue reactivity theory, food cues embedded in media content can lead to physiological and psychological responses in children. Research suggests that unhealthy food cues are represented more extensively and interactively in children's media environments than healthy ones. However, it is not clear to date whether children react differently to unhealthy compared to healthy food cues. In an experimental study with 56 children (55.4% girls; M age = 8.00, SD = 1.58), we used eye-tracking to determine children's attention to unhealthy and healthy food cues embedded in a narrative cartoon movie. Besides varying the food type (i.e., healthy vs. unhealthy), we also manipulated the integration level of food cues with characters (i.e., level of food integration; no interaction vs. handling vs. consumption), and we assessed children's individual susceptibility factors by measuring the impact of their hunger level. Our results indicated that unhealthy food cues attract children's visual attention to a larger extent than healthy cues. However, children's initial visual interest did not differ between unhealthy and healthy food cues. Furthermore, an increase in the level of food integration led to an increase in visual attention. Our findings showed no moderating impact of hunger. We conclude that especially unhealthy food cues with an interactive connection trigger cue reactivity in children. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Validating Visual Cues In Flight Simulator Visual Displays

    NASA Astrophysics Data System (ADS)

    Aronson, Moses

    1987-09-01

    Currently, evaluations of visual simulators are performed either through pilot opinion questionnaires or by comparison of aircraft terminal performance. The approach here is to compare pilot performance in the flight simulator with a visual display against performance on the same visual task in the aircraft, as an indication that the visual cues are identical. The A-7 Night Carrier Landing task was selected. Performance measures with high predictive power for pilot performance were used to compare two samples of existing pilot performance data to show that the visual cues evoked the same performance. The performance of four pilots making 491 night landing approaches in an A-7 prototype part-task trainer was compared with the performance of three pilots performing 27 A-7E carrier landing qualification approaches on the CV-60 aircraft carrier. The results show that the pilots' performances were similar, supporting the conclusion that the visual cues provided in the simulator were identical to those in the real-world situation. Differences between the flight simulator's flight characteristics and the aircraft's have less of an effect than the pilots' individual performances. The measurement parameters used in the comparison can be used to validate the visual display's adequacy for training.

  5. I can see what you are saying: Auditory labels reduce visual search times.

    PubMed

    Cho, Kit W

    2016-10-01

    The present study explored the self-directed-speech effect, the finding that relative to silent reading of a label (e.g., DOG), saying it aloud reduces visual search reaction times (RTs) for locating a target picture among distractors. Experiment 1 examined whether this effect is due to a confound in the differences in the number of cues in self-directed speech (two) vs. silent reading (one) and tested whether self-articulation is required for the effect. The results showed that self-articulation is not required and that merely hearing the auditory label reduces visual search RTs relative to silent reading. This finding also rules out the number of cues confound. Experiment 2 examined whether hearing an auditory label activates more prototypical features of the label's referent and whether the auditory-label benefit is moderated by the target's imagery concordance (the degree to which the target picture matches the mental picture that is activated by a written label for the target). When the target imagery concordance was high, RTs following the presentation of a high prototypicality picture or auditory cue were comparable and shorter than RTs following a visual label or low prototypicality picture cue. However, when the target imagery concordance was low, RTs following an auditory cue were shorter than the comparable RTs following the picture cues and visual-label cue. The results suggest that an auditory label activates both prototypical and atypical features of a concept and can facilitate visual search RTs even when compared to picture primes. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. A Model of Manual Control with Perspective Scene Viewing

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara Townsend

    2013-01-01

    A model of manual control during perspective scene viewing is presented, which combines the Crossover Model with a simplified model of perspective-scene viewing and visual-cue selection. The model is developed for a particular example task: an idealized constant-altitude task in which the operator controls longitudinal position in the presence of both longitudinal and pitch disturbances. An experiment is performed to develop and validate the model. The model corresponds closely with the experimental measurements, and identified model parameters are highly consistent with the visual cues available in the perspective scene. The modeling results indicate that operators used one visual cue for position control, and another visual cue for velocity control (lead generation). Additionally, operators responded more quickly to rotation (pitch) than translation (longitudinal).
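    For reference, the Crossover Model named above has a standard open-loop form in the general manual-control literature (stated from that literature, not from this abstract): near the crossover frequency, the combined pilot-vehicle transfer function behaves as

```latex
Y_p(j\omega)\, Y_c(j\omega) \;\approx\; \frac{\omega_c \, e^{-j\omega \tau_e}}{j\omega}
```

    where $Y_p$ is the pilot describing function, $Y_c$ the controlled-element dynamics, $\omega_c$ the crossover frequency, and $\tau_e$ an effective time delay; the pilot adapts $Y_p$ so that the product retains this integrator-like form regardless of the plant.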

  7. The language used in describing autobiographical memories prompted by life period visually presented verbal cues, event-specific visually presented verbal cues and short musical clips of popular music.

    PubMed

    Zator, Krysten; Katz, Albert N

    2017-07-01

    Here, we examined linguistic differences in the reports of memories produced by three cueing methods. Two groups of young adults were cued visually either by words representing events or popular cultural phenomena that took place when they were 5, 10, or 16 years of age, or by words referencing a general lifetime period word cue directing them to that period in their life. A third group heard 30-second long musical clips of songs popular during the same three time periods. In each condition, participants typed a specific event memory evoked by the cue and these typed memories were subjected to analysis by the Linguistic Inquiry and Word Count (LIWC) program. Differences in the reports produced indicated that listening to music evoked memories embodied in motor-perceptual systems more so than memories evoked by our word-cueing conditions. Additionally, relative to music cues, lifetime period word cues produced memories with reliably more uses of personal pronouns, past tense terms, and negative emotions. The findings provide evidence for the embodiment of autobiographical memories, and how those differ when the cues emphasise different aspects of the encoded events.

  8. Externalizing proneness and brain response during pre-cuing and viewing of emotional pictures

    PubMed Central

    Foell, Jens; Brislin, Sarah J.; Strickland, Casey M.; Seo, Dongju; Sabatinelli, Dean

    2016-01-01

    Externalizing proneness, or trait disinhibition, is a concept relevant to multiple high-impact disorders involving impulsive-aggressive behavior. Its mechanisms remain disputed: major models posit hyperresponsive reward circuitry or heightened threat-system reactivity as sources of disinhibitory tendencies. This study evaluated alternative possibilities by examining relations between trait disinhibition and brain reactivity during preparation for and processing of visual affective stimuli. Forty females participated in a functional neuroimaging procedure with stimuli presented in blocks containing either pleasurable or aversive pictures interspersed with neutral, with each picture preceded by a preparation signal. Preparing to view elicited activation in regions including nucleus accumbens, whereas visual regions and bilateral amygdala were activated during viewing of emotional pictures. High disinhibition predicted reduced nucleus accumbens activation during preparation within pleasant/neutral picture blocks, along with enhanced amygdala reactivity during viewing of pleasant and aversive pictures. Follow-up analyses revealed that the augmented amygdala response was related to reduced preparatory activation. Findings indicate that participants high in disinhibition are less able to process implicit cues and mentally prepare for upcoming stimuli, leading to limbic hyperreactivity during processing of actual stimuli. This outcome is helpful for integrating findings from studies suggesting reward-system hyperreactivity and others suggesting threat-system hyperreactivity as mechanisms for externalizing proneness. PMID:26113614

  9. Simon Effect with and without Awareness of the Accessory Stimulus

    ERIC Educational Resources Information Center

    Treccani, Barbara; Umilta, Carlo; Tagliabue, Mariaelena

    2006-01-01

    The authors investigated whether a Simon effect could be observed in an accessory-stimulus Simon task when participants were unaware of the task-irrelevant accessory cue. In Experiment 1A a central visual target was accompanied by a suprathreshold visual lateral cue. A regular Simon effect (i.e., faster cue-response corresponding reaction times…

  10. Infants' Selective Attention to Reliable Visual Cues in the Presence of Salient Distractors

    ERIC Educational Resources Information Center

    Tummeltshammer, Kristen Swan; Mareschal, Denis; Kirkham, Natasha Z.

    2014-01-01

    With many features competing for attention in their visual environment, infants must learn to deploy attention toward informative cues while ignoring distractions. Three eye tracking experiments were conducted to investigate whether 6- and 8-month-olds (total N = 102) would shift attention away from a distractor stimulus to learn a cue-reward…

  11. Atypical Visual Orienting to Gaze- and Arrow-Cues in Adults with High Functioning Autism

    ERIC Educational Resources Information Center

    Vlamings, Petra H. J. M.; Stauder, Johannes E. A.; van Son, Ilona A. M.; Mottron, Laurent

    2005-01-01

    The present study investigates visual orienting to directional cues (arrow or eyes) in adults with high functioning autism (n = 19) and age matched controls (n = 19). A choice reaction time paradigm is used in which eye- or arrow direction correctly (congruent) or incorrectly (incongruent) cues target location. In typically developing participants,…

  12. Integration of visual and motion cues for simulator requirements and ride quality investigation

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1976-01-01

    Practical tools which can extend the state of the art of moving-base flight simulation for research and training are developed. Main approaches to this research effort include: (1) application of the vestibular model for perception of orientation based on motion cues, with optimum simulator motion controls; and (2) visual cues in landing.

  13. Memory for Drug Related Visual Stimuli in Young Adult, Cocaine Dependent Polydrug Users

    PubMed Central

    Ray, Suchismita; Pandina, Robert; Bates, Marsha E.

    2015-01-01

    Background and Objectives: Implicit (unconscious) and explicit (conscious) memory associations with drugs have been examined primarily using verbal cues. However, drug seeking, drug use behaviors, and relapse in chronic cocaine and other drug users are frequently triggered by viewing substance-related visual cues in the environment. We thus examined implicit and explicit memory for drug picture cues to understand the relative extent to which conscious and unconscious memory facilitation of visual drug cues occurs during cocaine dependence. Methods: Memory for drug-related and neutral picture cues was assessed in 14 inpatient cocaine-dependent polydrug users and a comparison group of 21 young adults with limited drug experience (N = 35). Participants completed picture cue exposure, free recall, and recognition tasks to assess explicit memory, and a repetition priming task to assess implicit memory. Results: Drug cues, compared to neutral cues, were better explicitly recalled and implicitly primed, especially in the cocaine group. In contrast, neutral cues were better explicitly recognized, especially in the control group. Conclusion: Certain forms of explicit and implicit memory for drug cues were enhanced in cocaine users compared to controls when memory was tested a short time following cue exposure. Enhanced unconscious memory processing of drug cues in chronic cocaine users may be a behavioral manifestation of heightened drug cue salience that supports drug seeking and taking. There may be value in expanding intervention techniques to utilize cocaine users' implicit memory system. PMID:24588421

  14. Food and conspecific chemical cues modify visual behavior of zebrafish, Danio rerio.

    PubMed

    Stephenson, Jessica F; Partridge, Julian C; Whitlock, Kathleen E

    2012-06-01

    Animals use the different qualities of olfactory and visual sensory information to make decisions. Ethological and electrophysiological evidence suggests that there is cross-modal priming between these sensory systems in fish. We present the first experimental study showing that ecologically relevant chemical mixtures alter visual behavior, using adult male and female zebrafish, Danio rerio. Neutral-density filters were used to attenuate the light reaching the tank to an initial light intensity of 2.3×10^16 photons/s/m^2. Fish were exposed to food cue and to alarm cue. The light intensity was then increased by the removal of one layer of filter (nominal absorbance 0.3) every minute until, after 10 minutes, the light level was 15.5×10^16 photons/s/m^2. Adult male and female zebrafish responded to a moving visual stimulus at lower light levels if they had been first exposed to food cue, or to conspecific alarm cue. These results suggest the need for more integrative studies of sensory biology.
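    The light-level manipulation described above follows the standard neutral-density relation I = I0 · 10^(−A), so removing one filter of nominal absorbance 0.3 raises intensity by a factor of 10^0.3 ≈ 2. A minimal sketch of that arithmetic (the function name and example values are illustrative):

```python
def transmitted_intensity(incident, absorbance):
    """Intensity after passing through a neutral-density filter:
    I = I0 * 10**(-A), where A is the filter's absorbance."""
    return incident * 10 ** (-absorbance)

# One 0.3-absorbance filter cuts intensity roughly in half, so
# removing one filter per minute roughly doubles the light level
# each minute:
step = 10 ** 0.3                                   # ~1.995x per filter removed
behind_one_filter = transmitted_intensity(2.3e16, 0.3)   # photons/s/m^2
```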

  15. Visuo-vestibular interaction: predicting the position of a visual target during passive body rotation.

    PubMed

    Mackrous, I; Simoneau, M

    2011-11-10

    Following body rotation, optimal updating of the position of a memorized target is attained when retinal error is perceived and a corrective saccade is performed. Thus, it appears that these processes may enable the calibration of the vestibular system by facilitating the sharing of information between both reference frames. Here, it is assessed whether having sensory information regarding body rotation in the target reference frame could enhance an individual's learning rate to predict the position of an earth-fixed target. During rotation, participants had to respond when they felt their body midline had crossed the position of the target and received knowledge of result. During practice blocks, for two groups, visual cues were displayed in the same reference frame as the target, whereas a third group relied on vestibular information (vestibular-only group) to predict the location of the target. Participants, unaware of the role of the visual cues (visual cues group), learned to predict the location of the target and spatial error decreased from 16.2 to 2.0°, reflecting a learning rate of 34.08 trials (determined from fitting a falling exponential model). In contrast, the group aware of the role of the visual cues (explicit visual cues group) showed a faster learning rate (i.e. 2.66 trials) but a similar final spatial error of 2.9°. For the vestibular-only group, similar accuracy was achieved (final spatial error of 2.3°), but their learning rate was much slower (i.e. 43.29 trials). Transfer to the Post-test (no visual cues and no knowledge of result) increased the spatial error of the explicit visual cues group (9.5°), but it did not change the performance of the vestibular group (1.2°). Overall, these results imply that cognition assists the brain in processing the sensory information within the target reference frame. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
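    The learning rates quoted above come from fitting a falling exponential to trial-by-trial spatial error. A sketch of that model (the functional form is the standard one; the parameter values below merely echo the reported numbers for illustration and are not a re-analysis of the data):

```python
import math

def falling_exponential(n, e0, ef, tau):
    """Spatial error after n practice trials:
    E(n) = ef + (e0 - ef) * exp(-n / tau),
    where e0 is the initial error, ef the asymptotic error, and
    tau the learning rate expressed in trials."""
    return ef + (e0 - ef) * math.exp(-n / tau)

# Illustration with numbers shaped like the vestibular-only group
# (initial error 16.2 deg, asymptote ~2 deg, learning rate ~43 trials):
start = falling_exponential(0, 16.2, 2.0, 43.29)
after_one_tau = falling_exponential(43.29, 16.2, 2.0, 43.29)
# After one time constant (tau trials) the excess error above the
# asymptote has decayed by a factor of e.
```

    A smaller tau thus means faster learning, which is why the explicit visual cues group (tau ≈ 2.66 trials) converged far sooner than the vestibular-only group (tau ≈ 43.29 trials).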

  16. Simulation Evaluation of Equivalent Vision Technologies for Aerospace Operations

    NASA Technical Reports Server (NTRS)

    Kramer, Lynda J.; Williams, Steven P.; Wilz, Susan J.; Arthur, Jarvis J.

    2009-01-01

    A fixed-base simulation experiment was conducted in NASA Langley Research Center's Integration Flight Deck simulator to investigate enabling technologies for equivalent visual operations (EVO) in the emerging Next Generation Air Transportation System operating environment. EVO implies the capability to achieve or even improve on the safety of current-day Visual Flight Rules (VFR) operations, maintain the operational tempos of VFR, and perhaps even retain VFR procedures - all independent of the actual weather and visibility conditions. Twenty-four air transport-rated pilots evaluated the use of Synthetic/Enhanced Vision Systems (S/EVS) and eXternal Vision Systems (XVS) technologies as enabling technologies for future all-weather operations. The experimental objectives were to determine the feasibility of XVS/SVS/EVS to provide for all weather (visibility) landing capability without the need (or ability) for a visual approach segment and to determine the interaction of XVS/EVS and peripheral vision cues for terminal area and surface operations. Another key element of the testing investigated the pilot's awareness and reaction to non-normal events (i.e., failure conditions) that were unexpectedly introduced into the experiment. These non-normal runs served as critical determinants in the underlying safety of all-weather operations. Experimental data from this test are cast into performance-based approach and landing standards which might establish a basis for future all-weather landing operations. Glideslope tracking performance appears to have improved with the elimination of the approach visual segment. This improvement can most likely be attributed to the fact that the pilots didn't have to simultaneously perform glideslope corrections and find required visual landing references in order to continue a landing.
Lateral tracking performance was excellent regardless of the display concept being evaluated or whether or not there were peripheral cues in the side window. Although workload ratings were significantly less when peripheral cues were present compared to when there were none, these differences appear to be operationally inconsequential. Larger display concepts tested in this experiment showed significant situation awareness (SA) improvements and workload reductions compared to smaller display concepts. With a fixed display size, a color display was more influential in SA and workload ratings than a collimated display.

  17. Perception is key? Does perceptual sensitivity and parenting behavior predict children's reactivity to others' emotions?

    PubMed

    Weeland, Joyce; Van den Akker, Alithe; Slagt, Meike; Putnam, Samuel

    2017-11-01

    When interacting with other people, both children's biological predispositions and past experiences play a role in how they will process and respond to social-emotional cues. Children may partly differ in their reactions to such cues because they differ in the threshold for perceiving such cues in general. Theoretically, perceptual sensitivity (i.e., the amount of detection of slight, low-intensity stimuli from the external environment independent of visual and auditory ability) might, therefore, provide us with specific information on individual differences in susceptibility to the environment. However, the temperament trait of perceptual sensitivity is highly understudied. In an experiment, we tested whether school-aged children's (N = 521, 52.5% boys, M age = 9.72 years, SD = 1.51) motor (facial electromyography) and affective (self-report) reactivities to dynamic facial expressions and vocalizations are predicted by their (parent-reported) perceptual sensitivity. Our results indicate that children's perceptual sensitivity predicts their motor reactivity to both happy and angry expressions and vocalizations. In addition, perceptual sensitivity interacted with positive (but not negative) parenting behavior in predicting children's motor reactivity to these emotions. Our findings suggest that perceptual sensitivity might indeed provide us with information on individual differences in reactivity to social-emotional cues, both alone and in interaction with parenting behavior. Because perceptual sensitivity focuses specifically on whether children perceive cues from their environment, and not on whether these cues cause arousal and/or whether children are able to regulate this arousal, it should be considered that perceptual sensitivity lies at the root of such individual differences. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. The contribution of visual information to the perception of speech in noise with and without informative temporal fine structure

    PubMed Central

    Stacey, Paula C.; Kitterick, Pádraig T.; Morris, Saffron D.; Sumner, Christian J.

    2017-01-01

    Understanding what is said in demanding listening situations is assisted greatly by looking at the face of a talker. Previous studies have observed that normal-hearing listeners can benefit from this visual information when a talker's voice is presented in background noise. These benefits have also been observed in quiet listening conditions in cochlear-implant users, whose device does not convey the informative temporal fine structure cues in speech, and when normal-hearing individuals listen to speech processed to remove these informative temporal fine structure cues. The current study (1) characterised the benefits of visual information when listening in background noise; and (2) used sine-wave vocoding to compare the size of the visual benefit when speech is presented with or without informative temporal fine structure. The accuracy with which normal-hearing individuals reported words in spoken sentences was assessed across three experiments. The availability of visual information and informative temporal fine structure cues was varied within and across the experiments. The results showed that visual benefit was observed using open- and closed-set tests of speech perception. The size of the benefit increased when informative temporal fine structure cues were removed. This finding suggests that visual information may play an important role in the ability of cochlear-implant users to understand speech in many everyday situations. Models of audio-visual integration were able to account for the additional benefit of visual information when speech was degraded and suggested that auditory and visual information was being integrated in a similar way in all conditions. The modelling results were consistent with the notion that audio-visual benefit is derived from the optimal combination of auditory and visual sensory cues. PMID:27085797

  19. Gaze-contingent reinforcement learning reveals incentive value of social signals in young children and adults

    PubMed Central

    Smith, Tim J.; Senju, Atsushi

    2017-01-01

While numerous studies have demonstrated that infants and adults preferentially orient to social stimuli, it remains unclear as to what drives such preferential orienting. It has been suggested that the learned association between social cues and subsequent reward delivery might shape such social orienting. Using a novel, spontaneous indication of reinforcement learning (with the use of a gaze contingent reward-learning task), we investigated whether children's and adults' orienting towards social and non-social visual cues can be elicited by the association between participants' visual attention and a rewarding outcome. Critically, we assessed whether the engaging nature of the social cues influences the process of reinforcement learning. Both children and adults learned to orient more often to the visual cues associated with reward delivery, demonstrating that cue–reward association reinforced visual orienting. More importantly, when the reward-predictive cue was social and engaging, both children and adults learned the cue–reward association faster and more efficiently than when the reward-predictive cue was social but non-engaging. These new findings indicate that engaging social cues have a positive incentive value. This could possibly be because they usually coincide with positive outcomes in real life, which could partly drive the development of social orienting. PMID:28250186

  20. Working memory enhances visual perception: evidence from signal detection analysis.

    PubMed

    Soto, David; Wriglesworth, Alice; Bahrami-Balani, Alex; Humphreys, Glyn W

    2010-03-01

    We show that perceptual sensitivity to visual stimuli can be modulated by matches between the contents of working memory (WM) and stimuli in the visual field. Observers were presented with an object cue (to hold in WM or to merely attend) and subsequently had to identify a brief target presented within a colored shape. The cue could be re-presented in the display, where it surrounded either the target (on valid trials) or a distractor (on invalid trials). Perceptual identification of the target, as indexed by A', was enhanced on valid relative to invalid trials but only when the cue was kept in WM. There was minimal effect of the cue when it was merely attended and not kept in WM. Verbal cues were as effective as visual cues at modulating perceptual identification, and the effects were independent of the effects of target saliency. Matches to the contents of WM influenced perceptual sensitivity even under conditions that minimized competition for selecting the target. WM cues were also effective when targets were less likely to fall in a repeated WM stimulus than in other stimuli in the search display. There were no effects of WM on decisional criteria, in contrast to sensitivity. The findings suggest that reentrant feedback from WM can affect early stages of perceptual processing.
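For readers unfamiliar with the A' statistic mentioned above: it is a nonparametric sensitivity index computed from hit and false-alarm rates. A minimal sketch using Grier's (1971) formula, with illustrative rates that are not taken from the study:

```python
def a_prime(hit_rate: float, fa_rate: float) -> float:
    """Nonparametric sensitivity index A' (Grier, 1971).

    Assumes hit_rate >= fa_rate; ranges from 0.5 (chance
    discrimination) to 1.0 (perfect discrimination).
    """
    h, f = hit_rate, fa_rate
    return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))

# Illustrative values only: an observer with 80% hits and
# 20% false alarms.
print(a_prime(0.8, 0.2))  # 0.875
```

In a design like the one above, a higher A' on valid than on invalid trials, with no shift in the decision criterion, is what separates a genuine sensitivity change from a response bias.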

  1. The association between reading abilities and visual-spatial attention in Hong Kong Chinese children.

    PubMed

    Liu, Sisi; Liu, Duo; Pan, Zhihui; Xu, Zhengye

    2018-03-25

A growing body of research suggests that visual-spatial attention is important for reading achievement. However, few studies have been conducted in non-alphabetic orthographies. This study extended the current research to reading development in Chinese, a logographic writing system known for its visual complexity. Eighty Hong Kong Chinese children were selected and divided into poor reader and typical reader groups, based on their performance on the measures of reading fluency, Chinese character reading, and reading comprehension. The poor and typical readers were matched on age and nonverbal intelligence. A Posner spatial cueing task was adopted to measure the exogenous and endogenous orienting of visual-spatial attention. Although the typical readers showed the cueing effect in the central cue condition (i.e., responses to targets following valid cues were faster than those to targets following invalid cues), the poor readers did not respond differently in valid and invalid conditions, suggesting an impairment of the endogenous orienting of attention. The two groups, however, showed a similar cueing effect in the peripheral cue condition, indicating intact exogenous orienting in the poor readers. These findings generally supported a link between the orienting of covert attention and Chinese reading, providing evidence for the attentional-deficit theory of dyslexia. Copyright © 2018 John Wiley & Sons, Ltd.

  2. Time-resolved neuroimaging of visual short term memory consolidation by post-perceptual attention shifts.

    PubMed

    Hecht, Marcus; Thiemann, Ulf; Freitag, Christine M; Bender, Stephan

    2016-01-15

Post-perceptual cues can enhance visual short term memory encoding even after the offset of the visual stimulus. However, both the mechanisms by which the sensory stimulus characteristics are buffered as well as the mechanisms by which post-perceptual selective attention enhances short term memory encoding remain unclear. We analyzed late post-perceptual event-related potentials (ERPs) in visual change detection tasks (100 ms stimulus duration) by high-resolution ERP analysis to elucidate these mechanisms. The effects of early and late auditory post-cues (300 ms or 850 ms after visual stimulus onset) as well as the effects of a visual interference stimulus were examined in 27 healthy right-handed adults. Focusing attention with post-perceptual cues at both latencies significantly improved memory performance, i.e., sensory stimulus characteristics were available for up to 850 ms after stimulus presentation. Passive watching of the visual stimuli without auditory cue presentation evoked a slow negative wave (N700) over occipito-temporal visual areas. N700 was strongly reduced by a visual interference stimulus which impeded memory maintenance. In contrast, contralateral delay activity (CDA) still developed in this condition after the application of auditory post-cues and was thereby dissociated from N700. CDA and N700 seem to represent two different processes involved in short term memory encoding. While N700 could reflect visual post-processing by automatic attention attraction, CDA may reflect the top-down process of searching selectively for the required information through post-perceptual attention. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Motivation and short-term memory in visual search: Attention's accelerator revisited.

    PubMed

    Schneider, Daniel; Bonmassar, Claudia; Hickey, Clayton

    2018-05-01

    A cue indicating the possibility of cash reward will cause participants to perform memory-based visual search more efficiently. A recent study has suggested that this performance benefit might reflect the use of multiple memory systems: when needed, participants may maintain the to-be-remembered object in both long-term and short-term visual memory, with this redundancy benefitting target identification during search (Reinhart, McClenahan & Woodman, 2016). Here we test this compelling hypothesis. We had participants complete a memory-based visual search task involving a reward cue that either preceded presentation of the to-be-remembered target (pre-cue) or followed it (retro-cue). Following earlier work, we tracked memory representation using two components of the event-related potential (ERP): the contralateral delay activity (CDA), reflecting short-term visual memory, and the anterior P170, reflecting long-term storage. We additionally tracked attentional preparation and deployment in the contingent negative variation (CNV) and N2pc, respectively. Results show that only the reward pre-cue impacted our ERP indices of memory. However, both types of cue elicited a robust CNV, reflecting an influence on task preparation, both had equivalent impact on deployment of attention to the target, as indexed in the N2pc, and both had equivalent impact on visual search behavior. Reward prospect thus has an influence on memory-guided visual search, but this does not appear to be necessarily mediated by a change in the visual memory representations indexed by CDA. Our results demonstrate that the impact of motivation on search is not a simple product of improved memory for target templates. Copyright © 2017 Elsevier Ltd. All rights reserved.
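The CDA and N2pc referenced above are both lateralized ERP components, conventionally quantified as a contralateral-minus-ipsilateral difference wave averaged over trials. A toy sketch on synthetic data (the trial counts, sample counts, and effect size below are invented purely for illustration):

```python
import numpy as np

def lateralized_difference_wave(contra: np.ndarray, ipsi: np.ndarray) -> np.ndarray:
    """Contralateral-minus-ipsilateral difference wave, the basic
    quantity behind lateralized components such as the N2pc and CDA.
    Inputs are (trials x samples) arrays; output is the trial-averaged
    difference time course."""
    return (contra - ipsi).mean(axis=0)

# Synthetic illustration: 50 trials x 200 samples of noise, with a
# small sustained negativity added to the contralateral channel in
# the second half of the epoch (a CDA-like effect).
rng = np.random.default_rng(0)
ipsi = rng.normal(0.0, 1.0, (50, 200))
contra = rng.normal(0.0, 1.0, (50, 200))
contra[:, 100:] -= 0.5          # hypothetical sustained effect
wave = lateralized_difference_wave(contra, ipsi)
# The averaged wave goes more negative where the effect was added.
print(wave[100:].mean() < wave[:100].mean())
```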

  4. Rebalancing Spatial Attention: Endogenous Orienting May Partially Overcome the Left Visual Field Bias in Rapid Serial Visual Presentation.

    PubMed

    Śmigasiewicz, Kamila; Hasan, Gabriel Sami; Verleger, Rolf

    2017-01-01

In dynamically changing environments, spatial attention is not equally distributed across the visual field. For instance, when two streams of stimuli are presented left and right, the second target (T2) is better identified in the left visual field (LVF) than in the right visual field (RVF). Recently, it has been shown that this bias is related to weaker stimulus-driven orienting of attention toward the RVF: The RVF disadvantage was reduced with salient task-irrelevant valid cues and increased with invalid cues. Here we studied whether endogenous orienting of attention may also compensate for this unequal distribution of stimulus-driven attention. Explicit information was provided about the location of T1 and T2. Effectiveness of the cue manipulation was confirmed by EEG measures: decreasing alpha power before stream onset with informative cues, earlier latencies of potentials evoked by T1-preceding distractors at the right than at the left hemisphere when T1 was cued left, and decreasing T1- and T2-evoked N2pc amplitudes with informative cues. Importantly, informative cues reduced (though did not completely abolish) the LVF advantage, indicated by improved identification of right T2, and reflected by earlier N2pc latency evoked by right T2 and larger decrease in alpha power after cues indicating right T2. Overall, these results suggest that endogenously driven attention facilitates stimulus-driven orienting of attention toward the RVF, thereby partially overcoming the basic LVF bias in spatial attention.
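The alpha-power measure used in studies like this one is a band-limited power estimate. A minimal single-channel sketch using a plain FFT on a synthetic signal (real EEG pipelines use time-frequency methods such as wavelets or multitapers; the sampling rate and frequencies below are arbitrary):

```python
import numpy as np

def band_power(signal: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Mean power within [lo, hi] Hz from the FFT power spectrum.
    A deliberately simple stand-in for proper time-frequency analysis."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    mask = (freqs >= lo) & (freqs <= hi)
    return power[mask].mean()

fs = 250.0                                   # hypothetical sampling rate
t = np.arange(0, 2, 1 / fs)                  # 2 s epoch
# Synthetic "EEG": strong 10 Hz (alpha) plus weak 25 Hz (beta) component.
eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * np.sin(2 * np.pi * 25 * t)
alpha = band_power(eeg, fs, 8, 12)           # alpha band (8-12 Hz)
beta = band_power(eeg, fs, 18, 30)           # beta band for comparison
print(alpha > beta)  # True
```

A pre-stimulus alpha decrease with informative cues, as reported above, would correspond to a lower `alpha` value on cued than on uncued trials.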

  5. A systematic comparison between visual cues for boundary detection.

    PubMed

    Mély, David A; Kim, Junkyung; McGill, Mason; Guo, Yuliang; Serre, Thomas

    2016-03-01

    The detection of object boundaries is a critical first step for many visual processing tasks. Multiple cues (we consider luminance, color, motion and binocular disparity) available in the early visual system may signal object boundaries but little is known about their relative diagnosticity and how to optimally combine them for boundary detection. This study thus aims at understanding how early visual processes inform boundary detection in natural scenes. We collected color binocular video sequences of natural scenes to construct a video database. Each scene was annotated with two full sets of ground-truth contours (one set limited to object boundaries and another set which included all edges). We implemented an integrated computational model of early vision that spans all considered cues, and then assessed their diagnosticity by training machine learning classifiers on individual channels. Color and luminance were found to be most diagnostic while stereo and motion were least. Combining all cues yielded a significant improvement in accuracy beyond that of any cue in isolation. Furthermore, the accuracy of individual cues was found to be a poor predictor of their unique contribution for the combination. This result suggested a complex interaction between cues, which we further quantified using regularization techniques. Our systematic assessment of the accuracy of early vision models for boundary detection together with the resulting annotated video dataset should provide a useful benchmark towards the development of higher-level models of visual processing. Copyright © 2016 Elsevier Ltd. All rights reserved.
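The cue-combination result above can be illustrated with a toy simulation: two noisy "channels" carry the same boundary signal with different reliabilities, and even naive averaging of the channels outperforms either one alone. All numbers here are invented; this does not reproduce the study's classifiers or features:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
labels = rng.integers(0, 2, n)              # boundary present / absent
# Two noisy "cue channels" with different reliabilities
# (loosely: a more diagnostic and a less diagnostic cue).
cue_a = labels + rng.normal(0, 0.8, n)
cue_b = labels + rng.normal(0, 1.2, n)

def accuracy(scores: np.ndarray) -> float:
    """Classify by thresholding at the midpoint between class means."""
    thr = 0.5 * (scores[labels == 0].mean() + scores[labels == 1].mean())
    return ((scores > thr) == labels.astype(bool)).mean()

acc_a = accuracy(cue_a)
acc_b = accuracy(cue_b)
acc_combined = accuracy((cue_a + cue_b) / 2)   # naive cue combination
print(acc_combined > max(acc_a, acc_b))        # True
```

Here equal-weight averaging already helps because the channel noises are independent; weighting each channel by its inverse noise variance would be the textbook optimal combination.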

  6. Attentional Capture of Objects Referred to by Spoken Language

    ERIC Educational Resources Information Center

    Salverda, Anne Pier; Altmann, Gerry T. M.

    2011-01-01

    Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants…

  7. Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli

    PubMed Central

    Störmer, Viola S.; McDonald, John J.; Hillyard, Steven A.

    2009-01-01

    The question of whether attention makes sensory impressions appear more intense has been a matter of debate for over a century. Recent psychophysical studies have reported that attention increases apparent contrast of visual stimuli, but the issue continues to be debated. We obtained converging neurophysiological evidence from human observers as they judged the relative contrast of visual stimuli presented to the left and right visual fields following a lateralized auditory cue. Cross-modal cueing of attention boosted the apparent contrast of the visual target in association with an enlarged neural response in the contralateral visual cortex that began within 100 ms after target onset. The magnitude of the enhanced neural response was positively correlated with perceptual reports of the cued target being higher in contrast. The results suggest that attention increases the perceived contrast of visual stimuli by boosting early sensory processing in the visual cortex. PMID:20007778

  9. The role of temporal synchrony as a binding cue for visual persistence in early visual areas: an fMRI study.

    PubMed

    Wong, Yvonne J; Aldcroft, Adrian J; Large, Mary-Ellen; Culham, Jody C; Vilis, Tutis

    2009-12-01

We examined the role of temporal synchrony (the simultaneous appearance of visual features) in the perceptual and neural processes underlying object persistence. When a binding cue (such as color or motion) momentarily exposes an object from a background of similar elements, viewers remain aware of the object for several seconds before it perceptually fades into the background, a phenomenon known as object persistence. We showed that persistence from temporal stimulus synchrony, like that arising from motion and color, is associated with activation in the lateral occipital (LO) area, as measured by functional magnetic resonance imaging. We also compared the distribution of occipital cortex activity related to persistence to that of iconic visual memory. Although activation related to iconic memory was largely confined to LO, activation related to object persistence was present across V1 to LO, peaking in V3 and V4, regardless of the binding cue (temporal synchrony, motion, or color). Although persistence from motion cues was not associated with higher activation in the MT+ motion complex, persistence from color cues was associated with increased activation in V4. Taken together, these results demonstrate that although persistence is a form of visual memory, it relies on neural mechanisms different from those of iconic memory. That is, persistence not only activates LO in a cue-independent manner, it also recruits visual areas that may be necessary to maintain binding between object elements.

  10. The effects of auditory and visual cues on timing synchronicity for robotic rehabilitation.

    PubMed

    English, Brittney A; Howard, Ayanna M

    2017-07-01

In this paper, we explore how the integration of auditory and visual cues can help teach the timing of motor skills for the purpose of motor function rehabilitation. We conducted a study using Amazon's Mechanical Turk in which 106 participants played a virtual therapy game requiring wrist movements. To validate that our results would translate to trends that could also be observed during robotic rehabilitation sessions, we recreated this experiment with 11 participants using a robotic wrist rehabilitation system as means to control the therapy game. During interaction with the therapy game, users were asked to learn and reconstruct a tapping sequence as defined by musical notes flashing on the screen. Participants were divided into 2 test groups: (1) control: participants only received visual cues to prompt them on the timing sequence, and (2) experimental: participants received both visual and auditory cues to prompt them on the timing sequence. To evaluate performance, the timing and length of the sequence were measured. Performance was determined by calculating the number of trials needed before the participant was able to master the specific aspect of the timing task. In the virtual experiment, the group that received visual and auditory cues was able to master all aspects of the timing task faster than the visual-cue-only group (p < 0.05). This trend was also verified for participants using the robotic arm exoskeleton in the physical experiment.

  11. Differentiating Visual from Response Sequencing during Long-term Skill Learning.

    PubMed

    Lynch, Brighid; Beukema, Patrick; Verstynen, Timothy

    2017-01-01

    The dual-system model of sequence learning posits that during early learning there is an advantage for encoding sequences in sensory frames; however, it remains unclear whether this advantage extends to long-term consolidation. Using the serial RT task, we set out to distinguish the dynamics of learning sequential orders of visual cues from learning sequential responses. On each day, most participants learned a new mapping between a set of symbolic cues and responses made with one of four fingers, after which they were exposed to trial blocks of either randomly ordered cues or deterministic ordered cues (12-item sequence). Participants were randomly assigned to one of four groups (n = 15 per group): Visual sequences (same sequence of visual cues across training days), Response sequences (same order of key presses across training days), Combined (same serial order of cues and responses on all training days), and a Control group (a novel sequence each training day). Across 5 days of training, sequence-specific measures of response speed and accuracy improved faster in the Visual group than any of the other three groups, despite no group differences in explicit awareness of the sequence. The two groups that were exposed to the same visual sequence across days showed a marginal improvement in response binding that was not found in the other groups. These results indicate that there is an advantage, in terms of rate of consolidation across multiple days of training, for learning sequences of actions in a sensory representational space, rather than as motoric representations.

  12. Visual landmarks facilitate rodent spatial navigation in virtual reality environments

    PubMed Central

    Youngstrom, Isaac A.; Strowbridge, Ben W.

    2012-01-01

    Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain areas. Virtual reality offers a unique approach to ask whether visual landmark cues alone are sufficient to improve performance in a spatial task. We found that mice could learn to navigate between two water reward locations along a virtual bidirectional linear track using a spherical treadmill. Mice exposed to a virtual environment with vivid visual cues rendered on a single monitor increased their performance over a 3-d training regimen. Training significantly increased the percentage of time avatars controlled by the mice spent near reward locations in probe trials without water rewards. Neither improvement during training or spatial learning for reward locations occurred with mice operating a virtual environment without vivid landmarks or with mice deprived of all visual feedback. Mice operating the vivid environment developed stereotyped avatar turning behaviors when alternating between reward zones that were positively correlated with their performance on the probe trial. These results suggest that mice are able to learn to navigate to specific locations using only visual cues presented within a virtual environment rendered on a single computer monitor. PMID:22345484

  13. Making the invisible visible: verbal but not visual cues enhance visual detection.

    PubMed

    Lupyan, Gary; Spivey, Michael J

    2010-07-07

Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual cues. Participants completed an object detection task in which they made an object-presence or -absence decision to briefly presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d'). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception.
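The d' measure reported here is the standard signal-detection sensitivity index: the difference between the inverse-normal-transformed hit and false-alarm rates. A minimal sketch with illustrative rates that are not taken from the study:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity: d' = z(H) - z(FA).

    Rates of exactly 0 or 1 would need a correction (e.g., the
    log-linear rule) before applying the inverse normal transform.
    """
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Illustrative values only: a verbally cued condition with more hits
# at the same false-alarm rate yields higher sensitivity.
print(d_prime(0.80, 0.20) > d_prime(0.65, 0.20))  # True
```

Because hits and false alarms enter symmetrically, a cue that merely made observers more willing to say "present" would raise both rates and shift the criterion, not d'.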

  14. Orienting attention within visual short-term memory: development and mechanisms.

    PubMed

    Shimi, Andria; Nobre, Anna C; Astle, Duncan; Scerif, Gaia

    2014-01-01

    How does developing attentional control operate within visual short-term memory (VSTM)? Seven-year-olds, 11-year-olds, and adults (total n = 205) were asked to report whether probe items were part of preceding visual arrays. In Experiment 1, central or peripheral cues oriented attention to the location of to-be-probed items either prior to encoding or during maintenance. Cues improved memory regardless of their position, but younger children benefited less from cues presented during maintenance, and these benefits related to VSTM span over and above basic memory in uncued trials. In Experiment 2, cues of low validity eliminated benefits, suggesting that even the youngest children use cues voluntarily, rather than automatically. These findings elucidate the close coupling between developing visuospatial attentional control and VSTM. © 2013 The Authors. Child Development © 2013 Society for Research in Child Development, Inc.

  15. Multimodal retrieval of autobiographical memories: sensory information contributes differently to the recollection of events.

    PubMed

    Willander, Johan; Sikström, Sverker; Karlsson, Kristina

    2015-01-01

    Previous studies on autobiographical memory have focused on unimodal retrieval cues (i.e., cues pertaining to one modality). However, from an ecological perspective multimodal cues (i.e., cues pertaining to several modalities) are highly important to investigate. In the present study we investigated age distributions and experiential ratings of autobiographical memories retrieved with unimodal and multimodal cues. Sixty-two participants were randomized to one of four cue-conditions: visual, olfactory, auditory, or multimodal. The results showed that the peak of the distributions depends on the modality of the retrieval cue. The results indicated that multimodal retrieval seemed to be driven by visual and auditory information to a larger extent and to a lesser extent by olfactory information. Finally, no differences were observed in the number of retrieved memories or experiential ratings across the four cue-conditions.

  16. Are Distal and Proximal Visual Cues Equally Important during Spatial Learning in Mice? A Pilot Study of Overshadowing in the Spatial Domain

    PubMed Central

    Hébert, Marie; Bulla, Jan; Vivien, Denis; Agin, Véronique

    2017-01-01

Animals use distal and proximal visual cues to accurately navigate in their environment, with the possibility of the occurrence of associative mechanisms such as cue competition as previously reported in honey-bees, rats, birds and humans. In this pilot study, we investigated one of the most common forms of cue competition, namely the overshadowing effect, between visual landmarks during spatial learning in mice. To this end, C57BL/6J × Sv129 mice were given a two-trial place recognition task in a T-maze, based on a novelty free-choice exploration paradigm previously developed to study spatial memory in rodents. As this procedure implies the use of different aspects of the environment to navigate (i.e., mice can perceive both types of cues from each arm of the maze), we manipulated the distal and proximal visual landmarks during both the acquisition and retrieval phases. Our prospective findings provide a first set of clues in favor of the occurrence of an overshadowing between visual cues during a spatial learning task in mice when both types of cues are of the same modality but at varying distances from the goal. In addition, the observed overshadowing seems to be non-reciprocal, as distal visual cues tend to overshadow the proximal ones when competition occurs, but not vice versa. The results of the present study offer a first insight about the occurrence of associative mechanisms during spatial learning in mice, and may open the way to promising new investigations in this area of research. Furthermore, the methodology used in this study brings a new, useful and easy-to-use tool for the investigation of perceptive, cognitive and/or attentional deficits in rodents. PMID:28634446

  17. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams

    PubMed Central

    Rouinfar, Amy; Agra, Elise; Larson, Adam M.; Rebello, N. Sanjay; Loschky, Lester C.

    2014-01-01

This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants’ attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants’ verbal responses were used to determine their accuracy. This study produced two major findings. First, short duration visual cues which draw attention to solution-relevant information and aid in organizing and integrating it facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers’ attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions. PMID:25324804

  18. Effects of visual cues of object density on perception and anticipatory control of dexterous manipulation.

    PubMed

    Crajé, Céline; Santello, Marco; Gordon, Andrew M

    2013-01-01

    Anticipatory force planning during grasping is based on visual cues about the object's physical properties and sensorimotor memories of previous actions with grasped objects. Vision can be used to estimate object mass based on object size, and to identify and recall sensorimotor memories of previously manipulated objects. It is not known whether subjects can use density cues to identify the object's center of mass (CM) and create compensatory moments in an anticipatory fashion during initial object lifts to prevent tilt. We asked subjects (n = 8) to estimate CM location of visually symmetric objects of uniform densities (plastic or brass, symmetric CM) and non-uniform densities (mixture of plastic and brass, asymmetric CM). We then asked whether subjects could use density cues to scale fingertip forces when lifting the visually symmetric objects of uniform and non-uniform densities. Subjects were able to accurately estimate an object's center of mass based on visual density cues. When the mass distribution was uniform, subjects could scale their fingertip forces in an anticipatory fashion based on this estimate. However, despite their ability to explicitly estimate CM location when object density was non-uniform, subjects were unable to scale their fingertip forces to create a compensatory moment and prevent tilt on initial lifts. Hefting object parts in the hand before the experiment did not affect this ability. This suggests a dichotomy between the ability to accurately identify the CM location of objects with non-uniform density cues and the ability to utilize this information to scale fingertip forces correctly. These results are discussed in the context of possible neural mechanisms underlying sensorimotor integration linking visual cues and anticipatory control of grasping.

  19. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams.

    PubMed

    Rouinfar, Amy; Agra, Elise; Larson, Adam M; Rebello, N Sanjay; Loschky, Lester C

    2014-01-01

    This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants' attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants' verbal responses were used to determine their accuracy. This study produced two major findings. First, short-duration visual cues that draw attention to solution-relevant information, and aid in organizing and integrating it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers' attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions.

  20. Association of Eating Behaviors and Obesity with Psychosocial and Familial Influences

    ERIC Educational Resources Information Center

    Brown, Stephen L.; Schiraldi, Glenn R.; Wrobleski, Peggy P.

    2009-01-01

    Background: Overeating is often attributed to emotions and has been linked to psychological challenges and obesity. Purpose: This study investigated the effect of emotional and external cue eating on obesity and the correlation of emotional and external cue eating with positive and negative psychological factors, as well as early familial eating…

  1. Eating responses to external food cues and internal satiety signals in weight discordant siblings

    USDA-ARS?s Scientific Manuscript database

    Background: Compared to normal-weight children, overweight children are more responsive to external food cues and less sensitive to internal satiety signals, either of which may facilitate greater energy intake. The ability to compensate for prior kcal intake may decrease with age, with children sh...

  2. Effects of Tactile, Visual, and Auditory Cues About Threat Location on Target Acquisition and Attention to Visual and Auditory Communications

    DTIC Science & Technology

    2006-08-01

    Space Administration (NASA) Task Load Index (TLX)...SITREP Questionnaire Example; Appendix C. NASA-TLX; Appendix D. Demographic Questionnaire; Appendix E. Post-Test Questionnaire...Mean ratings of physical demand by cue condition using NASA-TLX; Figure 9. Mean ratings of temporal demand by cue condition

  3. Enhancing visual search abilities of people with intellectual disabilities.

    PubMed

    Li-Tsang, Cecilia W P; Wong, Jackson K K

    2009-01-01

    This study aimed to evaluate the effects of cueing in a visual search paradigm for people with and without intellectual disabilities (ID). A total of 36 subjects (18 persons with ID and 18 persons with normal intelligence) were recruited using convenience sampling. A series of experiments was conducted to compare guided cue strategies that added either motion contrast or an additional cue to a basic search task. Repeated-measures ANOVA and post hoc multiple comparison tests were used to compare each cue strategy. Results showed that the use of guided strategies was able to capture focal attention in an automatic manner in the ID group (Pillai's Trace=5.99, p<0.0001). Both guided cue and guided motion search tasks demonstrated functionally similar effects, confirming the non-specific character of salience. These findings suggest that the visual search efficiency of people with ID improved greatly when the target was made salient through cueing as the complexity of the display increased (i.e., as set size increased). These results have important implications for the design of the visual search format of computerized programs developed to help people with ID learn new tasks.

  4. Flight simulator with spaced visuals

    NASA Technical Reports Server (NTRS)

    Gilson, Richard D. (Inventor); Thurston, Marlin O. (Inventor); Olson, Karl W. (Inventor); Ventola, Ronald W. (Inventor)

    1980-01-01

    A flight simulator arrangement wherein a conventional, movable-base flight trainer is combined with a visual cue display surface spaced a predetermined distance from an eye position within the trainer. Thus, three degrees of motive freedom (roll, pitch, and crab) are provided for the visual, proprioceptive, and vestibular cue system by the trainer, while the remaining geometric visual cue image alterations are developed by a video system. A geometric approach to computing the runway image eliminates the need to electronically compute trigonometric functions, while the use of a line generator and a designated vanishing point at the video system raster permits facile development of the images of the longitudinal edges of the runway.
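The trigonometry-free projection idea summarized in this record can be sketched with a pinhole-camera model (an illustrative reconstruction, not the patented line-generator circuit; all names and values below are hypothetical): projecting a runway edge point reduces to two divisions by similar triangles, and for a runway aligned with the view axis every edge point is collinear with a fixed vanishing point, so a line generator can draw each edge from a single projected point.

```python
# Sketch of a trig-free runway projection (illustrative geometry only,
# not the patented design). Eye at the origin looking down +Z; the runway
# lies in the ground plane y = -eye_height, aligned with the Z axis.

def project(point, f=1.0):
    """Pinhole projection by similar triangles: two divisions, no trig."""
    x, y, z = point
    return (f * x / z, f * y / z)

def runway_edge_image(half_width, eye_height, z_near, z_far, n=5, f=1.0):
    """Image points of one longitudinal runway edge, sampled along Z."""
    pts = []
    for i in range(n):
        z = z_near + (z_far - z_near) * i / (n - 1)
        pts.append(project((half_width, -eye_height, z), f))
    return pts

edge = runway_edge_image(half_width=20.0, eye_height=15.0,
                         z_near=100.0, z_far=1000.0)
# Every edge point lies on a straight image line through the vanishing
# point (0, 0), so a line generator can draw the edge from one point.
for x, y in edge:
    assert abs(x * edge[0][1] - y * edge[0][0]) < 1e-9
print(edge[0])  # nearest edge point: (0.2, -0.15)
```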

  5. Working memory load and the retro-cue effect: A diffusion model account.

    PubMed

    Shepherdson, Peter; Oberauer, Klaus; Souza, Alessandra S

    2018-02-01

    Retro-cues (i.e., cues presented between the offset of a memory array and the onset of a probe) have consistently been found to enhance performance in working memory tasks, sometimes ameliorating the deleterious effects of increased memory load. However, the mechanism by which retro-cues exert their influence remains a matter of debate. To inform this debate, we applied a hierarchical diffusion model to data from 4 change detection experiments using single-item, location-specific probes (i.e., a local recognition task) with either visual or verbal memory stimuli. Results showed that retro-cues enhanced the quality of information entering the decision process, especially for visual stimuli, and decreased the time spent on nondecisional processes. Further, cues interacted with memory load primarily on nondecision time, decreasing or abolishing load effects. To explain these findings, we propose an account whereby retro-cues act primarily to reduce the time taken to access the relevant representation in memory upon probe presentation, and in addition protect cued representations from visual interference. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
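The diffusion-model decomposition described in this record can be illustrated with a toy simulation (a minimal sketch, not the authors' fitted hierarchical model; all parameter values are made up): modeling the retro-cue as a higher drift rate (better evidence quality) plus a shorter nondecision time yields simulated responses that are both faster and more accurate.

```python
# Toy drift-diffusion simulation of the retro-cue account (illustrative
# parameters only, not the paper's hierarchical model fits).
import random

def simulate_trial(drift, nondecision, boundary=1.0, dt=0.001, rng=random):
    """One diffusion trial: evidence accumulates until a boundary is hit.
    Returns (response time in s, whether the correct boundary was hit)."""
    evidence, t = 0.0, 0.0
    while abs(evidence) < boundary:
        evidence += drift * dt + (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return nondecision + t, evidence > 0

def mean_rt(trials):
    return sum(rt for rt, _ in trials) / len(trials)

def accuracy(trials):
    return sum(1 for _, correct in trials if correct) / len(trials)

rng = random.Random(1)
# Retro-cue modeled as higher drift (quality) plus less nondecision time.
cued = [simulate_trial(drift=2.0, nondecision=0.30, rng=rng) for _ in range(500)]
uncued = [simulate_trial(drift=1.0, nondecision=0.45, rng=rng) for _ in range(500)]
print(mean_rt(cued) < mean_rt(uncued))    # cued responses are faster
print(accuracy(cued) > accuracy(uncued))  # and more accurate
```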

  6. Cue-recruitment for extrinsic signals after training with low information stimuli.

    PubMed

    Jain, Anshul; Fuller, Stuart; Backus, Benjamin T

    2014-01-01

    Cue-recruitment occurs when a previously ineffective signal comes to affect the perceptual appearance of a target object, in a manner similar to the trusted cues with which the signal was put into correlation during training. Jain, Fuller and Backus reported that extrinsic signals, those not carried by the target object itself, were not recruited even after extensive training. However, recent studies have shown that training using weakened trusted cues can facilitate recruitment of intrinsic signals. The current study was designed to examine whether extrinsic signals can be recruited by putting them in correlation with weakened trusted cues. Specifically, we tested whether an extrinsic visual signal, the rotary motion direction of an annulus of random dots, and an extrinsic auditory signal, direction of an auditory pitch glide, can be recruited as cues for the rotation direction of a Necker cube. We found learning, albeit weak, for visual but not for auditory signals. These results extend the generality of the cue-recruitment phenomenon to an extrinsic signal and provide further evidence that the visual system learns to use new signals most quickly when other, long-trusted cues are unavailable or unreliable.

  7. The neural correlates of volitional attention: A combined fMRI and ERP study.

    PubMed

    Bengson, Jesse J; Kelley, Todd A; Mangun, George R

    2015-07-01

    Studies of visual-spatial attention typically use instructional cues to direct attention to a relevant location, but in everyday vision, attention is often focused volitionally, in the absence of external signals. Although investigations of cued attention comprise hundreds of behavioral and physiological studies, remarkably few studies of voluntary attention have addressed the challenging question of how spatial attention is initiated and controlled in the absence of external instructions, which we refer to as willed attention. To explore this question, we employed a trial-by-trial spatial attention task using electroencephalography and functional magnetic resonance imaging (fMRI). The fMRI results reveal a unique network of brain regions for willed attention that includes the anterior cingulate cortex, left middle frontal gyrus (MFG), and the left and right anterior insula (AI). We also observed two event-related potentials (ERPs) associated with willed attention; one with a frontal distribution occurring 250-350 ms postdecision-cue onset (EWAC: Early Willed Attention Component), and another occurring between 400 and 800 ms postdecision-cue onset (WAC: Willed Attention Component). In addition, each ERP component uniquely correlated across subjects with different willed attention-specific sites of BOLD activation. The EWAC was correlated with the willed attention-specific left AI and left MFG activations, and the later WAC was correlated only with left AI. These results offer a comprehensive and novel view of the electrophysiological and anatomical profile of willed attention and further illustrate the relationship between scalp-recorded ERPs and the BOLD response. © 2015 Wiley Periodicals, Inc.

  8. Working Memory and Speech Recognition in Noise Under Ecologically Relevant Listening Conditions: Effects of Visual Cues and Noise Type Among Adults With Hearing Loss.

    PubMed

    Miller, Christi W; Stewart, Erin K; Wu, Yu-Hsiang; Bishop, Christopher; Bentler, Ruth A; Tremblay, Kelly

    2017-08-16

    This study evaluated the relationship between working memory (WM) and speech recognition in noise with different noise types, as well as in the presence of visual cues. Seventy-six adults with bilateral, mild to moderately severe sensorineural hearing loss (mean age: 69 years) participated. Using a cross-sectional design, 2 measures of WM were taken: a reading span measure and the Word Auditory Recognition and Recall Measure (Smith, Pichora-Fuller, & Alexander, 2016). Speech recognition was measured with the Multi-Modal Lexical Sentence Test for Adults (Kirk et al., 2012) in steady-state noise and 4-talker babble, with and without visual cues. Testing was conducted under unaided conditions. A linear mixed model revealed visual cues and pure-tone average as the only significant predictors of Multi-Modal Lexical Sentence Test outcomes. Neither WM measure nor noise type showed a significant effect. The contribution of WM in explaining unaided speech recognition in noise was negligible and not influenced by noise type or visual cues. We anticipate that with audibility partially restored by hearing aids, the effects of WM will increase. For clinical practice to be affected, larger effect sizes are needed.

  9. Signal enhancement, not active suppression, follows the contingent capture of visual attention.

    PubMed

    Livingstone, Ashley C; Christie, Gregory J; Wright, Richard D; McDonald, John J

    2017-02-01

    Irrelevant visual cues capture attention when they possess a task-relevant feature. Electrophysiologically, this contingent capture of attention is evidenced by the N2pc component of the visual event-related potential (ERP) and an enlarged ERP positivity over the occipital hemisphere contralateral to the cued location. The N2pc reflects an early stage of attentional selection, but presently it is unclear what the contralateral ERP positivity reflects. One hypothesis is that it reflects the perceptual enhancement of the cued search-array item; another hypothesis is that it is time-locked to the preceding cue display and reflects active suppression of the cue itself. Here, we varied the time interval between a cue display and a subsequent target display to evaluate these competing hypotheses. The results demonstrated that the contralateral ERP positivity is tightly time-locked to the appearance of the search display rather than the cue display, thereby supporting the perceptual enhancement hypothesis and disconfirming the cue-suppression hypothesis. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. Missing depth cues in virtual reality limit performance and quality of three dimensional reaching movements

    PubMed Central

    Mayo, Johnathan; Baur, Kilian; Wittmann, Frieder; Riener, Robert; Wolf, Peter

    2018-01-01

    Background Goal-directed reaching for real-world objects by humans is enabled through visual depth cues. In virtual environments, the number and quality of available visual depth cues is limited, which may affect reaching performance and quality of reaching movements. Methods We assessed three-dimensional reaching movements in five experimental groups, each with ten healthy volunteers. Three groups used a two-dimensional computer screen and two groups used a head-mounted display. The first screen group received the typically recreated visual depth cues, such as aerial and linear perspective, occlusion, shadows, and texture gradients. The second screen group received an abstract minimal rendering lacking those cues. The third screen group received the cues of the first screen group plus absolute depth cues enabled by the retinal image size of a known object, which were realized with visual renderings of the handheld device and a ghost handheld at the target location. The two head-mounted display groups received the same virtually recreated visual depth cues as the second or the third screen group, respectively. Additionally, they could rely on stereopsis and motion parallax due to head movements. Results and conclusion All groups using the screen performed significantly worse than both groups using the head-mounted display in terms of completion time normalized by the straight-line distance to the target. Both groups using the head-mounted display achieved the optimal minimum in number of speed peaks and in hand path ratio, indicating that our subjects performed natural movements when using a head-mounted display. Virtually recreated visual depth cues had a minor impact on reaching performance. Only the screen group with rendered handhelds could outperform the other screen groups. Thus, if reaching performance in virtual environments is in the main scope of a study, we suggest applying a head-mounted display.
Otherwise, when two-dimensional screens are used, achievable performance is likely limited by the reduced depth perception and not just by subjects’ motor skills. PMID:29293512
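The two movement-quality measures named in this record, hand path ratio and number of speed peaks, can be computed from a sampled hand trajectory roughly as follows (a sketch under common kinematic definitions; the sample data are illustrative, not the study's):

```python
# Sketch of two common reach-quality metrics (definitions assumed from
# the motor-control literature, not taken from the paper's methods).

def hand_path_ratio(traj):
    """Traversed path length divided by the straight-line start-to-end
    distance; 1.0 indicates a perfectly straight reach."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    path = sum(dist(traj[i], traj[i + 1]) for i in range(len(traj) - 1))
    return path / dist(traj[0], traj[-1])

def speed_peaks(speeds):
    """Count local maxima in the speed profile; a single peak suggests
    one smooth, uncorrected movement."""
    return sum(1 for i in range(1, len(speeds) - 1)
               if speeds[i - 1] < speeds[i] > speeds[i + 1])

straight = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.2, 0.0, 0.0)]
detour = [(0.0, 0.0, 0.0), (0.1, 0.1, 0.0), (0.2, 0.0, 0.0)]
print(hand_path_ratio(straight))               # ~1.0 (straight reach)
print(hand_path_ratio(detour))                 # ~1.414 (curved path)
print(speed_peaks([0.0, 0.2, 0.5, 0.3, 0.1]))  # 1
```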

  11. Cross-modal prediction changes the timing of conscious access during motion-induced blindness.

    PubMed

    Chang, Acer Y C; Kanai, Ryota; Seth, Anil K

    2015-01-01

    Despite accumulating evidence that perceptual predictions influence perceptual content, the relations between these predictions and conscious contents remain unclear, especially for cross-modal predictions. We examined whether predictions of visual events by auditory cues can facilitate conscious access to the visual stimuli. We trained participants to learn associations between auditory cues and colour changes. We then asked whether congruency between auditory cues and target colours would speed access to consciousness. We did this by rendering a visual target subjectively invisible using motion-induced blindness and then gradually changing its colour while presenting congruent or incongruent auditory cues. Results showed that the visual target gained access to consciousness faster in congruent than in incongruent trials; control experiments excluded potentially confounding effects of attention and motor response. The expectation effect was gradually established over blocks suggesting a role for extensive training. Overall, our findings show that predictions learned through cross-modal training can facilitate conscious access to visual stimuli. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Role of visual and non-visual cues in constructing a rotation-invariant representation of heading in parietal cortex

    PubMed Central

    Sunkara, Adhira

    2015-01-01

    As we navigate through the world, eye and head movements add rotational velocity patterns to the retinal image. When such rotations accompany observer translation, the rotational velocity patterns must be discounted to accurately perceive heading. The conventional view holds that this computation requires efference copies of self-generated eye/head movements. Here we demonstrate that the brain implements an alternative solution in which retinal velocity patterns are themselves used to dissociate translations from rotations. These results reveal a novel role for visual cues in achieving a rotation-invariant representation of heading in the macaque ventral intraparietal area. Specifically, we show that the visual system utilizes both local motion parallax cues and global perspective distortions to estimate heading in the presence of rotations. These findings further suggest that the brain is capable of performing complex computations to infer eye movements and discount their sensory consequences based solely on visual cues. DOI: http://dx.doi.org/10.7554/eLife.04693.001 PMID:25693417

  13. Is Posner's "beam" the same as Treisman's "glue"?: On the relation between visual orienting and feature integration theory.

    PubMed

    Briand, K A; Klein, R M

    1987-05-01

    In the present study we investigated whether the visually allocated "beam" studied by Posner and others is the same visual attentional resource that performs the role of feature integration in Treisman's model. Subjects were cued to attend to a certain spatial location by a visual cue, and performance at expected and unexpected stimulus locations was compared. Subjects searched for a target letter (R) with distractor letters that either could give rise to illusory conjunctions (PQ) or could not (PB). Results from three separate experiments showed that orienting attention in response to central cues (endogenous orienting) showed similar effects for both conjunction and feature search. However, when attention was oriented with peripheral visual cues (exogenous orienting), conjunction search showed larger effects of attention than did feature search. It is suggested that the attentional systems that are oriented in response to central and peripheral cues may not be the same and that only the latter performs a role in feature integration. Possibilities for future research are discussed.

  14. Modulation of auditory stimulus processing by visual spatial or temporal cue: an event-related potentials study.

    PubMed

    Tang, Xiaoyu; Li, Chunlin; Li, Qi; Gao, Yulin; Yang, Weiping; Yang, Jingjing; Ishikawa, Soushirou; Wu, Jinglong

    2013-10-11

    Utilizing the high temporal resolution of event-related potentials (ERPs), we examined how visual spatial or temporal cues modulated auditory stimulus processing. The visual spatial cue (VSC) induces orienting of attention to spatial locations; the visual temporal cue (VTC) induces orienting of attention to temporal intervals. Participants were instructed to respond to auditory targets. Behavioral responses to auditory stimuli following VSC were faster and more accurate than those following VTC. VSC and VTC had the same effect on the auditory N1 (150-170 ms after stimulus onset). The mean amplitude of the auditory P1 (90-110 ms) in the VSC condition was larger than that in the VTC condition, and the mean amplitude of the late positivity (300-420 ms) in the VTC condition was larger than that in the VSC condition. These findings suggest that the modulations of auditory stimulus processing by visually induced spatial and temporal orienting of attention are different but partially overlapping. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  15. Toward a New Theory for Selecting Instructional Visuals.

    ERIC Educational Resources Information Center

    Croft, Richard S.; Burton, John K.

    This paper provides a rationale for the selection of illustrations and visual aids for the classroom. The theories that describe the processing of visuals are dual coding theory and cue summation theory. Concept attainment theory offers a basis for selecting which cues are relevant for any learning task which includes a component of identification…

  16. Visual Landmarks Facilitate Rodent Spatial Navigation in Virtual Reality Environments

    ERIC Educational Resources Information Center

    Youngstrom, Isaac A.; Strowbridge, Ben W.

    2012-01-01

    Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain…

  17. Design Criteria for Visual Cues Used in Disruptive Learning Interventions within Sustainability Education

    ERIC Educational Resources Information Center

    Tillmanns, Tanja; Holland, Charlotte; Filho, Alfredo Salomão

    2017-01-01

    This paper presents the design criteria for Visual Cues--visual stimuli that are used in combination with other pedagogical processes and tools in Disruptive Learning interventions in sustainability education--to disrupt learners' existing frames of mind and help re-orient learners' mind-sets towards sustainability. The theory of Disruptive…

  18. Comparison of the visual perception of a runway model in pilots and nonpilots during simulated night landing approaches.

    DOT National Transportation Integrated Search

    1978-03-01

    At night, reduced visual cues may promote illusions and a dangerous tendency for pilots to fly low during approaches to landing. Relative motion parallax (a difference in rate of apparent movement of objects in the visual field), a cue that can contr...

  19. Man-systems evaluation of moving base vehicle simulation motion cues. [human acceleration perception involving visual feedback

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, M.; Brye, R. G.

    1974-01-01

    A motion cue investigation program is reported that deals with human factors aspects of high-fidelity vehicle simulation. General data on non-visual motion thresholds and specific threshold values are established for use as washout parameters in vehicle simulation. A general-purpose simulator is used to test the contradictory cue hypothesis that acceleration sensitivity is reduced during a vehicle control task involving visual feedback. The simulator provides varying acceleration levels. The method of forced choice is based on the theory of signal detectability.

  20. Stimulus homogeneity enhances implicit learning: evidence from contextual cueing.

    PubMed

    Feldmann-Wüstefeld, Tobias; Schubö, Anna

    2014-04-01

    Visual search for a target object is faster if the target is embedded in a repeatedly presented invariant configuration of distractors ('contextual cueing'). It has also been shown that the homogeneity of a context affects the efficiency of visual search: targets receive prioritized processing when presented in a homogeneous context compared to a heterogeneous context, presumably due to grouping processes at early stages of visual processing. The present study investigated in three experiments whether context homogeneity also affects contextual cueing. In Experiment 1, context homogeneity varied on three levels of the task-relevant dimension (orientation) and contextual cueing was most pronounced for context configurations with high orientation homogeneity. When context homogeneity varied on three levels of the task-irrelevant dimension (color) and orientation homogeneity was fixed, no modulation of contextual cueing was observed: high orientation homogeneity led to large contextual cueing effects (Experiment 2) and low orientation homogeneity led to small contextual cueing effects (Experiment 3), irrespective of color homogeneity. Enhanced contextual cueing for homogeneous context configurations suggests that grouping processes affect not only visual search but also implicit learning. We conclude that memory representations of context configurations are more easily acquired when context configurations can be processed as larger, grouped perceptual units. However, this form of implicit perceptual learning is only improved by stimulus homogeneity when stimulus homogeneity facilitates grouping processes on a dimension that is currently relevant in the task. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Alternative mechanisms for regulating racial responses according to internal vs external cues.

    PubMed

    Amodio, David M; Kubota, Jennifer T; Harmon-Jones, Eddie; Devine, Patricia G

    2006-06-01

    Personal (internal) and normative (external) impetuses for regulating racially biased behaviour are well-documented, yet the extent to which internally and externally driven regulatory processes arise from the same mechanism is unknown. Whereas the regulation of race bias according to internal cues has been associated with conflict-monitoring processes and activation of the dorsal anterior cingulate cortex (dACC), we proposed that responses regulated according to external cues to respond without prejudice involves mechanisms of error-perception, a process associated with rostral anterior cingulate cortex (rACC) activity. We recruited low-prejudice participants who reported high or low sensitivity to non-prejudiced norms, and participants completed a stereotype inhibition task in private or public while electroencephalography was recorded. Analysis of event-related potentials revealed that the error-related negativity component, linked to dACC activity, predicted behavioural control of bias across conditions, whereas the error-perception component, linked to rACC activity, predicted control only in public among participants sensitive to external pressures to respond without prejudice.

  2. Non-conscious visual cues related to affect and action alter perception of effort and endurance performance

    PubMed Central

    Blanchfield, Anthony; Hardy, James; Marcora, Samuele

    2014-01-01

    The psychobiological model of endurance performance proposes that endurance performance is determined by a decision-making process based on perception of effort and potential motivation. Recent research has reported that effort-based decision-making during cognitive tasks can be altered by non-conscious visual cues relating to affect and action. The effects of these non-conscious visual cues on effort and performance during physical tasks are, however, unknown. We report two experiments investigating the effects of subliminal priming with visual cues related to affect and action on perception of effort and endurance performance. In Experiment 1, thirteen individuals were subliminally primed with happy or sad faces as they cycled to exhaustion in a counterbalanced and randomized crossover design. A paired t-test (happy vs. sad faces) revealed that individuals cycled significantly longer (178 s, p = 0.04) when subliminally primed with happy faces. A 2 × 5 (condition × iso-time) ANOVA also revealed a significant main effect of condition on rating of perceived exertion (RPE) during the time to exhaustion (TTE) test, with lower RPE when subjects were subliminally primed with happy faces (p = 0.04). In Experiment 2, a single-subject randomization-test design found that subliminal priming with action words facilitated a significantly longer TTE (399 s, p = 0.04) in comparison to inaction words. As in Experiment 1, this longer TTE was accompanied by a significantly lower RPE (p = 0.03). These experiments are the first to show that subliminal visual cues relating to affect and action can alter perception of effort and endurance performance. Non-conscious visual cues may therefore influence the effort-based decision-making process that is proposed to determine endurance performance. Accordingly, the findings raise notable implications for individuals who may encounter such visual cues during endurance competitions, training, or health-related exercise. PMID:25566014
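The paired comparison used in Experiment 1 of this record can be sketched in a few lines (the statistic only; the per-rider data below are hypothetical, since individual times to exhaustion are not reported in the abstract):

```python
# Paired t statistic for matched samples (hypothetical data; illustrates
# the happy- vs sad-face comparison, not the study's actual numbers).
import math

def paired_t(x, y):
    """Return (t statistic, degrees of freedom) for paired samples."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n), n - 1

# Seconds to exhaustion for five hypothetical riders under each prime.
happy = [1250.0, 1340.0, 1180.0, 1420.0, 1300.0]
sad = [1100.0, 1210.0, 1050.0, 1280.0, 1190.0]
t, df = paired_t(happy, sad)
print(round(t, 2), df)  # 19.9 4
```

The t value would then be compared against the t distribution with n − 1 degrees of freedom to obtain the p value.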

  3. 'You see?' Teaching and learning how to interpret visual cues during surgery.

    PubMed

    Cope, Alexandra C; Bezemer, Jeff; Kneebone, Roger; Lingard, Lorelei

    2015-11-01

    The ability to interpret visual cues is important in many medical specialties, including surgery, in which poor outcomes are largely attributable to errors of perception rather than poor motor skills. However, we know little about how trainee surgeons learn to make judgements in the visual domain. We explored how trainees learn visual cue interpretation in the operating room. A multiple case study design was used. Participants were postgraduate surgical trainees and their trainers. Data included observer field notes, and integrated video- and audio-recordings from 12 cases representing more than 11 hours of observation. A constant comparative methodology was used to identify dominant themes. Visual cue interpretation was a recurrent feature of trainer-trainee interactions and was achieved largely through the pedagogic mechanism of co-construction. Co-construction was a dialogic sequence between trainer and trainee in which they explored what they were looking at together to identify and name structures or pathology. Co-construction took two forms: 'guided co-construction', in which the trainer steered the trainee to see what the trainer was seeing, and 'authentic co-construction', in which neither trainer nor trainee appeared certain of what they were seeing and pieced together the information collaboratively. Whether the co-construction activity was guided or authentic appeared to be influenced by case difficulty and trainee seniority. Co-construction was shown to occur verbally, through discussion, and also through non-verbal exchanges in which gestures made with laparoscopic instruments contributed to the co-construction discourse. In the training setting, learning visual cue interpretation occurs in part through co-construction. Co-construction is a pedagogic phenomenon that is well recognised in the context of learning to interpret verbal information. 
In articulating the features of co-construction in the visual domain, this work enables the development of explicit pedagogic strategies for maximising trainees' learning of visual cue interpretation. This is relevant to multiple medical specialties in which judgements must be based on visual information. © 2015 John Wiley & Sons Ltd.

  4. The role of lower peripheral visual cues in the visuomotor coordination of locomotion and prehension.

    PubMed

    Graci, Valentina

    2011-10-01

It has been previously suggested that coupled upper and lower limb movements require visuomotor coordination to be achieved. However, previous studies have not investigated the role that visual cues may play in the coordination of locomotion and prehension. The aim of this study was to investigate whether lower peripheral visual cues provide online control of the coordination of locomotion and prehension, as they have been shown to do during adaptive gait and level walking. Twelve subjects reached for a semi-empty or a full glass with their dominant or non-dominant hand at gait termination. Two binocular visual conditions were investigated: normal vision and lower visual occlusion. Outcome measures were determined using 3D motion capture techniques. Results showed that although the subjects were able to successfully complete the task without spilling the water from the glass under lower visual occlusion, they increased the margin of safety between final foot placement and the glass. These findings suggest that lower visual cues are mainly used online to fine-tune the trajectory of the upper and lower limbs moving toward the target. Copyright © 2011 Elsevier B.V. All rights reserved.

  5. Slushy weightings for the optimal pilot model [considering visual tracking task]

    NASA Technical Reports Server (NTRS)

    Dillow, J. D.; Picha, D. G.; Anderson, R. O.

    1975-01-01

    A pilot model is described which accounts for the effect of motion cues in a well-defined visual tracking task. The effects of visual and motion cues are accounted for in the model in two ways. First, the observation matrix in the pilot model is structured to account for the visual and motion inputs presented to the pilot. Second, the weightings in the quadratic cost function associated with the pilot model are modified to account for the pilot's perception of the variables he considers important in the task. Analytic results obtained using the pilot model are compared to experimental results, and in general good agreement is demonstrated. The analytic model yields small improvements in tracking performance with the addition of motion cues for easily controlled task dynamics and large improvements in tracking performance with the addition of motion cues for difficult task dynamics.

  6. Comparing Auditory-Only and Audiovisual Word Learning for Children with Hearing Loss.

    PubMed

    McDaniel, Jena; Camarata, Stephen; Yoder, Paul

    2018-05-15

    Although reducing visual input to emphasize auditory cues is a common practice in pediatric auditory (re)habilitation, the extant literature offers minimal empirical evidence for whether unisensory auditory-only (AO) or multisensory audiovisual (AV) input is more beneficial to children with hearing loss for developing spoken language skills. Using an adapted alternating treatments single case research design, we evaluated the effectiveness and efficiency of a receptive word learning intervention with and without access to visual speechreading cues. Four preschool children with prelingual hearing loss participated. Based on probes without visual cues, three participants demonstrated strong evidence for learning in the AO and AV conditions relative to a control (no-teaching) condition. No participants demonstrated a differential rate of learning between AO and AV conditions. Neither an inhibitory effect predicted by a unisensory theory nor a beneficial effect predicted by a multisensory theory for providing visual cues was identified. Clinical implications are discussed.

  7. Learning and Discrimination of Audiovisual Events in Human Infants: The Hierarchical Relation between Intersensory Temporal Synchrony and Rhythmic Pattern Cues.

    ERIC Educational Resources Information Center

    Lewkowicz, David J.

    2003-01-01

    Three experiments examined 4- to 10-month-olds' perception of audio-visual (A-V) temporal synchrony cues in the presence or absence of rhythmic pattern cues. Results established that infants of all ages could discriminate between two different audio-visual rhythmic events. Only 10-month-olds detected a desynchronization of the auditory and visual…

  8. The redbay ambrosia beetle (Coleoptera: Curculionidae: Scolytinae) uses stem silhouette diameter as a visual host-finding cue

    Treesearch

    Albert (Bud) Mayfield; Cavell Brownie

    2013-01-01

    The redbay ambrosia beetle (Xyleborus glabratus Eichhoff) is an invasive pest and vector of the pathogen that causes laurel wilt disease in lauraceous tree species in the eastern United States. This insect uses olfactory cues during host finding, but use of visual cues by X. glabratus has not been previously investigated and may help explain diameter...

  9. Slow changing postural cues cancel visual field dependence on self-tilt detection.

    PubMed

    Scotto Di Cesare, C; Macaluso, T; Mestre, D R; Bringoux, L

    2015-01-01

    Interindividual differences influence the multisensory integration process involved in spatial perception. Here, we assessed the effect of visual field dependence on self-tilt detection relative to upright, as a function of static vs. slow changing visual or postural cues. To that aim, we manipulated slow rotations (i.e., 0.05°·s⁻¹) of the body and/or the visual scene in pitch. Participants had to indicate whether they felt tilted forward at successive angles. Results show that thresholds for self-tilt detection substantially differed between visual-field-dependent and visual-field-independent subjects when only the visual scene was rotated. This difference was no longer present when the body was actually rotated, regardless of the visual scene condition (i.e., absent, static or rotated relative to the observer). These results suggest that the cancellation of visual field dependence by dynamic postural cues may rely on a multisensory reweighting process, in which slow changing vestibular/somatosensory inputs may prevail over visual inputs. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. All I saw was the cake. Hunger effects on attentional capture by visual food cues.

    PubMed

    Piech, Richard M; Pastorino, Michael T; Zald, David H

    2010-06-01

    While effects of hunger on motivation and food reward value are well-established, far less is known about the effects of hunger on cognitive processes. Here, we deployed the emotional blink of attention paradigm to investigate the impact of visual food cues on attentional capture under conditions of hunger and satiety. Participants were asked to detect targets which appeared in a rapid visual stream after different types of task irrelevant distractors. We observed that food stimuli acquired increased power to capture attention and prevent target detection when participants were hungry. This occurred despite monetary incentives to perform well. Our findings suggest an attentional mechanism through which hunger heightens perception of food cues. As an objective behavioral marker of the attentional sensitivity to food cues, the emotional attentional blink paradigm may provide a useful technique for studying individual differences, and state manipulations in the sensitivity to food cues. Published by Elsevier Ltd.

  11. Cues used by the black fly, Simulium annulus, for attraction to the common loon (Gavia immer).

    PubMed

    Weinandt, Meggin L; Meyer, Michael; Strand, Mac; Lindsay, Alec R

    2012-12-01

    The parasitic relationship between a black fly, Simulium annulus, and the common loon (Gavia immer) has been considered one of the most exclusive relationships between any host species and a black fly species. To test the host specificity of this blood-feeding insect, we made a series of bird decoy presentations to black flies on loon-inhabited lakes in northern Wisconsin, U.S.A. To examine the importance of chemical and visual cues for black fly detection of and attraction to hosts, we made decoy presentations with and without chemical cues. Flies attracted to the decoys were collected, identified to species, and quantified. Results showed that S. annulus had a strong preference for common loon visual and chemical cues, although visual cues from Canada geese (Branta canadensis) and mallards (Anas platyrhynchos) did attract some flies in significantly smaller numbers. © 2012 The Society for Vector Ecology.

  12. Working memory can enhance unconscious visual perception.

    PubMed

    Pan, Yi; Cheng, Qiu-Ping; Luo, Qian-Ying

    2012-06-01

    We demonstrate that unconscious processing of a stimulus property can be enhanced when there is a match between the contents of working memory and the stimulus presented in the visual field. Participants first held a cue (a colored circle) in working memory and then searched for a brief masked target shape presented simultaneously with a distractor shape. When participants reported having no awareness of the target shape at all, search performance was more accurate in the valid condition, where the target matched the cue in color, than in the neutral condition, where the target mismatched the cue. This effect cannot be attributed to bottom-up perceptual priming from the presentation of a memory cue, because unconscious perception was not enhanced when the cue was merely perceptually identified but not actively held in working memory. These findings suggest that reentrant feedback from the contents of working memory modulates unconscious visual perception.

  13. The Effect of Eye Contact Is Contingent on Visual Awareness

    PubMed Central

    Xu, Shan; Zhang, Shen; Geng, Haiyan

    2018-01-01

    The present study explored how eye contact at different levels of visual awareness influences gaze-induced joint attention. We adopted a spatial-cueing paradigm, in which an averted gaze was used as an uninformative central cue for a joint-attention task. Prior to the onset of the averted-gaze cue, either supraliminal (Experiment 1) or subliminal (Experiment 2) eye contact was presented. The results revealed a larger subsequent gaze-cueing effect following supraliminal eye contact compared to a no-contact condition. In contrast, the gaze-cueing effect was smaller in the subliminal eye-contact condition than in the no-contact condition. These findings suggest that the facilitation effect of eye contact on coordinating social attention depends on visual awareness. Furthermore, subliminal eye contact might have an impact on subsequent social attention processes that differ from supraliminal eye contact. This study highlights the need to further investigate the role of eye contact in implicit social cognition. PMID:29467703

  14. Evidence for impairments in using static line drawings of eye gaze cues to orient visual-spatial attention in children with high functioning autism.

    PubMed

    Goldberg, Melissa C; Mostow, Allison J; Vecera, Shaun P; Larson, Jennifer C Gidley; Mostofsky, Stewart H; Mahone, E Mark; Denckla, Martha B

    2008-09-01

    We examined the ability to use static line drawings of eye gaze cues to orient visual-spatial attention in children with high functioning autism (HFA) compared to typically developing children (TD). The task was organized such that on valid trials, gaze cues were directed toward the same spatial location as the appearance of an upcoming target, while on invalid trials gaze cues were directed to an opposite location. Unlike TD children, children with HFA showed no advantage in reaction time (RT) on valid trials compared to invalid trials (i.e., no significant validity effect). The two stimulus onset asynchronies (200 ms, 700 ms) did not differentially affect these findings. The results suggest that children with HFA show impairments in utilizing static line drawings of gaze cues to orient visual-spatial attention.

  15. Food Avoidance Learning in Squirrel Monkeys and Common Marmosets

    PubMed Central

    Laska, Matthias; Metzker, Karin

    1998-01-01

    Using a conditioned food avoidance learning paradigm, six squirrel monkeys (Saimiri sciureus) and six common marmosets (Callithrix jacchus) were tested for their ability to (1) reliably form associations between visual or olfactory cues of a potential food and its palatability and (2) remember such associations over prolonged periods of time. We found (1) that at the group level both species showed one-trial learning with the visual cues color and shape, whereas only the marmosets were able to do so with the olfactory cue, (2) that all individuals from both species learned to reliably avoid the unpalatable food items within 10 trials, (3) a tendency in both species for quicker acquisition of the association with the visual cues compared with the olfactory cue, (4) a tendency for quicker acquisition and higher reliability of the aversion by the marmosets compared with the squirrel monkeys, and (5) that all individuals from both species were able to reliably remember the significance of the visual cues, color and shape, even after 4 months, whereas only the marmosets showed retention of the significance of the olfactory cues for up to 4 weeks. Furthermore, the results suggest that in both species tested, illness is not a necessary prerequisite for food avoidance learning but that the presumably innate rejection responses toward highly concentrated but nontoxic bitter and sour tastants are sufficient to induce robust learning and retention. PMID:10454364

  16. Searching for emotion or race: task-irrelevant facial cues have asymmetrical effects.

    PubMed

    Lipp, Ottmar V; Craig, Belinda M; Frost, Mareka J; Terry, Deborah J; Smith, Joanne R

    2014-01-01

    Facial cues of threat such as anger and other race membership are detected preferentially in visual search tasks. However, it remains unclear whether these facial cues interact in visual search. If both cues equally facilitate search, a symmetrical interaction would be predicted; anger cues should facilitate detection of other race faces and cues of other race membership should facilitate detection of anger. Past research investigating this race by emotional expression interaction in categorisation tasks revealed an asymmetrical interaction. This suggests that cues of other race membership may facilitate the detection of angry faces but not vice versa. Utilising the same stimuli and procedures across two search tasks, participants were asked to search for targets defined by either race or emotional expression. Contrary to the results revealed in the categorisation paradigm, cues of anger facilitated detection of other race faces whereas differences in race did not differentially influence detection of emotion targets.

  17. Sensitivity of Locus Ceruleus Neurons to Reward Value for Goal-Directed Actions

    PubMed Central

    Richmond, Barry J.

    2015-01-01

    The noradrenergic nucleus locus ceruleus (LC) is associated classically with arousal and attention. Recent data suggest that it might also play a role in motivation. To study how LC neuronal responses are related to motivational intensity, we recorded 121 single neurons from two monkeys while reward size (one, two, or four drops) and the manner of obtaining reward (passive vs active) were both manipulated. The monkeys received reward under three conditions: (1) releasing a bar when a visual target changed color; (2) passively holding a bar; or (3) touching and releasing a bar. In the first two conditions, a visual cue indicated the size of the upcoming reward, and, in the third, the reward was constant through each block of 25 trials. Performance levels and lipping intensity (an appetitive behavior) both showed that the monkeys' motivation in the task was related to the predicted reward size. In conditions 1 and 2, LC neurons were activated phasically in relation to cue onset, and this activation strengthened with increasing expected reward size. In conditions 1 and 3, LC neurons were activated before the bar-release action, and the activation weakened with increasing expected reward size but only in task 1. These effects evolved as monkeys progressed through behavioral sessions, because increasing fatigue and satiety presumably progressively decreased the value of the upcoming reward. These data indicate that LC neurons integrate motivationally relevant information: both external cues and internal drives. The LC might provide the impetus to act when the predicted outcome value is low. PMID:25740528

  18. Making the Invisible Visible: Verbal but Not Visual Cues Enhance Visual Detection

    PubMed Central

    Lupyan, Gary; Spivey, Michael J.

    2010-01-01

    Background Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual, cues. Methodology/Principal Findings Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d′). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Conclusions/Significance Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception. PMID:20628646
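
    The sensitivity measure d′ reported in this abstract comes from standard signal detection theory: d′ = z(H) − z(FA), the difference between the z-transformed hit and false-alarm rates. A minimal sketch of that computation, using hypothetical rates (not data from the study):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Detection sensitivity in signal detection theory: d' = z(H) - z(FA),
    where z is the inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical hit/false-alarm rates for two cue conditions (illustrative
# numbers only): a verbal cue yielding higher sensitivity than a visual one.
verbal_cue = d_prime(0.80, 0.20)
visual_cue = d_prime(0.69, 0.31)
print(f"verbal-cue d': {verbal_cue:.2f}, visual-cue d': {visual_cue:.2f}")
```

    A higher d′ means better discrimination of signal from noise independently of response bias, which is why the authors report d′ rather than raw accuracy.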

  19. Integration of visual and motion cues for flight simulator requirements and ride quality investigation

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1976-01-01

    Investigations for the improvement of flight simulators are reported. Topics include: visual cues in landing, comparison of linear and nonlinear washout filters using a model of the vestibular system, and visual vestibular interactions (yaw axis). An abstract is given for a thesis on the applications of human dynamic orientation models to motion simulation.

  20. Compensatory shifts in visual perception are associated with hallucinations in Lewy body disorders.

    PubMed

    Bowman, Alan Robert; Bruce, Vicki; Colbourn, Christopher J; Collerton, Daniel

    2017-01-01

    Visual hallucinations are a common, distressing, and disabling symptom of Lewy body and other diseases. Current models suggest that interactions in internal cognitive processes generate hallucinations. However, these neglect external factors. Pareidolic illusions are an experimental analogue of hallucinations. They are easily induced in Lewy body disease, have similar content to spontaneous hallucinations, and respond to cholinesterase inhibitors in the same way. We used a primed pareidolia task with hallucinating participants with Lewy body disorders (n = 16), non-hallucinating participants with Lewy body disorders (n = 19), and healthy controls (n = 20). Participants were presented with visual "noise" that sometimes contained degraded visual objects and were required to indicate what they saw. Some perceptions were cued in advance by a visual prime. Results showed that hallucinating participants were impaired in discerning visual signals from noise, with a relaxed criterion threshold for perception compared to both other groups. After the presentation of a visual prime, the criterion was comparable to the other groups. The results suggest that participants with hallucinations compensate for perceptual deficits by relaxing perceptual criteria, at a cost of seeing things that are not there, and that visual cues regularize perception. This latter finding may provide a mechanism for understanding the interaction between environments and hallucinations.
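
    The "relaxed criterion threshold" described here is the signal-detection criterion c = −(z(H) + z(FA)) / 2. A brief sketch with hypothetical hit/false-alarm rates (illustrative only, not the study's data) showing how a relaxed criterion shows up as a negative c:

```python
from statistics import NormalDist

def criterion(hit_rate, false_alarm_rate):
    """Signal-detection response criterion c = -(z(H) + z(FA)) / 2, where z is
    the inverse standard normal CDF. Negative c is a liberal (relaxed)
    criterion: more detections, but also more false alarms."""
    z = NormalDist().inv_cdf
    return -(z(hit_rate) + z(false_alarm_rate)) / 2

# Hypothetical observers: a conservative control vs. a hallucinating
# participant who reports more targets and more false alarms.
c_control = criterion(0.70, 0.20)        # c > 0: conservative
c_hallucinating = criterion(0.80, 0.45)  # c < 0: relaxed, "sees" absent objects
print(f"control c = {c_control:.2f}, hallucinating c = {c_hallucinating:.2f}")
```

    On this account, the cost of the relaxed criterion is exactly the elevated false-alarm rate, i.e., seeing things that are not there.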

  1. Smell or vision? The use of different sensory modalities in predator discrimination.

    PubMed

    Fischer, Stefan; Oberhummer, Evelyne; Cunha-Saraiva, Filipa; Gerber, Nina; Taborsky, Barbara

    2017-01-01

    Theory predicts that animals should adjust their escape responses to the perceived predation risk. The information animals obtain about potential predation risk may differ qualitatively depending on the sensory modality by which a cue is perceived. For instance, olfactory cues may reveal better information about the presence or absence of threats, whereas visual information can reliably transmit the position and potential attack distance of a predator. While this suggests a differential use of information perceived through the two sensory channels, the relative importance of visual vs. olfactory cues when distinguishing between different predation threats is still poorly understood. Therefore, we exposed individuals of the cooperatively breeding cichlid Neolamprologus pulcher to a standardized threat stimulus combined with either predator or non-predator cues presented either visually or chemically. We predicted that flight responses towards a threat stimulus are more pronounced if cues of dangerous rather than harmless heterospecifics are presented, and that N. pulcher, being an aquatic species, relies more on olfaction when discriminating between dangerous and harmless heterospecifics. N. pulcher responded faster to the threat stimulus, reached a refuge faster, and was more likely to enter a refuge when predator cues were perceived. Unexpectedly, the sensory modality used to perceive the cues did not affect the escape response or the duration of the recovery phase. This suggests that N. pulcher is able to discriminate heterospecific cues with similar acuity when using vision or olfaction. We discuss that this ability may be advantageous in aquatic environments where the visibility conditions strongly vary over time. The ability to rapidly discriminate between dangerous predators and harmless heterospecifics is crucial for the survival of prey animals. 
In seasonally fluctuating environments, sensory conditions may change over the year, which may make the use of multiple sensory modalities for heterospecific discrimination highly beneficial. Here we compared the efficacy of visual and olfactory senses in the discrimination ability of the cooperatively breeding cichlid Neolamprologus pulcher. We presented individual fish with visual or olfactory cues of predators or harmless heterospecifics and recorded their flight response. When exposed to predator cues, individuals responded faster, reached a refuge faster, and were more likely to enter the refuge. Unexpectedly, the olfactory and visual senses seemed to be equally efficient in this discrimination task, suggesting that the seasonal variation of water conditions experienced by N. pulcher may necessitate the use of multiple sensory channels for the same task.

  2. Reliability and relative weighting of visual and nonvisual information for perceiving direction of self-motion during walking

    PubMed Central

    Saunders, Jeffrey A.

    2014-01-01

    Direction of self-motion during walking is indicated by multiple cues, including optic flow, nonvisual sensory cues, and motor prediction. I measured the reliability of perceived heading from visual and nonvisual cues during walking, and whether cues are weighted in an optimal manner. I used a heading alignment task to measure perceived heading during walking. Observers walked toward a target in a virtual environment with and without global optic flow. The target was simulated to be infinitely far away, so that it did not provide direct feedback about direction of self-motion. Variability in heading direction was low even without optic flow, with average RMS error of 2.4°. Global optic flow reduced variability to 1.9°–2.1°, depending on the structure of the environment. The small amount of variance reduction was consistent with optimal use of visual information. The relative contribution of visual and nonvisual information was also measured using cue conflict conditions. Optic flow specified a conflicting heading direction (±5°), and bias in walking direction was used to infer relative weighting. Visual feedback influenced heading direction by 16%–34% depending on scene structure, with a greater effect in the presence of dense motion parallax. The weighting of visual feedback was close to the predictions of an optimal integration model given the observed variability measures. PMID:24648194
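
    The "optimal integration model" invoked in this abstract is the standard inverse-variance (maximum-likelihood) cue-combination rule: each cue is weighted in proportion to its reliability 1/σ². A minimal sketch, plugging in the variability values quoted above as illustrative inputs (this is a reconstruction of the textbook model, not the author's own analysis code):

```python
import math

def optimal_weights(sigma_a, sigma_b):
    """Inverse-variance (maximum-likelihood) weights for two independent
    Gaussian cues to the same quantity; the weights sum to 1."""
    ra, rb = 1.0 / sigma_a**2, 1.0 / sigma_b**2
    return ra / (ra + rb), rb / (ra + rb)

def combined_sigma(sigma_a, sigma_b):
    """Predicted standard deviation of the optimally combined estimate."""
    return math.sqrt(1.0 / (1.0 / sigma_a**2 + 1.0 / sigma_b**2))

# Values quoted in the abstract: nonvisual-only heading variability ~2.4 deg,
# falling to ~2.0 deg with optic flow. Solve for the visual-cue sigma that
# the optimal model implies, then derive the predicted visual weight.
sigma_nonvisual = 2.4
sigma_all_cues = 2.0
sigma_visual = math.sqrt(1.0 / (1.0 / sigma_all_cues**2 - 1.0 / sigma_nonvisual**2))
w_visual, w_nonvisual = optimal_weights(sigma_visual, sigma_nonvisual)
print(f"implied visual sigma: {sigma_visual:.2f} deg")
print(f"predicted visual weight: {w_visual:.2f}")
```

    Under these assumed inputs the model predicts a visual weight of roughly 0.31, which falls inside the 16%–34% visual influence range measured in the cue-conflict conditions, consistent with the abstract's conclusion of near-optimal weighting.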

  3. Priming and the guidance by visual and categorical templates in visual search.

    PubMed

    Wilschut, Anna; Theeuwes, Jan; Olivers, Christian N L

    2014-01-01

    Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity toward the target feature, i.e., the extent to which observers searched selectively among items of the cued vs. uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in either overall performance or selectivity. Altogether the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, if priming is controlled for, categorical- and visual-based templates similarly enhance search guidance.

  4. A comparison of visuomotor cue integration strategies for object placement and prehension.

    PubMed

    Greenwald, Hal S; Knill, David C

    2009-01-01

    Visual cue integration strategies are known to depend on cue reliability and how rapidly the visual system processes incoming information. We investigated whether these strategies also depend on differences in the information demands for different natural tasks. Using two common goal-oriented tasks, prehension and object placement, we determined whether monocular and binocular information influence estimates of three-dimensional (3D) orientation differently depending on task demands. Both tasks rely on accurate 3D orientation estimates, but 3D position is potentially more important for grasping. Subjects placed an object on or picked up a disc in a virtual environment. On some trials, the monocular cues (aspect ratio and texture compression) and binocular cues (e.g., binocular disparity) suggested slightly different 3D orientations for the disc; these conflicts either were present upon initial stimulus presentation or were introduced after movement initiation, which allowed us to quantify how information from the cues accumulated over time. We analyzed the time-varying orientations of subjects' fingers in the grasping task and those of the object in the object placement task to quantify how different visual cues influenced motor control. In the first experiment, different subjects performed each task, and those performing the grasping task relied on binocular information more when orienting their hands than those performing the object placement task. When subjects in the second experiment performed both tasks in interleaved sessions, binocular cues were still more influential during grasping than object placement, and the different cue integration strategies observed for each task in isolation were maintained. In both experiments, the temporal analyses showed that subjects processed binocular information faster than monocular information, but task demands did not affect the time course of cue processing. 
How one uses visual cues for motor control depends on the task being performed, although how quickly the information is processed appears to be task invariant.

  5. Cue generation: How learners flexibly support future retrieval.

    PubMed

    Tullis, Jonathan G; Benjamin, Aaron S

    2015-08-01

    The successful use of memory requires us to be sensitive to the cues that will be present during retrieval. In many situations, we have some control over the external cues that we will encounter. For instance, learners create shopping lists at home to help remember what items to later buy at the grocery store, and they generate computer file names to help remember the contents of those files. Generating cues in the service of later cognitive goals is a complex task that lies at the intersection of metacognition, communication, and memory. In this series of experiments, we investigated how and how well learners generate external mnemonic cues. Across 5 experiments, learners generated a cue for each target word in a to-be-remembered list and received these cues during a later cued recall test. Learners flexibly generated cues in response to different instructional demands and study list compositions. When generating mnemonic cues, as compared to descriptions of target items, learners produced cues that were more distinct than mere descriptions and consequently elicited greater cued recall performance than those descriptions. When learners were aware of competing targets in the study list, they generated mnemonic cues with smaller cue-to-target associative strength but that were even more distinct. These adaptations led to fewer confusions among competing targets and enhanced cued recall performance. These results provide another example of the metacognitively sophisticated tactics that learners use to effectively support future retrieval.

  6. Distraction Effects of Smoking Cues in Antismoking Messages: Examining Resource Allocation to Message Processing as a Function of Smoking Cues and Argument Strength

    PubMed Central

    Lee, Sungkyoung; Cappella, Joseph N.

    2014-01-01

    Findings from previous studies on smoking cues and argument strength in antismoking messages have shown that the presence of smoking cues undermines the persuasiveness of antismoking public service announcements (PSAs) with weak arguments. This study conceptualized smoking cues (i.e., scenes showing smoking-related objects and behaviors) as stimuli motivationally relevant to the former smoker population and examined how smoking cues influence former smokers’ processing of antismoking PSAs. Specifically, by defining smoking cues and the strength of antismoking arguments in terms of resource allocation, this study examined former smokers’ recognition accuracy, memory strength, and memory judgment of visual (i.e., scenes excluding smoking cues) and audio information from antismoking PSAs. In line with previous findings, the results of the study showed that the presence of smoking cues undermined former smokers’ encoding of antismoking arguments, which includes the visual and audio information that compose the main content of antismoking messages. PMID:25477766

  7. Audio-visual speech perception: a developmental ERP investigation

    PubMed Central

    Knowland, Victoria CP; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael SC

    2014-01-01

    Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development. PMID:24176002

  8. Do cattle (Bos taurus) retain an association of a visual cue with a food reward for a year?

    PubMed

    Hirata, Masahiko; Takeno, Nozomi

    2014-06-01

    Use of visual cues to locate specific food resources from a distance is a critical ability of animals foraging in a spatially heterogeneous environment. However, relatively little is known about how long animals can retain the learned cue-reward association without reinforcement. We compared feeding behavior of experienced and naive Japanese Black cows (Bos taurus) in discovering food locations in a pasture. Experienced animals had been trained to respond to a visual cue (plastic washtub) for a preferred food (grain-based concentrate) 1 year prior to the experiment, while naive animals had no exposure to the cue. Cows were tested individually in a test arena including tubs filled with the concentrate on three successive days (Days 1-3). Experienced cows located the first tub more quickly and visited more tubs than naive cows on Day 1 (usually P < 0.05), but these differences disappeared on Days 2 and 3. The performance of experienced cows tended to increase from Day 1 to Day 2 and level off thereafter. Our results suggest that Japanese Black cows can associate a visual cue with a food reward within a day and retain the association for 1 year despite a slight decay. © 2014 Japanese Society of Animal Science.

  9. Exogenous temporal cues enhance recognition memory in an object-based manner.

    PubMed

    Ohyama, Junji; Watanabe, Katsumi

    2010-11-01

    Exogenous attention enhances the perception of attended items in both a space-based and an object-based manner. Exogenous attention also improves recognition memory for attended items in the space-based mode. However, it has not been examined whether object-based exogenous attention enhances recognition memory. To address this issue, we examined whether a sudden visual change in a task-irrelevant stimulus (an exogenous cue) would affect participants' recognition memory for items that were serially presented around a cued time. The results showed that recognition accuracy for an item was strongly enhanced when the visual cue occurred at the same location and time as the item (Experiments 1 and 2). The memory enhancement effect occurred when the exogenous visual cue and an item belonged to the same object (Experiments 3 and 4) and even when the cue was counterpredictive of the timing of an item to be asked about (Experiment 5). The present study suggests that an exogenous temporal cue automatically enhances the recognition accuracy for an item that is presented in close temporal proximity to the cue and that recognition memory enhancement occurs in an object-based manner.

  10. Object based implicit contextual learning: a study of eye movements.

    PubMed

    van Asselen, Marieke; Sampaio, Joana; Pina, Ana; Castelo-Branco, Miguel

    2011-02-01

    Implicit contextual cueing refers to a top-down mechanism in which visual search is facilitated by learned contextual features. In the current study we aimed to investigate the mechanism underlying implicit contextual learning using object information as a contextual cue. To this end, we measured eye movements during an object-based contextual cueing task. We demonstrated that visual search is facilitated by repeated object information and that this reduction in response times is associated with shorter fixation durations. This indicates that by memorizing associations between objects in our environment we can recognize objects faster, thereby facilitating visual search.

  11. Working Memory and Speech Recognition in Noise Under Ecologically Relevant Listening Conditions: Effects of Visual Cues and Noise Type Among Adults With Hearing Loss

    PubMed Central

    Stewart, Erin K.; Wu, Yu-Hsiang; Bishop, Christopher; Bentler, Ruth A.; Tremblay, Kelly

    2017-01-01

    Purpose This study evaluated the relationship between working memory (WM) and speech recognition in noise with different noise types as well as in the presence of visual cues. Method Seventy-six adults with bilateral, mild to moderately severe sensorineural hearing loss (mean age: 69 years) participated. Using a cross-sectional design, 2 measures of WM were taken: a reading span measure and the Word Auditory Recognition and Recall Measure (Smith, Pichora-Fuller, & Alexander, 2016). Speech recognition was measured with the Multi-Modal Lexical Sentence Test for Adults (Kirk et al., 2012) in steady-state noise and 4-talker babble, with and without visual cues. Testing was under unaided conditions. Results A linear mixed model revealed visual cues and pure-tone average as the only significant predictors of Multi-Modal Lexical Sentence Test outcomes. Neither WM measure nor noise type showed a significant effect. Conclusion The contribution of WM in explaining unaided speech recognition in noise was negligible and not influenced by noise type or visual cues. We anticipate that with audibility partially restored by hearing aids, the effects of WM will increase. For clinical practice to be affected, more significant effect sizes are needed. PMID:28744550

  12. Tailored information for cancer patients on the Internet: effects of visual cues and language complexity on information recall and satisfaction.

    PubMed

    van Weert, Julia C M; van Noort, Guda; Bol, Nadine; van Dijk, Liset; Tates, Kiek; Jansen, Jesse

    2011-09-01

    This study was designed to investigate the effects of visual cues and language complexity on satisfaction and information recall using a personalised website for lung cancer patients. In addition, age effects were investigated. An experiment using a 2 (complex vs. non-complex language) × 3 (text only vs. photograph vs. drawing) factorial design was conducted. In total, 200 respondents without cancer were exposed to one of the six conditions. Respondents were more satisfied with the comprehensibility of both websites when they were presented with a visual cue. A significant interaction effect was found between language complexity and photograph use such that satisfaction with comprehensibility improved when a photograph was added to the complex language condition. Next, an interaction effect was found between age and satisfaction, which indicates that adding a visual cue is more important for older adults than younger adults. Finally, respondents who were exposed to a website with less complex language showed higher recall scores. The use of visual cues enhances satisfaction with the information presented on the website, and the use of non-complex language improves recall. The results of the current study can be used to improve computer-based information systems for patients. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  13. Working memory dependence of spatial contextual cueing for visual search.

    PubMed

    Pollmann, Stefan

    2018-05-10

    When spatial stimulus configurations repeat in visual search, a search facilitation, resulting in shorter search times, can be observed that is due to incidental learning. This contextual cueing effect appears to be rather implicit, uncorrelated with observers' explicit memory of display configurations. Nevertheless, as I review here, this search facilitation due to contextual cueing depends on visuospatial working memory resources, and it disappears when visuospatial working memory is loaded by a concurrent delayed match-to-sample task. However, the search facilitation immediately recovers for displays learnt under visuospatial working memory load when this load is removed in a subsequent test phase. Thus, latent learning of visuospatial configurations does not depend on visuospatial working memory, but the expression of learning, as memory-guided search in repeated displays, does. This working memory dependence also has consequences for visual search with foveal vision loss, where top-down controlled visual exploration strategies pose high demands on visuospatial working memory, thereby interfering with memory-guided search in repeated displays. Converging evidence for the contribution of working memory to contextual cueing comes from neuroimaging data demonstrating that distinct cortical areas along the intraparietal sulcus as well as more ventral parieto-occipital cortex are jointly activated by visual working memory and contextual cueing. © 2018 The British Psychological Society.

  14. Awareness in contextual cueing of visual search as measured with concurrent access- and phenomenal-consciousness tasks.

    PubMed

    Schlagbauer, Bernhard; Müller, Hermann J; Zehetleitner, Michael; Geyer, Thomas

    2012-10-25

    In visual search, context information can serve as a cue to guide attention to the target location. When observers repeatedly encounter displays with identical target-distractor arrangements, reaction times (RTs) are faster for repeated relative to nonrepeated displays, the latter containing novel configurations. This effect has been termed "contextual cueing." The present study asked whether information about the target location in repeated displays is "explicit" (or "conscious") in nature. To examine this issue, observers performed a test session (after an initial training phase in which RTs to repeated and nonrepeated displays were measured) in which the search stimuli were presented briefly and terminated by visual masks; following this, observers had to make a target localization response (with accuracy as the dependent measure) and indicate their visual experience and confidence associated with the localization response. The data were examined at the level of individual displays, i.e., in terms of whether or not a repeated display actually produced contextual cueing. The results were that (a) contextual cueing was driven by only a very small number of about four actually learned configurations; (b) localization accuracy was increased for learned relative to nonrepeated displays; and (c) both consciousness measures were enhanced for learned compared to nonrepeated displays. It is concluded that contextual cueing is driven by only a few repeated displays and the ability to locate the target in these displays is associated with increased visual experience.

  15. Modulation of attentional networks by food-related disinhibition.

    PubMed

    Hege, Maike A; Stingl, Krunoslav T; Veit, Ralf; Preissl, Hubert

    2017-07-01

    The risk of weight gain is especially related to disinhibition, which indicates the responsiveness to external food stimuli with associated disruptions in eating control. We adapted a food-related version of the attention network task and used functional magnetic resonance imaging to study the effects of disinhibition on attentional networks in 19 normal-weight participants. High disinhibition scores were associated with a rapid reorienting response to food pictures after invalid cueing and with an enhanced alerting effect of a warning cue signaling the upcoming appearance of a food picture. Imaging data revealed activation of a right-lateralized ventral attention network during reorienting. The faster the reorienting and the higher the disinhibition score, the less activation of this network was observed. The alerting contrast showed activation in visual, temporo-parietal and anterior sites. These modulations of attentional networks by food-related disinhibition might be related to an attentional bias to energy-dense and palatable food and increased intake of food in disinhibited individuals. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Working Memory Enhances Visual Perception: Evidence from Signal Detection Analysis

    ERIC Educational Resources Information Center

    Soto, David; Wriglesworth, Alice; Bahrami-Balani, Alex; Humphreys, Glyn W.

    2010-01-01

    We show that perceptual sensitivity to visual stimuli can be modulated by matches between the contents of working memory (WM) and stimuli in the visual field. Observers were presented with an object cue (to hold in WM or to merely attend) and subsequently had to identify a brief target presented within a colored shape. The cue could be…

  17. Visual cues and listening effort: individual variability.

    PubMed

    Picou, Erin M; Ricketts, Todd A; Hornsby, Benjamin W Y

    2011-10-01

    The purpose of this study was to investigate the effect of visual cues on listening effort, as well as whether predictive variables such as working memory capacity (WMC) and lipreading ability affect the magnitude of listening effort. Twenty participants with normal hearing were tested using a paired-associates recall task in 2 conditions (quiet and noise) and 2 presentation modalities (audio only [AO] and auditory-visual [AV]). Signal-to-noise ratios were adjusted to provide matched speech recognition across AO and AV noise conditions. Also measured were subjective perceptions of listening effort and 2 predictive variables: (a) lipreading ability and (b) WMC. Objective and subjective results indicated that listening effort increased in the presence of noise, but on average the addition of visual cues did not significantly affect the magnitude of listening effort. Although there was substantial individual variability, on average participants who were better lipreaders or had larger WMCs demonstrated reduced listening effort in noise in AV conditions. Overall, the results support the hypothesis that integrating auditory and visual cues requires cognitive resources in some participants. The data indicate that low lipreading ability or low WMC is associated with relatively effortful integration of auditory and visual information in noise.

  18. Floral reward, advertisement and attractiveness to honey bees in dioecious Salix caprea.

    PubMed

    Dötterl, Stefan; Glück, Ulrike; Jürgens, Andreas; Woodring, Joseph; Aas, Gregor

    2014-01-01

    In dioecious, zoophilous plants potential pollinators have to be attracted to both sexes and switch between individuals of both sexes for pollination to occur. It often has been suggested that males and females require different numbers of visits for maximum reproductive success because male fertility is more likely limited by access to mates, whereas female fertility is rather limited by resource availability. According to sexual selection theory, males therefore should invest more in pollinator attraction (advertisement, reward) than females. However, our knowledge of the sex-specific investment in floral rewards and advertisement, and its effects on pollinator behaviour, is limited. Here, we use an approach that includes chemical, spectrophotometric, and behavioural studies i) to elucidate differences in floral nectar reward and advertisement (visual, olfactory cues) in dioecious sallow, Salix caprea, ii) to determine the relative importance of visual and olfactory floral cues in attracting honey bee pollinators, and iii) to test for differential attractiveness of female and male inflorescence cues to honey bees. Nectar amount and sugar concentration are comparable, but sugar composition varies between the sexes. Male flowers are more colourful, owing to their yellow pollen, and emit a higher amount of scent than female flowers. Olfactory sallow cues are more attractive to honey bees than visual cues; however, a combination of both cues elicits the strongest behavioural responses in bees. Honey bees prefer the visual but not the olfactory display of males over that of females. In all, the data of our multifaceted study are consistent with sexual selection theory and provide novel insights into how the model organism honey bee uses visual and olfactory floral cues for locating host plants.

  19. Field Assessment of the Predation Risk - Food Availability Trade-Off in Crab Megalopae Settlement

    PubMed Central

    Tapia-Lewin, Sebastián; Pardo, Luis Miguel

    2014-01-01

    Settlement is a key process for meroplanktonic organisms as it determines the distribution of adult populations. Starvation and predation are two of the main causes of mortality during this period; therefore, settlement tends to be optimized in microhabitats with high food availability and low predator density. Furthermore, brachyuran megalopae actively select favorable habitats for settlement via chemical, visual and/or tactile cues. The main objective of this study was to assess the settlement of Metacarcinus edwardsii and Cancer plebejus under different combinations of food availability and predator presence, and to determine, in the field, which factor is of greater relative importance when choosing a suitable microhabitat for settling. Passive larval collectors were deployed across different scenarios of food availability and predator presence. We also explored whether megalopae actively choose predator-free substrates in response to visual and/or chemical cues, testing the response to combined visual and chemical cues and to each cue individually. Data were analysed using a two-way factorial ANOVA. In both species, food had no significant effect on settlement success, but predator presence did; there was thus no trade-off in this case, and megalopae responded strongly to predation risk by active aversion. Larvae of M. edwardsii responded to chemical and visual cues presented simultaneously, but not to either cue by itself. C. plebejus did not exhibit a statistically significant differential response to cues, but showed a strong tendency similar to that of M. edwardsii. We conclude that crab megalopae actively select predator-free microhabitats, independently of food availability, using combined chemical and visual cues. The findings of this study highlight the great relevance of predation to the settlement process and recruitment of marine invertebrates with complex life cycles. PMID:24748151

  20. Floral Reward, Advertisement and Attractiveness to Honey Bees in Dioecious Salix caprea

    PubMed Central

    Dötterl, Stefan; Glück, Ulrike; Jürgens, Andreas; Woodring, Joseph; Aas, Gregor

    2014-01-01

    In dioecious, zoophilous plants potential pollinators have to be attracted to both sexes and switch between individuals of both sexes for pollination to occur. It often has been suggested that males and females require different numbers of visits for maximum reproductive success because male fertility is more likely limited by access to mates, whereas female fertility is rather limited by resource availability. According to sexual selection theory, males therefore should invest more in pollinator attraction (advertisement, reward) than females. However, our knowledge of the sex-specific investment in floral rewards and advertisement, and its effects on pollinator behaviour, is limited. Here, we use an approach that includes chemical, spectrophotometric, and behavioural studies i) to elucidate differences in floral nectar reward and advertisement (visual, olfactory cues) in dioecious sallow, Salix caprea, ii) to determine the relative importance of visual and olfactory floral cues in attracting honey bee pollinators, and iii) to test for differential attractiveness of female and male inflorescence cues to honey bees. Nectar amount and sugar concentration are comparable, but sugar composition varies between the sexes. Male flowers are more colourful, owing to their yellow pollen, and emit a higher amount of scent than female flowers. Olfactory sallow cues are more attractive to honey bees than visual cues; however, a combination of both cues elicits the strongest behavioural responses in bees. Honey bees prefer the visual but not the olfactory display of males over that of females. In all, the data of our multifaceted study are consistent with sexual selection theory and provide novel insights into how the model organism honey bee uses visual and olfactory floral cues for locating host plants. PMID:24676333

  1. Visual cues that are effective for contextual saccade adaptation

    PubMed Central

    Azadi, Reza

    2014-01-01

    The accuracy of saccades, as maintained by saccade adaptation, has been shown to be context dependent: saccades can have different amplitudes for the same retinal displacement depending on motor context, such as orbital starting location. There is conflicting evidence as to whether purely visual cues also affect contextual saccade adaptation and, if so, what function this might serve. We tested which visual cues might evoke contextual adaptation. Over 5 experiments, 78 naive subjects made saccades to circularly moving targets, which stepped outward or inward during the saccade depending on target movement direction, speed, or color and shape. To test whether the movement or context postsaccade was critical, we stopped the postsaccade target motion (experiment 4) or neutralized the contexts by equating postsaccade target speed to an intermediate value (experiment 5). We found contextual adaptation in all conditions except those defined by color and shape. We conclude that some, but not all, visual cues before the saccade are sufficient for contextual adaptation. We conjecture that this visual contextuality functions to allow for different motor states for different coordinated movement patterns, such as coordinated saccade and pursuit motor planning. PMID:24647429

  2. Retro-dimension-cue benefit in visual working memory.

    PubMed

    Ye, Chaoxiong; Hu, Zhonghua; Ristaniemi, Tapani; Gendron, Maria; Liu, Qiang

    2016-10-24

    In visual working memory (VWM) tasks, participants' performance can be improved by a retro-object-cue. However, previous studies have not investigated whether participants' performance can also be improved by a retro-dimension-cue. Three experiments investigated this issue. We used a recall task with a retro-dimension-cue in all experiments. In Experiment 1, we found benefits from retro-dimension-cues compared to neutral cues. This retro-dimension-cue benefit is reflected in an increased probability of reporting the target, but not in the probability of reporting the non-target, as well as increased precision with which this item is remembered. Experiment 2 replicated the retro-dimension-cue benefit and showed that the length of the blank interval after the cue disappeared did not influence recall performance. Experiment 3 replicated the results of Experiment 2 with a lower memory load. Our studies provide evidence that there is a robust retro-dimension-cue benefit in VWM. Participants can use internal attention to flexibly allocate cognitive resources to a particular dimension of memory representations. The results also support the feature-based storing hypothesis.

  3. Retro-dimension-cue benefit in visual working memory

    PubMed Central

    Ye, Chaoxiong; Hu, Zhonghua; Ristaniemi, Tapani; Gendron, Maria; Liu, Qiang

    2016-01-01

    In visual working memory (VWM) tasks, participants’ performance can be improved by a retro-object-cue. However, previous studies have not investigated whether participants’ performance can also be improved by a retro-dimension-cue. Three experiments investigated this issue. We used a recall task with a retro-dimension-cue in all experiments. In Experiment 1, we found benefits from retro-dimension-cues compared to neutral cues. This retro-dimension-cue benefit is reflected in an increased probability of reporting the target, but not in the probability of reporting the non-target, as well as increased precision with which this item is remembered. Experiment 2 replicated the retro-dimension-cue benefit and showed that the length of the blank interval after the cue disappeared did not influence recall performance. Experiment 3 replicated the results of Experiment 2 with a lower memory load. Our studies provide evidence that there is a robust retro-dimension-cue benefit in VWM. Participants can use internal attention to flexibly allocate cognitive resources to a particular dimension of memory representations. The results also support the feature-based storing hypothesis. PMID:27774983

  4. Neurons in the pigeon caudolateral nidopallium differentiate Pavlovian conditioned stimuli but not their associated reward value in a sign-tracking paradigm

    PubMed Central

    Kasties, Nils; Starosta, Sarah; Güntürkün, Onur; Stüttgen, Maik C.

    2016-01-01

    Animals exploit visual information to identify objects, form stimulus-reward associations, and prepare appropriate behavioral responses. The nidopallium caudolaterale (NCL), an associative region of the avian endbrain, contains neurons exhibiting prominent response modulation during presentation of reward-predicting visual stimuli, but it is unclear whether neural activity represents valuation signals, stimulus properties, or sensorimotor contingencies. To test the hypothesis that NCL neurons represent stimulus value, we subjected pigeons to a Pavlovian sign-tracking paradigm in which visual cues predicted rewards differing in magnitude (large vs. small) and delay to presentation (short vs. long). Subjects’ strength of conditioned responding to visual cues reliably differentiated between predicted reward types and thus indexed valuation. The majority of NCL neurons discriminated between visual cues, with discriminability peaking shortly after stimulus onset and being maintained at lower levels throughout the stimulus presentation period. However, while some cells’ firing rates correlated with reward value, such neurons were not more frequent than expected by chance. Instead, neurons formed discernible clusters which differed in their preferred visual cue. We propose that this activity pattern constitutes a prerequisite for using visual information in more complex situations, e.g., those requiring value-based choices. PMID:27762287

  5. Examining the Effect of Age on Visual-Vestibular Self-Motion Perception Using a Driving Paradigm.

    PubMed

    Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L

    2017-05-01

    Previous psychophysical research has examined how younger adults and non-human primates integrate visual and vestibular cues to perceive self-motion. However, there is much to be learned about how multisensory self-motion perception changes with age, and how these changes affect performance on everyday tasks involving self-motion. Evidence suggests that older adults display heightened multisensory integration compared with younger adults; however, few previous studies have examined this for visual-vestibular integration. To explore age differences in the way that visual and vestibular cues contribute to self-motion perception, we had younger and older participants complete a basic driving task containing visual and vestibular cues. We compared their performance against a previously established control group that experienced visual cues alone. Performance measures included speed, speed variability, and lateral position. Vestibular inputs resulted in more precise speed control among older adults, but not younger adults, when traversing curves. Older adults demonstrated more variability in lateral position when vestibular inputs were available versus when they were absent. These observations align with previous evidence of age-related differences in multisensory integration and demonstrate that they may extend to visual-vestibular integration. These findings may have implications for vehicle and simulator design when considering older users.

  6. Retrospective attention enhances visual working memory in the young but not the old: an ERP study

    PubMed Central

    Duarte, Audrey; Hearons, Patricia; Jiang, Yashu; Delvin, Mary Courtney; Newsome, Rachel N.; Verhaeghen, Paul

    2013-01-01

    Behavioral evidence from the young suggests spatial cues that orient attention toward task relevant items in visual working memory (VWM) enhance memory capacity. Whether older adults can also use retrospective cues (“retro-cues”) to enhance VWM capacity is unknown. In the current event-related potential (ERP) study, young and old adults performed a VWM task in which spatially informative retro-cues were presented during maintenance. Young but not older adults’ VWM capacity benefitted from retro-cueing. The contralateral delay activity (CDA) ERP index of VWM maintenance was attenuated after the retro-cue, which effectively reduced the impact of memory load. CDA amplitudes were reduced prior to retro-cue onset in the old only. Despite a preserved ability to delete items from VWM, older adults may be less able to use retrospective attention to enhance memory capacity when expectancy of impending spatial cues disrupts effective VWM maintenance. PMID:23445536

  7. The effects of stereo disparity on the behavioural and electrophysiological correlates of perception of audio-visual motion in depth.

    PubMed

    Harrison, Neil R; Witheridge, Sian; Makin, Alexis; Wuerger, Sophie M; Pegna, Alan J; Meyer, Georg F

    2015-11-01

    Motion is represented by low-level signals, such as size-expansion in vision or loudness changes in the auditory modality. The visual and auditory signals from the same object or event may be integrated and facilitate detection. We explored behavioural and electrophysiological correlates of congruent and incongruent audio-visual depth motion in conditions where auditory level changes, visual expansion, and visual disparity cues were manipulated. In Experiment 1 participants discriminated auditory motion direction whilst viewing looming or receding, 2D or 3D, visual stimuli. Responses were faster and more accurate for congruent than for incongruent audio-visual cues, and the congruency effect (i.e., the difference between incongruent and congruent conditions) was larger for visual 3D cues compared to 2D cues. In Experiment 2, event-related potentials (ERPs) were collected during presentation of the 2D and 3D, looming and receding, audio-visual stimuli, while participants detected an infrequent deviant sound. Our main finding was that audio-visual congruity was affected by retinal disparity at an early processing stage (135–160 ms) over occipito-parietal scalp. Topographic analyses suggested that similar brain networks were activated for the 2D and 3D congruity effects, but that cortical responses were stronger in the 3D condition. Differences between congruent and incongruent conditions were observed at 140–200 ms, 220–280 ms, and 350–500 ms after stimulus onset. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Immersive cyberspace system

    NASA Technical Reports Server (NTRS)

    Park, Brian V. (Inventor)

    1997-01-01

    An immersive cyberspace system is presented which provides visual, audible, and vibrational inputs to a subject remaining in neutral immersion, and also provides for subject control input. The immersive cyberspace system includes a relaxation chair and a neutral immersion display hood. The relaxation chair supports a subject positioned thereupon and places the subject in a position which merges a neutral body position (the position a body naturally assumes in zero gravity) with a savasana yoga position. The display hood, which covers the subject's head, is configured to produce light images and sounds. An image projection subsystem provides either external or internal image projection. The display hood includes a projection screen moveably attached to an opaque shroud. A motion base supports the relaxation chair and produces vibrational inputs over a range of about 0-30 Hz. The motion base also produces limited translational and rotational movements of the relaxation chair. These limited translational and rotational movements, when properly coordinated with visual stimuli, constitute motion cues which create sensations of pitch, yaw, and roll movements. Vibration transducers produce vibrational inputs from about 20 Hz to about 150 Hz. An external computer, coupled to various components of the immersive cyberspace system, executes a software program and creates the cyberspace environment. One or more neutral hand posture controllers may be coupled to the external computer system and used to control various aspects of the cyberspace environment, or to enter data during the cyberspace experience.

  9. Externalizing proneness and brain response during pre-cuing and viewing of emotional pictures.

    PubMed

    Foell, Jens; Brislin, Sarah J; Strickland, Casey M; Seo, Dongju; Sabatinelli, Dean; Patrick, Christopher J

    2016-07-01

    Externalizing proneness, or trait disinhibition, is a concept relevant to multiple high-impact disorders involving impulsive-aggressive behavior. Its mechanisms remain disputed: major models posit hyperresponsive reward circuitry or heightened threat-system reactivity as sources of disinhibitory tendencies. This study evaluated alternative possibilities by examining relations between trait disinhibition and brain reactivity during preparation for and processing of visual affective stimuli. Forty females participated in a functional neuroimaging procedure with stimuli presented in blocks containing either pleasurable or aversive pictures interspersed with neutral, with each picture preceded by a preparation signal. Preparing to view elicited activation in regions including nucleus accumbens, whereas visual regions and bilateral amygdala were activated during viewing of emotional pictures. High disinhibition predicted reduced nucleus accumbens activation during preparation within pleasant/neutral picture blocks, along with enhanced amygdala reactivity during viewing of pleasant and aversive pictures. Follow-up analyses revealed that the augmented amygdala response was related to reduced preparatory activation. Findings indicate that participants high in disinhibition are less able to process implicit cues and mentally prepare for upcoming stimuli, leading to limbic hyperreactivity during processing of actual stimuli. This outcome is helpful for integrating findings from studies suggesting reward-system hyperreactivity and others suggesting threat-system hyperreactivity as mechanisms for externalizing proneness.

  10. Specific and nonspecific neural activity during selective processing of visual representations in working memory.

    PubMed

    Oh, Hwamee; Leung, Hoi-Chung

    2010-02-01

    In this fMRI study, we investigated prefrontal cortex (PFC) and visual association regions during selective information processing. We recorded behavioral responses and neural activity during a delayed recognition task with a cue presented during the delay period. A specific cue ("Face" or "Scene") was used to indicate which one of the two initially viewed pictures of a face and a scene would be tested at the end of a trial, whereas a nonspecific cue ("Both") was used as control. As expected, the specific cues facilitated behavioral performance (faster response times) compared to the nonspecific cue. A postexperiment memory test showed that the items cued to remember were better recognized than those not cued. The fMRI results showed largely overlapped activations across the three cue conditions in dorsolateral and ventrolateral PFC, dorsomedial PFC, posterior parietal cortex, ventral occipito-temporal cortex, dorsal striatum, and pulvinar nucleus. Among those regions, dorsomedial PFC and inferior occipital gyrus remained active during the entire postcue delay period. Differential activity was mainly found in the association cortices. In particular, the parahippocampal area and posterior superior parietal lobe showed significantly enhanced activity during the postcue period of the scene condition relative to the Face and Both conditions. No regions showed differentially greater responses to the face cue. Our findings suggest that a better representation of visual information in working memory may depend on enhancing the more specialized visual association areas or their interaction with PFC.

  11. Cueing listeners to attend to a target talker progressively improves word report as the duration of the cue-target interval lengthens to 2,000 ms.

    PubMed

    Holmes, Emma; Kitterick, Padraig T; Summerfield, A Quentin

    2018-04-25

    Endogenous attention is typically studied by presenting instructive cues in advance of a target stimulus array. For endogenous visual attention, task performance improves as the duration of the cue-target interval increases up to 800 ms. Less is known about how endogenous auditory attention unfolds over time or the mechanisms by which an instructive cue presented in advance of an auditory array improves performance. The current experiment used five cue-target intervals (0, 250, 500, 1,000, and 2,000 ms) to compare four hypotheses for how preparatory attention develops over time in a multi-talker listening task. Young adults were cued to attend to a target talker who spoke in a mixture of three talkers. Visual cues indicated the target talker's spatial location or their gender. Participants directed attention to location and gender simultaneously ("objects") at all cue-target intervals. Participants were consistently faster and more accurate at reporting words spoken by the target talker when the cue-target interval was 2,000 ms than 0 ms. In addition, the latency of correct responses progressively shortened as the duration of the cue-target interval increased from 0 to 2,000 ms. These findings suggest that the mechanisms involved in preparatory auditory attention develop gradually over time, taking at least 2,000 ms to reach optimal configuration, yet providing cumulative improvements in speech intelligibility as the duration of the cue-target interval increases from 0 to 2,000 ms. These results demonstrate an improvement in performance for cue-target intervals longer than those that have been reported previously in the visual or auditory modalities.

  12. Visual cues for data mining

    NASA Astrophysics Data System (ADS)

    Rogowitz, Bernice E.; Rabenhorst, David A.; Gerth, John A.; Kalin, Edward B.

    1996-04-01

    This paper describes a set of visual techniques, based on principles of human perception and cognition, which can help users analyze and develop intuitions about tabular data. Collections of tabular data are widely available, including, for example, multivariate time series data, customer satisfaction data, stock market performance data, multivariate profiles of companies and individuals, and scientific measurements. In our approach, we show how visual cues can help users perform a number of data mining tasks, including identifying correlations and interaction effects, finding clusters and understanding the semantics of cluster membership, identifying anomalies and outliers, and discovering multivariate relationships among variables. These cues are derived from psychological studies on perceptual organization, visual search, perceptual scaling, and color perception. These visual techniques are presented as a complement to the statistical and algorithmic methods more commonly associated with these tasks, and provide an interactive interface for the human analyst.
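    The data-mining tasks listed in this abstract (finding outliers, directing attention via perceptual cues) can be illustrated with a minimal sketch. The z-score rule, the 2.5-sigma threshold, and the red/grey palette below are illustrative assumptions, not details from the paper:

```python
# Illustrative sketch: flag anomalies in tabular data by z-score and map them
# to a high-salience colour, following the general idea of using perceptual
# cues (here, colour contrast) to guide the analyst's attention to outliers.
# The threshold and palette are assumptions chosen for this example.

def zscore_outlier_colours(values, threshold=2.5):
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = var ** 0.5
    colours = []
    for v in values:
        z = (v - mean) / std if std > 0 else 0.0
        # salient red for outliers, muted grey for everything else
        colours.append("red" if abs(z) > threshold else "grey")
    return colours

data = [10, 11, 9, 10, 12, 11, 10, 60]  # one obvious anomaly
print(zscore_outlier_colours(data))  # the anomalous value 60 maps to "red"
```

    In a real visualization tool the colour assignment would feed a plotting layer; the point here is only that a simple statistical rule can drive a perceptually salient encoding.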

  13. Contextual cueing: implicit learning and memory of visual context guides spatial attention.

    PubMed

    Chun, M M; Jiang, Y

    1998-06-01

    Global context plays an important, but poorly understood, role in visual tasks. This study demonstrates that a robust memory for visual context exists to guide spatial attention. Global context was operationalized as the spatial layout of objects in visual search displays. Half of the configurations were repeated across blocks throughout the entire session, and targets appeared within consistent locations in these arrays. Targets appearing in learned configurations were detected more quickly. This newly discovered form of search facilitation is termed contextual cueing. Contextual cueing is driven by incidentally learned associations between spatial configurations (context) and target locations. This benefit was obtained despite chance performance for recognizing the configurations, suggesting that the memory for context was implicit. The results show how implicit learning and memory of visual context can guide spatial attention towards task-relevant aspects of a scene.

  14. Demonstrating the Potential for Dynamic Auditory Stimulation to Contribute to Motion Sickness

    PubMed Central

    Keshavarz, Behrang; Hettinger, Lawrence J.; Kennedy, Robert S.; Campos, Jennifer L.

    2014-01-01

    Auditory cues can create the illusion of self-motion (vection) in the absence of visual or physical stimulation. The present study aimed to determine whether auditory cues alone can also elicit motion sickness and how auditory cues contribute to motion sickness when added to visual motion stimuli. Twenty participants were seated in front of a curved projection display and were exposed to a virtual scene that constantly rotated around the participant's vertical axis. The virtual scene contained either visual-only, auditory-only, or a combination of corresponding visual and auditory cues. All participants performed all three conditions in a counterbalanced order. Participants tilted their heads alternately towards the right or left shoulder in all conditions during stimulus exposure in order to create pseudo-Coriolis effects and to maximize the likelihood for motion sickness. Measurements of motion sickness (onset, severity), vection (latency, strength, duration), and postural steadiness (center of pressure) were recorded. Results showed that adding auditory cues to the visual stimuli did not, on average, affect motion sickness and postural steadiness, but it did reduce vection onset times and increased vection strength compared to pure visual or pure auditory stimulation. Eighteen of the 20 participants reported at least slight motion sickness in the two conditions including visual stimuli. More interestingly, six participants also reported slight motion sickness during pure auditory stimulation and two of the six participants stopped the pure auditory test session due to motion sickness. The present study is the first to demonstrate that motion sickness may be caused by pure auditory stimulation, which we refer to as “auditorily induced motion sickness”. PMID:24983752

  15. Two (or three) is one too many: testing the flexibility of contextual cueing with multiple target locations.

    PubMed

    Zellin, Martina; Conci, Markus; von Mühlenen, Adrian; Müller, Hermann J

    2011-10-01

    Visual search for a target object is facilitated when the object is repeatedly presented within an invariant context of surrounding items ("contextual cueing"; Chun & Jiang, Cognitive Psychology, 36, 28-71, 1998). The present study investigated whether such invariant contexts can cue more than one target location. In a series of three experiments, we showed that contextual cueing is significantly reduced when invariant contexts are paired with two rather than one possible target location, whereas no contextual cueing occurs with three distinct target locations. Closer data inspection revealed that one "dominant" target always exhibited substantially more contextual cueing than did the other, "minor" target(s), which caused negative contextual-cueing effects. However, minor targets could benefit from the invariant context when they were spatially close to the dominant target. In sum, our experiments suggest that contextual cueing can guide visual attention to a spatially limited region of the display, only enhancing the detection of targets presented inside that region.

  16. Perceiving Prosody from the Face and Voice: Distinguishing Statements from Echoic Questions in English.

    ERIC Educational Resources Information Center

    Srinivasan, Ravindra J.; Massaro, Dominic W.

    2003-01-01

    Examined the processing of potential auditory and visual cues that differentiate statements from echoic questions. Found that both auditory and visual cues reliably conveyed statement and question intonation, were successfully synthesized, and generalized to other utterances. (Author/VWL)

  17. Vestibular-visual interactions in flight simulators

    NASA Technical Reports Server (NTRS)

    Clark, B.

    1977-01-01

    The following research work is reported: (1) vestibular-visual interactions; (2) flight management and crew system interactions; (3) peripheral cue utilization in simulation technology; (4) control of signs and symptoms of motion sickness; (5) auditory cue utilization in flight simulators, and (6) vestibular function: Animal experiments.

  18. Cue-induced brain activity in pathological gamblers.

    PubMed

    Crockford, David N; Goodyear, Bradley; Edwards, Jodi; Quickfall, Jeremy; el-Guebaly, Nady

    2005-11-15

    Previous studies using functional magnetic resonance imaging (fMRI) have identified differential brain activity in healthy subjects performing gambling tasks and in pathological gambling (PG) subjects when exposed to motivational and emotional predecessors for gambling as well as during gambling or response inhibition tasks. The goal of the present study was to determine if PG subjects exhibit differential brain activity when exposed to visual gambling cues. Ten male DSM-IV-TR PG subjects and 10 matched healthy control subjects underwent fMRI during visual presentations of gambling-related video alternating with video of nature scenes. Pathological gambling subjects and control subjects exhibited overlap in areas of brain activity in response to the visual gambling cues; however, compared with control subjects, PG subjects exhibited significantly greater activity in the right dorsolateral prefrontal cortex (DLPFC), including the inferior and medial frontal gyri, the right parahippocampal gyrus, and left occipital cortex, including the fusiform gyrus. Pathological gambling subjects also reported a significant increase in mean craving for gambling after the study. Post hoc analyses revealed a dissociation in visual processing stream (dorsal vs. ventral) activation by subject group and cue type. These findings may represent a component of cue-induced craving for gambling or conditioned behavior that could underlie pathological gambling.

  19. Depth reversals in stereoscopic displays driven by apparent size

    NASA Astrophysics Data System (ADS)

    Sacher, Gunnar; Hayes, Amy; Thornton, Ian M.; Sereno, Margaret E.; Malony, Allen D.

    1998-04-01

    In visual scenes, depth information is derived from a variety of monocular and binocular cues. When in conflict, a monocular cue is sometimes able to override the binocular information. We examined the accuracy of relative depth judgments in orthographic, stereoscopic displays and found that perceived relative size can override binocular disparity as a depth cue in a situation where the relative size information is itself generated from disparity information, not from retinal size difference. A size discrimination task confirmed the assumption that disparity information was perceived and used to generate apparent size differences. The tendency for the apparent size cue to override disparity information can be modulated by varying the strength of the apparent size cue. In addition, an analysis of reaction times provides supporting evidence for this novel depth reversal effect. We believe that human perception must be regarded as an important component of stereoscopic applications. Hence, if applications are to be effective and accurate, it is necessary to take into account the richness and complexity of the human visual perceptual system that interacts with them. We discuss implications of this and similar research for human performance in virtual environments, the design of visual presentations for virtual worlds, and the design of visualization tools.

  20. Sexual selection in the squirrel treefrog Hyla squirella: the role of multimodal cue assessment in female choice

    USGS Publications Warehouse

    Taylor, Ryan C.; Buchanan, Bryant W.; Doherty, Jessie L.

    2007-01-01

    Anuran amphibians have provided an excellent system for the study of animal communication and sexual selection. Studies of female mate choice in anurans, however, have focused almost exclusively on the role of auditory signals. In this study, we examined the effect of both auditory and visual cues on female choice in the squirrel treefrog. Our experiments used a two-choice protocol in which we varied male vocalization properties, visual cues, or both, to assess female preferences for the different cues. Females discriminated against high-frequency calls and expressed a strong preference for calls that contained more energy per unit time (faster call rate). Females expressed a preference for the visual stimulus of a model of a calling male when call properties at the two speakers were held the same. They also showed a significant attraction to a model possessing a relatively large lateral body stripe. These data indicate that visual cues do play a role in mate attraction in this nocturnal frog species. Furthermore, this study adds to a growing body of evidence that suggests that multimodal signals play an important role in sexual selection.

  1. Modeling human perception and estimation of kinematic responses during aircraft landing

    NASA Technical Reports Server (NTRS)

    Schmidt, David K.; Silk, Anthony B.

    1988-01-01

    The thrust of this research is to determine estimation accuracy of aircraft responses based on observed cues. By developing the geometric relationships between the outside visual scene and the kinematics during landing, visual and kinesthetic cues available to the pilot were modeled. Both foveal and peripheral vision were examined. The objective was first to determine estimation accuracy in a variety of flight conditions, and second to ascertain which parameters are most important and lead to the best achievable accuracy in estimating the actual vehicle response. It was found that altitude estimation was very sensitive to the FOV. For this model the motion cue of perceived vertical acceleration was shown to be less important than the visual cues. The inclusion of runway geometry in the visual scene increased estimation accuracy in most cases. Finally, it was shown that for this model, if the pilot has an incorrect internal model of the system kinematics, the choice of observations thought to be 'optimal' may in fact be suboptimal.

  2. Age-related changes in event-cued visual and auditory prospective memory proper.

    PubMed

    Uttl, Bob

    2006-06-01

    We rely upon prospective memory proper (ProMP) to bring back to awareness previously formed plans and intentions at the right place and time, and to enable us to act upon those plans and intentions. To examine age-related changes in ProMP, younger and older participants made decisions about simple stimuli (ongoing task) and at the same time were required to respond to a ProM cue, either a picture (visually cued ProM test) or a sound (auditorily cued ProM test), embedded in a simultaneously presented series of similar stimuli (either pictures or sounds). The cue display size or loudness increased across trials until a response was made. The cue size and cue loudness at the time of response indexed ProMP. The main results showed that both visual and auditory ProMP declined with age, and that such declines were mediated by age declines in sensory functions (visual acuity and hearing level), processing resources, working memory, intelligence, and ongoing task resource allocation.

  3. Deployment of spatial attention to words in central and peripheral vision.

    PubMed

    Ducrot, Stéphanie; Grainger, Jonathan

    2007-05-01

    Four perceptual identification experiments examined the influence of spatial cues on the recognition of words presented in central vision (with fixation on either the first or last letter of the target word) and in peripheral vision (displaced left or right of a central fixation point). Stimulus location had a strong effect on word identification accuracy in both central and peripheral vision, showing a strong right visual field superiority that did not depend on eccentricity. Valid spatial cues improved word identification for peripherally presented targets but were largely ineffective for centrally presented targets. Effects of spatial cuing interacted with visual field effects in Experiment 1, with valid cues reducing the right visual field superiority for peripherally located targets, but this interaction was shown to depend on the type of neutral cue. These results provide further support for the role of attentional factors in visual field asymmetries obtained with targets in peripheral vision but not with centrally presented targets.

  4. Setting and changing feature priorities in visual short-term memory.

    PubMed

    Kalogeropoulou, Zampeta; Jagadeesh, Akshay V; Ohl, Sven; Rolfs, Martin

    2017-04-01

    Many everyday tasks require prioritizing some visual features over competing ones, both during the selection from the rich sensory input and while maintaining information in visual short-term memory (VSTM). Here, we show that observers can change priorities in VSTM when, initially, they attended to a different feature. Observers reported from memory the orientation of one of two spatially interspersed groups of black and white gratings. Using colored pre-cues (presented before stimulus onset) and retro-cues (presented after stimulus offset) predicting the to-be-reported group, we manipulated observers' feature priorities independently during stimulus encoding and maintenance, respectively. Valid pre-cues reliably increased observers' performance (reduced guessing, increased report precision) as compared to neutral ones; invalid pre-cues had the opposite effect. Valid retro-cues also consistently improved performance (by reducing random guesses), even if the unexpected group suddenly became relevant (invalid-valid condition). Thus, feature-based attention can reshape priorities in VSTM protecting information that would otherwise be forgotten.

  5. Can You Hear That Peak? Utilization of Auditory and Visual Feedback at Peak Limb Velocity.

    PubMed

    Loria, Tristan; de Grosbois, John; Tremblay, Luc

    2016-09-01

    At rest, the central nervous system combines and integrates multisensory cues to yield an optimal percept. When engaging in action, the relative weighing of sensory modalities has been shown to be altered. Because the timing of peak velocity is the critical moment in some goal-directed movements (e.g., overarm throwing), the current study sought to test whether visual and auditory cues are optimally integrated at that specific kinematic marker when it is the critical part of the trajectory. Participants performed an upper-limb movement in which they were required to reach their peak limb velocity when the right index finger intersected a virtual target (i.e., a flinging movement). Brief auditory, visual, or audiovisual feedback (i.e., 20 ms in duration) was provided to participants at peak limb velocity. Performance was assessed primarily through the resultant position of peak limb velocity and the variability of that position. Relative to when no feedback was provided, auditory feedback significantly reduced the resultant endpoint variability of the finger position at peak limb velocity. However, no such reductions were found for the visual or audiovisual feedback conditions. Further, providing both auditory and visual cues concurrently also failed to yield the theoretically predicted improvements in endpoint variability. Overall, the central nervous system can make significant use of an auditory cue but may not optimally integrate a visual and auditory cue at peak limb velocity, when peak velocity is the critical part of the trajectory.

  6. Sequence Effect in Parkinson’s Disease Is Related to Motor Energetic Cost

    PubMed Central

    Tinaz, Sule; Pillai, Ajay S.; Hallett, Mark

    2016-01-01

    Bradykinesia is the most disabling motor symptom of Parkinson’s disease (PD). The sequence effect (SE), a feature of bradykinesia, refers to the rapid decrement in amplitude and speed of repetitive movements (e.g., gait, handwriting) and is a major cause of morbidity in PD. Previous research has revealed mixed results regarding the role of dopaminergic treatment in the SE. However, external cueing has been shown to improve it. In this study, we aimed to characterize the SE systematically and relate this phenomenon to the energetic cost of movement within the context of the cost–benefit framework of motor control. We used a dynamic isometric motor task with auditory pacing to assess the SE in motor output during a 15-s task segment in PD patients and matched controls. All participants performed the task with both hands, and without and with visual feedback (VF). Patients were also tested in “on”- and “off”-dopaminergic states. Patients in the “off” state did not show higher SE compared to controls, partly due to large variance in their performance. However, patients in the “on” state and in the absence of VF showed significantly higher SE compared to controls. Patients expended higher total motor energy compared to controls in all conditions and regardless of their medication status. In this experimental situation, the SE in PD is associated with the cumulative energetic cost of movement. Dopaminergic treatment, critical for internal triggering of movement, fails to maintain the motor vigor across responses. The high motor cost may be related to failure to incorporate limbic/motivational cues into the motor plan. VF may facilitate performance by shifting the driving of movement from internal to external or, alternatively, by functioning as a motivational cue. PMID:27252678

  7. Discriminating External and Internal Causes for Heading Changes in Freely Flying Drosophila

    PubMed Central

    Sayaman, Rosalyn W.; Murray, Richard M.; Dickinson, Michael H.

    2013-01-01

    As animals move through the world in search of resources, they change course in reaction to both external sensory cues and internally-generated programs. Elucidating the functional logic of complex search algorithms is challenging because the observable actions of the animal cannot be unambiguously assigned to externally- or internally-triggered events. We present a technique that addresses this challenge by assessing quantitatively the contribution of external stimuli and internal processes. We apply this technique to the analysis of rapid turns (“saccades”) of freely flying Drosophila melanogaster. We show that a single scalar feature computed from the visual stimulus experienced by the animal is sufficient to explain a majority (93%) of the turning decisions. We automatically estimate this scalar value from the observable trajectory, without any assumption regarding the sensory processing. A posteriori, we show that the estimated feature field is consistent with previous results measured in other experimental conditions. The remaining turning decisions, not explained by this feature of the visual input, may be attributed to a combination of deterministic processes based on unobservable internal states and purely stochastic behavior. We cannot distinguish these contributions using external observations alone, but we are able to provide a quantitative bound of their relative importance with respect to stimulus-triggered decisions. Our results suggest that comparatively few saccades in free-flying conditions are a result of an intrinsic spontaneous process, contrary to previous suggestions. We discuss how this technique could be generalized for use in other systems and employed as a tool for classifying effects into sensory, decision, and motor categories when used to analyze data from genetic behavioral screens. PMID:23468601
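    The analysis idea in this abstract, that a single scalar visual feature predicts most turning decisions, can be sketched as a simple threshold classifier scored against observed saccades. The feature values, the 0.5 threshold, and the decision rule below are illustrative assumptions, not the paper's estimated feature field:

```python
# Toy sketch: classify saccade initiation from one scalar visual feature and
# measure the fraction of observed turning decisions the rule explains.
# All numbers here are synthetic examples, not data from the study.

def classify_saccades(features, threshold):
    """Predict a saccade whenever the visual feature exceeds the threshold."""
    return [f > threshold for f in features]

def fraction_explained(predicted, observed):
    """Fraction of observed turning decisions matched by the prediction."""
    matches = sum(p == o for p, o in zip(predicted, observed))
    return matches / len(observed)

features = [0.1, 0.9, 0.2, 0.8, 0.7, 0.05, 0.95, 0.3, 0.85, 0.15]
observed = [False, True, False, True, True, False, True, False, False, False]
predicted = classify_saccades(features, threshold=0.5)
print(fraction_explained(predicted, observed))  # 0.9: nine of ten matched
```

    Decisions left unexplained by the feature (like the one mismatch above) correspond to the residual the authors attribute to unobservable internal states or stochastic behavior.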

  8. What You Don't Notice Can Harm You: Age-Related Differences in Detecting Concurrent Visual, Auditory, and Tactile Cues.

    PubMed

    Pitts, Brandon J; Sarter, Nadine

    2018-06-01

    Objective: This research sought to determine whether people can perceive and process three nonredundant (and unrelated) signals in vision, hearing, and touch at the same time and how aging and concurrent task demands affect this ability. Background: Multimodal displays have been shown to improve multitasking and attention management; however, their potential limitations are not well understood. The majority of studies on multimodal information presentation have focused on the processing of only two concurrent and, most often, redundant cues by younger participants. Method: Two experiments were conducted in which younger and older adults detected and responded to a series of singles, pairs, and triplets of visual, auditory, and tactile cues in the absence (Experiment 1) and presence (Experiment 2) of an ongoing simulated driving task. Detection rates, response times, and driving task performance were measured. Results: Compared to younger participants, older adults showed longer response times and higher error rates in response to cues/cue combinations. Older participants often missed the tactile cue when three cues were combined. They sometimes falsely reported the presence of a visual cue when presented with a pair of auditory and tactile signals. Driving performance suffered most in the presence of cue triplets. Conclusion: People are more likely to miss information if more than two concurrent nonredundant signals are presented to different sensory channels. Application: The findings from this work help inform the design of multimodal displays and ensure their usefulness across different age groups and in various application domains.

  9. Motor (but not auditory) attention affects syntactic choice.

    PubMed

    Pokhoday, Mikhail; Scheepers, Christoph; Shtyrov, Yury; Myachykov, Andriy

    2018-01-01

    Understanding the determinants of syntactic choice in sentence production is a salient topic in psycholinguistics. Existing evidence suggests that syntactic choice results from an interplay between linguistic and non-linguistic factors, and a speaker's attention to the elements of a described event represents one such factor. Whereas multimodal accounts of attention suggest a role for different modalities in this process, existing studies examining attention effects in syntactic choice are primarily based on visual cueing paradigms. Hence, it remains unclear whether attentional effects on syntactic choice are limited to the visual modality or are indeed more general. This issue is addressed by the current study. Native English participants viewed and described line drawings of simple transitive events while their attention was directed to the location of the agent or the patient of the depicted event by means of either an auditory (monaural beep) or a motor (unilateral key press) lateral cue. Our results show an effect of cue location, with participants producing more passive-voice descriptions in the patient-cued conditions. Crucially, this cue location effect emerged in the motor-cue but not (or substantially less so) in the auditory-cue condition, as confirmed by a reliable interaction between cue location (agent vs. patient) and cue type (auditory vs. motor). Our data suggest that attentional effects on the speaker's syntactic choices are modality-specific and limited to the visual and motor, but not the auditory, domain.

  10. Neural coding underlying the cue preference for celestial orientation

    PubMed Central

    el Jundi, Basil; Warrant, Eric J.; Byrne, Marcus J.; Khaldy, Lana; Baird, Emily; Smolka, Jochen; Dacke, Marie

    2015-01-01

    Diurnal and nocturnal African dung beetles use celestial cues, such as the sun, the moon, and the polarization pattern, to roll dung balls along straight paths across the savanna. Although nocturnal beetles move in the same manner through the same environment as their diurnal relatives, they do so when light conditions are at least 1 million-fold dimmer. Here, we show, for the first time to our knowledge, that the celestial cue preference differs between nocturnal and diurnal beetles in a manner that reflects their contrasting visual ecologies. We also demonstrate how these cue preferences are reflected in the activity of compass neurons in the brain. At night, polarized skylight is the dominant orientation cue for nocturnal beetles. However, if we coerce them to roll during the day, they instead use a celestial body (the sun) as their primary orientation cue. Diurnal beetles, however, persist in using a celestial body for their compass, day or night. Compass neurons in the central complex of diurnal beetles are tuned only to the sun, whereas the same neurons in the nocturnal species switch exclusively to polarized light at lunar light intensities. Thus, these neurons encode the preferences for particular celestial cues and alter their weighting according to ambient light conditions. This flexible encoding of celestial cue preferences relative to the prevailing visual scenery provides a simple, yet effective, mechanism for enabling visual orientation at any light intensity. PMID:26305929

  11. Neural coding underlying the cue preference for celestial orientation.

    PubMed

    el Jundi, Basil; Warrant, Eric J; Byrne, Marcus J; Khaldy, Lana; Baird, Emily; Smolka, Jochen; Dacke, Marie

    2015-09-08

    Diurnal and nocturnal African dung beetles use celestial cues, such as the sun, the moon, and the polarization pattern, to roll dung balls along straight paths across the savanna. Although nocturnal beetles move in the same manner through the same environment as their diurnal relatives, they do so when light conditions are at least 1 million-fold dimmer. Here, we show, for the first time to our knowledge, that the celestial cue preference differs between nocturnal and diurnal beetles in a manner that reflects their contrasting visual ecologies. We also demonstrate how these cue preferences are reflected in the activity of compass neurons in the brain. At night, polarized skylight is the dominant orientation cue for nocturnal beetles. However, if we coerce them to roll during the day, they instead use a celestial body (the sun) as their primary orientation cue. Diurnal beetles, however, persist in using a celestial body for their compass, day or night. Compass neurons in the central complex of diurnal beetles are tuned only to the sun, whereas the same neurons in the nocturnal species switch exclusively to polarized light at lunar light intensities. Thus, these neurons encode the preferences for particular celestial cues and alter their weighting according to ambient light conditions. This flexible encoding of celestial cue preferences relative to the prevailing visual scenery provides a simple, yet effective, mechanism for enabling visual orientation at any light intensity.

  12. Cognitive processes facilitated by contextual cueing: evidence from event-related brain potentials.

    PubMed

    Schankin, Andrea; Schubö, Anna

    2009-05-01

    Finding a target in repeated search displays is faster than finding the same target in novel ones (contextual cueing). It is assumed that the visual context (the arrangement of the distracting objects) is used to guide attention efficiently to the target location. Alternatively, other factors, e.g., facilitation in early visual processing or in response selection, may play a role as well. In a contextual cueing experiment, participants' electrophysiological brain activity was recorded. Participants identified the target faster and more accurately in repeatedly presented displays. In this condition, the N2pc, a component reflecting the allocation of visual-spatial attention, was enhanced, indicating that attention was allocated more efficiently to those targets. However, response-related processes, reflected by the LRP, were also facilitated, indicating that guidance of attention cannot account for the entire contextual cueing benefit.

  13. Central and peripheral vision loss differentially affects contextual cueing in visual search.

    PubMed

    Geringswald, Franziska; Pollmann, Stefan

    2015-09-01

    Visual search for targets in repeated displays is more efficient than search for the same targets in random distractor layouts. Previous work has shown that this contextual cueing is severely impaired under central vision loss. Here, we investigated whether central vision loss, simulated with gaze-contingent displays, prevents the incidental learning of contextual cues or the expression of learning, that is, the guidance of search by learned target-distractor configurations. Visual search with a central scotoma reduced contextual cueing both with respect to search times and gaze parameters. However, when the scotoma was subsequently removed, contextual cueing was observed in a comparable magnitude as for controls who had searched without scotoma simulation throughout the experiment. This indicated that search with a central scotoma did not prevent incidental context learning, but interfered with search guidance by learned contexts. We discuss the role of visuospatial working memory load as a source of this interference. In contrast to central vision loss, peripheral vision loss was expected to prevent spatial configuration learning itself, because the restricted search window did not allow the integration of invariant local configurations with the global display layout. This expectation was confirmed in that visual search with a simulated peripheral scotoma eliminated contextual cueing not only in the initial learning phase with scotoma, but also in the subsequent test phase without scotoma. (c) 2015 APA, all rights reserved.

  14. Modeling the Development of Audiovisual Cue Integration in Speech Perception

    PubMed Central

    Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.

    2017-01-01

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558

  15. Modeling the Development of Audiovisual Cue Integration in Speech Perception.

    PubMed

    Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C

    2017-03-21

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
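The mixture-model approach described in the two records above can be sketched briefly. The following toy example is not the authors' code: the cue values, category parameters, and initialization are invented for illustration. It fits a two-component Gaussian mixture to a single acoustic cue dimension (e.g., voice onset time) by expectation-maximization, recovering two "phonological categories" from the cue's distributional statistics alone, which is the core of the statistical-learning claim:

```python
# Toy sketch (illustrative parameters, not the authors' model): learn two
# phonological categories from one acoustic cue via a 2-component GMM.
import math
import random

random.seed(1)
# Synthetic cue values: short-lag /b/-like vs. long-lag /p/-like tokens.
data = [random.gauss(10, 5) for _ in range(500)] + \
       [random.gauss(50, 8) for _ in range(500)]

def normal_pdf(x, mu, sd):
    return math.exp(-((x - mu) ** 2) / (2 * sd * sd)) / (sd * math.sqrt(2 * math.pi))

# Initialize two components, then run EM.
mu = [0.0, 60.0]
sd = [10.0, 10.0]
w = [0.5, 0.5]
for _ in range(50):
    # E-step: responsibility of each component for each token.
    resp = []
    for x in data:
        p = [w[k] * normal_pdf(x, mu[k], sd[k]) for k in range(2)]
        s = sum(p)
        resp.append([pk / s for pk in p])
    # M-step: re-estimate mixture weights, means, and spreads.
    for k in range(2):
        nk = sum(r[k] for r in resp)
        w[k] = nk / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        sd[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk)

print(sorted(round(m, 1) for m in mu))  # component means near 10 and 50
```

The same machinery extends to the audiovisual case by making each token a vector of auditory and visual cues; the learned component responsibilities then implicitly weight each cue by how well it separates the categories.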

  16. The Role of Color in Search Templates for Real-world Target Objects.

    PubMed

    Nako, Rebecca; Smith, Tim J; Eimer, Martin

    2016-11-01

    During visual search, target representations (attentional templates) control the allocation of attention to template-matching objects. The activation of new attentional templates can be prompted by verbal or pictorial target specifications. We measured the N2pc component of the ERP as a temporal marker of attentional target selection to determine the role of color signals in search templates for real-world search target objects that are set up in response to word or picture cues. On each trial run, a word cue (e.g., "apple") was followed by three search displays that contained the cued target object among three distractors. The selection of the first target was based on the word cue only, whereas selection of the two subsequent targets could be controlled by templates set up after the first visual presentation of the target (picture cue). In different trial runs, search displays either contained objects in their natural colors or monochromatic objects. These two display types were presented in different blocks (Experiment 1) or in random order within each block (Experiment 2). RTs were faster, and target N2pc components emerged earlier for the second and third display of each trial run relative to the first display, demonstrating that pictures are more effective than word cues in guiding search. N2pc components were triggered more rapidly for targets in the second and third display in trial runs with colored displays. This demonstrates that when visual target attributes are fully specified by picture cues, the additional presence of color signals in target templates facilitates the speed with which attention is allocated to template-matching objects. No such selection benefits for colored targets were found when search templates were set up in response to word cues. Experiment 2 showed that color templates activated by word cues can even impair the attentional selection of noncolored targets. Results provide new insights into the status of color during the guidance of visual search for real-world target objects. Color is a powerful guiding feature when the precise visual properties of these objects are known but seems to be less important when search targets are specified by word cues.

  17. Lower region: a new cue for figure-ground assignment.

    PubMed

    Vecera, Shaun P; Vogel, Edward K; Woodman, Geoffrey F

    2002-06-01

    Figure-ground assignment is an important visual process; humans recognize, attend to, and act on figures, not backgrounds. There are many visual cues for figure-ground assignment. A new cue to figure-ground assignment, called lower region, is presented: Regions in the lower portion of a stimulus array appear more figurelike than regions in the upper portion of the display. This phenomenon was explored, and it was demonstrated that the lower-region preference is not influenced by contrast, eye movements, or voluntary spatial attention. It was found that the lower region is defined relative to the stimulus display, linking the lower-region preference to pictorial depth perception cues. The results are discussed in terms of the environmental regularities that this new figure-ground cue may reflect.

  18. The Gaze-Cueing Effect in the United States and Japan: Influence of Cultural Differences in Cognitive Strategies on Control of Attention

    PubMed Central

    Takao, Saki; Yamani, Yusuke; Ariga, Atsunori

    2018-01-01

    The direction of gaze automatically and exogenously guides visual spatial attention, a phenomenon termed the gaze-cueing effect. Although this effect arises when the duration of stimulus onset asynchrony (SOA) between a non-predictive gaze cue and the target is relatively long, no empirical research has examined the factors underlying this extended cueing effect. Two experiments compared the gaze-cueing effect at longer SOAs (700 ms) in Japanese and American participants. Cross-cultural studies on cognition suggest that Westerners tend to use a context-independent analytical strategy to process visual environments, whereas Asians use a context-dependent holistic approach. We hypothesized that Japanese participants would not demonstrate the gaze-cueing effect at longer SOAs because they are more sensitive to contextual information, such as the knowledge that the direction of a gaze is not predictive. Furthermore, we hypothesized that American participants would demonstrate the gaze-cueing effect at the long SOAs because they tend to follow gaze direction whether it is predictive or not. In Experiment 1, American participants demonstrated the gaze-cueing effect at the long SOA, indicating that their attention was driven by the central non-predictive gaze direction regardless of the SOAs. In Experiment 2, Japanese participants demonstrated no gaze-cueing effect at the long SOA, suggesting that the Japanese participants exercised voluntary control of their attention, which inhibited the gaze-cueing effect with the long SOA. Our findings suggest that the control of visual spatial attention elicited by social stimuli systematically differs between American and Japanese individuals. PMID:29379457

  19. The Gaze-Cueing Effect in the United States and Japan: Influence of Cultural Differences in Cognitive Strategies on Control of Attention.

    PubMed

    Takao, Saki; Yamani, Yusuke; Ariga, Atsunori

    2017-01-01

    The direction of gaze automatically and exogenously guides visual spatial attention, a phenomenon termed the gaze-cueing effect. Although this effect arises when the duration of stimulus onset asynchrony (SOA) between a non-predictive gaze cue and the target is relatively long, no empirical research has examined the factors underlying this extended cueing effect. Two experiments compared the gaze-cueing effect at longer SOAs (700 ms) in Japanese and American participants. Cross-cultural studies on cognition suggest that Westerners tend to use a context-independent analytical strategy to process visual environments, whereas Asians use a context-dependent holistic approach. We hypothesized that Japanese participants would not demonstrate the gaze-cueing effect at longer SOAs because they are more sensitive to contextual information, such as the knowledge that the direction of a gaze is not predictive. Furthermore, we hypothesized that American participants would demonstrate the gaze-cueing effect at the long SOAs because they tend to follow gaze direction whether it is predictive or not. In Experiment 1, American participants demonstrated the gaze-cueing effect at the long SOA, indicating that their attention was driven by the central non-predictive gaze direction regardless of the SOAs. In Experiment 2, Japanese participants demonstrated no gaze-cueing effect at the long SOA, suggesting that the Japanese participants exercised voluntary control of their attention, which inhibited the gaze-cueing effect with the long SOA. Our findings suggest that the control of visual spatial attention elicited by social stimuli systematically differs between American and Japanese individuals.

  20. Sensory modality of smoking cues modulates neural cue reactivity.

    PubMed

    Yalachkov, Yavor; Kaiser, Jochen; Görres, Andreas; Seehaus, Arne; Naumer, Marcus J

    2013-01-01

    Behavioral experiments have demonstrated that the sensory modality of presentation modulates drug cue reactivity. The present study on nicotine addiction tested whether neural responses to smoking cues are modulated by the sensory modality of stimulus presentation. We measured brain activation using functional magnetic resonance imaging (fMRI) in 15 smokers and 15 nonsmokers while they viewed images of smoking paraphernalia and control objects and while they touched the same objects without seeing them. Haptically presented, smoking-related stimuli induced more pronounced neural cue reactivity than visual cues in the left dorsal striatum in smokers compared to nonsmokers. The severity of nicotine dependence correlated positively with the preference for haptically explored smoking cues in the left inferior parietal lobule/somatosensory cortex, right fusiform gyrus/inferior temporal cortex/cerebellum, hippocampus/parahippocampal gyrus, posterior cingulate cortex, and supplementary motor area. These observations are in line with the hypothesized role of the dorsal striatum for the expression of drug habits and the well-established concept of drug-related automatized schemata, since haptic perception is more closely linked to the corresponding object-specific action pattern than visual perception. Moreover, our findings demonstrate that with the growing severity of nicotine dependence, brain regions involved in object perception, memory, self-processing, and motor control exhibit an increasing preference for haptic over visual smoking cues. This difference was not found for control stimuli. Considering the sensory modality of the presented cues could serve to develop more reliable fMRI-specific biomarkers, more ecologically valid experimental designs, and more effective cue-exposure therapies of addiction.

  1. Cues for cavity nesters: investigating relevant zeitgebers for emerging leafcutting bees, Megachile rotundata

    USDA-ARS?s Scientific Manuscript database

    Emerging insects rely on external cues to synchronize themselves with the environment. Thermoperiod has been identified as an important cue and may be important for insects that emerge from light-restricted habitats. The alfalfa leafcutting bee, Megachile rotundata, a cavity-nesting bee, undergoes d...

  2. Neural Responses to Visual Food Cues According to Weight Status: A Systematic Review of Functional Magnetic Resonance Imaging Studies

    PubMed Central

    Pursey, Kirrilly M.; Stanwell, Peter; Callister, Robert J.; Brain, Katherine; Collins, Clare E.; Burrows, Tracy L.

    2014-01-01

    Emerging evidence from recent neuroimaging studies suggests that specific food-related behaviors contribute to the development of obesity. The aim of this review was to report the neural responses to visual food cues, as assessed by functional magnetic resonance imaging (fMRI), in humans of differing weight status. Published studies to 2014 were retrieved and included if they used visual food cues, studied humans >18 years old, reported weight status, and included fMRI outcomes. Sixty studies were identified that investigated the neural responses of healthy weight participants (n = 26), healthy weight compared to obese participants (n = 17), and weight-loss interventions (n = 12). High-calorie food images were used in the majority of studies (n = 36); however, image selection justification was only provided in 19 studies. Obese individuals had increased activation of reward-related brain areas including the insula and orbitofrontal cortex in response to visual food cues compared to healthy weight individuals, and this was particularly evident in response to energy dense cues. Additionally, obese individuals were more responsive to food images when satiated. Meta-analysis of changes in neural activation post-weight loss revealed small areas of convergence across studies in brain areas related to emotion, memory, and learning, including the cingulate gyrus, lentiform nucleus, and precuneus. Differential activation patterns to visual food cues were observed between obese, healthy weight, and weight-loss populations. Future studies require standardization of nutrition variables and fMRI outcomes to enable more direct comparisons between studies. PMID:25988110

  3. Neural responses to visual food cues according to weight status: a systematic review of functional magnetic resonance imaging studies.

    PubMed

    Pursey, Kirrilly M; Stanwell, Peter; Callister, Robert J; Brain, Katherine; Collins, Clare E; Burrows, Tracy L

    2014-01-01

    Emerging evidence from recent neuroimaging studies suggests that specific food-related behaviors contribute to the development of obesity. The aim of this review was to report the neural responses to visual food cues, as assessed by functional magnetic resonance imaging (fMRI), in humans of differing weight status. Published studies to 2014 were retrieved and included if they used visual food cues, studied humans >18 years old, reported weight status, and included fMRI outcomes. Sixty studies were identified that investigated the neural responses of healthy weight participants (n = 26), healthy weight compared to obese participants (n = 17), and weight-loss interventions (n = 12). High-calorie food images were used in the majority of studies (n = 36); however, image selection justification was only provided in 19 studies. Obese individuals had increased activation of reward-related brain areas including the insula and orbitofrontal cortex in response to visual food cues compared to healthy weight individuals, and this was particularly evident in response to energy dense cues. Additionally, obese individuals were more responsive to food images when satiated. Meta-analysis of changes in neural activation post-weight loss revealed small areas of convergence across studies in brain areas related to emotion, memory, and learning, including the cingulate gyrus, lentiform nucleus, and precuneus. Differential activation patterns to visual food cues were observed between obese, healthy weight, and weight-loss populations. Future studies require standardization of nutrition variables and fMRI outcomes to enable more direct comparisons between studies.

  4. Visual-vestibular cue integration for heading perception: applications of optimal cue integration theory.

    PubMed

    Fetsch, Christopher R; Deangelis, Gregory C; Angelaki, Dora E

    2010-05-01

    The perception of self-motion is crucial for navigation, spatial orientation and motor control. In particular, estimation of one's direction of translation, or heading, relies heavily on multisensory integration in most natural situations. Visual and nonvisual (e.g., vestibular) information can be used to judge heading, but each modality alone is often insufficient for accurate performance. It is not surprising, then, that visual and vestibular signals converge frequently in the nervous system, and that these signals interact in powerful ways at the level of behavior and perception. Early behavioral studies of visual-vestibular interactions consisted mainly of descriptive accounts of perceptual illusions and qualitative estimation tasks, often with conflicting results. In contrast, cue integration research in other modalities has benefited from the application of rigorous psychophysical techniques, guided by normative models that rest on the foundation of ideal-observer analysis and Bayesian decision theory. Here we review recent experiments that have attempted to harness these so-called optimal cue integration models for the study of self-motion perception. Some of these studies used nonhuman primate subjects, enabling direct comparisons between behavioral performance and simultaneously recorded neuronal activity. The results indicate that humans and monkeys can integrate visual and vestibular heading cues in a manner consistent with optimal integration theory, and that single neurons in the dorsal medial superior temporal area show striking correlates of the behavioral effects. This line of research and other applications of normative cue combination models should continue to shed light on mechanisms of self-motion perception and the neuronal basis of multisensory integration.
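The normative prediction that this record refers to is compact enough to state in code. In the standard ideal-observer account, each cue's estimate is weighted by its reliability (inverse variance), and the combined estimate has lower variance than either cue alone. The numbers below are illustrative, not values from the study:

```python
# Reliability-weighted ("optimal") combination of two independent cues,
# as in standard Bayesian cue-integration theory. Example values invented.
def optimal_combine(est_a, var_a, est_b, var_b):
    # Weight of each cue is proportional to its inverse variance.
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    w_b = 1 - w_a
    combined = w_a * est_a + w_b * est_b
    # Predicted variance of the fused estimate is below both inputs.
    combined_var = (var_a * var_b) / (var_a + var_b)
    return combined, combined_var

# Heading estimates in degrees: visual cue more reliable than vestibular.
heading, var = optimal_combine(est_a=10.0, var_a=4.0, est_b=16.0, var_b=12.0)
print(round(heading, 2), round(var, 2))  # -> 11.5 3.0
```

Empirical tests of "optimal" integration, like those reviewed above, compare measured single-cue and combined-cue thresholds against exactly this variance prediction.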

  5. Cross-modal links among vision, audition, and touch in complex environments.

    PubMed

    Ferris, Thomas K; Sarter, Nadine B

    2008-02-01

    This study sought to determine whether performance effects of cross-modal spatial links that were observed in earlier laboratory studies scale to more complex environments and need to be considered in multimodal interface design. It also revisits the unresolved issue of cross-modal cuing asymmetries. Previous laboratory studies employing simple cues, tasks, and/or targets have demonstrated that the efficiency of processing visual, auditory, and tactile stimuli is affected by the modality, lateralization, and timing of surrounding cues. Very few studies have investigated these cross-modal constraints in the context of more complex environments to determine whether they scale and how complexity affects the nature of cross-modal cuing asymmetries. A microworld simulation of battlefield operations with a complex task set and meaningful visual, auditory, and tactile stimuli was used to investigate cuing effects for all cross-modal pairings. Significant asymmetric performance effects of cross-modal spatial links were observed. Auditory cues shortened response latencies for collocated visual targets but visual cues did not do the same for collocated auditory targets. Responses to contralateral (rather than ipsilateral) targets were faster for tactually cued auditory targets and each visual-tactile cue-target combination, suggesting an inhibition-of-return effect. The spatial relationships between multimodal cues and targets significantly affect target response times in complex environments. The performance effects of cross-modal links and the observed cross-modal cuing asymmetries need to be examined in more detail and considered in future interface design. The findings from this study have implications for the design of multimodal and adaptive interfaces and for supporting attention management in complex, data-rich domains.

  6. Short-term visual memory for location in depth: A U-shaped function of time.

    PubMed

    Reeves, Adam; Lei, Quan

    2017-10-01

    Short-term visual memory was studied by displaying arrays of four or five numerals, each numeral in its own depth plane, followed after various delays by an arrow cue shown in one of the depth planes. Subjects reported the numeral at the depth cued by the arrow. Accuracy fell with increasing cue delay for the first 500 ms or so, and then recovered almost fully. This dipping pattern contrasts with the usual iconic decay observed for memory traces. The dip occurred with or without a verbal or color-shape retention load on working memory. In contrast, accuracy did not change with delay when a tonal cue replaced the arrow cue. We hypothesized that information concerning the depths of the numerals decays over time in sensory memory, but that cued recall is aided later on by transfer to a visual memory specialized for depth. This transfer is sufficiently rapid with a tonal cue to compensate for the sensory decay, but it is slowed by the need to tag the arrow cue's depth relative to the depths of the numerals, exposing a dip when sensation has decayed and transfer is not yet complete. A model with a fixed rate of sensory decay and varied transfer rates across individuals captures the dip as well as the cue modality effect.
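The two-process account in this abstract (a sensory trace decaying at a fixed rate, plus a slower transfer into a depth-specialized store) can be illustrated with a toy simulation. This is a sketch of the qualitative idea only; the functional forms and all parameter values below are invented, not the authors' fitted model:

```python
# Toy two-process sketch of a U-shaped recall function: a fast-decaying
# sensory trace plus a slowly accumulating transferred trace. All
# parameters are illustrative, not fitted values from the study.
import math

def recall_accuracy(delay_ms, decay_tau=250.0, transfer_tau=800.0,
                    floor=0.55, gain=0.35):
    sensory = math.exp(-delay_ms / decay_tau)            # decaying sensory trace
    transferred = 1.0 - math.exp(-delay_ms / transfer_tau)  # growing durable trace
    # Recall is supported by whichever source is currently stronger.
    return floor + gain * max(sensory, transferred)

acc = [round(recall_accuracy(t), 3) for t in (0, 500, 2000)]
print(acc)  # accuracy dips at the intermediate delay, then recovers
```

The dip appears at delays where the sensory trace has already faded but transfer is not yet complete; a faster transfer rate (smaller `transfer_tau`), as hypothesized for the tonal cue, shrinks or eliminates the dip.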

  7. Eye Contact Is Crucial for Referential Communication in Pet Dogs.

    PubMed

    Savalli, Carine; Resende, Briseida; Gaunet, Florence

    2016-01-01

    Dogs discriminate human direction of attention cues, such as body, gaze, head and eye orientation, in several circumstances. Eye contact particularly seems to provide information on human readiness to communicate; when there is such an ostensive cue, dogs tend to follow human communicative gestures more often. However, little is known about how such cues influence the production of communicative signals (e.g. gaze alternation and sustained gaze) in dogs. In the current study, in order to get unreachable food, dogs needed to communicate with their owners in several conditions that differ according to the direction of owners' visual cues, namely gaze, head, eyes, and availability to make eye contact. Results provided evidence that pet dogs did not rely on details of owners' direction of visual attention. Instead, they relied on the whole combination of visual cues and especially on the owners' availability to make eye contact. Dogs increased visual communicative behaviors when they established eye contact with their owners, a different strategy compared to apes and baboons, which intensify vocalizations and gestures when a human is not visually attending. The difference in strategy is possibly due to their distinct status: domesticated vs. wild. Results are discussed taking into account the ecological relevance of the task, since pet dogs live in a human environment and face similar situations on a daily basis during their lives.

  8. Social Beliefs and Visual Attention: How the Social Relevance of a Cue Influences Spatial Orienting.

    PubMed

    Gobel, Matthias S; Tufft, Miles R A; Richardson, Daniel C

    2018-05-01

    We are highly tuned to each other's visual attention. Perceiving the eye or hand movements of another person can influence the timing of a saccade or the reach of our own. However, the explanation for such spatial orienting in interpersonal contexts remains disputed. Is it due to the social appearance of the cue (a hand or an eye) or due to its social relevance (a cue that is connected to another person with attentional and intentional states)? We developed an interpersonal version of the Posner spatial cueing paradigm. Participants saw a cue and detected a target at the same or a different location, while interacting with an unseen partner. Participants were led to believe that the cue was either connected to the gaze location of their partner or was generated randomly by a computer (Experiment 1), and that their partner had higher or lower social rank while engaged in the same or a different task (Experiment 2). We found that spatial cue-target compatibility effects were greater when the cue related to a partner's gaze. This effect was amplified by the partner's social rank, but only when participants believed their partner was engaged in the same task. Taken together, this is strong evidence in support of the idea that spatial orienting is interpersonally attuned to the social relevance of the cue (whether the cue is connected to another person, who this person is, and what this person is doing) and does not exclusively rely on the social appearance of the cue. Visual attention is not only guided by the physical salience of one's environment but also by the mental representation of its social relevance. © 2017 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.

  9. Magnitude and duration of cue-induced craving for marijuana in volunteers with cannabis use disorder

    PubMed Central

    Lundahl, Leslie H.; Greenwald, Mark K.

    2016-01-01

    Aims: Evaluate magnitude and duration of subjective and physiologic responses to neutral and marijuana (MJ)-related cues in cannabis-dependent volunteers. Methods: 33 volunteers (17 male) who met DSM-IV criteria for Cannabis Abuse or Dependence were exposed to neutral (first) then MJ-related visual, auditory, olfactory and tactile cues. Mood, drug craving and physiology were assessed at baseline, post-neutral, post-MJ and 15-min post-MJ cue exposure to determine the magnitude of cue-responses. For a subset of participants (n=15; 9 male), measures of craving and physiology were also collected at 30-, 90-, and 150-min post-MJ cue to examine the duration of cue-effects. Results: In cue-response magnitude analyses, visual analog scale (VAS) items craving for, urge to use, and desire to smoke MJ, Total and Compulsivity subscale scores of the Marijuana Craving Questionnaire, anxiety ratings, and diastolic blood pressure (BP) were significantly elevated following MJ vs. neutral cue exposure. In cue-response duration analyses, desire and urge to use MJ remained significantly elevated at 30-, 90- and 150-min post-MJ-cue exposure, relative to baseline and neutral cues. Conclusions: Presentation of polysensory MJ cues increased MJ craving, anxiety and diastolic BP relative to baseline and neutral cues. MJ craving remained elevated up to 150 min after MJ cue presentation. This finding confirms that carry-over effects from drug cue presentation must be considered in cue reactivity studies. PMID:27436749

  10. Magnitude and duration of cue-induced craving for marijuana in volunteers with cannabis use disorder.

    PubMed

    Lundahl, Leslie H; Greenwald, Mark K

    2016-09-01

    Evaluate magnitude and duration of subjective and physiologic responses to neutral and marijuana (MJ)-related cues in cannabis-dependent volunteers. 33 volunteers (17 male) who met DSM-IV criteria for Cannabis Abuse or Dependence were exposed to neutral (first) then MJ-related visual, auditory, olfactory and tactile cues. Mood, drug craving and physiology were assessed at baseline, post-neutral, post-MJ and 15-min post-MJ cue exposure to determine the magnitude of cue-responses. For a subset of participants (n=15; 9 male), measures of craving and physiology were also collected at 30-, 90-, and 150-min post-MJ cue to examine the duration of cue-effects. In cue-response magnitude analyses, visual analog scale (VAS) items craving for, urge to use, and desire to smoke MJ, Total and Compulsivity subscale scores of the Marijuana Craving Questionnaire, anxiety ratings, and diastolic blood pressure (BP) were significantly elevated following MJ vs. neutral cue exposure. In cue-response duration analyses, desire and urge to use MJ remained significantly elevated at 30-, 90- and 150-min post-MJ-cue exposure, relative to baseline and neutral cues. Presentation of polysensory MJ cues increased MJ craving, anxiety and diastolic BP relative to baseline and neutral cues. MJ craving remained elevated up to 150 min after MJ cue presentation. This finding confirms that carry-over effects from drug cue presentation must be considered in cue reactivity studies. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  11. Freezing of Gait in Parkinson's Disease: An Overload Problem?

    PubMed

    Beck, Eric N; Ehgoetz Martens, Kaylena A; Almeida, Quincy J

    2015-01-01

    Freezing of gait (FOG) is arguably the most severe symptom associated with Parkinson's disease (PD), and often occurs while performing dual tasks or approaching narrowed and cluttered spaces. While it is well known that visual cues alleviate FOG, it is not clear if this effect may be the result of cognitive or sensorimotor mechanisms. Nevertheless, the role of vision may be a critical link that might allow us to disentangle this question. Gaze behaviour has yet to be carefully investigated while freezers approach narrow spaces, thus the overall objective of this study was to explore the interaction between cognitive and sensory-perceptual influences on FOG. In experiment #1, if cognitive load is the underlying factor leading to FOG, then one might expect that a dual-task would elicit FOG episodes even in the presence of visual cues, since the load on attention would interfere with utilization of visual cues. Alternatively, if visual cues alleviate gait despite performance of a dual-task, then it may be more probable that sensory mechanisms are at play. In complement to this, the aim of experiment #2 was to further challenge the sensory systems, by removing vision of the lower limbs and thereby forcing participants to rely on other forms of sensory feedback rather than vision while walking toward the narrow space. Spatiotemporal aspects of gait, percentage of gaze fixation frequency and duration, as well as skin conductance levels were measured in freezers and non-freezers across both experiments. Results from experiment #1 indicated that although freezers and non-freezers both walked with worse gait while performing the dual-task, in freezers, gait was relieved by visual cues regardless of whether the cognitive demands of the dual-task were present. 
At baseline and while dual-tasking, freezers demonstrated a gaze behaviour that neglected the doorway and instead focused primarily on the pathway, a strategy that non-freezers adopted only when performing the dual-task. Interestingly, with the combination of visual cues and dual-task, freezers increased the frequency and duration of fixations toward the doorway, compared to non-freezers. These results suggest that although increasing demand on attention does significantly deteriorate gait in freezers, an increase in cognitive demand is not exclusively responsible for freezing (since visual cues were able to overcome any interference elicited by the dual-task). When vision of the lower limbs was removed in experiment #2, only the freezers' gait was affected. However, when visual cues were present, freezers' gait improved regardless of the dual-task. This gait behaviour was accompanied by a greater amount of time spent looking at the visual cues irrespective of the dual-task. Since removing vision of the lower limbs hindered gait even under low attentional demand, restricted sensory feedback may be an important factor in the mechanisms underlying FOG.

  12. Freezing of Gait in Parkinson’s Disease: An Overload Problem?

    PubMed Central

    Beck, Eric N.; Ehgoetz Martens, Kaylena A.; Almeida, Quincy J.

    2015-01-01

    Freezing of gait (FOG) is arguably the most severe symptom associated with Parkinson’s disease (PD), and often occurs while performing dual tasks or approaching narrowed and cluttered spaces. While it is well known that visual cues alleviate FOG, it is not clear if this effect may be the result of cognitive or sensorimotor mechanisms. Nevertheless, the role of vision may be a critical link that might allow us to disentangle this question. Gaze behaviour has yet to be carefully investigated while freezers approach narrow spaces, thus the overall objective of this study was to explore the interaction between cognitive and sensory-perceptual influences on FOG. In experiment #1, if cognitive load is the underlying factor leading to FOG, then one might expect that a dual-task would elicit FOG episodes even in the presence of visual cues, since the load on attention would interfere with utilization of visual cues. Alternatively, if visual cues alleviate gait despite performance of a dual-task, then it may be more probable that sensory mechanisms are at play. In complement to this, the aim of experiment #2 was to further challenge the sensory systems, by removing vision of the lower limbs and thereby forcing participants to rely on other forms of sensory feedback rather than vision while walking toward the narrow space. Spatiotemporal aspects of gait, percentage of gaze fixation frequency and duration, as well as skin conductance levels were measured in freezers and non-freezers across both experiments. Results from experiment #1 indicated that although freezers and non-freezers both walked with worse gait while performing the dual-task, in freezers, gait was relieved by visual cues regardless of whether the cognitive demands of the dual-task were present. 
At baseline and while dual-tasking, freezers demonstrated a gaze behaviour that neglected the doorway and instead focused primarily on the pathway, a strategy that non-freezers adopted only when performing the dual-task. Interestingly, with the combination of visual cues and dual-task, freezers increased the frequency and duration of fixations toward the doorway, compared to non-freezers. These results suggest that although increasing demand on attention does significantly deteriorate gait in freezers, an increase in cognitive demand is not exclusively responsible for freezing (since visual cues were able to overcome any interference elicited by the dual-task). When vision of the lower limbs was removed in experiment #2, only the freezers’ gait was affected. However, when visual cues were present, freezers’ gait improved regardless of the dual-task. This gait behaviour was accompanied by a greater amount of time spent looking at the visual cues irrespective of the dual-task. Since removing vision of the lower limbs hindered gait even under low attentional demand, restricted sensory feedback may be an important factor in the mechanisms underlying FOG. PMID:26678262

  13. Relevance of visual cues for orientation at familiar sites by homing pigeons: an experiment in a circular arena.

    PubMed Central

    Gagliardo, A.; Odetti, F.; Ioalè, P.

    2001-01-01

    Whether pigeons use visual landmarks for orientation from familiar locations has been a subject of debate. By recording the directional choices of both anosmic and control pigeons while exiting from a circular arena we were able to assess the relevance of olfactory and visual cues for orientation from familiar sites. When the birds could see the surroundings, both anosmic and control pigeons were homeward oriented. When the view of the landscape was prevented by screens that surrounded the arena, the control pigeons exited from the arena approximately in the home direction, while the anosmic pigeons' distribution was not different from random. Our data suggest that olfactory and visual cues play a critical, but interchangeable, role for orientation at familiar sites. PMID:11571054

  14. Alcohol-cue exposure effects on craving and attentional bias in underage college-student drinkers.

    PubMed

    Ramirez, Jason J; Monti, Peter M; Colwill, Ruth M

    2015-06-01

    The effect of alcohol-cue exposure on eliciting craving has been well documented, and numerous theoretical models assert that craving is a clinically significant construct central to the motivation and maintenance of alcohol-seeking behavior. Furthermore, some theories propose a relationship between craving and attention, such that cue-induced increases in craving bias attention toward alcohol cues, which, in turn, perpetuates craving. This study examined the extent to which alcohol cues induce craving and bias attention toward alcohol cues among underage college-student drinkers. We designed within-subject cue-reactivity and visual-probe tasks to assess in vivo alcohol-cue exposure effects on craving and attentional bias in 39 undergraduate college drinkers (ages 18-20). Participants expressed greater subjective craving to drink alcohol following in vivo cue exposure to a commonly consumed beer compared with water exposure. Furthermore, following alcohol-cue exposure, participants exhibited greater attentional biases toward alcohol cues as measured by a visual-probe task. In addition to the cue-exposure effects on craving and attentional bias, within-subject differences in craving across sessions marginally predicted within-subject differences in attentional bias. Implications for both theory and practice are discussed. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  15. A bilateral advantage for maintaining objects in visual short term memory.

    PubMed

    Holt, Jessica L; Delvenne, Jean-François

    2015-01-01

    Research has shown that attentional pre-cues can subsequently influence the transfer of information into visual short term memory (VSTM) (Schmidt, B., Vogel, E., Woodman, G., & Luck, S. (2002). Voluntary and automatic attentional control of visual working memory. Perception & Psychophysics, 64(5), 754-763). However, studies also suggest that those effects are constrained by the hemifield alignment of the pre-cues (Holt, J. L., & Delvenne, J.-F. (2014). A bilateral advantage in controlling access to visual short-term memory. Experimental Psychology, 61(2), 127-133), revealing better recall when distributed across hemifields relative to within a single hemifield (otherwise known as a bilateral field advantage). By manipulating the duration of the retention interval in a colour change detection task (1s, 3s), we investigated whether selective pre-cues can also influence how information is later maintained in VSTM. The results revealed that the pre-cues influenced the maintenance of the colours in VSTM, promoting consistent performance across retention intervals (Experiments 1 & 4). However, those effects were only shown when the pre-cues were directed to stimuli displayed across hemifields relative to stimuli within a single hemifield. Importantly, the results were not replicated when participants were required to memorise colours (Experiment 2) or locations (Experiment 3) in the absence of spatial pre-cues. Those findings strongly suggest that attentional pre-cues have a strong influence on both the transfer of information in VSTM and its subsequent maintenance, allowing bilateral items to better survive decay. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Visual Cues Given by Humans Are Not Sufficient for Asian Elephants (Elephas maximus) to Find Hidden Food

    PubMed Central

    Plotnik, Joshua M.; Pokorny, Jennifer J.; Keratimanochaya, Titiporn; Webb, Christine; Beronja, Hana F.; Hennessy, Alice; Hill, James; Hill, Virginia J.; Kiss, Rebecca; Maguire, Caitlin; Melville, Beckett L.; Morrison, Violet M. B.; Seecoomar, Dannah; Singer, Benjamin; Ukehaxhaj, Jehona; Vlahakis, Sophia K.; Ylli, Dora; Clayton, Nicola S.; Roberts, John; Fure, Emilie L.; Duchatelier, Alicia P.; Getz, David

    2013-01-01

    Recent research suggests that domesticated species – due to artificial selection by humans for specific, preferred behavioral traits – are better than wild animals at responding to visual cues given by humans about the location of hidden food. Although this seems to be supported by studies on a range of domesticated (including dogs, goats and horses) and wild (including wolves and chimpanzees) animals, there is also evidence that exposure to humans positively influences the ability of both wild and domesticated animals to follow these same cues. Here, we test the performance of Asian elephants (Elephas maximus) on an object choice task that provides them with visual-only cues given by humans about the location of hidden food. Captive elephants are interesting candidates for investigating how both domestication and human exposure may impact cue-following as they represent a non-domesticated species with almost constant human interaction. As a group, the elephants (n = 7) in our study were unable to follow pointing, body orientation or a combination of both as honest signals of food location. They were, however, able to follow vocal commands with which they were already familiar in a novel context, suggesting the elephants are able to follow cues if they are sufficiently salient. Although the elephants’ inability to follow the visual cues provides partial support for the domestication hypothesis, an alternative explanation is that elephants may rely more heavily on other sensory modalities, specifically olfaction and audition. Further research will be needed to rule out this alternative explanation. PMID:23613804

  17. Visual cues given by humans are not sufficient for Asian elephants (Elephas maximus) to find hidden food.

    PubMed

    Plotnik, Joshua M; Pokorny, Jennifer J; Keratimanochaya, Titiporn; Webb, Christine; Beronja, Hana F; Hennessy, Alice; Hill, James; Hill, Virginia J; Kiss, Rebecca; Maguire, Caitlin; Melville, Beckett L; Morrison, Violet M B; Seecoomar, Dannah; Singer, Benjamin; Ukehaxhaj, Jehona; Vlahakis, Sophia K; Ylli, Dora; Clayton, Nicola S; Roberts, John; Fure, Emilie L; Duchatelier, Alicia P; Getz, David

    2013-01-01

    Recent research suggests that domesticated species--due to artificial selection by humans for specific, preferred behavioral traits--are better than wild animals at responding to visual cues given by humans about the location of hidden food. Although this seems to be supported by studies on a range of domesticated (including dogs, goats and horses) and wild (including wolves and chimpanzees) animals, there is also evidence that exposure to humans positively influences the ability of both wild and domesticated animals to follow these same cues. Here, we test the performance of Asian elephants (Elephas maximus) on an object choice task that provides them with visual-only cues given by humans about the location of hidden food. Captive elephants are interesting candidates for investigating how both domestication and human exposure may impact cue-following as they represent a non-domesticated species with almost constant human interaction. As a group, the elephants (n = 7) in our study were unable to follow pointing, body orientation or a combination of both as honest signals of food location. They were, however, able to follow vocal commands with which they were already familiar in a novel context, suggesting the elephants are able to follow cues if they are sufficiently salient. Although the elephants' inability to follow the visual cues provides partial support for the domestication hypothesis, an alternative explanation is that elephants may rely more heavily on other sensory modalities, specifically olfaction and audition. Further research will be needed to rule out this alternative explanation.

  18. Dynamics of the spatial scale of visual attention revealed by brain event-related potentials

    NASA Technical Reports Server (NTRS)

    Luo, Y. J.; Greenwood, P. M.; Parasuraman, R.

    2001-01-01

    The temporal dynamics of the spatial scaling of attention during visual search were examined by recording event-related potentials (ERPs). A total of 16 young participants performed a search task in which the search array was preceded by valid cues that varied in size and hence in precision of target localization. The effects of cue size on short-latency (P1 and N1) ERP components, and the time course of these effects with variation in cue-target stimulus onset asynchrony (SOA), were examined. Reaction time (RT) to discriminate a target was prolonged as cue size increased. The amplitudes of the posterior P1 and N1 components of the ERP evoked by the search array were affected in opposite ways by the size of the precue: P1 amplitude increased whereas N1 amplitude decreased as cue size increased, particularly following the shortest SOA. The results show that when top-down information about the region to be searched is less precise (larger cues), RT is slowed and the neural generators of P1 become more active, reflecting the additional computations required in changing the spatial scale of attention to the appropriate element size to facilitate target discrimination. In contrast, the decrease in N1 amplitude with cue size may reflect a broadening of the spatial gradient of attention. The results provide electrophysiological evidence that changes in the spatial scale of attention modulate neural activity in early visual cortical areas and activate at least two temporally overlapping component processes during visual search.

  19. Proximal versus distal cue utilization in spatial navigation: the role of visual acuity?

    PubMed

    Carman, Heidi M; Mactutus, Charles F

    2002-09-01

    Proximal versus distal cue use in the Morris water maze is a widely accepted strategy for the dissociation of various problems affecting spatial navigation in rats such as aging, head trauma, lesions, and pharmacological or hormonal agents. Of the limited number of ontogenetic rat studies conducted, the majority have approached the problem of preweanling spatial navigation through a similar proximal-distal dissociation. An implicit assumption among all of these studies has been that the animal's visual system is sufficient to permit robust spatial navigation. We challenged this assumption and have addressed the role of visual acuity in spatial navigation in the preweanling Fischer 344-N rat by training animals to locate a visible (proximal) or hidden (distal) platform using double or null extramaze cues within the testing environment. All pups demonstrated improved performance across training, but animals presented with a visible platform, regardless of extramaze cues, simultaneously reached asymptotic performance levels; animals presented with a hidden platform, dependent upon location of extramaze cues, differentially reached asymptotic performance levels. Probe trial performance, defined by quadrant time and platform crossings, revealed that distal-double-cue pups demonstrated spatial navigational ability superior to that of the remaining groups. These results suggest that a pup's ability to spatially navigate a hidden platform is dependent on not only its response repertoire and task parameters, but also its visual acuity, as determined by the extramaze cue location within the testing environment. The standard hidden versus visible platform dissociation may not be a satisfactory strategy for the control of potential sensory deficits.

  20. Early and Late Inhibitions Elicited by a Peripheral Visual Cue on Manual Response to a Visual Target: Are They Based on Cartesian Coordinates?

    ERIC Educational Resources Information Center

    Gawryszewski, Luiz G.; Carreiro, Luiz Renato R.; Magalhaes, Fabio V.

    2005-01-01

    A non-informative cue (C) elicits an inhibition of manual reaction time (MRT) to a visual target (T). We report an experiment to examine if the spatial distribution of this inhibitory effect follows Polar or Cartesian coordinate systems. C appeared at one out of 8 isoeccentric (7[degrees]) positions, the C-T angular distances (in polar…

  1. Visual attention to food cues in obesity: an eye-tracking study.

    PubMed

    Doolan, Katy J; Breslin, Gavin; Hanna, Donncha; Murphy, Kate; Gallagher, Alison M

    2014-12-01

    Based on the theory of incentive sensitization, the aim of this study was to investigate differences in attentional processing of food-related visual cues between normal-weight and overweight/obese males and females. Twenty-six normal-weight (14M, 12F) and 26 overweight/obese (14M, 12F) adults completed a visual probe task and an eye-tracking paradigm. Reaction times and eye movements to food and control images were collected during both a fasted and a fed condition in a counterbalanced design. Participants had greater visual attention towards high-energy-density food images compared to low-energy-density food images regardless of hunger condition. This was most pronounced in overweight/obese males, who had significantly greater maintained attention towards high-energy-density food images when compared with their normal-weight counterparts; however, no between-weight-group differences were observed for female participants. High-energy-density food images appear to capture visual attention more readily than low-energy-density food images. Results also suggest the possibility of an altered visual food cue-associated reward system in overweight/obese males. Attentional processing of food cues may play a role in eating behaviors and thus should be taken into consideration as part of an integrated approach to curbing obesity. © 2014 The Obesity Society.
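    "Maintained attention" in eye-tracking studies like this is often summarized as the proportion of total dwell time spent on one image category. An illustrative sketch (invented fixation data and category labels, not the study's pipeline):

    ```python
    # Hypothetical gaze-based attentional bias score: the share of total dwell
    # time spent on high-energy-density (HED) versus low-energy-density (LED)
    # food images. Durations are in milliseconds and entirely made up.

    def dwell_bias(fixations):
        """fixations: list of (category, duration_ms); returns HED dwell proportion."""
        hed = sum(d for cat, d in fixations if cat == "HED")
        total = sum(d for _, d in fixations)
        return hed / total

    fixations = [("HED", 420), ("LED", 180), ("HED", 300), ("LED", 100)]
    print(dwell_bias(fixations))  # 0.72
    ```

    A value above 0.5 indicates longer maintained attention on high-energy-density images, the pattern reported here for overweight/obese males.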

  2. Nonlinear Y-Like Receptive Fields in the Early Visual Cortex: An Intermediate Stage for Building Cue-Invariant Receptive Fields from Subcortical Y Cells.

    PubMed

    Gharat, Amol; Baker, Curtis L

    2017-01-25

    Many of the neurons in early visual cortex are selective for the orientation of boundaries defined by first-order cues (luminance) as well as second-order cues (contrast, texture). The neural circuit mechanism underlying this selectivity is still unclear, but some studies have proposed that it emerges from spatial nonlinearities of subcortical Y cells. To understand how inputs from the Y-cell pathway might be pooled to generate cue-invariant receptive fields, we recorded visual responses from single neurons in cat Area 18 using linear multielectrode arrays. We measured responses to drifting and contrast-reversing luminance gratings as well as contrast modulation gratings. We found that a large fraction of these neurons have nonoriented responses to gratings, similar to those of subcortical Y cells: they respond at the second harmonic (F2) to high-spatial frequency contrast-reversing gratings and at the first harmonic (F1) to low-spatial frequency drifting gratings ("Y-cell signature"). For a given neuron, spatial frequency tuning for linear (F1) and nonlinear (F2) responses is quite distinct, similar to orientation-selective cue-invariant neurons. Also, these neurons respond to contrast modulation gratings with selectivity for the carrier (texture) spatial frequency and, in some cases, orientation. Their receptive field properties suggest that they could serve as building blocks for orientation-selective cue-invariant neurons. We propose a circuit model that combines ON- and OFF-center cortical Y-like cells in an unbalanced push-pull manner to generate orientation-selective, cue-invariant receptive fields. A significant fraction of neurons in early visual cortex have specialized receptive fields that allow them to selectively respond to the orientation of boundaries that are invariant to the cue (luminance, contrast, texture, motion) that defines them. However, the neural mechanism to construct such versatile receptive fields remains unclear. 
Using multielectrode recording, we found a large fraction of neurons in early visual cortex with receptive fields not selective for orientation that have spatial nonlinearities like those of subcortical Y cells. These are strong candidates for building cue-invariant orientation-selective neurons; we present a neural circuit model that pools such neurons in an imbalanced "push-pull" manner, to generate orientation-selective cue-invariant receptive fields. Copyright © 2017 the authors 0270-6474/17/370998-16$15.00/0.

  3. Training specificity and transfer in time and distance estimation.

    PubMed

    Healy, Alice F; Tack, Lindsay Anderson; Schneider, Vivian I; Barshi, Immanuel

    2015-07-01

    Learning is often specific to the conditions of training, making it important to identify which aspects of the testing environment are crucial to be matched in the training environment. In the present study, we examined training specificity in time and distance estimation tasks that differed only in the focus of processing (FOP). External spatial cues were provided for the distance estimation task and for the time estimation task in one condition, but not in another. The presence of a concurrent alphabet secondary task was manipulated during training and testing in all estimation conditions in Experiment 1. For distance as well as for time estimation in both conditions, training of the primary estimation task was found to be specific to the presence of the secondary task. In Experiments 2 and 3, we examined transfer between one estimation task and another, with no secondary task in either case. When all conditions were equal aside from the FOP instructions, including the presence of external spatial cues, Experiment 2 showed "transfer" between tasks, suggesting that training might not be specific to the FOP. When the external spatial cues were removed from the time estimation task, Experiment 3 showed no transfer between time and distance estimations, suggesting that external task cues influenced the procedures used in the estimation tasks.

  4. Crossmodal and Incremental Perception of Audiovisual Cues to Emotional Speech

    ERIC Educational Resources Information Center

    Barkhuysen, Pashiera; Krahmer, Emiel; Swerts, Marc

    2010-01-01

    In this article we report on two experiments about the perception of audiovisual cues to emotional speech. The article addresses two questions: (1) how do visual cues from a speaker's face to emotion relate to auditory cues, and (2) what is the recognition speed for various facial cues to emotion? Both experiments reported below are based on tests…

  5. Lateralization of Frequency-Specific Networks for Covert Spatial Attention to Auditory Stimuli

    PubMed Central

    Thorpe, Samuel; D'Zmura, Michael

    2011-01-01

    We conducted a cued spatial attention experiment to investigate the time–frequency structure of human EEG induced by attentional orientation of an observer in external auditory space. Seven subjects participated in a task in which attention was cued to one of two spatial locations at left and right. Subjects were instructed to report the speech stimulus at the cued location and to ignore a simultaneous speech stream originating from the uncued location. EEG was recorded from the onset of the directional cue through the offset of the inter-stimulus interval (ISI), during which attention was directed toward the cued location. Using a wavelet spectrum, each frequency band was then normalized by the mean level of power observed in the early part of the cue interval to obtain a measure of induced power related to the deployment of attention. Topographies of band specific induced power during the cue and inter-stimulus intervals showed peaks over symmetric bilateral scalp areas. We used a bootstrap analysis of a lateralization measure defined for symmetric groups of channels in each band to identify specific lateralization events throughout the ISI. Our results suggest that the deployment and maintenance of spatially oriented attention throughout a period of 1,100 ms is marked by distinct episodes of reliable hemispheric lateralization ipsilateral to the direction in which attention is oriented. An early theta lateralization was evident over posterior parietal electrodes and was sustained throughout the ISI. In the alpha and mu bands punctuated episodes of parietal power lateralization were observed roughly 500 ms after attentional deployment, consistent with previous studies of visual attention. In the beta band these episodes show similar patterns of lateralization over frontal motor areas. These results indicate that spatial attention involves similar mechanisms in the auditory and visual modalities. PMID:21630112
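    The baseline normalization described above (each frequency band divided by its mean power in the early cue interval) can be sketched as follows. Array shapes, the baseline window, and the random data are assumptions for illustration, not the authors' code:

    ```python
    import numpy as np

    # Hedged sketch of baseline-normalized "induced power": divide each
    # frequency band of a time-frequency power array by that band's mean
    # power over a baseline window, so values > 1 mark increases relative
    # to the early cue interval. Shapes and indices are invented.

    def induced_power(power, baseline_idx):
        """power: (n_freqs, n_times) wavelet power; returns baseline-normalized power."""
        baseline = power[:, baseline_idx].mean(axis=1, keepdims=True)
        return power / baseline

    rng = np.random.default_rng(0)
    power = rng.uniform(1.0, 2.0, size=(5, 100))   # 5 bands, 100 time samples
    norm = induced_power(power, slice(0, 20))      # first 20 samples as baseline
    print(norm.shape)  # (5, 100)
    ```

    By construction, the normalized power averages to 1.0 within the baseline window of every band, so later deviations can be read directly as proportional changes.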

  6. Visual cues that are effective for contextual saccade adaptation.

    PubMed

    Azadi, Reza; Harwood, Mark R

    2014-06-01

    The accuracy of saccades, as maintained by saccade adaptation, has been shown to be context dependent: the same retinal displacement can drive different amplitude movements depending on motor contexts such as orbital starting location. There is conflicting evidence as to whether purely visual cues also affect contextual saccade adaptation and, if so, what function this might serve. We tested which visual cues might evoke contextual adaptation. Over 5 experiments, 78 naive subjects made saccades to circularly moving targets, which stepped outward or inward during the saccade depending on target movement direction, speed, or color and shape. To test whether the movement or context postsaccade was critical, we stopped the postsaccade target motion (experiment 4) or neutralized the contexts by equating postsaccade target speed to an intermediate value (experiment 5). We found contextual adaptation in all conditions except those defined by color and shape. We conclude that some, but not all, visual cues before the saccade are sufficient for contextual adaptation. We conjecture that this visual contextuality functions to allow for different motor states for different coordinated movement patterns, such as coordinated saccade and pursuit motor planning. Copyright © 2014 the American Physiological Society.

  7. Motion cue effects on human pilot dynamics in manual control

    NASA Technical Reports Server (NTRS)

    Washizu, K.; Tanaka, K.; Endo, S.; Itoko, T.

    1977-01-01

    Two experiments were conducted to study the motion cue effects on human pilots during tracking tasks. The moving-base simulator of the National Aerospace Laboratory was employed as the motion cue device, and the attitude director indicator or the projected visual field was employed as the visual cue device. The chosen controlled elements were second-order unstable systems. It was confirmed that with the aid of motion cues the pilot workload was reduced and consequently the human controllability limits were enlarged. In order to clarify the mechanism of these effects, the describing functions of the human pilots were identified using spectral and time-domain analyses. The results of these analyses suggest that the motion-cue sensory system can effectively extract derivative (rate) information from the signal, which is consistent with existing physiological knowledge.
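
    Describing functions of this kind are commonly identified with a cross-spectral ("H1") estimator. The sketch below illustrates the idea on synthetic data with SciPy's Welch-based routines; the pure-gain "pilot" and all parameters are stand-ins, not the study's actual identification procedure.

```python
# H1 frequency-response estimate: H(f) = Pxy(f) / Pxx(f), with Welch averaging.
# The input/output signals below are synthetic stand-ins for tracking data.
import numpy as np
from scipy.signal import csd, welch

fs = 100                              # sampling rate in Hz (assumed)
rng = np.random.default_rng(1)
x = rng.standard_normal(4096)         # tracking error seen by the "pilot"
y = 2.0 * x                           # toy pilot response: a pure gain of 2

f, Pxy = csd(x, y, fs=fs, nperseg=256)   # cross power spectral density
_, Pxx = welch(x, fs=fs, nperseg=256)    # input power spectral density
H = Pxy / Pxx                            # complex describing-function estimate
```

    For a pure gain the magnitude of H is flat across frequency; a real pilot model would show gain and phase varying with frequency, which is what the spectral identification in the study recovers.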

  8. Analysis procedures and subjective flight results of a simulator validation and cue fidelity experiment

    NASA Technical Reports Server (NTRS)

    Carr, Peter C.; Mckissick, Burnell T.

    1988-01-01

    A joint experiment to investigate simulator validation and cue fidelity was conducted by the Dryden Flight Research Facility of NASA Ames Research Center (Ames-Dryden) and NASA Langley Research Center. The primary objective was to validate the use of a closed-loop pilot-vehicle mathematical model as an analytical tool for optimizing the tradeoff between simulator fidelity requirements and simulator cost. The validation process includes comparing model predictions with simulation and flight test results to evaluate various hypotheses for differences in motion and visual cues and information transfer. A group of five pilots flew air-to-air tracking maneuvers in the Langley differential maneuvering simulator and visual motion simulator and in an F-14 aircraft at Ames-Dryden. The simulators used motion and visual cueing devices including a g-seat, a helmet loader, wide field-of-view horizon, and a motion base platform.

  9. A Healthful Balance

    ERIC Educational Resources Information Center

    Hernandez, Patricia; Jones, Sheila

    2014-01-01

    By now, we are all aware of the effect of super-sized food portions. Very young children regulate their food intake by internal cues (when they feel full) rather than by portion size. As children age, external cues have more influence than internal cues. Hence, larger portion sizes promote more energy intake in older children, leading to caloric…

  10. Stream specificity and asymmetries in feature binding and content-addressable access in visual encoding and memory.

    PubMed

    Huynh, Duong L; Tripathy, Srimant P; Bedell, Harold E; Ögmen, Haluk

    2015-01-01

    Human memory is content-addressable; that is, the contents of memory can be accessed using partial information about the bound features of a stored item. In this study, we used a cross-feature cuing technique to examine how the human visual system encodes, binds, and retains information about multiple stimulus features within a set of moving objects. We sought to characterize the roles of three different features (position, color, and direction of motion, the latter two of which are processed preferentially within the ventral and dorsal visual streams, respectively) in the construction and maintenance of object representations. We investigated the extent to which these features are bound together across the following processing stages: during stimulus encoding, sensory (iconic) memory, and visual short-term memory. Whereas all features examined here can serve as cues for addressing content, their effectiveness shows asymmetries and varies according to cue-report pairings and the stage of information processing and storage. Position-based indexing theories predict that position should be more effective as a cue compared to other features. While we found a privileged role for position as a cue at the stimulus-encoding stage, position was not the privileged cue at the sensory and visual short-term memory stages. Instead, the pattern that emerged from our findings is one that mirrors the parallel processing streams in the visual system. This stream-specific binding and cuing effectiveness manifests itself in all three stages of information processing examined here. Finally, we find that the Leaky Flask model proposed in our previous study is applicable to all three features.

  11. Feasibility and Preliminary Efficacy of Visual Cue Training to Improve Adaptability of Walking after Stroke: Multi-Centre, Single-Blind Randomised Control Pilot Trial.

    PubMed

    Hollands, Kristen L; Pelton, Trudy A; Wimperis, Andrew; Whitham, Diane; Tan, Wei; Jowett, Sue; Sackley, Catherine M; Wing, Alan M; Tyson, Sarah F; Mathias, Jonathan; Hensman, Marianne; van Vliet, Paulette M

    2015-01-01

    Given the importance of vision in the control of walking and evidence indicating that varied practice of walking improves mobility outcomes, this study sought to examine the feasibility and preliminary efficacy of varied walking practice in response to visual cues for the rehabilitation of walking following stroke. This 3-arm parallel, multi-centre, assessor-blind, randomised control trial was conducted within outpatient neurorehabilitation services. Participants were community-dwelling stroke survivors with walking speed <0.8 m/s, lower-limb paresis, and no severe visual impairments. Interventions were over-ground visual cue training (O-VCT), treadmill-based visual cue training (T-VCT), and usual care (UC), delivered by physiotherapists twice weekly for 8 weeks. Participants were randomised using computer-generated random permuted balanced blocks of randomly varying size. Recruitment, retention, adherence, adverse events, and mobility and balance were measured before randomisation, post-intervention, and at four weeks' follow-up. Fifty-six participants were recruited (18 T-VCT, 19 O-VCT, 19 UC). Thirty-four completed treatment and follow-up assessments. Among those who completed, adherence was good, with 16 treatments provided over a median of 8.4, 7.5 and 9 weeks for T-VCT, O-VCT and UC respectively. No adverse events were reported. Post-treatment improvements in walking speed, symmetry, balance and functional mobility were seen in all treatment arms. Outpatient-based treadmill and over-ground walking adaptability practice using visual cues is feasible and may improve mobility and balance. Future studies should continue a carefully phased approach using identified methods to improve retention. Clinicaltrials.gov NCT01600391.
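
    The allocation scheme described in this record ("random permuted balanced blocks of randomly varying size") can be sketched as below. The block sizes and seed are illustrative assumptions, not parameters reported by the trial.

```python
# Hypothetical sketch of permuted balanced block randomisation into three arms.
# Block sizes (3 or 6) and the seed are illustrative, not from the trial.
import random

ARMS = ["T-VCT", "O-VCT", "UC"]

def block_randomise(n, block_sizes=(3, 6), seed=42):
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n:
        size = rng.choice(block_sizes)       # randomly varying block size
        block = ARMS * (size // len(ARMS))   # each block balanced across arms
        rng.shuffle(block)                   # random permutation within block
        allocation.extend(block)
    return allocation[:n]

alloc = block_randomise(56)                  # 56 participants, as in the trial
```

    Balancing within blocks keeps arm sizes close throughout recruitment, while the randomly varying block size makes the next assignment harder to predict.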

  12. Simulator Study of Helmet-Mounted Symbology System Concepts in Degraded Visual Environments.

    PubMed

    Cheung, Bob; McKinley, Richard A; Steels, Brad; Sceviour, Robert; Cosman, Vaughn; Holst, Peter

    2015-07-01

    A sudden loss of external visual cues during critical phases of flight results in spatial disorientation. This is due to undetected horizontal and vertical drift when there is little tolerance for error and correction delay as the helicopter is close to the ground. Three helmet-mounted symbology system concepts were investigated in the simulator as potential solutions for the legacy Griffon helicopters. Thirteen Royal Canadian Air Force (RCAF) Griffon pilots were exposed to the Helmet Display Tracking System for Degraded Visual Environments (HDTS), the BrownOut Symbology System (BOSS), and the current RCAF AVS7 symbology system. For each symbology system, the pilot performed a two-stage departure and a single-stage approach. The presentation order of the symbology systems was randomized. Objective performance metrics included aircraft speed, altitude, attitude, and distance from the landing point. Subjective measurements included situation awareness, mental effort, perceived performance, perceptual cue rating, and NASA Task Load Index. Repeated measures analysis of variance and subsequent planned comparison for all the objective and subjective measurements were performed between the AVS7, HDTS, and BOSS. Our results demonstrated that HDTS and BOSS showed general improvement over AVS7 in two-stage departure. However, only HDTS performed significantly better in heading error than AVS7. During the single-stage approach, BOSS performed worse than AVS7 in heading root mean square error, and only HDTS performed significantly better in distance to landing point and approach heading than the others. Both the HDTS and BOSS possess their own limitations; however, HDTS is the pilots' preferred flight display.

  13. Changes in the distribution of sustained attention alter the perceived structure of visual space.

    PubMed

    Fortenbaugh, Francesca C; Robertson, Lynn C; Esterman, Michael

    2017-02-01

    Visual spatial attention is a critical process that allows for the selection and enhanced processing of relevant objects and locations. While studies have shown attentional modulations of perceived location and the representation of distance information across multiple objects, there remains disagreement regarding what influence spatial attention has on the underlying structure of visual space. The present study utilized a method of magnitude estimation in which participants must judge the location of briefly presented targets within the boundaries of their individual visual fields in the absence of any other objects or boundaries. Spatial uncertainty of target locations was used to assess perceived locations across distributed and focused attention conditions without the use of external stimuli, such as visual cues. Across two experiments we tested locations along the cardinal and 45° oblique axes. We demonstrate that focusing attention within a region of space can expand the perceived size of visual space; even in cases where doing so makes performance less accurate. Moreover, the results of the present studies show that when fixation is actively maintained, focusing attention along a visual axis leads to an asymmetrical stretching of visual space that is predominantly focused across the central half of the visual field, consistent with an expansive gradient along the focus of voluntary attention. These results demonstrate that focusing sustained attention peripherally during active fixation leads to an asymmetrical expansion of visual space within the central visual field. Published by Elsevier Ltd.

  14. Forgotten but not gone: Retro-cue costs and benefits in a double-cueing paradigm suggest multiple states in visual short-term memory.

    PubMed

    van Moorselaar, Dirk; Olivers, Christian N L; Theeuwes, Jan; Lamme, Victor A F; Sligte, Ilja G

    2015-11-01

    Visual short-term memory (VSTM) performance is enhanced when the to-be-tested item is cued after encoding. This so-called retro-cue benefit is typically accompanied by a cost for the noncued items, suggesting that information is lost from VSTM upon presentation of a retrospective cue. Here we assessed whether noncued items can be restored to VSTM when made relevant again by a subsequent second cue. We presented either 1 or 2 consecutive retro-cues (80% valid) during the retention interval of a change-detection task. Relative to no cue, a valid cue increased VSTM capacity by 2 items, while an invalid cue decreased capacity by 2. Importantly, when a second, valid cue followed an invalid cue, capacity regained 2 items, so that performance was back on par. In addition, when the second cue was also invalid, there was no extra loss of information from VSTM, suggesting that those items that survived a first invalid cue automatically also survived a second. We conclude that these results are in support of a very versatile VSTM system, in which memoranda adopt different representational states depending on whether they are deemed relevant now, in the future, or not at all. We discuss a neural model that is consistent with this conclusion. PsycINFO Database Record (c) 2015 APA, all rights reserved.
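
    The capacity figures in this record ("increased VSTM capacity by 2 items") come from change-detection performance. A standard estimator for such tasks is Cowan's K, sketched below; the abstract does not state the exact estimator used, so treat this as the conventional formula rather than the paper's specific method.

```python
def cowans_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K for single-probe change detection: estimated items in VSTM."""
    return set_size * (hit_rate - false_alarm_rate)

# Illustrative numbers only: set size 8, 75% hits, 25% false alarms.
k = cowans_k(8, 0.75, 0.25)   # -> 4.0 items
```

    Comparing K across cue conditions (valid, invalid, invalid-then-valid) is how capacity gains and losses of the kind reported above are typically quantified.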

  15. Heuristics of reasoning and analogy in children's visual perspective taking.

    PubMed

    Yaniv, I; Shatz, M

    1990-10-01

    We propose that children's reasoning about others' visual perspectives is guided by simple heuristics based on a perceiver's line of sight and salient features of the object met by that line. In 3 experiments employing a 2-perceiver analogy task, children aged 3-6 were generally better able to reproduce a perceiver's perspective if a visual cue in the perceiver's line of sight sufficed to distinguish it from alternatives. Children had greater difficulty when the task hinged on attending to configural cues. Availability of distinctive cues affixed to the objects' sides facilitated solution of trials with symmetrical orientations. These and several other related findings reported in the literature are traced to children's reliance on heuristics of reasoning.

  16. Inhibition of return shortens perceived duration of a brief visual event.

    PubMed

    Osugi, Takayuki; Takeda, Yuji; Murakami, Ikuya

    2016-11-01

    We investigated the influence of attentional inhibition on the perceived duration of a brief visual event. Although attentional capture by an exogenous cue is known to prolong the perceived duration of an attended visual event, it remains unclear whether time perception is also affected by subsequent attentional inhibition at the location previously cued by an exogenous cue, an attentional phenomenon known as inhibition of return. In this study, we combined spatial cuing and duration judgment. One second after the appearance of an uninformative peripheral cue either to the left or to the right, a target appeared at the cued side in one-third of the trials, which indeed yielded inhibition of return, and at the opposite side in another one-third of the trials. In the remaining trials, a cue appeared at a central box and one second later a target appeared at either the left or right side. The target at the previously cued location was perceived to last for a shorter time than the target presented at the opposite location, and shorter than the target presented after the central cue presentation. Therefore, attentional inhibition produced by a classical paradigm of inhibition of return decreased the perceived duration of a brief visual event. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Audiovisual speech perception in infancy: The influence of vowel identity and infants' productive abilities on sensitivity to (mis)matches between auditory and visual speech cues.

    PubMed

    Altvater-Mackensen, Nicole; Mani, Nivedita; Grossmann, Tobias

    2016-02-01

    Recent studies suggest that infants' audiovisual speech perception is influenced by articulatory experience (Mugitani et al., 2008; Yeung & Werker, 2013). The current study extends these findings by testing if infants' emerging ability to produce native sounds in babbling impacts their audiovisual speech perception. We tested 44 6-month-olds on their ability to detect mismatches between concurrently presented auditory and visual vowels and related their performance to their productive abilities and later vocabulary size. Results show that infants' ability to detect mismatches between auditory and visually presented vowels differs depending on the vowels involved. Furthermore, infants' sensitivity to mismatches is modulated by their current articulatory knowledge and correlates with their vocabulary size at 12 months of age. This suggests that, aside from infants' ability to match nonnative audiovisual cues (Pons et al., 2009), their ability to match native auditory and visual cues continues to develop during the first year of life. Our findings point to a potential role of salient vowel cues and productive abilities in the development of audiovisual speech perception, and further indicate a relation between infants' early sensitivity to audiovisual speech cues and their later language development. PsycINFO Database Record (c) 2016 APA, all rights reserved.

  18. Age Differences in Visual-Auditory Self-Motion Perception during a Simulated Driving Task

    PubMed Central

    Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L.

    2016-01-01

    Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion. PMID:27199829

  19. Multimodal cuing of autobiographical memory in semantic dementia.

    PubMed

    Greenberg, Daniel L; Ogar, Jennifer M; Viskontas, Indre V; Gorno Tempini, Maria Luisa; Miller, Bruce; Knowlton, Barbara J

    2011-01-01

    Individuals with semantic dementia (SD) have impaired autobiographical memory (AM), but the extent of the impairment has been controversial. According to one report (Westmacott, Leach, Freedman, & Moscovitch, 2001), patient performance was better when visual cues were used instead of verbal cues; however, the visual cues used in that study (family photographs) provided more retrieval support than do the word cues that are typically used in AM studies. In the present study, we sought to disentangle the effects of retrieval support and cue modality. We cued AMs of 5 patients with SD and 5 controls with words, simple pictures, and odors. Memories were elicited from childhood, early adulthood, and recent adulthood; they were scored for level of detail and episodic specificity. The patients were impaired across all time periods and stimulus modalities. Within the patient group, words and pictures were equally effective as cues (Friedman test; χ² = 0.25, p = .61), whereas odors were less effective than both words and pictures (for words vs. odors, χ² = 7.83, p = .005; for pictures vs. odors, χ² = 6.18, p = .01). There was no evidence of a temporal gradient in either group (for patients with SD, χ² = 0.24, p = .89; for controls, χ² < 2.07, p = .35). Once the effect of retrieval support is equated across stimulus modalities, there is no evidence for an advantage of visual cues over verbal cues. The greater impairment for olfactory cues presumably reflects degeneration of anterior temporal regions that support olfactory memory. (c) 2010 APA, all rights reserved.
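
    The within-patient comparisons in this record use the Friedman test, a nonparametric repeated-measures test across the three cue modalities. A minimal sketch with SciPy, on invented detail scores, looks like this:

```python
# Friedman test across three repeated conditions (cue modalities).
# The scores below are invented for illustration, not the study's data.
import numpy as np
from scipy.stats import friedmanchisquare

# Hypothetical episodic-detail scores for five patients per cue modality.
words    = np.array([12,  9, 14,  7, 10])
pictures = np.array([11, 10, 13,  8, 10])
odors    = np.array([ 6,  4,  9,  3,  5])

stat, p = friedmanchisquare(words, pictures, odors)  # chi-squared statistic, p
```

    Because every measurement comes from the same patient under each modality, a rank-based repeated-measures test like this is preferred over independent-samples comparisons.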

  20. Different effects of color-based and location-based selection on visual working memory.

    PubMed

    Li, Qi; Saiki, Jun

    2015-02-01

    In the present study, we investigated how feature- and location-based selection influences visual working memory (VWM) encoding and maintenance. In Experiment 1, cue type (color, location) and cue timing (precue, retro-cue) were manipulated in a change detection task. The stimuli were color-location conjunction objects, and binding memory was tested. We found a significantly greater effect for color precues than for either color retro-cues or location precues, but no difference between location pre- and retro-cues, consistent with previous studies (e.g., Griffin & Nobre in Journal of Cognitive Neuroscience, 15, 1176-1194, 2003). We also found no difference between location and color retro-cues. Experiment 2 replicated the color precue advantage with more complex color-shape-location conjunction objects. Only one retro-cue effect was different from that in Experiment 1: Color retro-cues were significantly less effective than location retro-cues in Experiment 2, which may relate to a structural property of multidimensional VWM representations. In Experiment 3, a visual search task was used, and the result of a greater location than color precue effect suggests that the color precue advantage in a memory task is related to the modulation of VWM encoding rather than of sensation and perception. Experiment 4, using a task that required only memory for individual features but not for feature bindings, further confirmed that the color precue advantage is specific to binding memory. Together, these findings reveal new aspects of the interaction between attention and VWM and provide potentially important implications for the structural properties of VWM representations.

  1. The invisible cues that guide king penguin chicks home: use of magnetic and acoustic cues during orientation and short-range navigation.

    PubMed

    Nesterova, Anna P; Chiffard, Jules; Couchoux, Charline; Bonadonna, Francesco

    2013-04-15

    King penguins (Aptenodytes patagonicus) live in large and densely populated colonies, where navigation can be challenging because of the presence of many conspecifics that could obstruct locally available cues. Our previous experiments demonstrated that visual cues were important but not essential for king penguin chicks' homing. The main objective of this study was to investigate the importance of non-visual cues, such as magnetic and acoustic cues, for chicks' orientation and short-range navigation. In a series of experiments, the chicks were individually displaced from the colony to an experimental arena where they were released under different conditions. In the magnetic experiments, a strong magnet was attached to the chicks' heads. Trials were conducted in daylight and at night to test the relative importance of visual and magnetic cues. Our results showed that when the geomagnetic field around the chicks was modified, their orientation in the arena and the overall ability to home was not affected. In a low sound experiment we limited the acoustic cues available to the chicks by putting ear pads over their ears, and in a loud sound experiment we provided additional acoustic cues by broadcasting colony sounds on the opposite side of the arena to the real colony. In the low sound experiment, the behavior of the chicks was not affected by the limited sound input. In the loud sound experiment, the chicks reacted strongly to the colony sound. These results suggest that king penguin chicks may use the sound of the colony while orienting towards their home.

  2. A designated odor-language integration system in the human brain.

    PubMed

    Olofsson, Jonas K; Hurley, Robert S; Bowman, Nicholas E; Bao, Xiaojun; Mesulam, M-Marsel; Gottfried, Jay A

    2014-11-05

    Odors are surprisingly difficult to name, but the mechanism underlying this phenomenon is poorly understood. In experiments using event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI), we investigated the physiological basis of odor naming with a paradigm where olfactory and visual object cues were followed by target words that either matched or mismatched the cue. We hypothesized that word processing would not only be affected by its semantic congruency with the preceding cue, but would also depend on the cue modality (olfactory or visual). Performance was slower and less precise when linking a word to its corresponding odor than to its picture. The ERP index of semantic incongruity (N400), reflected in the comparison of nonmatching versus matching target words, was more constrained to posterior electrode sites and lasted longer on odor-cue (vs picture-cue) trials. In parallel, fMRI cross-adaptation in the right orbitofrontal cortex (OFC) and the left anterior temporal lobe (ATL) was observed in response to words when preceded by matching olfactory cues, but not by matching visual cues. Time-series plots demonstrated increased fMRI activity in OFC and ATL at the onset of the odor cue itself, followed by response habituation after processing of a matching (vs nonmatching) target word, suggesting that predictive perceptual representations in these regions are already established before delivery and deliberation of the target word. Together, our findings underscore the modality-specific anatomy and physiology of object identification in the human brain. Copyright © 2014 the authors.

  3. "Tunnel Vision": A Possible Keystone Stimulus Control Deficit in Autistic Children.

    ERIC Educational Resources Information Center

    Rincover, Arnold; And Others

    1986-01-01

    Three autistic boys (ages 9-13) were trained to select a card containing a stimulus array comprised of three visual cues. Decreased distance between cues resulted in responses to more cues, increased distance to fewer cues. Distances did not affect the responding of children matched for mental and chronological age. (Author/JW)

  4. Cue competition affects temporal dynamics of edge-assignment in human visual cortex.

    PubMed

    Brooks, Joseph L; Palmer, Stephen E

    2011-03-01

    Edge-assignment determines the perception of relative depth across an edge and the shape of the closer side. Many cues determine edge-assignment, but relatively little is known about the neural mechanisms involved in combining these cues. Here, we manipulated extremal edge and attention cues to bias edge-assignment such that these two cues either cooperated or competed. To index their neural representations, we flickered figure and ground regions at different frequencies and measured the corresponding steady-state visual-evoked potentials (SSVEPs). Figural regions had stronger SSVEP responses than ground regions, independent of whether they were attended or unattended. In addition, competition and cooperation between the two edge-assignment cues significantly affected the temporal dynamics of edge-assignment processes. The figural SSVEP response peaked earlier when the cues causing it cooperated than when they competed, but sustained edge-assignment effects were equivalent for cooperating and competing cues, consistent with a winner-take-all outcome. These results provide physiological evidence that figure-ground organization involves competitive processes that can affect the latency of figural assignment.

  5. Late development of cue integration is linked to sensory fusion in cortex.

    PubMed

    Dekker, Tessa M; Ban, Hiroshi; van der Velde, Bauke; Sereno, Martin I; Welchman, Andrew E; Nardini, Marko

    2015-11-02

    Adults optimize perceptual judgements by integrating different types of sensory information [1, 2]. This engages specialized neural circuits that fuse signals from the same [3-5] or different [6] modalities. Whereas young children can use sensory cues independently, adult-like precision gains from cue combination only emerge around ages 10 to 11 years [7-9]. Why does it take so long to make best use of sensory information? Existing data cannot distinguish whether this (1) reflects surprisingly late changes in sensory processing (sensory integration mechanisms in the brain are still developing) or (2) depends on post-perceptual changes (integration in sensory cortex is adult-like, but higher-level decision processes do not access the information) [10]. We tested visual depth cue integration in the developing brain to distinguish these possibilities. We presented children aged 6-12 years with displays depicting depth from binocular disparity and relative motion and made measurements using psychophysics, retinotopic mapping, and pattern classification fMRI. Older children (>10.5 years) showed clear evidence for sensory fusion in V3B, a visual area thought to integrate depth cues in the adult brain [3-5]. By contrast, in younger children (<10.5 years), there was no evidence for sensory fusion in any visual area. This significant age difference was paired with a shift in perceptual performance around ages 10 to 11 years and could not be explained by motion artifacts, visual attention, or signal quality differences. Thus, whereas many basic visual processes mature early in childhood [11, 12], the brain circuits that fuse cues take a very long time to develop. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  7. Visual and Olfactory Floral Cues of Campanula (Campanulaceae) and Their Significance for Host Recognition by an Oligolectic Bee Pollinator

    PubMed Central

    Milet-Pinheiro, Paulo; Ayasse, Manfred; Dötterl, Stefan

    2015-01-01

    Oligolectic bees collect pollen from a few plants within a genus or family to rear their offspring, and are known to rely on visual and olfactory floral cues to recognize host plants. However, studies investigating whether oligolectic bees recognize distinct host plants by using shared floral cues are scarce. In the present study, we investigated in a comparative approach the visual and olfactory floral cues of six Campanula species, of which only Campanula lactiflora has never been reported as a pollen source of the oligolectic bee Ch. rapunculi. We hypothesized that the flowers of Campanula species visited by Ch. rapunculi share visual (i.e. color) and/or olfactory cues (scents) that give them a host-specific signature. To test this hypothesis, floral color and scent were studied by spectrophotometric and chemical analyses, respectively. Additionally, we performed bioassays within a flight cage to test the innate color preference of Ch. rapunculi. Our results show that Campanula flowers reflect the light predominantly in the UV-blue/blue bee-color space and that Ch. rapunculi displays a strong innate preference for these two colors. Furthermore, we recorded spiroacetals in the floral scent of all Campanula species, but Ca. lactiflora. Spiroacetals, rarely found as floral scent constituents but quite common among Campanula species, were recently shown to play a key function for host-flower recognition by Ch. rapunculi. We conclude that Campanula species share some visual and olfactory floral cues, and that neurological adaptations (i.e. vision and olfaction) of Ch. rapunculi innately drive their foraging flights toward host flowers. The significance of our findings for the evolution of pollen diet breadth in bees is discussed. PMID:26060994

  8. Multimedia instructions and cognitive load theory: effects of modality and cueing.

    PubMed

    Tabbers, Huib K; Martens, Rob L; van Merriënboer, Jeroen J G

    2004-03-01

    Recent research on the influence of presentation format on the effectiveness of multimedia instructions has yielded some interesting results. According to cognitive load theory (Sweller, Van Merriënboer, & Paas, 1998) and Mayer's theory of multimedia learning (Mayer, 2001), replacing visual text with spoken text (the modality effect) and adding visual cues relating elements of a picture to the text (the cueing effect) both increase the effectiveness of multimedia instructions in terms of better learning results or less mental effort spent. The aim of this study was to test the generalisability of the modality and cueing effects in a classroom setting. The participants were 111 second-year students from the Department of Education at Ghent University in Belgium (aged between 19 and 25 years). The participants studied a web-based multimedia lesson on instructional design for about one hour. Afterwards they completed a retention and a transfer test. During both the instruction and the tests, self-report measures of mental effort were administered. Adding visual cues to the pictures resulted in higher retention scores, while replacing visual text with spoken text resulted in lower retention and transfer scores. Only a weak cueing effect and even a reversed modality effect were found, indicating that both effects do not easily generalise to non-laboratory settings. A possible explanation for the reversed modality effect is that the multimedia instructions in this study were learner-paced, as opposed to the system-paced instructions used in earlier research.

  9. Differential effect of glucose ingestion on the neural processing of food stimuli in lean and overweight adults.

    PubMed

    Heni, Martin; Kullmann, Stephanie; Ketterer, Caroline; Guthoff, Martina; Bayer, Margarete; Staiger, Harald; Machicao, Fausto; Häring, Hans-Ulrich; Preissl, Hubert; Veit, Ralf; Fritsche, Andreas

    2014-03-01

    Eating behavior is crucial in the development of obesity and Type 2 diabetes. To further investigate its regulation, we studied the effects of glucose versus water ingestion on the neural processing of visual high- and low-caloric food cues in 12 lean and 12 overweight subjects by functional magnetic resonance imaging. We found body weight to substantially impact the brain's response to visual food cues after glucose versus water ingestion. Specifically, there was a significant interaction between body weight, condition (water versus glucose), and caloric content of food cues. Although overweight subjects showed a generalized reduced response to food objects in the fusiform gyrus and precuneus, the lean group showed a differential pattern to high versus low caloric foods depending on glucose versus water ingestion. Furthermore, we observed plasma insulin- and glucose-associated effects. The hypothalamic response to high caloric food cues negatively correlated with changes in blood glucose 30 min after glucose ingestion, while brain regions in the prefrontal cortex in particular showed a significant negative relationship with increases in plasma insulin 120 min after glucose ingestion. We conclude that the postprandial neural processing of food cues is highly influenced by body weight, especially in visual areas, potentially altering visual attention to food. Furthermore, our results underline that insulin markedly influences prefrontal activity to high caloric food cues after a meal, indicating that postprandial hormones may be potential players in modulating executive control. Copyright © 2013 Wiley Periodicals, Inc.

  10. Contextual Cueing in Multiconjunction Visual Search Is Dependent on Color- and Configuration-Based Intertrial Contingencies

    ERIC Educational Resources Information Center

    Geyer, Thomas; Shi, Zhuanghua; Müller, Hermann J.

    2010-01-01

    Three experiments examined memory-based guidance of visual search using a modified version of the contextual-cueing paradigm (Jiang & Chun, 2001). The target, if present, was a conjunction of color and orientation, with target (and distractor) features randomly varying across trials (multiconjunction search). Under these conditions, reaction times…

  11. Wingbeat frequency-sweep and visual stimuli for trapping male Aedes aegypti (Diptera: Culicidae)

    USDA-ARS's Scientific Manuscript database

    Combinations of female wingbeat acoustic cues and visual cues were evaluated to determine their potential for use in male Aedes aegypti (L.) traps in peridomestic environments. A modified Centers for Disease Control (CDC) light trap using a 350-500 Hz frequency-sweep broadcast from a speaker as an a...

  12. Potential for using visual, auditory, and olfactory cues to manage foraging behaviour and spatial distribution of rangeland livestock

    USDA-ARS's Scientific Manuscript database

    This paper reviews the literature and reports on the current state of knowledge regarding the potential for managers to use visual (VC), auditory (AC), and olfactory (OC) cues to manage foraging behavior and spatial distribution of rangeland livestock. We present evidence that free-ranging livestock...

  13. Visual Cues Generated during Action Facilitate 14-Month-Old Infants' Mental Rotation

    ERIC Educational Resources Information Center

    Antrilli, Nick K.; Wang, Su-hua

    2016-01-01

    Although action experience has been shown to enhance the development of spatial cognition, the mechanism underlying the effects of action is still unclear. The present research examined the role of visual cues generated during action in promoting infants' mental rotation. We sought to clarify the underlying mechanism by decoupling different…

  14. Categorically Defined Targets Trigger Spatiotemporal Visual Attention

    ERIC Educational Resources Information Center

    Wyble, Brad; Bowman, Howard; Potter, Mary C.

    2009-01-01

    Transient attention to a visually salient cue enhances processing of a subsequent target in the same spatial location between 50 and 150 ms after cue onset (K. Nakayama & M. Mackeben, 1989). Do stimuli from a categorically defined target set, such as letters or digits, also generate transient attention? Participants reported digit targets among…

  15. Visual Cues and Listening Effort: Individual Variability

    ERIC Educational Resources Information Center

    Picou, Erin M.; Ricketts, Todd A; Hornsby, Benjamin W. Y.

    2011-01-01

    Purpose: To investigate the effect of visual cues on listening effort as well as whether predictive variables such as working memory capacity (WMC) and lipreading ability affect the magnitude of listening effort. Method: Twenty participants with normal hearing were tested using a paired-associates recall task in 2 conditions (quiet and noise) and…

  16. Visual Sonority Modulates Infants' Attraction to Sign Language

    ERIC Educational Resources Information Center

    Stone, Adam; Petitto, Laura-Ann; Bosworth, Rain

    2018-01-01

    The infant brain may be predisposed to identify perceptually salient cues that are common to both signed and spoken languages. Recent theory based on spoken languages has advanced sonority as one of these potential language acquisition cues. Using a preferential looking paradigm with an infrared eye tracker, we explored visual attention of hearing…

  17. Impaired Visual Attention in Children with Dyslexia.

    ERIC Educational Resources Information Center

    Heiervang, Einar; Hugdahl, Kenneth

    2003-01-01

    A cue-target visual attention task was administered to 25 children (ages 10-12) with dyslexia. Results showed a general pattern of slower responses in the children with dyslexia compared to controls. Subjects also had longer reaction times in the short and long cue-target interval conditions (covert and overt shift of attention). (Contains…

  18. Visual Cues, Student Sex, Material Taught, and the Magnitude of Teacher Expectancy Effects.

    ERIC Educational Resources Information Center

    Badini, Aldo A.; Rosenthal, Robert

    1989-01-01

    Conducts an experiment on teacher expectancy effects to investigate the simultaneous effects of student gender, communication channel, and type of material taught (vocabulary and reasoning). Finds that the magnitude of teacher expectation effects was greater when students had access to visual cues, especially when the students were female. (MS)

  19. Central and Peripheral Vision Loss Differentially Affects Contextual Cueing in Visual Search

    ERIC Educational Resources Information Center

    Geringswald, Franziska; Pollmann, Stefan

    2015-01-01

    Visual search for targets in repeated displays is more efficient than search for the same targets in random distractor layouts. Previous work has shown that this contextual cueing is severely impaired under central vision loss. Here, we investigated whether central vision loss, simulated with gaze-contingent displays, prevents the incidental…

  20. Speaker Identity Supports Phonetic Category Learning

    ERIC Educational Resources Information Center

    Mani, Nivedita; Schneider, Signe

    2013-01-01

    Visual cues from the speaker's face, such as the discriminable mouth movements used to produce speech sounds, improve discrimination of these sounds by adults. The speaker's face, however, provides more information than just the mouth movements used to produce speech--it also provides a visual indexical cue of the identity of the speaker. The…

  1. Audio-Visual Speech Perception: A Developmental ERP Investigation

    ERIC Educational Resources Information Center

    Knowland, Victoria C. P.; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael S. C.

    2014-01-01

    Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language…

  2. Superior Temporal Activation in Response to Dynamic Audio-Visual Emotional Cues

    ERIC Educational Resources Information Center

    Robins, Diana L.; Hunyadi, Elinora; Schultz, Robert T.

    2009-01-01

    Perception of emotion is critical for successful social interaction, yet the neural mechanisms underlying the perception of dynamic, audio-visual emotional cues are poorly understood. Evidence from language and sensory paradigms suggests that the superior temporal sulcus and gyrus (STS/STG) play a key role in the integration of auditory and visual…

  3. Enhancing Visual Search Abilities of People with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Li-Tsang, Cecilia W. P.; Wong, Jackson K. K.

    2009-01-01

    This study aimed to evaluate the effects of cueing in a visual search paradigm for people with and without intellectual disabilities (ID). A total of 36 subjects (18 persons with ID and 18 persons with normal intelligence) were recruited using a convenience sampling method. A series of experiments was conducted to compare guided cue strategies using…

  4. Heightened attentional capture by visual food stimuli in anorexia nervosa.

    PubMed

    Neimeijer, Renate A M; Roefs, Anne; de Jong, Peter J

    2017-08-01

    The present study was designed to test the hypothesis that anorexia nervosa (AN) patients are relatively insensitive to the attentional capture of visual food stimuli. Attentional avoidance of food might help AN patients to prevent more elaborate processing of food stimuli and the subsequent generation of craving, which might enable AN patients to maintain their strict diet. Participants were 66 restrictive AN spectrum patients and 55 healthy controls. A single-target rapid serial visual presentation task was used with food and disorder-neutral cues as critical distracter stimuli and disorder-neutral pictures as target stimuli. AN spectrum patients showed diminished task performance when visual food cues were presented in close temporal proximity to the to-be-identified target. In contrast to our hypothesis, results indicate that food cues automatically capture AN spectrum patients' attention. One explanation could be that the enhanced attentional capture of food cues in AN is driven by the relatively high threat value of food items in AN. Implications and suggestions for future research are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  5. Right hemispheric dominance and interhemispheric cooperation in gaze-triggered reflexive shift of attention.

    PubMed

    Okada, Takashi; Sato, Wataru; Kubota, Yasutaka; Toichi, Motomi; Murai, Toshiya

    2012-03-01

    The neural substrate for the processing of gaze remains unknown. The aim of the present study was to clarify which hemisphere dominantly processes gaze cues, and whether the two hemispheres cooperate with each other, in the gaze-triggered reflexive shift of attention. Twenty-eight normal subjects were tested. Non-predictive gaze cues were presented either in unilateral or bilateral visual fields. The subjects localized the target as soon as possible. Reaction times (RT) were shorter when gaze cues were directed toward rather than away from targets, regardless of the visual field in which they were presented. RT were shorter in left than right visual field presentations. RT in mono-directional bilateral presentations were shorter than those in both left and right unilateral presentations. When bi-directional bilateral cues were presented, RT were faster when valid cues were presented in the left than in the right visual field. The right hemisphere appears to be dominant, and there is interhemispheric cooperation in the gaze-triggered reflexive shift of attention. © 2012 The Authors. Psychiatry and Clinical Neurosciences © 2012 Japanese Society of Psychiatry and Neurology.

  6. Subjective scaling of spatial room acoustic parameters influenced by visual environmental cues

    PubMed Central

    Valente, Daniel L.; Braasch, Jonas

    2010-01-01

    Although there have been numerous studies investigating subjective spatial impression in rooms, only a few of those studies have addressed the influence of visual cues on the judgment of auditory measures. In the psychophysical study presented here, video footage of five solo music/speech performers was shown for four different listening positions within a general-purpose space. The videos were presented in addition to the acoustic signals, which were auralized using binaural room impulse responses (BRIR) that were recorded in the same general-purpose space. The participants were asked to adjust the direct-to-reverberant energy ratio (D/R ratio) of the BRIR according to their expectation considering the visual cues. They were also directed to rate the apparent source width (ASW) and listener envelopment (LEV) for each condition. Visual cues generated by changing the sound-source position in the general-purpose space, as well as the makeup of the sound stimuli, affected the judgment of spatial impression. Participants also scaled the direct-to-reverberant energy ratio with greater direct sound energy than was measured in the acoustical environment. PMID:20968367
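    The direct-to-reverberant energy ratio that participants adjusted in this record can be illustrated with a minimal sketch. This is not the authors' implementation; the window length separating the direct sound from the reverberant tail and the synthetic impulse response below are assumptions for illustration only.

    ```python
    import numpy as np

    def dr_ratio_db(ir, fs, direct_window_ms=2.5):
        """Direct-to-reverberant energy ratio (dB) of an impulse response.

        The direct portion is taken as everything up to a short window after
        the strongest peak; the remainder counts as reverberant energy.
        """
        peak = int(np.argmax(np.abs(ir)))
        split = peak + int(fs * direct_window_ms / 1000)
        direct_energy = np.sum(ir[:split] ** 2)
        reverb_energy = np.sum(ir[split:] ** 2)
        return 10 * np.log10(direct_energy / reverb_energy)

    # Synthetic IR: a unit direct impulse followed by an exponentially
    # decaying noise tail standing in for room reverberation.
    fs = 48000
    rng = np.random.default_rng(0)
    tail = rng.standard_normal(fs // 2) * np.exp(-np.linspace(0, 8, fs // 2))
    ir = np.concatenate(([1.0], 0.05 * tail))
    print(round(dr_ratio_db(ir, fs), 1))
    ```

    Scaling the tail up (more reverberant energy) lowers the D/R ratio, which is the quantity listeners in the study pushed toward greater direct energy.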

  7. Discourse intervention strategies in Alzheimer's disease: Eye-tracking and the effect of visual cues in conversation.

    PubMed

    Brandão, Lenisa; Monção, Ana Maria; Andersson, Richard; Holmqvist, Kenneth

    2014-01-01

    The goal of this study was to investigate whether on-topic visual cues can serve as aids for the maintenance of discourse coherence and informativeness in autobiographical narratives of persons with Alzheimer's disease (AD). The experiment consisted of three randomized conversation conditions: one without prompts, showing a blank computer screen; an on-topic condition, showing a picture and a sentence about the conversation; and an off-topic condition, showing a picture and a sentence which were unrelated to the conversation. Speech was recorded while visual attention was examined using eye tracking to measure how long participants looked at cues and the face of the listener. Results suggest that interventions using visual cues in the form of images and written information are useful to improve discourse informativeness in AD. This study demonstrated the potential of using images and short written messages as means of compensating for the cognitive deficits which underlie uninformative discourse in AD. Future studies should further investigate the efficacy of language interventions based in the use of these compensation strategies for AD patients and their family members and friends.

  8. Visually-guided attention enhances target identification in a complex auditory scene.

    PubMed

    Best, Virginia; Ozmeral, Erol J; Shinn-Cunningham, Barbara G

    2007-06-01

    In auditory scenes containing many similar sound sources, sorting of acoustic information into streams becomes difficult, which can lead to disruptions in the identification of behaviorally relevant targets. This study investigated the benefit of providing simple visual cues for when and/or where a target would occur in a complex acoustic mixture. Importantly, the visual cues provided no information about the target content. In separate experiments, human subjects either identified learned birdsongs in the presence of a chorus of unlearned songs or recalled strings of spoken digits in the presence of speech maskers. A visual cue indicating which loudspeaker (from an array of five) would contain the target improved accuracy for both kinds of stimuli. A cue indicating which time segment (out of a possible five) would contain the target also improved accuracy, but much more for birdsong than for speech. These results suggest that in real world situations, information about where a target of interest is located can enhance its identification, while information about when to listen can also be helpful when targets are unfamiliar or extremely similar to their competitors.

  10. Coherence of structural visual cues and pictorial gravity paves the way for interceptive actions.

    PubMed

    Zago, Myrka; La Scaleia, Barbara; Miller, William L; Lacquaniti, Francesco

    2011-09-20

    Dealing with upside-down objects is difficult and takes time. Among the cues that are critical for defining object orientation, the visible influence of gravity on the object's motion has received limited attention. Here, we manipulated the alignment of visible gravity and structural visual cues, both with each other and relative to the orientation of the observer and physical gravity. Participants pressed a button triggering a hitter to intercept a target accelerated by a virtual gravity. A factorial design assessed the effects of scene orientation (normal or inverted) and target gravity (normal or inverted). We found that interception was significantly more successful when scene direction was concordant with target gravity direction, irrespective of whether both were upright or inverted. This held independent of the hitter type and whether performance feedback to the participants was available (Experiment 1) or unavailable (Experiment 2). These results show that the combined influence of visible gravity and structural visual cues can outweigh both physical gravity and viewer-centered cues, leading observers to rely instead on the congruence of the apparent physical forces acting on people and objects in the scene.

  11. Learning temporal context shapes prestimulus alpha oscillations and improves visual discrimination performance.

    PubMed

    Toosi, Tahereh; K Tousi, Ehsan; Esteky, Hossein

    2017-08-01

    Time is an inseparable component of every physical event that we perceive, yet it is not clear how the brain processes time or how the neuronal representation of time affects our perception of events. Here we asked subjects to perform a visual discrimination task while we changed the temporal context in which the stimuli were presented. We collected electroencephalography (EEG) signals in two temporal contexts. In predictable blocks stimuli were presented after a constant delay relative to a visual cue, and in unpredictable blocks stimuli were presented after variable delays relative to the visual cue. Four subsecond delays of 83, 150, 400, and 800 ms were used in the predictable and unpredictable blocks. We observed that predictability modulated the power of prestimulus alpha oscillations in the parieto-occipital sites: alpha power increased in the 300-ms window before stimulus onset in the predictable blocks compared with the unpredictable blocks. This modulation only occurred in the longest delay period, 800 ms, in which predictability also improved the behavioral performance of the subjects. Moreover, learning the temporal context shaped the prestimulus alpha power: modulation of prestimulus alpha power grew during the predictable block and correlated with performance enhancement. These results suggest that the brain is able to learn the subsecond temporal context of stimuli and use this to enhance sensory processing. Furthermore, the neural correlate of this temporal prediction is reflected in the alpha oscillations. NEW & NOTEWORTHY It is not well understood how the uncertainty in the timing of an external event affects its processing, particularly at subsecond scales. Here we demonstrate how a predictable timing scheme improves visual processing. We found that learning the predictable scheme gradually shaped the prestimulus alpha power. These findings indicate that the human brain is able to extract implicit subsecond patterns in the temporal context of events. Copyright © 2017 the American Physiological Society.
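    As a toy sketch of the kind of measure this record reports, prestimulus alpha (8-12 Hz) power in a single epoch can be estimated by averaging spectral power within the band. The FFT-based approach, sampling rate, and synthetic epoch below are illustrative assumptions, not the authors' EEG pipeline.

    ```python
    import numpy as np

    def band_power(signal, fs, f_lo=8.0, f_hi=12.0):
        """Mean spectral power of `signal` within [f_lo, f_hi] Hz."""
        spec = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(len(signal), 1 / fs)
        band = (freqs >= f_lo) & (freqs <= f_hi)
        return np.mean(np.abs(spec[band]) ** 2)

    # Toy prestimulus epoch: 300 ms of a 10 Hz "alpha" rhythm plus noise,
    # sampled at 500 Hz (the study used a 300-ms prestimulus window).
    fs = 500
    t = np.arange(int(0.3 * fs)) / fs
    rng = np.random.default_rng(1)
    epoch = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)
    alpha = band_power(epoch, fs)              # 8-12 Hz band
    broadband = band_power(epoch, fs, 1, 40)   # wide comparison band
    print(alpha > 0.5 * broadband)
    ```

    Comparing such band-power estimates between predictable and unpredictable blocks is, in spirit, what the reported alpha-power modulation amounts to.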

  12. Interaction of color and geometric cues in depth perception: when does "red" mean "near"?

    PubMed

    Guibal, Christophe R C; Dresp, Birgitta

    2004-12-01

    Luminance and color are strong and self-sufficient cues to pictorial depth in visual scenes and images. The present study investigates the conditions under which luminance or color either strengthens or overrides geometric depth cues. We investigated how luminance contrast associated with the color red and color contrast interact with relative height in the visual field, partial occlusion, and interposition to determine the probability that a given figure presented in a pair is perceived as "nearer" than the other. Latencies of "near" responses were analyzed to test for effects of attentional selection. Figures in a pair were supported by luminance contrast (Experiment 1) or isoluminant color contrast (Experiment 2) and combined with one of the three geometric cues. The results of Experiment 1 show that the luminance contrast of a color (here red), when it does not interact with other colors, produces the same effects as achromatic luminance contrast: the probability of "near" responses increases with the luminance contrast of the color stimulus, while the latencies of "near" responses decrease with increasing luminance contrast. Partial occlusion is found to be a strong enough pictorial cue to support a weaker red luminance contrast. Interposition cues lose out against cues of spatial position and partial occlusion. The results of Experiment 2, with isoluminant displays of varying color contrast, reveal that red color contrast on a light background supported by any of the three geometric cues wins over green or white supported by any of the three geometric cues. On a dark background, red color contrast supported by the interposition cue loses out against green or white color contrast supported by partial occlusion. These findings reveal that color is not an independent depth cue, but is strongly influenced by luminance contrast and stimulus geometry. Systematically shorter response latencies for stronger "near" percepts demonstrate that selective visual attention reliably detects the most likely depth cue combination in a given configuration.

  13. Integration of visual and motion cues for simulator requirements and ride quality investigation. [computerized simulation of aircraft landing, visual perception of aircraft pilots

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1975-01-01

    Preliminary tests and evaluation are presented of pilot performance during landing (flight paths) using computer generated images (video tapes). Psychophysiological factors affecting pilot visual perception were measured. A turning flight maneuver (pitch and roll) was specifically studied using a training device, and the scaling laws involved were determined. Also presented are medical studies (abstracts) on human responses to gravity variations without visual cues, the effects of acceleration stimuli on the semicircular canals, neurons affecting eye movements, and vestibular tests.

  14. Web Video Event Recognition by Semantic Analysis From Ubiquitous Documents.

    PubMed

    Yu, Litao; Yang, Yang; Huang, Zi; Wang, Peng; Song, Jingkuan; Shen, Heng Tao

    2016-12-01

    In recent years, the task of event recognition from videos has attracted increasing interest in the multimedia area. Most existing research has focused on exploring visual cues to handle relatively small-granular events, but it is difficult to analyze video content directly without any prior knowledge. Therefore, synthesizing both visual and semantic analysis is a natural way toward video event understanding. In this paper, we study the problem of Web video event recognition, where Web videos often describe large-granular events and carry limited textual information. Key challenges include how to accurately represent event semantics from incomplete textual information and how to effectively explore the correlation between visual and textual cues for video event understanding. We propose a novel framework to perform complex event recognition from Web videos. To compensate for the insufficient expressive power of visual cues, we construct an event knowledge base by deeply mining semantic information from ubiquitous Web documents. This event knowledge base is capable of describing each event with comprehensive semantics. By utilizing this base, the textual cues for a video can be significantly enriched. Furthermore, we introduce a two-view adaptive regression model, which explores the intrinsic correlation between the visual and textual cues of the videos to learn reliable classifiers. Extensive experiments on two real-world video data sets show the effectiveness of our proposed framework and prove that the event knowledge base indeed helps improve the performance of Web video event recognition.

  15. Gait parameter control timing with dynamic manual contact or visual cues

    PubMed Central

    Shi, Peter; Werner, William

    2016-01-01

    We investigated the timing of gait parameter changes (stride length, peak toe velocity, and double-support, single-support, and complete step duration) used to control gait speed. Eleven healthy participants adjusted their gait speed on a treadmill to maintain a constant distance between them and a fore-aft oscillating cue (a place on a conveyor belt surface). The experimental design balanced conditions of cue modality (vision: eyes open; manual contact: eyes closed while touching the cue); treadmill speed (0.2, 0.4, 0.85, and 1.3 m/s); and cue motion (none, ±10 cm at 0.09, 0.11, and 0.18 Hz). Correlation analyses revealed a number of temporal relationships between gait parameters and cue speed. The results suggest that neural control ranged from feedforward to feedback. Specifically, step length preceded cue velocity during double-support duration, suggesting anticipatory control. Peak toe velocity nearly coincided with its most-correlated cue velocity during single-support duration. The toe-off concluding step and double-support durations followed their most-correlated cue velocity, suggesting feedback control. Cue-tracking accuracy and cue velocity correlations with timing parameters were higher with the manual contact cue than with the visual cue. The cue/gait timing relationships generalized across cue modalities, albeit with greater delays of step-cycle events relative to manual contact cue velocity. We conclude that individual kinematic parameters of gait are controlled to achieve a desired velocity at different specific times during the gait cycle. The overall timing pattern of instantaneous cue velocities associated with different gait parameters is conserved across cues that afford different performance accuracies. This timing pattern may be temporally shifted to optimize control. Different cue/gait parameter latencies in our nonadaptation paradigm provide general-case evidence of the independent control of gait parameters previously demonstrated in gait adaptation paradigms. PMID:26936979
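    The lead/lag relationships this record describes are typically estimated by cross-correlating a gait parameter against cue velocity and finding the lag of peak correlation. Below is a minimal sketch under assumed synthetic signals (the 0.11 Hz cue frequency matches one of the study's conditions; the 0.4 s lag and sampling rate are invented for illustration).

    ```python
    import numpy as np

    def best_lag(cue, gait, fs):
        """Lag in seconds at which `gait` best correlates with `cue`.

        Positive = gait follows the cue (feedback-like);
        negative = gait leads the cue (anticipatory).
        """
        cue = cue - cue.mean()
        gait = gait - gait.mean()
        xcorr = np.correlate(gait, cue, mode="full")
        lags = np.arange(-len(cue) + 1, len(gait))
        return lags[np.argmax(xcorr)] / fs

    fs = 100
    t = np.arange(0, 20, 1 / fs)
    cue_velocity = np.sin(2 * np.pi * 0.11 * t)         # 0.11 Hz cue motion
    step_length = np.roll(cue_velocity, int(0.4 * fs))  # gait lags cue by 0.4 s
    print(best_lag(cue_velocity, step_length, fs))
    ```

    A negative value from the same computation would correspond to the anticipatory (feedforward) pattern reported for step length during double support.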

  16. Individual differences in the ability to identify, select and use appropriate frames of reference for perceptuo-motor control.

    PubMed

    Isableu, B; Ohlmann, T; Cremieux, J; Vuillerme, N; Amblard, B; Gresty, M A

    2010-09-01

    The causes of the interindividual differences (IDs) in how we perceive and control spatial orientation are poorly understood. Here, we propose that IDs partly reflect preferred modes of spatial referencing and that these preferences or "styles" are maintained from the level of spatial perception to that of motor control. Two groups of experimental subjects, one with high visual field dependency (FD) and one with marked visual field independency (FI), were identified by the Rod and Frame Test, which identifies relative dependency on a visual frame of reference (VFoR). FD and FI subjects were tasked with standing still in conditions of increasing postural difficulty while visual cues of self-orientation (a visual frame tilted in roll) and self-motion (in stroboscopic illumination) were varied, and in darkness to assess visual dependency. Postural stability, overall body orientation, and modes of segmental stabilization relative to either external (space) or egocentric (adjacent segments) frames of reference in the roll plane were analysed. We hypothesized that a moderate challenge to balance should enhance subjects' reliance on the VFoR, particularly in FD subjects, whereas a substantial challenge should constrain subjects to use a somatic-vestibular based FoR to prevent falling, in which case IDs would vanish. The results showed that with increasing difficulty, FD subjects became more unstable and more disoriented, as shown by larger effects of the tilted visual frame on posture. Furthermore, their preference to coalign body/VFoR coordinate systems led to greater fixation of the head-trunk articulation and stabilization of the hip in space, whereas the head and trunk remained more stabilized in space with the hip fixed on the leg in FI subjects. These results show that FD subjects have difficulty identifying and/or adopting a more appropriate FoR based on proprioceptive and vestibular cues to regulate the coalignment of posturo/exocentric FoRs. The FI subjects' resistance in the face of an altered VFoR and balance challenge resides in their greater ability to coordinate movement by coaligning body axes with more appropriate FoRs (provided by proprioceptive and vestibular co-variance). Copyright (c) 2010 IBRO. Published by Elsevier Ltd. All rights reserved.

  17. Externalizing psychopathology and gain-loss feedback in a simulated gambling task: dissociable components of brain response revealed by time-frequency analysis.

    PubMed

    Bernat, Edward M; Nelson, Lindsay D; Steele, Vaughn R; Gehring, William J; Patrick, Christopher J

    2011-05-01

    Externalizing is a broad construct that reflects propensity toward a variety of impulse control problems, including antisocial personality disorder and substance use disorders. Two event-related potential responses known to be reduced among individuals high in externalizing proneness are the P300, which reflects postperceptual processing of a stimulus, and the error-related negativity (ERN), which indexes performance monitoring based on endogenous representations. In the current study, the authors used a simulated gambling task to examine the relation between externalizing proneness and the feedback-related negativity (FRN), a brain response that indexes performance monitoring related to exogenous cues, which is thought to be highly related to the ERN. Time-frequency (TF) analysis was used to disentangle the FRN from the accompanying P300 response to feedback cues by parsing the overall feedback-locked potential into distinctive theta (4-7 Hz) and delta (<3 Hz) TF components. Whereas delta-P300 amplitude was reduced among individuals high in externalizing proneness, theta-FRN response was unrelated to externalizing. These findings suggest that in contrast with previously reported deficits in endogenously based performance monitoring (as indexed by the ERN), individuals prone to externalizing problems show intact monitoring of exogenous cues (as indexed by the FRN). The results also contribute to a growing body of evidence indicating that the P300 is attenuated across a broad range of task conditions in high-externalizing individuals.
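    The separation of the feedback-locked potential into theta (4-7 Hz) and delta (<3 Hz) components can be sketched as a simple frequency-domain decomposition. This is a minimal illustration, not the study's actual time-frequency method; the sampling rate, synthetic signal, and FFT-mask filter below are all assumed for demonstration:

    ```python
    import numpy as np

    def band_component(signal, fs, f_lo, f_hi):
        """Extract one frequency-band component by masking the FFT spectrum
        (a simplification of the time-frequency decomposition in the abstract)."""
        spectrum = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
        mask = (freqs >= f_lo) & (freqs <= f_hi)
        return np.fft.irfft(spectrum * mask, n=signal.size)

    fs = 250.0  # sampling rate in Hz (assumed)
    t = np.arange(0, 1.0, 1.0 / fs)
    # Synthetic feedback-locked potential: a slow "delta/P300" wave plus a faster "theta/FRN" wave
    erp = np.sin(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)

    delta = band_component(erp, fs, 0.5, 3.0)  # delta band (<3 Hz), tracks the P300
    theta = band_component(erp, fs, 4.0, 7.0)  # theta band (4-7 Hz), tracks the FRN
    ```

    With the two bands recovered, amplitudes of each component can then be related separately to externalizing scores, which is what lets the P300 and FRN effects dissociate.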

  18. Audio-visual speech intelligibility benefits with bilateral cochlear implants when talker location varies.

    PubMed

    van Hoesel, Richard J M

    2015-04-01

One of the key benefits of using cochlear implants (CIs) in both ears rather than just one is improved localization. It is likely that in complex listening scenes, improved localization allows bilateral CI users to orient toward talkers to improve signal-to-noise ratios and gain access to visual cues, but to date, that conjecture has not been tested. To obtain an objective measure of that benefit, seven bilateral CI users were assessed for both auditory-only and audio-visual speech intelligibility in noise using a novel dynamic spatial audio-visual test paradigm. For each trial conducted in spatially distributed noise, first, an auditory-only cueing phrase that was spoken by one of four talkers was selected and presented from one of four locations. Shortly afterward, a target sentence was presented that was either audio-visual or, in another test configuration, audio-only and was spoken by the same talker and from the same location as the cueing phrase. During the target presentation, visual distractors were added at other spatial locations. Results showed that in terms of speech reception thresholds (SRTs), the average improvement for bilateral listening over the better performing ear alone was 9 dB for the audio-visual mode, and 3 dB for audition alone. Comparison of bilateral performance for audio-visual and audition-alone conditions showed that inclusion of visual cues led to an average SRT improvement of 5 dB. For unilateral device use, no such benefit arose, presumably due to the greatly reduced ability to localize the target talker to acquire visual information. The bilateral CI speech intelligibility advantage over the better ear in the present study is much larger than that previously reported for static talker locations, and it indicates greater everyday speech benefits and a better cost-benefit ratio than estimated to date.

  19. Optical methods for enabling focus cues in head-mounted displays for virtual and augmented reality

    NASA Astrophysics Data System (ADS)

    Hua, Hong

    2017-05-01

Developing head-mounted displays (HMDs) that offer uncompromised optical pathways to both digital and physical worlds without encumbrance and discomfort poses grand challenges from both technological and human-factors perspectives. Among these, minimizing visual discomfort is one of the key obstacles. A key contributing factor to visual discomfort is the inability to render proper focus cues in HMDs to stimulate natural eye accommodation responses, which leads to the well-known accommodation-convergence cue discrepancy problem. In this paper, I provide a summary of the various optical approaches toward enabling focus cues in HMDs for both virtual reality (VR) and augmented reality (AR).

  20. Do preschool children learn to read words from environmental prints?

    PubMed

    Zhao, Jing; Zhao, Pei; Weng, Xuchu; Li, Su

    2014-01-01

Parents and teachers worldwide believe that a visual environment rich with print can contribute to young children's literacy. Children seem to recognize words in familiar logos at an early age. However, most previous studies were carried out with alphabetic scripts. Alphabetic letters regularly correspond to phonological segments in a word and provide strong cues about the identity of the whole word. Thus it was not clear whether children can learn to read words by extracting visual word form information from environmental prints. To exclude the phonological-cue confound, this study tested children's knowledge of Chinese words embedded in familiar logos. Four environmental logos were employed and transformed into four versions with the contextual cues (i.e., features apart from the words themselves, such as the color, logo and font type cues) gradually minimized. Children aged from 3 to 5 were tested. We observed that children of all ages performed better when words were presented in highly familiar logos than when they were presented in a plain fashion, devoid of context. This advantage for familiar logos was also present when the contextual information was only partial. However, the role of the various cues in learning words changed with age. The color and logo cues had a larger effect in 3- and 4-year-olds than in 5-year-olds, while the font type cue played a greater role in 5-year-olds than in the other two groups. Our findings demonstrate that young children do not easily learn words by extracting their visual form information, even from familiar environmental prints. However, children aged 5 began to pay more attention to the visual form information of words in highly familiar logos than those aged 3 and 4.

  2. Starvation period and age affect the response of female Frankliniella occidentalis (Pergande) (Thysanoptera: Thripidae) to odor and visual cues.

    PubMed

    Davidson, Melanie M; Butler, Ruth C; Teulon, David A J

    2006-07-01

The effects of starvation or age on the walking or flying response of female Frankliniella occidentalis to visual and/or odor cues in two types of olfactometer were examined in the laboratory. The response of walking thrips starved for 0, 1, 4, or 24 h to an odor cue (1 μl of 10% p-anisaldehyde) was examined in a Y-tube olfactometer. The take-off and landing response of thrips (unknown age) starved for 0, 1, 4, 24, 48 or 72 h, or of thrips of different ages (2-3 days or 10-13 days post-adult emergence) starved for 24 h, to a visual cue (98 cm² yellow sticky trap) and/or an odor cue (0.5 or 1.0 ml p-anisaldehyde) was examined in a wind tunnel. More thrips walked up the odor-laden arm in the Y-tube when starved for at least 4 h (76%) than satiated thrips (58.7%) or those starved for 1 h (62.7%, P<0.05). In the wind tunnel experiments, the percentage of thrips that flew or landed on the sticky trap increased between satiated thrips (7.3% to fly, 3.3% on trap) and those starved for 4 h (81.2% to fly, 29% on trap) and decreased between thrips starved for 48 h (74.5% to fly, 23% on trap) and 72 h (56.5% to fly, 15.5% on trap, P<0.05). Of the thrips that flew, fewer younger thrips (38.8%) landed on a sticky trap containing a yellow visual cue than older thrips (70.4%, P<0.05), although a similar percentage of thrips flew regardless of age or type of cue present in the wind tunnel (average 44%, P>0.05).

  3. The Role of Visual Cues in Microgravity Spatial Orientation

    NASA Technical Reports Server (NTRS)

    Oman, Charles M.; Howard, Ian P.; Smith, Theodore; Beall, Andrew C.; Natapoff, Alan; Zacher, James E.; Jenkin, Heather L.

    2003-01-01

    In weightlessness, astronauts must rely on vision to remain spatially oriented. Although gravitational down cues are missing, most astronauts maintain a subjective vertical -a subjective sense of which way is up. This is evidenced by anecdotal reports of crewmembers feeling upside down (inversion illusions) or feeling that a floor has become a ceiling and vice versa (visual reorientation illusions). Instability in the subjective vertical direction can trigger disorientation and space motion sickness. On Neurolab, a virtual environment display system was used to conduct five interrelated experiments, which quantified: (a) how the direction of each person's subjective vertical depends on the orientation of the surrounding visual environment, (b) whether rolling the virtual visual environment produces stronger illusions of circular self-motion (circular vection) and more visual reorientation illusions than on Earth, (c) whether a virtual scene moving past the subject produces a stronger linear self-motion illusion (linear vection), and (d) whether deliberate manipulation of the subjective vertical changes a crewmember's interpretation of shading or the ability to recognize objects. None of the crew's subjective vertical indications became more independent of environmental cues in weightlessness. Three who were either strongly dependent on or independent of stationary visual cues in preflight tests remained so inflight. One other became more visually dependent inflight, but recovered postflight. Susceptibility to illusions of circular self-motion increased in flight. The time to the onset of linear self-motion illusions decreased and the illusion magnitude significantly increased for most subjects while free floating in weightlessness. These decreased toward one-G levels when the subject 'stood up' in weightlessness by wearing constant force springs. 
For several subjects, changing the relative direction of the subjective vertical in weightlessness-either by body rotation or by simply cognitively initiating a visual reorientation-altered the illusion of convexity produced when viewing a flat, shaded disc. It changed at least one person's ability to recognize previously presented two-dimensional shapes. Overall, results show that most astronauts become more dependent on dynamic visual motion cues and some become responsive to stationary orientation cues. The direction of the subjective vertical is labile in the absence of gravity. This can interfere with the ability to properly interpret shading, or to recognize complex objects in different orientations.

  4. Whether or not to eat: A controlled laboratory study of discriminative cueing effects on food intake in humans.

    PubMed

    Ridley-Siegert, Thomas L; Crombag, Hans S; Yeomans, Martin R

    2015-12-01

There is a wealth of data showing a large impact of food cues on human ingestion, yet most studies use pictures of food where the precise nature of the associations between the cue and food is unclear. To test whether novel cues associated with the opportunity of winning access to food images could also impact ingestion, 63 participants took part in a game in which novel visual cues signalled whether responding on a keyboard would win (a picture of) chocolate, crisps, or nothing. Thirty minutes later, participants were given an ad libitum snack-intake test during which the chocolate-paired cue, the crisp-paired cue, the non-winning cue and no cue were presented as labels on the food containers. The presence of these cues significantly altered overall intake of the snack foods: participants presented with food labelled with the cue that had been associated with winning chocolate ate significantly more than participants who had been given the same products labelled with the cue associated with winning nothing, and in the presence of the cue signalling the absence of food reward, participants tended to eat less than in all other conditions. Surprisingly, cue-dependent changes in food consumption were unaffected by participants' level of contingency awareness. These results suggest that visual cues that have been pre-associated with winning, but not consuming, a liked food reward modify food intake, consistent with current ideas that the abundance of food-associated cues may be one factor underlying the 'obesogenic environment'. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. An artificial neural network architecture for non-parametric visual odometry in wireless capsule endoscopy

    NASA Astrophysics Data System (ADS)

    Dimas, George; Iakovidis, Dimitris K.; Karargyris, Alexandros; Ciuti, Gastone; Koulaouzidis, Anastasios

    2017-09-01

Wireless capsule endoscopy is a non-invasive screening procedure of the gastrointestinal (GI) tract performed with an ingestible capsule endoscope (CE) the size of a large vitamin pill. Such endoscopes are equipped with a usually low-frame-rate color camera which enables visualization of the GI lumen and the detection of pathologies. The localization of commercially available CEs is performed in the 3D abdominal space using radio-frequency (RF) triangulation from external sensor arrays, in combination with transit time estimation. State-of-the-art approaches, such as magnetic localization, which have been experimentally proven more accurate than the RF approach, are still at an early stage. Recently, we have demonstrated that CE localization is feasible using solely visual cues and geometric models. However, such approaches depend on camera parameters, many of which are unknown. In this paper, the authors propose a novel non-parametric visual odometry (VO) approach to CE localization based on a feed-forward neural network architecture. The effectiveness of this approach in comparison to state-of-the-art geometric VO approaches is validated using a robotic-assisted in vitro experimental setup.
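    As a rough illustration of the core idea, regressing motion from visual features with a feed-forward network rather than a parametric camera model, the toy sketch below trains a one-hidden-layer network in plain NumPy. The features, target mapping, network size, and training setup are all hypothetical and far simpler than the architecture the paper describes:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical setup: two image-motion features per frame pair, and a scalar
    # displacement target following an assumed linear rule (for demonstration only).
    X = rng.uniform(-1, 1, (200, 2))
    y = 0.5 * X[:, 0] - 0.3 * X[:, 1]

    # One hidden layer of 8 tanh units, linear output
    W1 = rng.standard_normal((2, 8)) * 0.5
    b1 = np.zeros(8)
    W2 = rng.standard_normal((8, 1)) * 0.5
    b2 = np.zeros(1)

    def forward(X):
        h = np.tanh(X @ W1 + b1)
        return h, (h @ W2 + b2).ravel()

    _, pred0 = forward(X)
    loss_initial = np.mean((pred0 - y) ** 2)

    lr = 0.1
    for _ in range(5000):                    # plain full-batch gradient descent on MSE
        h, pred = forward(X)
        err = (pred - y)[:, None] / len(y)   # d(MSE)/d(pred), up to a factor of 2
        gW2 = h.T @ err
        gb2 = err.sum(axis=0)
        dh = (err @ W2.T) * (1 - h ** 2)     # backpropagate through tanh
        gW1 = X.T @ dh
        gb1 = dh.sum(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2

    _, pred = forward(X)
    loss_final = np.mean((pred - y) ** 2)    # should be far below loss_initial
    ```

    The appeal of such a non-parametric regressor is that nothing in it references focal length or other intrinsics; the network absorbs whatever camera-dependent mapping exists in the training data.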

  6. Rethinking human visual attention: spatial cueing effects and optimality of decisions by honeybees, monkeys and humans.

    PubMed

    Eckstein, Miguel P; Mack, Stephen C; Liston, Dorion B; Bogush, Lisa; Menzel, Randolf; Krauzlis, Richard J

    2013-06-07

    Visual attention is commonly studied by using visuo-spatial cues indicating probable locations of a target and assessing the effect of the validity of the cue on perceptual performance and its neural correlates. Here, we adapt a cueing task to measure spatial cueing effects on the decisions of honeybees and compare their behavior to that of humans and monkeys in a similarly structured two-alternative forced-choice perceptual task. Unlike the typical cueing paradigm in which the stimulus strength remains unchanged within a block of trials, for the monkey and human studies we randomized the contrast of the signal to simulate more real world conditions in which the organism is uncertain about the strength of the signal. A Bayesian ideal observer that weights sensory evidence from cued and uncued locations based on the cue validity to maximize overall performance is used as a benchmark of comparison against the three animals and other suboptimal models: probability matching, ignore the cue, always follow the cue, and an additive bias/single decision threshold model. We find that the cueing effect is pervasive across all three species but is smaller in size than that shown by the Bayesian ideal observer. Humans show a larger cueing effect than monkeys and bees show the smallest effect. The cueing effect and overall performance of the honeybees allows rejection of the models in which the bees are ignoring the cue, following the cue and disregarding stimuli to be discriminated, or adopting a probability matching strategy. Stimulus strength uncertainty also reduces the theoretically predicted variation in cueing effect with stimulus strength of an optimal Bayesian observer and diminishes the size of the cueing effect when stimulus strength is low. 
A more biologically plausible model that includes an additive bias to the sensory response from the cued location, although not mathematically equivalent to the optimal observer for the case of stimulus strength uncertainty, can approximate the benefits of the more computationally complex optimal Bayesian model. We discuss the implications of our findings for the field's common conceptualization of covert visual attention in the cueing task and what aspects, if any, might be unique to humans. Copyright © 2013 Elsevier Ltd. All rights reserved.
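    The Bayesian ideal observer described above, which weights sensory evidence from the cued and uncued locations by the cue validity, can be sketched for a simple two-location forced choice. The Gaussian response model, d' value, validity, and trial count below are assumptions for illustration, not the paper's exact parameters:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def ideal_observer(x_cued, x_uncued, validity, d_prime):
        """Bayesian decision: is the target at the cued or the uncued location?
        Each location yields a unit-variance Gaussian response; the target adds a
        mean shift of d_prime. The log prior implements the cue-validity weighting."""
        llr_cued = d_prime * x_cued - d_prime ** 2 / 2      # log LR target-at-cued
        llr_uncued = d_prime * x_uncued - d_prime ** 2 / 2  # log LR target-at-uncued
        log_post_cued = llr_cued + np.log(validity)
        log_post_uncued = llr_uncued + np.log(1 - validity)
        return log_post_cued >= log_post_uncued             # True -> report cued

    validity, d = 0.7, 1.5
    n = 20000
    target_at_cued = rng.random(n) < validity               # cue valid on 70% of trials
    x_c = rng.standard_normal(n) + d * target_at_cued
    x_u = rng.standard_normal(n) + d * (~target_at_cued)
    choice_cued = ideal_observer(x_c, x_u, validity, d)
    accuracy = np.mean(choice_cued == target_at_cued)
    ```

    Because the prior shifts the decision criterion toward the cued location, this observer does better on valid-cue trials and worse on invalid ones, which is exactly the cueing effect used as the benchmark against humans, monkeys and bees.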

  7. An Investigation of Visual, Aural, Motion and Control Movement Cues.

    ERIC Educational Resources Information Center

    Matheny, W. G.; And Others

    A study was conducted to determine the ways in which multi-sensory cues can be simulated and effectively used in the training of pilots. Two analytical bases, one called the stimulus environment approach and the other an information array approach, are developed along with a cue taxonomy. Cues are postulated on the basis of information gained from…

  8. A novel experimental method for measuring vergence and accommodation responses to the main near visual cues in typical and atypical groups.

    PubMed

    Horwood, Anna M; Riddell, Patricia M

    2009-01-01

    Binocular disparity, blur, and proximal cues drive convergence and accommodation. Disparity is considered to be the main vergence cue and blur the main accommodation cue. We have developed a remote haploscopic photorefractor to measure simultaneous vergence and accommodation objectively in a wide range of participants of all ages while fixating targets at between 0.3 and 2 m. By separating the three main near cues, we can explore their relative weighting in three-, two-, one-, and zero-cue conditions. Disparity can be manipulated by remote occlusion; blur cues manipulated by using either a Gabor patch or a detailed picture target; looming cues by either scaling or not scaling target size with distance. In normal orthophoric, emmetropic, symptom-free, naive visually mature participants, disparity was by far the most significant cue to both vergence and accommodation. Accommodation responses dropped dramatically if disparity was not available. Blur only had a clinically significant effect when disparity was absent. Proximity had very little effect. There was considerable interparticipant variation. We predict that relative weighting of near cue use is likely to vary between clinical groups and present some individual cases as examples. We are using this naturalistic tool to research strabismus, vergence and accommodation development, and emmetropization.

  10. Research on integration of visual and motion cues for flight simulation and ride quality investigation

    NASA Technical Reports Server (NTRS)

    Young, L. R.; Oman, C. M.; Curry, R. E.

    1977-01-01

Vestibular perception and the integration of several sensory inputs in simulation were studied. The relationship between tilt sensations induced by moving visual fields and those produced by actual body tilt is discussed. Linear vection studies were included, and the application of the vestibular model for perception of orientation based on motion cues is presented. Other areas examined include visual cues in the approach to landing and a comparison of linear and nonlinear washout filters using a model of the human vestibular system.

  11. The role of visual and mechanosensory cues in structuring forward flight in Drosophila melanogaster.

    PubMed

    Budick, Seth A; Reiser, Michael B; Dickinson, Michael H

    2007-12-01

    It has long been known that many flying insects use visual cues to orient with respect to the wind and to control their groundspeed in the face of varying wind conditions. Much less explored has been the role of mechanosensory cues in orienting insects relative to the ambient air. Here we show that Drosophila melanogaster, magnetically tethered so as to be able to rotate about their yaw axis, are able to detect and orient into a wind, as would be experienced during forward flight. Further, this behavior is velocity dependent and is likely subserved, at least in part, by the Johnston's organs, chordotonal organs in the antennae also involved in near-field sound detection. These wind-mediated responses may help to explain how flies are able to fly forward despite visual responses that might otherwise inhibit this behavior. Expanding visual stimuli, such as are encountered during forward flight, are the most potent aversive visual cues known for D. melanogaster flying in a tethered paradigm. Accordingly, tethered flies strongly orient towards a focus of contraction, a problematic situation for any animal attempting to fly forward. We show in this study that wind stimuli, transduced via mechanosensory means, can compensate for the aversion to visual expansion and thus may help to explain how these animals are indeed able to maintain forward flight.

  12. Reward processing in the value-driven attention network: reward signals tracking cue identity and location.

    PubMed

    Anderson, Brian A

    2017-03-01

    Through associative reward learning, arbitrary cues acquire the ability to automatically capture visual attention. Previous studies have examined the neural correlates of value-driven attentional orienting, revealing elevated activity within a network of brain regions encompassing the visual corticostriatal loop [caudate tail, lateral occipital complex (LOC) and early visual cortex] and intraparietal sulcus (IPS). Such attentional priority signals raise a broader question concerning how visual signals are combined with reward signals during learning to create a representation that is sensitive to the confluence of the two. This study examines reward signals during the cued reward training phase commonly used to generate value-driven attentional biases. High, compared with low, reward feedback preferentially activated the value-driven attention network, in addition to regions typically implicated in reward processing. Further examination of these reward signals within the visual system revealed information about the identity of the preceding cue in the caudate tail and LOC, and information about the location of the preceding cue in IPS, while early visual cortex represented both location and identity. The results reveal teaching signals within the value-driven attention network during associative reward learning, and further suggest functional specialization within different regions of this network during the acquisition of an integrated representation of stimulus value. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  13. Situational cues and momentary food environment predict everyday eating behavior in adults with overweight and obesity.

    PubMed

    Elliston, Katherine G; Ferguson, Stuart G; Schüz, Natalie; Schüz, Benjamin

    2017-04-01

Individual eating behavior is a risk factor for obesity and highly dependent on internal and external cues. Many studies also suggest that the food environment (i.e., food outlets) influences eating behavior. This study therefore examines the momentary food environment (at the time of eating) and the role of cues simultaneously in predicting everyday eating behavior in adults with overweight and obesity. This was an intensive longitudinal study using ecological momentary assessment (EMA) over 14 days in 51 adults with overweight and obesity (average body mass index = 30.77; SD = 4.85), with a total of 745 participant days of data. Eating (meals, high- or low-energy snacks) was assessed multiple times daily, alongside randomly timed assessments; cues and the momentary food environment were assessed during both assessment types. Random effects multinomial logistic regression showed that both internal (affect) and external (food availability, social situation, observing others eat) cues were associated with an increased likelihood of eating. The momentary food environment predicted meals and snacking over and above cues, with a higher likelihood of high-energy snacks when fast food restaurants were close by (odds ratio [OR] = 1.89, 95% confidence interval [CI] = 1.22, 2.93) and a higher likelihood of low-energy snacks in proximity to supermarkets (OR = 2.29, 95% CI = 1.38, 3.82). Real-time eating behavior, both in terms of main meals and snacks, is associated with internal and external cues in adults with overweight and obesity. In addition, perceptions of the momentary food environment influence eating choices, emphasizing the importance of an integrated perspective on eating behavior and obesity prevention. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
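    The reported effect sizes are exponentiated log-odds coefficients from the multinomial model. As a quick arithmetic check, the sketch below recovers the Wald confidence interval for the fast-food effect (OR = 1.89, 95% CI = 1.22-2.93) from the coefficient and standard error implied by those published numbers; no other data from the study are used:

    ```python
    import math

    # An odds ratio is exp(b) for a log-odds coefficient b;
    # the 95% Wald CI is exp(b +/- 1.96 * SE).
    b = math.log(1.89)                                    # coefficient implied by the OR
    se = (math.log(2.93) - math.log(1.22)) / (2 * 1.96)   # SE recovered from the CI width
    ci_low = math.exp(b - 1.96 * se)                      # ~1.22
    ci_high = math.exp(b + 1.96 * se)                     # ~2.93
    ```

    The same arithmetic applied to the supermarket effect (OR = 2.29) reproduces its interval, which is a useful sanity check when reading odds ratios back to the log-odds scale.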

  14. Vision System Measures Motions of Robot and External Objects

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2008-01-01

    A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. 
To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem (the subsystem that estimates the motion of the camera pair relative to the stationary background) utilizes the 3D information from stereoscopy and the 2D information from optical flow. It computes the relationship between the 3D and 2D motions and uses a least-mean-squares technique to estimate motion parameters. The least-mean-squares technique is suitable for real-time implementation when the number of external-moving-object pixels is smaller than the number of stationary-background pixels.
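    The least-mean-squares motion estimate described above can be sketched for the noiseless, pure-3D case. Under a small-rotation approximation, each stationary point's observed 3D displacement is linear in the unknown translation t and rotation ω, so all six motion parameters follow from one linear least-squares solve. The point cloud, motion values, and small-angle model below are synthetic assumptions, not the system's actual pipeline:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def estimate_motion(points, disp):
        """Least-squares 6-DOF camera-motion estimate (small-angle model).
        A stationary point P appears displaced by approximately -t - omega x P,
        which is linear in the unknown parameters (t, omega)."""
        n = points.shape[0]
        A = np.zeros((3 * n, 6))
        b = disp.reshape(-1)
        for i, (x, y, z) in enumerate(points):
            A[3*i:3*i+3, 0:3] = -np.eye(3)              # translation part: -I t
            A[3*i:3*i+3, 3:6] = np.array([[0, -z, y],   # rotation part: [P]_x omega,
                                          [z, 0, -x],   # since -omega x P = [P]_x omega
                                          [-y, x, 0]])
        params, *_ = np.linalg.lstsq(A, b, rcond=None)
        return params[:3], params[3:]                   # t, omega

    # Synthetic stationary background observed under a known camera motion
    P = rng.uniform(-1, 1, (50, 3)) + np.array([0.0, 0.0, 5.0])
    t_true = np.array([0.1, -0.05, 0.2])
    w_true = np.array([0.01, 0.02, -0.015])
    disp = -t_true - np.cross(w_true, P)                # noiseless 3D displacements
    t_est, w_est = estimate_motion(P, disp)
    ```

    In the real system the background pixels dominate, so this fit is robust to a minority of external-moving-object pixels, which appear as outliers to the solve; points that disagree with the fitted motion are then the candidates for independently moving objects.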

  15. Lack of visual field asymmetries for spatial cueing in reading parafoveal Chinese characters.

    PubMed

    Luo, Chunming; Dell'Acqua, Roberto; Proctor, Robert W; Li, Xingshan

    2015-12-01

    In two experiments, we investigated whether visual field (VF) asymmetries of spatial cueing are involved in reading parafoveal Chinese characters. These characters are different from linearly arranged alphabetic words in that they are logograms that are confined to a constant, square-shaped area and are composed of only a few radicals. We observed a cueing effect, but it did not vary with the VF in which the Chinese character was presented, regardless of whether the cue validity (the ratio of validly to invalidly cued targets) was 1:1 or 7:3. These results suggest that VF asymmetries of spatial cueing do not affect the reading of parafoveal Chinese characters, contrary to the reading of alphabetic words. The mechanisms of spatial attention in reading parafoveal English-like words and Chinese characters are discussed.

  16. Response-specifying cue for action interferes with perception of feature-sharing stimuli.

    PubMed

    Nishimura, Akio; Yokosawa, Kazuhiko

    2010-06-01

    Perceiving a visual stimulus is more difficult when a to-be-executed action is compatible with that stimulus, a phenomenon known as blindness to response-compatible stimuli. The present study explored how the factors constituting the action event (i.e., response-specifying cue, response intention, and response feature) affect the occurrence of this blindness effect. The response-specifying cue varied along the horizontal and vertical dimensions, while the response buttons were arranged diagonally. Participants responded based on one dimension, randomly determined on a trial-by-trial basis. The response intention varied along a single dimension, whereas the response location and the response-specifying cue varied within both vertical and horizontal dimensions simultaneously. Moreover, the compatibility between the visual stimulus and the response location and the compatibility between that stimulus and the response-specifying cue were determined separately. The blindness effect emerged exclusively based on the feature correspondence between the response-specifying cue of the action task and the visual target of the perceptual task. The size of this stimulus-stimulus (S-S) blindness effect did not differ significantly across conditions, showing no effect of response intention and response location. This finding emphasizes the effect of stimulus factors, rather than response factors, of the action event as a source of the blindness to response-compatible stimuli.

  17. Matching cue size and task properties in exogenous attention.

    PubMed

    Burnett, Katherine E; d'Avossa, Giovanni; Sapir, Ayelet

    2013-01-01

    Exogenous attention is an involuntary, reflexive orienting response that results in enhanced processing at the attended location. The standard view is that this enhancement generalizes across visual properties of a stimulus. We test whether the size of an exogenous cue sets the attentional field and whether this leads to different effects on stimuli with different visual properties. In a dual task with a random-dot kinematogram (RDK) in each quadrant of the screen, participants discriminated the direction of moving dots in one RDK and localized one red dot. Precues were uninformative and consisted of either a large or a small luminance-change frame. The motion discrimination task showed attentional effects following both large and small exogenous cues. The red dot probe localization task showed attentional effects following a small cue, but not a large cue. Two additional experiments showed that the different effects on localization were not due to reduced spatial uncertainty or suppression of RDK dots in the surround. These results indicate that the effects of exogenous attention depend on the size of the cue and the properties of the task, suggesting the involvement of receptive fields with different sizes in different tasks. These attentional effects are likely to be driven by bottom-up mechanisms in early visual areas.

  18. Biasing spatial attention with semantic information: an event coding approach.

    PubMed

    Amer, Tarek; Gozli, Davood G; Pratt, Jay

    2017-04-21

    We investigated the influence of conceptual processing on visual attention from the standpoint of the Theory of Event Coding (TEC). The theory makes two predictions: first, an important factor in determining the influence of event 1 on processing event 2 is whether features of event 1 are bound into a unified representation (i.e., selection or retrieval of event 1). Second, whether processing the two events facilitates or interferes with each other should depend on the extent to which their constituent features overlap. In two experiments, participants performed a visual-attention cueing task, in which the visual target (event 2) was preceded by a relevant or irrelevant explicit (e.g., "UP") or implicit (e.g., "HAPPY") spatial-conceptual cue (event 1). Consistent with TEC, we found that relevant explicit cues (which featurally overlap to a greater extent with the target) and implicit cues (which featurally overlap to a lesser extent), respectively, facilitated and interfered with target processing at compatible locations. Irrelevant explicit and implicit cues, on the other hand, both facilitated target processing, presumably because they were less likely to be selected or retrieved as an integrated and unified event file. We argue that such effects, often described as "attentional cueing", are better accounted for within the event coding framework.

  19. Contextual cueing in 3D visual search depends on representations in planar-, not depth-defined space.

    PubMed

    Zang, Xuelian; Shi, Zhuanghua; Müller, Hermann J; Conci, Markus

    2017-05-01

    Learning of spatial inter-item associations can speed up visual search in everyday life, an effect referred to as contextual cueing (Chun & Jiang, 1998). Whereas previous studies investigated contextual cueing primarily using 2D layouts, the current study examined how 3D depth influences contextual learning in visual search. In two experiments, the search items were presented evenly distributed across front and back planes in an initial training session. In the subsequent test session, the search items were either swapped between the front and back planes (Experiment 1) or between the left and right halves (Experiment 2) of the displays. The results showed that repeated spatial contexts were learned efficiently under 3D viewing conditions, facilitating search in the training sessions, in both experiments. Importantly, contextual cueing remained robust and virtually unaffected following the swap of depth planes in Experiment 1, but it was substantially reduced (to nonsignificant levels) following the left-right side swap in Experiment 2. This result pattern indicates that spatial, but not depth, inter-item variations limit effective contextual guidance. Restated, contextual cueing (even under 3D viewing conditions) is primarily based on 2D inter-item associations, while depth-defined spatial regularities are probably not encoded during contextual learning. Hence, changing the depth relations does not impact the cueing effect.

  20. The Effect of Retrieval Cues on Visual Preferences and Memory in Infancy: Evidence for a Four-Phase Attention Function.

    ERIC Educational Resources Information Center

    Bahrick, Lorraine E.; Hernandez-Reif, Maria; Pickens, Jeffrey N.

    1997-01-01

    Tested hypothesis from Bahrick and Pickens' infant attention model that retrieval cues increase memory accessibility and shift visual preferences toward greater novelty to resemble recent memories. Found that after retention intervals associated with remote or intermediate memory, previous familiarity preferences shifted to null or novelty…

  1. Specific and Nonspecific Neural Activity during Selective Processing of Visual Representations in Working Memory

    ERIC Educational Resources Information Center

    Oh, Hwamee; Leung, Hoi-Chung

    2010-01-01

    In this fMRI study, we investigated prefrontal cortex (PFC) and visual association regions during selective information processing. We recorded behavioral responses and neural activity during a delayed recognition task with a cue presented during the delay period. A specific cue ("Face" or "Scene") was used to indicate which one of the two…

  2. Sound Affects the Speed of Visual Processing

    ERIC Educational Resources Information Center

    Keetels, Mirjam; Vroomen, Jean

    2011-01-01

    The authors examined the effects of a task-irrelevant sound on visual processing. Participants were presented with revolving clocks at or around central fixation and reported the hand position of a target clock at the time an exogenous cue (1 clock turning red) or an endogenous cue (a line pointing toward 1 of the clocks) was presented. A…

  3. Effects of Visual Cues and Self-Explanation Prompts: Empirical Evidence in a Multimedia Environment

    ERIC Educational Resources Information Center

    Lin, Lijia; Atkinson, Robert K.; Savenye, Wilhelmina C.; Nelson, Brian C.

    2016-01-01

    The purpose of this study was to investigate the impacts of visual cues and different types of self-explanation prompts on learning, cognitive load, and intrinsic motivation in an interactive multimedia environment that was designed to deliver a computer-based lesson about the human cardiovascular system. A total of 126 college students were…

  4. Young Children's Visual Attention to Environmental Print as Measured by Eye Tracker Analysis

    ERIC Educational Resources Information Center

    Neumann, Michelle M.; Acosta, Camillia; Neumann, David L.

    2014-01-01

    Environmental print, such as signs and product labels, consists of both print and contextual cues designed to attract the visual attention of the reader. However, contextual cues may draw young children's attention away from the print, thus calling into question the value of environmental print in early reading development. Eye tracker technology was used to…

  5. Orienting of Visual Attention among Persons with Autism Spectrum Disorders: Reading versus Responding to Symbolic Cues

    ERIC Educational Resources Information Center

    Landry, Oriane; Mitchell, Peter L.; Burack, Jacob A.

    2009-01-01

    Background: Are persons with autism spectrum disorders (ASD) slower than typically developing individuals to read the meaning of a symbolic cue in a visual orienting paradigm? Methods: Participants with ASD (n = 18) and performance mental age (PMA) matched typically developing children (n = 16) completed two endogenous orienting conditions in…

  6. EFFECTS AND INTERACTIONS OF AUDITORY AND VISUAL CUES IN ORAL COMMUNICATION.

    ERIC Educational Resources Information Center

    KEYS, JOHN W.; AND OTHERS

    Visual and auditory cues were tested, separately and jointly, to determine the degree of their contribution to improving overall speech skills of the aurally handicapped. Eight sound intensity levels (from 6 to 15 decibels) were used in presenting phonetically balanced word lists and multiple-choice intelligibility lists to a sample of 24…

  7. Tachistoscopic exposure and masking of real three-dimensional scenes

    PubMed Central

    Pothier, Stephen; Philbeck, John; Chichka, David; Gajewski, Daniel A.

    2010-01-01

    Although there are many well-known forms of visual cues specifying absolute and relative distance, little is known about how visual space perception develops at small temporal scales. How much time does the visual system require to extract the information in the various absolute and relative distance cues? In this article, we describe a system that may be used to address this issue by presenting brief exposures of real, three-dimensional scenes, followed by a masking stimulus. The system is composed of an electronic shutter (a liquid crystal smart window) for exposing the stimulus scene, and a liquid crystal projector coupled with an electromechanical shutter for presenting the masking stimulus. This system can be used in both full- and reduced-cue viewing conditions, under monocular and binocular viewing, and at distances limited only by the testing space. We describe a configuration that may be used for studying the microgenesis of visual space perception in the context of visually directed walking. PMID:19182129

  8. Increasing Hand Washing Compliance With a Simple Visual Cue

    PubMed Central

    Boyer, Brian T.; Menachemi, Nir; Huerta, Timothy R.

    2014-01-01

    We tested the efficacy of a simple, visual cue to increase hand washing with soap and water. Automated towel dispensers in 8 public bathrooms were set to present a towel either with or without activation by users. We set the 2 modes to operate alternately for 10 weeks. Wireless sensors were used to record entry into bathrooms. Towel and soap consumption rates were checked weekly. There were 97 351 hand-washing opportunities across all restrooms. Towel use was 22.6% higher (P = .05) and soap use was 13.3% higher (P = .003) when the dispenser presented the towel without user activation than when activation was required. Results showed that a visual cue can increase hand-washing compliance in public facilities. PMID:24228670

  9. Feasibility and Preliminary Efficacy of Visual Cue Training to Improve Adaptability of Walking after Stroke: Multi-Centre, Single-Blind Randomised Control Pilot Trial

    PubMed Central

    Hollands, Kristen L.; Pelton, Trudy A.; Wimperis, Andrew; Whitham, Diane; Tan, Wei; Jowett, Sue; Sackley, Catherine M.; Wing, Alan M.; Tyson, Sarah F.; Mathias, Jonathan; Hensman, Marianne; van Vliet, Paulette M.

    2015-01-01

    Objectives: Given the importance of vision in the control of walking and evidence indicating that varied practice of walking improves mobility outcomes, this study sought to examine the feasibility and preliminary efficacy of varied walking practice in response to visual cues for the rehabilitation of walking following stroke. Design: This 3-arm parallel, multi-centre, assessor-blind, randomised control trial was conducted within outpatient neurorehabilitation services. Participants: Community-dwelling stroke survivors with walking speed <0.8 m/s, lower limb paresis, and no severe visual impairments. Intervention: Over-ground visual cue training (O-VCT), treadmill-based visual cue training (T-VCT), and usual care (UC), delivered by physiotherapists twice weekly for 8 weeks. Main outcome measures: Participants were randomised using computer-generated random permuted balanced blocks of randomly varying size. Recruitment, retention, adherence, adverse events, and mobility and balance were measured before randomisation, post-intervention, and at four weeks' follow-up. Results: Fifty-six participants took part (18 T-VCT, 19 O-VCT, 19 UC). Thirty-four completed treatment and follow-up assessments. Among those who completed, adherence was good, with 16 treatments provided over a median of 8.4, 7.5 and 9 weeks for T-VCT, O-VCT and UC respectively. No adverse events were reported. Post-treatment improvements in walking speed, symmetry, balance and functional mobility were seen in all treatment arms. Conclusions: Outpatient-based treadmill and over-ground walking adaptability practice using visual cues is feasible and may improve mobility and balance. Future studies should continue a carefully phased approach using identified methods to improve retention. Trial Registration: Clinicaltrials.gov NCT01600391. PMID:26445137

  10. Spatiotemporal gait changes with use of an arm swing cueing device in people with Parkinson's disease.

    PubMed

    Thompson, Elizabeth; Agada, Peter; Wright, W Geoffrey; Reimann, Hendrik; Jeka, John

    2017-10-01

    Impaired arm swing is a common motor symptom of Parkinson's disease (PD), and correlates with other gait impairments and increased risk of falls. Studies suggest that arm swing is not merely a passive consequence of trunk rotation during walking, but an active component of gait. Thus, techniques to enhance arm swing may improve gait characteristics. There is currently no portable device to measure arm swing and deliver immediate cues for larger movement. Here we report pilot testing of such a device, ArmSense (patented), using a crossover repeated-measures design. Twelve people with PD walked in a video-recorded gym space at self-selected comfortable and fast speeds. After baseline, cues were given either visually, using taped targets on the floor to increase step length, or through vibrations at the wrist, using ArmSense to increase arm swing amplitude. Uncued walking then followed, to assess retention. Subjects successfully reached cueing targets on >95% of steps. At a comfortable pace, step length increased during both visual cueing and ArmSense cueing. However, we observed increased medial-lateral trunk sway with visual cueing, possibly suggesting decreased gait stability. In contrast, no statistically significant changes in trunk sway were observed with ArmSense cues compared to baseline walking. At a fast pace, changes in gait parameters were less systematic. Even though ArmSense cues only specified changes in arm swing amplitude, we observed changes in multiple gait parameters, reflecting the active role arm swing plays in gait and suggesting a new therapeutic path to improve mobility in people with PD. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Auditory and visual capture during focused visual attention.

    PubMed

    Koelewijn, Thomas; Bronkhorst, Adelbert; Theeuwes, Jan

    2009-10-01

    It is well known that auditory and visual onsets presented at a particular location can capture a person's visual attention. However, the question of whether such attentional capture disappears when attention is focused endogenously beforehand has not yet been answered. Moreover, previous studies have not differentiated between capture by onsets presented at a nontarget (invalid) location and possible performance benefits occurring when the target location is (validly) cued. In this study, the authors modulated the degree of attentional focus by presenting endogenous cues with varying reliability and by displaying placeholders indicating the precise areas where the target stimuli could occur. By using not only valid and invalid exogenous cues but also neutral cues that provide temporal but no spatial information, they found performance benefits as well as costs when attention is not strongly focused. The benefits disappear when the attentional focus is increased. These results indicate that there is bottom-up capture of visual attention by irrelevant auditory and visual stimuli that cannot be suppressed by top-down attentional control. PsycINFO Database Record (c) 2009 APA, all rights reserved.

  12. Audio-visual sensory deprivation degrades visuo-tactile peri-personal space.

    PubMed

    Noel, Jean-Paul; Park, Hyeong-Dong; Pasqualini, Isabella; Lissek, Herve; Wallace, Mark; Blanke, Olaf; Serino, Andrea

    2018-05-01

    Self-perception is scaffolded upon the integration of multisensory cues on the body, the space surrounding the body (i.e., the peri-personal space; PPS), and from within the body. We asked whether reducing information available from external space would change: PPS, interoceptive accuracy, and self-experience. Twenty participants were exposed to 15 min of audio-visual deprivation and performed: (i) a visuo-tactile interaction task measuring their PPS; (ii) a heartbeat perception task measuring interoceptive accuracy; and (iii) a series of questionnaires related to self-perception and mental illness. These tasks were carried out in two conditions: while exposed to a standard sensory environment and under a condition of audio-visual deprivation. Results suggest that while PPS becomes ill defined after audio-visual deprivation, interoceptive accuracy is unaltered at a group-level, with some participants improving and some worsening in interoceptive accuracy. Interestingly, correlational individual differences analyses revealed that changes in PPS after audio-visual deprivation were related to interoceptive accuracy and self-reports of "unusual experiences" on an individual subject basis. Taken together, the findings argue for a relationship between the malleability of PPS, interoceptive accuracy, and an inclination toward aberrant ideation often associated with mental illness. Copyright © 2018. Published by Elsevier Inc.

  13. The effects of visual search efficiency on object-based attention

    PubMed Central

    Rosen, Maya; Cutrone, Elizabeth; Behrmann, Marlene

    2017-01-01

    The attentional prioritization hypothesis of object-based attention (Shomstein & Yantis in Perception & Psychophysics, 64, 41–51, 2002) suggests a two-stage selection process comprising an automatic spatial gradient and flexible strategic (prioritization) selection. The combined attentional priorities of these two stages of object-based selection determine the order in which participants will search the display for the presence of a target. The strategic process has often been likened to a prioritized visual search. By modifying the double-rectangle cueing paradigm (Egly, Driver, & Rafal in Journal of Experimental Psychology: General, 123, 161–177, 1994) and placing it in the context of a larger-scale visual search, we examined how the prioritization search is affected by search efficiency. By probing both targets located on the cued object and targets external to the cued object, we found that the attentional priority surrounding a selected object is strongly modulated by search mode. However, the ordering of the prioritization search is unaffected by search mode. The data also provide evidence that standard spatial visual search and object-based prioritization search may rely on distinct mechanisms. These results provide insight into the interactions between the mode of visual search and object-based selection, and help define the modulatory consequences of search efficiency for object-based attention. PMID:25832192

  14. Motion-base simulator study of control of an externally blown flap STOL transport aircraft after failure of an outboard engine during landing approach

    NASA Technical Reports Server (NTRS)

    Middleton, D. B.; Hurt, G. J., Jr.; Bergeron, H. P.; Patton, J. M., Jr.; Deal, P. L.; Champine, R. A.

    1975-01-01

    A moving-base simulator investigation of the problems of recovery and landing of a STOL aircraft after failure of an outboard engine during final approach was made. The approaches were made at 75 knots along a 6 deg glide slope. The engine was failed at low altitude and the option to go around was not allowed. The aircraft was simulated with each of three control systems, and it had four high-bypass-ratio fan-jet engines exhausting against large triple-slotted wing flaps to produce additional lift. A virtual-image out-the-window television display of a simulated STOL airport was operating during part of the investigation. Also, a simple heads-up flight director display superimposed on the airport landing scene was used by the pilots to make some of the recoveries following an engine failure. The results of the study indicated that the variation in visual cues and/or motion cues had little effect on the outcome of a recovery, but they did have some effect on the pilot's response and control patterns.

  15. [Visual cues as a therapeutic tool in Parkinson's disease. A systematic review].

    PubMed

    Muñoz-Hellín, Elena; Cano-de-la-Cuerda, Roberto; Miangolarra-Page, Juan Carlos

    2013-01-01

    Sensory stimuli, or sensory cues, are being used as a therapeutic tool for improving gait disorders in Parkinson's disease patients, but most studies seem to focus on auditory stimuli. The aim of this study was to conduct a systematic review of the use of visual cues for gait disorders, dual tasks during gait, freezing, and the incidence of falls in patients with Parkinson's disease, in order to derive therapeutic implications. We conducted a systematic review of the main databases, including the Cochrane Database of Systematic Reviews, TripDataBase, PubMed, Ovid MEDLINE, Ovid EMBASE and the Physiotherapy Evidence Database, covering 2005 to 2012, following the recommendations of the Consolidated Standards of Reporting Trials, and evaluated the quality of the included papers with the Downs & Black Quality Index. Twenty-one articles were finally included in this systematic review (with a total of 892 participants), with variable methodological quality, achieving an average of 17.27 points on the Downs and Black Quality Index (range: 11-21). Visual cues produce improvements in spatiotemporal gait parameters and turning execution, and reduce the occurrence of freezing and falls in Parkinson's disease patients. Visual cues also appear to benefit dual tasks during gait, reducing the interference of the second task. Further studies are needed to determine the preferred type of stimuli for each stage of the disease. Copyright © 2012 SEGG. Published by Elsevier Espana. All rights reserved.

  16. The behavioral context of visual displays in common marmosets (Callithrix jacchus).

    PubMed

    de Boer, Raïssa A; Overduin-de Vries, Anne M; Louwerse, Annet L; Sterck, Elisabeth H M

    2013-11-01

    Communication is important in social species, and may occur with the use of visual, olfactory or auditory signals. However, visual communication may be hampered in species that are arboreal, have elaborate facial coloring, and live in small groups. The common marmoset fits these criteria and may have limited visual communication. Nonetheless, some (contradictory) propositions concerning visual displays in the common marmoset have been made, yet quantitative data are lacking. The aim of this study was to assign a behavioral context to different visual displays using pre-post-event analyses. Focal observations were conducted on 16 captive adult and sub-adult marmosets in three different family groups. Based on behavioral elements with an unambiguous meaning, four different behavioral contexts were distinguished: aggression, fear, affiliation, and play behavior. Visual displays concerned behavior that included facial expressions, body postures, and pilo-erection of the fur. Visual displays related to aggression, fear, and play/affiliation were consistent with the literature. We propose that the visual display "pilo-erection tip of tail" is related to fear. Individuals receiving these fear signals showed a higher rate of affiliative behavior. This study indicates that several visual displays may provide cues or signals of particular social contexts. Since the three displays of fear elicited an affiliative response, they may communicate a request for anxiety reduction or signal an external referent. Concluding, common marmosets, despite being arboreal and living in small groups, use several visual displays to communicate with conspecifics, and their facial coloration may not hamper, but actually promote, the visibility of visual displays. © 2013 Wiley Periodicals, Inc.

  17. Iconic memory for the gist of natural scenes.

    PubMed

    Clarke, Jason; Mack, Arien

    2014-11-01

    Does iconic memory contain the gist of multiple scenes? Three experiments were conducted. In the first, four scenes from different basic-level categories were briefly presented in one of two conditions: a cue or a no-cue condition. The cue condition was designed to provide an index of the contents of iconic memory of the display. Subjects were more sensitive to scene gist in the cue condition than in the no-cue condition. In the second, the scenes came from the same basic-level category. We found no difference in sensitivity between the two conditions. In the third, six scenes from different basic level categories were presented in the visual periphery. Subjects were more sensitive to scene gist in the cue condition. These results suggest that scene gist is contained in iconic memory even in the visual periphery; however, iconic representations are not sufficiently detailed to distinguish between scenes coming from the same category. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Hunger-Dependent Enhancement of Food Cue Responses in Mouse Postrhinal Cortex and Lateral Amygdala.

    PubMed

    Burgess, Christian R; Ramesh, Rohan N; Sugden, Arthur U; Levandowski, Kirsten M; Minnig, Margaret A; Fenselau, Henning; Lowell, Bradford B; Andermann, Mark L

    2016-09-07

    The needs of the body can direct behavioral and neural processing toward motivationally relevant sensory cues. For example, human imaging studies have consistently found specific cortical areas with biased responses to food-associated visual cues in hungry subjects, but not in sated subjects. To obtain a cellular-level understanding of these hunger-dependent cortical response biases, we performed chronic two-photon calcium imaging in postrhinal association cortex (POR) and primary visual cortex (V1) of behaving mice. As in humans, neurons in mouse POR, but not V1, exhibited biases toward food-associated cues that were abolished by satiety. This emergent bias was mirrored by the innervation pattern of amygdalo-cortical feedback axons. Strikingly, these axons exhibited even stronger food cue biases and sensitivity to hunger state and trial history. These findings highlight a direct pathway by which the lateral amygdala may contribute to state-dependent cortical processing of motivationally relevant sensory cues. Published by Elsevier Inc.

  19. Differential effects of visual-spatial attention on response latency and temporal-order judgment.

    PubMed

    Neumann, O; Esselmann, U; Klotz, W

    1993-01-01

    Theorists from both classical structuralism and modern attention research have claimed that attention to a sensory stimulus enhances processing speed. However, they have used different operations to measure this effect, viz., temporal-order judgment (TOJ) and reaction-time (RT) measurement. We report two experiments that compared the effect of a spatial cue on RT and TOJ. Experiment 1 demonstrated that a nonmasked, peripheral cue (the brief brightening of a box) affected both RT and TOJ. However, the former effect was significantly larger than the latter. A masked cue had a smaller, but reliable, effect on TOJ. In Experiment 2, the effects of a masked cue on RT and TOJ were compared under identical stimulus conditions. While the cue had a strong effect on RT, it left TOJ unaffected. These results suggest that a spatial cue may have dissociable effects on response processes and the processes that lead to a conscious percept. Implications for the concept of direct parameter specification and for theories of visual attention are discussed.

  20. An investigation of motion base cueing and G-seat cueing on pilot performance in a simulator

    NASA Technical Reports Server (NTRS)

    Mckissick, B. T.; Ashworth, B. R.; Parrish, R. V.

    1983-01-01

    The effect of G-seat cueing (GSC) and motion-base cueing (MBC) on performance of a pursuit-tracking task is studied using the visual motion simulator (VMS) at Langley Research Center. The G-seat, the six-degree-of-freedom synergistic platform motion system, the visual display, the cockpit hardware, and the F-16 aircraft mathematical model are characterized. Each of 8 active F-15 pilots performed the 2-min-43-sec task 10 times for each experimental mode: no cue, GSC, MBC, and GSC + MBC; the results were analyzed statistically in terms of the RMS values of vertical and lateral tracking error. It is shown that lateral error is significantly reduced by either GSC or MBC, and that the combination of cues produces a further, significant decrease. Vertical error is significantly decreased by GSC with or without MBC, whereas MBC effects vary for different pilots. The pattern of these findings is roughly duplicated in measurements of stick force applied for roll and pitch correction.

  1. The contribution of dynamic visual cues to audiovisual speech perception.

    PubMed

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this end, we measured word identification performance in noise using unimodal auditory stimuli and audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or could have added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to the auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Social learning of predators in the dark: understanding the role of visual, chemical and mechanical information.

    PubMed

    Manassa, R P; McCormick, M I; Chivers, D P; Ferrari, M C O

    2013-08-22

    The ability of prey to observe and learn to recognize potential predators from the behaviour of nearby individuals can dramatically increase survival and, not surprisingly, is widespread across animal taxa. A range of sensory modalities are available for this learning, with visual and chemical cues being well-established modes of transmission in aquatic systems. The use of other sensory cues in mediating social learning in fishes, including mechano-sensory cues, remains unexplored. Here, we examine the role of different sensory cues in social learning of predator recognition, using juvenile damselfish (Amphiprion percula). Specifically, we show that a predator-naive observer can socially learn to recognize a novel predator when paired with a predator-experienced conspecific in total darkness. Furthermore, this study demonstrates that when threatened, individuals release chemical cues (known as disturbance cues) into the water. These cues induce an anti-predator response in nearby individuals; however, they do not facilitate learnt recognition of the predator. As such, another sensory modality, probably mechano-sensory in origin, is responsible for information transfer in the dark. This study highlights the diversity of sensory cues used by coral reef fishes in a social learning context.

  3. Global Repetition Influences Contextual Cueing

    PubMed Central

    Zang, Xuelian; Zinchenko, Artyom; Jia, Lina; Li, Hong

    2018-01-01

    Our visual system has a striking ability to improve visual search based on the learning of repeated ambient regularities, an effect named contextual cueing. Whereas most previous studies investigated the contextual cueing effect with equal numbers of repeated and non-repeated search displays per block, the current study focused on whether a global repetition frequency, formed by different presentation ratios between repeated and non-repeated configurations, influences the contextual cueing effect. Specifically, the number of repeated and non-repeated displays presented in each block was manipulated: 12:12, 20:4, 4:20, and 4:4 in Experiments 1–4, respectively. The results revealed a significant contextual cueing effect when the global repetition frequency was high (≥1:1 ratio) in Experiments 1, 2, and 4, in that processing of repeated displays was expedited relative to non-repeated displays. Nevertheless, the contextual cueing effect was reduced to a non-significant level when the repetition frequency dropped to 4:20 in Experiment 3. These results suggest that the presentation frequency of repeated relative to non-repeated displays can influence the strength of contextual cueing. In other words, global repetition statistics could be a crucial factor mediating the contextual cueing effect. PMID:29636716
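The contextual cueing effect described here is conventionally quantified as the reaction-time advantage for repeated over non-repeated displays. A minimal sketch of that measure, with hypothetical per-trial RTs (the actual study compares this across blocks and presentation ratios):

```python
def contextual_cueing_effect(rt_repeated, rt_novel):
    """Mean RT to non-repeated displays minus mean RT to repeated displays (ms).

    A positive value means repeated contexts were searched faster."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rt_novel) - mean(rt_repeated)

# Hypothetical search RTs in milliseconds.
repeated = [820, 790, 805]
novel = [900, 880, 905]
effect = contextual_cueing_effect(repeated, novel)
```

A reliably positive effect indicates learning of the repeated configurations; a reduction toward zero, as in the 4:20 condition, indicates the cueing effect has weakened.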

  4. Action Experience Changes Attention to Kinematic Cues

    PubMed Central

    Filippi, Courtney A.; Woodward, Amanda L.

    2016-01-01

    The current study used remote corneal reflection eye-tracking to examine the relationship between motor experience and action anticipation in 13-month-old infants. To measure online anticipation of actions, infants watched videos in which the actor’s hand provided kinematic information (in its orientation) about the type of object that the actor was going to reach for. The actor’s hand orientation either matched the orientation of a rod (congruent cue) or did not match the orientation of the rod (incongruent cue). To examine relations between motor experience and action anticipation, we used a 2 (reach first vs. observe first) × 2 (congruent kinematic cue vs. incongruent kinematic cue) between-subjects design. We show that 13-month-old infants in the observe-first condition spontaneously generate rapid online visual predictions to congruent hand orientation cues and do not visually anticipate when presented with incongruent cues. We further demonstrate that the speed with which these infants generate predictions to congruent motor cues is correlated with their own ability to pre-shape their hands. Finally, we demonstrate that following reaching experience, infants generate rapid predictions to both congruent and incongruent hand shape cues, suggesting that short-term experience changes attention to kinematics. PMID:26913012

  5. Global Repetition Influences Contextual Cueing.

    PubMed

    Zang, Xuelian; Zinchenko, Artyom; Jia, Lina; Assumpção, Leonardo; Li, Hong

    2018-01-01

    Our visual system has a striking ability to improve visual search based on the learning of repeated ambient regularities, an effect named contextual cueing. Whereas most previous studies investigated the contextual cueing effect with equal numbers of repeated and non-repeated search displays per block, the current study focused on whether a global repetition frequency, formed by different presentation ratios between repeated and non-repeated configurations, influences the contextual cueing effect. Specifically, the number of repeated and non-repeated displays presented in each block was manipulated: 12:12, 20:4, 4:20, and 4:4 in Experiments 1-4, respectively. The results revealed a significant contextual cueing effect when the global repetition frequency was high (≥1:1 ratio) in Experiments 1, 2, and 4, in that processing of repeated displays was expedited relative to non-repeated displays. Nevertheless, the contextual cueing effect was reduced to a non-significant level when the repetition frequency dropped to 4:20 in Experiment 3. These results suggest that the presentation frequency of repeated relative to non-repeated displays can influence the strength of contextual cueing. In other words, global repetition statistics could be a crucial factor mediating the contextual cueing effect.

  6. Exogenous attention influences visual short-term memory in infants.

    PubMed

    Ross-Sheehy, Shannon; Oakes, Lisa M; Luck, Steven J

    2011-05-01

    Two experiments examined the hypothesis that developing visual attentional mechanisms influence infants' Visual Short-Term Memory (VSTM) in the context of multiple items. Five- and 10-month-old infants (N = 76) received a change detection task in which arrays of three differently colored squares appeared and disappeared. On each trial one square changed color and one square was cued; sometimes the cued item was the changing item, and sometimes the changing item was not the cued item. Ten-month-old infants exhibited enhanced memory for the cued item when the cue was a spatial pre-cue (Experiment 1) and 5-month-old infants exhibited enhanced memory for the cued item when the cue was relative motion (Experiment 2). These results demonstrate for the first time that infants younger than 6 months can encode information in VSTM about individual items in multiple-object arrays, and that attention-directing cues influence both perceptual and VSTM encoding of stimuli in infants as in adults.

  7. Visual search and contextual cueing: differential effects in 10-year-old children and adults.

    PubMed

    Couperus, Jane W; Hunt, Ruskin H; Nelson, Charles A; Thomas, Kathleen M

    2011-02-01

    The development of contextual cueing specifically in relation to attention was examined in two experiments. Adult and 10-year-old participants completed a context cueing visual search task (Jiang & Chun, The Quarterly Journal of Experimental Psychology, 54A(4), 1105-1124, 2001) containing stimuli presented in an attended (e.g., red) and unattended (e.g., green) color. When the spatial configuration of stimuli in the attended and unattended color was invariant and consistently paired with the target location, adult reaction times improved, demonstrating learning. Learning also occurred if only the attended stimuli's configuration remained fixed. In contrast, while 10-year-olds, like adults, showed incrementally slower reaction times as the number of attended stimuli increased, they did not show learning in the standard paradigm. However, they did show learning when the ratio of attended to unattended stimuli was high, irrespective of the total number of attended stimuli. Findings suggest children show efficient attentional guidance by color in visual search but differences in contextual cueing.

  8. Effects of sensorineural hearing loss on visually guided attention in a multitalker environment.

    PubMed

    Best, Virginia; Marrone, Nicole; Mason, Christine R; Kidd, Gerald; Shinn-Cunningham, Barbara G

    2009-03-01

    This study asked whether or not listeners with sensorineural hearing loss have an impaired ability to use top-down attention to enhance speech intelligibility in the presence of interfering talkers. Listeners were presented with a target string of spoken digits embedded in a mixture of five spatially separated speech streams. The benefit of providing simple visual cues indicating when and/or where the target would occur was measured in listeners with hearing loss, listeners with normal hearing, and a control group of listeners with normal hearing who were tested at a lower target-to-masker ratio to equate their baseline (no cue) performance with the hearing-loss group. All groups received robust benefits from the visual cues. The magnitude of the spatial-cue benefit, however, was significantly smaller in listeners with hearing loss. Results suggest that reduced utility of selective attention for resolving competition between simultaneous sounds contributes to the communication difficulties experienced by listeners with hearing loss in everyday listening situations.

  9. How Ants Use Vision When Homing Backward.

    PubMed

    Schwarz, Sebastian; Mangan, Michael; Zeil, Jochen; Webb, Barbara; Wystrach, Antoine

    2017-02-06

    Ants can navigate over long distances between their nest and food sites using visual cues [1, 2]. Recent studies show that this capacity is undiminished when walking backward while dragging a heavy food item [3-5]. This challenges the idea that ants use egocentric visual memories of the scene for guidance [1, 2, 6]. Can ants use their visual memories of the terrestrial cues when going backward? Our results suggest that ants do not adjust their direction of travel based on the perceived scene while going backward. Instead, they maintain a straight direction using their celestial compass. This direction can be dictated by their path integrator [5] but can also be set using terrestrial visual cues after a forward peek. If the food item is too heavy to enable body rotations, ants moving backward drop their food on occasion, rotate and walk a few steps forward, return to the food, and drag it backward in a now-corrected direction defined by terrestrial cues. Furthermore, we show that ants can maintain their direction of travel independently of their body orientation. It thus appears that egocentric retinal alignment is required for visual scene recognition, but ants can translate this acquired directional information into a holonomic frame of reference, which enables them to decouple their travel direction from their body orientation and hence navigate backward. This reveals substantial flexibility and communication between different types of navigational information: from terrestrial to celestial cues and from egocentric to holonomic directional memories. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  10. Memory under pressure: secondary-task effects on contextual cueing of visual search.

    PubMed

    Annac, Efsun; Manginelli, Angela A; Pollmann, Stefan; Shi, Zhuanghua; Müller, Hermann J; Geyer, Thomas

    2013-11-04

    Repeated display configurations improve visual search. Recently, the question has arisen whether this contextual cueing effect (Chun & Jiang, 1998) is itself mediated by attention, both in terms of selectivity and processing resources deployed. While it is accepted that selective attention modulates contextual cueing (Jiang & Leung, 2005), there is an ongoing debate whether the cueing effect is affected by a secondary working memory (WM) task, specifically at which stage WM influences the cueing effect: the acquisition of configural associations (e.g., Travis, Mattingley, & Dux, 2013) versus the expression of learned associations (e.g., Manginelli, Langer, Klose, & Pollmann, 2013). The present study re-investigated this issue. Observers performed a visual search in combination with a spatial WM task. The latter was applied on either early or late search trials, so as to examine whether WM load hampers the acquisition of, or retrieval from, contextual memory. Additionally, the WM and search tasks were performed either temporally in parallel or in succession, so as to permit the effects of spatial WM load to be dissociated from those of executive load. The secondary WM task was found to affect cueing in late, but not early, experimental trials, though only when the search and WM tasks were performed in parallel. This pattern suggests that contextual cueing involves a spatial WM resource, with spatial WM providing a workspace linking the current search array with configural long-term memory; as a result, occupying this workspace by a secondary WM task hampers the expression of learned configural associations.

  11. The Relationship between Sitting and the Use of Symmetry As a Cue to Figure-Ground Assignment in 6.5-Month-Old Infants

    PubMed Central

    Ross-Sheehy, Shannon; Perone, Sammy; Vecera, Shaun P.; Oakes, Lisa M.

    2016-01-01

    Two experiments examined the relationship between emerging sitting ability and sensitivity to symmetry as a cue to figure-ground (FG) assignment in 6.5-month-old infants (N = 80). In each experiment, infants who could sit unassisted (as indicated by parental report in Experiment 1 and by an in-lab assessment in Experiment 2) exhibited sensitivity to symmetry as a cue to FG assignment, whereas non-sitting infants did not. Experiment 2 further revealed that sensitivity to this cue is not related to general cognitive abilities as indexed using a non-related visual habituation task. Results demonstrate an important relationship between motor development and visual perception and further suggest that the achievement of important motor milestones such as stable sitting may be related to qualitative changes in sensitivity to monocular depth assignment cues such as symmetry. PMID:27303326

  12. The Relationship between Sitting and the Use of Symmetry As a Cue to Figure-Ground Assignment in 6.5-Month-Old Infants.

    PubMed

    Ross-Sheehy, Shannon; Perone, Sammy; Vecera, Shaun P; Oakes, Lisa M

    2016-01-01

    Two experiments examined the relationship between emerging sitting ability and sensitivity to symmetry as a cue to figure-ground (FG) assignment in 6.5-month-old infants (N = 80). In each experiment, infants who could sit unassisted (as indicated by parental report in Experiment 1 and by an in-lab assessment in Experiment 2) exhibited sensitivity to symmetry as a cue to FG assignment, whereas non-sitting infants did not. Experiment 2 further revealed that sensitivity to this cue is not related to general cognitive abilities as indexed using a non-related visual habituation task. Results demonstrate an important relationship between motor development and visual perception and further suggest that the achievement of important motor milestones such as stable sitting may be related to qualitative changes in sensitivity to monocular depth assignment cues such as symmetry.

  13. Comparison of Congruence Judgment and Auditory Localization Tasks for Assessing the Spatial Limits of Visual Capture

    PubMed Central

    Bosen, Adam K.; Fleming, Justin T.; Brown, Sarah E.; Allen, Paul D.; O'Neill, William E.; Paige, Gary D.

    2016-01-01

    Vision typically has better spatial accuracy and precision than audition, and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small visual capture is likely to occur, and when disparity is large visual capture is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, congruence judgment assesses perceived unity between cues by having subjects report whether or not auditory and visual targets came from the same location. Second, auditory localization assesses the graded influence of vision on auditory spatial perception by having subjects point to the remembered location of an auditory target presented with a visual target. Previous research has shown that when both tasks are performed concurrently they produce similar measures of visual capture, but this may not hold when tasks are performed independently. Here, subjects alternated between tasks independently across three sessions. A Bayesian inference model of visual capture was used to estimate perceptual parameters for each session, which were compared across tasks. Results demonstrated that the range of audio-visual disparities over which visual capture was likely to occur were narrower in auditory localization than in congruence judgment, which the model indicates was caused by subjects adjusting their prior expectation that targets originated from the same location in a task-dependent manner. PMID:27815630
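The record does not specify the form of the Bayesian inference model, but audio-visual capture is commonly framed as causal inference over whether the two cues share a single source. The sketch below is a generic Gaussian version of that idea, not the authors' implementation; the sensory variances, spatial-prior variance, and common-source prior are all hypothetical values chosen only to make vision more precise than audition:

```python
import math

def norm_pdf(x, mu, var):
    """Gaussian density with mean mu and variance var."""
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def p_common(x_a, x_v, var_a=64.0, var_v=4.0, var_p=225.0, prior=0.5):
    """Posterior probability that auditory (x_a) and visual (x_v) location
    estimates arose from one common source (all parameters hypothetical)."""
    # Common cause: integrate the shared source location against a
    # zero-mean Gaussian spatial prior (closed form for Gaussians).
    denom = var_a * var_v + var_a * var_p + var_v * var_p
    l_one = math.exp(-0.5 * ((x_a - x_v) ** 2 * var_p
                             + x_a ** 2 * var_v
                             + x_v ** 2 * var_a) / denom) / (2 * math.pi * math.sqrt(denom))
    # Independent causes: each estimate is explained by its own source.
    l_two = norm_pdf(x_a, 0.0, var_a + var_p) * norm_pdf(x_v, 0.0, var_v + var_p)
    return prior * l_one / (prior * l_one + (1 - prior) * l_two)
```

With these settings, a small audio-visual disparity yields a high posterior probability of a common source (visual capture likely), while a large disparity drives it toward zero; a task-dependent shift in `prior` would narrow or widen the disparity range over which capture occurs, in the manner the record describes.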

  14. Self-Control and Impulsiveness in Nondieting Adult Human Females: Effects of Visual Food Cues and Food Deprivation

    ERIC Educational Resources Information Center

    Forzano, Lori-Ann B.; Chelonis, John J.; Casey, Caitlin; Forward, Marion; Stachowiak, Jacqueline A.; Wood, Jennifer

    2010-01-01

    Self-control can be defined as the choice of a larger, more delayed reinforcer over a smaller, less delayed reinforcer, and impulsiveness as the opposite. Previous research suggests that exposure to visual food cues affects adult humans' self-control. Previous research also suggests that food deprivation decreases adult humans' self-control. The…

  15. The Effects of Visual Imagery and Keyword Cues on Third-Grade Readers' Memory, Comprehension, and Vocabulary Knowledge

    ERIC Educational Resources Information Center

    Brooker, Heather Rogers

    2013-01-01

    It is estimated that nearly 70% of high school students in the United States need some form of reading remediation, with the most common need being the ability to comprehend the content and significance of the text (Biancarosa & Snow, 2004). Research findings support the use of visual imagery and keyword cues as effective comprehension…

  16. Snack intake is reduced using an implicit, high-level construal cue.

    PubMed

    Price, Menna; Higgs, Suzanne; Lee, Michelle

    2016-08-01

    Priming a high-level construal has been shown to enhance self-control and reduce preference for indulgent food. Subtle visual cues have been shown to enhance the effects of a priming procedure. The current study therefore examined the combined impact of construal level and a visual cue reminder on the consumption of energy-dense snacks. A student and community-based adult sample with a wide age and body mass index (BMI) range (N = 176) was randomly assigned to a high or low construal condition in which a novel symbol was embedded. Afterward, participants completed a taste test of ad libitum snack foods in the presence or absence of the symbol. The high (vs. the low) construal level prime successfully generated more abstract responses (p < .0001) and reduced intake when the cue-reminder was present (p = .02) but not when it was absent (p = .40). Priming high-level construal thinking reduces consumption of energy-dense snacks in the presence of a visual cue-reminder. This may be a practical technique for reducing overeating and has the potential to be extended to other unhealthy behaviors. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  17. Social categories shape the neural representation of emotion: evidence from a visual face adaptation task

    PubMed Central

    Otten, Marte; Banaji, Mahzarin R.

    2012-01-01

    A number of recent behavioral studies have shown that emotional expressions are differently perceived depending on the race of a face, and that perception of race cues is influenced by emotional expressions. However, neural processes related to the perception of invariant cues that indicate the identity of a face (such as race) are often described to proceed independently of processes related to the perception of cues that can vary over time (such as emotion). Using a visual face adaptation paradigm, we tested whether these behavioral interactions between emotion and race also reflect interdependent neural representation of emotion and race. We compared visual emotion aftereffects when the adapting face and ambiguous test face differed in race or not. Emotion aftereffects were much smaller in different race (DR) trials than same race (SR) trials, indicating that the neural representation of a facial expression is significantly different depending on whether the emotional face is black or white. It thus seems that invariable cues such as race interact with variable face cues such as emotion not just at a response level, but also at the level of perception and neural representation. PMID:22403531

  18. Commercial versus technical cues to position a new product: Do hedonic and functional/healthy packages differ?

    PubMed

    Vila-López, Natalia; Küster-Boluda, Inés

    2018-02-01

    Packaging attributes can be classified into two main blocks: visual/commercial attributes and informational/technical ones. In this framework, our objectives are: (i) to compare whether both kinds of attributes lead to equal responses (improvement in consumers' attitudes and product trial) and (ii) to compare whether they work equally when a hedonic or a healthy new product is launched into the young market. An experimental design was defined to reach both objectives. Two packaging attributes were manipulated orthogonally to introduce greater variation in people's perceptions: a visual cue (the color) and an informative cue (the claim/label). A third variable was introduced: hedonic (candy bars) versus functional/healthy products (juice with fruit and milk). In a laboratory, 300 young consumers chose and evaluated one of the different packages that were simulated (using different colors and labels). Our results show that both kinds of attributes are significant, but visual cues were more strongly associated with young consumers' positive attitudes towards the product and their intention to buy than technical cues. Results do not differ between the product categories. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Ultrasound visual feedback treatment and practice variability for residual speech sound errors

    PubMed Central

    Preston, Jonathan L.; McCabe, Patricia; Rivera-Campos, Ahmed; Whittle, Jessica L.; Landry, Erik; Maas, Edwin

    2014-01-01

    Purpose: The goals were to (1) test the efficacy of a motor-learning-based treatment that includes ultrasound visual feedback for individuals with residual speech sound errors, and (2) explore whether the addition of prosodic cueing facilitates speech sound learning. Method: A multiple-baseline single-subject design was used, replicated across 8 participants. For each participant, one sound context was treated with ultrasound plus prosodic cueing for 7 sessions, and another sound context was treated with ultrasound but without prosodic cueing for 7 sessions. Sessions included ultrasound visual feedback as well as non-ultrasound treatment. Word-level probes assessing untreated words were used to evaluate retention and generalization. Results: For most participants, increases in accuracy of target sound contexts at the word level were observed with the treatment program regardless of whether prosodic cueing was included. Generalization between onset singletons and clusters was observed, as well as generalization to sentence-level accuracy. There was evidence of retention during post-treatment probes, including at a two-month follow-up. Conclusions: A motor-based treatment program that includes ultrasound visual feedback can facilitate learning of speech sounds in individuals with residual speech sound errors. PMID:25087938

  20. Discourse intervention strategies in Alzheimer's disease: Eye-tracking and the effect of visual cues in conversation

    PubMed Central

    Brandão, Lenisa; Monção, Ana Maria; Andersson, Richard; Holmqvist, Kenneth

    2014-01-01

    Objective: The goal of this study was to investigate whether on-topic visual cues can serve as aids for the maintenance of discourse coherence and informativeness in autobiographical narratives of persons with Alzheimer's disease (AD). Methods: The experiment consisted of three randomized conversation conditions: one without prompts, showing a blank computer screen; an on-topic condition, showing a picture and a sentence about the conversation; and an off-topic condition, showing a picture and a sentence which were unrelated to the conversation. Speech was recorded while visual attention was examined using eye tracking to measure how long participants looked at cues and the face of the listener. Results: The results suggest that interventions using visual cues in the form of images and written information are useful for improving discourse informativeness in AD. Conclusion: This study demonstrated the potential of using images and short written messages as means of compensating for the cognitive deficits which underlie uninformative discourse in AD. Future studies should further investigate the efficacy of language interventions based on the use of these compensation strategies for AD patients and their family members and friends. PMID:29213914
