Zabierek, Kristina C; Gabor, Caitlin R
2016-09-01
Prey may use multiple sensory channels to detect predators, whose cues may differ in altered sensory environments, such as turbid conditions. Depending on the environment, prey may use cues in an additive/complementary manner or in a compensatory manner. First, to determine whether the purely aquatic Barton Springs salamander, Eurycea sosorum, shows an antipredator response to visual cues, we examined salamander activity during exposure to visual cues of either a predatory fish (Lepomis cyanellus) or a non-predatory fish (Etheostoma lepidum). Salamanders decreased activity in response to predator visual cues only. We then examined the antipredator response of these salamanders to all matched and mismatched combinations of chemical and visual cues of the same predatory and non-predatory fish in clear and low-turbidity conditions. Salamanders decreased activity in response to predator chemical cues whether matched with predator visual cues or mismatched with non-predator visual cues. Salamanders also increased latency to first move in response to predator chemical cues mismatched with non-predator visual cues. Across all treatment combinations, salamanders decreased activity more, and showed longer latency to first move, in clear than in turbid conditions. Our results indicate that salamanders under all conditions and treatments preferentially rely on chemical cues to determine antipredator behavior, although visual cues are potentially used in conjunction for latency to first move. Our results also have potential conservation implications, as antipredator behavior was reduced in turbid conditions. These results reveal the complexity of antipredator behavior in response to multiple cues under different environmental conditions, which is especially important when considering endangered species.
Determining the Effectiveness of Visual Input Enhancement across Multiple Linguistic Cues
ERIC Educational Resources Information Center
Comeaux, Ian; McDonald, Janet L.
2018-01-01
Visual input enhancement (VIE) increases the salience of grammatical forms, potentially facilitating acquisition through attention mechanisms. Native English speakers were exposed to an artificial language containing four linguistic cues (verb agreement, case marking, animacy, word order), with morphological cues either unmarked, marked in the…
Buresch, Kendra C; Ulmer, Kimberly M; Cramer, Corinne; McAnulty, Sarah; Davison, William; Mäthger, Lydia M; Hanlon, Roger T
2015-10-01
Cuttlefish use multiple camouflage tactics to evade their predators. Two common tactics are background matching (resembling the background to hinder detection) and masquerade (resembling an uninteresting or inanimate object to impede detection or recognition). We investigated how the distance and orientation of visual stimuli affected the choice between these two camouflage tactics. In the current experiments, cuttlefish were presented with three visual cues: a 2D horizontal floor, a 2D vertical wall, and a 3D object. Each was placed at several distances: directly beneath (in a circle whose diameter was one body length, BL); at 0 BL (i.e., directly beside, but not beneath, the cuttlefish); at 1 BL; and at 2 BL. Cuttlefish continued to respond to 3D visual cues from a greater distance than to a horizontal or vertical stimulus. It appears that background matching is chosen when visual cues are relevant only in the immediate benthic surroundings. However, for masquerade, objects located multiple body lengths away remained relevant for the choice of camouflage.
ERIC Educational Resources Information Center
Wei, Liew Tze; Sazilah, Salam
2012-01-01
This study investigated the effects of visual cues in multiple external representations (MER) environment on the learning performance of novices' program comprehension. Program codes and flowchart diagrams were used as dual representations in multimedia environment to deliver lessons on C-Programming. 17 field independent participants and 16 field…
Lönnstedt, Oona M; Munday, Philip L; McCormick, Mark I; Ferrari, Maud C O; Chivers, Douglas P
2013-09-01
Carbon dioxide (CO2) levels in the atmosphere and surface ocean are rising at an unprecedented rate due to sustained and accelerating anthropogenic CO2 emissions. Previous studies have documented that exposure to elevated CO2 causes impaired antipredator behavior by coral reef fish in response to chemical cues associated with predation. However, whether ocean acidification will impair visual recognition of common predators is currently unknown. This study examined whether sensory compensation in the presence of multiple sensory cues could reduce the impacts of ocean acidification on antipredator responses. When exposed to seawater enriched with levels of CO2 predicted for the end of this century (880 μatm CO2), prey fish completely lost their response to conspecific alarm cues. While the visual response to a predator was also affected by high CO2, it was not entirely lost. Fish exposed to elevated CO2 spent less time in shelter than current-day controls and did not exhibit antipredator signaling behavior (bobbing) when multiple predator cues were present. They did, however, reduce feeding rate and activity levels to the same level as controls. The results suggest that the response of fish to visual cues may partially compensate for the lack of response to chemical cues. Fish subjected to elevated CO2 levels, and exposed to chemical and visual predation cues simultaneously, responded with the same intensity as controls exposed to visual cues alone. However, these responses were still less than those of control fish simultaneously exposed to chemical and visual predation cues. Consequently, visual cues improve antipredator behavior of CO2-exposed fish, but do not fully compensate for the loss of response to chemical cues. The reduced ability to correctly respond to a predator will have ramifications for survival in encounters with predators in the field, which could have repercussions for population replenishment in acidified oceans.
Modeling the Development of Audiovisual Cue Integration in Speech Perception
Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.
2017-01-01
Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
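The GMM approach described here lends itself to a compact illustration. The following sketch fits a two-component mixture to simulated audiovisual tokens and then queries the posterior for an ambiguous token; the cue dimensions (voice onset time, lip aperture) and all numbers are illustrative assumptions, not the authors' data or code.

```python
# Sketch: unsupervised learning of two phonological categories from the
# joint distribution of an auditory cue (VOT, ms) and a visual cue
# (lip aperture, arbitrary units). All values are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
ba = rng.multivariate_normal([5, 0.4], [[25, 0], [0, 0.01]], 500)   # /ba/-like
pa = rng.multivariate_normal([50, 0.8], [[36, 0], [0, 0.01]], 500)  # /pa/-like
X = np.vstack([ba, pa])

gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(X)

# An ambiguous token between the categories yields a graded posterior that
# trades off auditory and visual evidence by their learned reliabilities.
print(gmm.predict_proba([[25.0, 0.65]]))
```

Because the mixture is fit without labels, the cue weights fall out of the learned covariances: the dimension whose category distributions overlap less ends up dominating the posterior.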
Draht, Fabian; Zhang, Sijie; Rayan, Abdelrahman; Schönfeld, Fabian; Wiskott, Laurenz; Manahan-Vaughan, Denise
2017-01-01
Spatial encoding in the hippocampus is based on a range of different input sources. To generate spatial representations, reliable sensory cues from the external environment are integrated with idiothetic cues, derived from self-movement, that enable path integration and directional perception. In this study, we examined to what extent idiothetic cues significantly contribute to spatial representations and navigation: we recorded place cells while rodents navigated towards two visually identical chambers in 180° orientation via two different paths in darkness and in the absence of reliable auditory or olfactory cues. Our goal was to generate a conflict between local visual and direction-specific information, and then to assess which strategy was prioritized in different learning phases. We observed that, in the absence of distal cues, place fields are initially controlled by local visual cues that override idiothetic cues, but that with multiple exposures to the paradigm, spaced at intervals of days, idiothetic cues become increasingly implemented in generating an accurate spatial representation. Taken together, these data support the conclusion that, in the absence of distal cues, local visual cues are prioritized in the generation of context-specific spatial representations through place cells, whereby idiothetic cues are deemed unreliable. With cumulative exposures to the environments, the animal learns to attend to subtle idiothetic cues to resolve the conflict between visual and direction-specific information.
Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel
2017-04-01
Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we used a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
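The τ variable at the center of this account is easy to make concrete. The sketch below computes the optical angle of an approaching object and shows that the angle divided by its rate of change approximates the true time to contact; the object width, speed, and starting distance are assumptions for illustration, not values from the study.

```python
# Sketch: tau = optical angle / rate of optical expansion approximates the
# true time to contact (TTC) for a constant-velocity approach.
import numpy as np

w, v, d0 = 1.8, 15.0, 60.0        # assumed vehicle width (m), speed (m/s), start (m)
t = np.linspace(0.0, 3.0, 301)
d = d0 - v * t                    # distance remaining at each time step

theta = 2 * np.arctan(w / (2 * d))   # instantaneous optical size (rad)
theta_dot = np.gradient(theta, t)    # rate of change of optical size

tau = theta / theta_dot              # tau estimate of remaining time
print(tau[100], (d / v)[100])        # ~3.0 s vs. true TTC of 3.0 s at t = 1 s
```

A heuristic observer, by contrast, would key on `theta[-1]` (final optical size) or the analogous final sound pressure level rather than on the ratio.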
Exogenous attention influences visual short-term memory in infants.
Ross-Sheehy, Shannon; Oakes, Lisa M; Luck, Steven J
2011-05-01
Two experiments examined the hypothesis that developing visual attentional mechanisms influence infants' Visual Short-Term Memory (VSTM) in the context of multiple items. Five- and 10-month-old infants (N = 76) received a change detection task in which arrays of three differently colored squares appeared and disappeared. On each trial one square changed color and one square was cued; sometimes the cued item was the changing item, and sometimes the changing item was not the cued item. Ten-month-old infants exhibited enhanced memory for the cued item when the cue was a spatial pre-cue (Experiment 1) and 5-month-old infants exhibited enhanced memory for the cued item when the cue was relative motion (Experiment 2). These results demonstrate for the first time that infants younger than 6 months can encode information in VSTM about individual items in multiple-object arrays, and that attention-directing cues influence both perceptual and VSTM encoding of stimuli in infants as in adults.
Task-relevant information is prioritized in spatiotemporal contextual cueing.
Higuchi, Yoko; Ueda, Yoshiyuki; Ogawa, Hirokazu; Saiki, Jun
2016-11-01
Implicit learning of visual contexts facilitates search performance, a phenomenon known as contextual cueing; however, little is known about contextual cueing under situations in which multidimensional regularities exist simultaneously. In everyday vision, different information, such as object identity and location, appears simultaneously and interacts with each other. We tested the hypothesis that, in contextual cueing, when multiple regularities are present, the regularities that are most relevant to our behavioral goals would be prioritized. Previous studies of contextual cueing have commonly used the visual search paradigm. However, this paradigm is not suitable for directing participants' attention to a particular regularity. Therefore, we developed a new paradigm, the "spatiotemporal contextual cueing paradigm," and manipulated task-relevant and task-irrelevant regularities. In four experiments, we demonstrated that task-relevant regularities were more responsible for search facilitation than task-irrelevant regularities. This finding suggests our visual behavior is focused on regularities that are relevant to our current goal.
Effects and interactions of auditory and visual cues in oral communication.
ERIC Educational Resources Information Center
Keys, John W.; And Others
Visual and auditory cues were tested, separately and jointly, to determine the degree of their contribution to improving overall speech skills of the aurally handicapped. Eight sound intensity levels (from 6 to 15 decibels) were used in presenting phonetically balanced word lists and multiple-choice intelligibility lists to a sample of 24…
'You see?' Teaching and learning how to interpret visual cues during surgery.
Cope, Alexandra C; Bezemer, Jeff; Kneebone, Roger; Lingard, Lorelei
2015-11-01
The ability to interpret visual cues is important in many medical specialties, including surgery, in which poor outcomes are largely attributable to errors of perception rather than poor motor skills. However, we know little about how trainee surgeons learn to make judgements in the visual domain. We explored how trainees learn visual cue interpretation in the operating room. A multiple case study design was used. Participants were postgraduate surgical trainees and their trainers. Data included observer field notes, and integrated video- and audio-recordings from 12 cases representing more than 11 hours of observation. A constant comparative methodology was used to identify dominant themes. Visual cue interpretation was a recurrent feature of trainer-trainee interactions and was achieved largely through the pedagogic mechanism of co-construction. Co-construction was a dialogic sequence between trainer and trainee in which they explored what they were looking at together to identify and name structures or pathology. Co-construction took two forms: 'guided co-construction', in which the trainer steered the trainee to see what the trainer was seeing, and 'authentic co-construction', in which neither trainer nor trainee appeared certain of what they were seeing and pieced together the information collaboratively. Whether the co-construction activity was guided or authentic appeared to be influenced by case difficulty and trainee seniority. Co-construction was shown to occur verbally, through discussion, and also through non-verbal exchanges in which gestures made with laparoscopic instruments contributed to the co-construction discourse. In the training setting, learning visual cue interpretation occurs in part through co-construction. Co-construction is a pedagogic phenomenon that is well recognised in the context of learning to interpret verbal information. In articulating the features of co-construction in the visual domain, this work enables the development of explicit pedagogic strategies for maximising trainees' learning of visual cue interpretation. This is relevant to multiple medical specialties in which judgements must be based on visual information.
Gallagher, Rosemary; Damodaran, Harish; Werner, William G; Powell, Wendy; Deutsch, Judith E
2016-08-19
Evidence-based virtual environments (VEs) that incorporate compensatory strategies such as cueing may change motor behavior and increase exercise intensity while also being engaging and motivating. The purpose of this study was to determine whether persons with Parkinson's disease (PD) and age-matched healthy adults responded to auditory and visual cueing embedded in a bicycling VE as a method to increase exercise intensity. We tested two groups of participants, persons with PD (n = 15) and age-matched healthy adults (n = 13), as they cycled on a stationary bicycle while interacting with a VE. Participants cycled under two conditions: auditory cueing (provided by a metronome) and visual cueing (represented as central road markers in the VE). The auditory condition had four trials in which auditory cues or the VE were presented alone or in combination. The visual condition had five trials in which the VE and visual cue rate presentation were manipulated. Data were analyzed by condition using factorial RM-ANOVAs with planned t-tests corrected for multiple comparisons. There were no differences in pedaling rates between groups for either the auditory or the visual cueing condition. Persons with PD increased their pedaling rate in the auditory (F = 4.78, p = 0.029) and visual cueing (F = 26.48, p < 0.001) conditions. Age-matched healthy adults also increased their pedaling rate in the auditory (F = 24.72, p < 0.001) and visual cueing (F = 40.69, p < 0.001) conditions. Trial-to-trial comparisons in the visual condition in age-matched healthy adults showed a step-wise increase in pedaling rate (p = 0.003 to p < 0.001). In contrast, persons with PD increased their pedaling rate only when explicitly instructed to attend to the visual cues (p < 0.001). An evidence-based cycling VE can modify pedaling rate in persons with PD and age-matched healthy adults. Persons with PD required attention directed to the visual cues in order to increase cycling intensity. The combination of the VE and auditory cues was neither additive nor interfering. These data provide preliminary evidence that embedding auditory and visual cues in a VE to alter cycling speed is a method of increasing exercise intensity that may promote fitness.
Charboneau, Evonne J.; Dietrich, Mary S.; Park, Sohee; Cao, Aize; Watkins, Tristan J; Blackford, Jennifer U; Benningfield, Margaret M.; Martin, Peter R.; Buchowski, Maciej S.; Cowan, Ronald L.
2013-01-01
Craving is a major motivator underlying drug use and relapse but the neural correlates of cannabis craving are not well understood. This study sought to determine whether visual cannabis cues increase cannabis craving and whether cue-induced craving is associated with regional brain activation in cannabis-dependent individuals. Cannabis craving was assessed in 16 cannabis-dependent adult volunteers while they viewed cannabis cues during a functional MRI (fMRI) scan. The Marijuana Craving Questionnaire was administered immediately before and after each of three cannabis cue-exposure fMRI runs. Blood-oxygenation-level-dependent (BOLD) fMRI signal intensity was determined in regions activated by cannabis cues to examine the relationship of regional brain activation to cannabis craving. Craving scores increased significantly following exposure to visual cannabis cues. Visual cues activated multiple brain regions, including inferior orbital frontal cortex, posterior cingulate gyrus, parahippocampal gyrus, hippocampus, amygdala, superior temporal pole, and occipital cortex. Craving scores at baseline and at the end of all three runs were significantly correlated with brain activation during the first fMRI run only, in the limbic system (including amygdala and hippocampus) and paralimbic system (superior temporal pole), and visual regions (occipital cortex). Cannabis cues increased craving in cannabis-dependent individuals and this increase was associated with activation in the limbic, paralimbic, and visual systems during the first fMRI run, but not subsequent fMRI runs. These results suggest that these regions may mediate visually cued aspects of drug craving. This study provides preliminary evidence for the neural basis of cue-induced cannabis craving and suggests possible neural targets for interventions aimed at treating cannabis dependence.
Overall gloss evaluation in the presence of multiple cues to surface glossiness.
Leloup, Frédéric B; Pointer, Michael R; Dutré, Philip; Hanselaer, Peter
2012-06-01
Human observers use the information offered by various visual cues when evaluating the glossiness of a surface. Several studies have demonstrated the effect of each single cue to glossiness, but little has been reported on how multiple cues are integrated for the perception of surface gloss. This paper reports on a psychophysical study with real stimuli that differ with respect to multiple visual gloss criteria. Four samples were presented to 15 observers under different conditions of illumination in a light booth, resulting in a series of 16 stimuli. Through pairwise comparisons, an overall gloss scale was derived, from which it could be concluded that both differences in the distinctness of the reflected image and differences in luminance affect gloss perception. However, an investigation of the observers' strategy for evaluating gloss indicated a dichotomy among observers. One group of observers used the distinctness-of-image as a principal cue to glossiness, while the second group evaluated gloss primarily from differences in luminance of both the specular highlight and the diffuse background. It could therefore be questioned whether surface gloss can be characterized with one single quantity, or whether a set of quantities is necessary to describe the gloss differences between objects.
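An overall gloss scale of the kind described is typically derived from paired-comparison data with Thurstonian scaling. The sketch below applies Case V scaling to a fabricated preference-count matrix; the counts are invented, and this is an assumed analysis, not necessarily the authors' exact procedure.

```python
# Sketch: Thurstone Case V scaling of 4 stimuli from pairwise gloss judgments.
import numpy as np
from scipy.stats import norm

# pref[i, j] = number of times stimulus i was judged glossier than stimulus j
# (fabricated counts for illustration)
pref = np.array([[ 0, 12, 14, 15],
                 [ 3,  0, 11, 13],
                 [ 1,  4,  0, 10],
                 [ 0,  2,  5,  0]])

n = pref + pref.T                                  # comparisons per pair
p = np.where(n > 0, pref / np.maximum(n, 1), 0.5)  # choice proportions
p = np.clip(p, 0.01, 0.99)                         # avoid infinite z-scores

z = norm.ppf(p)               # unit-normal transform of proportions
scale = z.mean(axis=1)        # Case V scale value per stimulus
print(scale - scale.min())    # interval gloss scale anchored at zero
```

Fitting one scale per observer group would expose exactly the dichotomy reported: distinctness-of-image observers and luminance-difference observers would produce differently ordered scales.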
Smell or vision? The use of different sensory modalities in predator discrimination.
Fischer, Stefan; Oberhummer, Evelyne; Cunha-Saraiva, Filipa; Gerber, Nina; Taborsky, Barbara
2017-01-01
Theory predicts that animals should adjust their escape responses to the perceived predation risk. The information animals obtain about potential predation risk may differ qualitatively depending on the sensory modality by which a cue is perceived. For instance, olfactory cues may reveal better information about the presence or absence of threats, whereas visual information can reliably transmit the position and potential attack distance of a predator. While this suggests a differential use of information perceived through the two sensory channels, the relative importance of visual vs. olfactory cues when distinguishing between different predation threats is still poorly understood. Therefore, we exposed individuals of the cooperatively breeding cichlid Neolamprologus pulcher to a standardized threat stimulus combined with either predator or non-predator cues presented either visually or chemically. We predicted that flight responses towards a threat stimulus are more pronounced if cues of dangerous rather than harmless heterospecifics are presented and that N. pulcher, being an aquatic species, relies more on olfaction when discriminating between dangerous and harmless heterospecifics. N. pulcher responded faster to the threat stimulus, reached a refuge faster and entered a refuge more readily when predator cues were perceived. Unexpectedly, the sensory modality used to perceive the cues did not affect the escape response or the duration of the recovery phase. This suggests that N. pulcher are able to discriminate heterospecific cues with similar acuity when using vision or olfaction. We discuss that this ability may be advantageous in aquatic environments where the visibility conditions strongly vary over time. The ability to rapidly discriminate between dangerous predators and harmless heterospecifics is crucial for the survival of prey animals. In seasonally fluctuating environments, sensory conditions may change over the year and may make the use of multiple sensory modalities for heterospecific discrimination highly beneficial. Here we compared the efficacy of visual and olfactory senses in the discrimination ability of the cooperatively breeding cichlid Neolamprologus pulcher. We presented individual fish with visual or olfactory cues of predators or harmless heterospecifics and recorded their flight response. When exposed to predator cues, individuals responded faster, reached a refuge faster and were more likely to enter the refuge. Unexpectedly, the olfactory and visual senses seemed to be equally efficient in this discrimination task, suggesting that seasonal variation of water conditions experienced by N. pulcher may necessitate the use of multiple sensory channels for the same task.
Are multiple visual short-term memory storages necessary to explain the retro-cue effect?
Makovski, Tal
2012-06-01
Recent research has shown that change detection performance is enhanced when, during the retention interval, attention is cued to the location of the upcoming test item. This retro-cue advantage has led some researchers to suggest that visual short-term memory (VSTM) is divided into a durable, limited-capacity storage and a more fragile, high-capacity storage. Consequently, performance is poor on the no-cue trials because fragile VSTM is overwritten by the test display and only durable VSTM is accessible under these conditions. In contrast, performance is improved in the retro-cue condition because attention keeps fragile VSTM accessible. The aim of the present study was to test the assumptions underlying this two-storage account. Participants were asked to encode an array of colors for a change detection task involving no-cue and retro-cue trials. A retro-cue advantage was found even when the cue was presented after a visual (Experiment 1) or a central (Experiment 2) interference. Furthermore, the magnitude of the interference was comparable between the no-cue and retro-cue trials. These data undermine the main empirical support for the two-storage account and suggest that the presence of a retro-cue benefit cannot be used to differentiate between different VSTM storages.
Steady-state visual evoked potentials as a research tool in social affective neuroscience
Wieser, Matthias J.; Miskovic, Vladimir; Keil, Andreas
2017-01-01
Like many other primates, humans place a high premium on social information transmission and processing. One important aspect of this information concerns the emotional state of other individuals, conveyed by distinct visual cues such as facial expressions, overt actions, or by cues extracted from the situational context. A rich body of theoretical and empirical work has demonstrated that these socio-emotional cues are processed by the human visual system in a prioritized fashion, in the service of optimizing social behavior. Furthermore, socio-emotional perception is highly dependent on situational contexts and previous experience. Here, we review current issues in this area of research and discuss the utility of the steady-state visual evoked potential (ssVEP) technique for addressing key empirical questions. Methodological advantages and caveats are discussed with particular regard to quantifying time-varying competition among multiple perceptual objects, trial-by-trial analysis of visual cortical activation, functional connectivity, and the control of low-level stimulus features. Studies on facial expression and emotional scene processing are summarized, with an emphasis on viewing faces and other social cues in emotional contexts, or when competing with each other. Further, because the ssVEP technique can be readily accommodated to studying the viewing of complex scenes with multiple elements, it enables researchers to advance theoretical models of socio-emotional perception, based on complex, quasi-naturalistic viewing situations.
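The measurement underlying these applications is the extraction of the EEG response at each stimulus's flicker "tag" frequency. A minimal sketch with a simulated signal follows; the sampling rate, tag frequency, and amplitudes are assumptions for illustration.

```python
# Sketch: recover ssVEP amplitude at a 15 Hz flicker tag from a noisy epoch.
import numpy as np

fs, f_tag, dur = 500, 15.0, 4.0                  # Hz, Hz, s (assumed values)
t = np.arange(0.0, dur, 1.0 / fs)
rng = np.random.default_rng(2)
eeg = 0.8 * np.sin(2 * np.pi * f_tag * t) + rng.normal(0.0, 1.0, t.size)

spec = np.abs(np.fft.rfft(eeg)) * 2 / t.size     # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
print(spec[np.argmin(np.abs(freqs - f_tag))])    # ~0.8, the tagged response
```

Tagging competing stimuli (e.g., a face and a scene) at different frequencies lets the same FFT separate their cortical responses on a single trial, which is what makes the technique attractive for quantifying competition among perceptual objects.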
Enhancing visual search abilities of people with intellectual disabilities.
Li-Tsang, Cecilia W P; Wong, Jackson K K
2009-01-01
This study aimed to evaluate the effects of cueing in a visual search paradigm for people with and without intellectual disabilities (ID). A total of 36 subjects (18 persons with ID and 18 persons with normal intelligence) were recruited using a convenience sampling method. A series of experiments was conducted to compare guided-cue strategies that added either motion contrast or an additional cue to a basic search task. Repeated-measures ANOVA and post hoc multiple comparison tests were used to compare each cue strategy. Results showed that the use of guided strategies was able to capture focal attention in an automatic manner in the ID group (Pillai's Trace = 5.99, p < 0.0001). Both guided-cue and guided-motion search tasks demonstrated functionally similar effects, confirming the non-specific character of salience. These findings suggested that the visual search efficiency of people with ID was greatly improved if the target was made salient using a cueing effect when the complexity of the display increased (i.e., set size increased). This study could have an important implication for the design of the visual search format of any computerized programs developed for people with ID for learning new tasks.
Nishimura, Akio; Yokosawa, Kazuhiko
2012-01-01
Tlauka and McKenna (2000) reported a reversal of the traditional stimulus-response compatibility (SRC) effect (faster responding to a stimulus presented on the same side than to one on the opposite side) when the stimulus appearing on one side of a display is a member of a superordinate unit that is largely on the opposite side. We investigated the effects of a visual cue that explicitly shows a superordinate unit, and of the assignment of multiple stimuli within each superordinate unit to one response, on the SRC effect based on superordinate-unit position. Three experiments revealed that stimulus-response assignment is critical, while the visual cue plays a minor role, in eliciting the SRC effect based on superordinate-unit position. Findings suggest bidirectional interaction between perception and action and simultaneous spatial stimulus coding according to multiple frames of reference, with the contribution of each coding to the SRC effect varying flexibly with task situations.
Zellin, Martina; Conci, Markus; von Mühlenen, Adrian; Müller, Hermann J
2011-10-01
Visual search for a target object is facilitated when the object is repeatedly presented within an invariant context of surrounding items ("contextual cueing"; Chun & Jiang, Cognitive Psychology, 36, 28-71, 1998). The present study investigated whether such invariant contexts can cue more than one target location. In a series of three experiments, we showed that contextual cueing is significantly reduced when invariant contexts are paired with two rather than one possible target location, whereas no contextual cueing occurs with three distinct target locations. Closer data inspection revealed that one "dominant" target always exhibited substantially more contextual cueing than did the other, "minor" target(s), which caused negative contextual-cueing effects. However, minor targets could benefit from the invariant context when they were spatially close to the dominant target. In sum, our experiments suggest that contextual cueing can guide visual attention to a spatially limited region of the display, only enhancing the detection of targets presented inside that region.
A systematic comparison between visual cues for boundary detection.
Mély, David A; Kim, Junkyung; McGill, Mason; Guo, Yuliang; Serre, Thomas
2016-03-01
The detection of object boundaries is a critical first step for many visual processing tasks. Multiple cues (we consider luminance, color, motion and binocular disparity) available in the early visual system may signal object boundaries but little is known about their relative diagnosticity and how to optimally combine them for boundary detection. This study thus aims at understanding how early visual processes inform boundary detection in natural scenes. We collected color binocular video sequences of natural scenes to construct a video database. Each scene was annotated with two full sets of ground-truth contours (one set limited to object boundaries and another set which included all edges). We implemented an integrated computational model of early vision that spans all considered cues, and then assessed their diagnosticity by training machine learning classifiers on individual channels. Color and luminance were found to be most diagnostic while stereo and motion were least. Combining all cues yielded a significant improvement in accuracy beyond that of any cue in isolation. Furthermore, the accuracy of individual cues was found to be a poor predictor of their unique contribution for the combination. This result suggested a complex interaction between cues, which we further quantified using regularization techniques. Our systematic assessment of the accuracy of early vision models for boundary detection together with the resulting annotated video dataset should provide a useful benchmark towards the development of higher-level models of visual processing.
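The diagnosticity analysis has a simple schematic form: train one classifier per cue channel, then one on all channels jointly. In the sketch below, random features stand in for the model's luminance, color, motion, and disparity channel outputs; the feature counts and effect sizes are assumptions, not the paper's data.

```python
# Sketch: per-cue vs. combined-cue boundary classifiers (synthetic features).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 2000
y = rng.integers(0, 2, n)           # boundary vs. non-boundary location
signal = {"luminance": 1.0, "color": 0.9, "motion": 0.4, "disparity": 0.3}
cues = {name: y[:, None] * s + rng.normal(0, 1, (n, 4))
        for name, s in signal.items()}

for name, X in cues.items():        # diagnosticity of each cue in isolation
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(f"{name:10s} {acc:.3f}")

X_all = np.hstack(list(cues.values()))
acc = cross_val_score(LogisticRegression(max_iter=1000), X_all, y, cv=5).mean()
print(f"{'combined':10s} {acc:.3f}")  # exceeds every single-cue accuracy
```

The paper's further point, that single-cue accuracy poorly predicts a cue's unique contribution, corresponds to comparing these scores against the accuracy drop when one cue's features are removed from `X_all`.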
Differentiating Visual from Response Sequencing during Long-term Skill Learning.
Lynch, Brighid; Beukema, Patrick; Verstynen, Timothy
2017-01-01
The dual-system model of sequence learning posits that during early learning there is an advantage for encoding sequences in sensory frames; however, it remains unclear whether this advantage extends to long-term consolidation. Using the serial RT task, we set out to distinguish the dynamics of learning sequential orders of visual cues from learning sequential responses. On each day, most participants learned a new mapping between a set of symbolic cues and responses made with one of four fingers, after which they were exposed to trial blocks of either randomly ordered cues or deterministic ordered cues (12-item sequence). Participants were randomly assigned to one of four groups (n = 15 per group): Visual sequences (same sequence of visual cues across training days), Response sequences (same order of key presses across training days), Combined (same serial order of cues and responses on all training days), and a Control group (a novel sequence each training day). Across 5 days of training, sequence-specific measures of response speed and accuracy improved faster in the Visual group than any of the other three groups, despite no group differences in explicit awareness of the sequence. The two groups that were exposed to the same visual sequence across days showed a marginal improvement in response binding that was not found in the other groups. These results indicate that there is an advantage, in terms of rate of consolidation across multiple days of training, for learning sequences of actions in a sensory representational space, rather than as motoric representations.
Perception of second- and third-order orientation signals and their interactions
Victor, Jonathan D.; Thengone, Daniel J.; Conte, Mary M.
2013-01-01
Orientation signals, which are crucial to many aspects of visual function, are more complex and varied in the natural world than in the stimuli typically used for laboratory investigation. Gratings and lines have a single orientation, but in natural stimuli, local features have multiple orientations, and multiple orientations can occur even at the same location. Moreover, orientation cues can arise not only from pairwise spatial correlations, but from higher-order ones as well. To investigate these orientation cues and how they interact, we examined segmentation performance for visual textures in which the strengths of different kinds of orientation cues were varied independently, while controlling potential confounds such as differences in luminance statistics. Second-order cues (the kind present in gratings) at different orientations are largely processed independently: There is no cancellation of positive and negative signals at orientations that differ by 45°. Third-order orientation cues are readily detected and interact only minimally with second-order cues. However, they combine across orientations in a different way: Positive and negative signals largely cancel if the orientations differ by 90°. Two additional elements are superimposed on this picture. First, corners play a special role. When second-order orientation cues combine to produce corners, they provide a stronger signal for texture segregation than can be accounted for by their individual effects. Second, while the object versus background distinction does not influence processing of second-order orientation cues, this distinction influences the processing of third-order orientation cues.
Saunders, Jeffrey A.
2014-01-01
Direction of self-motion during walking is indicated by multiple cues, including optic flow, nonvisual sensory cues, and motor prediction. I measured the reliability of perceived heading from visual and nonvisual cues during walking, and whether cues are weighted in an optimal manner. I used a heading alignment task to measure perceived heading during walking. Observers walked toward a target in a virtual environment with and without global optic flow. The target was simulated to be infinitely far away, so that it did not provide direct feedback about direction of self-motion. Variability in heading direction was low even without optic flow, with average RMS error of 2.4°. Global optic flow reduced variability to 1.9°–2.1°, depending on the structure of the environment. The small amount of variance reduction was consistent with optimal use of visual information. The relative contribution of visual and nonvisual information was also measured using cue conflict conditions. Optic flow specified a conflicting heading direction (±5°), and bias in walking direction was used to infer relative weighting. Visual feedback influenced heading direction by 16%–34% depending on scene structure, with more effect with dense motion parallax. The weighting of visual feedback was close to the predictions of an optimal integration model given the observed variability measures.
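The "optimal integration" benchmark invoked here is reliability-weighted averaging: the visual weight should equal w_v = var_n / (var_v + var_n), and the fused variance should be var_v * var_n / (var_v + var_n). A minimal check against the reported numbers (nonvisual SD of 2.4° and a visual weight near the upper reported value of 0.34; the implied visual SD is a derived assumption):

```python
# Sketch: does a ~0.34 visual weight predict the observed combined SD (~1.9-2.1 deg)?
import math

sd_n = 2.4            # reported nonvisual (no-flow) heading SD, degrees
w_v = 0.34            # visual weight inferred from cue-conflict bias

var_n = sd_n ** 2
var_v = var_n * (1 - w_v) / w_v                       # visual variance implied by w_v
sd_fused = math.sqrt(var_v * var_n / (var_v + var_n))
print(f"predicted combined SD: {sd_fused:.2f} deg")   # ~1.95, within the 1.9-2.1 range
```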
Motivation and short-term memory in visual search: Attention's accelerator revisited.
Schneider, Daniel; Bonmassar, Claudia; Hickey, Clayton
2018-05-01
A cue indicating the possibility of cash reward will cause participants to perform memory-based visual search more efficiently. A recent study has suggested that this performance benefit might reflect the use of multiple memory systems: when needed, participants may maintain the to-be-remembered object in both long-term and short-term visual memory, with this redundancy benefitting target identification during search (Reinhart, McClenahan & Woodman, 2016). Here we test this compelling hypothesis. We had participants complete a memory-based visual search task involving a reward cue that either preceded presentation of the to-be-remembered target (pre-cue) or followed it (retro-cue). Following earlier work, we tracked memory representation using two components of the event-related potential (ERP): the contralateral delay activity (CDA), reflecting short-term visual memory, and the anterior P170, reflecting long-term storage. We additionally tracked attentional preparation and deployment in the contingent negative variation (CNV) and N2pc, respectively. Results show that only the reward pre-cue impacted our ERP indices of memory. However, both types of cue elicited a robust CNV, reflecting an influence on task preparation, both had equivalent impact on deployment of attention to the target, as indexed in the N2pc, and both had equivalent impact on visual search behavior. Reward prospect thus has an influence on memory-guided visual search, but this does not appear to be necessarily mediated by a change in the visual memory representations indexed by CDA. Our results demonstrate that the impact of motivation on search is not a simple product of improved memory for target templates.
Rouinfar, Amy; Agra, Elise; Larson, Adam M.; Rebello, N. Sanjay; Loschky, Lester C.
2014-01-01
This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants’ attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants’ verbal responses were used to determine their accuracy. This study produced two major findings. First, short duration visual cues which draw attention to solution-relevant information and aid in the organizing and integrating of it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers’ attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions.
Iconic memory for the gist of natural scenes.
Clarke, Jason; Mack, Arien
2014-11-01
Does iconic memory contain the gist of multiple scenes? Three experiments were conducted. In the first, four scenes from different basic-level categories were briefly presented in one of two conditions: a cue or a no-cue condition. The cue condition was designed to provide an index of the contents of iconic memory of the display. Subjects were more sensitive to scene gist in the cue condition than in the no-cue condition. In the second, the scenes came from the same basic-level category. We found no difference in sensitivity between the two conditions. In the third, six scenes from different basic-level categories were presented in the visual periphery. Subjects were more sensitive to scene gist in the cue condition. These results suggest that scene gist is contained in iconic memory even in the visual periphery; however, iconic representations are not sufficiently detailed to distinguish between scenes coming from the same category.
Bidelman, Gavin M
2016-10-01
Musical training is associated with behavioral and neurophysiological enhancements in auditory processing for both musical and nonmusical sounds (e.g., speech). Yet, whether the benefits of musicianship extend beyond enhancements to auditory-specific skills and impact multisensory (e.g., audiovisual) processing has yet to be fully validated. Here, we investigated multisensory integration of auditory and visual information in musicians and nonmusicians using a double-flash illusion, whereby the presentation of multiple auditory stimuli (beeps) concurrent with a single visual object (flash) induces an illusory perception of multiple flashes. We parametrically varied the onset asynchrony between auditory and visual events (leads and lags of ±300 ms) to quantify participants' "temporal window" of integration, i.e., stimuli in which auditory and visual cues were fused into a single percept. Results show that musically trained individuals were both faster and more accurate at processing concurrent audiovisual cues than their nonmusician peers; nonmusicians had a higher susceptibility for responding to audiovisual illusions and perceived double flashes over an extended range of onset asynchronies compared to trained musicians. Moreover, temporal window estimates indicated that musicians' windows (<100 ms) were ~2-3× shorter than nonmusicians' (~200 ms), suggesting more refined multisensory integration and audiovisual binding. Collectively, findings indicate a more refined binding of auditory and visual cues in musically trained individuals. We conclude that experience-dependent plasticity of intensive musical experience extends beyond simple listening skills, improving multimodal processing and the integration of multiple sensory systems in a domain-general manner.
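A temporal window estimate of the kind reported is commonly obtained by fitting a tuning curve to illusion rates across onset asynchronies and reading off its width. The sketch below uses fabricated response rates; the data, the Gaussian form, and the width criterion are assumptions, not the study's exact procedure.

```python
# Sketch: fit a Gaussian to double-flash illusion rates vs. audiovisual SOA
# and report the window width (full width at half maximum).
import numpy as np
from scipy.optimize import curve_fit

soa = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], float)       # ms
p_illusion = np.array([0.05, 0.15, 0.55, 0.80, 0.90, 0.75, 0.50, 0.10, 0.05])

def gauss(x, a, mu, sigma):
    return a * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

(a, mu, sigma), _ = curve_fit(gauss, soa, p_illusion, p0=[1.0, 0.0, 100.0])
print(f"temporal window (FWHM): {2.355 * abs(sigma):.0f} ms")
```

On this logic, a musician's narrower tuning curve yields a smaller FWHM, the <100 ms vs. ~200 ms contrast reported above.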
Audio-visual speech perception: a developmental ERP investigation
Knowland, Victoria CP; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael SC
2014-01-01
Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development.
van Moorselaar, Dirk; Olivers, Christian N L; Theeuwes, Jan; Lamme, Victor A F; Sligte, Ilja G
2015-11-01
Visual short-term memory (VSTM) performance is enhanced when the to-be-tested item is cued after encoding. This so-called retro-cue benefit is typically accompanied by a cost for the noncued items, suggesting that information is lost from VSTM upon presentation of a retrospective cue. Here we assessed whether noncued items can be restored to VSTM when made relevant again by a subsequent second cue. We presented either 1 or 2 consecutive retro-cues (80% valid) during the retention interval of a change-detection task. Relative to no cue, a valid cue increased VSTM capacity by 2 items, while an invalid cue decreased capacity by 2. Importantly, when a second, valid cue followed an invalid cue, capacity regained 2 items, so that performance was back on par. In addition, when the second cue was also invalid, there was no extra loss of information from VSTM, suggesting that those items that survived a first invalid cue, automatically also survived a second. We conclude that these results are in support of a very versatile VSTM system, in which memoranda adopt different representational states depending on whether they are deemed relevant now, in the future, or not at all. We discuss a neural model that is consistent with this conclusion.
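The capacity figures here are expressed in items, as is standard for single-probe change detection via Cowan's K. A minimal sketch of that computation follows; the hit and false-alarm rates are made up for illustration, not the study's data.

```python
# Sketch: Cowan's K capacity estimate for single-probe change detection.
def cowan_k(hit_rate: float, fa_rate: float, set_size: int) -> float:
    """K = set size x (hit rate - false-alarm rate)."""
    return set_size * (hit_rate - fa_rate)

print(cowan_k(0.85, 0.15, set_size=8))   # hypothetical valid-cue trials: K = 5.6
print(cowan_k(0.60, 0.25, set_size=8))   # hypothetical invalid-cue trials: K = 2.8
```

The cue effects reported above correspond to shifts of about ±2 in K relative to the no-cue baseline.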
Visual cues and perceived reachability.
Gabbard, Carl; Ammar, Diala
2005-12-01
A rather consistent finding in studies comparing perceived (imagined) with actual movement in a reaching paradigm is the tendency to overestimate reach at midline. Explanations of this behavior have focused primarily on perceptions of postural constraints and the notion that individuals calibrate reachability in reference to multiple degrees of freedom, also known as the whole-body explanation. The present study examined the role of visual information in the form of binocular and monocular cues in perceived reachability. Right-handed participants judged the reachability of visual targets at midline with both eyes open, with the dominant eye occluded, and with the non-dominant eye occluded. Results indicated that participants were relatively accurate, with no significant differences in total error across conditions. Analysis of the direction of error (mean bias) revealed effective accuracy across conditions, with only a marginal distinction between monocular and binocular conditions. Therefore, within the task conditions of this experiment, it appears that binocular and monocular cues each provide sufficient visual information for effective judgments of perceived reach at midline.
Ultrasound visual feedback treatment and practice variability for residual speech sound errors
Preston, Jonathan L.; McCabe, Patricia; Rivera-Campos, Ahmed; Whittle, Jessica L.; Landry, Erik; Maas, Edwin
2014-01-01
Purpose: The goals were to (1) test the efficacy of a motor-learning-based treatment that includes ultrasound visual feedback for individuals with residual speech sound errors, and (2) explore whether the addition of prosodic cueing facilitates speech sound learning. Method: A multiple-baseline single-subject design was used, replicated across 8 participants. For each participant, one sound context was treated with ultrasound plus prosodic cueing for 7 sessions, and another sound context was treated with ultrasound but without prosodic cueing for 7 sessions. Sessions included ultrasound visual feedback as well as non-ultrasound treatment. Word-level probes assessing untreated words were used to evaluate retention and generalization. Results: For most participants, increases in accuracy of target sound contexts at the word level were observed with the treatment program regardless of whether prosodic cueing was included. Generalization between onset singletons and clusters was observed, as well as generalization to sentence-level accuracy. There was evidence of retention during post-treatment probes, including at a two-month follow-up. Conclusions: A motor-based treatment program that includes ultrasound visual feedback can facilitate learning of speech sounds in individuals with residual speech sound errors. PMID:25087938
Integration trumps selection in object recognition.
Saarela, Toni P; Landy, Michael S
2015-03-30
Finding and recognizing objects is a fundamental task of vision. Objects can be defined by several "cues" (color, luminance, texture, etc.), and humans can integrate sensory cues to improve detection and recognition [1-3]. Cortical mechanisms fuse information from multiple cues [4], and shape-selective neural mechanisms can display cue invariance by responding to a given shape independent of the visual cue defining it [5-8]. Selective attention, in contrast, improves recognition by isolating a subset of the visual information [9]. Humans can select single features (red or vertical) within a perceptual dimension (color or orientation), giving faster and more accurate responses to items having the attended feature [10, 11]. Attention elevates neural responses and sharpens neural tuning to the attended feature, as shown by studies in psychophysics and modeling [11, 12], imaging [13-16], and single-cell and neural population recordings [17, 18]. Besides single features, attention can select whole objects [19-21]. Objects are among the suggested "units" of attention because attention to a single feature of an object causes the selection of all of its features [19-21]. Here, we pit integration against attentional selection in object recognition. We find, first, that humans can integrate information near optimally from several perceptual dimensions (color, texture, luminance) to improve recognition. They cannot, however, isolate a single dimension even when the other dimensions provide task-irrelevant, potentially conflicting information. For object recognition, it appears that there is mandatory integration of information from multiple dimensions of visual experience. The advantage afforded by this integration, however, comes at the expense of attentional selection. Copyright © 2015 Elsevier Ltd. All rights reserved.
Huynh, Duong L; Tripathy, Srimant P; Bedell, Harold E; Ögmen, Haluk
2015-01-01
Human memory is content addressable; that is, contents of the memory can be accessed using partial information about the bound features of a stored item. In this study, we used a cross-feature cuing technique to examine how the human visual system encodes, binds, and retains information about multiple stimulus features within a set of moving objects. We sought to characterize the roles of three different features (position, color, and direction of motion, the latter two of which are processed preferentially within the ventral and dorsal visual streams, respectively) in the construction and maintenance of object representations. We investigated the extent to which these features are bound together across the following processing stages: during stimulus encoding, sensory (iconic) memory, and visual short-term memory. Whereas all features examined here can serve as cues for addressing content, their effectiveness shows asymmetries and varies according to cue-report pairings and the stage of information processing and storage. Position-based indexing theories predict that position should be more effective as a cue compared to other features. While we found a privileged role for position as a cue at the stimulus-encoding stage, position was not the privileged cue at the sensory and visual short-term memory stages. Instead, the pattern that emerged from our findings is one that mirrors the parallel processing streams in the visual system. This stream-specific binding and cuing effectiveness manifests itself in all three stages of information processing examined here. Finally, we find that the Leaky Flask model proposed in our previous study is applicable to all three features.
Attention to multiple locations is limited by spatial working memory capacity.
Close, Alex; Sapir, Ayelet; Burnett, Katherine; d'Avossa, Giovanni
2014-08-21
What limits the ability to attend several locations simultaneously? There are two possibilities: Either attention cannot be divided without incurring a cost, or spatial memory is limited and observers forget which locations to monitor. We compared motion discrimination when attention was directed to one or multiple locations by briefly presented central cues. The cues were matched for the amount of spatial information they provided. Several random dot kinematograms (RDKs) followed the spatial cues; one of them contained task-relevant, coherent motion. When four RDKs were presented, discrimination accuracy was identical when one and two locations were indicated by equally informative cues. However, when six RDKs were presented, discrimination accuracy was higher following one rather than multiple location cues. We examined whether memory of the cued locations was diminished under these conditions. Recall of the cued locations was tested when participants attended the cued locations and when they did not attend the cued locations. Recall was inaccurate only when the cued locations were attended. Finally, visually marking the cued locations, following one and multiple location cues, equalized discrimination performance, suggesting that participants could attend multiple locations when they did not have to remember which ones to attend. We conclude that endogenously dividing attention between multiple locations is limited by inaccurate recall of the attended locations and that attention poses separate demands on the same central processes used to remember spatial information, even when the locations attended and those held in memory are the same. © 2014 ARVO.
All set! Evidence of simultaneous attentional control settings for multiple target colors.
Irons, Jessica L; Folk, Charles L; Remington, Roger W
2012-06-01
Although models of visual search have often assumed that attention can only be set for a single feature or property at a time, recent studies have suggested that it may be possible to maintain more than one attentional control setting. The aim of the present study was to investigate whether spatial attention could be guided by multiple attentional control settings for color. In a standard spatial cueing task, participants searched for either of two colored targets accompanied by an irrelevantly colored distractor. Across five experiments, results consistently showed that nonpredictive cues matching either target color produced a significant spatial cueing effect, while irrelevantly colored cues did not. This was the case even when the target colors could not be linearly separated from irrelevant cue colors in color space, suggesting that participants were not simply adopting one general color set that included both target colors. The results could not be explained by intertrial priming by previous targets, nor could they be explained by a single inhibitory set for the distractor color. Overall, the results are most consistent with the maintenance of multiple attentional control settings.
PhyloDet: a scalable visualization tool for mapping multiple traits to large evolutionary trees
Lee, Bongshin; Nachmanson, Lev; Robertson, George; Carlson, Jonathan M.; Heckerman, David
2009-01-01
Summary: Evolutionary biologists are often interested in finding correlations among biological traits across a number of species, as such correlations may lead to testable hypotheses about the underlying function. Because some species are more closely related than others, computing and visualizing these correlations must be done in the context of the evolutionary tree that relates species. In this note, we introduce PhyloDet (short for PhyloDetective), an evolutionary tree visualization tool that enables biologists to visualize multiple traits mapped to the tree. Availability: http://research.microsoft.com/cue/phylodet/ Contact: bongshin@microsoft.com. PMID:19633096
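The core idea, rendering trait values alongside the leaves of an evolutionary tree, can be sketched in a few lines; here Biopython's Phylo module and invented traits stand in for PhyloDet's own renderer and data.

```python
from io import StringIO
from Bio import Phylo

# Toy tree plus hypothetical binary trait vectors for each species.
tree = Phylo.read(StringIO("((A,B),(C,D));"), "newick")
traits = {"A": [1, 0], "B": [1, 1], "C": [0, 1], "D": [0, 0]}

# Append each leaf's trait vector to its label, then draw the tree as text.
for leaf in tree.get_terminals():
    leaf.name = f"{leaf.name} {traits[leaf.name]}"
Phylo.draw_ascii(tree)
```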
McMenamin, Brenton W.; Marsolek, Chad J.; Morseth, Brianna K.; Speer, MacKenzie F.; Burton, Philip C.; Burgund, E. Darcy
2016-01-01
Object categorization and exemplar identification place conflicting demands on the visual system, yet humans easily perform these fundamentally contradictory tasks. Previous studies suggest the existence of dissociable visual processing subsystems to accomplish the two abilities – an abstract category (AC) subsystem that operates effectively in the left hemisphere, and a specific exemplar (SE) subsystem that operates effectively in the right hemisphere. This multiple subsystems theory explains a range of visual abilities, but previous studies have not explored what mechanisms exist for coordinating the function of multiple subsystems and/or resolving the conflicts that would arise between them. We collected functional MRI data while participants performed two variants of a cue-probe working memory task that required AC or SE processing. During the maintenance phase of the task, the bilateral intraparietal sulcus (IPS) exhibited hemispheric asymmetries in functional connectivity consistent with exerting proactive control over the two visual subsystems: greater connectivity to the left hemisphere during the AC task, and greater connectivity to the right hemisphere during the SE task. Moreover, probe-evoked activation revealed activity in a broad fronto-parietal network (containing IPS) associated with reactive control when the two visual subsystems were in conflict, and variations in this conflict signal across trials was related to the visual similarity of the cue/probe stimulus pairs. Although many studies have confirmed the existence of multiple visual processing subsystems, this study is the first to identify the mechanisms responsible for coordinating their operations. PMID:26883940
ERIC Educational Resources Information Center
Cubilo, Justin; Winke, Paula
2013-01-01
Researchers debate whether listening tasks should be supported by visuals. Most empirical research in this area has been conducted on the effects of visual support on listening comprehension tasks employing multiple-choice questions. The present study seeks to expand this research by investigating the effects of video listening passages (vs.…
Children's Visual Processing of Egocentric Cues in Action Planning for Reach
ERIC Educational Resources Information Center
Cordova, Alberto; Gabbard, Carl
2011-01-01
In this study the authors examined children's ability to code visual information into an egocentric frame of reference for planning reach movements. Children and adults estimated reach distance via motor imagery in immediate and response-delay conditions. Actual maximum reach was compared to estimates in multiple locations in peripersonal and…
NASA Technical Reports Server (NTRS)
Khan, M. Javed; Rossi, Marcia; Heath, Bruce; Ali, Syed F.; Ward, Marcus
2006-01-01
The effects of out-of-the-window cues on learning a straight-in landing approach and a level 360° turn by novice pilots on a flight simulator have been investigated. The treatments consisted of training with and without visual cues, as well as with different densities of visual cues. The performance of the participants was then evaluated through similar but more challenging tasks. It was observed that the participants in the landing study who trained with visual cues performed more poorly than those who trained without the cues. However, those who trained with a faded-cues sequence performed slightly better than those who trained without visual cues. In the level-turn study, it was observed that those who trained with the visual cues performed better than those who trained without visual cues. The study also showed that participants who trained with a lower density of cues performed better than those who trained with a higher density of visual cues.
Fetsch, Christopher R.
2013-01-01
The richness of perceptual experience, as well as its usefulness for guiding behavior, depends upon the synthesis of information across multiple senses. Recent decades have witnessed a surge in our understanding of how the brain combines sensory signals, or cues. Much of this research has been guided by one of two distinct approaches, one driven primarily by neurophysiological observations, the other guided by principles of mathematical psychology and psychophysics. Conflicting results and interpretations have contributed to a conceptual gap between psychophysical and physiological accounts of cue integration, but recent studies of visual-vestibular cue integration have narrowed this gap considerably. PMID:23686172
[Visual cuing effect for haptic angle judgment].
Era, Ataru; Yokosawa, Kazuhiko
2009-08-01
We investigated whether visual cues are useful for judging haptic angles. Participants explored three-dimensional angles with a virtual haptic feedback device. For visual cues, we used a location cue, which synchronized with haptic exploration, and a space cue, which specified the haptic space. In Experiment 1, angles were judged more accurately with both cues, but were overestimated with a location cue only. In Experiment 2, the visual cues emphasized depth; overestimation with location cues occurred, but space cues had no influence. The results showed that (a) when both cues are presented, haptic angles are judged more accurately; (b) location cues facilitate only motion information, not depth information; and (c) haptic angles are apt to be overestimated when both haptic and visual information are present.
Thompson, Elizabeth; Agada, Peter; Wright, W Geoffrey; Reimann, Hendrik; Jeka, John
2017-10-01
Impaired arm swing is a common motor symptom of Parkinson's disease (PD), and correlates with other gait impairments and increased risk of falls. Studies suggest that arm swing is not merely a passive consequence of trunk rotation during walking, but an active component of gait. Thus, techniques to enhance arm swing may improve gait characteristics. There is currently no portable device to measure arm swing and deliver immediate cues for larger movement. Here we report pilot testing of such a device, ArmSense (patented), using a crossover repeated-measures design. Twelve people with PD walked in a video-recorded gym space at self-selected comfortable and fast speeds. After baseline, cues were given either visually using taped targets on the floor to increase step length or through vibrations at the wrist using ArmSense to increase arm swing amplitude. Uncued walking then followed, to assess retention. Subjects successfully reached cueing targets on >95% of steps. At a comfortable pace, step length increased during both visual cueing and ArmSense cueing. However, we observed increased medial-lateral trunk sway with visual cueing, possibly suggesting decreased gait stability. In contrast, no statistically significant changes in trunk sway were observed with ArmSense cues compared to baseline walking. At a fast pace, changes in gait parameters were less systematic. Even though ArmSense cues only specified changes in arm swing amplitude, we observed changes in multiple gait parameters, reflecting the active role arm swing plays in gait and suggesting a new therapeutic path to improve mobility in people with PD. Copyright © 2017 Elsevier B.V. All rights reserved.
Harrison, Neil R; Woodhouse, Rob
2016-05-01
Previous research has demonstrated that threatening, compared to neutral pictures, can bias attention towards non-emotional auditory targets. Here we investigated which subcomponents of attention contributed to the influence of emotional visual stimuli on auditory spatial attention. Participants indicated the location of an auditory target, after brief (250 ms) presentation of a spatially non-predictive peripheral visual cue. Responses to targets were faster at the location of the preceding visual cue, compared to at the opposite location (cue validity effect). The cue validity effect was larger for targets following pleasant and unpleasant cues compared to neutral cues, for right-sided targets. For unpleasant cues, the crossmodal cue validity effect was driven by delayed attentional disengagement, and for pleasant cues, it was driven by enhanced engagement. We conclude that both pleasant and unpleasant visual cues influence the distribution of attention across modalities and that the associated attentional mechanisms depend on the valence of the visual cue.
Bayindir, Mustafa; Bolger, Fergus; Say, Bilge
2016-07-19
Making decisions using judgements of multiple non-deterministic indicators is an important task, both in everyday and professional life. Learning of such decision making has often been studied as the mapping of stimuli (cues) to an environmental variable (criterion); however, little attention has been paid to the effects of situation-by-person interactions on this learning. Accordingly, we manipulated cue and feedback presentation mode (graphic or numeric) and task difficulty, and measured individual differences in working memory capacity (WMC). We predicted that graphic presentation, fewer cues, and elevated WMC would facilitate learning, and that person and task characteristics would interact such that presentation mode compatible with the decision maker's cognitive capability (enhanced visual or verbal WMC) would assist learning, particularly for more difficult tasks. We found our predicted main effects, but no significant interactions, except that those with greater WMC benefited to a larger extent with graphic than with numeric presentation, regardless of which type of working memory was enhanced or number of cues. Our findings suggest that the conclusions of past research based predominantly on tasks using numeric presentation need to be reevaluated and cast light on how working memory helps us learn multiple cue-criterion relationships, with implications for dual-process theories of cognition.
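As a concrete illustration of the cue-criterion learning task described above, here is a minimal simulation (my own construction, not the authors' materials) in which a delta-rule learner infers cue weights from outcome feedback:

```python
import numpy as np

# On each trial the learner sees several cue values, predicts the criterion,
# and adjusts its cue weights from the feedback. Weights and noise are invented.
rng = np.random.default_rng(0)
true_weights = np.array([0.6, 0.3, 0.1])   # ecological validities of the cues
w = np.zeros(3)                            # learner's current weights
lr = 0.05                                  # learning rate

for trial in range(500):
    cues = rng.normal(size=3)
    criterion = true_weights @ cues + rng.normal(scale=0.1)  # noisy outcome
    prediction = w @ cues
    w += lr * (criterion - prediction) * cues                # delta-rule update

print(np.round(w, 2))  # should approach the true weights
```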
Price, Tom A. R.
2015-01-01
Environments vary stochastically, and animals need to behave in ways that best fit the conditions in which they find themselves. The social environment is particularly variable, and responding appropriately to it can be vital for an animal’s success. However, cues of social environment are not always reliable, and animals may need to balance accuracy against the risk of failing to respond if local conditions or interfering signals prevent them detecting a cue. Recent work has shown that many male Drosophila fruit flies respond to the presence of rival males, and that these responses increase their success in acquiring mates and fathering offspring. In Drosophila melanogaster males detect rivals using auditory, tactile and olfactory cues. However, males fail to respond to rivals if any two of these senses are not functioning: a single cue is not enough to produce a response. Here we examined cue use in the detection of rival males in a distantly related Drosophila species, D. pseudoobscura, where auditory, olfactory, tactile and visual cues were manipulated to assess the importance of each sensory cue singly and in combination. In contrast to D. melanogaster, male D. pseudoobscura require intact olfactory and tactile cues to respond to rivals. Visual cues were not important for detecting rival D. pseudoobscura, while results on auditory cues appeared puzzling. This difference in cue use in two species in the same genus suggests that cue use is evolutionarily labile, and may evolve in response to ecological or life history differences between species. PMID:25849643
Reinforcing Visual Grouping Cues to Communicate Complex Informational Structure.
Bae, Juhee; Watson, Benjamin
2014-12-01
In his book Multimedia Learning [7], Richard Mayer asserts that viewers learn best from imagery that provides them with cues to help them organize new information into the correct knowledge structures. Designers have long been exploiting the Gestalt laws of visual grouping to deliver viewers those cues using visual hierarchy, often communicating structures much more complex than the simple organizations studied in psychological research. Unfortunately, designers are largely practical in their work, and have not paused to build a complex theory of structural communication. If we are to build a tool to help novices create effective and well structured visuals, we need a better understanding of how to create them. Our work takes a first step toward addressing this lack, studying how five of the many grouping cues (proximity, color similarity, common region, connectivity, and alignment) can be effectively combined to communicate structured text and imagery from real world examples. To measure the effectiveness of this structural communication, we applied a digital version of card sorting, a method widely used in anthropology and cognitive science to extract cognitive structures. We then used tree edit distance to measure the difference between perceived and communicated structures. Our most significant findings are: 1) with careful design, complex structure can be communicated clearly; 2) communicating complex structure is best done with multiple reinforcing grouping cues; 3) common region (use of containers such as boxes) is particularly effective at communicating structure; and 4) alignment is a weak structural communicator.
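For readers unfamiliar with the tree-edit-distance measure used here, the sketch below compares a communicated structure against a perceived card-sort structure with the Zhang-Shasha algorithm (via the third-party zss package; the authors' exact implementation is not specified, and the trees are invented).

```python
from zss import Node, simple_distance

# Structure intended by the designer.
communicated = Node("root").addkid(
    Node("group A").addkid(Node("item 1")).addkid(Node("item 2"))
).addkid(Node("group B").addkid(Node("item 3")))

# Structure recovered from a viewer's card sort.
perceived = Node("root").addkid(
    Node("group A").addkid(Node("item 1"))
).addkid(Node("group B").addkid(Node("item 2")).addkid(Node("item 3")))

# Number of node insertions/deletions/relabels needed: lower = closer match.
print(simple_distance(communicated, perceived))
```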
Measuring the effect of multiple eye fixations on memory for visual attributes.
Palmer, J; Ames, C T
1992-09-01
Because of limited peripheral vision, many visual tasks depend on multiple eye fixations. Good performance in such tasks demonstrates that some memory must survive from one fixation to the next. One factor that must influence performance is the degree to which multiple eye fixations interfere with the critical memories. In the present study, the amount of interference was measured by comparing visual discriminations based on multiple fixations to visual discriminations based on a single fixation. The procedure resembled partial report, but used a discrimination measure. In the prototype study, two lines were presented, followed by a single line and a cue. The cue pointed toward one of the positions of the first two lines. Observers were required to judge if the single line in the second display was longer or shorter than the cued line of the first display. These judgments were used to estimate a length threshold. The critical manipulation was to instruct observers either to maintain fixation between the lines of the first display or to fixate each line in sequence. The results showed an advantage for multiple fixations despite the intervening eye movements. In fact, thresholds for the multiple-fixation condition were nearly as good as those in a control condition where the lines were foveally viewed without eye movements. Thus, eye movements had little or no interfering effect in this task. Additional studies generalized the procedure and the stimuli. In conclusion, information about a variety of size and shape attributes was remembered with essentially no interference across eye fixations.
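The threshold estimation mentioned above is standard psychophysics: fit a cumulative Gaussian to the proportion of "longer" responses as a function of the length difference and take the fitted spread as the threshold. A minimal sketch with invented data:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical data: length differences (second line minus cued line)
# and the proportion of trials judged "longer" at each difference.
deltas = np.array([-4.0, -2.0, -1.0, 0.0, 1.0, 2.0, 4.0])
p_longer = np.array([0.05, 0.20, 0.35, 0.50, 0.66, 0.82, 0.96])

def psychometric(x, mu, sigma):
    # Cumulative Gaussian: mu = point of subjective equality, sigma = threshold.
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, deltas, p_longer, p0=(0.0, 1.0))
print(f"bias={mu:.2f}, threshold={sigma:.2f}")
```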
Evidence for a shared representation of sequential cues that engage sign-tracking.
Smedley, Elizabeth B; Smith, Kyle S
2018-06-19
Sign-tracking is a phenomenon whereby cues that predict rewards come to acquire their own motivational value (incentive salience) and attract appetitive behavior. Typically, sign-tracking paradigms have used single auditory, visual, or lever cues presented prior to a reward delivery. Yet, real world examples of events often can be predicted by a sequence of cues. We have shown that animals will sign-track to multiple cues presented in temporal sequence, and with time develop a bias in responding toward a reward distal cue over a reward proximal cue. Further, extinction of responding to the reward proximal cue directly decreases responding to the reward distal cue. One possible explanation of this result is that serial cues become representationally linked with one another. Here we provide further support of this by showing that extinction of responding to a reward distal cue directly reduces responding to a reward proximal cue. We suggest that the incentive salience of one cue can influence the incentive salience of the other cue. Copyright © 2018. Published by Elsevier B.V.
Should visual speech cues (speechreading) be considered when fitting hearing aids?
NASA Astrophysics Data System (ADS)
Grant, Ken
2002-05-01
When talker and listener are face-to-face, visual speech cues become an important part of the communication environment, and yet, these cues are seldom considered when designing hearing aids. Models of auditory-visual speech recognition highlight the importance of complementary versus redundant speech information for predicting auditory-visual recognition performance. Thus, for hearing aids to work optimally when visual speech cues are present, it is important to know whether the cues provided by amplification and the cues provided by speechreading complement each other. In this talk, data will be reviewed that show nonmonotonicity between auditory-alone speech recognition and auditory-visual speech recognition, suggesting that efforts designed solely to improve auditory-alone recognition may not always result in improved auditory-visual recognition. Data will also be presented showing that one of the most important speech cues for enhancing auditory-visual speech recognition performance, voicing, is often the cue that benefits least from amplification.
Depth Perception of Surgeons in Minimally Invasive Surgery.
Bogdanova, Rositsa; Boulanger, Pierre; Zheng, Bin
2016-10-01
Minimally invasive surgery (MIS) poses visual challenges to the surgeons. In MIS, binocular disparity is not freely available for surgeons, who are required to mentally rebuild the 3-dimensional (3D) patient anatomy from a limited number of monoscopic visual cues. The insufficient depth cues from the MIS environment could cause surgeons to misjudge spatial depth, which could lead to performance errors thus jeopardizing patient safety. In this article, we will first discuss the natural human depth perception by exploring the main depth cues available for surgeons in open procedures. Subsequently, we will reveal what depth cues are lost in MIS and how surgeons compensate for the incomplete depth presentation. Next, we will further expand our knowledge by exploring some of the available solutions for improving depth presentation to surgeons. Here we will review the innovative approaches (multiple 2D camera assembly, shadow introduction) and devices (3D monitors, head-mounted devices, and auto-stereoscopic monitors) for 3D image presentation from the past few years. © The Author(s) 2016.
Ross, M; Lanyon, L J; Viswanathan, J; Manoach, D S; Barton, J J S
2011-11-24
Monkey studies report greater activity in the lateral intraparietal area and more efficient saccades when targets coincide with the location of prior reward cues, even when cue location does not indicate which responses will be rewarded. This suggests that reward can modulate spatial attention and visual selection independent of the "action value" of the motor response. Our goal was first to determine whether reward modulated visual selection similarly in humans, and next, to discover whether reward and penalty differed in effect, if cue effects were greater for cognitively demanding antisaccades, and if financial consequences that were contingent on stimulus location had spatially selective effects. We found that motivational cues reduced all latencies, more for reward than penalty. There was an "inhibition-of-return"-like effect at the location of the cue, but unlike the results in monkeys, cue valence did not modify this effect in prosaccades, and the inhibition-of-return effect was slightly increased rather than decreased in antisaccades. When financial consequences were contingent on target location, locations without reward or penalty consequences lost the benefits seen in noncontingent trials, whereas locations with consequences maintained their gains. We conclude that unlike monkeys, humans show reward effects not on visual selection but on the value of actions. The human saccadic system has both the capacity to enhance responses to multiple locations simultaneously, and the flexibility to focus motivational enhancement only on locations with financial consequences. Reward is more effective than penalty, and both interact with the additional attentional demands of the antisaccade task. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
Hierarchical acquisition of visual specificity in spatial contextual cueing.
Lie, Kin-Pou
2015-01-01
Spatial contextual cueing refers to the improvement of visual search performance when invariant associations between target locations and distractor spatial configurations are learned incidentally. Using the instance theory of automatization and the reverse hierarchy theory of visual perceptual learning, this study explores the acquisition of visual specificity in spatial contextual cueing. Two experiments in which detailed visual features were irrelevant for distinguishing between spatial contexts found that spatial contextual cueing was visually generic in difficult trials when the trials were not preceded by easy trials (Experiment 1) but that spatial contextual cueing progressed to visual specificity when difficult trials were preceded by easy trials (Experiment 2). These findings support reverse hierarchy theory, which predicts that even when detailed visual features are irrelevant for distinguishing between spatial contexts, spatial contextual cueing can progress to visual specificity if the stimuli remain constant, the task is difficult, and difficult trials are preceded by easy trials. However, these findings are inconsistent with instance theory, which predicts that when detailed visual features are irrelevant for distinguishing between spatial contexts, spatial contextual cueing will not progress to visual specificity. This study concludes that the acquisition of visual specificity in spatial contextual cueing is more plausibly hierarchical, rather than instance-based.
Robust lane detection and tracking using multiple visual cues under stochastic lane shape conditions
NASA Astrophysics Data System (ADS)
Huang, Zhi; Fan, Baozheng; Song, Xiaolin
2018-03-01
As one of the essential components of environment perception for an intelligent vehicle, lane detection is confronted with challenges including robustness against complicated disturbances and illumination, as well as adaptability to stochastic lane shapes. To overcome these issues, we proposed a robust lane detection method that applies a classification-generation-growth-based (CGG) operator to the detected lines, whereby the linear lane markings are identified by synergizing multiple visual cues with a priori knowledge and spatial-temporal information. According to the quality of the linear lane fit, linear and linear-parabolic models are dynamically switched to describe the actual lane. A Kalman filter with adaptive noise covariance and region-of-interest (ROI) tracking are applied to improve robustness and efficiency. Experiments were conducted with images covering various challenging scenarios, and the results demonstrate the effectiveness of the presented method under complicated disturbances, illumination, and stochastic lane shapes.
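To make the tracking step concrete, here is a minimal Kalman predict/update cycle over fitted lane parameters. This is a simplified sketch: the two-parameter state and the fixed noise covariances Q and R are assumptions, whereas the paper adapts the noise covariance online.

```python
import numpy as np

x = np.zeros(2)        # state estimate: assumed [slope, intercept] of the lane
P = np.eye(2)          # state covariance
F = np.eye(2)          # lane parameters assumed roughly constant frame-to-frame
H = np.eye(2)          # the fitted lane parameters are measured directly
Q = 0.01 * np.eye(2)   # process noise (fixed here; adaptive in the paper)
R = 0.10 * np.eye(2)   # measurement noise (fixed here; adaptive in the paper)

def kalman_step(z):
    """One predict/update cycle given a fitted lane measurement z."""
    global x, P
    x, P = F @ x, F @ P @ F.T + Q                   # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    x = x + K @ (z - H @ x)                         # update with the innovation
    P = (np.eye(2) - K @ H) @ P

for z in [np.array([0.51, 120.0]), np.array([0.49, 118.0])]:  # invented fits
    kalman_step(z)
print(x)
```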
Speech identification in noise: Contribution of temporal, spectral, and visual speech cues.
Kim, Jeesun; Davis, Chris; Groot, Christopher
2009-12-01
This study investigated the degree to which two types of reduced auditory signals (cochlear implant simulations) and visual speech cues combined for speech identification. The auditory speech stimuli were filtered to have only amplitude envelope cues or both amplitude envelope and spectral cues and were presented with/without visual speech. In Experiment 1, IEEE sentences were presented in quiet and noise. For in-quiet presentation, speech identification was enhanced by the addition of both spectral and visual speech cues. Due to a ceiling effect, the degree to which these effects combined could not be determined. In noise, these facilitation effects were more marked and were additive. Experiment 2 examined consonant and vowel identification in the context of CVC or VCV syllables presented in noise. For consonants, both spectral and visual speech cues facilitated identification and these effects were additive. For vowels, the effect of combined cues was underadditive, with the effect of spectral cues reduced when presented with visual speech cues. Analysis indicated that without visual speech, spectral cues facilitated the transmission of place information and vowel height, whereas with visual speech, they facilitated lip rounding, with little impact on the transmission of place information.
Tiger salamanders' (Ambystoma tigrinum) response learning and usage of visual cues.
Kundey, Shannon M A; Millar, Roberto; McPherson, Justin; Gonzalez, Maya; Fitz, Aleyna; Allen, Chadbourne
2016-05-01
We explored tiger salamanders' (Ambystoma tigrinum) learning to execute a response within a maze as proximal visual cue conditions varied. In Experiment 1, salamanders learned to turn consistently in a T-maze for reinforcement before the maze was rotated. All learned the initial task and executed the trained turn during test, suggesting that they learned to demonstrate the reinforced response during training and continued to perform it during test. In a second experiment utilizing a similar procedure, two visual cues were placed consistently at the maze junction. Salamanders were reinforced for turning towards one cue. Cue placement was reversed during test. All learned the initial task, but executed the trained turn rather than turning towards the visual cue during test, evidencing response learning. In Experiment 3, we investigated whether a compound visual cue could control salamanders' behaviour when it was the only cue predictive of reinforcement in a cross-maze by varying start position and cue placement. All learned to turn in the direction indicated by the compound visual cue, indicating that visual cues can come to control their behaviour. Following training, testing revealed that salamanders attended to stimuli foreground over background features. Overall, these results suggest that salamanders learn to execute responses over learning to use visual cues but can use visual cues if required. Our success with this paradigm offers the potential in future studies to explore salamanders' cognition further, as well as to shed light on how features of the tiger salamanders' life history (e.g. hibernation and metamorphosis) impact cognition.
Smets, Karolien; Moors, Pieter; Reynvoet, Bert
2016-01-01
Performance in a non-symbolic comparison task, in which participants are asked to indicate the larger numerosity of two dot arrays, is assumed to be supported by the Approximate Number System (ANS). This system allows participants to judge numerosity independently from other visual cues. Supporting this idea, previous studies indicated that numerosity can be processed when visual cues are controlled for. Consequently, distinct types of visual cue control are assumed to be interchangeable. However, a previous study showed that the type of visual cue control affected performance using a simultaneous presentation of the stimuli in numerosity comparison. In the current study, we explored whether the influence of the type of visual cue control on performance disappeared when sequentially presenting each stimulus in numerosity comparison. While the influence of the applied type of visual cue control was significantly more evident in the simultaneous condition, sequentially presenting the stimuli did not completely exclude the influence of distinct types of visual cue control. Altogether, these results indicate that the implicit assumption that it is possible to compare performances across studies with a differential visual cue control is unwarranted and that the influence of the type of visual cue control partly depends on the presentation format of the stimuli. PMID:26869967
Fusion of multichannel local and global structural cues for photo aesthetics evaluation.
Luming Zhang; Yue Gao; Zimmermann, Roger; Qi Tian; Xuelong Li
2014-03-01
Photo aesthetic quality evaluation is a fundamental yet underaddressed task in the computer vision and image processing fields. Conventional approaches are frustrated by the following two drawbacks. First, both the local and global spatial arrangements of image regions play an important role in photo aesthetics. However, existing rules, e.g., visual balance, heuristically define which spatial distribution among the salient regions of a photo is aesthetically pleasing. Second, it is difficult to adjust visual cues from multiple channels automatically in photo aesthetics assessment. To solve these problems, we propose a new photo aesthetics evaluation framework, focusing on learning image descriptors that characterize local and global structural aesthetics from multiple visual channels. In particular, to describe the spatial structure of local image regions, we construct graphlets (small-sized connected graphs) by connecting spatially adjacent atomic regions. Since spatially adjacent graphlets distribute closely in their feature space, we project them onto a manifold and subsequently propose an embedding algorithm. The embedding algorithm encodes the photo's global spatial layout into the graphlets. Simultaneously, the importance of graphlets from multiple visual channels is dynamically adjusted. Finally, these post-embedding graphlets are integrated for photo aesthetics evaluation using a probabilistic model. Experimental results show that: 1) the visualized graphlets explicitly capture the aesthetically arranged atomic regions; 2) the proposed approach generalizes and improves four prominent aesthetic rules; and 3) our approach significantly outperforms state-of-the-art algorithms in photo aesthetics prediction.
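The graphlet construction itself is straightforward to sketch: enumerate small connected subgraphs of the region adjacency graph. The version below uses networkx and a toy adjacency; the paper's atomic-region segmentation, feature extraction, and embedding are assumed to happen elsewhere.

```python
import itertools
import networkx as nx

def graphlets(adjacency: nx.Graph, size: int = 3):
    """Yield every connected subgraph of the given size (a graphlet)."""
    for nodes in itertools.combinations(adjacency.nodes, size):
        sub = adjacency.subgraph(nodes)
        if nx.is_connected(sub):
            yield sub

# Toy region adjacency graph: nodes are atomic regions, edges mark adjacency.
rag = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)])
print(sum(1 for _ in graphlets(rag, size=3)))  # number of 3-node graphlets
```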
Temporal dynamics of encoding, storage and reallocation of visual working memory
Bays, Paul M; Gorgoraptis, Nikos; Wee, Natalie; Marshall, Louise; Husain, Masud
2012-01-01
The process of encoding a visual scene into working memory has previously been studied using binary measures of recall. Here we examine the temporal evolution of memory resolution, based on observers’ ability to reproduce the orientations of objects presented in brief, masked displays. Recall precision was accurately described by the interaction of two independent constraints: an encoding limit that determines the maximum rate at which information can be transferred into memory, and a separate storage limit that determines the maximum fidelity with which information can be maintained. Recall variability decreased incrementally with time, consistent with a parallel encoding process in which visual information from multiple objects accumulates simultaneously in working memory. No evidence was observed for a limit on the number of items stored. Cueing one display item with a brief flash led to rapid development of a recall advantage for that item. This advantage was short-lived if the cue was simply a salient visual event, but was maintained if it indicated an object of particular relevance to the task. These cueing effects were observed even for items that had already been encoded into memory, indicating that limited memory resources can be rapidly reallocated to prioritize salient or goal-relevant information. PMID:21911739
Visual Cues, Verbal Cues and Child Development
ERIC Educational Resources Information Center
Valentini, Nadia
2004-01-01
In this article, the author discusses two strategies--visual cues (modeling) and verbal cues (short, accurate phrases) which are related to teaching motor skills in maximizing learning in physical education classes. Both visual and verbal cues are strong influences in facilitating and promoting day-to-day learning. Both strategies reinforce…
Indovina, Iole; Maffei, Vincenzo; Pauwels, Karl; Macaluso, Emiliano; Orban, Guy A; Lacquaniti, Francesco
2013-05-01
Multiple visual signals are relevant to perception of heading direction. While the role of optic flow and depth cues has been studied extensively, little is known about the visual effects of gravity on heading perception. We used fMRI to investigate the contribution of gravity-related visual cues on the processing of vertical versus horizontal apparent self-motion. Participants experienced virtual roller-coaster rides in different scenarios, at constant speed or 1g-acceleration/deceleration. Imaging results showed that vertical self-motion coherent with gravity engaged the posterior insula and other brain regions that have been previously associated with vertical object motion under gravity. This selective pattern of activation was also found in a second experiment that included rectilinear motion in tunnels, whose direction was cued by the preceding open-air curves only. We argue that the posterior insula might perform high-order computations on visual motion patterns, combining different sensory cues and prior information about the effects of gravity. Medial-temporal regions including para-hippocampus and hippocampus were more activated by horizontal motion, preferably at constant speed, consistent with a role in inertial navigation. Overall, the results suggest partially distinct neural representations of the cardinal axes of self-motion (horizontal and vertical). Copyright © 2013 Elsevier Inc. All rights reserved.
Stuart, Samuel; Lord, Sue; Galna, Brook; Rochester, Lynn
2018-04-01
Gait impairment is a core feature of Parkinson's disease (PD) with implications for falls risk. Visual cues improve gait in PD, but the underlying mechanisms are unclear. Evidence suggests that attention and vision play an important role; however, the relative contribution from each is unclear. Measurement of visual exploration (specifically saccade frequency) during gait allows for real-time measurement of attention and vision. Understanding how visual cues influence visual exploration may allow inferences of the underlying mechanisms to response which could help to develop effective therapeutics. This study aimed to examine saccade frequency during gait in response to a visual cue in PD and older adults and investigate the roles of attention and vision in visual cue response in PD. A mobile eye-tracker measured saccade frequency during gait in 55 people with PD and 32 age-matched controls. Participants walked in a straight line with and without a visual cue (50 cm transverse lines) presented under single task and dual-task (concurrent digit span recall). Saccade frequency was reduced when walking in PD compared to controls; however, visual cues ameliorated saccadic deficit. Visual cues significantly increased saccade frequency in both PD and controls under both single task and dual-task. Attention rather than visual function was central to saccade frequency and gait response to visual cues in PD. In conclusion, this study highlights the impact of visual cues on visual exploration when walking and the important role of attention in PD. Understanding these complex features will help inform intervention development. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Circadian timed episodic-like memory - a bee knows what to do when, and also where.
Pahl, Mario; Zhu, Hong; Pix, Waltraud; Tautz, Juergen; Zhang, Shaowu
2007-10-01
This study investigates how the colour, shape and location of patterns could be memorized within a time frame. Bees were trained to visit two Y-mazes, one of which presented yellow vertical (rewarded) versus horizontal (non-rewarded) gratings at one site in the morning, while another presented blue horizontal (rewarded) versus vertical (non-rewarded) gratings at another site in the afternoon. The bees could perform well in the learning tests and various transfer tests, in which (i) all contextual cues from the learning test were present; (ii) the colour cues of the visual patterns were removed, but the location cue, the orientation of the visual patterns and the temporal cue still existed; (iii) the location cue was removed, but other contextual cues, i.e. the colour and orientation of the visual patterns and the temporal cue still existed; (iv) the location cue and the orientation cue of the visual patterns were removed, but the colour cue and temporal cue still existed; (v) the location cue, and the colour cue of the visual patterns were removed, but the orientation cue and the temporal cue still existed. The results reveal that the honeybee can recall the memory of the correct visual patterns by using spatial and/or temporal information. The relative importance of different contextual cues is compared and discussed. The bees' ability to integrate elements of circadian time, place and visual stimuli is akin to episodic-like memory; we have therefore named this kind of memory circadian timed episodic-like memory.
Improving visual spatial working memory in younger and older adults: effects of cross-modal cues.
Curtis, Ashley F; Turner, Gary R; Park, Norman W; Murtha, Susan J E
2017-11-06
Spatially informative auditory and vibrotactile (cross-modal) cues can facilitate attention but little is known about how similar cues influence visual spatial working memory (WM) across the adult lifespan. We investigated the effects of cues (spatially informative or alerting pre-cues vs. no cues), cue modality (auditory vs. vibrotactile vs. visual), memory array size (four vs. six items), and maintenance delay (900 vs. 1800 ms) on visual spatial location WM recognition accuracy in younger adults (YA) and older adults (OA). We observed a significant interaction between spatially informative pre-cue type, array size, and delay. OA and YA benefitted equally from spatially informative pre-cues, suggesting that attentional orienting prior to WM encoding, regardless of cue modality, is preserved with age. Contrary to predictions, alerting pre-cues generally impaired performance in both age groups, suggesting that maintaining a vigilant state of arousal by facilitating the alerting attention system does not help visual spatial location WM.
Odour discrimination and identification are improved in early blindness.
Cuevas, Isabel; Plaza, Paula; Rombaux, Philippe; De Volder, Anne G; Renier, Laurent
2009-12-01
Previous studies showed that early blind humans develop superior abilities in the use of their remaining senses, hypothetically due to a functional reorganization of the deprived visual brain areas. While auditory and tactile functions have been investigated for long, little is known about the effects of early visual deprivation on olfactory processing. However, blind humans make an extensive use of olfactory information in their daily life. Here we investigated olfactory discrimination and identification abilities in early blind subjects and age-matched sighted controls. Three levels of cuing were used in the identification task, i.e., free-identification (no cue), categorization (semantic cues) and multiple choice (semantic and phonological cues). Early blind subjects significantly outperformed the controls in odour discrimination, free-identification and categorization. In addition, the larger group difference was observed in the free-identification as compared to the categorization and the multiple choice conditions. This indicated that a better access to the semantic information from odour perception accounted for part of the improved olfactory performances in odour identification in the blind. We concluded that early blind subjects have both improved perceptual abilities and a better access to the information stored in semantic memory than sighted subjects.
How do visual and postural cues combine for self-tilt perception during slow pitch rotations?
Scotto Di Cesare, C; Buloup, F; Mestre, D R; Bringoux, L
2014-11-01
Self-orientation perception relies on the integration of multiple sensory inputs which convey spatially-related visual and postural cues. In the present study, an experimental set-up was used to tilt the body and/or the visual scene to investigate how these postural and visual cues are integrated for self-tilt perception (the subjective sensation of being tilted). Participants were required to repeatedly rate a confidence level for self-tilt perception during slow (0.05°·s⁻¹) body and/or visual scene pitch tilts up to 19° relative to vertical. Concurrently, subjects also had to perform arm reaching movements toward a body-fixed target at certain specific angles of tilt. While performance of a concurrent motor task did not influence the main perceptual task, self-tilt detection did vary according to the visuo-postural stimuli. Slow forward or backward tilts of the visual scene alone did not induce a marked sensation of self-tilt contrary to actual body tilt. However, combined body and visual scene tilt influenced self-tilt perception more strongly, although this effect was dependent on the direction of visual scene tilt: only a forward visual scene tilt combined with a forward body tilt facilitated self-tilt detection. In such a case, visual scene tilt did not seem to induce vection but rather may have produced a deviation of the perceived orientation of the longitudinal body axis in the forward direction, which may have lowered the self-tilt detection threshold during actual forward body tilt. Copyright © 2014 Elsevier B.V. All rights reserved.
Sound arithmetic: auditory cues in the rehabilitation of impaired fact retrieval.
Domahs, Frank; Zamarian, Laura; Delazer, Margarete
2008-04-01
The present single case study describes the rehabilitation of an acquired impairment of multiplication fact retrieval. In addition to a conventional drill approach, one set of problems was preceded by auditory cues while the other was not. After extensive repetition, non-specific improvements could be observed for all trained problems (e.g., 3 * 7) as well as for their non-trained complementary problems (e.g., 7 * 3). Beyond this general improvement, specific therapy effects were found for problems trained with auditory cues. These specific effects were attributed to an involvement of implicit memory systems and/or attentional processes during training. Thus, the present results demonstrate that cues in the training of arithmetic facts do not have to be visual to be effective.
Visual cue-specific craving is diminished in stressed smokers.
Cochran, Justinn R; Consedine, Nathan S; Lee, John M J; Pandit, Chinmay; Sollers, John J; Kydd, Robert R
2017-09-01
Craving among smokers is increased by stress and exposure to smoking-related visual cues. However, few experimental studies have tested both elicitors concurrently and considered how exposures may interact to influence craving. The current study examined craving in response to stress and visual cue exposure, separately and in succession, in order to better understand the relationship between craving elicitation and the elicitor. Thirty-nine smokers (21 males) who had abstained from smoking for 30 minutes were randomized to complete a stress task and a visual cue task in counterbalanced orders (creating the experimental groups); for the cue task, counterbalanced blocks of neutral, motivational control, and smoking images were presented. Self-reported craving was assessed after each block of visual stimuli and stress task, and after a recovery period following each task. As expected, the stress and smoking images generated greater craving than neutral or motivational control images (p < .001). Interactions indicated that craving in those who completed the stress task first differed from those who completed the visual cues task first (p < .05), such that stress task craving was greater than all image type craving (all p's < .05) only if the visual cue task was completed first. Conversely, craving was stable across image types when the stress task was completed first. Findings indicate that when smokers are stressed, visual cues have little additive effect on craving, and different types of visual cues elicit comparable craving. These findings may imply that once stressed, smokers will crave cigarettes comparably regardless of whether they are exposed to smoking image cues.
Souza, Alessandra S; Rerko, Laura; Oberauer, Klaus
2016-06-01
Visual working memory (VWM) has a limited capacity. This limitation can be mitigated by the use of focused attention: if attention is drawn to the relevant working memory content before test, performance improves (the so-called retro-cue benefit). This study tests 2 explanations of the retro-cue benefit: (a) Focused attention protects memory representations from interference by visual input at test, and (b) focusing attention enhances retrieval. Across 6 experiments using color recognition and color reproduction tasks, we varied the amount of color interference at test, and the delay between a retrieval cue (i.e., the retro-cue) and the memory test. Retro-cue benefits were larger when the memory test introduced interfering visual stimuli, showing that the retro-cue effect is in part because of protection from visual interference. However, when visual interference was held constant, retro-cue benefits were still obtained whenever the retro-cue enabled retrieval of an object from VWM but delayed response selection. Our results show that accessible information in VWM might be lost in the processes of testing memory because of visual interference and incomplete retrieval. This is not an inevitable state of affairs, though: Focused attention can be used to get the most out of VWM. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Jenkin, Michael R; Dyde, Richard T; Jenkin, Heather L; Zacher, James E; Harris, Laurence R
2011-01-01
The perceived direction of up depends on both gravity and visual cues to orientation. Static visual cues to orientation have been shown to be less effective in influencing the perception of upright (PU) under microgravity conditions than they are on earth (Dyde et al., 2009). Here we introduce dynamic orientation cues into the visual background to ascertain whether they might increase the effectiveness of visual cues in defining the PU under different gravity conditions. Brief periods of microgravity and hypergravity were created using parabolic flight. Observers viewed a polarized, natural scene presented at various orientations on a laptop viewed through a hood which occluded all other visual cues. The visual background was either an animated video clip in which actors moved along the visual ground plane or an individual static frame taken from the same clip. We measured the perceptual upright using the oriented character recognition test (OCHART). Dynamic visual cues significantly enhance the effectiveness of vision in determining the perceptual upright under normal gravity conditions. Strong trends were found for dynamic visual cues to produce an increase in the visual effect under both microgravity and hypergravity conditions.
The effect of contextual sound cues on visual fidelity perception.
Rojas, David; Cowan, Brent; Kapralos, Bill; Collins, Karen; Dubrowski, Adam
2014-01-01
Previous work has shown that sound can affect the perception of visual fidelity. Here we build upon this previous work by examining the effect of contextual sound cues (i.e., sounds that are related to the visuals) on visual fidelity perception. Results suggest that contextual sound cues do influence visual fidelity perception and, more specifically, our perception of visual fidelity increases with contextual sound cues. These results have implications for designers of multimodal virtual worlds and serious games that, with the appropriate use of contextual sounds, can reduce visual rendering requirements without a corresponding decrease in the perception of visual fidelity.
Smith, Nicholas A.; Folland, Nicholas A.; Martinez, Diana M.; Trainor, Laurel J.
2017-01-01
Infants learn to use auditory and visual information to organize the sensory world into identifiable objects with particular locations. Here we use a behavioural method to examine infants' use of harmonicity cues to auditory object perception in a multisensory context. Sounds emitted by different objects sum in the air and the auditory system must figure out which parts of the complex waveform belong to different sources (auditory objects). One important cue to this source separation is that complex tones with pitch typically contain a fundamental frequency and harmonics at integer multiples of the fundamental. Consequently, adults hear a mistuned harmonic in a complex sound as a distinct auditory object (Alain et al., 2003). Previous work by our group demonstrated that 4-month-old infants are also sensitive to this cue. They behaviourally discriminate a complex tone with a mistuned harmonic from the same complex with in-tune harmonics, and show an object-related event-related potential (ERP) electrophysiological (EEG) response to the stimulus with mistuned harmonics. In the present study we use an audiovisual procedure to investigate whether infants perceive a complex tone with an 8% mistuned harmonic as emanating from two objects, rather than merely detecting the mistuned cue. We paired in-tune and mistuned complex tones with visual displays that contained either one or two bouncing balls. Four-month-old infants showed surprise at the incongruous pairings, looking longer at the display of two balls when paired with the in-tune complex and at the display of one ball when paired with the mistuned harmonic complex. We conclude that infants use harmonicity as a cue for source separation when integrating auditory and visual information in object perception. PMID:28346869
Directed Forgetting and Directed Remembering in Visual Working Memory
Williams, Melonie; Woodman, Geoffrey F.
2013-01-01
A defining characteristic of visual working memory is its limited capacity. This means that it is crucial to maintain only the most relevant information in visual working memory. However, empirical research is mixed as to whether it is possible to selectively maintain a subset of the information previously encoded into visual working memory. Here we examined the ability of subjects to use cues to either forget or remember a subset of the information already stored in visual working memory. In Experiment 1, participants were cued to either forget or remember one of two groups of colored squares during a change-detection task. We found that both types of cues aided performance in the visual working memory task, but that observers benefited more from a cue to remember than a cue to forget a subset of the objects. In Experiment 2, we show that the previous findings, which indicated that directed-forgetting cues are ineffective, were likely due to the presence of invalid cues that appear to cause observers to disregard such cues as unreliable. In Experiment 3, we recorded event-related potentials (ERPs) and show that an electrophysiological index of focused maintenance is elicited by cues that indicate which subset of information in visual working memory needs to be remembered, ruling out alternative explanations of the behavioral effects of retention-interval cues. The present findings demonstrate that observers can focus maintenance mechanisms on specific objects in visual working memory based on cues indicating future task relevance. PMID:22409182
Bock, Otmar; Bury, Nils
2018-03-01
Our perception of the vertical corresponds to the weighted sum of gravicentric, egocentric, and visual cues. Here we evaluate the interplay of those cues not for the perceived but rather for the motor vertical. Participants were asked to flip an omnidirectional switch down while their egocentric vertical was dissociated from their visual-gravicentric vertical. Responses were directed mid-between the two verticals; specifically, the data suggest that the relative weight of congruent visual-gravicentric cues averages 0.62, and correspondingly, the relative weight of egocentric cues averages 0.38. We conclude that the interplay of visual-gravicentric cues with egocentric cues is similar for the motor and for the perceived vertical. Unexpectedly, we observed a consistent dependence of the motor vertical on hand position, possibly mediated by hand orientation or by spatial selective attention.
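A minimal sketch of the cue-weighting arithmetic this abstract reports: the 0.62/0.38 weights come from the abstract itself, while the example tilt angles below are our invention.

    def motor_vertical(visual_gravicentric_deg, egocentric_deg,
                       w_visual=0.62, w_egocentric=0.38):
        # Weighted sum of the two verticals; the weights are the averages
        # reported in the abstract, the angles passed in are hypothetical.
        return w_visual * visual_gravicentric_deg + w_egocentric * egocentric_deg

    # Body (egocentric vertical) tilted 30 degrees while visual-gravicentric
    # cues stay at 0 degrees: the response lands mid-between the two
    # verticals, nearer the more heavily weighted visual-gravicentric cue.
    print(motor_vertical(0.0, 30.0))   # -> 11.4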
Decentralized Multisensory Information Integration in Neural Systems.
Zhang, Wen-Hao; Chen, Aihua; Rasch, Malte J; Wu, Si
2016-01-13
How multiple sensory cues are integrated in neural circuitry remains a challenge. The common hypothesis is that information integration might be accomplished in a dedicated multisensory integration area receiving feedforward inputs from the modalities. However, recent experimental evidence suggests that it is not a single multisensory brain area, but rather many multisensory brain areas that are simultaneously involved in the integration of information. Why many mutually connected areas should be needed for information integration is puzzling. Here, we investigated theoretically how information integration could be achieved in a distributed fashion within a network of interconnected multisensory areas. Using biologically realistic neural network models, we developed a decentralized information integration system that comprises multiple interconnected integration areas. Studying an example of combining visual and vestibular cues to infer heading direction, we show that such a decentralized system is in good agreement with anatomical evidence and experimental observations. In particular, we show that this decentralized system can integrate information optimally. The decentralized system predicts that optimally integrated information should emerge locally from the dynamics of the communication between brain areas and sheds new light on the interpretation of the connectivity between multisensory brain areas. To extract information reliably from ambiguous environments, the brain integrates multiple sensory cues, which provide different aspects of information about the same entity of interest. Here, we propose a decentralized architecture for multisensory integration. In such a system, no processor is in the center of the network topology and information integration is achieved in a distributed manner through reciprocally connected local processors. Through studying the inference of heading direction with visual and vestibular cues, we show that the decentralized system can integrate information optimally, with the reciprocal connections between processors determining the extent of cue integration. Our model reproduces known multisensory integration behaviors observed in experiments and sheds new light on our understanding of how information is integrated in the brain. Copyright © 2016 Zhang et al.
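The optimality claim can be made concrete with the standard precision-weighted combination rule that such a network is shown to approximate. A sketch of that benchmark computation (not the authors' network model), with invented reliabilities:

    import numpy as np

    def optimal_heading(estimates_deg, sigmas_deg):
        # Precision-weighted (statistically optimal) cue combination:
        # each cue is weighted by its inverse variance.
        precision = 1.0 / np.square(sigmas_deg)
        weights = precision / precision.sum()
        return float(weights @ estimates_deg), weights

    # Visual and vestibular heading estimates; the visual cue is more reliable.
    heading, w = optimal_heading(np.array([10.0, 20.0]), np.array([2.0, 4.0]))
    print(heading, w)   # -> 12.0, weights [0.8 0.2]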
Paladini, Rebecca E.; Diana, Lorenzo; Zito, Giuseppe A.; Nyffeler, Thomas; Wyss, Patric; Mosimann, Urs P.; Müri, René M.; Nef, Tobias
2018-01-01
Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention, with a facilitation observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and the right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained no auditory cues (i.e., unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants’ accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an additional incongruent and a spatially non-informative auditory cue resulted in lateral asymmetries, with search times increased for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition. However, participants’ performance in the congruent condition was modulated by their tone localisation accuracy. The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues, and occur under specific attentional conditions, i.e., when visual attention has to be reoriented towards the left hemifield. PMID:29293637
How visual cues for when to listen aid selective auditory attention.
Varghese, Lenny A; Ozmeral, Erol J; Best, Virginia; Shinn-Cunningham, Barbara G
2012-06-01
Visual cues are known to aid auditory processing when they provide direct information about signal content, as in lip reading. However, some studies hint that visual cues also aid auditory perception by guiding attention to the target in a mixture of similar sounds. The current study directly tests this idea for complex, nonspeech auditory signals, using a visual cue providing only timing information about the target. Listeners were asked to identify a target zebra finch bird song played at a random time within a longer, competing masker. Two different maskers were used: noise and a chorus of competing bird songs. On half of all trials, a visual cue indicated the timing of the target within the masker. For the noise masker, the visual cue did not affect performance when target and masker were from the same location, but improved performance when target and masker were in different locations. In contrast, for the chorus masker, visual cues improved performance only when target and masker were perceived as coming from the same direction. These results suggest that simple visual cues for when to listen improve target identification by enhancing sounds near the threshold of audibility when the target is energetically masked and by enhancing segregation when it is difficult to direct selective attention to the target. Visual cues help little when target and masker already differ in attributes that enable listeners to engage selective auditory attention effectively, including differences in spectrotemporal structure and in perceived location.
Temporal and peripheral extraction of contextual cues from scenes during visual search.
Koehler, Kathryn; Eckstein, Miguel P
2017-02-01
Scene context is known to facilitate object recognition and guide visual search, but little work has focused on isolating image-based cues and evaluating their contributions to eye movement guidance and search performance. Here, we explore three types of contextual cues (a co-occurring object, the configuration of other objects, and the superordinate category of background elements) and assess their joint contributions to search performance in the framework of cue-combination and the temporal unfolding of their extraction. We also assess whether observers' ability to extract each contextual cue in the visual periphery is a bottleneck that determines the utilization and contribution of each cue to search guidance and decision accuracy. We find that during the first four fixations of a visual search task observers first utilize the configuration of objects for coarse eye movement guidance and later use co-occurring object information for finer guidance. In the absence of contextual cues, observers were suboptimally biased to report the target object as being absent. The presence of the co-occurring object was the only contextual cue that had a significant effect in reducing decision bias. The early influence of object-based cues on eye movements is corroborated by a clear demonstration of observers' ability to extract object cues up to 16° into the visual periphery. The joint contributions of the cues to decision search accuracy approximates that expected from the combination of statistically independent cues and optimal cue combination. Finally, the lack of utilization and contribution of the background-based contextual cue to search guidance cannot be explained by the availability of the contextual cue in the visual periphery; instead it is related to background cues providing the least inherent information about the precise location of the target in the scene.
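The independent-cues benchmark mentioned at the end has a standard signal-detection form: for statistically independent cues, the optimally combined sensitivity is the quadrature sum of the single-cue d′ values. A sketch of that formalization with invented d′ values:

    import math

    def combined_dprime(dprimes):
        # Optimal combination of statistically independent cues in signal
        # detection theory: single-cue d' values add in quadrature.
        return math.sqrt(sum(d * d for d in dprimes))

    # Hypothetical single-cue sensitivities: co-occurring object, object
    # configuration, background category.
    print(combined_dprime([1.2, 0.9, 0.4]))   # ~1.55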
Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina
2016-02-01
Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.
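To make the sound-level cue concrete: under a free-field (inverse-square) assumption, received level falls by about 6 dB per doubling of source distance, so a level change maps directly onto a relative-distance judgment. A sketch under that assumption (reverberation and the other cues reviewed here are ignored):

    def distance_ratio_from_level(level_drop_db):
        # Free-field inverse-square law: a drop of ~6 dB in received level
        # corresponds to roughly a doubling of source distance.
        return 10.0 ** (level_drop_db / 20.0)

    print(distance_ratio_from_level(6.0))    # ~2.0: twice as far away
    print(distance_ratio_from_level(12.0))   # ~4.0: four times as far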
Hong, Seoyeon; Tandoc, Edson; Kim, Eunjin Anna; Kim, Bokyung; Wise, Kevin
2012-07-01
The purpose of this study was to examine the effects of social cues in self-presentations and the congruence of other-generated comments with the self-presentation on people's evaluations of a profile owner. A 2 (level of social cues: high vs. low) × 2 (congruent vs. incongruent) × 2 (order) × 2 (multiple messages) mixed-subject experiment was conducted with 104 college students. The results showed that a profile owner was perceived as less socially attractive when other-generated comments were incongruent with the profile owner's self-presentation. No matter how extravagantly people package their self-presentations, these efforts are unlikely to succeed without validation from others. Interestingly, an interaction effect between congruence and the level of social cues suggested that perceived popularity was low in the incongruent condition regardless of the level of social cues. Theoretical and practical implications are also discussed.
NASA Technical Reports Server (NTRS)
Parris, B. L.; Cook, A. M.
1978-01-01
Data are presented that show the effects of visual and motion cueing on pilot performance during takeoffs with engine failures. Four groups of USAF pilots flew a simulated KC-135 using four different cueing systems. The most basic of these systems was of the instrument-only type. Visual scene simulation and/or motion simulation was added to produce the other systems. Learning curves, mean performance, and subjective data are examined. The results show that the addition of visual cueing results in significant improvement in pilot performance, and the combined use of visual and motion cueing results in far better performance still.
Manual control of yaw motion with combined visual and vestibular cues
NASA Technical Reports Server (NTRS)
Zacharias, G. L.; Young, L. R.
1977-01-01
Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation was modelled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A correction to the frequency responses is provided by a separate measurement of manual control performance in an analogous visual pursuit nulling task. The resulting dual-input describing function for motion perception dependence on combined cue presentation supports the complementary model, in which vestibular cues dominate sensation at frequencies above 0.05 Hz. The describing function model is extended by the proposal of a non-linear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues.
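One way to read the parallel-channel model is as a complementary filter: the visual channel is low-pass, the vestibular channel high-pass, and the two transfer functions sum to one so that vestibular cues dominate above the ~0.05 Hz crossover. A discrete-time sketch of that reading (the first-order realization and the 100 Hz update rate are our assumptions, not the paper's):

    import math

    F_CROSSOVER_HZ = 0.05      # crossover frequency from the abstract
    DT = 0.01                  # assumed 100 Hz update rate
    ALPHA = 1.0 / (1.0 + 2.0 * math.pi * F_CROSSOVER_HZ * DT)

    def fuse(visual_vel, vestibular_vel):
        # Low-pass the visual cue, high-pass the vestibular cue, and sum.
        # With a shared ALPHA the two first-order filters are exactly
        # complementary: their transfer functions sum to one.
        lp, hp, prev = 0.0, 0.0, 0.0
        for v, g in zip(visual_vel, vestibular_vel):
            lp = ALPHA * lp + (1.0 - ALPHA) * v     # visual, low-pass
            hp = ALPHA * (hp + g - prev)            # vestibular, high-pass
            prev = g
            yield lp + hp

    # A constant 5 deg/s rotation signalled identically by both cues passes
    # through undistorted; conflicting cues split by frequency band.
    print(list(fuse([5.0] * 5, [5.0] * 5))[-1])   # -> 5.0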
Multiple Concurrent Visual-Motor Mappings: Implications for Models of Adaptation
NASA Technical Reports Server (NTRS)
Cunningham, H. A.; Welch, Robert B.
1994-01-01
Previous research on adaptation to visual-motor rearrangement suggests that the central nervous system accurately represents only 1 visual-motor mapping at a time. This idea was examined in 3 experiments where subjects tracked a moving target under repeated alternations between 2 initially interfering mappings (the 'normal' mapping characteristic of computer input devices and a 108° rotation of the normal mapping). Alternation between the 2 mappings led to significant reduction in error under the rotated mapping and significant reduction in the adaptation aftereffect ordinarily caused by switching between mappings. Color as a discriminative cue, interference versus decay in adaptation aftereffect, and intermanual transfer were also examined. The results reveal a capacity for multiple concurrent visual-motor mappings, possibly controlled by a parametric process near the motor output stage of processing.
Allothetic and idiothetic sensor fusion in rat-inspired robot localization
NASA Astrophysics Data System (ADS)
Weitzenfeld, Alfredo; Fellous, Jean-Marc; Barrera, Alejandra; Tejera, Gonzalo
2012-06-01
We describe a spatial cognition model based on the rat's brain neurophysiology as a basis for new robotic navigation architectures. The model integrates allothetic (external visual landmarks) and idiothetic (internal kinesthetic information) cues to train either rat or robot to learn a path enabling it to reach a goal from multiple starting positions. It stands in contrast to most robotic architectures based on SLAM, where a map of the environment is built to provide probabilistic localization information computed from robot odometry and landmark perception. Allothetic cues suffer in general from perceptual ambiguity when trying to distinguish between places with equivalent visual patterns, while idiothetic cues suffer from imprecise motions and limited memory recalls. We experiment with both types of cues in different maze configurations by training rats and robots to find the goal starting from a fixed location, and then testing them to reach the same target from new starting locations. We show that the robot, after having pre-explored a maze, can find a goal with improved efficiency, and is able to (1) learn the correct route to reach the goal, (2) recognize places already visited, and (3) exploit allothetic and idiothetic cues to improve on its performance. We finally contrast our biologically-inspired approach to more traditional robotic approaches and discuss current work in progress.
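A minimal sketch of the general allothetic/idiothetic fusion idea (our illustration, not the paper's rat-brain model; the 2D pose representation and the 0.3 correction weight are arbitrary assumptions):

    import math

    def dead_reckon(pose, v, omega, dt):
        # Idiothetic update: integrate self-motion (odometry) cues.
        x, y, th = pose
        return (x + v * math.cos(th) * dt,
                y + v * math.sin(th) * dt,
                th + omega * dt)

    def landmark_correct(pose, fix, weight=0.3):
        # Allothetic update: pull the pose toward a visual-landmark fix,
        # countering the drift that pure odometry accumulates.
        x, y, th = pose
        return ((1 - weight) * x + weight * fix[0],
                (1 - weight) * y + weight * fix[1],
                th)

    pose = (0.0, 0.0, 0.0)
    pose = dead_reckon(pose, v=0.2, omega=0.1, dt=1.0)   # idiothetic cue
    pose = landmark_correct(pose, fix=(0.25, 0.05))      # allothetic cue
    print(pose)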
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Bowles, R. L.
1983-01-01
This paper addresses the issues of motion/visual cueing fidelity requirements for vortex encounters during simulated transport visual approaches and landings. Four simulator configurations were utilized to provide objective performance measures during simulated vortex penetrations, and subjective comments from pilots were collected. The configurations used were as follows: fixed base with visual degradation (delay), fixed base with no visual degradation, moving base with visual degradation (delay), and moving base with no visual degradation. The statistical comparisons of the objective measures and the subjective pilot opinions indicated that although both minimum visual delay and motion cueing are recommended for the vortex penetration task, the visual-scene delay characteristics were not as significant a fidelity factor as was the presence of motion cues. However, this indication was applicable to a restricted task, and to transport aircraft. Although they were statistically significant, the effects of visual delay and motion cueing on the touchdown-related measures were considered to be of no practical consequence.
Stimulus competition mediates the joint effects of spatial and feature-based attention
White, Alex L.; Rolfs, Martin; Carrasco, Marisa
2015-01-01
Distinct attentional mechanisms enhance the sensory processing of visual stimuli that appear at task-relevant locations and have task-relevant features. We used a combination of psychophysics and computational modeling to investigate how these two types of attention—spatial and feature based—interact to modulate sensitivity when combined in one task. Observers monitored overlapping groups of dots for a target change in color saturation, which they had to localize as being in the upper or lower visual hemifield. Pre-cues indicated the target's most likely location (left/right), color (red/green), or both location and color. We measured sensitivity (d′) for every combination of the location cue and the color cue, each of which could be valid, neutral, or invalid. When three competing saturation changes occurred simultaneously with the target change, there was a clear interaction: The spatial cueing effect was strongest for the cued color, and the color cueing effect was strongest at the cued location. In a second experiment, only the target dot group changed saturation, such that stimulus competition was low. The resulting cueing effects were statistically independent and additive: The color cueing effect was equally strong at attended and unattended locations. We account for these data with a computational model in which spatial and feature-based attention independently modulate the gain of sensory responses, consistent with measurements of cortical activity. Multiple responses then compete via divisive normalization. Sufficient competition creates interactions between the two cueing effects, although the attentional systems are themselves independent. This model helps reconcile seemingly disparate behavioral and physiological findings. PMID:26473316
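The model described here has the general shape of the normalization model of attention: spatial and feature gains multiply each stimulus drive independently, and every response is then divided by the pooled drive of all stimuli, which is where the interaction arises. A scalar sketch with invented gains, drives, and semisaturation constant:

    def normalized_responses(drives, spatial_gain, feature_gain, sigma=1.0):
        # Independent attentional gains multiply each stimulus drive...
        excitatory = [d * s * f
                      for d, s, f in zip(drives, spatial_gain, feature_gain)]
        # ...and divisive normalization by the pooled drive couples them.
        pool = sum(excitatory)
        return [e / (sigma + pool) for e in excitatory]

    # Four dot groups: cued location and cued color, cued location only,
    # cued color only, neither (all values invented for illustration).
    drives       = [1.0, 1.0, 1.0, 1.0]
    spatial_gain = [2.0, 2.0, 1.0, 1.0]
    feature_gain = [2.0, 1.0, 2.0, 1.0]
    print(normalized_responses(drives, spatial_gain, feature_gain))
    # -> [0.4, 0.2, 0.2, 0.1]: an interaction between the two cueing
    # effects emerges only because competitors share the normalization pool.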
Media/Device Configurations for Platoon Leader Tactical Training
1985-02-01
The device should simulate the real-time receipt of all tactical voice communication, audio and visual battlefield cues, and visual communication signals.
NASA Technical Reports Server (NTRS)
Zacharias, G. L.; Young, L. R.
1981-01-01
Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation is modeled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A dual-input describing function analysis supports the complementary model; vestibular cues dominate sensation at higher frequencies. The describing function model is extended by the proposal of a nonlinear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues.
Visual Attention in Flies-Dopamine in the Mushroom Bodies Mediates the After-Effect of Cueing.
Koenig, Sebastian; Wolf, Reinhard; Heisenberg, Martin
2016-01-01
Visual environments may simultaneously comprise stimuli of different significance. Often such stimuli require incompatible responses. Selective visual attention allows an animal to respond exclusively to the stimuli at a certain location in the visual field. In the process of establishing its focus of attention the animal can be influenced by external cues. Here we characterize the behavioral properties and neural mechanism of cueing in the fly Drosophila melanogaster. A cue can be attractive, repulsive or ineffective depending upon (e.g.) its visual properties and location in the visual field. Dopamine signaling in the brain is required to maintain the effect of cueing once the cue has disappeared. Raising or lowering dopamine at the synapse abolishes this after-effect. Specifically, dopamine is necessary and sufficient in the αβ-lobes of the mushroom bodies. Evidence is provided for an involvement of the αβ posterior Kenyon cells.
Little, Anthony C; DeBruine, Lisa M; Jones, Benedict C
2011-07-07
Evolutionary approaches to human attractiveness have documented several traits that are proposed to be attractive across individuals and cultures, although both cross-individual and cross-cultural variations are also often found. Previous studies show that parasite prevalence and mortality/health are related to cultural variation in preferences for attractive traits. Visual experience of pathogen cues may mediate such variable preferences. Here we showed individuals slideshows of images with cues to low and high pathogen prevalence and measured their visual preferences for face traits. We found that both men and women moderated their preferences for facial masculinity and symmetry according to recent experience of visual cues to environmental pathogens. Change in preferences was seen mainly for opposite-sex faces, with women preferring more masculine and more symmetric male faces and men preferring more feminine and more symmetric female faces after exposure to pathogen cues than when not exposed to such cues. Cues to environmental pathogens had no significant effects on preferences for same-sex faces. These data complement studies of cross-cultural differences in preferences by suggesting a mechanism for variation in mate preferences. Similar visual experience could lead to within-cultural agreement and differing visual experience could lead to cross-cultural variation. Overall, our data demonstrate that preferences can be strategically flexible according to recent visual experience with pathogen cues. Given that cues to pathogens may signal an increase in contagion/mortality risk, it may be adaptive to shift visual preferences in favour of proposed good-gene markers in environments where such cues are more evident.
Bailey, Maggie J; Riddoch, M Jane; Crome, Peter
2002-08-01
The presence of unilateral visual neglect (UVN) may adversely affect functional recovery, and rehabilitation strategies that are practical for use in clinical settings are needed. The purpose of this study was to evaluate the use of 2 approaches to reduce UVN in people who have had strokes. Seven elderly patients with stroke and severe left UVN, aged 60 to 85 years, were recruited from a stroke rehabilitation unit. A nonconcurrent, multiple-baselines-across-subjects approach, with an A-B-A treatment-withdrawal single-subject experimental design, was used. Five subjects received a scanning and cueing approach, and 2 subjects received a contralesional limb activation approach, for 10 one-hour sessions. In the former approach, active scanning to the left was encouraged by the therapist, using visual and verbal cues and a mental imagery technique, during reading and copying tasks and simple board games. In the latter approach, functional and goal-oriented left upper-limb activities in neglected hemispace were encouraged. Unilateral visual neglect was examined by a masked (blinded) examiner throughout all phases using the Star Cancellation Test, the Line Bisection Test, and the Baking Tray Task. Data were analyzed using visual and inferential statistical techniques. Both subjects who received limb activation and 3 of the 5 subjects who received scanning and cueing showed a reduction in UVN in one or more tests. This improvement was maintained during the withdrawal phase. Both approaches had a positive effect of reducing aspects of UVN in some subjects relative to no-treatment baselines. However, causality cannot be assured in the absence of controls. The approaches are practical for use in rehabilitation settings. These procedures warrant further replication across subjects, settings, and therapists.
The effect of visual context on manual localization of remembered targets
NASA Technical Reports Server (NTRS)
Barry, S. R.; Bloomberg, J. J.; Huebner, W. P.
1997-01-01
This paper examines the contribution of egocentric cues and visual context to manual localization of remembered targets. Subjects pointed in the dark to the remembered position of a target previously viewed without or within a structured visual scene. Without a remembered visual context, subjects pointed to within 2 degrees of the target. The presence of a visual context with cues of straight ahead enhanced pointing performance to the remembered location of central but not off-center targets. Thus, visual context provides strong visual cues of target position and the relationship of body position to target location. Without a visual context, egocentric cues provide sufficient input for accurate pointing to remembered targets.
The fMRI BOLD response to unisensory and multisensory smoking cues in nicotine-dependent adults
Cortese, Bernadette M.; Uhde, Thomas W.; Brady, Kathleen T.; McClernon, F. Joseph; Yang, Qing X.; Collins, Heather R.; LeMatty, Todd; Hartwell, Karen J.
2015-01-01
Given that the vast majority of functional magnetic resonance imaging (fMRI) studies of drug cue reactivity use unisensory visual cues, but that multisensory cues may elicit greater craving-related brain responses, the current study sought to compare the fMRI BOLD response to unisensory visual and multisensory, visual plus odor, smoking cues in 17 nicotine-dependent adult cigarette smokers. Brain activation to smoking-related, compared to neutral, pictures was assessed under cigarette smoke and odorless odor conditions. While smoking pictures elicited a pattern of activation consistent with the addiction literature, the multisensory (odor + picture) smoking cues elicited significantly greater and more widespread activation in mainly frontal and temporal regions. BOLD signal elicited by the multisensory, but not unisensory, cues was significantly related to participants’ level of control over craving as well. Results demonstrated that the co-presentation of cigarette smoke odor with smoking-related visual cues, compared to the visual cues alone, elicited greater levels of craving-related brain activation in key regions implicated in reward. These preliminary findings support future research aimed at a better understanding of multisensory integration of drug cues and craving. PMID:26475784
Prediction and constraint in audiovisual speech perception
Peelle, Jonathan E.; Sommers, Mitchell S.
2015-01-01
During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing precision of prediction. Electrophysiological studies demonstrate oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. PMID:25890390
Allen, Justine J; Mäthger, Lydia M; Barbosa, Alexandra; Buresch, Kendra C; Sogin, Emilia; Schwartz, Jillian; Chubb, Charles; Hanlon, Roger T
2010-04-07
Prey camouflage is an evolutionary response to predation pressure. Cephalopods have extensive camouflage capabilities and studying them can offer insight into effective camouflage design. Here, we examine whether cuttlefish, Sepia officinalis, show substrate or camouflage pattern preferences. In the first two experiments, cuttlefish were presented with a choice between different artificial substrates or between different natural substrates. First, the ability of cuttlefish to show substrate preference on artificial and natural substrates was established. Next, cuttlefish were offered substrates known to evoke the three main camouflage body pattern types these animals show: Uniform and Mottle (which function by background matching), or Disruptive. In a third experiment, cuttlefish were presented with conflicting visual cues on their left and right sides to assess their camouflage response. Given a choice between substrates they might encounter in nature, we found no strong substrate preference except when cuttlefish could bury themselves. Additionally, cuttlefish responded to conflicting visual cues with mixed body patterns in both the substrate preference and split substrate experiments. These results suggest that differences in energy costs for different camouflage body patterns may be minor and that pattern mixing and symmetry may play important roles in camouflage.
Clark, Gavin I; Rock, Adam J; McKeith, Charles F A; Coventry, William L
2017-09-01
Poker-machine gamblers have been demonstrated to report increases in the urge to gamble following exposure to salient gambling cues. However, the processes which contribute to this urge to gamble remain to be understood. The present study aimed to investigate whether changes in the conscious experience of visual imagery, rationality and volitional control (over one's thoughts, images and attention) predicted changes in the urge to gamble following exposure to a gambling cue. Thirty-one regular poker-machine gamblers who reported at least low levels of problem gambling on the Problem Gambling Severity Index (PGSI) were recruited to complete an online cue-reactivity experiment. Participants completed the PGSI, the visual imagery, rationality and volitional control subscales of the Phenomenology of Consciousness Inventory (PCI), and a visual analogue scale (VAS) assessing urge to gamble. Participants completed the PCI subscales and VAS at baseline, following a neutral video cue and following a gambling video cue. Urge to gamble was found to significantly increase from neutral cue to gambling cue (while controlling for baseline urge), and this increase was predicted by PGSI score. After accounting for the effects of problem-gambling severity, cue-reactive visual imagery, rationality and volitional control significantly improved the prediction of cue-reactive urge to gamble. The small sample size and limited participant characteristic data restrict the generalizability of the findings. Nevertheless, this is the first study to demonstrate that changes in the subjective experience of visual imagery, volitional control and rationality predict changes in the urge to gamble from neutral to gambling cue. The results suggest that visual imagery, rationality and volitional control may play an important role in the experience of the urge to gamble in poker-machine gamblers.
ERIC Educational Resources Information Center
Krahmer, Emiel; Swerts, Marc
2007-01-01
Speakers employ acoustic cues (pitch accents) to indicate that a word is important, but may also use visual cues (beat gestures, head nods, eyebrow movements) for this purpose. Even though these acoustic and visual cues are related, the exact nature of this relationship is far from well understood. We investigate whether producing a visual beat…
Visual Features Involving Motion Seen from Airport Control Towers
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Liston, Dorion
2010-01-01
Visual motion cues are used by tower controllers to support both visual and anticipated separation. Some of these cues are tabulated as part of the overall set of visual features used in towers to separate aircraft. An initial analysis of one motion cue, landing deceleration, is provided as a basis for evaluating how controllers detect and use it for spacing aircraft on or near the surface. Understanding such cues will help determine whether they can be safely used in a remote/virtual tower in which their presentation may be visually degraded.
Huerta, Claudia I; Sarkar, Pooja R; Duong, Timothy Q.; Laird, Angela R; Fox, Peter T
2013-01-01
Objective: The purpose of this study was to compare the results of the three food-cue paradigms most commonly used for functional neuroimaging studies to determine: i) commonalities and differences in the neural response patterns by paradigm; and, ii) the relative robustness and reliability of responses to each paradigm. Design and Methods: Functional magnetic resonance imaging (fMRI) studies using standardized stereotactic coordinates to report brain responses to food cues were identified using on-line databases. Studies were grouped by food-cue modality as: i) tastes (8 studies); ii) odors (8 studies); and, iii) images (11 studies). Activation likelihood estimation (ALE) was used to identify statistically reliable regional responses within each stimulation paradigm. Results: Brain response distributions were distinctly different for the three stimulation modalities, corresponding to known differences in location of the respective primary and associative cortices. Visual stimulation induced the most robust and extensive responses. The left anterior insula was the only brain region reliably responding to all three stimulus categories. Conclusions: These findings suggest the visual food-cue paradigm as a promising candidate for imaging studies addressing the neural substrate of therapeutic interventions. PMID:24174404
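In outline, ALE blurs each study's reported foci with a 3D Gaussian to form per-study modeled-activation maps and then combines the maps voxelwise as a probabilistic union. A simplified sketch of that outline (grid size, kernel width, and foci are placeholders; the permutation-based significance testing of real ALE is omitted):

    import numpy as np

    GRID = np.stack(np.meshgrid(*[np.arange(20)] * 3, indexing="ij"), axis=-1)
    SIGMA = 3.0 / 2.355    # Gaussian kernel, ~3-voxel FWHM (placeholder)

    def ma_map(foci):
        # Modeled-activation map for one study: max Gaussian over its foci.
        blurs = [np.exp(-((GRID - np.array(f)) ** 2).sum(axis=-1)
                        / (2.0 * SIGMA ** 2)) for f in foci]
        return np.max(blurs, axis=0)

    # Three hypothetical studies reporting foci in voxel coordinates.
    studies = [[(5, 5, 5), (12, 10, 8)], [(6, 5, 5)], [(13, 10, 8)]]

    # ALE value: probability that at least one study activates the voxel.
    ale = 1.0 - np.prod([1.0 - ma_map(f) for f in studies], axis=0)
    print(float(ale.max()))   # peak cross-study convergence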
Visual search guidance is best after a short delay
Schmidt, Joseph; Zelinsky, Gregory J.
2011-01-01
Search displays are typically presented immediately after a target cue, but in the real-world, delays often exist between target designation and search. Experiments 1 and 2 asked how search guidance changes with delay. Targets were cued using a picture or text label, each for 3000ms, followed by a delay up to 9000ms before the search display. Search stimuli were realistic objects, and guidance was quantified using multiple eye movement measures. Text-based cues showed a non-significant trend towards greater guidance following any delay relative to a no-delay condition. However, guidance from a pictorial cue increased sharply 300–600 msec after preview offset. Experiment 3 replicated this guidance enhancement using shorter preview durations while equating the time from cue onset to search onset, demonstrating that the guidance benefit is linked to preview offset rather than a more complete encoding of the target. Experiment 4 showed that enhanced guidance persists even with a mask flashed at preview offset, suggesting an explanation other than visual priming. We interpret our findings as evidence for the rapid consolidation of target information into a guiding representation, which attains its maximum effectiveness shortly after preview offset. PMID:21295053
ERIC Educational Resources Information Center
Katsioloudis, Petros; Jovanovic, Vukica; Jones, Mildred
2016-01-01
Several theorists believe that different types of visual cues influence cognition and behavior through learned associations; however, research provides inconsistent results. Considering this, a quasi-experimental study was done to determine if there are significant positive effects of visual cues (color blue) and to identify if a positive increase…
Exogenous Attention Enables Perceptual Learning
Szpiro, Sarit F. A.; Carrasco, Marisa
2015-01-01
Practice can improve visual perception, and these improvements are considered to be a form of brain plasticity. Training-induced learning is time-consuming and requires hundreds of trials across multiple days. The process of learning acquisition is understudied. Can learning acquisition be potentiated by manipulating visual attentional cues? We developed a protocol in which we used task-irrelevant cues for between-groups manipulation of attention during training. We found that training with exogenous attention can enable the acquisition of learning. Remarkably, this learning was maintained even when observers were subsequently tested under neutral conditions, which indicates that a change in perception was involved. Our study is the first to isolate the effects of exogenous attention and to demonstrate its efficacy to enable learning. We propose that exogenous attention boosts perceptual learning by enhancing stimulus encoding. PMID:26502745
Visual Cues and Perceived Reachability
ERIC Educational Resources Information Center
Gabbard, Carl; Ammar, Diala
2005-01-01
A rather consistent finding in studies of perceived (imagined) compared to actual movement in a reaching paradigm is the tendency to overestimate at midline. Explanations of such behavior have focused primarily on perceptions of postural constraints and the notion that individuals calibrate reachability in reference to multiple degrees of freedom,…
Visual form predictions facilitate auditory processing at the N1.
Paris, Tim; Kim, Jeesun; Davis, Chris
2017-02-20
Auditory-visual (AV) events often involve a leading visual cue (e.g. auditory-visual speech) that allows the perceiver to generate predictions about the upcoming auditory event. Electrophysiological evidence suggests that when an auditory event is predicted, processing is sped up, i.e., the N1 component of the ERP occurs earlier (N1 facilitation). However, it is not clear (1) whether N1 facilitation is based specifically on predictive rather than multisensory integration and (2) which particular properties of the visual cue it is based on. The current experiment used artificial AV stimuli in which visual cues predicted but did not co-occur with auditory cues. Visual form cues (high and low salience) and the auditory-visual pairing were manipulated so that auditory predictions could be based on form and timing or on timing only. The results showed that N1 facilitation occurred only for combined form and temporal predictions. These results suggest that faster auditory processing (as indicated by N1 facilitation) is based on predictive processing generated by a visual cue that clearly predicts both what and when the auditory stimulus will occur. Copyright © 2016. Published by Elsevier Ltd.
Homeostatic circuits selectively gate food cue responses in insular cortex
Livneh, Yoav; Ramesh, Rohan N.; Burgess, Christian R.; Levandowski, Kirsten M.; Madara, Joseph C.; Fenselau, Henning; Goldey, Glenn J.; Diaz, Veronica E.; Jikomes, Nick; Resch, Jon M.; Lowell, Bradford B.; Andermann, Mark L.
2017-01-01
Physiological needs bias perception and attention to relevant sensory cues. This process is ‘hijacked’ by drug addiction, causing cue-induced cravings and relapse. Similarly, its dysregulation contributes to failed diets, obesity, and eating disorders. Neuroimaging studies in humans have implicated insular cortex in these phenomena. However, it remains unclear how ‘cognitive’ cortical representations of motivationally relevant cues are biased by subcortical circuits that drive specific motivational states. Here we develop a microprism-based cellular imaging approach to monitor visual cue responses in the insular cortex of behaving mice across hunger states. Insular cortex neurons demonstrate food-cue-biased responses that are abolished during satiety. Unexpectedly, while multiple satiety-related visceral signals converge in insular cortex, chemogenetic activation of hypothalamic ‘hunger neurons’ (expressing agouti-related peptide (AgRP)) bypasses these signals to restore hunger-like response patterns in insular cortex. Circuit mapping and pathway-specific manipulations uncover a pathway from AgRP neurons to insular cortex via the paraventricular thalamus and basolateral amygdala. These results reveal a neural basis for state-specific biased processing of motivationally relevant cues. PMID:28614299
Role of somatosensory and vestibular cues in attenuating visually induced human postural sway
NASA Technical Reports Server (NTRS)
Peterka, Robert J.; Benolken, Martha S.
1993-01-01
The purpose was to determine the contribution of visual, vestibular, and somatosensory cues to the maintenance of stance in humans. Postural sway was induced by full-field, sinusoidal visual surround rotations about an axis at the level of the ankle joints. The influences of vestibular and somatosensory cues were characterized by comparing postural sway in normal and bilateral vestibular absent subjects in conditions that provided either accurate or inaccurate somatosensory orientation information. In normal subjects, the amplitude of visually induced sway reached a saturation level as stimulus amplitude increased. The saturation amplitude decreased with increasing stimulus frequency. No saturation phenomena were observed in subjects with vestibular loss, implying that vestibular cues were responsible for the saturation phenomenon. For visually induced sways below the saturation level, the stimulus-response curves for both normal and vestibular loss subjects were nearly identical implying that (1) normal subjects were not using vestibular information to attenuate their visually induced sway, possibly because sway was below a vestibular-related threshold level, and (2) vestibular loss subjects did not utilize visual cues to a greater extent than normal subjects; that is, a fundamental change in visual system 'gain' was not used to compensate for a vestibular deficit. An unexpected finding was that the amplitude of body sway induced by visual surround motion could be almost three times greater than the amplitude of the visual stimulus in normals and vestibular loss subjects. This occurred in conditions where somatosensory cues were inaccurate and at low stimulus amplitudes. A control system model of visually induced postural sway was developed to explain this finding. For both subject groups, the amplitude of visually induced sway was smaller by a factor of about four in tests where somatosensory cues provided accurate versus inaccurate orientation information. This implied that (1) the vestibular loss subjects did not utilize somatosensory cues to a greater extent than normal subjects; that is, changes in somatosensory system 'gain' were not used to compensate for a vestibular deficit, and (2) the threshold for the use of vestibular cues in normals was apparently lower in test conditions where somatosensory cues were providing accurate orientation information.
The effect of contextual cues on the encoding of motor memories.
Howard, Ian S; Wolpert, Daniel M; Franklin, David W
2013-05-01
Several studies have shown that sensory contextual cues can reduce the interference observed during learning of opposing force fields. However, because each study examined a small set of cues, often in a unique paradigm, the relative efficacy of different sensory contextual cues is unclear. In the present study we quantify how seven contextual cues, some investigated previously and some novel, affect the formation and recall of motor memories. Subjects made movements in a velocity-dependent curl field, with direction varying randomly from trial to trial but always associated with a unique contextual cue. Linking field direction to the cursor or background color, or to peripheral visual motion cues, did not reduce interference. In contrast, the orientation of a visual object attached to the hand cursor significantly reduced interference, albeit by a small amount. When the fields were associated with movement in different locations in the workspace, a substantial reduction in interference was observed. We tested whether this reduction in interference was due to the different locations of the visual feedback (targets and cursor) or the movements (proprioceptive). When the fields were associated only with changes in visual display location (movements always made centrally) or only with changes in the movement location (visual feedback always displayed centrally), a substantial reduction in interference was observed. These results show that although some visual cues can lead to the formation and recall of distinct representations in motor memory, changes in spatial visual and proprioceptive states of the movement are far more effective than changes in simple visual contextual cues.
Domain general learning: Infants use social and non-social cues when learning object statistics
Barry, Ryan A.; Graf Estes, Katharine; Rivera, Susan M.
2015-01-01
Previous research has shown that infants can learn from social cues. But is a social cue more effective at directing learning than a non-social cue? This study investigated whether 9-month-old infants (N = 55) could learn a visual statistical regularity in the presence of a distracting visual sequence when attention was directed by either a social cue (a person) or a non-social cue (a rectangle). The results show that both social and non-social cues can guide infants’ attention to a visual shape sequence (and away from a distracting sequence). The social cue more effectively directed attention than the non-social cue during the familiarization phase, but the social cue did not result in significantly stronger learning than the non-social cue. The findings suggest that domain general attention mechanisms allow for the comparable learning seen in both conditions. PMID:25999879
Cross-Sensory Transfer of Reference Frames in Spatial Memory
ERIC Educational Resources Information Center
Kelly, Jonathan W.; Avraamides, Marios N.
2011-01-01
Two experiments investigated whether visual cues influence spatial reference frame selection for locations learned through touch. Participants experienced visual cues emphasizing specific environmental axes and later learned objects through touch. Visual cues were manipulated and haptic learning conditions were held constant. Imagined perspective…
Grubert, Anna; Eimer, Martin
2016-08-01
To study whether top-down attentional control processes can be set simultaneously for different visual features, we employed a spatial cueing procedure to measure behavioral and electrophysiological markers of task-set contingent attentional capture during search for targets defined by 1 or 2 possible colors (one-color and two-color tasks). Search arrays were preceded by spatially nonpredictive color singleton cues. Behavioral spatial cueing effects indicative of attentional capture were elicited only by target-matching but not by distractor-color cues. However, when search displays contained 1 target-color and 1 distractor-color object among gray nontargets, N2pc components were triggered not only by target-color but also by distractor-color cues both in the one-color and two-color task, demonstrating that task-set nonmatching items attracted attention. When search displays contained 6 items in 6 different colors, so that participants had to adopt a fully feature-specific task set, the N2pc to distractor-color cues was eliminated in both tasks, indicating that nonmatching items were now successfully excluded from attentional processing. These results demonstrate that when observers adopt a feature-specific search mode, attentional task sets can be configured flexibly for multiple features within the same dimension, resulting in the rapid allocation of attention to task-set matching objects only. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Wahn, Basil; König, Peter
2015-01-01
Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the results of the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modality. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs in a pre-attentive processing stage.
Attentional bias to food-related visual cues: is there a role in obesity?
Doolan, K J; Breslin, G; Hanna, D; Gallagher, A M
2015-02-01
The incentive sensitisation model of obesity suggests that modification of the dopaminergic associated reward systems in the brain may result in increased awareness of food-related visual cues present in the current food environment. Having a heightened awareness of these visual food cues may impact on food choices and eating behaviours with those being most aware of or demonstrating greater attention to food-related stimuli potentially being at greater risk of overeating and subsequent weight gain. To date, research related to attentional responses to visual food cues has been both limited and conflicting. Such inconsistent findings may in part be explained by the use of different methodological approaches to measure attentional bias and the impact of other factors such as hunger levels, energy density of visual food cues and individual eating style traits that may influence visual attention to food-related cues outside of weight status alone. This review examines the various methodologies employed to measure attentional bias with a particular focus on the role that attentional processing of food-related visual cues may have in obesity. Based on the findings of this review, it appears that it may be too early to clarify the role visual attention to food-related cues may have in obesity. Results however highlight the importance of considering the most appropriate methodology to use when measuring attentional bias and the characteristics of the study populations targeted while interpreting results to date and in designing future studies.
Subconscious Visual Cues during Movement Execution Allow Correct Online Choice Reactions
Leukel, Christian; Lundbye-Jensen, Jesper; Christensen, Mark Schram; Gollhofer, Albert; Nielsen, Jens Bo; Taube, Wolfgang
2012-01-01
Part of the sensory information is processed by our central nervous system without conscious perception. Subconscious processing has been shown to be capable of triggering motor reactions. In the present study, we asked the question whether visual information, which is not consciously perceived, could influence decision-making in a choice reaction task. Ten healthy subjects (28±5 years) executed two different experimental protocols. In the Motor reaction protocol, a visual target cue was shown on a computer screen. Depending on the displayed cue, subjects had to either complete a reaching movement (go-condition) or had to abort the movement (stop-condition). The cue was presented with different display durations (20–160 ms). In the second Verbalization protocol, subjects verbalized what they experienced on the screen. Again, the cue was presented with different display durations. This second protocol tested for conscious perception of the visual cue. The results of this study show that subjects achieved significantly more correct responses in the Motor reaction protocol than in the Verbalization protocol. This difference was only observed at the very short display durations of the visual cue. Since correct responses in the Verbalization protocol required conscious perception of the visual information, our findings imply that the subjects performed correct motor responses to visual cues, which they were not conscious about. It is therefore concluded that humans may reach decisions based on subconscious visual information in a choice reaction task. PMID:23049749
Pritchard, Stephen C.; Zopf, Regine; Polito, Vince; Kaplan, David M.; Williams, Mark A.
2016-01-01
The concept of self-representation is commonly decomposed into three component constructs (sense of embodiment, sense of agency, and sense of presence), and each is typically investigated separately across different experimental contexts. For example, embodiment has been explored in bodily illusions; agency has been investigated in hypnosis research; and presence has been primarily studied in the context of Virtual Reality (VR) technology. Given that each component involves the integration of multiple cues within and across sensory modalities, they may rely on similar underlying mechanisms. However, the degree to which this may be true remains unclear when they are independently studied. As a first step toward addressing this issue, we manipulated a range of cues relevant to these components of self-representation within a single experimental context. Using consumer-grade Oculus Rift VR technology, and a new implementation of the Virtual Hand Illusion, we systematically manipulated visual form plausibility, visual–tactile synchrony, and visual–proprioceptive spatial offset to explore their influence on self-representation. Our results show that these cues differentially influence embodiment, agency, and presence. We provide evidence that each type of cue can independently and non-hierarchically influence self-representation yet none of these cues strictly constrains or gates the influence of the others. We discuss theoretical implications for understanding self-representation as well as practical implications for VR experiment design, including the suitability of consumer-based VR technology in research settings. PMID:27826275
Heimbauer, Lisa A; Antworth, Rebecca L; Owren, Michael J
2012-01-01
Nonhuman primates appear to capitalize more effectively on visual cues than corresponding auditory versions. For example, studies of inferential reasoning have shown that monkeys and apes readily respond to seeing that food is present ("positive" cuing) or absent ("negative" cuing). Performance is markedly less effective with auditory cues, with many subjects failing to use this input. Extending recent work, we tested eight captive tufted capuchins (Cebus apella) in locating food using positive and negative cues in visual and auditory domains. The monkeys chose between two opaque cups to receive food contained in one of them. Cup contents were either shown or shaken, providing location cues from both cups, positive cues only from the baited cup, or negative cues from the empty cup. As in previous work, subjects readily used both positive and negative visual cues to secure reward. However, auditory outcomes were both similar to and different from those of earlier studies. Specifically, all subjects came to exploit positive auditory cues, but none responded to negative versions. The animals were also clearly different in visual versus auditory performance. Results indicate that a significant proportion of capuchins may be able to use positive auditory cues, with experience and learning likely playing a critical role. These findings raise the possibility that experience may be significant in visually based performance in this task as well, and highlight that coming to grips with evident differences between visual versus auditory processing may be important for understanding primate cognition more generally.
Differential processing of binocular and monocular gloss cues in human visual cortex
Di Luca, Massimiliano; Ban, Hiroshi; Muryy, Alexander; Fleming, Roland W.
2016-01-01
The visual impression of an object's surface reflectance (“gloss”) relies on a range of visual cues, both monocular and binocular. Whereas previous imaging work has identified processing within ventral visual areas as important for monocular cues, little is known about cortical areas involved in processing binocular cues. Here, we used human functional MRI (fMRI) to test for brain areas selectively involved in the processing of binocular cues. We manipulated stereoscopic information to create four conditions that differed in their disparity structure and in the impression of surface gloss that they evoked. We performed multivoxel pattern analysis to find areas whose fMRI responses allow classes of stimuli to be distinguished based on their depth structure vs. material appearance. We show that higher dorsal areas play a role in processing binocular gloss information, in addition to known ventral areas involved in material processing, with ventral area lateral occipital responding to both object shape and surface material properties. Moreover, we tested for similarities between the representation of gloss from binocular cues and monocular cues. Specifically, we tested for transfer in the decoding performance of an algorithm trained on glossy vs. matte objects defined by either binocular or by monocular cues. We found transfer effects from monocular to binocular cues in dorsal visual area V3B/kinetic occipital (KO), suggesting a shared representation of the two cues in this area. These results indicate the involvement of mid- to high-level visual circuitry in the estimation of surface material properties, with V3B/KO potentially playing a role in integrating monocular and binocular cues. PMID:26912596
Auditory Emotional Cues Enhance Visual Perception
ERIC Educational Resources Information Center
Zeelenberg, Rene; Bocanegra, Bruno R.
2010-01-01
Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…
Inhibition-of-return at multiple locations in visual space.
Wright, R D; Richard, C M
1996-09-01
Inhibition-of-return is thought to be a visual search phenomenon characterized by delayed responses to targets presented at recently cued or recently fixated locations. We studied this inhibition effect following the simultaneous presentation of multiple location cues. The results indicated that response inhibition can be associated with as many as four locations at the same time. This suggests that a purely oculomotor account of inhibition-of-return is oversimplified. In short, although oculomotor processes appear to play a role in inhibition-of-return they may not tell the whole story about how it occurs because we can only program and execute eye movements to one location at a time.
Cue Integration in Categorical Tasks: Insights from Audio-Visual Speech Perception
Bejjanki, Vikranth Rao; Clayards, Meghan; Knill, David C.; Aslin, Richard N.
2011-01-01
Previous cue integration studies have examined continuous perceptual dimensions (e.g., size) and have shown that human cue integration is well described by a normative model in which cues are weighted in proportion to their sensory reliability, as estimated from single-cue performance. However, this normative model may not be applicable to categorical perceptual dimensions (e.g., phonemes). In tasks defined over categorical perceptual dimensions, optimal cue weights should depend not only on the sensory variance affecting the perception of each cue but also on the environmental variance inherent in each task-relevant category. Here, we present a computational and experimental investigation of cue integration in a categorical audio-visual (articulatory) speech perception task. Our results show that human performance during audio-visual phonemic labeling is qualitatively consistent with the behavior of a Bayes-optimal observer. Specifically, we show that the participants in our task are sensitive, on a trial-by-trial basis, to the sensory uncertainty associated with the auditory and visual cues, during phonemic categorization. In addition, we show that while sensory uncertainty is a significant factor in determining cue weights, it is not the only one and participants' performance is consistent with an optimal model in which environmental, within category variability also plays a role in determining cue weights. Furthermore, we show that in our task, the sensory variability affecting the visual modality during cue-combination is not well estimated from single-cue performance, but can be estimated from multi-cue performance. The findings and computational principles described here represent a principled first step towards characterizing the mechanisms underlying human cue integration in categorical tasks. PMID:21637344
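The normative model referenced in this abstract has a compact closed form: each cue is weighted by its reliability (the inverse of its sensory variance). A minimal numerical sketch of that baseline rule, with invented cue values and variances that are not taken from the study:

```python
# Reliability-weighted (Bayes-optimal) integration of two continuous cues,
# the normative baseline this abstract contrasts with categorical tasks.
# All numbers are illustrative, not from the study.

def integrate(x_a, var_a, x_v, var_v):
    """Combine auditory and visual estimates weighted by reliability (1/variance)."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    w_v = 1 - w_a
    x_hat = w_a * x_a + w_v * x_v
    var_hat = 1 / (1 / var_a + 1 / var_v)  # fused estimate is more reliable than either cue
    return x_hat, var_hat

# A reliable visual cue dominates a noisy auditory cue:
print(integrate(x_a=2.0, var_a=4.0, x_v=1.0, var_v=1.0))  # -> (1.2, 0.8)
```

The abstract's point is that for categorical dimensions this rule must be extended so that the weights also reflect within-category environmental variance, not sensory variance alone.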
Feature-based and object-based attention orientation during short-term memory maintenance.
Ku, Yixuan
2015-12-01
Top-down attention biases the short-term memory (STM) processing at multiple stages. Orienting attention during the maintenance period of STM by a retrospective cue (retro-cue) strengthens the representation of the cued item and improves the subsequent STM performance. In a recent article, Backer et al. (Backer KC, Binns MA, Alain C. J Neurosci 35: 1307-1318, 2015) extended these findings from the visual to the auditory domain and combined electroencephalography to dissociate neural mechanisms underlying feature-based and object-based attention orientation. Both event-related potentials and neural oscillations explained the behavioral benefits of retro-cues and favored the theory that feature-based and object-based attention orientation were independent. Copyright © 2015 the American Physiological Society.
The electrophysiological correlate of saliency: evidence from a figure-detection task.
Straube, Sirko; Fahle, Manfred
2010-01-11
Although figure-ground segregation in a natural environment usually relies on multiple cues, we experience a coherent figure usually without noticing the individual cues. It is still unclear how various cues interact to achieve this unified percept and whether this interaction depends on task demands. Studies investigating the effect of cue combination on the human EEG are still lacking. In the present study, we combined psychophysics, ERP and time-frequency analysis to investigate the interaction of orientation and spatial frequency as visual cues in a figure detection task. The figure was embedded in a matrix of Gabor elements, and we systematically varied figure saliency by changing the underlying cue configuration. We found a strong correlation between the posterior P2 amplitude and the perceived saliency of the figure: the P2 amplitude decreased with increasing saliency. Analogously, the power of the theta-band decreased for more salient figures. At longer latencies, the posterior P3 component was modulated in amplitude and latency, possibly reflecting increased decision confidence at higher saliencies. In conclusion, when the cue composition (e.g. one or two cues) or cue strength is changed in a figure detection task, first differences in the electrophysiological response reflect the perceived saliency and not directly the underlying cue configuration.
Is refreshing in working memory impaired in older age? Evidence from the retro-cue paradigm.
Loaiza, Vanessa M; Souza, Alessandra S
2018-04-10
Impairments in refreshing have been suggested as one source of working memory (WM) deficits in older age. Retro-cues provide an important method of investigating this question: a retro-cue guides attention to one WM item, thereby arguably refreshing it and increasing its accessibility compared with a no-cue baseline. In contrast to the refreshing deficit hypothesis, intact retro-cue benefits have been found in older adults. Refreshing, however, is assumed to boost not one but several WM representations when sequentially applied to them. Hence, intact refreshing requires the flexible switching of attention among WM items. So far, it remains an open question whether older adults show this flexibility. Here, we investigated whether older adults can use multiple cues to sequentially refresh WM representations. Younger and older adults completed a continuous-color delayed-estimation task, in which the number of retro-cues (0, 1, or 2) presented during the retention interval was manipulated. The results showed a similar retro-cue benefit for younger and older adults, even in the two-cue condition in which participants had to switch attention between items to refresh representations in WM. These findings suggest that the capacity to use cues to refresh information in visual WM may be preserved with age. © 2018 New York Academy of Sciences.
Auditory emotional cues enhance visual perception.
Zeelenberg, René; Bocanegra, Bruno R
2010-04-01
Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by emotional cues as compared to neutral cues. When the cue was presented visually we replicated the emotion-induced impairment found in other studies. Our results suggest emotional stimuli have a twofold effect on perception. They impair perception by reflexively attracting attention at the expense of competing stimuli. However, emotional stimuli also induce a nonspecific perceptual enhancement that carries over onto other stimuli when competition is reduced, for example, by presenting stimuli in different modalities. Copyright 2009 Elsevier B.V. All rights reserved.
The Effects of Explicit Visual Cues in Reading Biological Diagrams
ERIC Educational Resources Information Center
Ge, Yun-Ping; Unsworth, Len; Wang, Kuo-Hua
2017-01-01
Drawing on cognitive theories, this study intends to investigate the effects of explicit visual cues which have been proposed as a critical factor in facilitating understanding of biological images. Three diagrams from Taiwanese textbooks with implicit visual cues, involving the concepts of biological classification systems, fish taxonomy, and…
Arnold, S E J; Stevenson, P C; Belmain, S R
2015-08-01
Many insects show a greater attraction to multimodal cues, e.g. odour and colour combined, than to either cue alone. Despite the potential to apply the knowledge to improve control strategies, studies of multiple stimuli have not been undertaken for stored product pest insects. We tested orientation towards a food odour (crushed white maize) in combination with a colour cue (coloured paper with different surface spectral reflectance properties) in three storage pest beetle species, using motion tracking to monitor their behaviour. While the maize weevil, Sitophilus zeamais (Motsch.), showed attraction to both odour and colour stimuli, particularly to both cues in combination, this was not observed in the bostrichid pests Rhyzopertha dominica (F.) (lesser grain borer) or Prostephanus truncatus (Horn) (larger grain borer). The yellow stimulus was particularly attractive to S. zeamais, and control experiments showed that this was neither a result of the insects moving towards darker-coloured areas of the arena, nor their being repelled by optical brighteners in white paper. Visual stimuli may play a role in location of host material by S. zeamais, and can be used to inform trap design for the control or monitoring of maize weevils. The lack of visual responses by the two grain borers is likely to relate to their different host-seeking behaviours and ecological background, which should be taken into account when devising control methods.
Visual Navigation during Colony Emigration by the Ant Temnothorax rugatulus
Bowens, Sean R.; Glatt, Daniel P.; Pratt, Stephen C.
2013-01-01
Many ants rely on both visual cues and self-generated chemical signals for navigation, but their relative importance varies across species and context. We evaluated the roles of both modalities during colony emigration by Temnothorax rugatulus. Colonies were induced to move from an old nest in the center of an arena to a new nest at the arena edge. In the midst of the emigration the arena floor was rotated 60° around the old nest entrance, thus displacing any substrate-bound odor cues while leaving visual cues unchanged. This manipulation had no effect on orientation, suggesting little influence of substrate cues on navigation. When this rotation was accompanied by the blocking of most visual cues, the ants became highly disoriented, suggesting that they did not fall back on substrate cues even when deprived of visual information. Finally, when the substrate was left in place but the visual surround was rotated, the ants' subsequent headings were strongly rotated in the same direction, showing a clear role for visual navigation. Combined with earlier studies, these results suggest that chemical signals deposited by Temnothorax ants serve more for marking of familiar territory than for orientation. The ants instead navigate visually, showing the importance of this modality even for species with small eyes and coarse visual acuity. PMID:23671713
Prediction and constraint in audiovisual speech perception.
Peelle, Jonathan E; Sommers, Mitchell S
2015-07-01
During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. Copyright © 2015 Elsevier Ltd. All rights reserved.
Toward semantic-based retrieval of visual information: a model-based approach
NASA Astrophysics Data System (ADS)
Park, Youngchoon; Golshani, Forouzan; Panchanathan, Sethuraman
2002-07-01
This paper centers on the problem of automated visual content classification. To enable classification-based image or visual object retrieval, we propose a new image representation scheme called the visual context descriptor (VCD), a multidimensional vector in which each element represents the frequency of a unique visual property of an image or a region. VCD utilizes predetermined quality dimensions (i.e., types of features and quantization levels) and semantic model templates mined a priori. Both observed visual cues and contextually relevant visual features are proportionally incorporated in the VCD. Contextual relevance of a visual cue to a semantic class is determined by correlation analysis of ground truth samples. Such co-occurrence analysis of visual cues requires transformation of a real-valued visual feature vector (e.g., color histogram, Gabor texture) into a discrete event (e.g., terms in text). Good-features-to-track, the rule of thirds, iterative k-means clustering, and TSVQ are involved in the transformation of feature vectors into unified symbolic representations called visual terms. Similarity-based visual cue frequency estimation is also proposed and used to ensure the correctness of model learning and matching, since sparse sample data otherwise make frequency estimates of visual cues unstable. The proposed method naturally allows integration of heterogeneous visual, temporal, or spatial cues in a single classification or matching framework, and can be easily integrated into a semantic knowledge base such as a thesaurus or ontology. Robust semantic visual model template creation and object-based image retrieval are demonstrated based on the proposed content description scheme.
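The descriptor pipeline sketched in the abstract (quantize real-valued features into discrete "visual terms", then count term frequencies) can be illustrated briefly. This sketch substitutes scikit-learn's k-means for the paper's iterative k-means/TSVQ step; the function names, vocabulary size, and normalization are illustrative assumptions, not the authors' implementation:

```python
# Sketch of a bag-of-visual-terms descriptor: learn a codebook from training
# features (e.g., color histograms, Gabor responses), then represent an image
# by the frequency of each visual term among its region features.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(training_features, n_terms=256):
    """Quantize real-valued feature vectors into n_terms discrete visual terms."""
    return KMeans(n_clusters=n_terms, n_init=10).fit(training_features)

def visual_context_descriptor(image_features, codebook):
    """Term-frequency vector for one image's region features (hypothetical VCD analogue)."""
    terms = codebook.predict(image_features)
    counts = np.bincount(terms, minlength=codebook.n_clusters).astype(float)
    return counts / counts.sum()  # normalize counts to frequencies
```

The paper additionally spreads counts onto contextually correlated terms; that co-occurrence weighting is omitted here.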
Elvidge, C K; Macnaughton, C J; Brown, G E
2013-05-01
Prey incorporate multiple forms of publicly available information on predation risk into threat-sensitive antipredator behaviours. Changes in information availability have previously been demonstrated to elicit transient alterations in behavioural patterns, while the effects of long-term deprivation of particular forms of information remain largely unexplored. Damage-released chemical alarm cues from the epidermis of fishes are rendered non-functional under weakly acidic conditions (pH < 6.6), depriving fish of an important source of information on predation risk in acidified waterbodies. We addressed the effects of long-term deprivation on the antipredator responses to different combinations of chemical and visual threat cues via in situ observations of wild, free-swimming 0+ Atlantic salmon (Salmo salar) fry in four neutral and four weakly acidic nursery streams. In addition, a cross-population transplant experiment and natural interannual variation in acidity enabled the examination of provenance and environment as causes of the observed differences in response. Fish living under weakly acidic conditions demonstrate significantly greater or hypersensitive antipredator responses to visual cues compared to fish under neutral conditions. Under neutral conditions, fish demonstrate complementary (additive or synergistic) effects of paired visual and chemical cues consistent with threat-sensitive responses. Cross-population transplants and interannual comparisons of responses strongly support the conclusion that differences in antipredator responses between neutral and weakly acidic streams result from the loss of chemical information on predation risk, as opposed to population-derived differences in behaviours.
An eye movement analysis of the effect of interruption modality on primary task resumption.
Ratwani, Raj; Trafton, J Gregory
2010-06-01
We examined the effect of interruption modality (visual or auditory) on primary task (visual) resumption to determine which modality was the least disruptive. Theories examining interruption modality have focused on specific periods of the interruption timeline. Preemption theory has focused on the switch from the primary task to the interrupting task. Multiple resource theory has focused on interrupting tasks that are to be performed concurrently with the primary task. Our focus was on examining how interruption modality influences task resumption. We leverage the memory-for-goals theory, which suggests that maintaining an associative link between environmental cues and the suspended primary task goal is important for resumption. Three interruption modality conditions were examined: auditory interruption with the primary task visible, auditory interruption with a blank screen occluding the primary task, and a visual interruption occluding the primary task. Reaction time and eye movement data were collected. The auditory condition with the primary task visible was the least disruptive. Eye movement data suggest that participants in this condition were actively maintaining an associative link between relevant environmental cues on the primary task interface and the suspended primary task goal during the interruption. These data suggest that maintaining cue association is the important factor for reducing the disruptiveness of interruptions, not interruption modality. Interruption-prone computing environments should be designed to give the user access to relevant primary task cues during an interruption to minimize disruptiveness.
Seeing is believing: information content and behavioural response to visual and chemical cues
Gonzálvez, Francisco G.; Rodríguez-Gironés, Miguel A.
2013-01-01
Predator avoidance and foraging often pose conflicting demands. Animals can decrease mortality risk by searching for predators, but searching decreases foraging time and hence intake. We used this principle to investigate how prey should use information to detect, assess and respond to predation risk from an optimal foraging perspective. A mathematical model showed that solitary bees should increase flower examination time in response to predator cues and that the rate of false alarms should be negatively correlated with the relative value of the flower explored. The predatory ant, Oecophylla smaragdina, and the harmless ant, Polyrhachis dives, differ in the profile of volatiles they emit and in their visual appearance. As predicted, the solitary bee Nomia strigata spent more time examining virgin flowers in the presence of predator cues than in their absence. Furthermore, the proportion of flowers rejected decreased from morning to noon, as the relative value of virgin flowers increased. In addition, bees responded differently to visual and chemical cues. While chemical cues induced bees to search around flowers, bees detecting visual cues hovered in front of them. These strategies may allow prey to identify the nature of visual cues and to locate the source of chemical cues. PMID:23698013
NASA Technical Reports Server (NTRS)
Foyle, David C.; Kaiser, Mary K.; Johnson, Walter W.
1992-01-01
This paper reviews some of the sources of visual information that are available in the out-the-window scene and describes how these visual cues are important for routine pilotage and training, as well as the development of simulator visual systems and enhanced or synthetic vision systems for aircraft cockpits. It is shown how these visual cues may change or disappear under environmental or sensor conditions, and how the visual scene can be augmented by advanced displays to capitalize on the pilot's excellent ability to extract visual information from the visual scene.
Beyond scene gist: Objects guide search more than scene background.
Koehler, Kathryn; Eckstein, Miguel P
2017-06-01
Although the facilitation of visual search by contextual information is well established, there is little understanding of the independent contributions of different types of contextual cues in scenes. Here we manipulated 3 types of contextual information: object co-occurrence, multiple object configurations, and background category. We isolated the benefits of each contextual cue to target detectability, its impact on decision bias, confidence, and the guidance of eye movements. We find that object-based information guides eye movements and facilitates perceptual judgments more than scene background. The degree of guidance and facilitation of each contextual cue can be related to its inherent informativeness about the target spatial location as measured by human explicit judgments about likely target locations. Our results improve the understanding of the contributions of distinct contextual scene components to search and suggest that the brain's utilization of cues to guide eye movements is linked to the cue's informativeness about the target's location. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Guidelines for Creating a Transition Routine: Changing from One Room to Another
ERIC Educational Resources Information Center
McCoy, Kathleen M.; Mathur, Sarup. R.; Czoka, Andrea
2010-01-01
Research strongly suggests that visual supports, such as pictures, prompts, cue cards, story boards, graphic schedules, and diagrams, have been effective for providing instructional structure for a wide variety of children and for multiple activities. Making successful transitions from one activity to another is difficult for many children,…
The Use of Peer Networks to Increase Communicative Acts of Students with Autism Spectrum Disorders
ERIC Educational Resources Information Center
Kamps, Debra; Mason, Rose; Thiemann-Bourque, Kathy; Feldmiller, Sarah; Turcotte, Amy; Miller, Todd
2014-01-01
Peer networks including social groups using typical peers, scripted instruction, visual text cues, and reinforcement were examined with students with autism spectrum disorders (ASD). A multiple baseline design across four participants was used to measure students' use of communication acts with peers during free play following instruction. Peer…
Neural Circuit to Integrate Opposing Motions in the Visual Field.
Mauss, Alex S; Pankova, Katarina; Arenz, Alexander; Nern, Aljoscha; Rubin, Gerald M; Borst, Alexander
2015-07-16
When navigating in their environment, animals use visual motion cues as feedback signals that are elicited by their own motion. Such signals are provided by wide-field neurons sampling motion directions at multiple image points as the animal maneuvers. Each one of these neurons responds selectively to a specific optic flow-field representing the spatial distribution of motion vectors on the retina. Here, we describe the discovery of a group of local, inhibitory interneurons in the fruit fly Drosophila key for filtering these cues. Using anatomy, molecular characterization, activity manipulation, and physiological recordings, we demonstrate that these interneurons convey direction-selective inhibition to wide-field neurons with opposite preferred direction and provide evidence for how their connectivity enables the computation required for integrating opposing motions. Our results indicate that, rather than sharpening directional selectivity per se, these circuit elements reduce noise by eliminating non-specific responses to complex visual information. Copyright © 2015 Elsevier Inc. All rights reserved.
A unique role of endogenous visual-spatial attention in rapid processing of multiple targets
Guzman, Emmanuel; Grabowecky, Marcia; Palafox, German; Suzuki, Satoru
2012-01-01
Visual spatial attention can be exogenously captured by a salient stimulus or can be endogenously allocated by voluntary effort. Whether these two attention modes serve distinctive functions is debated, but for processing of single targets the literature suggests superiority of exogenous attention (it is faster acting and serves more functions). We report that endogenous attention uniquely contributes to processing of multiple targets. For speeded visual discrimination, response times are faster for multiple redundant targets than for single targets due to probability summation and/or signal integration. This redundancy gain was unaffected when attention was exogenously diverted from the targets, but was completely eliminated when attention was endogenously diverted. This was not due to weaker manipulation of exogenous attention because our exogenous and endogenous cues similarly affected overall response times. Thus, whereas exogenous attention is superior for processing single targets, endogenous attention plays a unique role in allocating resources crucial for rapid concurrent processing of multiple targets. PMID:21517209
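The redundancy gain invoked here follows directly from probability summation: if each target is detected after an independent noisy latency, responding to whichever detection finishes first is faster on average than waiting on a single target. A toy simulation, with invented latency parameters:

```python
# Statistical facilitation ("race") account of the redundant-target effect.
# Latency distributions are invented Gaussians, for illustration only.
import random

def mean_rt(n_targets, trials=100_000, mu=300.0, sigma=50.0):
    """Mean RT when the fastest of n independent target detections triggers the response."""
    total = 0.0
    for _ in range(trials):
        total += min(random.gauss(mu, sigma) for _ in range(n_targets))
    return total / trials

print(mean_rt(1))  # ~300 ms with a single target
print(mean_rt(2))  # ~272 ms with redundant targets
```

The study's finding is that endogenously diverting attention abolishes this gain, while exogenously diverting it does not.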
Role of somatosensory and vestibular cues in attenuating visually induced human postural sway
NASA Technical Reports Server (NTRS)
Peterka, R. J.; Benolken, M. S.
1995-01-01
The purpose of this study was to determine the contribution of visual, vestibular, and somatosensory cues to the maintenance of stance in humans. Postural sway was induced by full-field, sinusoidal visual surround rotations about an axis at the level of the ankle joints. The influences of vestibular and somatosensory cues were characterized by comparing postural sway in normal and bilateral vestibular absent subjects in conditions that provided either accurate or inaccurate somatosensory orientation information. In normal subjects, the amplitude of visually induced sway reached a saturation level as stimulus amplitude increased. The saturation amplitude decreased with increasing stimulus frequency. No saturation phenomena were observed in subjects with vestibular loss, implying that vestibular cues were responsible for the saturation phenomenon. For visually induced sways below the saturation level, the stimulus-response curves for both normal subjects and subjects experiencing vestibular loss were nearly identical, implying (1) that normal subjects were not using vestibular information to attenuate their visually induced sway, possibly because sway was below a vestibular-related threshold level, and (2) that subjects with vestibular loss did not utilize visual cues to a greater extent than normal subjects; that is, a fundamental change in visual system "gain" was not used to compensate for a vestibular deficit. An unexpected finding was that the amplitude of body sway induced by visual surround motion could be almost 3 times greater than the amplitude of the visual stimulus in normal subjects and subjects with vestibular loss. This occurred in conditions where somatosensory cues were inaccurate and at low stimulus amplitudes. A control system model of visually induced postural sway was developed to explain this finding. For both subject groups, the amplitude of visually induced sway was smaller by a factor of about 4 in tests where somatosensory cues provided accurate versus inaccurate orientation information. This implied (1) that the subjects experiencing vestibular loss did not utilize somatosensory cues to a greater extent than normal subjects; that is, changes in somatosensory system "gain" were not used to compensate for a vestibular deficit, and (2) that the threshold for the use of vestibular cues in normal subjects was apparently lower in test conditions where somatosensory cues were providing accurate orientation information.
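The saturation behavior described in both versions of this abstract can be caricatured as a gain stage followed by a frequency-dependent ceiling. A deliberately simple toy function, with invented constants (the reported gain of almost 3 motivates the default):

```python
# Toy saturating response: sway tracks the visual stimulus with high gain
# until a vestibular-set ceiling is reached; the ceiling shrinks as stimulus
# frequency rises. Constants are illustrative, not fitted model parameters.
def sway_amplitude(stim_amp_deg, stim_freq_hz, visual_gain=2.8, k=0.5):
    saturation_deg = k / stim_freq_hz      # ceiling falls with frequency
    return min(visual_gain * stim_amp_deg, saturation_deg)

for amp in (0.05, 0.2, 1.0):               # degrees of surround rotation
    print(amp, sway_amplitude(amp, stim_freq_hz=0.2))
```

In subjects with vestibular loss the ceiling would simply be absent, i.e., the response stays linear in stimulus amplitude.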
Differential processing of binocular and monocular gloss cues in human visual cortex.
Sun, Hua-Chun; Di Luca, Massimiliano; Ban, Hiroshi; Muryy, Alexander; Fleming, Roland W; Welchman, Andrew E
2016-06-01
The visual impression of an object's surface reflectance ("gloss") relies on a range of visual cues, both monocular and binocular. Whereas previous imaging work has identified processing within ventral visual areas as important for monocular cues, little is known about cortical areas involved in processing binocular cues. Here, we used human functional MRI (fMRI) to test for brain areas selectively involved in the processing of binocular cues. We manipulated stereoscopic information to create four conditions that differed in their disparity structure and in the impression of surface gloss that they evoked. We performed multivoxel pattern analysis to find areas whose fMRI responses allow classes of stimuli to be distinguished based on their depth structure vs. material appearance. We show that higher dorsal areas play a role in processing binocular gloss information, in addition to known ventral areas involved in material processing, with ventral area lateral occipital responding to both object shape and surface material properties. Moreover, we tested for similarities between the representation of gloss from binocular cues and monocular cues. Specifically, we tested for transfer in the decoding performance of an algorithm trained on glossy vs. matte objects defined by either binocular or by monocular cues. We found transfer effects from monocular to binocular cues in dorsal visual area V3B/kinetic occipital (KO), suggesting a shared representation of the two cues in this area. These results indicate the involvement of mid- to high-level visual circuitry in the estimation of surface material properties, with V3B/KO potentially playing a role in integrating monocular and binocular cues. Copyright © 2016 the American Physiological Society.
Appraisal of unimodal cues during agonistic interactions in Maylandia zebra
Ben Ammar, Imen; Fernandez, Marie S.A.; Boyer, Nicolas; Attia, Joël; Fonseca, Paulo J.; Amorim, M. Clara P.; Beauchaud, Marilyn
2017-01-01
Communication is essential during social interactions including animal conflicts and it is often a complex process involving multiple sensory channels or modalities. To better understand how different modalities interact during communication, it is fundamental to study the behavioural responses to both the composite multimodal signal and each unimodal component with adequate experimental protocols. Here we test how an African cichlid, which communicates with multiple senses, responds to different sensory stimuli in a socially relevant scenario. We tested Maylandia zebra males with isolated chemical (urine or holding water, both coming from dominant males), visual (real opponent or video playback) and acoustic (agonistic sounds) cues during agonistic interactions. We showed that (1) these fish relied mostly on the visual modality, showing increased aggressiveness in response to the sight of a real contestant but no responses to urine or agonistic sounds presented separately, (2) video playback in our study did not appear appropriate to test the visual modality and needs further technical refinement, (3) holding water provoked territorial behaviours and seems promising for investigating the role of the chemical channel in this species. Our findings suggest that unimodal signals are non-redundant but how different sensory modalities interplay during communication remains largely unknown in fish. PMID:28785523
Salter, Phia S; Kelley, Nicholas J; Molina, Ludwin E; Thai, Luyen T
2017-09-01
Photographs provide critical retrieval cues for personal remembering, but few studies have considered this phenomenon at the collective level. In this research, we examined the psychological consequences of visual attention to the presence (or absence) of racially charged retrieval cues within American racial segregation photographs. We hypothesised that attention to racial retrieval cues embedded in historical photographs would increase social justice concept accessibility. In Study 1, we recorded gaze patterns with an eye-tracker among participants viewing images that contained racial retrieval cues or were digitally manipulated to remove them. In Study 2, we manipulated participants' gaze behaviour by either directing visual attention toward racial retrieval cues, away from racial retrieval cues, or directing attention within photographs where racial retrieval cues were missing. Across Studies 1 and 2, visual attention to racial retrieval cues in photographs documenting historical segregation predicted social justice concept accessibility.
USDA-ARS?s Scientific Manuscript database
In June and July 2011, traps were deployed in Tuskegee National Forest, Macon County, Alabama, to test the influence of chemical and visual cues on the capture of bark and ambrosia beetles (Coleoptera: Curculionidae: Scolytinae). The first experiment investigated t...
Impact of Visual, Vocal, and Lexical Cues on Judgments of Counselor Qualities
ERIC Educational Resources Information Center
Strahan, Carole; Zytowski, Donald G.
1976-01-01
Undergraduate students (N=130) rated Carl Rogers via visual, lexical, vocal, or vocal-lexical communication channels. Lexical cues were more important in creating favorable impressions among females. Subsequent exposure to combined visual-vocal-lexical cues resulted in warmer and less distant ratings, but not on a consistent basis. (Author)
Semantic Indexing of Multimedia Content Using Visual, Audio, and Text Cues
NASA Astrophysics Data System (ADS)
Adams, W. H.; Iyengar, Giridharan; Lin, Ching-Yung; Naphade, Milind Ramesh; Neti, Chalapathy; Nock, Harriet J.; Smith, John R.
2003-12-01
We present a learning-based approach to the semantic indexing of multimedia content using cues derived from audio, visual, and text features. We approach the problem by developing a set of statistical models for a predefined lexicon. Novel concepts are then mapped in terms of the concepts in the lexicon. To achieve robust detection of concepts, we exploit features from multiple modalities, namely, audio, video, and text. Concept representations are modeled using Gaussian mixture models (GMM), hidden Markov models (HMM), and support vector machines (SVM). Models such as Bayesian networks and SVMs are used in a late-fusion approach to model concepts that are not explicitly modeled in terms of features. Our experiments indicate promise in the proposed classification and fusion methodologies: our proposed fusion scheme achieves more than 10% relative improvement over the best unimodal concept detector.
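The late-fusion stage described here is straightforward to sketch: each modality yields a concept-confidence score, and a second-stage classifier is trained on the score vectors. The synthetic scores below stand in for real audio, video, and text detectors; everything about the data is an assumption for illustration:

```python
# Late fusion over unimodal concept-detector scores with an SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 400
labels = rng.integers(0, 2, n)             # concept present (1) or absent (0)
audio = labels + rng.normal(0, 0.8, n)     # noisy unimodal detector scores
video = labels + rng.normal(0, 0.6, n)
text = labels + rng.normal(0, 1.0, n)
scores = np.column_stack([audio, video, text])

fusion = SVC(kernel="rbf").fit(scores[:300], labels[:300])
print("fused accuracy:", fusion.score(scores[300:], labels[300:]))
```

A Bayesian network, which the abstract also mentions, is an alternative fusion model when concepts must be composed from other concepts rather than from raw features.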
Exogenous Attention Enables Perceptual Learning.
Szpiro, Sarit F A; Carrasco, Marisa
2015-12-01
Practice can improve visual perception, and these improvements are considered to be a form of brain plasticity. Training-induced learning is time-consuming and requires hundreds of trials across multiple days. The process of learning acquisition is understudied. Can learning acquisition be potentiated by manipulating visual attentional cues? We developed a protocol in which we used task-irrelevant cues for between-groups manipulation of attention during training. We found that training with exogenous attention can enable the acquisition of learning. Remarkably, this learning was maintained even when observers were subsequently tested under neutral conditions, which indicates that a change in perception was involved. Our study is the first to isolate the effects of exogenous attention and to demonstrate its efficacy to enable learning. We propose that exogenous attention boosts perceptual learning by enhancing stimulus encoding. © The Author(s) 2015.
Ortiz-Ruiz, Alejandra; Postigo, María; Gil-Casanova, Sara; Cuadrado, Daniel; Bautista, José M; Rubio, José Miguel; Luengo-Oroz, Miguel; Linares, María
2018-01-30
Routine field diagnosis of malaria is a considerable challenge in rural, low-resource endemic areas, mainly due to a lack of personnel, training, and sample-processing capacity. In addition, differential diagnosis of Plasmodium species is prone to a high rate of misdiagnosis. Real-time remote microscopical diagnosis through on-line crowdsourcing platforms could form an agile network to support diagnosis-based treatment and malaria control in low-resource areas. This study explores whether accurate Plasmodium species identification, a critical step in the diagnostic protocol for choosing the appropriate medication, is possible through the information provided by non-trained on-line volunteers. 88 volunteers completed a series of questionnaires on 110 images to differentiate species (Plasmodium falciparum, Plasmodium ovale, Plasmodium vivax, Plasmodium malariae, Plasmodium knowlesi) and parasite staging from thin blood smear images digitalized with a smartphone camera adapted to the ocular of a conventional light microscope. Visual cues evaluated in the surveys include texture and colour, parasite shape, and red blood cell size. On-line volunteers are able to discriminate Plasmodium species (P. falciparum, P. malariae, P. vivax, P. ovale, P. knowlesi) and stages in thin blood smears according to visual cues observed on digitalized images of parasitized red blood cells. Friendly textual descriptions of the visual cues and specialized malaria terminology are key for volunteer learning and efficiency. On-line volunteers with short training are able to differentiate malaria parasite species and parasite stages from digitalized thin smears based on simple visual cues (shape, size, texture and colour). While the accuracy of a single on-line volunteer is far from perfect, a single parasite classification obtained by combining the opinions of multiple on-line volunteers over the same smear could improve the accuracy and reliability of Plasmodium species identification in remote malaria diagnosis.
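The closing claim, that pooling many volunteers' opinions on the same smear improves reliability, amounts to a vote-aggregation step. A minimal majority-vote sketch; the labels and vote counts are invented, and the abstract does not specify the aggregation rule actually used:

```python
# Aggregate independent volunteer classifications of one smear by majority vote.
from collections import Counter

def pooled_diagnosis(votes):
    """Return the modal species label and its vote share."""
    label, n = Counter(votes).most_common(1)[0]
    return label, n / len(votes)

votes = ["P. falciparum", "P. vivax", "P. falciparum", "P. falciparum", "P. ovale"]
print(pooled_diagnosis(votes))  # ('P. falciparum', 0.6)
```

Weighting votes by each volunteer's past accuracy would be a natural refinement of this scheme.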
Chen, Juan; Sperandio, Irene; Goodale, Melvyn Alan
2018-03-19
Our brain integrates information from multiple modalities in the control of behavior. When information from one sensory source is compromised, information from another source can compensate for the loss. What is not clear is whether the nature of this multisensory integration and the re-weighting of different sources of sensory information are the same across different control systems. Here, we investigated whether proprioceptive distance information (position sense of body parts) can compensate for the loss of visual distance cues that support size constancy in perception (mediated by the ventral visual stream) [1, 2] versus size constancy in grasping (mediated by the dorsal visual stream) [3-6], in which the real-world size of an object is computed despite changes in viewing distance. We found that there was perfect size constancy in both perception and grasping in a full-viewing condition (lights on, binocular viewing) and that size constancy in both tasks was dramatically disrupted in the restricted-viewing condition (lights off; monocular viewing of the same but luminescent object through a 1-mm pinhole). Importantly, in the restricted-viewing condition, proprioceptive cues about viewing distance originating from the non-grasping limb (experiment 1) or the inclination of the torso and/or the elbow angle of the grasping limb (experiment 2) compensated for the loss of visual distance cues to enable a complete restoration of size constancy in grasping but only a modest improvement of size constancy in perception. This suggests that the weighting of different sources of sensory information varies as a function of the control system being used. Copyright © 2018 Elsevier Ltd. All rights reserved.
Schiller, Peter H; Kwak, Michelle C; Slocum, Warren M
2012-08-01
This study examined how effectively visual and auditory cues can be integrated in the brain for the generation of motor responses. The latencies with which saccadic eye movements are produced in humans and monkeys form, under certain conditions, a bimodal distribution, the first mode of which has been termed express saccades. In humans, a much higher percentage of express saccades is generated when both visual and auditory cues are provided compared with the single presentation of these cues [H. C. Hughes et al. (1994) J. Exp. Psychol. Hum. Percept. Perform., 20, 131-153]. In this study, we addressed two questions: first, do monkeys also integrate visual and auditory cues for express saccade generation as do humans and second, does such integration take place in humans when, instead of eye movements, the task is to press levers with fingers? Our results show that (i) in monkeys, as in humans, the combined visual and auditory cues generate a much higher percentage of express saccades than do singly presented cues and (ii) the latencies with which levers are pressed by humans are shorter when both visual and auditory cues are provided compared with the presentation of single cues, but the distribution in all cases is unimodal; response latencies in the express range seen in the execution of saccadic eye movements are not obtained with lever pressing. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Neutral location cues and cost/benefit analysis of visual attention shifts.
Wright, R D; Richard, C M; McDonald, J J
1995-12-01
The effects of location cuing on target responses can be examined by comparing informative and neutral cuing conditions. In particular, the magnitudes of costs of invalid location cuing and of benefits of valid location cuing can be determined by comparing invalid and valid cue responses to location-nonspecific neutral cue responses. Cost/benefit analysis is based on the assumption that neutral baseline measures reflect a general warning effect about the impending target's onset but no other specific target information. The experiments we report were carried out to determine the appropriateness of two baseline measures for cost/benefit analyses of direct (nonsymbolic) location cuing effects. We found that a multiple-cue baseline attenuated the benefits of valid cuing, and that a background-flash baseline arbitrarily attenuated costs or benefits depending on flash intensity. It is proposed that a background flash is the more suitable neutral cue because it is target-location-nonspecific, but that its intensity should be adjusted to elicit a target-onset warning signal of the same magnitude as the location cues with which it will be compared.
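The cost/benefit arithmetic at issue reduces to two differences against the neutral baseline; a worked sketch with hypothetical mean response times makes the dependence on that baseline explicit.

```python
# Hypothetical mean response times (ms) for one observer.
rt_valid, rt_neutral, rt_invalid = 310.0, 340.0, 385.0

benefit = rt_neutral - rt_valid   # speed-up from a valid location cue
cost = rt_invalid - rt_neutral    # slow-down from an invalid location cue
print(f"benefit = {benefit:.0f} ms, cost = {cost:.0f} ms")

# The authors' point: if the neutral event is not truly location-nonspecific,
# or warns more (or less) strongly than the location cues, rt_neutral shifts
# and the cost and benefit estimates are distorted in opposite directions.
```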
Audio–visual interactions for motion perception in depth modulate activity in visual area V3A
Ogawa, Akitoshi; Macaluso, Emiliano
2013-01-01
Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414
Contextual Cueing Effect in Spatial Layout Defined by Binocular Disparity
Zhao, Guang; Zhuang, Qian; Ma, Jie; Tu, Shen; Liu, Qiang; Sun, Hong-jin
2017-01-01
Repeated visual context induces higher search efficiency, revealing a contextual cueing effect that depends on the association between the target and its visual context. In this study, participants performed a visual search task in which search items were presented with depth information defined by binocular disparity. When the 3-dimensional (3D) configurations were repeated over blocks, a contextual cueing effect was obtained (Experiment 1). When depth information varied randomly across repetitions of a configuration, visual search was not facilitated and the contextual cueing effect was largely abolished (Experiment 2). However, when the search items were given a small random displacement in the 2-dimensional (2D) plane while the depth information was held constant, contextual cueing was preserved (Experiment 3). We concluded that the contextual cueing effect is robust in contexts defined by 3D space with stereoscopic information and, more importantly, that the visual system prioritizes stereoscopic information when learning spatial layouts in which depth information is available. PMID:28912739
A magnetoencephalography study of visual processing of pain anticipation.
Machado, Andre G; Gopalakrishnan, Raghavan; Plow, Ela B; Burgess, Richard C; Mosher, John C
2014-07-15
Anticipating pain is important for avoiding injury; however, in chronic pain patients, anticipatory behavior can become maladaptive, leading to sensitization and limiting function. Knowledge of networks involved in pain anticipation and conditioning over time could help devise novel, better-targeted therapies. With the use of magnetoencephalography, we evaluated in 10 healthy subjects the neural processing of pain anticipation. Anticipatory cortical activity elicited by consecutive visual cues that signified imminent painful stimulus was compared with cues signifying nonpainful and no stimulus. We found that the neural processing of visually evoked pain anticipation involves the primary visual cortex along with cingulate and frontal regions. Visual cortex could quickly and independently encode and discriminate between visual cues associated with pain anticipation and no pain during preconscious phases following object presentation. When evaluating the effect of task repetition on participating cortical areas, we found that activity of prefrontal and cingulate regions was mostly prominent early on when subjects were still naive to a cue's contextual meaning. Visual cortical activity was significant throughout later phases. Although visual cortex may precisely and time efficiently decode cues anticipating pain or no pain, prefrontal areas establish the context associated with each cue. These findings have important implications toward processes involved in pain anticipation and maladaptive pain conditioning. Copyright © 2014 the American Physiological Society.
Unconscious cues bias first saccades in a free-saccade task.
Huang, Yu-Feng; Tan, Edlyn Gui Fang; Soon, Chun Siong; Hsieh, Po-Jang
2014-10-01
Visual-spatial attention can be biased towards salient visual information without visual awareness. It is unclear, however, whether such bias can further influence free-choices such as saccades in a free viewing task. In our experiment, we presented visual cues below awareness threshold immediately before people made free saccades. Our results showed that masked cues could influence the direction and latency of the first free saccade, suggesting that salient visual information can unconsciously influence free actions. Copyright © 2014 Elsevier Inc. All rights reserved.
Suggested Interactivity: Seeking Perceived Affordances for Information Visualization.
Boy, Jeremy; Eveillard, Louis; Detienne, Françoise; Fekete, Jean-Daniel
2016-01-01
In this article, we investigate methods for suggesting the interactivity of online visualizations embedded with text. We first assess the need for such methods by conducting three initial experiments on Amazon's Mechanical Turk. We then present a design space for Suggested Interactivity (i.e., visual cues used as perceived affordances, abbreviated SI), based on a survey of 382 HTML5 and visualization websites. Finally, we assess the effectiveness of three SI cues we designed for suggesting the interactivity of bar charts embedded with text. Our results show that only one cue (SI3) was successful in prompting participants to interact with the visualizations, and we hypothesize that this is because this particular cue provided feedforward.
Visual speech influences speech perception immediately but not automatically.
Mitterer, Holger; Reinisch, Eva
2017-02-01
Two experiments examined the time course of the use of auditory and visual speech cues to spoken word recognition using an eye-tracking paradigm. Results of the first experiment showed that the use of visual speech cues from lipreading is reduced if concurrently presented pictures require a division of attentional resources. This reduction was evident even when listeners' eye gaze was on the speaker rather than the (static) pictures. Experiment 2 used a deictic hand gesture to foster attention to the speaker. At the same time, the visual processing load was reduced by keeping the visual display constant over a fixed number of successive trials. Under these conditions, the visual speech cues from lipreading were used. Moreover, the eye-tracking data indicated that visual information was used immediately and even earlier than auditory information. In combination, these data indicate that visual speech cues are not used automatically, but if they are used, they are used immediately.
Gilchrist, Amanda L; Duarte, Audrey; Verhaeghen, Paul
2016-01-01
Research with younger adults has shown that retrospective cues can be used to orient top-down attention toward relevant items in working memory. We examined whether older adults could take advantage of these cues to improve memory performance. Younger and older adults were presented with visual arrays of five colored shapes; during maintenance, participants were presented either with an informative cue based on an object feature (here, object shape or color) that would be probed, or with an uninformative, neutral cue. Although older adults were less accurate overall, both age groups benefited from the presentation of an informative, feature-based cue relative to a neutral cue. Surprisingly, we also observed differences in the effectiveness of shape versus color cues and their effects upon post-cue memory load. These results suggest that older adults can use top-down attention to remove irrelevant items from visual working memory, provided that task-relevant features function as cues.
Chemical and visual communication during mate searching in rock shrimp.
Díaz, Eliecer R; Thiel, Martin
2004-06-01
Mate searching in crustaceans depends on different communicational cues, of which chemical and visual cues are most important. Herein we examined the role of chemical and visual communication during mate searching and assessment in the rock shrimp Rhynchocinetes typus. Adult male rock shrimp experience major ontogenetic changes. The terminal molt stages (named "robustus") are dominant and capable of monopolizing females during the mating process. Previous studies had shown that most females preferably mate with robustus males, but how these dominant males and receptive females find each other is uncertain, and is the question we examined herein. In a Y-maze designed to test for the importance of waterborne chemical cues, we observed that females approached the robustus male significantly more often than the typus male. Robustus males, however, were unable to locate receptive females via chemical signals. Using an experimental set-up that allowed testing for the importance of visual cues, we demonstrated that receptive females do not use visual cues to select robustus males, but robustus males use visual cues to find receptive females. Visual cues used by the robustus males were the tumults created by agitated aggregations of subordinate typus males around the receptive females. These results indicate a strong link between sexual communication and the mating system of rock shrimp in which dominant males monopolize receptive females. We found that females and males use different (sex-specific) communicational cues during mate searching and assessment, and that the sexual communication of rock shrimp is similar to that of the American lobster, where females are first attracted to the dominant males by chemical cues emitted by these males. A brief comparison between these two species shows that female behaviors during sexual communication contribute strongly to the outcome of mate searching and assessment.
Can Short Duration Visual Cues Influence Students' Reasoning and Eye Movements in Physics Problems?
ERIC Educational Resources Information Center
Madsen, Adrian; Rouinfar, Amy; Larson, Adam M.; Loschky, Lester C.; Rebello, N. Sanjay
2013-01-01
We investigate the effects of visual cueing on students' eye movements and reasoning on introductory physics problems with diagrams. Participants in our study were randomly assigned to either the cued or noncued conditions, which differed by whether the participants saw conceptual physics problems overlaid with dynamic visual cues. Students in the…
Sensitivity to Visual Prosodic Cues in Signers and Nonsigners
ERIC Educational Resources Information Center
Brentari, Diane; Gonzalez, Carolina; Seidl, Amanda; Wilbur, Ronnie
2011-01-01
Three studies are presented in this paper that address how nonsigners perceive the visual prosodic cues in a sign language. In Study 1, adult American nonsigners and users of American Sign Language (ASL) were compared on their sensitivity to the visual cues in ASL Intonational Phrases. In Study 2, hearing, nonsigning American infants were tested…
Enhancing Learning from Dynamic and Static Visualizations by Means of Cueing
ERIC Educational Resources Information Center
Kuhl, Tim; Scheiter, Katharina; Gerjets, Peter
2012-01-01
The current study investigated whether learning from dynamic visualizations and from two presentation formats for static visualizations can be enhanced by means of cueing. One hundred and fifty university students were randomly assigned to six conditions resulting from a 2 × 3 design, with cueing (with/without) and type of visualization (dynamic, static-sequential,…
Con-Text: Text Detection for Fine-grained Object Classification.
Karaoglu, Sezer; Tao, Ran; van Gemert, Jan C; Gevers, Theo
2017-05-24
This work focuses on fine-grained object classification using recognized scene text in natural images. While the state of the art relies on visual cues only, this paper is the first to propose combining textual and visual cues. Another novelty is the textual cue extraction: unlike state-of-the-art text detection methods, we focus on the background rather than on text regions. Once text regions are detected, they are processed by two text recognition methods, i.e., the ABBYY commercial OCR engine and a state-of-the-art character recognition algorithm. Then, to encode the textual cues, bi- and trigrams are formed between the recognized characters subject to the proposed spatial pairwise constraints. Finally, the extracted visual and textual cues are combined for fine-grained classification. The proposed method is validated on four publicly available datasets: ICDAR03, ICDAR13, Con-Text, and Flickr-logo. We improve state-of-the-art end-to-end character recognition by a large margin of 15% on ICDAR03, and we show that textual cues are useful in addition to visual cues for fine-grained classification and for logo retrieval. Combining the cues outperforms visual-only and textual-only baselines in fine-grained classification (70.7% vs. 60.3%) and logo retrieval (57.4% vs. 54.8%).
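A hedged sketch of the n-gram encoding idea: recognized characters are paired only when a spatial proximity test passes, then counted into a bag-of-bigrams vector. The box format, distance test, and threshold below are illustrative assumptions, not the paper's exact constraints.

```python
from collections import Counter
from itertools import combinations
import math

# Hypothetical OCR output: (character, box-center x, box-center y, box height).
chars = [("c", 10, 20, 12), ("a", 20, 21, 12), ("f", 31, 20, 13), ("e", 42, 22, 12)]

def spatial_bigrams(chars, max_dist_ratio=1.5):
    """Count character bigrams whose box centers lie closer than
    max_dist_ratio times the pair's mean box height (illustrative rule)."""
    grams = Counter()
    for (c1, x1, y1, h1), (c2, x2, y2, h2) in combinations(chars, 2):
        if math.hypot(x2 - x1, y2 - y1) <= max_dist_ratio * (h1 + h2) / 2:
            grams[c1 + c2] += 1
    return grams

print(spatial_bigrams(chars))  # Counter({'ca': 1, 'af': 1, 'fe': 1})
```

Counts of this kind, concatenated with visual features, can then feed the final fine-grained classifier.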
Location cue validity affects inhibition of return of visual processing.
Wright, R D; Richard, C M
2000-01-01
Inhibition-of-return is the process by which visual search for an object positioned among others is biased toward novel rather than previously inspected items. It is thought to occur automatically and to increase search efficiency. We examined this phenomenon by studying the facilitative and inhibitory effects of location cueing on target-detection response times in a search task. The results indicated that facilitation was a reflexive consequence of cueing whereas inhibition appeared to depend on cue informativeness. More specifically, the inhibition-of-return effect occurred only when the cue provided no information about the impending target's location. We suggest that the results are consistent with the notion of two levels of visual processing. The first involves rapid and reflexive operations that underlie the facilitative effects of location cueing on target detection. The second involves a rapid but goal-driven inhibition procedure that the perceiver can invoke if doing so will enhance visual search performance.
2018-02-12
…usability preference. Results under the second focus showed that the frequency with which participants expected status updates differed depending upon the… Is there a difference in assistance requests for both navigational route and building selection depending on the type of exogenous visual cues displayed? 3) Is there a difference in response time to visual reports for both navigational route and building selection depending on the type of exogenous visual cues displayed? 4)…
Do you see what I hear? Vantage point preference and visual dominance in a time-space synaesthete
Jarick, Michelle; Stewart, Mark T.; Smilek, Daniel; Dixon, Michael J.
2013-01-01
Time-space synaesthetes “see” time units organized in a spatial form. While the structure might be invariant for most synaesthetes, the perspective by which some view their calendar is somewhat flexible. One well-studied synaesthete L adopts different viewpoints for months seen vs. heard. Interestingly, L claims to prefer her auditory perspective, even though the month names are represented visually upside down. To verify this, we used a spatial-cueing task that included audiovisual month cues. These cues were either congruent with L's preferred “auditory” viewpoint (auditory-only and auditory + month inverted) or incongruent (upright visual-only and auditory + month upright). Our prediction was that L would show enhanced cueing effects (larger response time difference between valid and invalid targets) following the audiovisual congruent cues since both elicit the “preferred” auditory perspective. Also, when faced with conflicting cues, we predicted L would choose the preferred auditory perspective over the visual perspective. As we expected, L did show enhanced cueing effects following the audiovisual congruent cues that corresponded with her preferred auditory perspective, but that the visual perspective dominated when L was faced with both viewpoints simultaneously. The results are discussed with relation to the reification hypothesis of sequence space synaesthesia (Eagleman, 2009). PMID:24137140
Maloney, Erin K; Cappella, Joseph N
2016-01-01
Visual depictions of vaping in electronic cigarette advertisements may serve as smoking cues to smokers and former smokers, increasing urge to smoke and smoking behavior, and decreasing self-efficacy, attitudes, and intentions to quit or abstain. After assessing baseline urge to smoke, 301 daily smokers, 272 intermittent smokers, and 311 former smokers were randomly assigned to view three e-cigarette commercials with vaping visuals (the cue condition) or without vaping visuals (the no-cue condition), or to answer unrelated media use questions (the no-ad condition). Participants then answered a posttest questionnaire assessing the outcome variables of interest. Relative to the other conditions, daily smokers in the cue condition reported a greater urge to smoke a tobacco cigarette and a marginally significantly greater incidence of actually smoking a tobacco cigarette during the experiment. Former smokers in the cue condition reported lower intentions to abstain from smoking than former smokers in the other conditions. No significant differences emerged among intermittent smokers across conditions. These data suggest that visual depictions of vaping in e-cigarette commercials increase daily smokers' urge to smoke cigarettes and may lead to more actual smoking behavior. For former smokers, these cues in advertising may undermine abstinence efforts. Intermittent smokers did not appear to be reactive to these cues. The similarity between the no-cue and no-ad conditions, in contrast with the cue condition, suggests that the visual depictions of e-cigarettes and vaping specifically function as smoking cues and that cue reactivity is the mechanism through which these effects were obtained.
First-Pass Processing of Value Cues in the Ventral Visual Pathway.
Sasikumar, Dennis; Emeric, Erik; Stuphorn, Veit; Connor, Charles E
2018-02-19
Real-world value often depends on subtle, continuously variable visual cues specific to particular object categories, like the tailoring of a suit, the condition of an automobile, or the construction of a house. Here, we used microelectrode recording in behaving monkeys to test two possible mechanisms for category-specific value-cue processing: (1) previous findings suggest that prefrontal cortex (PFC) identifies object categories, and based on category identity, PFC could use top-down attentional modulation to enhance visual processing of category-specific value cues, providing signals to PFC for calculating value, and (2) a faster mechanism would be first-pass visual processing of category-specific value cues, immediately providing the necessary visual information to PFC. This, however, would require learned mechanisms for processing the appropriate cues in a given object category. To test these hypotheses, we trained monkeys to discriminate value in four letter-like stimulus categories. Each category had a different, continuously variable shape cue that signified value (liquid reward amount) as well as other cues that were irrelevant. Monkeys chose between stimuli of different reward values. Consistent with the first-pass hypothesis, we found early signals for category-specific value cues in area TE (the final stage in the monkey ventral visual pathway) beginning 81 ms after stimulus onset, essentially at the start of TE responses. Task-related activity emerged in lateral PFC approximately 40 ms later and consisted mainly of category-invariant value tuning. Our results show that, for familiar, behaviorally relevant object categories, high-level ventral pathway cortex can implement rapid, first-pass processing of category-specific value cues. Copyright © 2018 Elsevier Ltd. All rights reserved.
Sight or Scent: Lemur Sensory Reliance in Detecting Food Quality Varies with Feeding Ecology
Rushmore, Julie; Leonhardt, Sara D.; Drea, Christine M.
2012-01-01
Visual and olfactory cues provide important information to foragers, yet we know little about species differences in sensory reliance during food selection. In a series of experimental foraging studies, we examined the relative reliance on vision versus olfaction in three diurnal, primate species with diverse feeding ecologies, including folivorous Coquerel's sifakas (Propithecus coquereli), frugivorous ruffed lemurs (Varecia variegata spp), and generalist ring-tailed lemurs (Lemur catta). We used animals with known color-vision status and foods for which different maturation stages (and hence quality) produce distinct visual and olfactory cues (the latter determined chemically). We first showed that lemurs preferentially selected high-quality foods over low-quality foods when visual and olfactory cues were simultaneously available for both food types. Next, using a novel apparatus in a series of discrimination trials, we either manipulated food quality (while holding sensory cues constant) or manipulated sensory cues (while holding food quality constant). Among our study subjects that showed relatively strong preferences for high-quality foods, folivores required both sensory cues combined to reliably identify their preferred foods, whereas generalists could identify their preferred foods using either cue alone, and frugivores could identify their preferred foods using olfactory, but not visual, cues alone. Moreover, when only high-quality foods were available, folivores and generalists used visual rather than olfactory cues to select food, whereas frugivores used both cue types equally. Lastly, individuals in all three of the study species predominantly relied on sight when choosing between low-quality foods, but species differed in the strength of their sensory biases. Our results generally emphasize visual over olfactory reliance in foraging lemurs, but we suggest that the relative sensory reliance of animals may vary with their feeding ecology. PMID:22870229
The time course of protecting a visual memory representation from perceptual interference
van Moorselaar, Dirk; Gunseli, Eren; Theeuwes, Jan; Olivers, Christian N. L.
2015-01-01
Cueing a remembered item during the delay of a visual memory task leads to enhanced recall of the cued item compared to when an item is not cued. This cueing benefit has been proposed to reflect attention within visual memory being shifted from a distributed mode to a focused mode, thus protecting the cued item against perceptual interference. Here we investigated the dynamics of building up this mnemonic protection against visual interference by systematically varying the stimulus onset asynchrony (SOA) between cue onset and a subsequent visual mask in an orientation memory task. Experiment 1 showed that a cue counteracted the deteriorating effect of pattern masks. Experiment 2 demonstrated that building up this protection is a continuous process that is completed in approximately half a second after cue onset. The similarities between shifting attention in perceptual and remembered space are discussed. PMID:25628555
Context-dependent arm pointing adaptation
NASA Technical Reports Server (NTRS)
Seidler, R. D.; Bloomberg, J. J.; Stelmach, G. E.
2001-01-01
We sought to determine the effectiveness of head posture as a contextual cue to facilitate adaptive transitions in manual control during visuomotor distortions. Subjects performed arm pointing movements by drawing on a digitizing tablet, with targets and movement trajectories displayed in real time on a computer monitor. Adaptation was induced by presenting the trajectories in an altered gain format on the monitor. The subjects were shown visual displays of their movements that corresponded to either 0.5 or 1.5 scaling of the movements made. Subjects were assigned to three groups: the head orientation group tilted the head towards the right shoulder when drawing under a 0.5 gain of display and towards the left shoulder when drawing under a 1.5 gain of display; the target orientation group had the home and target positions rotated counterclockwise when drawing under the 0.5 gain and clockwise for the 1.5 gain; the arm posture group changed the elbow angle of the arm they were not drawing with from full flexion to full extension with 0.5 and 1.5 gain display changes. To determine if contextual cues were associated with display alternations, the gain changes were returned to the standard (1.0) display. Aftereffects were assessed to determine the efficacy of the head orientation contextual cue compared to the two control cues. The head orientation cue was effectively associated with the multiple gains. The target orientation cue also demonstrated some effectiveness while the arm posture cue did not. The results demonstrate that contextual cues can be used to switch between multiple adaptive states. These data provide support for the idea that static head orientation information is a crucial component to the arm adaptation process. These data further define the functional linkage between head posture and arm pointing movements.
Multimodal Floral Signals and Moth Foraging Decisions
Riffell, Jeffrey A.; Alarcón, Ruben
2013-01-01
Background: Combinations of floral traits – which operate as attractive signals to pollinators – act on multiple sensory modalities. For Manduca sexta hawkmoths, how learning modifies foraging decisions in response to those traits remains untested, and the contribution of visual and olfactory floral displays to behavior remains unclear. Methodology/Principal Findings: Using M. sexta and the floral traits of two important nectar resources in the southwestern USA, Datura wrightii and Agave palmeri, we examined the relative importance of olfactory and visual signals. Natural visual and olfactory cues from D. wrightii and A. palmeri flowers permit testing the cues at their native intensities and composition – a contrast to many studies that have used artificial stimuli (essential oils, single odorants) that are less ecologically relevant. Results from a series of two-choice assays in which the olfactory and visual floral displays were manipulated showed that naïve hawkmoths preferred flowers displaying both olfactory and visual cues. Furthermore, experiments using A. palmeri flowers – a species that is not very attractive to hawkmoths – showed that the visual and olfactory displays did not have synergistic effects. The combination of the olfactory and visual displays of D. wrightii – a flower that is highly attractive to naïve hawkmoths – did, however, influence the time moths spent feeding from the flowers. The importance of the olfactory and visual signals was further demonstrated in learning experiments in which experienced moths, when exposed to uncoupled floral displays, ultimately chose flowers based on the previously experienced olfactory, and not visual, signals. These moths, however, had significantly longer decision times than moths exposed to coupled floral displays. Conclusions/Significance: These results highlight the importance of specific sensory modalities for foraging hawkmoths while also suggesting that they learn floral displays as combinatorial signals and use the integrated floral traits from their memory traces to mediate future foraging decisions. PMID:23991154
Accessing long-term memory representations during visual change detection.
Beck, Melissa R; van Lamsweerde, Amanda E
2011-04-01
In visual change detection tasks, providing a cue to the change location concurrent with the test image (post-cue) can improve performance, suggesting that, without a cue, not all encoded representations are automatically accessed. Our studies examined the possibility that post-cues can encourage the retrieval of representations stored in long-term memory (LTM). Participants detected changes in images composed of familiar objects. Performance was better when the cue directed attention to the post-change object. Supporting the role of LTM in the cue effect, the effect was similar regardless of whether the cue was presented during the inter-stimulus interval, concurrent with the onset of the test image, or after the onset of the test image. Furthermore, the post-cue effect and LTM performance were similarly influenced by encoding time. These findings demonstrate that monitoring the visual world for changes does not automatically engage LTM retrieval.
Role of Self-Generated Odor Cues in Contextual Representation
Aikath, Devdeep; Weible, Aldis P; Rowland, David C; Kentros, Clifford G
2014-01-01
As first demonstrated in the patient H.M., the hippocampus is critically involved in forming episodic memories, the recall of “what” happened “where” and “when.” In rodents, the clearest functional correlate of hippocampal primary neurons is the place field: a cell fires predominantly when the animal is in a specific part of the environment, typically defined relative to the available visuospatial cues. However, rodents have relatively poor visual acuity. Furthermore, they are highly adept at navigating in total darkness. This raises the question of how other sensory modalities might contribute to a hippocampal representation of an environment. Rodents have a highly developed olfactory system, suggesting that cues such as odor trails may be important. To test this, we familiarized mice to a visually cued environment over a number of days while maintaining odor cues. During familiarization, self-generated odor cues unique to each animal were collected by re-using absorbent paperboard flooring from one session to the next. Visual and odor cues were then put in conflict by counter-rotating the recording arena and the flooring. Perhaps surprisingly, place fields seemed to follow the visual cue rotation exclusively, raising the question of whether olfactory cues have any influence at all on a hippocampal spatial representation. However, subsequent removal of the familiar, self-generated odor cues severely disrupted both long-term stability and rotation to visual cues in a novel environment. Our data suggest that odor cues, in the absence of additional rule learning, do not provide a discriminative spatial signal that anchors place fields. Such cues do, however, become integral to the context over time and exert a powerful influence on the stability of its hippocampal representation. © 2014 The Authors. Hippocampus Published by Wiley Periodicals, Inc. PMID:24753119
Low-level visual attention and its relation to joint attention in autism spectrum disorder.
Jaworski, Jessica L Bean; Eigsti, Inge-Marie
2017-04-01
Visual attention is integral to social interaction and is a critical building block for development in other domains (e.g., language). Furthermore, atypical attention (especially joint attention) is one of the earliest markers of autism spectrum disorder (ASD). The current study assesses low-level visual attention and its relation to social attentional processing in youth with ASD and typically developing (TD) youth, aged 7 to 18 years. The findings indicate difficulty overriding incorrect attentional cues in ASD, particularly with non-social (arrow) cues relative to social (face) cues. The findings also show reduced competition in ASD from cues that remain on-screen. Furthermore, social attention, autism severity, and age were all predictors of competing cue processing. The results suggest that individuals with ASD may be biased towards speeded rather than accurate responding, and further, that reduced engagement with visual information may impede responses to visual attentional cues. Once attention is engaged, individuals with ASD appear to interpret directional cues as meaningful. These findings from a controlled, experimental paradigm were mirrored in results from an ecologically valid measure of social attention. Attentional difficulties may be exacerbated during the complex and dynamic experience of actual social interaction. Implications for intervention are discussed.
Phasic alertness cues modulate visual processing speed in healthy aging.
Haupt, Marleen; Sorg, Christian; Napiórkowski, Natan; Finke, Kathrin
2018-05-31
Warning signals temporarily increase the rate of visual information uptake in younger participants and thus optimize perception in critical situations. It is unclear whether such important preparatory processes are preserved in healthy aging. We parametrically assessed the effects of auditory alertness cues on visual processing speed, and their time course, using a whole report paradigm based on the computational Theory of Visual Attention. We replicated prior findings of significant alerting benefits in younger adults. In conditions with short cue-target onset asynchronies, this effect was baseline-dependent: younger participants with high baseline speed did not profit, implying an inverted U-shaped relationship between phasic alerting and visual processing speed. Older adults also showed a significant cue-induced benefit. Bayesian analyses indicated that the cueing benefit on visual processing speed was comparably strong across age groups. Our results indicate that in aging individuals, as in younger ones, perception is active, and increased expectancy of the appearance of a relevant stimulus can increase the rate of visual information uptake. Copyright © 2018 Elsevier Inc. All rights reserved.
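For context on the modeling mentioned above: TVA-based whole-report analyses typically fit report accuracy as an exponential function of effective exposure duration and read the processing-speed parameter off the fitted rate. Below is a minimal single-item sketch with illustrative parameter values; the full model also handles VSTM capacity and multiple competing items.

```python
import math

def p_encoded(t_ms, v_per_s, t0_ms=15.0):
    """TVA-style probability that a single item is encoded by exposure
    duration t (ms), given processing rate v (items/s) and a perceptual
    threshold t0 (ms); capacity limits are ignored for simplicity."""
    if t_ms <= t0_ms:
        return 0.0
    return 1.0 - math.exp(-v_per_s * (t_ms - t0_ms) / 1000.0)

# An alerting cue can be modeled as raising v: accuracy at a brief,
# fixed exposure improves. The rates here are hypothetical.
for v in (20.0, 28.0):  # no-cue vs. cued processing rate
    print(f"v = {v:.0f}/s -> P(report) = {p_encoded(50.0, v):.3f}")
```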
The Effects of Spatial Endogenous Pre-cueing across Eccentricities
Feng, Jing; Spence, Ian
2017-01-01
Frequently, we use expectations about likely locations of a target to guide the allocation of our attention. Despite the importance of this attentional process in everyday tasks, pre-cueing effects on attention, particularly endogenous pre-cueing effects, have been relatively little explored outside an eccentricity of 20°. Given that the visual field has functional subdivisions and that attentional processes can differ significantly among the foveal, perifoveal, and more peripheral areas, it remains unclear how endogenous pre-cues that carry spatial information about targets influence the allocation of attention across a large visual field (especially in the more peripheral areas). We present two experiments examining how the expectation of the location of the target shapes the distribution of attention across eccentricities in the visual field. We measured participants' ability to pick out a target among distractors in the visual field after the presentation of a highly valid cue indicating the size of the area in which the target was likely to occur, or the likely direction of the target (left or right side of the display). Our first experiment showed that participants had a higher target detection rate with faster responses, particularly at eccentricities of 20° and 30°. There was also a marginal advantage of pre-cueing effects when trials of the same size cue were blocked compared to when trials were mixed. Experiment 2 demonstrated a higher target detection rate when the target occurred in the cued direction. This pre-cueing effect was greater at larger eccentricities and with a longer cue-target interval. Our findings on the endogenous pre-cueing effects across a large visual area are summarized using a simple model to assist in conceptualizing the modifications of the distribution of attention over the visual field. We discuss our findings in light of cognitive penetration of perception, and highlight the importance of examining attentional processes across a large area of the visual field. PMID:28638353
Selective Memories: Infants' Encoding Is Enhanced in Selection via Suppression
ERIC Educational Resources Information Center
Markant, Julie; Amso, Dima
2013-01-01
The present study examined the hypothesis that inhibitory visual selection mechanisms play a vital role in memory by limiting distractor interference during item encoding. In Experiment 1a we used a modified spatial cueing task in which 9-month-old infants encoded multiple category exemplars in the contexts of an attention orienting mechanism…
Haptic Cues Used for Outdoor Wayfinding by Individuals with Visual Impairments
ERIC Educational Resources Information Center
Koutsoklenis, Athanasios; Papadopoulos, Konstantinos
2014-01-01
Introduction: The study presented here examines which haptic cues individuals with visual impairments use more frequently and determines which of these cues are deemed by these individuals to be the most important for wayfinding in urban environments. It also investigates the ways in which these haptic cues are used by individuals with visual…
Visual/motion cue mismatch in a coordinated roll maneuver
NASA Technical Reports Server (NTRS)
Shirachi, D. K.; Shirley, R. S.
1981-01-01
The effects of bandwidth differences between visual and motion cueing systems on pilot performance for a coordinated roll task were investigated. Visual and motion cue configurations which were acceptable and the effects of reduced motion cue scaling on pilot performance were studied to determine the scale reduction threshold for which pilot performance was significantly different from full scale pilot performance. It is concluded that: (1) the presence or absence of high frequency error information in the visual and/or motion display systems significantly affects pilot performance; and (2) the attenuation of motion scaling while maintaining other display dynamic characteristics constant, affects pilot performance.
Modulation of Neuronal Responses by Exogenous Attention in Macaque Primary Visual Cortex.
Wang, Feng; Chen, Minggui; Yan, Yin; Zhaoping, Li; Li, Wu
2015-09-30
Visual perception is influenced by attention deployed voluntarily or triggered involuntarily by salient stimuli. Modulation of visual cortical processing by voluntary or endogenous attention has been extensively studied, but much less is known about how involuntary or exogenous attention affects responses of visual cortical neurons. Using implanted microelectrode arrays, we examined the effects of exogenous attention on neuronal responses in the primary visual cortex (V1) of awake monkeys. A bright annular cue was flashed either around the receptive fields of recorded neurons or in the opposite visual field to capture attention. A subsequent grating stimulus probed the cue-induced effects. In a fixation task, when the cue-to-probe stimulus onset asynchrony (SOA) was <240 ms, the cue induced a transient increase of neuronal responses to the probe at the cued location during 40-100 ms after the onset of neuronal responses to the probe. This facilitation diminished and disappeared after repeated presentations of the same cue but recurred for a new cue of a different color. In another task to detect the probe, relative shortening of monkey's reaction times for the validly cued probe depended on the SOA in a way similar to the cue-induced V1 facilitation, and the behavioral and physiological cueing effects remained after repeated practice. Flashing two cues simultaneously in the two opposite visual fields weakened or diminished both the physiological and behavioral cueing effects. Our findings indicate that exogenous attention significantly modulates V1 responses and that the modulation strength depends on both novelty and task relevance of the stimulus. Significance statement: Visual attention can be involuntarily captured by a sudden appearance of a conspicuous object, allowing rapid reactions to unexpected events of significance. The current study discovered a correlate of this effect in monkey primary visual cortex. An abrupt, salient, flash enhanced neuronal responses, and shortened the animal's reaction time, to a subsequent visual probe stimulus at the same location. However, the enhancement of the neural responses diminished after repeated exposures to this flash if the animal was not required to react to the probe. Moreover, a second, simultaneous, flash at another location weakened the neuronal and behavioral effects of the first one. These findings revealed, beyond the observations reported so far, the effects of exogenous attention in the brain. Copyright © 2015 the authors 0270-6474/15/3513419-11$15.00/0.
Audio-visual speech cue combination.
Arnold, Derek H; Tear, Morgan; Schindel, Ryan; Roseboom, Warrick
2010-04-16
Different sources of sensory information can interact, often shaping what we think we have seen or heard. This can enhance the precision of perceptual decisions relative to those made on the basis of a single source of information. From a computational perspective, there are multiple reasons why this might happen, and each predicts a different degree of enhanced precision. Relatively slight improvements can arise when perceptual decisions are made on the basis of multiple independent sensory estimates, as opposed to just one. These improvements can arise as a consequence of probability summation. Greater improvements can occur if two initially independent estimates are summated to form a single integrated code, especially if the summation is weighted in accordance with the variance associated with each independent estimate. This form of combination is often described as a Bayesian maximum likelihood estimate. Still greater improvements are possible if the two sources of information are encoded via a common physiological process. Here we show that the provision of simultaneous audio and visual speech cues can result in substantial sensitivity improvements, relative to single sensory modality based decisions. The magnitude of the improvements is greater than can be predicted on the basis of either a Bayesian maximum likelihood estimate or a probability summation. Our data suggest that primary estimates of speech content are determined by a physiological process that takes input from both visual and auditory processing, resulting in greater sensitivity than would be possible if initially independent audio and visual estimates were formed and then subsequently combined.
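The Bayesian maximum likelihood benchmark invoked here has a standard closed form: for independent cues with variances σ_A² and σ_V², inverse-variance weighting gives a combined variance σ_AV² = σ_A²σ_V² / (σ_A² + σ_V²). A minimal sketch with hypothetical single-cue noise levels shows the bound the reported data exceeded.

```python
def mle_combined_sd(sd_a, sd_v):
    """Standard deviation of the inverse-variance-weighted (MLE) combination
    of two independent cues; never worse than the more reliable cue alone."""
    combined_var = (sd_a**2 * sd_v**2) / (sd_a**2 + sd_v**2)
    return combined_var**0.5

# Hypothetical single-cue discrimination thresholds (arbitrary units).
sd_audio, sd_visual = 1.0, 1.5
print(f"MLE-combined SD: {mle_combined_sd(sd_audio, sd_visual):.2f}")  # ~0.83
# Sensitivity gains beyond this bound, as reported above, point to a common
# encoding stage rather than combination of initially independent estimates.
```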
Visual selective attention in amnestic mild cognitive impairment.
McLaughlin, Paula M; Anderson, Nicole D; Rich, Jill B; Chertkow, Howard; Murtha, Susan J E
2014-11-01
Subtle deficits in visual selective attention have been found in amnestic mild cognitive impairment (aMCI). However, few studies have explored performance on visual search paradigms or the Simon task, which are known to be sensitive to disease severity in Alzheimer's patients. Furthermore, there is limited research investigating how deficiencies can be ameliorated with exogenous support (auditory cues). Sixteen individuals with aMCI and 14 control participants completed 3 experimental tasks that varied in demand and cue availability: visual search-alerting, visual search-orienting, and Simon task. Visual selective attention was influenced by aMCI, auditory cues, and task characteristics. Visual search abilities were relatively consistent across groups. The aMCI participants were impaired on the Simon task when working memory was required, but conflict resolution was similar to controls. Spatially informative orienting cues improved response times, whereas spatially neutral alerting cues did not influence performance. Finally, spatially informative auditory cues benefited the aMCI group more than controls in the visual search task, specifically at the largest array size where orienting demands were greatest. These findings suggest that individuals with aMCI have working memory deficits and subtle deficiencies in orienting attention and rely on exogenous information to guide attention. © The Author 2013. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved.
Neural Mechanism for Mirrored Self-face Recognition.
Sugiura, Motoaki; Miyauchi, Carlos Makoto; Kotozaki, Yuka; Akimoto, Yoritaka; Nozawa, Takayuki; Yomogida, Yukihito; Hanawa, Sugiko; Yamamoto, Yuki; Sakuma, Atsushi; Nakagawa, Seishu; Kawashima, Ryuta
2015-09-01
Self-face recognition in the mirror is considered to involve multiple processes that integrate 2 perceptual cues: temporal contingency of the visual feedback on one's action (contingency cue) and matching with self-face representation in long-term memory (figurative cue). The aim of this study was to examine the neural bases of these processes by manipulating 2 perceptual cues using a "virtual mirror" system. This system allowed online dynamic presentations of real-time and delayed self- or other facial actions. Perception-level processes were identified as responses to only a single perceptual cue. The effect of the contingency cue was identified in the cuneus. The regions sensitive to the figurative cue were subdivided by the response to a static self-face, which was identified in the right temporal, parietal, and frontal regions, but not in the bilateral occipitoparietal regions. Semantic- or integration-level processes, including amodal self-representation and belief validation, which allow modality-independent self-recognition and the resolution of potential conflicts between perceptual cues, respectively, were identified in distinct regions in the right frontal and insular cortices. The results are supportive of the multicomponent notion of self-recognition and suggest a critical role for contingency detection in the co-emergence of self-recognition and empathy in infants. © The Author 2014. Published by Oxford University Press.
Preschoolers Benefit From Visually Salient Speech Cues
Lalonde, Kaylah; Holt, Rachael Frush
2015-01-01
Purpose: This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that vary in perceptual difficulty and task demands. They also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. Method: Twelve adults and 27 typically developing 3- and 4-year-old children completed 3 audiovisual (AV) speech integration tasks: matching, discrimination, and recognition. The authors compared AV benefit for visually salient and less visually salient speech discrimination contrasts and assessed the visual saliency of consonant confusions in auditory-only and AV word recognition. Results: Four-year-olds and adults demonstrated visual influence on all measures. Three-year-olds demonstrated visual influence on speech discrimination and recognition measures. All groups demonstrated greater AV benefit for the visually salient discrimination contrasts. AV recognition benefit in 4-year-olds and adults depended on the visual saliency of speech sounds. Conclusions: Preschoolers can demonstrate AV speech integration. Their AV benefit results from efficient use of visually salient speech cues. Four-year-olds, but not 3-year-olds, used visual phonological knowledge to take advantage of visually salient speech cues, suggesting possible developmental differences in the mechanisms of AV benefit. PMID:25322336
Kim, HyungGoo R.; Pitkow, Xaq; Angelaki, Dora E.
2016-01-01
Sensory input reflects events that occur in the environment, but multiple events may be confounded in sensory signals. For example, under many natural viewing conditions, retinal image motion reflects some combination of self-motion and movement of objects in the world. To estimate one stimulus event and ignore others, the brain can perform marginalization operations, but the neural bases of these operations are poorly understood. Using computational modeling, we examine how multisensory signals may be processed to estimate the direction of self-motion (i.e., heading) and to marginalize out effects of object motion. Multisensory neurons represent heading based on both visual and vestibular inputs and come in two basic types: “congruent” and “opposite” cells. Congruent cells have matched heading tuning for visual and vestibular cues and have been linked to perceptual benefits of cue integration during heading discrimination. Opposite cells have mismatched visual and vestibular heading preferences and are ill-suited for cue integration. We show that decoding a mixed population of congruent and opposite cells substantially reduces errors in heading estimation caused by object motion. In addition, we present a general formulation of an optimal linear decoding scheme that approximates marginalization and can be implemented biologically by simple reinforcement learning mechanisms. We also show that neural response correlations induced by task-irrelevant variables may greatly exceed intrinsic noise correlations. Overall, our findings suggest a general computational strategy by which neurons with mismatched tuning for two different sensory cues may be decoded to perform marginalization operations that dissociate possible causes of sensory inputs. PMID:27334948
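As a toy illustration of the marginalization idea (our simplification, not the authors' model): suppose the vestibular drive carries heading only, while the visual drive carries heading plus object motion. Cells that sum the two drives ("congruent") and cells that difference them ("opposite") can then be combined linearly so that the object-motion term cancels.

    import numpy as np

    rng = np.random.default_rng(0)
    heading, obj = 10.0, 5.0  # deg; toy self-motion and object motion

    # Toy linearized responses across 1000 simulated trials.
    vestibular = heading + rng.normal(0.0, 1.0, 1000)
    visual = heading + obj + rng.normal(0.0, 1.0, 1000)  # contaminated
    r_congruent = vestibular + visual
    r_opposite = vestibular - visual

    # Linear readout: summing the two cell types cancels the visual
    # term, and with it the object-motion contamination.
    heading_hat = (r_congruent + r_opposite) / 2.0
    print(f"decoded heading: {heading_hat.mean():.2f} (true: {heading})")

The study itself derives optimal linear weights for a heterogeneous population rather than this fixed half-sum, but the cancellation logic is the same.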
What is the context of contextual cueing?
Makovski, Tal
2016-12-01
People have a powerful ability to extract regularities from noisy environments and to utilize this knowledge to assist in visual search. Extensive research has shown that this ability, termed contextual cueing (CC), is robust and ubiquitous, but it is still unclear exactly what context is being learned. Researchers have typically focused on how people learn spatial configuration regularities and have hence used simplified, meaningless search stimuli. Here, observers performed visual search tasks using images of real-world objects. The results revealed that, contrary to past findings, the repetition of either arbitrary spatial information or identity information alone was not sufficient to produce context learning. Instead, learning was found only when both types of information were repeated together. These results were further replicated in hybrid search tasks, in which subjects looked for multiple target templates. Together, these data suggest that CC is more limited than typically assumed, yet this learning is highly robust.
Nackaerts, Evelien; Nieuwboer, Alice; Broeder, Sanne; Smits-Engelsman, Bouwien C M; Swinnen, Stephan P; Vandenberghe, Wim; Heremans, Elke
2016-06-01
Handwriting is often impaired in Parkinson's disease (PD). Several studies have shown that writing in PD benefits from the use of cues. However, this has typically been studied at writing and drawing sizes that are not used in daily life. This study examines the effect of visual cueing on a prewriting task at small amplitudes (≤1.0 cm) in PD patients and healthy controls, to better understand how cueing acts on writing. A total of 15 PD patients and 15 healthy, age-matched controls performed a prewriting task at 0.6 cm and 1.0 cm in the presence and absence of visual cues (target lines). Writing amplitude, variability of amplitude, and speed were chosen as dependent variables, measured using a newly developed touch-sensitive tablet. Cueing led to immediate improvements in writing size, variability of writing size, and speed in both groups in the 1.0 cm condition. However, when writing at 0.6 cm with cues, a decrease in writing size was apparent in both groups (P < .001) and the difference in variability of amplitude between cued and uncued writing disappeared. In addition, the writing speed of controls decreased when the cue was present. Visual target lines spaced at 1.0 cm thus improved the writing of sequential loops, whereas lines spaced at 0.6 cm did not. These results illustrate that, unlike for gait, visual cueing for fine-motor tasks requires a differentiated approach, one that takes into account the increased accuracy constraints that cueing itself can impose. © The Author(s) 2015.
Janssen, Sabine; Bolte, Benjamin; Nonnekes, Jorik; Bittner, Marian; Bloem, Bastiaan R; Heida, Tjitske; Zhao, Yan; van Wezel, Richard J A
2017-01-01
External cueing is a potentially effective strategy to reduce freezing of gait (FOG) in persons with Parkinson's disease (PD). Case reports suggest that three-dimensional (3D) cues might be more effective in reducing FOG than two-dimensional cues. We investigated the usability of 3D augmented reality visual cues delivered by smart glasses, in comparison to conventional 3D transverse bars on the floor and auditory cueing via a metronome, in reducing FOG and improving gait parameters. In laboratory experiments, 25 persons with PD and FOG performed walking tasks while wearing custom-made smart glasses under five conditions, during the end-of-dose period. For two conditions, augmented visual cues (bars/staircase) were displayed via the smart glasses. The control conditions involved conventional 3D transverse bars on the floor, auditory cueing via a metronome, and no cueing. The number of FOG episodes and the percentage of time spent frozen were rated from video recordings. The stride length and its variability, cycle time and its variability, cadence, and speed were calculated from motion data collected with a motion capture suit equipped with 17 inertial measurement units. A total of 300 FOG episodes occurred in 19 out of 25 participants. There were no statistically significant differences in the number of FOG episodes or the percentage of time spent frozen across the five conditions. The conventional bars increased stride length, cycle time, and stride length variability, while decreasing cadence and speed. No effects were found for the other conditions. Participants preferred the metronome most and the augmented staircase least. They suggested improving the comfort, esthetics, usability, field of view, and stability of the smart glasses on the head, and reducing their weight and size. In their current form, augmented visual cues delivered by smart glasses are not beneficial for persons with PD and FOG. This could be attributable to distraction, blockage of visual feedback, insufficient familiarization with the smart glasses, or display of the visual cues in the central rather than the peripheral visual field. Future smart glasses will need to be more lightweight, comfortable, and user friendly to avoid distraction and blockage of sensory feedback, thus increasing usability.
ERIC Educational Resources Information Center
Campbell, Emily; Cuba, Melissa
2015-01-01
The goal of this action research is to increase student awareness of context clues, with an emphasis on student use of visual cues in making predictions. Visual cues in the classroom were used to differentiate according to the needs of student demographics (Herrera, Perez, & Escamilla, 2010). The purpose of this intervention was to improve…
The Effects of Various Fidelity Factors on Simulated Helicopter Hover
1981-01-01
[Fragmentary index and text excerpts from the report: VISUAL DISPLAY; AUDITORY CUES; SHIP MOTION MODEL; the evaluation of visual, auditory, and motion cues for helicopter simulation (Parrish, Houck, and Martin, 1977); tilt cues should be supplied subliminally, so a forward/aft translation must be used to cue the onset of acceleration.]
Learning from Instructional Animations: How Does Prior Knowledge Mediate the Effect of Visual Cues?
ERIC Educational Resources Information Center
Arslan-Ari, I.
2018-01-01
The purpose of this study was to investigate the effects of cueing and prior knowledge on learning and mental effort of students studying an animation with narration. This study employed a 2 (no cueing vs. visual cueing) × 2 (low vs. high prior knowledge) between-subjects factorial design. The results revealed a significant interaction effect…
Anemonefishes rely on visual and chemical cues to correctly identify conspecifics
NASA Astrophysics Data System (ADS)
Johnston, Nicole K.; Dixson, Danielle L.
2017-09-01
Organisms rely on sensory cues to interpret their environment and make important life-history decisions. Accurate recognition is of particular importance in diverse reef environments. Most evidence on the use of sensory cues focuses on those used in predator avoidance or habitat recognition, with little information on their role in conspecific recognition. Yet conspecific recognition is essential for life-history decisions including settlement, mate choice, and dominance interactions. Using a sensory manipulated tank and a two-chamber choice flume, anemonefish conspecific response was measured in the presence and absence of chemical and/or visual cues. Experiments were then repeated in the presence or absence of two heterospecific species to evaluate whether a heterospecific fish altered the conspecific response. Anemonefishes responded to both the visual and chemical cues of conspecifics, but relied on the combination of the two cues to recognize conspecifics inside the sensory manipulated tank. These results contrast previous studies focusing on predator detection where anemonefishes were found to compensate for the loss of one sensory cue (chemical) by utilizing a second cue (visual). This lack of sensory compensation may impact the ability of anemonefishes to acclimate to changing reef environments in the future.
ten Oever, Sanne; Sack, Alexander T.; Wheat, Katherine L.; Bien, Nina; van Atteveldt, Nienke
2013-01-01
Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesized that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, which seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception. PMID:23805110
Laserlight cues for gait freezing in Parkinson's disease: an open-label study.
Donovan, S; Lim, C; Diaz, N; Browner, N; Rose, P; Sudarsky, L R; Tarsy, D; Fahn, S; Simon, D K
2011-05-01
Freezing of gait (FOG) and falls are major sources of disability for Parkinson's disease (PD) patients, and show limited responsiveness to medications. We assessed the efficacy of visual cues for overcoming FOG in an open-label study of 26 patients with PD. The change in the frequency of falls was a secondary outcome measure. Subjects underwent a 1-2 month baseline period of use of a cane or walker without visual cues, followed by 1 month using the same device with the laserlight visual cue. The laserlight visual cue was associated with a modest but significant mean reduction in FOG Questionnaire (FOGQ) scores of 1.25 ± 0.48 (p = 0.0152, two-tailed paired t-test), representing a 6.6% improvement compared to the mean baseline FOGQ scores of 18.8. The mean reduction in fall frequency was 39.5 ± 9.3% with the laserlight visual cue among subjects experiencing at least one fall during the baseline and subsequent study periods (p = 0.002; two-tailed one-sample t-test with hypothesized mean of 0). Though some individual subjects may have benefited, the overall mean performance on the timed gait test (TGT) across all subjects did not significantly change. However, among the 4 subjects who underwent repeated testing of the TGT, one showed a 50% mean improvement in TGT performance with the laserlight visual cue (p = 0.005; two-tailed paired t-test). This open-label study provides evidence for modest efficacy of a laserlight visual cue in overcoming FOG and reducing falls in PD patients. Copyright © 2010 Elsevier Ltd. All rights reserved.
Vernetti, Angélina; Smith, Tim J; Senju, Atsushi
2017-03-15
While numerous studies have demonstrated that infants and adults preferentially orient to social stimuli, it remains unclear what drives such preferential orienting. It has been suggested that the learned association between social cues and subsequent reward delivery might shape such social orienting. Using a novel, spontaneous indication of reinforcement learning (a gaze-contingent reward-learning task), we investigated whether children's and adults' orienting towards social and non-social visual cues can be elicited by the association between participants' visual attention and a rewarding outcome. Critically, we assessed whether the engaging nature of the social cues influences the process of reinforcement learning. Both children and adults learned to orient more often to the visual cues associated with reward delivery, demonstrating that cue-reward association reinforced visual orienting. More importantly, when the reward-predictive cue was social and engaging, both children and adults learned the cue-reward association faster and more efficiently than when the reward-predictive cue was social but non-engaging. These new findings indicate that socially engaging cues have a positive incentive value. This could possibly be because they usually coincide with positive outcomes in real life, which could partly drive the development of social orienting. © 2017 The Authors.
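A minimal formalization of the reported learning effect is a delta-rule value update in which engaging cues enjoy a higher effective learning rate; this is a hedged reading of "faster and more efficiently", not the authors' fitted model.

    # Rescorla-Wagner style update: value moves toward the obtained reward.
    def delta_rule(value, reward, alpha):
        return value + alpha * (reward - value)

    v_engaging, v_nonengaging = 0.0, 0.0
    for _ in range(10):  # ten rewarded orienting trials per cue type
        v_engaging = delta_rule(v_engaging, reward=1.0, alpha=0.30)
        v_nonengaging = delta_rule(v_nonengaging, reward=1.0, alpha=0.15)
    print(v_engaging, v_nonengaging)  # engaging cue value rises faster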
Contextual cueing impairment in patients with age-related macular degeneration.
Geringswald, Franziska; Herbik, Anne; Hoffmann, Michael B; Pollmann, Stefan
2013-09-12
Visual attention can be guided by past experience of regularities in our visual environment. In the contextual cueing paradigm, incidental learning of repeated distractor configurations speeds up search times compared to random search arrays. Concomitantly, fewer fixations and more direct scan paths indicate more efficient visual exploration in repeated search arrays. In previous work, we found that simulating a central scotoma in healthy observers eliminated this search facilitation. Here, we investigated contextual cueing in patients with age-related macular degeneration (AMD) who suffer from impaired foveal vision. AMD patients performed visual search using only their more severely impaired eye (n = 13) as well as under binocular viewing (n = 16). Normal-sighted controls developed a significant contextual cueing effect. In comparison, patients showed only a small nonsignificant advantage for repeated displays when searching with their worse eye. When searching binocularly, they profited from contextual cues, but still less than controls. Number of fixations and scan pattern ratios showed a comparable pattern as search times. Moreover, contextual cueing was significantly correlated with acuity in monocular search. Thus, foveal vision loss may lead to impaired guidance of attention by contextual memory cues.
Using multisensory cues to facilitate air traffic management.
Ngo, Mary K; Pierce, Russell S; Spence, Charles
2012-12-01
In the present study, we sought to investigate whether auditory and tactile cuing could be used to facilitate a complex, real-world air traffic management scenario. Auditory and tactile cuing provides an effective means of improving both the speed and accuracy of participants' performance in a variety of laboratory-based visual target detection and identification tasks. A low-fidelity air traffic simulation task was used in which participants monitored and controlled aircraft. The participants had to ensure that the aircraft landed or exited at the correct altitude, speed, and direction and that they maintained a safe separation from all other aircraft and boundaries. The performance measures recorded included en route time, handoff delay, and conflict resolution delay (the performance measure of interest). In a baseline condition, the aircraft in conflict was highlighted in red (visual cue), and in the experimental conditions, this standard visual cue was accompanied by a simultaneously presented auditory, vibrotactile, or audiotactile cue. Participants responded significantly more rapidly, but no less accurately, to conflicts when presented with an additional auditory or audiotactile cue than with either a vibrotactile or visual cue alone. Auditory and audiotactile cues thus have the potential to improve operator performance by reducing the time it takes to detect and respond to potential visual target events. These results have important implications for the design and use of multisensory cues in air traffic management.
Visual speech segmentation: using facial cues to locate word boundaries in continuous speech
Mitchel, Aaron D.; Weiss, Daniel J.
2014-01-01
Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contributions of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech and that learners can extract this information from talking faces. Thus, we created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative for identifying word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative to word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition. PMID:25018577
NPSNET: Aural cues for virtual world immersion
NASA Astrophysics Data System (ADS)
Dahl, Leif A.
1992-09-01
NPSNET is a low-cost visual and aural simulation system designed and implemented at the Naval Postgraduate School. NPSNET is an example of a virtual world simulation environment that incorporates real-time aural cues through software-hardware interaction. In the current implementation of NPSNET, a graphics workstation functions in the sound server role, which involves sending and receiving networked sound message packets across a Local Area Network composed of multiple graphics workstations. The network messages contain sound file identification information that is transmitted from the sound server across an RS-422 protocol communication line to a serial-to-Musical Instrument Digital Interface (MIDI) converter. The MIDI converter, in turn, relays the sound byte to a sampler, an electronic recording and playback device. The sampler correlates the hexadecimal input to a specific note or stored sound and sends it as an audio signal to speakers via an amplifier. The realism of a simulation is improved by involving multiple participant senses and removing external distractions. This thesis describes the incorporation of sound as aural cues, and the enhancement they provide in the virtual simulation environment of NPSNET.
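A minimal sketch of the sound-server chain described above. The packet layout and helper names are hypothetical (the thesis-level specifics are not given here); only the MIDI Note On message format (status byte 0x90) is standard.

    import socket
    import struct

    PACKET_FMT = "!II"  # hypothetical layout: (sender_id, sound_id)

    def send_sound_event(sock, addr, sender_id, sound_id):
        # Broadcast a sound-cue message to the sound server over UDP.
        sock.sendto(struct.pack(PACKET_FMT, sender_id, sound_id), addr)

    def to_midi_note_on(sound_id, channel=0, velocity=0x7F):
        # Map a sound identifier to a MIDI Note On triple; the sampler
        # is assumed to map each note number to one stored sound.
        return bytes([0x90 | channel, sound_id & 0x7F, velocity])

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_sound_event(sock, ("127.0.0.1", 5005), sender_id=3, sound_id=42)
    midi_msg = to_midi_note_on(42)  # would be written to the RS-422 line
    print(midi_msg.hex())  # 902a7f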
Visual gate for brain-computer interfaces.
Dias, N S; Jacinto, L R; Mendes, P M; Correia, J H
2009-01-01
Brain-Computer Interfaces (BCI) based on event related potentials (ERP) have been successfully developed for applications like virtual spellers and navigation systems. This study tests the use of visual stimuli unbalanced in the subject's field of view to simultaneously cue mental imagery tasks (left vs. right hand movement) and detect subject attention. The responses to unbalanced cues were compared with the responses to balanced cues in terms of classification accuracy. Subject-specific ERP spatial filters were calculated for optimal group separation. The unbalanced cues appear to enhance early ERPs related to visuospatial processing of the cue, which improved the classification of ERPs in response to left vs. right cues (error rates as low as 6%) soon (150-200 ms) after the cue presentation. This work suggests that such a visual interface may be of interest in BCI applications as a gate mechanism for attention estimation and validation of control decisions.
Williams, Melonie; Hong, Sang W; Kang, Min-Suk; Carlisle, Nancy B; Woodman, Geoffrey F
2013-04-01
Recent research using change-detection tasks has shown that a directed-forgetting cue, indicating that a subset of the information stored in memory can be forgotten, significantly benefits the other information stored in visual working memory. How do these directed-forgetting cues aid the memory representations that are retained? We addressed this question in the present study by using a recall paradigm to measure the nature of the retained memory representations. Our results demonstrated that a directed-forgetting cue leads to higher-fidelity representations of the remaining items and a lower probability of dropping these representations from memory. Next, we showed that this is made possible by the to-be-forgotten item being expelled from visual working memory following the cue, allowing maintenance mechanisms to be focused on only the items that remain in visual working memory. Thus, the present findings show that cues to forget benefit the remaining information in visual working memory by fundamentally improving their quality relative to conditions in which just as many items are encoded but no cue is provided.
Schlauch, Robert C.; Breiner, Mary J.; Stasiewicz, Paul R.; Christensen, Rita L.; Lang, Alan R.
2012-01-01
Despite the growing recognition for multidimensional assessments of cue-elicited craving, few studies have attempted to measure multiple response domains associated with craving. The present study evaluated the Ambivalence Model of Craving (Breiner et al., 1999; Stritzke et al., 2007) using a unique cue reactivity methodology designed to capture both the desire to use (approach inclination) and desire to not consume (avoidance inclination) in a clinical sample of incarcerated female substance abusers. Participants were 155 incarcerated women who were participating in or waiting to begin participation in a nine-month drug treatment program. Results indicated that all four substance cue-types (alcohol, cigarette, marijuana, and crack cocaine) had good reliability and showed high specificity. Also, the validity of measuring approach and avoidance as separate dimensions was supported, as demonstrated by meaningful clinical distinctions between groups evincing different reactivity patterns and incremental prediction of avoidance inclinations on measures of stages of change readiness. Taken together, results continue to highlight the importance of measuring both approach and avoidance inclinations in the study of cue-elicited craving. PMID:23543075
Neural Representation of Motion-In-Depth in Area MT
Sanada, Takahisa M.
2014-01-01
Neural processing of 2D visual motion has been studied extensively, but relatively little is known about how visual cortical neurons represent visual motion trajectories that include a component toward or away from the observer (motion in depth). Psychophysical studies have demonstrated that humans perceive motion in depth based on both changes in binocular disparity over time (CD cue) and interocular velocity differences (IOVD cue). However, evidence for neurons that represent motion in depth has been limited, especially in primates, and it is unknown whether such neurons make use of CD or IOVD cues. We show that approximately one-half of neurons in macaque area MT are selective for the direction of motion in depth, and that this selectivity is driven primarily by IOVD cues, with a small contribution from the CD cue. Our results establish that area MT, a central hub of the primate visual motion processing system, contains a 3D representation of visual motion. PMID:25411481
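The two binocular cues named here have simple algebraic forms: for left- and right-eye image positions x_l(t) and x_r(t), the CD cue is the time derivative of the disparity x_l - x_r, while the IOVD cue is the difference of the monocular velocities. The two are algebraically identical but are computed in a different order, which is why specialized stimuli are needed to dissociate them; the toy trajectories below are assumptions for illustration.

    import numpy as np

    t = np.linspace(0.0, 1.0, 100)
    x_left = 0.5 * t    # toy monocular image positions (deg): the two
    x_right = -0.5 * t  # eyes' images separate, as for an approaching dot

    # CD cue: difference first (disparity), then differentiate over time.
    cd = np.gradient(x_left - x_right, t)
    # IOVD cue: differentiate each eye's image first, then difference.
    iovd = np.gradient(x_left, t) - np.gradient(x_right, t)
    print(cd.mean(), iovd.mean())  # identical here; stimuli must
                                   # dissociate them experimentally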
Value associations of irrelevant stimuli modify rapid visual orienting.
Rutherford, Helena J V; O'Brien, Jennifer L; Raymond, Jane E
2010-08-01
In familiar environments, goal-directed visual behavior is often performed in the presence of objects with strong, but task-irrelevant, reward or punishment associations that are acquired through prior, unrelated experience. In a two-phase experiment, we asked whether such stimuli could affect speeded visual orienting in a classic visual orienting paradigm. First, participants learned to associate faces with monetary gains, losses, or no outcomes. These faces then served as brief, peripheral, uninformative cues in an explicitly unrewarded, unpunished, speeded, target localization task. Cues preceded targets by either 100 or 1,500 msec and appeared at either the same or a different location. Regardless of interval, reward-associated cues slowed responding at cued locations, as compared with equally familiar punishment-associated or no-value cues, and had no effect when targets were presented at uncued locations. This localized effect of reward-associated cues is consistent with adaptive models of inhibition of return and suggests rapid, low-level effects of motivation on visual processing.
Determinants of structural choice in visually situated sentence production.
Myachykov, Andriy; Garrod, Simon; Scheepers, Christoph
2012-11-01
Three experiments investigated how perceptual, structural, and lexical cues affect structural choices during English transitive sentence production. Participants described transitive events under combinations of visual cueing of attention (toward either agent or patient) and structural priming with and without semantic match between the notional verb in the prime and the target event. Speakers had a stronger preference for passive-voice sentences (1) when their attention was directed to the patient, (2) upon reading a passive-voice prime, and (3) when the verb in the prime matched the target event. The verb-match effect was the by-product of an interaction between visual cueing and verb match: the increase in the proportion of passive-voice responses with matching verbs was limited to the agent-cued condition. Persistence of visual cueing effects in the presence of both structural and lexical cues suggests a strong coupling between referent-directed visual attention and Subject assignment in a spoken sentence. Copyright © 2012 Elsevier B.V. All rights reserved.
Selective maintenance in visual working memory does not require sustained visual attention.
Hollingworth, Andrew; Maxcey-Richard, Ashleigh M
2013-08-01
In four experiments, we tested whether sustained visual attention is required for the selective maintenance of objects in visual working memory (VWM). Participants performed a color change-detection task. During the retention interval, a valid cue indicated the item that would be tested. Change-detection performance was higher in the valid-cue condition than in a neutral-cue control condition. To probe the role of visual attention in the cuing effect, on half of the trials, a difficult search task was inserted after the cue, precluding sustained attention on the cued item. The addition of the search task produced no observable decrement in the magnitude of the cuing effect. In a complementary test, search efficiency was not impaired by simultaneously prioritizing an object for retention in VWM. The results demonstrate that selective maintenance in VWM can be dissociated from the locus of visual attention. © 2013 APA, all rights reserved.
Interaction between visual and chemical cues in a Liolaemus lizard: a multimodal approach.
Vicente, Natalin S; Halloy, Monique
2017-12-01
Multimodal communication involves the use of signals and cues across two or more sensory modalities. The genus Liolaemus (Iguania: Liolaemidae) offers a great potential for studies on the ecology and evolution of multimodal communication, including visual and chemical signals. In this study, we analyzed the response of male and female Liolaemus pacha to chemical, visual and combined (multimodal) stimuli. Using cue-isolation tests, we registered the number of tongue flicks and headbob displays from exposure to signals in each modality. Number of tongue flicks was greater when a chemical stimulus was presented alone than in the presence of visual or multimodal stimuli. In contrast, headbob displays were fewer in number with visual and chemical stimuli alone, but significantly higher in number when combined. Female signallers triggered significantly more tongue flicks than male signallers, suggesting that chemical cues are involved in sexual recognition. We did not find an inhibition between chemical and visual cues. On the contrary, we observed a dominance of the chemical modality, because when presented with visual stimuli, lizards also responded with more tongue flicks than headbob displays. The total response produced by multimodal stimuli was similar to that of the chemical stimuli alone, possibly suggesting non-redundancy. We discuss whether the visual component of a multimodal signal could attract attention at a distance, increasing the effectiveness of transmission and reception of the information in chemical cues. Copyright © 2017 Elsevier GmbH. All rights reserved.
ERIC Educational Resources Information Center
Wang, Pei-Yu; Huang, Chung-Kai
2015-01-01
This study aims to explore the impact of learner grade, visual cueing, and control design on children's reading achievement of audio e-books with tablet computers. This research was a three-way factorial design where the first factor was learner grade (grade four and six), the second factor was e-book visual cueing (word-based, line-based, and…
The Role of Global and Local Visual Information during Gaze-Cued Orienting of Attention.
Munsters, Nicolette M; van den Boomen, Carlijn; Hooge, Ignace T C; Kemner, Chantal
2016-01-01
Gaze direction is an important social communication tool. Global and local visual information are known to play specific roles in processing socially relevant information from a face. The current study investigated whether global visual information has a primary role during gaze-cued orienting of attention and, as such, may influence quality of interaction. Adults performed a gaze-cueing task in which a centrally presented face cued (valid or invalid) the location of a peripheral target through a gaze shift. We measured brain activity (electroencephalography) towards the cue and target and behavioral responses (manual and saccadic reaction times) towards the target. The faces contained global (i.e. lower spatial frequencies), local (i.e. higher spatial frequencies), or a selection of both global and local (i.e. mid-band spatial frequencies) visual information. We found a gaze cue-validity effect (i.e. valid versus invalid), but no interaction effects with spatial frequency content. Furthermore, behavioral responses towards the target were in all cue conditions slower when lower spatial frequencies were not present in the gaze cue. These results suggest that whereas gaze-cued orienting of attention can be driven by both global and local visual information, global visual information determines the speed of behavioral responses towards other entities appearing in the surrounding of gaze cue stimuli.
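Spatial-frequency manipulations of face stimuli like those described here are typically implemented with low-pass, high-pass, and band-pass filtering; the sketch below is generic, and the sigma values are placeholders rather than the cycles-per-face cutoffs used in the study.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    face = np.random.rand(128, 128)  # stand-in for a face image

    # Global (lower spatial frequency) content: Gaussian low-pass.
    low_sf = gaussian_filter(face, sigma=4.0)
    # Local (higher spatial frequency) content: residual after low-pass.
    high_sf = face - low_sf
    # Mid band: difference of Gaussians keeps intermediate frequencies.
    mid_sf = gaussian_filter(face, sigma=2.0) - gaussian_filter(face, sigma=8.0)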
Automaticity of phasic alertness: Evidence for a three-component model of visual cueing.
Lin, Zhicheng; Lu, Zhong-Lin
2016-10-01
The automaticity of phasic alertness is investigated using the attention network test. Results show that the cueing effect from the alerting cue (a double cue) is strongly enhanced by the task relevance of visual cues, as determined by the informativeness of the orienting cue (a single cue) with which it is mixed (80% vs. 50% valid in predicting where the target will appear). Counterintuitively, the cueing effect from the alerting cue can be negatively affected by its visibility, such that masking the cue from awareness can reveal a cueing effect that is otherwise absent when the cue is visible. Evidently, then, top-down influences, in the form of contextual relevance and cue awareness, can have opposite influences on the cueing effect from the alerting cue. These findings lead us to the view that a visual cue can engage three components of attention (orienting, alerting, and inhibition) to determine the behavioral cueing effect. We propose that phasic alertness, particularly in the form of specific response readiness, is regulated by both internal, top-down expectation and external, bottom-up stimulus properties. In contrast to some existing views, we advance the perspective that phasic alertness is strongly tied to temporal orienting, attentional capture, and spatial orienting. Finally, we discuss how translating attention research to clinical applications would benefit from an improved ability to measure attention. To this end, controlling the degree of intraindividual variability in the attentional components and improving the precision of the measurement tools may prove vital.
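Read literally, the three-component account decomposes the measured cueing effect into contributions from orienting, alerting, and inhibition. The toy decomposition below is our additive illustration of that reading, with made-up millisecond values rather than parameters estimated in the paper.

    # Toy additive decomposition of a behavioral cueing effect (ms).
    def cueing_effect(orienting, alerting, inhibition):
        # Net RT benefit: facilitation from orienting and alerting,
        # minus the cost contributed by inhibition.
        return orienting + alerting - inhibition

    # Visible but task-irrelevant alerting cue: inhibition can cancel
    # alerting, yielding a null cueing effect.
    print(cueing_effect(orienting=0, alerting=15, inhibition=15))  # 0
    # Masked cue: with awareness removed, inhibition is assumed weaker,
    # unmasking the alerting benefit.
    print(cueing_effect(orienting=0, alerting=15, inhibition=3))   # 12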
Neural substrates of smoking cue reactivity: A meta-analysis of fMRI studies
Engelmann, Jeffrey M.; Versace, Francesco; Robinson, Jason D.; Minnix, Jennifer A.; Lam, Cho Y.; Cui, Yong; Brown, Victoria L.; Cinciripini, Paul M.
2012-01-01
Reactivity to smoking-related cues may be an important factor that precipitates relapse in smokers who are trying to quit. The neurobiology of smoking cue reactivity has been investigated in several fMRI studies. We combined the results of these studies using activation likelihood estimation, a meta-analytic technique for fMRI data. Results of the meta-analysis indicated that smoking cues reliably evoke larger fMRI responses than neutral cues in the extended visual system, precuneus, posterior cingulate gyrus, anterior cingulate gyrus, dorsal and medial prefrontal cortex, insula, and dorsal striatum. Subtraction meta-analyses revealed that parts of the extended visual system and dorsal prefrontal cortex are more reliably responsive to smoking cues in deprived smokers than in non-deprived smokers, and that short-duration cues presented in event-related designs produce larger responses in the extended visual system than long-duration cues presented in blocked designs. The areas that were found to be responsive to smoking cues agree with theories of the neurobiology of cue reactivity, with two exceptions. First, there was a reliable cue reactivity effect in the precuneus, which is not typically considered a brain region important to addiction. Second, we found no significant effect in the nucleus accumbens, an area that plays a critical role in addiction, but this effect may have been due to technical difficulties associated with measuring fMRI data in that region. The results of this meta-analysis suggest that the extended visual system should receive more attention in future studies of smoking cue reactivity. PMID:22206965
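The meta-analytic statistic used here, activation likelihood estimation (ALE), has a compact core: each reported focus is smoothed into a modeled-activation (MA) map, and the voxelwise ALE value is the probability that at least one study activates the voxel, i.e. the union of the MA maps. The numpy sketch below is schematic: the grid, the smoothing width, and the normalization are simplified stand-ins for the sample-size-dependent kernels of the full method, and the permutation-based significance test is omitted.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def ale_map(foci_per_study, shape=(20, 20, 20), sigma=2.0):
        # foci_per_study: one list of (x, y, z) voxel coords per study.
        ma_maps = []
        for foci in foci_per_study:
            vol = np.zeros(shape)
            for x, y, z in foci:
                vol[x, y, z] = 1.0
            ma = gaussian_filter(vol, sigma)
            ma_maps.append(ma / ma.max())  # scale to pseudo-probabilities
        # Union across studies: active in at least one MA map.
        return 1.0 - np.prod([1.0 - ma for ma in ma_maps], axis=0)

    # Two toy 'studies' with nearby foci yield a high ALE peak there.
    print(ale_map([[(10, 10, 10)], [(11, 10, 10)]]).max())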
Booth, Ashley J; Elliott, Mark T
2015-01-01
The ease of synchronizing movements to a rhythmic cue is dependent on the modality of the cue presentation: timing accuracy is much higher when synchronizing with discrete auditory rhythms than an equivalent visual stimulus presented through flashes. However, timing accuracy is improved if the visual cue presents spatial as well as temporal information (e.g., a dot following an oscillatory trajectory). Similarly, when synchronizing with an auditory target metronome in the presence of a second visual distracting metronome, the distraction is stronger when the visual cue contains spatial-temporal information rather than temporal only. The present study investigates individuals' ability to synchronize movements to a temporal-spatial visual cue in the presence of same-modality temporal-spatial distractors. Moreover, we investigated how increasing the number of distractor stimuli impacted on maintaining synchrony with the target cue. Participants made oscillatory vertical arm movements in time with a vertically oscillating white target dot centered on a large projection screen. The target dot was surrounded by 2, 8, or 14 distractor dots, which had an identical trajectory to the target but at a phase lead or lag of 0, 100, or 200 ms. We found participants' timing performance was only affected in the phase-lead conditions and when there were large numbers of distractors present (8 and 14). This asymmetry suggests participants still rely on salient events in the stimulus trajectory to synchronize movements. Subsequently, distractions occurring in the window of attention surrounding those events have the maximum impact on timing performance.
Gerdes, Antje B. M.; Wieser, Matthias J.; Alpers, Georg W.
2014-01-01
In everyday life, multiple sensory channels jointly trigger emotional experiences and one channel may alter processing in another channel. For example, seeing an emotional facial expression and hearing the voice's emotional tone will jointly create the emotional experience. This example, where auditory and visual input is related to social communication, has gained considerable attention by researchers. However, interactions of visual and auditory emotional information are not limited to social communication but can extend to much broader contexts including human, animal, and environmental cues. In this article, we review current research on audiovisual emotion processing beyond face-voice stimuli to develop a broader perspective on multimodal interactions in emotion processing. We argue that current concepts of multimodality should be extended to consider an ecologically valid variety of stimuli in audiovisual emotion processing. Therefore, we provide an overview of studies in which emotional sounds and interactions with complex pictures of scenes were investigated. In addition to behavioral studies, we focus on neuroimaging, electrophysiological, and peripheral-physiological findings. Furthermore, we integrate these findings and identify similarities or differences. We conclude with suggestions for future research. PMID:25520679
Preschoolers Benefit from Visually Salient Speech Cues
ERIC Educational Resources Information Center
Lalonde, Kaylah; Holt, Rachael Frush
2015-01-01
Purpose: This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that vary in perceptual difficulty and task demands. They also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. Method: Twelve adults and 27 typically developing 3-…
Smelling directions: Olfaction modulates ambiguous visual motion perception
Kuang, Shenbing; Zhang, Tao
2014-01-01
The sense of smell is often accompanied by simultaneous visual sensations. Previous studies have documented enhanced olfactory performance in the concurrent presence of congruent color- or shape-related visual cues, and facilitated visual object perception when congruent smells are simultaneously present. These visual object-olfaction interactions suggest the existence of couplings between the olfactory pathway and the visual ventral processing stream. However, it is not known whether olfaction can modulate visual motion perception, a function that is related to the visual dorsal stream. We tested this possibility by examining the influence of olfactory cues on the perception of ambiguous visual motion signals. We showed that, after introducing an association between motion directions and olfactory cues, olfaction could indeed bias ambiguous visual motion perception. Our result that olfaction modulates visual motion processing adds to the current knowledge of cross-modal interactions and implies a possible functional linkage between the olfactory system and the visual dorsal pathway. PMID:25052162
Age-related changes in human posture control: Sensory organization tests
NASA Technical Reports Server (NTRS)
Peterka, R. J.; Black, F. O.
1989-01-01
Postural control was measured in 214 human subjects ranging in age from 7 to 81 years. Sensory organization tests measured the magnitude of anterior-posterior body sway during six 21 s trials in which visual and somatosensory orientation cues were altered (by rotating the visual surround and support surface in proportion to the subject's sway) or vision eliminated (eyes closed) in various combinations. No age-related increase in postural sway was found for subjects standing on a fixed support surface with eyes open or closed. However, age-related increases in sway were found for conditions involving altered visual or somatosensory cues. Subjects older than about 55 years showed the largest sway increases. Subjects younger than about 15 years were also sensitive to alteration of sensory cues. On average, the older subjects were more affected by altered visual cues whereas younger subjects had more difficulty with altered somatosensory cues.
Kuo, Bo-Cheng; Lin, Szu-Hung; Yeh, Yei-Yu
2018-06-01
Visual short-term memory (VSTM) allows individuals to briefly maintain information over time for guiding behaviours. Because the contents of VSTM can be neutral or emotional, top-down influence in VSTM may vary with the affective codes of maintained representations. Here we investigated the neural mechanisms underlying the functional interplay of top-down attention with affective codes in VSTM using functional magnetic resonance imaging. Participants were instructed to remember both threatening and neutral objects in a cued VSTM task. Retrospective cues (retro-cues) were presented to direct attention to the hemifield of a threatening object (i.e., cue-to-threat) or a neutral object (i.e., cue-to-neutral) during VSTM maintenance. We showed stronger activity in the ventral occipitotemporal cortex and amygdala for attending threatening relative to neutral representations. Using multivoxel pattern analysis, we found better classification performance for cue-to-threat versus cue-to-neutral objects in early visual areas and in the amygdala. Importantly, retro-cues modulated the strength of functional connectivity between the frontoparietal and early visual areas. Activity in the frontoparietal areas became strongly correlated with the activity in V3a-V4 coding the threatening representations instructed to be relevant for the task. Together, these findings provide the first demonstration of top-down modulation of activation patterns in early visual areas and functional connectivity between the frontoparietal network and early visual areas for regulating threatening representations during VSTM maintenance. Copyright © 2018 Elsevier Ltd. All rights reserved.
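The multivoxel pattern analysis reported here amounts to cross-validated classification of the retro-cue condition from voxel activity patterns. The scikit-learn sketch below shows the generic procedure; the array shapes, the injected signal, and the choice of a linear support vector classifier are our assumptions, as the abstract does not specify the pipeline.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(1)
    n_trials, n_voxels = 80, 200
    X = rng.normal(size=(n_trials, n_voxels))  # voxel pattern per trial
    y = np.repeat([0, 1], n_trials // 2)       # 0: cue-to-neutral,
                                               # 1: cue-to-threat
    X[y == 1, :10] += 0.5  # toy condition-specific signal in 10 voxels

    # Above-chance cross-validated accuracy indicates that the region's
    # patterns discriminate the two retro-cue conditions.
    print(cross_val_score(LinearSVC(dual=False), X, y, cv=5).mean())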
The use of a tactile interface to convey position and motion perceptions
NASA Technical Reports Server (NTRS)
Rupert, A. H.; Guedry, F. E.; Reschke, M. F.
1994-01-01
Under normal terrestrial conditions, perception of position and motion is determined by central nervous system integration of concordant and redundant information from multiple sensory channels (somatosensory, vestibular, visual), which collectively yield vertical perceptions. In the acceleration environment experienced by the pilots, the somatosensory and vestibular sensors frequently present false information concerning the direction of gravity. When presented with conflicting sensory information, it is normal for pilots to experience episodes of disorientation. We have developed a tactile interface that obtains vertical roll and pitch information from a gyro-stabilized attitude indicator and maps this information in a one-to-one correspondence onto the torso of the body using a matrix of vibrotactors. This enables the pilot to continuously maintain an awareness of aircraft attitude without reference to visual cues, utilizing a sensory channel that normally operates at the subconscious level. Although initially developed to improve pilot spatial awareness, this device has obvious applications to 1) simulation and training, 2) nonvisual tracking of targets, which can reduce the need for pilots to make head movements in the high-G environment of aerial combat, and 3) orientation in environments with minimal somatosensory cues (e.g., underwater) or gravitational cues (e.g., space).
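A toy version of the one-to-one attitude-to-torso mapping described above; the tactor grid dimensions and the proportional mapping rule are assumptions for illustration, not the specifications of the actual device.

    def attitude_to_tactor(roll_deg, pitch_deg, n_cols=8, n_rows=4,
                           max_pitch=45.0):
        # Roll selects the column (direction around the torso); pitch
        # magnitude, clamped to +/-max_pitch, selects the row (height).
        col = int(((roll_deg % 360.0) / 360.0) * n_cols) % n_cols
        pitch = max(-max_pitch, min(max_pitch, pitch_deg))
        row = int((pitch + max_pitch) / (2.0 * max_pitch) * (n_rows - 1))
        return row, col

    # 30 deg right roll, 10 deg nose up -> one vibrotactor cell fires,
    # conveying attitude without any visual reference.
    print(attitude_to_tactor(30.0, 10.0))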
The role of visuohaptic experience in visually perceived depth.
Ho, Yun-Xian; Serwe, Sascha; Trommershäuser, Julia; Maloney, Laurence T; Landy, Michael S
2009-06-01
Berkeley suggested that "touch educates vision," that is, haptic input may be used to calibrate visual cues to improve visual estimation of properties of the world. Here, we test whether haptic input may be used to "miseducate" vision, causing observers to rely more heavily on misleading visual cues. Human subjects compared the depth of two cylindrical bumps illuminated by light sources located at different positions relative to the surface. As in previous work using judgments of surface roughness, we find that observers judge bumps to have greater depth when the light source is located eccentric to the surface normal (i.e., when shadows are more salient). Following several sessions of visual judgments of depth, subjects then underwent visuohaptic training in which haptic feedback was artificially correlated with the "pseudocue" of shadow size and artificially decorrelated with disparity and texture. Although there were large individual differences, almost all observers demonstrated integration of haptic cues during visuohaptic training. For some observers, subsequent visual judgments of bump depth were unaffected by the training. However, for 5 of 12 observers, training significantly increased the weight given to pseudocues, causing subsequent visual estimates of shape to be less veridical. We conclude that haptic information can be used to reweight visual cues, putting more weight on misleading pseudocues, even when more trustworthy visual cues are available in the scene.
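One simple way to formalize the reweighting reported here: depth estimates from the individual cues are combined linearly with weights set by each cue's apparent reliability, and haptic feedback that correlates with the shadow pseudocue inflates its apparent reliability. The numbers below are illustrative assumptions, not fitted values from the study.

    import numpy as np

    def combine(estimates, reliabilities):
        # Reliability-weighted linear cue combination.
        w = np.asarray(reliabilities, dtype=float)
        return float(np.dot(w / w.sum(), estimates))

    # Depth (mm) signaled by trustworthy cues (disparity/texture) vs.
    # the shadow-size pseudocue.
    cues = np.array([10.0, 16.0])
    print(combine(cues, [4.0, 1.0]))  # pre-training: pseudocue downweighted
    # Correlated haptic feedback raises the pseudocue's apparent
    # reliability, pulling later visual judgments toward it.
    print(combine(cues, [4.0, 3.0]))  # post-training: less veridical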
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Houck, J. A.; Martin, D. J., Jr.
1977-01-01
Combined visual, motion, and aural cues for a helicopter engaged in visually conducted slalom runs at low altitude were studied. The evaluation of the visual and aural cues was subjective, whereas the motion cues were evaluated both subjectively and objectively. Subjective and objective results coincided in the area of control activity. Generally, less control activity was present under motion conditions than under fixed-base conditions, a finding that subjects attributed to the sense of a machine's (helicopter's) realistic limitations conveyed by the addition of motion cues. The objective data also revealed that the slalom runs were conducted at significantly higher altitudes under motion conditions than under fixed-base conditions.
Social Vision: Functional Forecasting and the Integration of Compound Social Cues
Adams, Reginald B.; Kveraga, Kestutis
2017-01-01
For decades the study of social perception was largely compartmentalized by type of social cue: race, gender, emotion, eye gaze, body language, facial expression etc. This was partly due to good scientific practice (e.g., controlling for extraneous variability), and partly due to assumptions that each type of social cue was functionally distinct from others. Herein, we present a functional forecast approach to understanding compound social cue processing that emphasizes the importance of shared social affordances across various cues (see too Adams, Franklin, Nelson, & Stevenson, 2010; Adams & Nelson, 2011; Weisbuch & Adams, 2012). We review the traditional theories of emotion and face processing that argued for dissociable and noninteracting pathways (e.g., for specific emotional expressions, gaze, identity cues), as well as more recent evidence for combinatorial processing of social cues. We argue here that early, and presumably reflexive, visual integration of such cues is necessary for adaptive behavioral responding to others. In support of this claim, we review contemporary work that reveals a flexible visual system, one that readily incorporates meaningful contextual influences in even nonsocial visual processing, thereby establishing the functional and neuroanatomical bases necessary for compound social cue integration. Finally, we explicate three likely mechanisms driving such integration. Together, this work implicates a role for cognitive penetrability in visual perceptual abilities that have often been (and in some cases still are) ascribed to direct encapsulated perceptual processes. PMID:29242738
The contributions of visual and central attention to visual working memory.
Souza, Alessandra S; Oberauer, Klaus
2017-10-01
We investigated the role of two kinds of attention (visual and central) for the maintenance of visual representations in working memory (WM). In Experiment 1 we directed attention to individual items in WM by presenting cues during the retention interval of a continuous delayed-estimation task, and instructing participants to think of the cued items. Attending to items improved recall commensurate with the frequency with which items were attended (0, 1, or 2 times). Experiments 1 and 3 further tested which kind of attention (visual or central) was involved in WM maintenance. We assessed the dual-task costs of two types of distractor tasks, one tapping sustained visual attention and one tapping central attention. Only the central attention task yielded substantial dual-task costs, implying that central attention substantially contributes to maintenance of visual information in WM. Experiment 2 confirmed that the visual-attention distractor task was demanding enough to disrupt performance in a task relying on visual attention. We combined the visual-attention and the central-attention distractor tasks with a multiple object tracking (MOT) task. Distracting visual attention, but not central attention, impaired MOT performance. Jointly, the three experiments provide a double dissociation between visual and central attention, and between visual WM and visual object tracking: whereas tracking multiple targets across the visual field depends on visual attention, visual WM depends mostly on central attention.
Grubert, Anna; Carlisle, Nancy B; Eimer, Martin
2016-12-01
The question whether target selection in visual search can be effectively controlled by simultaneous attentional templates for multiple features is still under dispute. We investigated whether multiple-color attentional guidance is possible when target colors remain constant and can thus be represented in long-term memory but not when they change frequently and have to be held in working memory. Participants searched for one, two, or three possible target colors that were specified by cue displays at the start of each trial. In constant-color blocks, the same colors remained task-relevant throughout. In variable-color blocks, target colors changed between trials. The contralateral delay activity (CDA) to cue displays increased in amplitude as a function of color memory load in variable-color blocks, which indicates that cued target colors were held in working memory. In constant-color blocks, the CDA was much smaller, suggesting that color representations were primarily stored in long-term memory. N2pc components to targets were measured as a marker of attentional target selection. Target N2pcs were attenuated and delayed during multiple-color search, demonstrating less efficient attentional deployment to color-defined target objects relative to single-color search. Importantly, these costs were the same in constant-color and variable-color blocks. These results demonstrate that attentional guidance by multiple-feature as compared with single-feature templates is less efficient both when target features remain constant and can be represented in long-term memory and when they change across trials and therefore have to be maintained in working memory.
Janssen, Sabine; Bolte, Benjamin; Nonnekes, Jorik; Bittner, Marian; Bloem, Bastiaan R.; Heida, Tjitske; Zhao, Yan; van Wezel, Richard J. A.
2017-01-01
External cueing is a potentially effective strategy to reduce freezing of gait (FOG) in persons with Parkinson’s disease (PD). Case reports suggest that three-dimensional (3D) cues might be more effective in reducing FOG than two-dimensional cues. We investigate the usability of 3D augmented reality visual cues delivered by smart glasses in comparison to conventional 3D transverse bars on the floor and auditory cueing via a metronome in reducing FOG and improving gait parameters. In laboratory experiments, 25 persons with PD and FOG performed walking tasks while wearing custom-made smart glasses under five conditions, at the end-of-dose period. For two conditions, augmented visual cues (bars/staircase) were displayed via the smart glasses. The control conditions involved conventional 3D transverse bars on the floor, auditory cueing via a metronome, and no cueing. The number of FOG episodes and percentage of time spent on FOG were rated from video recordings. The stride length and its variability, cycle time and its variability, cadence, and speed were calculated from motion data collected with a motion capture suit equipped with 17 inertial measurement units. A total of 300 FOG episodes occurred in 19 out of 25 participants. There were no statistically significant differences in number of FOG episodes and percentage of time spent on FOG across the five conditions. The conventional bars increased stride length, cycle time, and stride length variability, while decreasing cadence and speed. No effects for the other conditions were found. Participants preferred the metronome most, and the augmented staircase least. They suggested improving the comfort, esthetics, usability, field of view, and stability of the smart glasses on the head, and reducing their weight and size. In their current form, augmented visual cues delivered by smart glasses are not beneficial for persons with PD and FOG. This could be attributable to distraction, blockage of visual feedback, insufficient familiarization with the smart glasses, or display of the visual cues in the central rather than peripheral visual field. Future smart glasses are required to be more lightweight, comfortable, and user friendly to avoid distraction and blockage of sensory feedback, thus increasing usability. PMID:28659862
Sato, Naoyuki; Yamaguchi, Yoko
2009-06-01
The human cognitive map is known to be hierarchically organized, consisting of a set of perceptually clustered landmarks. Patient studies have demonstrated that these cognitive maps are maintained by the hippocampus, while the underlying neural dynamics are still poorly understood. The authors have shown that the neural dynamic "theta phase precession" observed in the rodent hippocampus may be capable of forming hierarchical cognitive maps in humans. In the model, a visual input sequence consisting of object and scene features in the central and peripheral visual fields, respectively, results in the formation of a hierarchical cognitive map for object-place associations. Surprisingly, it is possible for such a complex memory structure to be formed in a few seconds. In this paper, we evaluate the memory retrieval of object-place associations in the hierarchical network formed by theta phase precession. The results show that multiple object-place associations can be retrieved with the initial cue of a scene input. Importantly, according to the wide-to-narrow unidirectional connections among scene units, the spatial area for object-place retrieval can be controlled by the spatial area of the initial cue input. These results indicate that hierarchical cognitive maps have computational advantages for spatial-area-selective retrieval of multiple object-place associations. Theta phase precession dynamics is suggested as a fundamental neural mechanism of the human cognitive map.
Are face representations depth cue invariant?
Dehmoobadsharifabadi, Armita; Farivar, Reza
2016-06-01
The visual system can process three-dimensional depth cues defining surfaces of objects, but it is unclear whether such information contributes to complex object recognition, including face recognition. The processing of different depth cues involves both dorsal and ventral visual pathways. We investigated whether facial surfaces defined by individual depth cues resulted in meaningful face representations-representations that maintain the relationship between the population of faces as defined in a multidimensional face space. We measured face identity aftereffects for facial surfaces defined by individual depth cues (Experiments 1 and 2) and tested whether the aftereffect transfers across depth cues (Experiments 3 and 4). Facial surfaces and their morphs to the average face were defined purely by one of shading, texture, motion, or binocular disparity. We obtained identification thresholds for matched (matched identity between adapting and test stimuli), non-matched (non-matched identity between adapting and test stimuli), and no-adaptation (showing only the test stimuli) conditions for each cue and across different depth cues. We found robust face identity aftereffect in both experiments. Our results suggest that depth cues do contribute to forming meaningful face representations that are depth cue invariant. Depth cue invariance would require integration of information across different areas and different pathways for object recognition, and this in turn has important implications for cortical models of visual object recognition.
The company objects keep: Linking referents together during cross-situational word learning.
Zettersten, Martin; Wojcik, Erica; Benitez, Viridiana L; Saffran, Jenny
2018-04-01
Learning the meanings of words involves not only linking individual words to referents but also building a network of connections among entities in the world, concepts, and words. Previous studies reveal that infants and adults track the statistical co-occurrence of labels and objects across multiple ambiguous training instances to learn words. However, it is less clear whether, given distributional or attentional cues, learners also encode associations amongst the novel objects. We investigated the consequences of two types of cues that highlighted object-object links in a cross-situational word learning task: distributional structure - how frequently the referents of novel words occurred together - and visual context - whether the referents were seen on matching backgrounds. Across three experiments, we found that in addition to learning novel words, adults formed connections between frequently co-occurring objects. These findings indicate that learners exploit statistical regularities to form multiple types of associations during word learning.
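Cross-situational statistics of the kind tracked here are often modeled as simple co-occurrence counting across ambiguous trials. A minimal sketch under that assumption (the trial structure, word forms, and object labels are made up for illustration, not the study's stimuli):

    from collections import Counter, defaultdict

    # Each ambiguous trial: the novel words heard and the objects in view.
    trials = [
        (["dax", "blick"], ["ball", "cup"]),
        (["dax", "toma"],  ["ball", "shoe"]),
        (["blick", "toma"], ["cup", "shoe"]),
    ]

    word_object = defaultdict(Counter)
    object_object = Counter()

    for words, objects in trials:
        for w in words:
            word_object[w].update(objects)      # word-referent statistics
        for i, a in enumerate(objects):         # object-object co-occurrence,
            for b in objects[i + 1:]:           # the kind of link adults formed
                object_object[frozenset((a, b))] += 1

    # Most likely referent per word = highest co-occurrence count.
    for w, counts in word_object.items():
        print(w, "->", counts.most_common(1)[0][0])

The same tallying that disambiguates word-referent mappings also accumulates the object-object associations the experiments probed.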
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Liston, Dorion B.
2011-01-01
Visual motion and other visual cues are used by tower controllers to provide important support for their control tasks at and near airports. These cues are particularly important for anticipated separation. Some of them, which we call visual features, have been identified from structured interviews and discussions with 24 active air traffic controllers or supervisors. The visual information that these features provide has been analyzed with respect to possible ways it could be presented at a remote tower that does not allow a direct view of the airport. Two types of remote towers are possible. One could be based on a plan-view, map-like computer-generated display of the airport and its immediate surroundings. An alternative would present a composite perspective view of the airport and its surroundings, possibly provided by an array of radially mounted cameras positioned at the airport in lieu of a tower. An initial, more detailed analysis of one of the specific landing cues identified by the controllers, landing deceleration, is provided as a basis for evaluating how controllers might detect and use it. Understanding other such cues will help identify the information that may be degraded or lost in a remote or virtual tower not located at the airport. Some initial suggestions for how some of the lost visual information might be presented in displays are mentioned. Many of the cues considered involve visual motion, though some important static cues are also discussed.
Helicopter flight simulation motion platform requirements
NASA Astrophysics Data System (ADS)
Schroeder, Jeffery Allyn
Flight simulators attempt to reproduce in-flight pilot-vehicle behavior on the ground. This reproduction is challenging for helicopter simulators, as the pilot is often inextricably dependent on external cues for pilot-vehicle stabilization. One important simulator cue is platform motion; however, its required fidelity is unknown. To determine the required motion fidelity, several unique experiments were performed. A large displacement motion platform was used that allowed pilots to fly tasks with matched motion and visual cues. Then, the platform motion was modified to give cues varying from full motion to no motion. Several key results were found. First, lateral and vertical translational platform cues had significant effects on fidelity. Their presence improved performance and reduced pilot workload. Second, yaw and roll rotational platform cues were not as important as the translational platform cues. In particular, the yaw rotational motion platform cue did not appear at all useful in improving performance or reducing workload. Third, when the lateral translational platform cue was combined with visual yaw rotational cues, pilots believed the platform was rotating when it was not. Thus, simulator systems can be made more efficient by proper combination of platform and visual cues. Fourth, motion fidelity specifications were revised that now provide simulator users with a better prediction of motion fidelity based upon the frequency responses of their motion control laws. Fifth, vertical platform motion affected pilot estimates of steady-state altitude during altitude repositionings. This refutes the view that pilots estimate altitude and altitude rate in simulation solely from visual cues. Finally, the combined results led to a general method for configuring helicopter motion systems and for developing simulator tasks that more likely represent actual flight. The overall results can serve as a guide to future simulator designers and to today's operators.
Heuristics of Reasoning and Analogy in Children's Visual Perspective Taking.
ERIC Educational Resources Information Center
Yaniv, Ilan; Shatz, Marilyn
1990-01-01
In three experiments, children of three through six years of age were generally better able to reproduce a perceiver's perspective if a visual cue in the perceiver's line of sight was salient. Children had greater difficulty when the task hinged on attending to configural cues. Availability of distinctive cues affixed to objects facilitated…
Enhancing Interactive Tutorial Effectiveness through Visual Cueing
ERIC Educational Resources Information Center
Jamet, Eric; Fernandez, Jonathan
2016-01-01
The present study investigated whether learning how to use a web service with an interactive tutorial can be enhanced by cueing. We expected the attentional guidance provided by visual cues to facilitate the selection of information in static screen displays that corresponded to spoken explanations. Unlike most previous studies in this area, we…
The Influence of Alertness on Spatial and Nonspatial Components of Visual Attention
ERIC Educational Resources Information Center
Matthias, Ellen; Bublak, Peter; Muller, Hermann J.; Schneider, Werner X.; Krummenacher, Joseph; Finke, Kathrin
2010-01-01
Three experiments investigated whether spatial and nonspatial components of visual attention would be influenced by changes in (healthy, young) subjects' level of alertness and whether such effects on separable components would occur independently of each other. The experiments used a no-cue/alerting-cue design with varying cue-target stimulus…
Multisensory Cues Capture Spatial Attention Regardless of Perceptual Load
ERIC Educational Resources Information Center
Santangelo, Valerio; Spence, Charles
2007-01-01
We compared the ability of auditory, visual, and audiovisual (bimodal) exogenous cues to capture visuo-spatial attention under conditions of no load versus high perceptual load. Participants had to discriminate the elevation (up vs. down) of visual targets preceded by either unimodal or bimodal cues under conditions of high perceptual load (in…
ERIC Educational Resources Information Center
Altvater-Mackensen, Nicole; Grossmann, Tobias
2015-01-01
Infants' language exposure largely involves face-to-face interactions providing acoustic and visual speech cues but also social cues that might foster language learning. Yet, both audiovisual speech information and social information have so far received little attention in research on infants' early language development. Using a preferential…
Keough, Dwayne
2011-01-01
Research on the control of visually guided limb movements indicates that the brain learns and continuously updates an internal model that maps the relationship between motor commands and sensory feedback. A growing body of work suggests that an internal model that relates motor commands to sensory feedback also supports vocal control. There is evidence from arm-reaching studies that shows that when provided with a contextual cue, the motor system can acquire multiple internal models, which allows an animal to adapt to different perturbations in diverse contexts. In this study we show that trained singers can rapidly acquire multiple internal models regarding voice fundamental frequency (F0). These models accommodate different perturbations to ongoing auditory feedback. Participants heard three musical notes and reproduced each one in succession. The musical targets could serve as a contextual cue to indicate which direction (up or down) feedback would be altered on each trial; however, participants were not explicitly instructed to use this strategy. When participants were gradually exposed to altered feedback, adaptation was observed immediately following vocal onset. Aftereffects were target specific and did not influence vocal productions on subsequent trials. When target notes were no longer a contextual cue, adaptation occurred during altered feedback trials and evidence for trial-by-trial adaptation was found. These findings indicate that the brain is exceptionally sensitive to the deviations between auditory feedback and the predicted consequence of a motor command during vocalization. Moreover, these results indicate that, with contextual cues, the vocal control system may maintain multiple internal models that are capable of independent modification during different tasks or environments. PMID:21346208
Goebl, Werner
2015-01-01
Nonverbal auditory and visual communication helps ensemble musicians predict each other’s intentions and coordinate their actions. When structural characteristics of the music make predicting co-performers’ intentions difficult (e.g., following long pauses or during ritardandi), reliance on incoming auditory and visual signals may change. This study tested whether attention to visual cues during piano–piano and piano–violin duet performance increases in such situations. Pianists performed the secondo part to three duets, synchronizing with recordings of violinists or pianists playing the primo parts. Secondos’ access to incoming audio and visual signals and to their own auditory feedback was manipulated. Synchronization was most successful when primo audio was available, deteriorating when primo audio was removed and only cues from primo visual signals were available. Visual cues were used effectively following long pauses in the music, however, even in the absence of primo audio. Synchronization was unaffected by the removal of secondos’ own auditory feedback. Differences were observed in how successfully piano–piano and piano–violin duos synchronized, but these effects of instrument pairing were not consistent across pieces. Pianists’ success at synchronizing with violinists and other pianists is likely moderated by piece characteristics and individual differences in the clarity of cueing gestures used. PMID:26279610
NASA Astrophysics Data System (ADS)
Ramirez, Joshua; Mann, Virginia
2005-08-01
Both dyslexics and auditory neuropathy (AN) subjects show inferior consonant-vowel (CV) perception in noise, relative to controls. To better understand these impairments, natural acoustic speech stimuli that were masked in speech-shaped noise at various intensities were presented to dyslexic, AN, and control subjects either in isolation or accompanied by visual articulatory cues. AN subjects were expected to benefit from the pairing of visual articulatory cues and auditory CV stimuli, provided that their speech perception impairment reflects a relatively peripheral auditory disorder. Assuming that dyslexia reflects a general impairment of speech processing rather than a disorder of audition, dyslexics were not expected to similarly benefit from an introduction of visual articulatory cues. The results revealed an increased effect of noise masking on the perception of isolated acoustic stimuli by both dyslexic and AN subjects. More importantly, dyslexics showed less effective use of visual articulatory cues in identifying masked speech stimuli and lower visual baseline performance relative to AN subjects and controls. Last, a significant positive correlation was found between reading ability and the ameliorating effect of visual articulatory cues on speech perception in noise. These results suggest that some reading impairments may stem from a central deficit of speech processing.
A model for the pilot's use of motion cues in roll-axis tracking tasks
NASA Technical Reports Server (NTRS)
Levison, W. H.; Junker, A. M.
1977-01-01
Simulated target-following and disturbance-regulation tasks were explored with subjects using visual-only and combined visual and motion cues. The effects of motion cues on task performance and pilot response behavior were appreciably different for the two task configurations and were consistent with data reported in earlier studies for similar task configurations. The optimal-control model for pilot/vehicle systems provided a task-independent framework for accounting for the pilot's use of motion cues. Specifically, the availability of motion cues was modeled by augmenting the set of perceptual variables to include position, rate, acceleration, and acceleration-rate of the motion simulator, and results were consistent with the hypothesis of attention-sharing between visual and motion variables. This straightforward informational model allowed accurate model predictions of the effects of motion cues on a variety of response measures for both the target-following and disturbance-regulation tasks.
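In the optimal-control model framework described, each perceived variable reaches the pilot delayed and corrupted by observation noise, and attention-sharing is usually modeled by scaling each noise covariance inversely with the attention fraction devoted to that cue. A schematic statement in the model's usual notation (assumed rather than quoted from this report):

    y_{p,i}(t) = y_i(t - \tau) + v_{y,i}(t),
    \qquad V_{y,i} = \frac{V_{y,i}^{0}}{f_i},
    \qquad \sum_i f_i \le 1

so adding motion cues enlarges the perceptual set {y_i}, while attention-sharing raises the effective noise V_{y,i} on each individual cue in proportion to how thinly attention f_i is spread.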
Selective Maintenance in Visual Working Memory Does Not Require Sustained Visual Attention
Hollingworth, Andrew; Maxcey-Richard, Ashleigh M.
2012-01-01
In four experiments, we tested whether sustained visual attention is required for the selective maintenance of objects in VWM. Participants performed a color change-detection task. During the retention interval, a valid cue indicated the item that would be tested. Change detection performance was higher in the valid-cue condition than in a neutral-cue control condition. To probe the role of visual attention in the cuing effect, on half of the trials, a difficult search task was inserted after the cue, precluding sustained attention on the cued item. The addition of the search task produced no observable decrement in the magnitude of the cuing effect. In a complementary test, search efficiency was not impaired by simultaneously prioritizing an object for retention in VWM. The results demonstrate that selective maintenance in VWM can be dissociated from the locus of visual attention. PMID:23067118
Tünnermann, Jan; Scharlau, Ingrid
2016-01-01
Peripheral visual cues lead to large shifts in psychometric distributions of temporal-order judgments (TOJs). In one view, such shifts are attributed to attention speeding up processing of the cued stimulus, so-called prior entry. However, sometimes these shifts are so large that it is unlikely that they are caused by attention alone. Here we tested the prevalent alternative explanation that the cue is sometimes confused with the target on a perceptual level, bolstering the shift of the psychometric function. We applied a novel model of cued TOJs, derived from Bundesen's Theory of Visual Attention. We found that cue-target confusions indeed contribute to shifting psychometric functions. However, cue-induced changes in the processing rates of the target stimuli play an important role, too. At smaller cueing intervals, the cue increased the processing speed of the target. At larger intervals, inhibition of return was predominant. Earlier studies of cued TOJs were insensitive to these effects because in psychometric distributions they are concealed by the conjoint effects of cue-target confusions and processing rate changes.
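For reference, the core rate equation of Bundesen's Theory of Visual Attention, from which the authors' model of cued TOJs is derived (this is standard TVA, not the paper's extended model), gives the rate at which object x is encoded as category i:

    v(x, i) = \eta(x, i) \, \beta_i \, \frac{w_x}{\sum_{z \in S} w_z}

where \eta(x, i) is the sensory evidence that x belongs to category i, \beta_i a decision bias, and w_x the attentional weight of x among the objects S in the visual field; cue-induced changes in target processing speed correspond to changes in these rates.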
Dissociating emotion-induced blindness and hypervision.
Bocanegra, Bruno R; Zeelenberg, René
2009-12-01
Previous findings suggest that emotional stimuli sometimes improve (emotion-induced hypervision) and sometimes impair (emotion-induced blindness) the visual perception of subsequent neutral stimuli. We hypothesized that these differential carryover effects might be due to 2 distinct emotional influences in visual processing. On the one hand, emotional stimuli trigger a general enhancement in the efficiency of visual processing that can carry over onto other stimuli. On the other hand, emotional stimuli benefit from a stimulus-specific enhancement in later attentional processing at the expense of competing visual stimuli. We investigated whether detrimental (blindness) and beneficial (hypervision) carryover effects of emotion in perception can be dissociated within a single experimental paradigm. In 2 experiments, we manipulated the temporal competition for attention between an emotional cue word and a subsequent neutral target word by varying cue-target interstimulus interval (ISI) and cue visibility. Interestingly, emotional cues impaired target identification at short ISIs but improved target identification when competition was diminished by either increasing ISI or reducing cue visibility, suggesting that emotional significance of stimuli can improve and impair visual performance through distinct perceptual mechanisms.
Campbell, Dana L M; Hauber, Mark E
2009-08-01
Female zebra finches (Taeniopygia guttata) use visual and acoustic traits for accurate recognition of male conspecifics. Evidence from video playbacks confirms that both sensory modalities are important for conspecific and species discrimination, but experimental evidence of the individual roles of these cue types affecting live conspecific recognition is limited. In a spatial paradigm to test discrimination, the authors used live male zebra finch stimuli of 2 color morphs, wild-type (conspecific) and white with a painted black beak (foreign), producing 1 of 2 vocalization types: songs and calls learned from zebra finch parents (conspecific) or cross-fostered songs and calls learned from Bengalese finch (Lonchura striata var. domestica) foster parents (foreign). The authors found that female zebra finches consistently preferred males with conspecific visual and acoustic cues over males with foreign cues, but did not discriminate when the conspecific and foreign visual and acoustic cues were mismatched. These results indicate the importance of both visual and acoustic features for female zebra finches when discriminating between live conspecific males. Copyright 2009 APA, all rights reserved.
Neural substrates of resisting craving during cigarette cue exposure.
Brody, Arthur L; Mandelkern, Mark A; Olmstead, Richard E; Jou, Jennifer; Tiongson, Emmanuelle; Allen, Valerie; Scheibal, David; London, Edythe D; Monterosso, John R; Tiffany, Stephen T; Korb, Alex; Gan, Joanna J; Cohen, Mark S
2007-09-15
In cigarette smokers, the most commonly reported areas of brain activation during visual cigarette cue exposure are the prefrontal, anterior cingulate, and visual cortices. We sought to determine changes in brain activity in response to cigarette cues when smokers actively resist craving. Forty-two tobacco-dependent smokers underwent functional magnetic resonance imaging, during which they were presented with videotaped cues. Three cue presentation conditions were tested: cigarette cues with subjects allowing themselves to crave (cigarette cue crave), cigarette cues with the instruction to resist craving (cigarette cue resist), and matched neutral cues. Activation was found in the cigarette cue resist (compared with the cigarette cue crave) condition in the left dorsal anterior cingulate cortex (ACC), posterior cingulate cortex (PCC), and precuneus. Lower magnetic resonance signal for the cigarette cue resist condition was found in the cuneus bilaterally, left lateral occipital gyrus, and right postcentral gyrus. These relative activations and deactivations were more robust when the cigarette cue resist condition was compared with the neutral cue condition. Suppressing craving during cigarette cue exposure involves activation of limbic (and related) brain regions and deactivation of primary sensory and motor cortices.
Chiszar, David; Krauss, Susan; Shipley, Bryon; Trout, Tim; Smith, Hobart M
2009-01-01
Five hatchling Komodo Dragons (Varanus komodoensis) at Denver Zoo were observed in two experiments that studied the effects of visual and chemical cues arising from prey. Rate of tongue flicking was recorded in Experiment 1, and amount of time the lizards spent interacting with stimuli was recorded in Experiment 2. Our hypothesis was that young V. komodoensis would be more dependent upon vision than chemoreception, especially when dealing with live, moving, prey. Although visual cues, including prey motion, had a significant effect, chemical cues had a far stronger effect. Implications of this falsification of our initial hypothesis are discussed.
Visual spatial cue use for guiding orientation in two-to-three-year-old children
van den Brink, Danielle; Janzen, Gabriele
2013-01-01
In spatial development representations of the environment and the use of spatial cues change over time. To date, the influence of individual differences in skills relevant for orientation and navigation has not received much attention. The current study investigated orientation abilities on the basis of visual spatial cues in 2–3-year-old children, and assessed factors that possibly influence spatial task performance. Thirty-month and 35-month-olds performed an on-screen Virtual Reality (VR) orientation task searching for an animated target in the presence of visual self-movement cues and landmark information. Results show that, in contrast to 30-month-old children, 35-month-olds were successful in using visual spatial cues for maintaining orientation. Neither age group benefited from landmarks present in the environment, suggesting that successful task performance relied on the use of optic flow cues, rather than object-to-object relations. Analysis of individual differences revealed that 2-year-olds who were relatively more independent in comparison to their peers, as measured by the daily living skills scale of the parental questionnaire Vineland-Screener were most successful at the orientation task. These results support previous findings indicating that the use of various spatial cues gradually improves during early childhood. Our data show that a developmental transition in spatial cue use can be witnessed within a relatively short period of 5 months only. Furthermore, this study indicates that rather than chronological age, individual differences may play a role in successful use of visual cues for spatial updating in an orientation task. Future studies are necessary to assess the exact nature of these individual differences. PMID:24368903
Spielvogel, Ines; Matthes, Jörg; Naderer, Brigitte; Karsay, Kathrin
2018-06-01
Based on cue reactivity theory, food cues embedded in media content can lead to physiological and psychological responses in children. Research suggests that unhealthy food cues are represented more extensively and interactively in children's media environments than healthy ones. However, it is not clear to date whether children react differently to unhealthy compared to healthy food cues. In an experimental study with 56 children (55.4% girls; M age = 8.00, SD = 1.58), we used eye-tracking to determine children's attention to unhealthy and healthy food cues embedded in a narrative cartoon movie. Besides varying the food type (i.e., healthy vs. unhealthy), we also manipulated the integration levels of food cues with characters (i.e., level of food integration; no interaction vs. handling vs. consumption), and we assessed children's individual susceptibility factors by measuring the impact of their hunger level. Our results indicated that unhealthy food cues attract children's visual attention to a larger extent than healthy cues. However, their initial visual interest did not differ between unhealthy and healthy food cues. Furthermore, an increase in the level of food integration led to an increase in visual attention. Our findings showed no moderating impact of hunger. We conclude that especially unhealthy food cues with an interactive connection trigger cue reactivity in children. Copyright © 2018 Elsevier Ltd. All rights reserved.
Validating Visual Cues In Flight Simulator Visual Displays
NASA Astrophysics Data System (ADS)
Aronson, Moses
1987-09-01
Currently, evaluation of visual simulators is performed either by pilot opinion questionnaires or by comparison of aircraft terminal performance. The approach here is to compare pilot performance in the flight simulator with a visual display to his performance doing the same visual task in the aircraft, as an indication that the visual cues are identical. The A-7 Night Carrier Landing task was selected. Performance measures with high predictive validity for pilot performance were used to compare two samples of existing pilot performance data, to test whether the visual cues evoked the same performance. The performance of four pilots making 491 night landing approaches in an A-7 prototype part task trainer was compared with the performance of three pilots performing 27 A-7E carrier landing qualification approaches on the CV-60 aircraft carrier. The results show that the pilots' performances were similar, supporting the conclusion that the visual cues provided in the simulator were identical to those provided in the real-world situation. Differences between the flight simulator's flight characteristics and those of the aircraft have less of an effect than the pilots' individual performances. The measurement parameters used in the comparison can be used for validating the visual display's adequacy for training.
I can see what you are saying: Auditory labels reduce visual search times.
Cho, Kit W
2016-10-01
The present study explored the self-directed-speech effect, the finding that relative to silent reading of a label (e.g., DOG), saying it aloud reduces visual search reaction times (RTs) for locating a target picture among distractors. Experiment 1 examined whether this effect is due to a confound in the differences in the number of cues in self-directed speech (two) vs. silent reading (one) and tested whether self-articulation is required for the effect. The results showed that self-articulation is not required and that merely hearing the auditory label reduces visual search RTs relative to silent reading. This finding also rules out the number of cues confound. Experiment 2 examined whether hearing an auditory label activates more prototypical features of the label's referent and whether the auditory-label benefit is moderated by the target's imagery concordance (the degree to which the target picture matches the mental picture that is activated by a written label for the target). When the target imagery concordance was high, RTs following the presentation of a high prototypicality picture or auditory cue were comparable and shorter than RTs following a visual label or low prototypicality picture cue. However, when the target imagery concordance was low, RTs following an auditory cue were shorter than the comparable RTs following the picture cues and visual-label cue. The results suggest that an auditory label activates both prototypical and atypical features of a concept and can facilitate visual search RTs even when compared to picture primes. Copyright © 2016 Elsevier B.V. All rights reserved.
A Model of Manual Control with Perspective Scene Viewing
NASA Technical Reports Server (NTRS)
Sweet, Barbara Townsend
2013-01-01
A model of manual control during perspective scene viewing is presented, which combines the Crossover Model with a simplified model of perspective-scene viewing and visual-cue selection. The model is developed for a particular example task: an idealized constant-altitude task in which the operator controls longitudinal position in the presence of both longitudinal and pitch disturbances. An experiment is performed to develop and validate the model. The model corresponds closely with the experimental measurements, and identified model parameters are highly consistent with the visual cues available in the perspective scene. The modeling results indicate that operators used one visual cue for position control, and another visual cue for velocity control (lead generation). Additionally, operators responded more quickly to rotation (pitch) than translation (longitudinal).
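For context, the Crossover Model holds that the combined pilot-vehicle open-loop dynamics approximate an integrator with an effective time delay near the crossover frequency (the standard McRuer form; the symbols are the conventional ones, not taken from this report):

    Y_p(j\omega) \, Y_c(j\omega) \approx \frac{\omega_c \, e^{-j\omega \tau_e}}{j\omega}

with \omega_c the crossover frequency and \tau_e the effective pilot delay; the perspective-scene component of the model then determines which visual cues feed the pilot describing function Y_p.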
Zator, Krysten; Katz, Albert N
2017-07-01
Here, we examined linguistic differences in the reports of memories produced by three cueing methods. Two groups of young adults were cued visually either by words representing events or popular cultural phenomena that took place when they were 5, 10, or 16 years of age, or by words referencing a general lifetime period word cue directing them to that period in their life. A third group heard 30-second long musical clips of songs popular during the same three time periods. In each condition, participants typed a specific event memory evoked by the cue and these typed memories were subjected to analysis by the Linguistic Inquiry and Word Count (LIWC) program. Differences in the reports produced indicated that listening to music evoked memories embodied in motor-perceptual systems more so than memories evoked by our word-cueing conditions. Additionally, relative to music cues, lifetime period word cues produced memories with reliably more uses of personal pronouns, past tense terms, and negative emotions. The findings provide evidence for the embodiment of autobiographical memories, and how those differ when the cues emphasise different aspects of the encoded events.
Cai, Weidong; Chen, Tianwen; Ide, Jaime S; Li, Chiang-Shan R; Menon, Vinod
2017-08-01
The ability to anticipate and detect behaviorally salient stimuli is important for virtually all adaptive behaviors, including inhibitory control that requires the withholding of prepotent responses when instructed by external cues. Although right fronto-operculum-insula (FOI), encompassing the anterior insular cortex (rAI) and inferior frontal cortex (rIFC), involvement in inhibitory control is well established, little is known about signaling mechanisms underlying their differential roles in detection and anticipation of salient inhibitory cues. Here we use 2 independent functional magnetic resonance imaging data sets to investigate dynamic causal interactions of the rAI and rIFC, with sensory cortex during detection and anticipation of inhibitory cues. Across 2 different experiments involving auditory and visual inhibitory cues, we demonstrate that primary sensory cortex has a stronger causal influence on rAI than on rIFC, suggesting a greater role for the rAI in detection of salient inhibitory cues. Crucially, a Bayesian prediction model of subjective trial-by-trial changes in inhibitory cue anticipation revealed that the strength of causal influences from rIFC to rAI increased significantly on trials in which participants had higher anticipation of inhibitory cues. Together, these results demonstrate the dissociable bottom-up and top-down roles of distinct FOI regions in detection and anticipation of behaviorally salient cues across multiple sensory modalities. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
An evaluation of two methods for increasing self-initiated verbalizations in autistic children.
Matson, J L; Sevin, J A; Box, M L; Francis, K L; Sevin, B M
1993-01-01
Three children with autism and mental retardation were treated for deficits in self-initiated speech. A novel treatment package employing visual cue fading was compared with a graduated time-delay procedure previously shown to be effective for increasing self-initiated language. Both treatments included training multiple self-initiated verbalizations using multiple therapists and settings. Both treatments were effective, with no differences in measures of acquisition of target phrases, maintenance of behavioral gains, acquisition with additional therapists and settings, and social validity. PMID:8407687
Optimal Appearance Model for Visual Tracking
Wang, Yuru; Jiang, Longkui; Liu, Qiaoyuan; Yin, Minghao
2016-01-01
Many studies argue that integrating multiple cues in an adaptive way increases tracking performance. However, what is the definition of adaptiveness and how to realize it remains an open issue. On the premise that the model with optimal discriminative ability is also optimal for tracking the target, this work realizes adaptiveness and robustness through the optimization of multi-cue integration models. Specifically, based on prior knowledge and current observation, a set of discrete samples are generated to approximate the foreground and background distribution. With the goal of optimizing the classification margin, an objective function is defined, and the appearance model is optimized by introducing optimization algorithms. The proposed optimized appearance model framework is embedded into a particle filter for a field test, and it is demonstrated to be robust against various kinds of complex tracking conditions. This model is general and can be easily extended to other parameterized multi-cue models. PMID:26789639
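A minimal sketch of the kind of multi-cue particle filter described here, with toy cue likelihoods and fixed integration weights standing in for the optimized appearance model (every function, weight, and constant below is an illustrative assumption, not the authors' algorithm):

    import numpy as np

    rng = np.random.default_rng(1)

    def color_likelihood(particles):    # placeholder cue models: in a real
        return np.exp(-0.5 * np.sum((particles - 50.0) ** 2, axis=1) / 400.0)

    def edge_likelihood(particles):     # tracker these would score image patches
        return np.exp(-0.5 * np.sum((particles - 52.0) ** 2, axis=1) / 900.0)

    def track_step(particles, weights, cue_weights=(0.6, 0.4)):
        """One predict-update-resample cycle of a multi-cue particle filter."""
        particles = particles + rng.normal(0.0, 2.0, particles.shape)  # random-walk motion
        # Integrate cues multiplicatively, each raised to its integration weight.
        lik = (color_likelihood(particles) ** cue_weights[0]
               * edge_likelihood(particles) ** cue_weights[1])
        weights = weights * lik
        weights /= weights.sum()
        idx = rng.choice(len(particles), size=len(particles), p=weights)  # resample
        return particles[idx], np.full(len(particles), 1.0 / len(particles))

    particles = rng.uniform(0, 100, size=(500, 2))   # (x, y) target hypotheses
    weights = np.full(500, 1.0 / 500)
    for _ in range(20):
        particles, weights = track_step(particles, weights)
    print("estimated target position:", particles.mean(axis=0))

In the paper's framing, the cue integration weights would themselves be re-optimized online against sampled foreground/background distributions to maximize the classification margin, rather than held fixed as in this sketch.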
Díaz-Santos, Mirella; Cao, Bo; Mauro, Samantha A.; Yazdanbakhsh, Arash; Neargarder, Sandy; Cronin-Golomb, Alice
2017-01-01
Parkinson’s disease (PD) and normal aging have been associated with changes in visual perception, including reliance on external cues to guide behavior. This raises the question of the extent to which these groups use visual cues when disambiguating information. Twenty-seven individuals with PD, 23 normal control adults (NC), and 20 younger adults (YA) were presented a Necker cube in which one face was highlighted by thickening the lines defining the face. The hypothesis was that the visual cues would help PD and NC to exert better control over bistable perception. There were three conditions, including passive viewing and two volitional-control conditions (hold one percept in front; and switch: speed up the alternation between the two). In the Hold condition, the cue was either consistent or inconsistent with task instructions. Mean dominance durations (time spent on each percept) under passive viewing were comparable in PD and NC, and shorter in YA. PD and YA increased dominance durations in the Hold cue-consistent condition relative to NC, meaning that appropriate cues helped PD but not NC hold one perceptual interpretation. By contrast, in the Switch condition, NC and YA decreased dominance durations relative to PD, meaning that the use of cues helped NC but not PD in expediting the switch between percepts. Provision of low-level cues has effects on volitional control in PD that are different from those in normal aging, and only under task-specific conditions does the use of such cues facilitate the resolution of perceptual ambiguity. PMID:25765890
Simon Effect with and without Awareness of the Accessory Stimulus
ERIC Educational Resources Information Center
Treccani, Barbara; Umilta, Carlo; Tagliabue, Mariaelena
2006-01-01
The authors investigated whether a Simon effect could be observed in an accessory-stimulus Simon task when participants were unaware of the task-irrelevant accessory cue. In Experiment 1A a central visual target was accompanied by a suprathreshold visual lateral cue. A regular Simon effect (i.e., faster cue-response corresponding reaction times…
Infants' Selective Attention to Reliable Visual Cues in the Presence of Salient Distractors
ERIC Educational Resources Information Center
Tummeltshammer, Kristen Swan; Mareschal, Denis; Kirkham, Natasha Z.
2014-01-01
With many features competing for attention in their visual environment, infants must learn to deploy attention toward informative cues while ignoring distractions. Three eye tracking experiments were conducted to investigate whether 6- and 8-month-olds (total N = 102) would shift attention away from a distractor stimulus to learn a cue-reward…
Atypical Visual Orienting to Gaze- and Arrow-Cues in Adults with High Functioning Autism
ERIC Educational Resources Information Center
Vlamings, Petra H. J. M.; Stauder, Johannes E. A.; van Son, Ilona A. M.; Mottron, Laurent
2005-01-01
The present study investigates visual orienting to directional cues (arrow or eyes) in adults with high functioning autism (n = 19) and age matched controls (n = 19). A choice reaction time paradigm is used in which eye-or arrow direction correctly (congruent) or incorrectly (incongruent) cues target location. In typically developing participants,…
Integration of visual and motion cues for simulator requirements and ride quality investigation
NASA Technical Reports Server (NTRS)
Young, L. R.
1976-01-01
Practical tools which can extend the state of the art of moving-base flight simulation for research and training are developed. Main approaches to this research effort include: (1) application of the vestibular model for perception of orientation based on motion cues, including optimum simulator motion controls; and (2) visual cues in landing.
Automaticity of phasic alertness: evidence for a three-component model of visual cueing
Lin, Zhicheng; Lu, Zhong-Lin
2017-01-01
The automaticity of phasic alertness is investigated using the attention network test. Results show that the cueing effect from the alerting cue—double cue—is strongly enhanced by the task relevance of visual cues, as determined by the informativeness of the orienting cue—single cue—that is being mixed (80% vs. 50% valid in predicting where the target will appear). Counterintuitively, the cueing effect from the alerting cue can be negatively affected by its visibility, such that masking the cue from awareness can reveal a cueing effect that is otherwise absent when the cue is visible. Evidently, top-down influences—in the form of contextual relevance and cue awareness—can have opposite influences on the cueing effect by the alerting cue. These findings lead us to the view that a visual cue can engage three components of attention—orienting, alerting, and inhibition—to determine the behavioral cueing effect. We propose that phasic alertness, particularly in the form of specific response readiness, is regulated by both internal, top-down expectation and external, bottom-up stimulus properties. In contrast to some existing views, we advance the perspective that phasic alertness is strongly tied to temporal orienting, attentional capture, and spatial orienting. Finally, we discuss how translating attention research to clinical applications would benefit from an improved ability to measure attention. To this end, controlling the degree of intraindividual variability in the attentional components and improving the precision of the measurement tools may prove vital. PMID:27173487
Memory for Drug Related Visual Stimuli in Young Adult, Cocaine Dependent Polydrug Users
Ray, Suchismita; Pandina, Robert; Bates, Marsha E.
2015-01-01
Background and Objectives Implicit (unconscious) and explicit (conscious) memory associations with drugs have been examined primarily using verbal cues. However, drug seeking, drug use behaviors, and relapse in chronic cocaine and other drug users are frequently triggered by viewing substance related visual cues in the environment. We thus examined implicit and explicit memory for drug picture cues to understand the relative extent to which conscious and unconscious memory facilitation of visual drug cues occurs during cocaine dependence. Methods Memory for drug related and neutral picture cues was assessed in 14 inpatient cocaine dependent polydrug users and a comparison group of 21 young adults with limited drug experience (N = 35). Participants completed picture cue exposure, free recall and recognition tasks to assess explicit memory, and a repetition priming task to assess implicit memory. Results Drug cues, compared to neutral cues were better explicitly recalled and implicitly primed, and especially so in the cocaine group. In contrast, neutral cues were better explicitly recognized, and especially in the control group. Conclusion Certain forms of explicit and implicit memory for drug cues were enhanced in cocaine users compared to controls when memory was tested a short time following cue exposure. Enhanced unconscious memory processing of drug cues in chronic cocaine users may be a behavioral manifestation of heightened drug cue salience that supports drug seeking and taking. There may be value in expanding intervention techniques to utilize cocaine users’ implicit memory system. PMID:24588421
Food and conspecific chemical cues modify visual behavior of zebrafish, Danio rerio.
Stephenson, Jessica F; Partridge, Julian C; Whitlock, Kathleen E
2012-06-01
Animals use the different qualities of olfactory and visual sensory information to make decisions. Ethological and electrophysiological evidence suggests that there is cross-modal priming between these sensory systems in fish. We present the first experimental study showing that ecologically relevant chemical mixtures alter visual behavior, using adult male and female zebrafish, Danio rerio. Neutral-density filters were used to attenuate the light reaching the tank to an initial light intensity of 2.3×10^16 photons/s/m^2. Fish were exposed to food cue and to alarm cue. The light intensity was then increased by the removal of one layer of filter (nominal absorbance 0.3) every minute until, after 10 minutes, the light level was 15.5×10^16 photons/s/m^2. Adult male and female zebrafish responded to a moving visual stimulus at lower light levels if they had been first exposed to food cue, or to conspecific alarm cue. These results suggest the need for more integrative studies of sensory biology.
Mackrous, I; Simoneau, M
2011-11-10
Following body rotation, optimal updating of the position of a memorized target is attained when retinal error is perceived and a corrective saccade is performed. Thus, it appears that these processes may enable the calibration of the vestibular system by facilitating the sharing of information between both reference frames. Here, it is assessed whether having sensory information regarding body rotation in the target reference frame could enhance an individual's learning rate to predict the position of an earth-fixed target. During rotation, participants had to respond when they felt their body midline had crossed the position of the target and received knowledge of result. During practice blocks, for two groups, visual cues were displayed in the same reference frame as the target, whereas a third group relied on vestibular information (vestibular-only group) to predict the location of the target. Participants unaware of the role of the visual cues (visual cues group) learned to predict the location of the target, and spatial error decreased from 16.2 to 2.0°, reflecting a learning rate of 34.08 trials (determined from fitting a falling exponential model). In contrast, the group aware of the role of the visual cues (explicit visual cues group) showed a faster learning rate (i.e. 2.66 trials) but a similar final spatial error of 2.9°. For the vestibular-only group, similar accuracy was achieved (final spatial error of 2.3°), but the learning rate was much slower (i.e. 43.29 trials). Transferring to the Post-test (no visual cues and no knowledge of result) increased the spatial error of the explicit visual cues group (9.5°), but it did not change the performance of the vestibular group (1.2°). Overall, these results imply that cognition assists the brain in processing the sensory information within the target reference frame. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
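The "falling exponential model" used here to extract learning rates is presumably of the standard form (symbols assumed, not taken from the paper), with spatial error decaying from an initial level toward an asymptote at a rate \lambda expressed in trials:

    E(n) = E_{\infty} + (E_0 - E_{\infty}) \, e^{-n/\lambda}

so a fitted \lambda of about 2.66 trials (explicit visual cues) versus 34.08 (visual cues) or 43.29 trials (vestibular only) quantifies how much faster the cue-aware group converged to a comparable final error.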
Stacey, Paula C.; Kitterick, Pádraig T.; Morris, Saffron D.; Sumner, Christian J.
2017-01-01
Understanding what is said in demanding listening situations is assisted greatly by looking at the face of a talker. Previous studies have observed that normal-hearing listeners can benefit from this visual information when a talker's voice is presented in background noise. These benefits have also been observed in quiet listening conditions in cochlear-implant users, whose device does not convey the informative temporal fine structure cues in speech, and when normal-hearing individuals listen to speech processed to remove these informative temporal fine structure cues. The current study (1) characterised the benefits of visual information when listening in background noise; and (2) used sine-wave vocoding to compare the size of the visual benefit when speech is presented with or without informative temporal fine structure. The accuracy with which normal-hearing individuals reported words in spoken sentences was assessed across three experiments. The availability of visual information and informative temporal fine structure cues was varied within and across the experiments. The results showed that visual benefit was observed using open- and closed-set tests of speech perception. The size of the benefit increased when informative temporal fine structure cues were removed. This finding suggests that visual information may play an important role in the ability of cochlear-implant users to understand speech in many everyday situations. Models of audio-visual integration were able to account for the additional benefit of visual information when speech was degraded and suggested that auditory and visual information was being integrated in a similar way in all conditions. The modelling results were consistent with the notion that audio-visual benefit is derived from the optimal combination of auditory and visual sensory cues. PMID:27085797
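The closing sentence appeals to the optimal combination of auditory and visual cues. The standard formalization of such optimality is maximum-likelihood integration, in which each cue's estimate is weighted by its reliability (inverse variance); the paper's specific models may differ, so the following is only the generic reference form.

```latex
\hat{s}_{AV} = w_A \hat{s}_A + w_V \hat{s}_V, \qquad
w_A = \frac{1/\sigma_A^2}{1/\sigma_A^2 + 1/\sigma_V^2}, \quad
w_V = \frac{1/\sigma_V^2}{1/\sigma_A^2 + 1/\sigma_V^2}, \qquad
\sigma_{AV}^2 = \left( \frac{1}{\sigma_A^2} + \frac{1}{\sigma_V^2} \right)^{-1}
```

Because the combined variance is never larger than either unimodal variance, this formulation predicts a larger visual benefit as the auditory signal is degraded, consistent with the result reported above.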
Visual tracking of da Vinci instruments for laparoscopic surgery
NASA Astrophysics Data System (ADS)
Speidel, S.; Kuhn, E.; Bodenstedt, S.; Röhl, S.; Kenngott, H.; Müller-Stich, B.; Dillmann, R.
2014-03-01
Intraoperative tracking of laparoscopic instruments is a prerequisite to realize further assistance functions. Since endoscopic images are always available, this sensor input can be used to localize the instruments without special devices or robot kinematics. In this paper, we present an image-based markerless 3D tracking of different da Vinci instruments in near real-time without an explicit model. The method is based on different visual cues to segment the instrument tip, calculates a tip point and uses a multiple object particle filter for tracking. The accuracy and robustness are evaluated with in vivo data.
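As a point of reference for the tracking stage, here is a minimal, generic bootstrap particle filter for a single 2D tip point. The random-walk motion model and Gaussian observation likelihood are illustrative assumptions; the paper's multi-object filter and visual-cue likelihoods are more elaborate.

```python
# Generic bootstrap particle filter for one 2D tip point (illustrative sketch,
# not the authors' implementation).
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observation,
                         motion_noise=5.0, obs_noise=10.0):
    # Predict: random-walk motion model in image coordinates.
    particles = particles + rng.normal(0, motion_noise, particles.shape)
    # Update: Gaussian likelihood of the segmented tip point given each particle.
    d2 = np.sum((particles - observation) ** 2, axis=1)
    weights = weights * np.exp(-d2 / (2 * obs_noise ** 2))
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < len(weights) / 2:
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles, weights = particles[idx], np.full(len(weights), 1.0 / len(weights))
    return particles, weights

particles = rng.uniform(0, 640, (500, 2))           # image-plane hypotheses
weights = np.full(500, 1 / 500)
for obs in [np.array([320.0, 240.0]), np.array([325.0, 243.0])]:
    particles, weights = particle_filter_step(particles, weights, obs)
print("estimated tip:", np.average(particles, axis=0, weights=weights))
```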
Smith, Tim J.; Senju, Atsushi
2017-01-01
While numerous studies have demonstrated that infants and adults preferentially orient to social stimuli, it remains unclear as to what drives such preferential orienting. It has been suggested that the learned association between social cues and subsequent reward delivery might shape such social orienting. Using a novel, spontaneous indication of reinforcement learning (with the use of a gaze contingent reward-learning task), we investigated whether children's and adults' orienting towards social and non-social visual cues can be elicited by the association between participants' visual attention and a rewarding outcome. Critically, we assessed whether the engaging nature of the social cues influences the process of reinforcement learning. Both children and adults learned to orient more often to the visual cues associated with reward delivery, demonstrating that cue–reward association reinforced visual orienting. More importantly, when the reward-predictive cue was social and engaging, both children and adults learned the cue–reward association faster and more efficiently than when the reward-predictive cue was social but non-engaging. These new findings indicate that engaging social cues have a positive incentive value. This could possibly be because they usually coincide with positive outcomes in real life, which could partly drive the development of social orienting. PMID:28250186
Working memory enhances visual perception: evidence from signal detection analysis.
Soto, David; Wriglesworth, Alice; Bahrami-Balani, Alex; Humphreys, Glyn W
2010-03-01
We show that perceptual sensitivity to visual stimuli can be modulated by matches between the contents of working memory (WM) and stimuli in the visual field. Observers were presented with an object cue (to hold in WM or to merely attend) and subsequently had to identify a brief target presented within a colored shape. The cue could be re-presented in the display, where it surrounded either the target (on valid trials) or a distractor (on invalid trials). Perceptual identification of the target, as indexed by A', was enhanced on valid relative to invalid trials but only when the cue was kept in WM. There was minimal effect of the cue when it was merely attended and not kept in WM. Verbal cues were as effective as visual cues at modulating perceptual identification, and the effects were independent of the effects of target saliency. Matches to the contents of WM influenced perceptual sensitivity even under conditions that minimized competition for selecting the target. WM cues were also effective when targets were less likely to fall in a repeated WM stimulus than in other stimuli in the search display. There were no effects of WM on decisional criteria, in contrast to sensitivity. The findings suggest that reentrant feedback from WM can affect early stages of perceptual processing.
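Perceptual identification here is indexed by the nonparametric sensitivity measure A'. For reference, a minimal sketch of its standard computation (Pollack and Norman's approximation) follows; the hit and false-alarm rates below are made up for illustration, not data from the study.

```python
# Nonparametric sensitivity index A' from hit and false-alarm rates
# (standard approximation; illustrative inputs).
def a_prime(hit_rate, fa_rate):
    if hit_rate >= fa_rate:
        return 0.5 + ((hit_rate - fa_rate) * (1 + hit_rate - fa_rate)) \
               / (4 * hit_rate * (1 - fa_rate))
    # Symmetric form when false alarms exceed hits.
    return 0.5 - ((fa_rate - hit_rate) * (1 + fa_rate - hit_rate)) \
           / (4 * fa_rate * (1 - hit_rate))

print(a_prime(0.80, 0.20))  # e.g., a valid-trial condition -> 0.875
print(a_prime(0.70, 0.30))  # e.g., an invalid-trial condition -> ~0.786
```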
Liu, Sisi; Liu, Duo; Pan, Zhihui; Xu, Zhengye
2018-03-25
A growing body of research suggests that visual-spatial attention is important for reading achievement. However, few studies have been conducted in non-alphabetic orthographies. This study extended the current research to reading development in Chinese, a logographic writing system known for its visual complexity. Eighty Hong Kong Chinese children were selected and divided into poor reader and typical reader groups, based on their performance on measures of reading fluency, Chinese character reading, and reading comprehension. The poor and typical readers were matched on age and nonverbal intelligence. A Posner spatial cueing task was adopted to measure the exogenous and endogenous orienting of visual-spatial attention. Although the typical readers showed the cueing effect in the central cue condition (i.e., responses to targets following valid cues were faster than those to targets following invalid cues), the poor readers did not respond differently in valid and invalid conditions, suggesting an impairment of the endogenous orienting of attention. The two groups, however, showed a similar cueing effect in the peripheral cue condition, indicating intact exogenous orienting in the poor readers. These findings generally supported a link between the orienting of covert attention and Chinese reading, providing evidence for the attentional-deficit theory of dyslexia. Copyright © 2018 John Wiley & Sons, Ltd.
Hecht, Marcus; Thiemann, Ulf; Freitag, Christine M; Bender, Stephan
2016-01-15
Post-perceptual cues can enhance visual short term memory encoding even after the offset of the visual stimulus. However, both the mechanisms by which the sensory stimulus characteristics are buffered as well as the mechanisms by which post-perceptual selective attention enhances short term memory encoding remain unclear. We analyzed late post-perceptual event-related potentials (ERPs) in visual change detection tasks (100ms stimulus duration) by high-resolution ERP analysis to elucidate these mechanisms. The effects of early and late auditory post-cues (300ms or 850ms after visual stimulus onset) as well as the effects of a visual interference stimulus were examined in 27 healthy right-handed adults. Focusing attention with post-perceptual cues at both latencies significantly improved memory performance, i.e. sensory stimulus characteristics were available for up to 850ms after stimulus presentation. Passive watching of the visual stimuli without auditory cue presentation evoked a slow negative wave (N700) over occipito-temporal visual areas. N700 was strongly reduced by a visual interference stimulus which impeded memory maintenance. In contrast, contralateral delay activity (CDA) still developed in this condition after the application of auditory post-cues and was thereby dissociated from N700. CDA and N700 seem to represent two different processes involved in short term memory encoding. While N700 could reflect visual post processing by automatic attention attraction, CDA may reflect the top-down process of searching selectively for the required information through post-perceptual attention. Copyright © 2015 Elsevier Inc. All rights reserved.
Śmigasiewicz, Kamila; Hasan, Gabriel Sami; Verleger, Rolf
2017-01-01
In dynamically changing environments, spatial attention is not equally distributed across the visual field. For instance, when two streams of stimuli are presented left and right, the second target (T2) is better identified in the left visual field (LVF) than in the right visual field (RVF). Recently, it has been shown that this bias is related to weaker stimulus-driven orienting of attention toward the RVF: the RVF disadvantage was reduced with salient task-irrelevant valid cues and increased with invalid cues. Here we studied whether endogenous orienting of attention may also compensate for this unequal distribution of stimulus-driven attention. Explicit information was provided about the location of T1 and T2. Effectiveness of the cue manipulation was confirmed by EEG measures: decreasing alpha power before stream onset with informative cues, earlier latencies of potentials evoked by T1-preceding distractors at the right than at the left hemisphere when T1 was cued left, and decreasing T1- and T2-evoked N2pc amplitudes with informative cues. Importantly, informative cues reduced (though did not completely abolish) the LVF advantage, indicated by improved identification of right T2, and reflected by earlier N2pc latency evoked by right T2 and a larger decrease in alpha power after cues indicating right T2. Overall, these results suggest that endogenously driven attention facilitates stimulus-driven orienting of attention toward the RVF, thereby partially overcoming the basic LVF bias in spatial attention.
Attentional Capture of Objects Referred to by Spoken Language
ERIC Educational Resources Information Center
Salverda, Anne Pier; Altmann, Gerry T. M.
2011-01-01
Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants…
Zhang, Jin-Tao; Yao, Yuan-Wei; Potenza, Marc N; Xia, Cui-Cui; Lan, Jing; Liu, Lu; Wang, Ling-Jiao; Liu, Ben; Ma, Shan-Shan; Fang, Xiao-Yi
2016-01-01
Internet gaming disorder (IGD) is characterized by high levels of craving for online gaming and related cues. Since addiction-related cues can evoke increased activation in brain areas involved in motivational and reward processing and may engender gaming behaviors or trigger relapse, ameliorating cue-induced craving may be a promising target for interventions for IGD. This study compared neural activation between 40 IGD and 19 healthy control (HC) subjects during an Internet-gaming cue-reactivity task and found that IGD subjects showed stronger activation in multiple brain areas, including the dorsal striatum, brainstem, substantia nigra, and anterior cingulate cortex, but lower activation in the posterior insula. Furthermore, twenty-three IGD subjects (CBI + group) participated in a craving behavioral intervention (CBI) group therapy, whereas the remaining 17 IGD subjects (CBI - group) did not receive any intervention, and all IGD subjects were scanned during similar time intervals. The CBI + group showed decreased IGD severity and cue-induced craving, enhanced activation in the anterior insula and decreased insular connectivity with the lingual gyrus and precuneus after receiving CBI. These findings suggest that CBI is effective in reducing craving and severity in IGD, and it may exert its effects by altering insula activation and its connectivity with regions involved in visual processing and attention bias.
Burckley, Elizabeth; Tincani, Matt; Guld Fisher, Amanda
2015-04-01
To evaluate the iPad 2™ with Book Creator™ software to provide visual cues and video prompting to teach shopping skills in the community to a young adult with an autism spectrum disorder and intellectual disability. A multiple probe across settings design was used to assess effects of the intervention on the participant's independence with following a shopping list in a grocery store across three community locations. Visual cues and video prompting substantially increased the participant's shopping skills within two of the three community locations, skill increases maintained after the intervention was withdrawn, and shopping skills generalized to two untaught shopping items. Social validity surveys suggested that the participant's parent and staff favorably viewed the goals, procedures, and outcomes of intervention. The iPad 2™ with Book Creator™ software may be an effective way to teach independent shopping skills in the community; additional replications are needed.
High Accuracy Monocular SFM and Scale Correction for Autonomous Driving.
Song, Shiyu; Chandraker, Manmohan; Guest, Clark C
2016-04-01
We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use known height of the camera from the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane.
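The scale-correction step described above rests on simple arithmetic: monocular SFM recovers structure only up to scale, so the known real-world camera height fixes the metric scale once the ground plane is estimated. The sketch below illustrates this idea under an assumed plane parameterization; it is not the paper's implementation, which adapts cue covariances per frame.

```python
# Minimal sketch of metric scale recovery from a known camera height.
# Ground plane: n.x + d = 0 in (scale-ambiguous) SFM coordinates; camera at origin.
import numpy as np

def scale_from_ground_plane(plane_normal, plane_d, known_camera_height_m):
    n = np.asarray(plane_normal, dtype=float)
    estimated_height_sfm = abs(plane_d) / np.linalg.norm(n)  # point-plane distance
    return known_camera_height_m / estimated_height_sfm

scale = scale_from_ground_plane([0.0, 1.0, 0.05], -0.83, known_camera_height_m=1.70)
print(f"metric scale factor: {scale:.3f}")  # multiply SFM translations by this
```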
Spatial context learning survives interference from working memory load
Vickery, Timothy J.; Sussman, Rachel S.; Jiang, Yuhong V.
2010-01-01
The human visual system is constantly confronted with an overwhelming amount of information, only a subset of which can be processed in complete detail. Attention and implicit learning are two important mechanisms that optimize vision. This study addresses the relationship between these two mechanisms. Specifically we ask: Is implicit learning of spatial context affected by the amount of working memory load devoted to an irrelevant task? We tested observers in visual search tasks where search displays occasionally repeated. Observers became faster searching repeated displays than unrepeated ones, showing contextual cueing. We found that the size of contextual cueing was unaffected by whether observers learned repeated displays under unitary attention or when their attention was divided using working memory manipulations. These results held when working memory was loaded by colors, dot patterns, individual dot locations, or multiple potential targets. We conclude that spatial context learning is robust to interference from manipulations that limit the availability of attention and working memory. PMID:20853996
Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli
Störmer, Viola S.; McDonald, John J.; Hillyard, Steven A.
2009-01-01
The question of whether attention makes sensory impressions appear more intense has been a matter of debate for over a century. Recent psychophysical studies have reported that attention increases apparent contrast of visual stimuli, but the issue continues to be debated. We obtained converging neurophysiological evidence from human observers as they judged the relative contrast of visual stimuli presented to the left and right visual fields following a lateralized auditory cue. Cross-modal cueing of attention boosted the apparent contrast of the visual target in association with an enlarged neural response in the contralateral visual cortex that began within 100 ms after target onset. The magnitude of the enhanced neural response was positively correlated with perceptual reports of the cued target being higher in contrast. The results suggest that attention increases the perceived contrast of visual stimuli by boosting early sensory processing in the visual cortex. PMID:20007778
Wong, Yvonne J; Aldcroft, Adrian J; Large, Mary-Ellen; Culham, Jody C; Vilis, Tutis
2009-12-01
We examined the role of temporal synchrony (the simultaneous appearance of visual features) in the perceptual and neural processes underlying object persistence. When a binding cue (such as color or motion) momentarily exposes an object from a background of similar elements, viewers remain aware of the object for several seconds before it perceptually fades into the background, a phenomenon known as object persistence. We showed that persistence from temporal stimulus synchrony, like that arising from motion and color, is associated with activation in the lateral occipital (LO) area, as measured by functional magnetic resonance imaging. We also compared the distribution of occipital cortex activity related to persistence to that of iconic visual memory. Although activation related to iconic memory was largely confined to LO, activation related to object persistence was present across V1 to LO, peaking in V3 and V4, regardless of the binding cue (temporal synchrony, motion, or color). Although persistence from motion cues was not associated with higher activation in the MT+ motion complex, persistence from color cues was associated with increased activation in V4. Taken together, these results demonstrate that although persistence is a form of visual memory, it relies on neural mechanisms different from those of iconic memory. That is, persistence not only activates LO in a cue-independent manner, it also recruits visual areas that may be necessary to maintain binding between object elements.
The effects of auditory and visual cues on timing synchronicity for robotic rehabilitation.
English, Brittney A; Howard, Ayanna M
2017-07-01
In this paper, we explore how the integration of auditory and visual cues can help teach the timing of motor skills for the purpose of motor function rehabilitation. We conducted a study using Amazon's Mechanical Turk in which 106 participants played a virtual therapy game requiring wrist movements. To validate that our results would translate to trends that could also be observed during robotic rehabilitation sessions, we recreated this experiment with 11 participants using a robotic wrist rehabilitation system as a means to control the therapy game. During interaction with the therapy game, users were asked to learn and reconstruct a tapping sequence as defined by musical notes flashing on the screen. Participants were divided into 2 test groups: (1) control: participants only received visual cues to prompt them on the timing sequence, and (2) experimental: participants received both visual and auditory cues to prompt them on the timing sequence. To evaluate performance, the timing and length of the sequence were measured. Performance was determined by calculating the number of trials needed before the participant was able to master the specific aspect of the timing task. In the virtual experiment, the group that received visual and auditory cues was able to master all aspects of the timing task faster than the visual-cue-only group with p-values < 0.05. This trend was also verified for participants using the robotic arm exoskeleton in the physical experiment.
Visual landmarks facilitate rodent spatial navigation in virtual reality environments
Youngstrom, Isaac A.; Strowbridge, Ben W.
2012-01-01
Because many different sensory modalities contribute to spatial learning in rodents, it has been difficult to determine whether spatial navigation can be guided solely by visual cues. Rodents moving within physical environments with visual cues engage a variety of nonvisual sensory systems that cannot be easily inhibited without lesioning brain areas. Virtual reality offers a unique approach to ask whether visual landmark cues alone are sufficient to improve performance in a spatial task. We found that mice could learn to navigate between two water reward locations along a virtual bidirectional linear track using a spherical treadmill. Mice exposed to a virtual environment with vivid visual cues rendered on a single monitor increased their performance over a 3-d training regimen. Training significantly increased the percentage of time avatars controlled by the mice spent near reward locations in probe trials without water rewards. Neither improvement during training nor spatial learning for reward locations occurred with mice operating a virtual environment without vivid landmarks or with mice deprived of all visual feedback. Mice operating the vivid environment developed stereotyped avatar turning behaviors when alternating between reward zones that were positively correlated with their performance on the probe trial. These results suggest that mice are able to learn to navigate to specific locations using only visual cues presented within a virtual environment rendered on a single computer monitor. PMID:22345484
Making the invisible visible: verbal but not visual cues enhance visual detection.
Lupyan, Gary; Spivey, Michael J
2010-07-07
Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual, cues. Participants completed an object detection task in which they made an object-presence or -absence decision to briefly presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d'). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception.
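The sensitivity index d' reported above is the standard signal-detection measure. For reference, it is computed from hit and false-alarm rates as follows, with z(·) the inverse of the standard normal cumulative distribution function; this is the textbook definition, not a detail specific to this study.

```latex
d' = z(\mathrm{hit\ rate}) - z(\mathrm{false\mbox{-}alarm\ rate})
```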
Orienting attention within visual short-term memory: development and mechanisms.
Shimi, Andria; Nobre, Anna C; Astle, Duncan; Scerif, Gaia
2014-01-01
How does developing attentional control operate within visual short-term memory (VSTM)? Seven-year-olds, 11-year-olds, and adults (total n = 205) were asked to report whether probe items were part of preceding visual arrays. In Experiment 1, central or peripheral cues oriented attention to the location of to-be-probed items either prior to encoding or during maintenance. Cues improved memory regardless of their position, but younger children benefited less from cues presented during maintenance, and these benefits related to VSTM span over and above basic memory in uncued trials. In Experiment 2, cues of low validity eliminated benefits, suggesting that even the youngest children use cues voluntarily, rather than automatically. These findings elucidate the close coupling between developing visuospatial attentional control and VSTM. © 2013 The Authors. Child Development © 2013 Society for Research in Child Development, Inc.
Willander, Johan; Sikström, Sverker; Karlsson, Kristina
2015-01-01
Previous studies on autobiographical memory have focused on unimodal retrieval cues (i.e., cues pertaining to one modality). However, from an ecological perspective, multimodal cues (i.e., cues pertaining to several modalities) are highly important to investigate. In the present study we investigated age distributions and experiential ratings of autobiographical memories retrieved with unimodal and multimodal cues. Sixty-two participants were randomized to one of four cue conditions: visual, olfactory, auditory, or multimodal. The results showed that the peak of the distributions depends on the modality of the retrieval cue. The results indicated that multimodal retrieval seemed to be driven to a larger extent by visual and auditory information and to a lesser extent by olfactory information. Finally, no differences were observed in the number of retrieved memories or experiential ratings across the four cue conditions.
Hébert, Marie; Bulla, Jan; Vivien, Denis; Agin, Véronique
2017-01-01
Animals use distal and proximal visual cues to accurately navigate in their environment, with the possibility of the occurrence of associative mechanisms such as cue competition as previously reported in honey-bees, rats, birds and humans. In this pilot study, we investigated one of the most common forms of cue competition, namely the overshadowing effect, between visual landmarks during spatial learning in mice. To this end, C57BL/6J × Sv129 mice were given a two-trial place recognition task in a T-maze, based on a novelty free-choice exploration paradigm previously developed to study spatial memory in rodents. As this procedure implies the use of different aspects of the environment to navigate (i.e., mice can perceive from each arm of the maze), we manipulated the distal and proximal visual landmarks during both the acquisition and retrieval phases. Our prospective findings provide a first set of clues in favor of the occurrence of an overshadowing between visual cues during a spatial learning task in mice when both types of cues are of the same modality but at varying distances from the goal. In addition, the observed overshadowing seems to be non-reciprocal, as distal visual cues tend to overshadow the proximal ones when competition occurs, but not vice versa. The results of the present study offer a first insight about the occurrence of associative mechanisms during spatial learning in mice, and may open the way to promising new investigations in this area of research. Furthermore, the methodology used in this study brings a new, useful and easy-to-use tool for the investigation of perceptive, cognitive and/or attentional deficits in rodents. PMID:28634446
Crajé, Céline; Santello, Marco; Gordon, Andrew M
2013-01-01
Anticipatory force planning during grasping is based on visual cues about the object's physical properties and sensorimotor memories of previous actions with grasped objects. Vision can be used to estimate object mass based on the object size to identify and recall sensorimotor memories of previously manipulated objects. It is not known whether subjects can use density cues to identify the object's center of mass (CM) and create compensatory moments in an anticipatory fashion during initial object lifts to prevent tilt. We asked subjects (n = 8) to estimate CM location of visually symmetric objects of uniform densities (plastic or brass, symmetric CM) and non-uniform densities (mixture of plastic and brass, asymmetric CM). We then asked whether subjects can use density cues to scale fingertip forces when lifting the visually symmetric objects of uniform and non-uniform densities. Subjects were able to accurately estimate an object's center of mass based on visual density cues. When the mass distribution was uniform, subjects could scale their fingertip forces in an anticipatory fashion based on the estimation. However, despite their ability to explicitly estimate CM location when object density was non-uniform, subjects were unable to scale their fingertip forces to create a compensatory moment and prevent tilt on initial lifts. Hefting object parts in the hand before the experiment did not affect this ability. This suggests a dichotomy between the ability to accurately identify the object's CM location for objects with non-uniform density cues and the ability to utilize this information to correctly scale their fingertip forces. These results are discussed in the context of possible neural mechanisms underlying sensorimotor integration linking visual cues and anticipatory control of grasping.
2006-08-01
[Extraction residue from a technical report's front matter; recoverable content: the study used the National Aeronautics and Space Administration (NASA) Task Load Index (TLX); appendices included a SITREP questionnaire example, the NASA-TLX, a demographic questionnaire, and a post-test questionnaire; figures reported mean ratings of physical and temporal demand by cue condition.]
Visual cues for the retrieval of landmark memories by navigating wood ants.
Harris, Robert A; Graham, Paul; Collett, Thomas S
2007-01-23
Even on short routes, ants can be guided by multiple visual memories. We investigate here the cues controlling memory retrieval as wood ants approach a one- or two-edged landmark to collect sucrose at a point along its base. In such tasks, ants store the desired retinal position of landmark edges at several points along their route. They guide subsequent trips by retrieving the appropriate memory and moving to bring the edges in the scene toward the stored positions. The apparent width of the landmark turns out to be a powerful cue for retrieving the desired retinal position of a landmark edge. Two other potential cues, the landmark's apparent height and the distance that the ant walks, have little effect on memory retrieval. A simple model encapsulates these conclusions and reproduces the ants' routes in several conditions. According to this model, the ant stores a look-up table. Each entry contains the apparent width of the landmark and the desired retinal position of vertical edges. The currently perceived width provides an index for retrieving the associated stored edge positions. The model accounts for the population behavior of ants and the idiosyncratic training routes of individual ants. Our results imply binding between the edge of a shape and its width and, further, imply that assessing the width of a shape does not depend on the presence of any particular local feature, such as a landmark edge. This property makes the ant's retrieval and guidance system relatively robust to edge occlusions.
Flight simulator with spaced visuals
NASA Technical Reports Server (NTRS)
Gilson, Richard D. (Inventor); Thurston, Marlin O. (Inventor); Olson, Karl W. (Inventor); Ventola, Ronald W. (Inventor)
1980-01-01
A flight simulator arrangement wherein a conventional, movable base flight trainer is combined with a visual cue display surface spaced a predetermined distance from an eye position within the trainer. Thus, three degrees of freedom of motion (roll, pitch, and crab) are provided for a visual, proprioceptive, and vestibular cue system by the trainer, while the remaining geometric visual cue image alterations are developed by a video system. A geometric approach to computing the runway image eliminates the need to electronically compute trigonometric functions, while utilization of a line generator and a designated vanishing point at the video system raster permits facile development of the images of the longitudinal edges of the runway.
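The vanishing-point idea can be made concrete with a small sketch: under perspective projection, both longitudinal runway edges pass through a common vanishing point on the raster, so each edge line can be generated from that point and a single near-end image point, with no per-pixel trigonometry. The coordinates below are illustrative assumptions, not values from the patent.

```python
# Minimal sketch: generate runway edge lines from a designated vanishing point
# and one near-end image point per edge (assumes non-vertical edge lines).
def edge_through_vanishing_point(vanishing_pt, near_pt):
    """Return slope and intercept of the image line through the two points."""
    (vx, vy), (nx, ny) = vanishing_pt, near_pt
    slope = (vy - ny) / (vx - nx)
    return slope, ny - slope * nx

vp = (320.0, 200.0)                             # designated vanishing point
for near in [(220.0, 480.0), (420.0, 480.0)]:   # near corners of the runway edges
    m, b = edge_through_vanishing_point(vp, near)
    print(f"edge: y = {m:.2f}x + {b:.2f}")
```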
Working memory load and the retro-cue effect: A diffusion model account.
Shepherdson, Peter; Oberauer, Klaus; Souza, Alessandra S
2018-02-01
Retro-cues (i.e., cues presented between the offset of a memory array and the onset of a probe) have consistently been found to enhance performance in working memory tasks, sometimes ameliorating the deleterious effects of increased memory load. However, the mechanism by which retro-cues exert their influence remains a matter of debate. To inform this debate, we applied a hierarchical diffusion model to data from 4 change detection experiments using single item, location-specific probes (i.e., a local recognition task) with either visual or verbal memory stimuli. Results showed that retro-cues enhanced the quality of information entering the decision process, especially for visual stimuli, and decreased the time spent on nondecisional processes. Further, cues interacted with memory load primarily on nondecision time, decreasing or abolishing load effects. To explain these findings, we propose an account whereby retro-cues act primarily to reduce the time taken to access the relevant representation in memory upon probe presentation, and in addition protect cued representations from visual interference. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
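The diffusion-model logic in this abstract can be illustrated by simulation: a retro-cue is modeled as raising drift rate (evidence quality) and shortening nondecision time, which together speed responses. The parameter values below are illustrative assumptions, not the fitted estimates from the paper.

```python
# Minimal drift-diffusion simulation: retro-cue modeled as higher drift and
# shorter nondecision time (illustrative parameters only).
import numpy as np

rng = np.random.default_rng(1)

def simulate_ddm(drift, boundary=1.0, nondecision=0.3, dt=0.001, noise=1.0):
    evidence, t = 0.0, 0.0
    while abs(evidence) < boundary:
        evidence += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return nondecision + t, evidence > 0  # (response time, correct boundary?)

for label, drift, ter in [("no cue", 1.0, 0.40), ("retro-cue", 2.0, 0.30)]:
    rts = [simulate_ddm(drift, nondecision=ter)[0] for _ in range(2000)]
    print(f"{label}: mean RT = {np.mean(rts):.3f} s")
```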
Cue-recruitment for extrinsic signals after training with low information stimuli.
Jain, Anshul; Fuller, Stuart; Backus, Benjamin T
2014-01-01
Cue-recruitment occurs when a previously ineffective signal comes to affect the perceptual appearance of a target object, in a manner similar to the trusted cues with which the signal was put into correlation during training. Jain, Fuller and Backus reported that extrinsic signals, those not carried by the target object itself, were not recruited even after extensive training. However, recent studies have shown that training using weakened trusted cues can facilitate recruitment of intrinsic signals. The current study was designed to examine whether extrinsic signals can be recruited by putting them in correlation with weakened trusted cues. Specifically, we tested whether an extrinsic visual signal, the rotary motion direction of an annulus of random dots, and an extrinsic auditory signal, direction of an auditory pitch glide, can be recruited as cues for the rotation direction of a Necker cube. We found learning, albeit weak, for visual but not for auditory signals. These results extend the generality of the cue-recruitment phenomenon to an extrinsic signal and provide further evidence that the visual system learns to use new signals most quickly when other, long-trusted cues are unavailable or unreliable.
Miller, Christi W; Stewart, Erin K; Wu, Yu-Hsiang; Bishop, Christopher; Bentler, Ruth A; Tremblay, Kelly
2017-08-16
This study evaluated the relationship between working memory (WM) and speech recognition in noise with different noise types as well as in the presence of visual cues. Seventy-six adults with bilateral, mild to moderately severe sensorineural hearing loss (mean age: 69 years) participated. Using a cross-sectional design, 2 measures of WM were taken: a reading span measure, and Word Auditory Recognition and Recall Measure (Smith, Pichora-Fuller, & Alexander, 2016). Speech recognition was measured with the Multi-Modal Lexical Sentence Test for Adults (Kirk et al., 2012) in steady-state noise and 4-talker babble, with and without visual cues. Testing was under unaided conditions. A linear mixed model revealed visual cues and pure-tone average as the only significant predictors of Multi-Modal Lexical Sentence Test outcomes. Neither WM measure nor noise type showed a significant effect. The contribution of WM in explaining unaided speech recognition in noise was negligible and not influenced by noise type or visual cues. We anticipate that with audibility partially restored by hearing aids, the effects of WM will increase. For clinical practice to be affected, more significant effect sizes are needed.
NASA Astrophysics Data System (ADS)
Holt, Marla M.; Insley, Stephen J.; Southall, Brandon L.; Schusterman, Ronald J.
2005-09-01
While attempting to gain access to receptive females, male northern elephant seals form dominance hierarchies through multiple dyadic interactions involving visual and acoustic signals. These signals are both highly stereotyped and directional. Previous behavioral observations suggested that males attend to the directional cues of these signals. We used in situ vocal playbacks to test whether males attend to directional cues of the acoustic components of a competitor's calls (i.e., variation in call spectra and source levels). Here, we will focus on playback methodology. Playback calls were multiple exemplars of a marked dominant male from an isolated area, recorded with a directional microphone and DAT recorder and edited into a natural sequence that controlled call amplitude. Control calls were recordings of ambient rookery sounds with the male calls removed. Subjects were 20 marked males (10 adults and 10 subadults), all located at Año Nuevo, CA. Playback presentations, calibrated for sound-pressure level, were broadcast at a distance of 7 m from each subject. Most responses were classified into the following categories: visual orientation, postural change, calling, movement toward or away from the loudspeaker, and re-directed aggression. We also investigated developmental, hierarchical, and ambient noise variables that were thought to influence male behavior.
Signal enhancement, not active suppression, follows the contingent capture of visual attention.
Livingstone, Ashley C; Christie, Gregory J; Wright, Richard D; McDonald, John J
2017-02-01
Irrelevant visual cues capture attention when they possess a task-relevant feature. Electrophysiologically, this contingent capture of attention is evidenced by the N2pc component of the visual event-related potential (ERP) and an enlarged ERP positivity over the occipital hemisphere contralateral to the cued location. The N2pc reflects an early stage of attentional selection, but presently it is unclear what the contralateral ERP positivity reflects. One hypothesis is that it reflects the perceptual enhancement of the cued search-array item; another hypothesis is that it is time-locked to the preceding cue display and reflects active suppression of the cue itself. Here, we varied the time interval between a cue display and a subsequent target display to evaluate these competing hypotheses. The results demonstrated that the contralateral ERP positivity is tightly time-locked to the appearance of the search display rather than the cue display, thereby supporting the perceptual enhancement hypothesis and disconfirming the cue-suppression hypothesis. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Sensory mediation of stimulus-driven attentional capture in multiple-cue displays.
Wright, Richard D; Richard, Christian M
2003-08-01
Three location-cuing experiments were conducted in order to examine the stimulus-driven control of attentional capture in multiple-cue displays. These displays consisted of one to four simultaneously presented direct location cues. The results indicated that direct location cuing can produce cue effects that are mediated, in part, by nonattentional processing that occurs simultaneously at multiple locations. When single cues were presented in isolation, however, the resulting cue effect appeared to be due to a combination of sensory processing and attentional capture by the cue. This suggests that the faster responses produced by direct cues may be associated with two different components: an attention-related component that can be modulated by goal-driven factors and a nonattentional component that occurs in parallel at multiple direct-cue locations and is minimally affected by goal-driven factors.
Mayo, Johnathan; Baur, Kilian; Wittmann, Frieder; Riener, Robert; Wolf, Peter
2018-01-01
Background Goal-directed reaching for real-world objects by humans is enabled through visual depth cues. In virtual environments, the number and quality of available visual depth cues is limited, which may affect reaching performance and the quality of reaching movements. Methods We assessed three-dimensional reaching movements in five experimental groups, each with ten healthy volunteers. Three groups used a two-dimensional computer screen and two groups used a head-mounted display. The first screen group received the typically recreated visual depth cues, such as aerial and linear perspective, occlusion, shadows, and texture gradients. The second screen group received an abstract minimal rendering lacking those. The third screen group received the cues of the first screen group plus absolute depth cues enabled by the retinal image size of a known object, which was realized with visual renderings of the handheld device and a ghost handheld at the target location. The two head-mounted display groups received the same virtually recreated visual depth cues as the second and third screen groups, respectively. Additionally, they could rely on stereopsis and motion parallax due to head movements. Results and conclusion All groups using the screen performed significantly worse than both groups using the head-mounted display in terms of completion time normalized by the straight-line distance to the target. Both groups using the head-mounted display achieved the optimal minimum in number of speed peaks and in hand path ratio, indicating that our subjects performed natural movements when using a head-mounted display. Virtually recreated visual depth cues had a minor impact on reaching performance. Only the screen group with rendered handhelds could outperform the other screen groups. Thus, if reaching performance in virtual environments is in the main scope of a study, we suggest applying a head-mounted display. Otherwise, when two-dimensional screens are used, achievable performance is likely limited by the reduced depth perception and not just by subjects’ motor skills. PMID:29293512
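The three outcome measures named here (completion time normalized by straight-line distance, number of speed peaks, hand path ratio) are straightforward to compute from a sampled hand trajectory. The sketch below is illustrative, not the authors' analysis pipeline.

```python
# Minimal sketch of the three reaching metrics, from a sampled 3D trajectory.
import numpy as np

def reaching_metrics(positions, timestamps):
    positions = np.asarray(positions, dtype=float)
    straight = np.linalg.norm(positions[-1] - positions[0])   # straight-line distance
    steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    path = steps.sum()                                        # travelled path length
    speed = steps / np.diff(timestamps)
    # A speed peak is a local maximum of the speed profile.
    peaks = np.sum((speed[1:-1] > speed[:-2]) & (speed[1:-1] > speed[2:]))
    normalized_time = (timestamps[-1] - timestamps[0]) / straight  # s per meter
    return {"normalized_time": normalized_time,
            "speed_peaks": int(peaks),
            "path_ratio": path / straight}

t = np.linspace(0, 1, 100)
traj = np.c_[0.4 * t, 0.1 * np.sin(np.pi * t), np.zeros_like(t)]  # synthetic reach
print(reaching_metrics(traj, t))
```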
Cross-modal prediction changes the timing of conscious access during the motion-induced blindness.
Chang, Acer Y C; Kanai, Ryota; Seth, Anil K
2015-01-01
Despite accumulating evidence that perceptual predictions influence perceptual content, the relations between these predictions and conscious contents remain unclear, especially for cross-modal predictions. We examined whether predictions of visual events by auditory cues can facilitate conscious access to the visual stimuli. We trained participants to learn associations between auditory cues and colour changes. We then asked whether congruency between auditory cues and target colours would speed access to consciousness. We did this by rendering a visual target subjectively invisible using motion-induced blindness and then gradually changing its colour while presenting congruent or incongruent auditory cues. Results showed that the visual target gained access to consciousness faster in congruent than in incongruent trials; control experiments excluded potentially confounding effects of attention and motor response. The expectation effect was gradually established over blocks suggesting a role for extensive training. Overall, our findings show that predictions learned through cross-modal training can facilitate conscious access to visual stimuli. Copyright © 2014 Elsevier Inc. All rights reserved.
Sunkara, Adhira
2015-01-01
As we navigate through the world, eye and head movements add rotational velocity patterns to the retinal image. When such rotations accompany observer translation, the rotational velocity patterns must be discounted to accurately perceive heading. The conventional view holds that this computation requires efference copies of self-generated eye/head movements. Here we demonstrate that the brain implements an alternative solution in which retinal velocity patterns are themselves used to dissociate translations from rotations. These results reveal a novel role for visual cues in achieving a rotation-invariant representation of heading in the macaque ventral intraparietal area. Specifically, we show that the visual system utilizes both local motion parallax cues and global perspective distortions to estimate heading in the presence of rotations. These findings further suggest that the brain is capable of performing complex computations to infer eye movements and discount their sensory consequences based solely on visual cues. DOI: http://dx.doi.org/10.7554/eLife.04693.001 PMID:25693417
Briand, K A; Klein, R M
1987-05-01
In the present study we investigated whether the visually allocated "beam" studied by Posner and others is the same visual attentional resource that performs the role of feature integration in Treisman's model. Subjects were cued to attend to a certain spatial location by a visual cue, and performance at expected and unexpected stimulus locations was compared. Subjects searched for a target letter (R) with distractor letters that either could give rise to illusory conjunctions (PQ) or could not (PB). Results from three separate experiments showed that orienting attention in response to central cues (endogenous orienting) showed similar effects for both conjunction and feature search. However, when attention was oriented with peripheral visual cues (exogenous orienting), conjunction search showed larger effects of attention than did feature search. It is suggested that the attentional systems that are oriented in response to central and peripheral cues may not be the same and that only the latter performs a role in feature integration. Possibilities for future research are discussed.
Tang, Xiaoyu; Li, Chunlin; Li, Qi; Gao, Yulin; Yang, Weiping; Yang, Jingjing; Ishikawa, Soushirou; Wu, Jinglong
2013-10-11
Utilizing the high temporal resolution of event-related potentials (ERPs), we examined how visual spatial or temporal cues modulated the auditory stimulus processing. The visual spatial cue (VSC) induces orienting of attention to spatial locations; the visual temporal cue (VTC) induces orienting of attention to temporal intervals. Participants were instructed to respond to auditory targets. Behavioral responses to auditory stimuli following VSC were faster and more accurate than those following VTC. VSC and VTC had the same effect on the auditory N1 (150-170 ms after stimulus onset). The mean amplitude of the auditory P1 (90-110 ms) in VSC condition was larger than that in VTC condition, and the mean amplitude of late positivity (300-420 ms) in VTC condition was larger than that in VSC condition. These findings suggest that modulation of auditory stimulus processing by visually induced spatial or temporal orienting of attention were different, but partially overlapping. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Toward a New Theory for Selecting Instructional Visuals.
ERIC Educational Resources Information Center
Croft, Richard S.; Burton, John K.
This paper provides a rationale for the selection of illustrations and visual aids for the classroom. The theories that describe the processing of visuals are dual coding theory and cue summation theory. Concept attainment theory offers a basis for selecting which cues are relevant for any learning task which includes a component of identification…
ERIC Educational Resources Information Center
Tillmanns, Tanja; Holland, Charlotte; Filho, Alfredo Salomão
2017-01-01
This paper presents the design criteria for Visual Cues--visual stimuli that are used in combination with other pedagogical processes and tools in Disruptive Learning interventions in sustainability education--to disrupt learners' existing frames of mind and help re-orient learners' mind-sets towards sustainability. The theory of Disruptive…
DOT National Transportation Integrated Search
1978-03-01
At night, reduced visual cues may promote illusions and a dangerous tendency for pilots to fly low during approaches to landing. Relative motion parallax (a difference in rate of apparent movement of objects in the visual field), a cue that can contr...
NASA Technical Reports Server (NTRS)
Kirkpatrick, M.; Brye, R. G.
1974-01-01
A motion cue investigation program is reported that deals with human factors aspects of high-fidelity vehicle simulation. General data on non-visual motion thresholds and specific threshold values are established for use as washout parameters in vehicle simulation. A general-purpose simulator is used to test the contradictory cue hypothesis that acceleration sensitivity is reduced during a vehicle control task involving visual feedback. The simulator provides varying acceleration levels. The method of forced choice is based on the theory of signal detectability.
Stimulus homogeneity enhances implicit learning: evidence from contextual cueing.
Feldmann-Wüstefeld, Tobias; Schubö, Anna
2014-04-01
Visual search for a target object is faster if the target is embedded in a repeatedly presented invariant configuration of distractors ('contextual cueing'). It has also been shown that the homogeneity of a context affects the efficiency of visual search: targets receive prioritized processing when presented in a homogeneous context compared to a heterogeneous context, presumably due to grouping processes at early stages of visual processing. The present study investigated in three experiments whether context homogeneity also affects contextual cueing. In Experiment 1, context homogeneity varied on three levels of the task-relevant dimension (orientation), and contextual cueing was most pronounced for context configurations with high orientation homogeneity. When context homogeneity varied on three levels of the task-irrelevant dimension (color) and orientation homogeneity was fixed, no modulation of contextual cueing was observed: high orientation homogeneity led to large contextual cueing effects (Experiment 2) and low orientation homogeneity led to small contextual cueing effects (Experiment 3), irrespective of color homogeneity. Enhanced contextual cueing for homogeneous context configurations suggests that grouping processes affect not only visual search but also implicit learning. We conclude that memory representations of context configurations are more easily acquired when context configurations can be processed as larger, grouped perceptual units. However, this form of implicit perceptual learning is only improved by stimulus homogeneity when stimulus homogeneity facilitates grouping processes on a dimension that is currently relevant in the task. Copyright © 2014 Elsevier B.V. All rights reserved.
Blanchfield, Anthony; Hardy, James; Marcora, Samuele
2014-01-01
The psychobiological model of endurance performance proposes that endurance performance is determined by a decision-making process based on perception of effort and potential motivation. Recent research has reported that effort-based decision-making during cognitive tasks can be altered by non-conscious visual cues relating to affect and action. The effects of these non-conscious visual cues on effort and performance during physical tasks are, however, unknown. We report two experiments investigating the effects of subliminal priming with visual cues related to affect and action on perception of effort and endurance performance. In Experiment 1, thirteen individuals were subliminally primed with happy or sad faces as they cycled to exhaustion in a counterbalanced and randomized crossover design. A paired t-test (happy vs. sad faces) revealed that individuals cycled significantly longer (178 s, p = 0.04) when subliminally primed with happy faces. A 2 × 5 (condition × iso-time) ANOVA also revealed a significant main effect of condition on rating of perceived exertion (RPE) during the time to exhaustion (TTE) test, with lower RPE when subjects were subliminally primed with happy faces (p = 0.04). In Experiment 2, a single-subject randomization tests design found that subliminal priming with action words facilitated a significantly longer TTE (399 s, p = 0.04) in comparison to inaction words. As in Experiment 1, this greater TTE was accompanied by a significantly lower RPE (p = 0.03). These experiments are the first to show that subliminal visual cues relating to affect and action can alter perception of effort and endurance performance. Non-conscious visual cues may therefore influence the effort-based decision-making process that is proposed to determine endurance performance. Accordingly, the findings raise notable implications for individuals who may encounter such visual cues during endurance competitions, training, or health-related exercise. PMID:25566014
Visual search by chimpanzees (Pan): assessment of controlling relations.
Tomonaga, M
1995-01-01
Three experimentally sophisticated chimpanzees (Pan), Akira, Chloe, and Ai, were trained on visual search performance using a modified multiple-alternative matching-to-sample task in which a sample stimulus was followed by the search display containing one target identical to the sample and several uniform distractors (i.e., negative comparison stimuli were identical to each other). After they acquired this task, they were tested for transfer of visual search performance to trials in which the sample was not followed by the uniform search display (odd-item search). Akira showed positive transfer of visual search performance to odd-item search even when the display size (the number of stimulus items in the search display) was small, whereas Chloe and Ai showed a transfer only when the display size was large. Chloe and Ai used some nonrelational cues such as perceptual isolation of the target among uniform distractors (so-called pop-out). In addition to the odd-item search test, various types of probe trials were presented to clarify the controlling relations in multiple-alternative matching to sample. Akira showed a decrement of accuracy as a function of the display size when the search display was nonuniform (i.e., each "distractor" stimulus was not the same), whereas Chloe and Ai showed perfect performance. Furthermore, when the sample was identical to the uniform distractors in the search display, Chloe and Ai never selected an odd-item target, but Akira selected it when the display size was large. These results indicated that Akira's behavior was controlled mainly by relational cues of target-distractor oddity, whereas an identity relation between the sample and the target strongly controlled the performance of Chloe and Ai. PMID:7714449
Graci, Valentina
2011-10-01
It has been previously suggested that coupled upper and lower limb movements require visuomotor coordination to be achieved. Previous studies have not investigated the role that visual cues may play in the coordination of locomotion and prehension. The aim of this study was to investigate whether lower peripheral visual cues provide online control of the coordination of locomotion and prehension, as they have been shown to do during adaptive gait and level walking. Twelve subjects reached for a semi-empty or a full glass with their dominant or non-dominant hand at gait termination. Two binocular visual conditions were investigated: normal vision and lower visual occlusion. Outcome measures were determined using 3D motion capture techniques. Results showed that although the subjects were able to successfully complete the task without spilling the water from the glass under lower visual occlusion, they increased the margin of safety between final foot placements and the glass. These findings suggest that lower visual cues are mainly used online to fine-tune the trajectory of the upper and lower limbs moving toward the target. Copyright © 2011 Elsevier B.V. All rights reserved.
Slushy weightings for the optimal pilot model. [considering visual tracking task]
NASA Technical Reports Server (NTRS)
Dillow, J. D.; Picha, D. G.; Anderson, R. O.
1975-01-01
A pilot model is described which accounts for the effect of motion cues in a well defined visual tracking task. The effects of visual and motion cues are accounted for in the model in two ways. First, the observation matrix in the pilot model is structured to account for the visual and motion inputs presented to the pilot. Second, the weightings in the quadratic cost function associated with the pilot model are modified to account for the pilot's perception of the variables he considers important in the task. Analytic results obtained using the pilot model are compared to experimental results, and in general good agreement is demonstrated. The analytic model yields small improvements in tracking performance with the addition of motion cues for easily controlled task dynamics and large improvements in tracking performance with the addition of motion cues for difficult task dynamics.
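The "weightings in the quadratic cost function" are the penalty terms of the standard optimal-control pilot model; a sketch of that cost in its usual form (notation assumed here, not taken from the report), with Q weighting the perceived outputs y, and r and g weighting control and control rate:

    J = E\left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T
        \left( \mathbf{y}^{\top} Q\,\mathbf{y} + r\,u^{2} + g\,\dot{u}^{2} \right) dt \right\}

Modifying Q, r and g to reflect the variables the pilot perceives as important in the task is, presumably, the "slushy" weighting of the title.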
Comparing Auditory-Only and Audiovisual Word Learning for Children with Hearing Loss.
McDaniel, Jena; Camarata, Stephen; Yoder, Paul
2018-05-15
Although reducing visual input to emphasize auditory cues is a common practice in pediatric auditory (re)habilitation, the extant literature offers minimal empirical evidence for whether unisensory auditory-only (AO) or multisensory audiovisual (AV) input is more beneficial to children with hearing loss for developing spoken language skills. Using an adapted alternating treatments single case research design, we evaluated the effectiveness and efficiency of a receptive word learning intervention with and without access to visual speechreading cues. Four preschool children with prelingual hearing loss participated. Based on probes without visual cues, three participants demonstrated strong evidence for learning in the AO and AV conditions relative to a control (no-teaching) condition. No participants demonstrated a differential rate of learning between AO and AV conditions. Neither an inhibitory effect predicted by a unisensory theory nor a beneficial effect predicted by a multisensory theory for providing visual cues was identified. Clinical implications are discussed.
ERIC Educational Resources Information Center
Lewkowicz, David J.
2003-01-01
Three experiments examined 4- to 10-month-olds' perception of audio-visual (A-V) temporal synchrony cues in the presence or absence of rhythmic pattern cues. Results established that infants of all ages could discriminate between two different audio-visual rhythmic events. Only 10-month-olds detected a desynchronization of the auditory and visual…
Albert (Bud) Mayfield; Cavell Brownie
2013-01-01
The redbay ambrosia beetle (Xyleborus glabratus Eichhoff) is an invasive pest and vector of the pathogen that causes laurel wilt disease in lauraceous tree species in the eastern United States. This insect uses olfactory cues during host finding, but the use of visual cues by X. glabratus has not been previously investigated and may help explain diameter...
Slow changing postural cues cancel visual field dependence on self-tilt detection.
Scotto Di Cesare, C; Macaluso, T; Mestre, D R; Bringoux, L
2015-01-01
Interindividual differences influence the multisensory integration process involved in spatial perception. Here, we assessed the effect of visual field dependence on self-tilt detection relative to upright, as a function of static vs. slow changing visual or postural cues. To that aim, we manipulated slow rotations (i.e., 0.05° s⁻¹) of the body and/or the visual scene in pitch. Participants had to indicate whether they felt tilted forward at successive angles. Results show that thresholds for self-tilt detection substantially differed between visual field dependent/independent subjects, when only the visual scene was rotated. This difference was no longer present when the body was actually rotated, whatever the visual scene condition (i.e., absent, static, or rotated relative to the observer). These results suggest that the cancellation of visual field dependence by dynamic postural cues may rely on a multisensory reweighting process, in which slow changing vestibular/somatosensory inputs may prevail over visual inputs. Copyright © 2014 Elsevier B.V. All rights reserved.
All I saw was the cake. Hunger effects on attentional capture by visual food cues.
Piech, Richard M; Pastorino, Michael T; Zald, David H
2010-06-01
While effects of hunger on motivation and food reward value are well established, far less is known about the effects of hunger on cognitive processes. Here, we deployed the emotional blink of attention paradigm to investigate the impact of visual food cues on attentional capture under conditions of hunger and satiety. Participants were asked to detect targets which appeared in a rapid visual stream after different types of task-irrelevant distractors. We observed that food stimuli acquired increased power to capture attention and prevent target detection when participants were hungry. This occurred despite monetary incentives to perform well. Our findings suggest an attentional mechanism through which hunger heightens perception of food cues. As an objective behavioral marker of attentional sensitivity to food cues, the emotional attentional blink paradigm may provide a useful technique for studying individual differences and state manipulations in sensitivity to food cues. Published by Elsevier Ltd.
Cues used by the black fly, Simulium annulus, for attraction to the common loon (Gavia immer).
Weinandt, Meggin L; Meyer, Michael; Strand, Mac; Lindsay, Alec R
2012-12-01
The parasitic relationship between a black fly, Simulium annulus, and the common loon (Gavia immer) has been considered one of the most exclusive relationships between any host species and a black fly species. To test the host specificity of this blood-feeding insect, we made a series of bird decoy presentations to black flies on loon-inhabited lakes in northern Wisconsin, U.S.A. To examine the importance of chemical and visual cues for black fly detection of and attraction to hosts, we made decoy presentations with and without chemical cues. Flies attracted to the decoys were collected, identified to species, and quantified. Results showed that S. annulus had a strong preference for common loon visual and chemical cues, although visual cues from Canada geese (Branta canadensis) and mallards (Anas platyrhynchos) did attract some flies in significantly smaller numbers. © 2012 The Society for Vector Ecology.
Working memory can enhance unconscious visual perception.
Pan, Yi; Cheng, Qiu-Ping; Luo, Qian-Ying
2012-06-01
We demonstrate that unconscious processing of a stimulus property can be enhanced when there is a match between the contents of working memory and the stimulus presented in the visual field. Participants first held a cue (a colored circle) in working memory and then searched for a brief masked target shape presented simultaneously with a distractor shape. When participants reported having no awareness of the target shape at all, search performance was more accurate in the valid condition, where the target matched the cue in color, than in the neutral condition, where the target mismatched the cue. This effect cannot be attributed to bottom-up perceptual priming from the presentation of a memory cue, because unconscious perception was not enhanced when the cue was merely perceptually identified but not actively held in working memory. These findings suggest that reentrant feedback from the contents of working memory modulates unconscious visual perception.
The Effect of Eye Contact Is Contingent on Visual Awareness
Xu, Shan; Zhang, Shen; Geng, Haiyan
2018-01-01
The present study explored how eye contact at different levels of visual awareness influences gaze-induced joint attention. We adopted a spatial-cueing paradigm, in which an averted gaze was used as an uninformative central cue for a joint-attention task. Prior to the onset of the averted-gaze cue, either supraliminal (Experiment 1) or subliminal (Experiment 2) eye contact was presented. The results revealed a larger subsequent gaze-cueing effect following supraliminal eye contact compared to a no-contact condition. In contrast, the gaze-cueing effect was smaller in the subliminal eye-contact condition than in the no-contact condition. These findings suggest that the facilitation effect of eye contact on coordinating social attention depends on visual awareness. Furthermore, subliminal eye contact might have an impact on subsequent social attention processes that differs from that of supraliminal eye contact. This study highlights the need to further investigate the role of eye contact in implicit social cognition. PMID:29467703
Goldberg, Melissa C; Mostow, Allison J; Vecera, Shaun P; Larson, Jennifer C Gidley; Mostofsky, Stewart H; Mahone, E Mark; Denckla, Martha B
2008-09-01
We examined the ability to use static line drawings of eye gaze cues to orient visual-spatial attention in children with high functioning autism (HFA) compared to typically developing children (TD). The task was organized such that on valid trials, gaze cues were directed toward the same spatial location as the appearance of an upcoming target, while on invalid trials gaze cues were directed to an opposite location. Unlike TD children, children with HFA showed no advantage in reaction time (RT) on valid trials compared to invalid trials (i.e., no significant validity effect). The two stimulus onset asynchronies (200 ms, 700 ms) did not differentially affect these findings. The results suggest that children with HFA show impairments in utilizing static line drawings of gaze cues to orient visual-spatial attention.
Food Avoidance Learning in Squirrel Monkeys and Common Marmosets
Laska, Matthias; Metzker, Karin
1998-01-01
Using a conditioned food avoidance learning paradigm, six squirrel monkeys (Saimiri sciureus) and six common marmosets (Callithrix jacchus) were tested for their ability to (1) reliably form associations between visual or olfactory cues of a potential food and its palatability and (2) remember such associations over prolonged periods of time. We found (1) that at the group level both species showed one-trial learning with the visual cues color and shape, whereas only the marmosets were able to do so with the olfactory cue, (2) that all individuals from both species learned to reliably avoid the unpalatable food items within 10 trials, (3) a tendency in both species for quicker acquisition of the association with the visual cues compared with the olfactory cue, (4) a tendency for quicker acquisition and higher reliability of the aversion by the marmosets compared with the squirrel monkeys, and (5) that all individuals from both species were able to reliably remember the significance of the visual cues, color and shape, even after 4 months, whereas only the marmosets showed retention of the significance of the olfactory cues for up to 4 weeks. Furthermore, the results suggest that in both species tested, illness is not a necessary prerequisite for food avoidance learning but that the presumably innate rejection responses toward highly concentrated but nontoxic bitter and sour tastants are sufficient to induce robust learning and retention. PMID:10454364
Precuing benefits for color and location in a visual search task.
Vierck, Esther; Miller, Jeff
2008-02-01
Selection in multiple-item displays has been shown to benefit immensely from advance knowledge of target location (e.g., Henderson, 1991), leading to the suggestion that location is completely dominant in visual selective attention (e.g., Tsal & Lavie, 1993). Recently, direct selection by color has been reported in displays in which location does not vary (Vierck & Miller, 2005). The present experiment investigated the possibility of independent selection by color in a task with multiple-item displays and location precues in order to see whether color is also used for selection even when target location does vary and supposedly dominant location precues can be used. Precues provided independent information about the location and color of a target, and each type of precue could be either valid or invalid. The precues were followed by brief displays of six letters in six different colors, and participants had to discriminate the case of a prespecified target letter (e.g., R vs. r). Performance was much better when location cues were valid than when they were invalid, confirming the large advantage associated with valid advance location information. Performance was also better with valid advance color information, however, both when location cues were valid and when they were invalid. But these color benefits were dependent on the closeness of the colored letter to the cued location. Our results thus suggest that selection by color in a multiple-item display, where location and color information are independent from each other and equalized, is mediated by location information.
Searching for emotion or race: task-irrelevant facial cues have asymmetrical effects.
Lipp, Ottmar V; Craig, Belinda M; Frost, Mareka J; Terry, Deborah J; Smith, Joanne R
2014-01-01
Facial cues of threat such as anger and other race membership are detected preferentially in visual search tasks. However, it remains unclear whether these facial cues interact in visual search. If both cues equally facilitate search, a symmetrical interaction would be predicted; anger cues should facilitate detection of other race faces and cues of other race membership should facilitate detection of anger. Past research investigating this race by emotional expression interaction in categorisation tasks revealed an asymmetrical interaction. This suggests that cues of other race membership may facilitate the detection of angry faces but not vice versa. Utilising the same stimuli and procedures across two search tasks, participants were asked to search for targets defined by either race or emotional expression. Contrary to the results revealed in the categorisation paradigm, cues of anger facilitated detection of other race faces whereas differences in race did not differentially influence detection of emotion targets.
Making the Invisible Visible: Verbal but Not Visual Cues Enhance Visual Detection
Lupyan, Gary; Spivey, Michael J.
2010-01-01
Background Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual, cues. Methodology/Principal Findings Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d′). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Conclusions/Significance Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception. PMID:20628646
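A hedged sketch of the sensitivity measure (d′) used above, with made-up hit and false-alarm counts rather than the published data:

    # d' = z(hit rate) - z(false-alarm rate), with a log-linear correction
    # so that rates of exactly 0 or 1 stay finite
    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    print(d_prime(hits=38, misses=12, false_alarms=9, correct_rejections=41))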
NASA Technical Reports Server (NTRS)
Young, L. R.
1976-01-01
Investigations for the improvement of flight simulators are reported. Topics include: visual cues in landing, comparison of linear and nonlinear washout filters using a model of the vestibular system, and visual vestibular interactions (yaw axis). An abstract is given for a thesis on the applications of human dynamic orientation models to motion simulation.
Barnett-Cowan, Michael; Meilinger, Tobias; Vidal, Manuel; Teufel, Harald; Bülthoff, Heinrich H
2012-05-10
Path integration is a process in which self-motion is integrated over time to obtain an estimate of one's current position relative to a starting point (1). Humans can do path integration based exclusively on visual (2-3), auditory (4), or inertial cues (5). However, with multiple cues present, inertial cues - particularly kinaesthetic - seem to dominate (6-7). In the absence of vision, humans tend to overestimate short distances (<5 m) and turning angles (<30°), but underestimate longer ones (5). Movement through physical space therefore does not seem to be accurately represented by the brain. Extensive work has been done on evaluating path integration in the horizontal plane, but little is known about vertical movement (see (3) for virtual movement from vision alone). One reason for this is that traditional motion simulators have a small range of motion restricted mainly to the horizontal plane. Here we take advantage of a motion simulator (8-9) with a large range of motion to assess whether path integration is similar between horizontal and vertical planes. The relative contributions of inertial and visual cues for path navigation were also assessed. Sixteen observers sat upright in a seat mounted to the flange of a modified KUKA anthropomorphic robot arm. Sensory information was manipulated by providing visual (optic flow, limited lifetime star field), vestibular-kinaesthetic (passive self motion with eyes closed), or visual and vestibular-kinaesthetic motion cues. Movement trajectories in the horizontal, sagittal and frontal planes consisted of two segment lengths (1st: 0.4 m, 2nd: 1 m; ±0.24 m/s² peak acceleration). The angle of the two segments was either 45° or 90°. Observers pointed back to their origin by moving an arrow that was superimposed on an avatar presented on the screen. Observers were more likely to underestimate angle size for movement in the horizontal plane compared to the vertical planes. In the frontal plane observers were more likely to overestimate angle size, while there was no such bias in the sagittal plane. Finally, observers responded slower when answering based on vestibular-kinaesthetic information alone. Human path integration based on vestibular-kinaesthetic information alone thus takes longer than when visual information is present. That pointing is consistent with underestimating and overestimating the angle one has moved through in the horizontal and vertical planes respectively suggests that the neural representation of self-motion through space is non-symmetrical, which may relate to the fact that humans experience movement mostly within the horizontal plane.
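For reference, the ideal response in the two-segment pointing task above follows from plane geometry; a sketch in our notation, assuming the first leg is travelled straight ahead and heading rotates with the turn:

    import math

    def homing_angle(seg1=0.4, seg2=1.0, turn_deg=90.0):
        # end position after leg 1 (along +y), a turn, and leg 2
        turn = math.radians(turn_deg)
        x = seg2 * math.sin(turn)
        y = seg1 + seg2 * math.cos(turn)
        # compass bearing from the end point back toward the origin
        bearing_to_origin = math.degrees(math.atan2(-x, -y))
        rel = bearing_to_origin - turn_deg   # relative to final heading
        return (rel + 180) % 360 - 180       # normalize to (-180, 180]

    print(homing_angle(turn_deg=90.0))  # ideal pointing angle in degrees

Deviations of observers' pointing responses from this ideal angle give the over- and underestimation biases the abstract reports.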
Priming and the guidance by visual and categorical templates in visual search.
Wilschut, Anna; Theeuwes, Jan; Olivers, Christian N L
2014-01-01
Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity toward the target feature, i.e., the extent to which observers searched selectively among items of the cued vs. uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in either overall performance or selectivity. Altogether the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, if priming is controlled for, categorical- and visual-based templates enhance search guidance similarly.
A comparison of visuomotor cue integration strategies for object placement and prehension.
Greenwald, Hal S; Knill, David C
2009-01-01
Visual cue integration strategies are known to depend on cue reliability and how rapidly the visual system processes incoming information. We investigated whether these strategies also depend on differences in the information demands for different natural tasks. Using two common goal-oriented tasks, prehension and object placement, we determined whether monocular and binocular information influence estimates of three-dimensional (3D) orientation differently depending on task demands. Both tasks rely on accurate 3D orientation estimates, but 3D position is potentially more important for grasping. Subjects placed an object on or picked up a disc in a virtual environment. On some trials, the monocular cues (aspect ratio and texture compression) and binocular cues (e.g., binocular disparity) suggested slightly different 3D orientations for the disc; these conflicts either were present upon initial stimulus presentation or were introduced after movement initiation, which allowed us to quantify how information from the cues accumulated over time. We analyzed the time-varying orientations of subjects' fingers in the grasping task and those of the object in the object placement task to quantify how different visual cues influenced motor control. In the first experiment, different subjects performed each task, and those performing the grasping task relied on binocular information more when orienting their hands than those performing the object placement task. When subjects in the second experiment performed both tasks in interleaved sessions, binocular cues were still more influential during grasping than object placement, and the different cue integration strategies observed for each task in isolation were maintained. In both experiments, the temporal analyses showed that subjects processed binocular information faster than monocular information, but task demands did not affect the time course of cue processing. How one uses visual cues for motor control depends on the task being performed, although how quickly the information is processed appears to be task invariant.
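The benchmark against which such strategies are typically evaluated is minimum-variance cue combination, in which each cue is weighted by its relative reliability; a sketch in standard notation (not the authors' own formulation), for binocular (b) and monocular (m) orientation estimates:

    \hat{S} = w_b \hat{S}_b + w_m \hat{S}_m,
    \qquad
    w_b = \frac{1/\sigma_b^{2}}{1/\sigma_b^{2} + 1/\sigma_m^{2}},
    \qquad
    w_m = 1 - w_b

Task-dependent integration, as found above, amounts to the effective weights departing from these purely reliability-based values.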
Colour processing in complex environments: insights from the visual system of bees
Dyer, Adrian G.; Paulk, Angelique C.; Reser, David H.
2011-01-01
Colour vision enables animals to detect and discriminate differences in chromatic cues independent of brightness. How the bee visual system manages this task is of interest for understanding information processing in miniaturized systems, as well as the relationship between bee pollinators and flowering plants. Bees can quickly discriminate dissimilar colours, but can also slowly learn to discriminate very similar colours, raising the question as to how the visual system can support this, or whether it is simply a learning and memory operation. We discuss the detailed neuroanatomical layout of the brain, identify probable brain areas for colour processing, and suggest that there may be multiple systems in the bee brain that mediate either coarse or fine colour discrimination ability in a manner dependent upon individual experience. These multiple colour pathways have been identified along both functional and anatomical lines in the bee brain, providing us with some insights into how the brain may operate to support complex colour discrimination behaviours. PMID:21147796
Lee, Sungkyoung; Cappella, Joseph N.
2014-01-01
Findings from previous studies on smoking cues and argument strength in antismoking messages have shown that the presence of smoking cues undermines the persuasiveness of antismoking public service announcements (PSAs) with weak arguments. This study conceptualized smoking cues (i.e., scenes showing smoking-related objects and behaviors) as stimuli motivationally relevant to the former smoker population and examined how smoking cues influence former smokers’ processing of antismoking PSAs. Specifically, by defining smoking cues and the strength of antismoking arguments in terms of resource allocation, this study examined former smokers’ recognition accuracy, memory strength, and memory judgment of visual (i.e., scenes excluding smoking cues) and audio information from antismoking PSAs. In line with previous findings, the results of the study showed that the presence of smoking cues undermined former smokers’ encoding of antismoking arguments, which includes the visual and audio information that compose the main content of antismoking messages. PMID:25477766
Do cattle (Bos taurus) retain an association of a visual cue with a food reward for a year?
Hirata, Masahiko; Takeno, Nozomi
2014-06-01
Use of visual cues to locate specific food resources from a distance is a critical ability of animals foraging in a spatially heterogeneous environment. However, relatively little is known about how long animals can retain the learned cue-reward association without reinforcement. We compared feeding behavior of experienced and naive Japanese Black cows (Bos taurus) in discovering food locations in a pasture. Experienced animals had been trained to respond to a visual cue (plastic washtub) for a preferred food (grain-based concentrate) 1 year prior to the experiment, while naive animals had no exposure to the cue. Cows were tested individually in a test arena including tubs filled with the concentrate on three successive days (Days 1-3). Experienced cows located the first tub more quickly and visited more tubs than naive cows on Day 1 (usually P < 0.05), but these differences disappeared on Days 2 and 3. The performance of experienced cows tended to increase from Day 1 to Day 2 and level off thereafter. Our results suggest that Japanese Black cows can associate a visual cue with a food reward within a day and retain the association for 1 year despite a slight decay. © 2014 Japanese Society of Animal Science.
Localization Performance of Multiple Vibrotactile Cues on Both Arms.
Wang, Dangxiao; Peng, Cong; Afzal, Naqash; Li, Weiang; Wu, Dong; Zhang, Yuru
2018-01-01
To present information using vibrotactile stimuli in wearable devices, it is fundamental to understand human performance in localizing vibrotactile cues across the skin surface. In this paper, we studied the human ability to identify locations of multiple vibrotactile cues activated simultaneously on both arms. Two haptic bands were mounted in proximity to the elbow and shoulder joints on each arm, and two vibrotactile motors were mounted on each band to provide vibration cues to the dorsal and palmar sides of the arm. The localization performance under four conditions was compared, with the number of simultaneously activated cues varying from one to four in each condition. Experimental results illustrate that the rate of correct localization decreases linearly with the increase in the number of activated cues. It was 27.8 percent for three activated cues, and became even lower for four activated cues. An analysis of the correct rate and error patterns shows that the layout of vibrotactile cues can have significant effects on the localization performance of multiple vibrotactile cues. These findings might provide guidelines for using vibrotactile cues to guide the simultaneous motion of multiple joints on both arms.
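With the eight motors described (2 arms × 2 bands × 2 motors), the chance rate of naming the exact set of active sites falls steeply with the number of cues; a quick check, assuming respondents know how many motors were activated (an assumption about the response format):

    from math import comb

    for k in range(1, 5):
        print(f"{k} cues: chance = {1 / comb(8, k):.1%}")

By this count, chance for three simultaneous cues is 1/56 ≈ 1.8%, so the reported 27.8 percent, while low in absolute terms, remains well above guessing.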
Exogenous temporal cues enhance recognition memory in an object-based manner.
Ohyama, Junji; Watanabe, Katsumi
2010-11-01
Exogenous attention enhances the perception of attended items in both a space-based and an object-based manner. Exogenous attention also improves recognition memory for attended items in the space-based mode. However, it has not been examined whether object-based exogenous attention enhances recognition memory. To address this issue, we examined whether a sudden visual change in a task-irrelevant stimulus (an exogenous cue) would affect participants' recognition memory for items that were serially presented around a cued time. The results showed that recognition accuracy for an item was strongly enhanced when the visual cue occurred at the same location and time as the item (Experiments 1 and 2). The memory enhancement effect occurred when the exogenous visual cue and an item belonged to the same object (Experiments 3 and 4) and even when the cue was counterpredictive of the timing of the item to be asked about (Experiment 5). The present study suggests that an exogenous temporal cue automatically enhances recognition accuracy for an item presented in close temporal proximity to the cue and that this enhancement occurs in an object-based manner.
Object based implicit contextual learning: a study of eye movements.
van Asselen, Marieke; Sampaio, Joana; Pina, Ana; Castelo-Branco, Miguel
2011-02-01
Implicit contextual cueing refers to a top-down mechanism in which visual search is facilitated by learned contextual features. In the current study we aimed to investigate the mechanism underlying implicit contextual learning using object information as a contextual cue. Therefore, we measured eye movements during an object-based contextual cueing task. We demonstrated that visual search is facilitated by repeated object information and that this reduction in response times is associated with shorter fixation durations. This indicates that by memorizing associations between objects in our environment we can recognize objects faster, thereby facilitating visual search.
Stewart, Erin K.; Wu, Yu-Hsiang; Bishop, Christopher; Bentler, Ruth A.; Tremblay, Kelly
2017-01-01
Purpose This study evaluated the relationship between working memory (WM) and speech recognition in noise with different noise types as well as in the presence of visual cues. Method Seventy-six adults with bilateral, mild to moderately severe sensorineural hearing loss (mean age: 69 years) participated. Using a cross-sectional design, 2 measures of WM were taken: a reading span measure, and Word Auditory Recognition and Recall Measure (Smith, Pichora-Fuller, & Alexander, 2016). Speech recognition was measured with the Multi-Modal Lexical Sentence Test for Adults (Kirk et al., 2012) in steady-state noise and 4-talker babble, with and without visual cues. Testing was under unaided conditions. Results A linear mixed model revealed visual cues and pure-tone average as the only significant predictors of Multi-Modal Lexical Sentence Test outcomes. Neither WM measure nor noise type showed a significant effect. Conclusion The contribution of WM in explaining unaided speech recognition in noise was negligible and not influenced by noise type or visual cues. We anticipate that with audibility partially restored by hearing aids, the effects of WM will increase. For clinical practice to be affected, more significant effect sizes are needed. PMID:28744550
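A hedged sketch of a model of the kind reported above (fixed effects for visual cues, noise type, audibility, and working memory, with random intercepts per listener); the file and column names are hypothetical stand-ins, not from the study:

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("mlst_scores.csv")  # hypothetical trial-level data
    model = smf.mixedlm("mlst_score ~ visual_cues + noise_type + pta + reading_span",
                        data=df, groups=df["subject_id"])
    print(model.fit().summary())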
van Weert, Julia C M; van Noort, Guda; Bol, Nadine; van Dijk, Liset; Tates, Kiek; Jansen, Jesse
2011-09-01
This study was designed to investigate the effects of visual cues and language complexity on satisfaction and information recall using a personalised website for lung cancer patients. In addition, age effects were investigated. An experiment using a 2 (complex vs. non-complex language)×3 (text only vs. photograph vs. drawing) factorial design was conducted. In total, 200 respondents without cancer were exposed to one of the six conditions. Respondents were more satisfied with the comprehensibility of both websites when they were presented with a visual cue. A significant interaction effect was found between language complexity and photograph use, such that satisfaction with comprehensibility improved when a photograph was added to the complex language condition. Next, an interaction effect between age and the presence of a visual cue was found for satisfaction, indicating that adding a visual cue is more important for older adults than for younger adults. Finally, respondents who were exposed to a website with less complex language showed higher recall scores. The use of visual cues enhances satisfaction with the information presented on the website, and the use of non-complex language improves recall. The results of the current study can be used to improve computer-based information systems for patients. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Working memory dependence of spatial contextual cueing for visual search.
Pollmann, Stefan
2018-05-10
When spatial stimulus configurations repeat in visual search, a search facilitation, resulting in shorter search times, can be observed that is due to incidental learning. This contextual cueing effect appears to be rather implicit, uncorrelated with observers' explicit memory of display configurations. Nevertheless, as I review here, this search facilitation due to contextual cueing depends on visuospatial working memory resources, and it disappears when visuospatial working memory is loaded by a concurrent delayed match to sample task. However, the search facilitation immediately recovers for displays learnt under visuospatial working memory load when this load is removed in a subsequent test phase. Thus, latent learning of visuospatial configurations does not depend on visuospatial working memory, but the expression of learning, as memory-guided search in repeated displays, does. This working memory dependence has also consequences for visual search with foveal vision loss, where top-down controlled visual exploration strategies pose high demands on visuospatial working memory, in this way interfering with memory-guided search in repeated displays. Converging evidence for the contribution of working memory to contextual cueing comes from neuroimaging data demonstrating that distinct cortical areas along the intraparietal sulcus as well as more ventral parieto-occipital cortex are jointly activated by visual working memory and contextual cueing. © 2018 The British Psychological Society.
Schlagbauer, Bernhard; Müller, Hermann J; Zehetleitner, Michael; Geyer, Thomas
2012-10-25
In visual search, context information can serve as a cue to guide attention to the target location. When observers repeatedly encounter displays with identical target-distractor arrangements, reaction times (RTs) are faster for repeated relative to nonrepeated displays, the latter containing novel configurations. This effect has been termed "contextual cueing." The present study asked whether information about the target location in repeated displays is "explicit" (or "conscious") in nature. To examine this issue, observers performed a test session (after an initial training phase in which RTs to repeated and nonrepeated displays were measured) in which the search stimuli were presented briefly and terminated by visual masks; following this, observers had to make a target localization response (with accuracy as the dependent measure) and indicate their visual experience and confidence associated with the localization response. The data were examined at the level of individual displays, i.e., in terms of whether or not a repeated display actually produced contextual cueing. The results were that (a) contextual cueing was driven by only a very small number of about four actually learned configurations; (b) localization accuracy was increased for learned relative to nonrepeated displays; and (c) both consciousness measures were enhanced for learned compared to nonrepeated displays. It is concluded that contextual cueing is driven by only a few repeated displays and the ability to locate the target in these displays is associated with increased visual experience.
Goal-driven modulation of stimulus-driven attentional capture in multiple-cue displays.
Richard, Christian M; Wright, Richard D; Ward, Lawrence M
2003-08-01
Six location-cuing experiments were conducted to examine the goal-driven control of attentional capture in multiple-cue displays. In most of the experiments, the cue display consisted of the simultaneous presentation of a red direct cue that was highly predictive of the target location (the unique cue) and three gray direct cues (the standard cues) that were not predictive of the location. The results indicated that although target responses were faster at all cued locations relative to uncued locations, they were significantly faster at the unique-cue location than at the standard-cue locations. Other results suggest that the faster responses produced by direct cues may be associated with two different components: an attention-related component that can be modulated by goal-driven factors and a nonattentional component that occurs in parallel at multiple direct-cue locations and is minimally affected by the same goal-driven factors.
Working Memory Enhances Visual Perception: Evidence from Signal Detection Analysis
ERIC Educational Resources Information Center
Soto, David; Wriglesworth, Alice; Bahrami-Balani, Alex; Humphreys, Glyn W.
2010-01-01
We show that perceptual sensitivity to visual stimuli can be modulated by matches between the contents of working memory (WM) and stimuli in the visual field. Observers were presented with an object cue (to hold in WM or to merely attend) and subsequently had to identify a brief target presented within a colored shape. The cue could be…
Visual cues and listening effort: individual variability.
Picou, Erin M; Ricketts, Todd A; Hornsby, Benjamin W Y
2011-10-01
To investigate the effect of visual cues on listening effort as well as whether predictive variables such as working memory capacity (WMC) and lipreading ability affect the magnitude of listening effort. Twenty participants with normal hearing were tested using a paired-associates recall task in 2 conditions (quiet and noise) and 2 presentation modalities (audio only [AO] and auditory-visual [AV]). Signal-to-noise ratios were adjusted to provide matched speech recognition across audio-only and AV noise conditions. Also measured were subjective perceptions of listening effort and 2 predictive variables: (a) lipreading ability and (b) WMC. Objective and subjective results indicated that listening effort increased in the presence of noise, but on average the addition of visual cues did not significantly affect the magnitude of listening effort. Although there was substantial individual variability, on average participants who were better lipreaders or had larger WMCs demonstrated reduced listening effort in noise in AV conditions. Overall, the results support the hypothesis that integrating auditory and visual cues requires cognitive resources in some participants. The data indicate that low lipreading ability or low WMC is associated with relatively effortful integration of auditory and visual information in noise.
Zhao, Yan; Nonnekes, Jorik; Storcken, Erik J M; Janssen, Sabine; van Wegen, Erwin E H; Bloem, Bastiaan R; Dorresteijn, Lucille D A; van Vugt, Jeroen P P; Heida, Tjitske; van Wezel, Richard J A
2016-06-01
New mobile technologies like smartglasses can deliver external cues that may improve gait in people with Parkinson's disease in their natural environment. However, the potential of these devices must first be assessed in controlled experiments. Therefore, we evaluated rhythmic visual and auditory cueing in a laboratory setting with a custom-made application for the Google Glass. Twelve participants (mean age = 66.8; mean disease duration = 13.6 years) were tested at end of dose. We compared several key gait parameters (walking speed, cadence, stride length, and stride length variability) and freezing of gait for three types of external cues (metronome, flashing light, and optic flow) and a control condition (no-cue). For all cueing conditions, the subjects completed several walking tasks of varying complexity. Seven inertial sensors attached to the feet, legs and pelvis captured motion data for gait analysis. Two experienced raters scored the presence and severity of freezing of gait using video recordings. User experience was evaluated through a semi-open interview. During cueing, a more stable gait pattern emerged, particularly on complicated walking courses; however, freezing of gait did not significantly decrease. The metronome was more effective than rhythmic visual cues and most preferred by the participants. Participants were overall positive about the usability of the Google Glass and willing to use it at home. Thus, smartglasses like the Google Glass could be used to provide personalized mobile cueing to support gait; however, in its current form, auditory cues seemed more effective than rhythmic visual cues.
Field Assessment of the Predation Risk - Food Availability Trade-Off in Crab Megalopae Settlement
Tapia-Lewin, Sebastián; Pardo, Luis Miguel
2014-01-01
Settlement is a key process for meroplanktonic organisms, as it determines the distribution of adult populations. Starvation and predation are two of the main mortality causes during this period; therefore, settlement tends to be optimized in microhabitats with high food availability and low predator density. Furthermore, brachyuran megalopae actively select favorable habitats for settlement via chemical, visual and/or tactile cues. The main objective of this study was to assess the settlement of Metacarcinus edwardsii and Cancer plebejus under different combinations of food availability levels and predator presence. We determined, in the field, which factor is of greater relative importance when choosing a suitable microhabitat for settling. Passive larval collectors were deployed, crossing different scenarios of food availability and predator presence. We also explored whether megalopae actively choose predator-free substrates in response to visual and/or chemical cues. We tested the response to combined visual and chemical cues and to each individually. Data were tested using a two-way factorial ANOVA. In both species, food did not cause a significant effect on settlement success, but predator presence did; therefore there was no trade-off in this case, and megalopae respond strongly to predation risk by active aversion. Larvae of M. edwardsii responded to chemical and visual cues simultaneously, but there was no response to either cue by itself. Statistically, C. plebejus did not exhibit a differential response to cues, but reacted with a strong tendency similar to that of M. edwardsii. We concluded that crab megalopae actively select predator-free microhabitats, independently of food availability, using chemical and visual cues combined. The findings of this study highlight the great relevance of predation to the settlement process and recruitment of marine invertebrates with complex life cycles. PMID:24748151
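The two-way factorial analysis reported above could be run along these lines; the data file and the column names (settlers, food, predator) are hypothetical, not the study's:

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.read_csv("settlement_counts.csv")    # hypothetical collector-level data
    fit = smf.ols("settlers ~ food * predator", data=df).fit()
    print(sm.stats.anova_lm(fit, typ=2))         # two-way factorial ANOVA table

The food:predator interaction term is where a predation-risk/food-availability trade-off would show up; the abstract reports a predator main effect instead.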
Floral Reward, Advertisement and Attractiveness to Honey Bees in Dioecious Salix caprea
Dötterl, Stefan; Glück, Ulrike; Jürgens, Andreas; Woodring, Joseph; Aas, Gregor
2014-01-01
In dioecious, zoophilous plants potential pollinators have to be attracted to both sexes and switch between individuals of both sexes for pollination to occur. It has often been suggested that males and females require different numbers of visits for maximum reproductive success because male fertility is more likely limited by access to mates, whereas female fertility is rather limited by resource availability. According to sexual selection theory, males therefore should invest more in pollinator attraction (advertisement, reward) than females. However, our knowledge of the sex-specific investment in floral rewards and advertisement, and its effects on pollinator behaviour, is limited. Here, we use an approach that includes chemical, spectrophotometric, and behavioural studies i) to elucidate differences in floral nectar reward and advertisement (visual, olfactory cues) in dioecious sallow, Salix caprea, ii) to determine the relative importance of visual and olfactory floral cues in attracting honey bee pollinators, and iii) to test for differential attractiveness of female and male inflorescence cues to honey bees. Nectar amount and sugar concentration are comparable between the sexes, but sugar composition varies. Olfactory sallow cues are more attractive to honey bees than visual cues; however, a combination of both cues elicits the strongest behavioural responses in bees. Male flowers, owing to their yellow pollen, are more colourful and emit a greater amount of scent than female flowers. Honey bees prefer the visual but not the olfactory display of males over that of females. In all, the data of our multifaceted study are consistent with sexual selection theory and provide novel insights into how the model organism honey bee uses visual and olfactory floral cues for locating host plants. PMID:24676333
Visual cues that are effective for contextual saccade adaptation
Azadi, Reza
2014-01-01
The accuracy of saccades, as maintained by saccade adaptation, has been shown to be context dependent: saccades can take on different amplitudes for the same retinal displacement depending on motor context, such as orbital starting location. There is conflicting evidence as to whether purely visual cues also affect contextual saccade adaptation and, if so, what function this might serve. We tested which visual cues might evoke contextual adaptation. Over 5 experiments, 78 naive subjects made saccades to circularly moving targets, which stepped outward or inward during the saccade depending on target movement direction, speed, or color and shape. To test whether the movement or context postsaccade was critical, we stopped the postsaccade target motion (experiment 4) or neutralized the contexts by equating postsaccade target speed to an intermediate value (experiment 5). We found contextual adaptation in all conditions except those defined by color and shape. We conclude that some, but not all, visual cues before the saccade are sufficient for contextual adaptation. We conjecture that this visual contextuality functions to allow for different motor states for different coordinated movement patterns, such as coordinated saccade and pursuit motor planning. PMID:24647429
Joint representation of translational and rotational components of optic flow in parietal cortex
Sunkara, Adhira; DeAngelis, Gregory C.; Angelaki, Dora E.
2016-01-01
Terrestrial navigation naturally involves translations within the horizontal plane and eye rotations about a vertical (yaw) axis to track and fixate targets of interest. Neurons in the macaque ventral intraparietal (VIP) area are known to represent heading (the direction of self-translation) from optic flow in a manner that is tolerant to rotational visual cues generated during pursuit eye movements. Previous studies have also reported that eye rotations modulate the response gain of heading tuning curves in VIP neurons. We tested the hypothesis that VIP neurons simultaneously represent both heading and horizontal (yaw) eye rotation velocity by measuring heading tuning curves for a range of rotational velocities of either real or simulated eye movements. Three findings support the hypothesis of a joint representation. First, we show that rotation velocity selectivity based on gain modulations of visual heading tuning is similar to that measured during pure rotations. Second, gain modulations of heading tuning are similar for self-generated eye rotations and visually simulated rotations, indicating that the representation of rotation velocity in VIP is multimodal, driven by both visual and extraretinal signals. Third, we show that roughly one-half of VIP neurons jointly represent heading and rotation velocity in a multiplicatively separable manner. These results provide the first evidence, to our knowledge, for a joint representation of translation direction and rotation velocity in parietal cortex and show that rotation velocity can be represented based on visual cues, even in the absence of efference copy signals. PMID:27095846
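"Multiplicatively separable" above means that the joint tuning factors into independent heading and rotation terms; in our notation (not the authors'), a response r to heading h and rotation velocity ω of the form

    r(h, \omega) \;=\; g(h)\, m(\omega)

so that rotation rescales the gain of the heading tuning curve without changing its shape.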
Prey Capture Behavior Evoked by Simple Visual Stimuli in Larval Zebrafish
Bianco, Isaac H.; Kampff, Adam R.; Engert, Florian
2011-01-01
Understanding how the nervous system recognizes salient stimuli in the environment and selects and executes the appropriate behavioral responses is a fundamental question in systems neuroscience. To facilitate the neuroethological study of visually guided behavior in larval zebrafish, we developed “virtual reality” assays in which precisely controlled visual cues can be presented to larvae whilst their behavior is automatically monitored using machine vision algorithms. Freely swimming larvae responded to moving stimuli in a size-dependent manner: they directed multiple low amplitude orienting turns (∼20°) toward small moving spots (1°) but reacted to larger spots (10°) with high-amplitude aversive turns (∼60°). The tracking of small spots led us to examine how larvae respond to prey during hunting routines. By analyzing movie sequences of larvae hunting paramecia, we discovered that all prey capture routines commence with eye convergence and larvae maintain their eyes in a highly converged position for the duration of the prey-tracking and capture swim phases. We adapted our virtual reality assay to deliver artificial visual cues to partially restrained larvae and found that small moving spots evoked convergent eye movements and J-turns of the tail, which are defining features of natural hunting. We propose that eye convergence represents the engagement of a predatory mode of behavior in larval fish and serves to increase the region of binocular visual space to enable stereoscopic targeting of prey. PMID:22203793
Retro-dimension-cue benefit in visual working memory
Ye, Chaoxiong; Hu, Zhonghua; Ristaniemi, Tapani; Gendron, Maria; Liu, Qiang
2016-01-01
In visual working memory (VWM) tasks, participants’ performance can be improved by a retro-object-cue. However, previous studies have not investigated whether participants’ performance can also be improved by a retro-dimension-cue. Three experiments investigated this issue. We used a recall task with a retro-dimension-cue in all experiments. In Experiment 1, we found benefits from retro-dimension-cues compared to neutral cues. This retro-dimension-cue benefit is reflected in an increased probability of reporting the target, but not in the probability of reporting the non-target, as well as increased precision with which this item is remembered. Experiment 2 replicated the retro-dimension-cue benefit and showed that the length of the blank interval after the cue disappeared did not influence recall performance. Experiment 3 replicated the results of Experiment 2 with a lower memory load. Our studies provide evidence that there is a robust retro-dimension-cue benefit in VWM. Participants can use internal attention to flexibly allocate cognitive resources to a particular dimension of memory representations. The results also support the feature-based storing hypothesis. PMID:27774983
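The quantities reported above (probability of reporting the target or a non-target, and the precision of the report) read naturally as parameters of the standard swap mixture model for continuous recall; a sketch of that model in our notation, not necessarily the authors' exact fitting procedure. With target feature value θ, m non-target values θ*_i, and φ_κ the von Mises density with concentration κ (the precision), the distribution of responses θ̂ is

    p(\hat{\theta}) = \alpha\,\phi_{\kappa}\!\left(\hat{\theta}-\theta\right)
                    + \beta\,\frac{1}{m}\sum_{i=1}^{m}\phi_{\kappa}\!\left(\hat{\theta}-\theta_{i}^{*}\right)
                    + \gamma\,\frac{1}{2\pi}

with α + β + γ = 1 giving the target-report, non-target-report, and guessing rates.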
Kasties, Nils; Starosta, Sarah; Güntürkün, Onur; Stüttgen, Maik C.
2016-01-01
Animals exploit visual information to identify objects, form stimulus-reward associations, and prepare appropriate behavioral responses. The nidopallium caudolaterale (NCL), an associative region of the avian endbrain, contains neurons exhibiting prominent response modulation during presentation of reward-predicting visual stimuli, but it is unclear whether neural activity represents valuation signals, stimulus properties, or sensorimotor contingencies. To test the hypothesis that NCL neurons represent stimulus value, we subjected pigeons to a Pavlovian sign-tracking paradigm in which visual cues predicted rewards differing in magnitude (large vs. small) and delay to presentation (short vs. long). Subjects’ strength of conditioned responding to visual cues reliably differentiated between predicted reward types and thus indexed valuation. The majority of NCL neurons discriminated between visual cues, with discriminability peaking shortly after stimulus onset and being maintained at lower levels throughout the stimulus presentation period. However, while some cells’ firing rates correlated with reward value, such neurons were not more frequent than expected by chance. Instead, neurons formed discernible clusters which differed in their preferred visual cue. We propose that this activity pattern constitutes a prerequisite for using visual information in more complex situations, e.g., those requiring value-based choices. PMID:27762287
Examining the Effect of Age on Visual-Vestibular Self-Motion Perception Using a Driving Paradigm.
Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L
2017-05-01
Previous psychophysical research has examined how younger adults and non-human primates integrate visual and vestibular cues to perceive self-motion. However, there is much to be learned about how multisensory self-motion perception changes with age, and how these changes affect performance on everyday tasks involving self-motion. Evidence suggests that older adults display heightened multisensory integration compared with younger adults; however, few previous studies have examined this for visual-vestibular integration. To explore age differences in the way that visual and vestibular cues contribute to self-motion perception, we had younger and older participants complete a basic driving task containing visual and vestibular cues. We compared their performance against a previously established control group that experienced visual cues alone. Performance measures included speed, speed variability, and lateral position. Vestibular inputs resulted in more precise speed control among older adults, but not younger adults, when traversing curves. Older adults demonstrated more variability in lateral position when vestibular inputs were available versus when they were absent. These observations align with previous evidence of age-related differences in multisensory integration and demonstrate that they may extend to visual-vestibular integration. These findings may have implications for vehicle and simulator design when considering older users.
Retrospective attention enhances visual working memory in the young but not the old: an ERP study
Duarte, Audrey; Hearons, Patricia; Jiang, Yashu; Delvin, Mary Courtney; Newsome, Rachel N.; Verhaeghen, Paul
2013-01-01
Behavioral evidence from the young suggests spatial cues that orient attention toward task relevant items in visual working memory (VWM) enhance memory capacity. Whether older adults can also use retrospective cues (“retro-cues”) to enhance VWM capacity is unknown. In the current event-related potential (ERP) study, young and old adults performed a VWM task in which spatially informative retro-cues were presented during maintenance. Young but not older adults’ VWM capacity benefitted from retro-cueing. The contralateral delay activity (CDA) ERP index of VWM maintenance was attenuated after the retro-cue, which effectively reduced the impact of memory load. CDA amplitudes were reduced prior to retro-cue onset in the old only. Despite a preserved ability to delete items from VWM, older adults may be less able to use retrospective attention to enhance memory capacity when expectancy of impending spatial cues disrupts effective VWM maintenance. PMID:23445536
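For readers unfamiliar with the measure, the CDA is conventionally quantified as the ERP amplitude contralateral minus ipsilateral to the memorized hemifield at posterior electrode pairs during the retention interval. A minimal sketch, with a hypothetical data layout rather than the authors' code:

```python
import numpy as np

def cda_amplitude(erp, left_trials, right_trials, chan_l, chan_r, win):
    """Mean contralateral-minus-ipsilateral delay activity.
    erp: array of shape (n_trials, n_channels, n_samples).
    left_trials / right_trials: indices of remember-left / remember-right trials.
    chan_l / chan_r: a posterior electrode pair (e.g., PO7/PO8).
    win: sample slice covering the retention interval."""
    # Contralateral = right-hemisphere channel on remember-left trials, and vice versa.
    contra = np.concatenate([erp[left_trials][:, chan_r, win],
                             erp[right_trials][:, chan_l, win]])
    ipsi = np.concatenate([erp[left_trials][:, chan_l, win],
                           erp[right_trials][:, chan_r, win]])
    return (contra - ipsi).mean()   # in microvolts; typically negative
```

A smaller (less negative) value after the retro-cue is what the authors interpret as a reduced effective memory load.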
Harrison, Neil R; Witheridge, Sian; Makin, Alexis; Wuerger, Sophie M; Pegna, Alan J; Meyer, Georg F
2015-11-01
Motion is represented by low-level signals, such as size-expansion in vision or loudness changes in the auditory modality. The visual and auditory signals from the same object or event may be integrated and facilitate detection. We explored behavioural and electrophysiological correlates of congruent and incongruent audio-visual depth motion in conditions where auditory level changes, visual expansion, and visual disparity cues were manipulated. In Experiment 1 participants discriminated auditory motion direction whilst viewing looming or receding, 2D or 3D, visual stimuli. Responses were faster and more accurate for congruent than for incongruent audio-visual cues, and the congruency effect (i.e., the difference between incongruent and congruent conditions) was larger for visual 3D cues compared to 2D cues. In Experiment 2, event-related potentials (ERPs) were collected during presentation of the 2D and 3D, looming and receding, audio-visual stimuli, while participants detected an infrequent deviant sound. Our main finding was that audio-visual congruity was affected by retinal disparity at an early processing stage (135-160 ms) over occipito-parietal scalp. Topographic analyses suggested that similar brain networks were activated for the 2D and 3D congruity effects, but that cortical responses were stronger in the 3D condition. Differences between congruent and incongruent conditions were observed between 140-200 ms, 220-280 ms, and 350-500 ms after stimulus onset. Copyright © 2015 Elsevier Ltd. All rights reserved.
Oh, Hwamee; Leung, Hoi-Chung
2010-02-01
In this fMRI study, we investigated prefrontal cortex (PFC) and visual association regions during selective information processing. We recorded behavioral responses and neural activity during a delayed recognition task with a cue presented during the delay period. A specific cue ("Face" or "Scene") was used to indicate which one of the two initially viewed pictures of a face and a scene would be tested at the end of a trial, whereas a nonspecific cue ("Both") was used as control. As expected, the specific cues facilitated behavioral performance (faster response times) compared to the nonspecific cue. A postexperiment memory test showed that the items cued to remember were better recognized than those not cued. The fMRI results showed largely overlapped activations across the three cue conditions in dorsolateral and ventrolateral PFC, dorsomedial PFC, posterior parietal cortex, ventral occipito-temporal cortex, dorsal striatum, and pulvinar nucleus. Among those regions, dorsomedial PFC and inferior occipital gyrus remained active during the entire postcue delay period. Differential activity was mainly found in the association cortices. In particular, the parahippocampal area and posterior superior parietal lobe showed significantly enhanced activity during the postcue period of the scene condition relative to the Face and Both conditions. No regions showed differentially greater responses to the face cue. Our findings suggest that a better representation of visual information in working memory may depend on enhancing the more specialized visual association areas or their interaction with PFC.
Differential responses in dorsal visual cortex to motion and disparity depth cues
Arnoldussen, David M.; Goossens, Jeroen; van den Berg, Albert V.
2013-01-01
We investigated how interactions between monocular motion parallax and binocular cues to depth vary in human motion areas for wide-field visual motion stimuli (110 × 100°). We used fMRI with an extensive 2 × 3 × 2 factorial blocked design in which we combined two types of self-motion (translational motion and translational + rotational motion), with three categories of motion inflicted by the degree of noise (self-motion, distorted self-motion, and multiple object-motion), and two different view modes of the flow patterns (stereo and synoptic viewing). Interactions between disparity and motion category revealed distinct contributions to self- and object-motion processing in 3D. For cortical areas V6 and CSv, but not the anterior part of MT+ with bilateral visual responsiveness (MT+/b), we found a disparity-dependent effect of rotational flow and noise: When self-motion perception was degraded by adding rotational flow and moderate levels of noise, the BOLD responses were reduced compared with translational self-motion alone, but this reduction was cancelled by adding stereo information which also rescued the subject's self-motion percept. At high noise levels, when the self-motion percept gave way to a swarm of moving objects, the BOLD signal strongly increased compared to self-motion in areas MT+/b and V6, but only for stereo in the latter. BOLD response did not increase for either view mode in CSv. These different response patterns indicate different contributions of areas V6, MT+/b, and CSv to the processing of self-motion perception and the processing of multiple independent motions. PMID:24339808
Holmes, Emma; Kitterick, Padraig T; Summerfield, A Quentin
2018-04-25
Endogenous attention is typically studied by presenting instructive cues in advance of a target stimulus array. For endogenous visual attention, task performance improves as the duration of the cue-target interval increases up to 800 ms. Less is known about how endogenous auditory attention unfolds over time or the mechanisms by which an instructive cue presented in advance of an auditory array improves performance. The current experiment used five cue-target intervals (0, 250, 500, 1,000, and 2,000 ms) to compare four hypotheses for how preparatory attention develops over time in a multi-talker listening task. Young adults were cued to attend to a target talker who spoke in a mixture of three talkers. Visual cues indicated the target talker's spatial location or their gender. Participants directed attention to location and gender simultaneously ("objects") at all cue-target intervals. Participants were consistently faster and more accurate at reporting words spoken by the target talker when the cue-target interval was 2,000 ms than 0 ms. In addition, the latency of correct responses progressively shortened as the duration of the cue-target interval increased from 0 to 2,000 ms. These findings suggest that the mechanisms involved in preparatory auditory attention develop gradually over time, taking at least 2,000 ms to reach optimal configuration, yet providing cumulative improvements in speech intelligibility as the duration of the cue-target interval increases from 0 to 2,000 ms. These results demonstrate an improvement in performance for cue-target intervals longer than those that have been reported previously in the visual or auditory modalities.
NASA Astrophysics Data System (ADS)
Rogowitz, Bernice E.; Rabenhorst, David A.; Gerth, John A.; Kalin, Edward B.
1996-04-01
This paper describes a set of visual techniques, based on principles of human perception and cognition, which can help users analyze and develop intuitions about tabular data. Collections of tabular data are widely available, including, for example, multivariate time series data, customer satisfaction data, stock market performance data, multivariate profiles of companies and individuals, and scientific measurements. In our approach, we show how visual cues can help users perform a number of data mining tasks, including identifying correlations and interaction effects, finding clusters and understanding the semantics of cluster membership, identifying anomalies and outliers, and discovering multivariate relationships among variables. These cues are derived from psychological studies on perceptual organization, visual search, perceptual scaling, and color perception. These visual techniques are presented as a complement to the statistical and algorithmic methods more commonly associated with these tasks, and provide an interactive interface for the human analyst.
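As an illustration of the kinds of perceptual encodings the paper advocates (position for correlation, hue for category membership, size for a third variable), a minimal pandas/matplotlib sketch; the file and column names are invented for the example:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical multivariate table of company profiles.
df = pd.read_csv("companies.csv")

fig, ax = plt.subplots()
ax.scatter(df["revenue"], df["growth"],                          # position: 2 variables
           c=df["sector_id"], cmap="tab10",                      # hue: cluster/category
           s=20 + 80 * df["employees"] / df["employees"].max(),  # size: 3rd variable
           alpha=0.7)
ax.set_xlabel("revenue")
ax.set_ylabel("growth")
plt.show()
```

Outliers and interaction effects then become visible as perceptual anomalies (isolated points, color or size mismatches) rather than requiring explicit statistical tests.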
Contextual cueing: implicit learning and memory of visual context guides spatial attention.
Chun, M M; Jiang, Y
1998-06-01
Global context plays an important, but poorly understood, role in visual tasks. This study demonstrates that a robust memory for visual context exists to guide spatial attention. Global context was operationalized as the spatial layout of objects in visual search displays. Half of the configurations were repeated across blocks throughout the entire session, and targets appeared within consistent locations in these arrays. Targets appearing in learned configurations were detected more quickly. This newly discovered form of search facilitation is termed contextual cueing. Contextual cueing is driven by incidentally learned associations between spatial configurations (context) and target locations. This benefit was obtained despite chance performance for recognizing the configurations, suggesting that the memory for context was implicit. The results show how implicit learning and memory of visual context can guide spatial attention towards task-relevant aspects of a scene.
Demonstrating the Potential for Dynamic Auditory Stimulation to Contribute to Motion Sickness
Keshavarz, Behrang; Hettinger, Lawrence J.; Kennedy, Robert S.; Campos, Jennifer L.
2014-01-01
Auditory cues can create the illusion of self-motion (vection) in the absence of visual or physical stimulation. The present study aimed to determine whether auditory cues alone can also elicit motion sickness and how auditory cues contribute to motion sickness when added to visual motion stimuli. Twenty participants were seated in front of a curved projection display and were exposed to a virtual scene that constantly rotated around the participant's vertical axis. The virtual scene contained either visual-only, auditory-only, or a combination of corresponding visual and auditory cues. All participants performed all three conditions in a counterbalanced order. Participants tilted their heads alternately towards the right or left shoulder in all conditions during stimulus exposure in order to create pseudo-Coriolis effects and to maximize the likelihood for motion sickness. Measurements of motion sickness (onset, severity), vection (latency, strength, duration), and postural steadiness (center of pressure) were recorded. Results showed that adding auditory cues to the visual stimuli did not, on average, affect motion sickness and postural steadiness, but it did reduce vection onset times and increased vection strength compared to pure visual or pure auditory stimulation. Eighteen of the 20 participants reported at least slight motion sickness in the two conditions including visual stimuli. More interestingly, six participants also reported slight motion sickness during pure auditory stimulation and two of the six participants stopped the pure auditory test session due to motion sickness. The present study is the first to demonstrate that motion sickness may be caused by pure auditory stimulation, which we refer to as “auditorily induced motion sickness”. PMID:24983752
Lu, Jing; Yang, Hua; He, Hui; Jeon, Seun; Hou, Changyue; Evans, Alan C.; Yao, Dezhong
2017-01-01
The multiple-demand (MD) system has proven to be associated with creating structured mental programs in comprehensive behaviors, but the functional mechanisms of this system have not been clarified in the musical domain. In this study, we explored the hypothesis that the MD system is involved in a comprehensive music-related behavior known as musical improvisation. Under a functional magnetic resonance imaging (fMRI) paradigm, 29 composers were recruited to improvise melodies through visual imagery tasks according to familiar and unfamiliar cues. We found that the main regions of the MD system were significantly activated during both musical improvisation conditions. However, only a greater involvement of the intraparietal sulcus (IPS) within the MD system was shown when improvising with unfamiliar cues. Our results revealed that the MD system strongly participated in musical improvisation through processing the novelty of melodies, working memory, and attention. In particular, improvising with unfamiliar cues required more musical transposition manipulations. Moreover, both functional and structural analyses indicated evidence of neuroplasticity in MD regions that could be associated with musical improvisation training. These findings can help unveil the functional mechanisms of the MD system in musical cognition, as well as improve our understanding of musical improvisation. PMID:29311776
Beck, Valerie M; Hollingworth, Andrew
2017-02-01
The content of visual working memory (VWM) guides attention, but whether this interaction is limited to a single VWM representation or functional for multiple VWM representations is under debate. To test this issue, we developed a gaze-contingent search paradigm to directly manipulate selection history and examine the competition between multiple cue-matching saccade target objects. Participants first saw a dual-color cue followed by two pairs of colored objects presented sequentially. For each pair, participants selectively fixated an object that matched one of the cued colors. Critically, for the second pair, the cued color from the first pair was presented either with a new distractor color or with the second cued color. In the latter case, if two cued colors in VWM interact with selection simultaneously, we expected the second cued color object to generate substantial competition for selection, even though the first cued color was used to guide attention in the immediately previous pair. Indeed, in the second pair, selection probability of the first cued color was substantially reduced in the presence of the second cued color. This competition between cue-matching objects provides strong evidence that both VWM representations interacted simultaneously with selection. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
ERIC Educational Resources Information Center
Srinivasan, Ravindra J.; Massaro, Dominic W.
2003-01-01
Examined the processing of potential auditory and visual cues that differentiate statements from echoic questions. Found that both auditory and visual cues reliably conveyed statement and question intonation, were successfully synthesized, and generalized to other utterances. (Author/VWL)
Vestibular-visual interactions in flight simulators
NASA Technical Reports Server (NTRS)
Clark, B.
1977-01-01
The following research work is reported: (1) vestibular-visual interactions; (2) flight management and crew system interactions; (3) peripheral cue utilization in simulation technology; (4) control of signs and symptoms of motion sickness; (5) auditory cue utilization in flight simulators, and (6) vestibular function: Animal experiments.
Cue-induced brain activity in pathological gamblers.
Crockford, David N; Goodyear, Bradley; Edwards, Jodi; Quickfall, Jeremy; el-Guebaly, Nady
2005-11-15
Previous studies using functional magnetic resonance imaging (fMRI) have identified differential brain activity in healthy subjects performing gambling tasks and in pathological gambling (PG) subjects when exposed to motivational and emotional predecessors for gambling as well as during gambling or response inhibition tasks. The goal of the present study was to determine if PG subjects exhibit differential brain activity when exposed to visual gambling cues. Ten male DSM-IV-TR PG subjects and 10 matched healthy control subjects underwent fMRI during visual presentations of gambling-related video alternating with video of nature scenes. Pathological gambling subjects and control subjects exhibited overlap in areas of brain activity in response to the visual gambling cues; however, compared with control subjects, PG subjects exhibited significantly greater activity in the right dorsolateral prefrontal cortex (DLPFC), including the inferior and medial frontal gyri, the right parahippocampal gyrus, and left occipital cortex, including the fusiform gyrus. Pathological gambling subjects also reported a significant increase in mean craving for gambling after the study. Post hoc analyses revealed a dissociation in visual processing stream (dorsal vs. ventral) activation by subject group and cue type. These findings may represent a component of cue-induced craving for gambling or conditioned behavior that could underlie pathological gambling.
Depth reversals in stereoscopic displays driven by apparent size
NASA Astrophysics Data System (ADS)
Sacher, Gunnar; Hayes, Amy; Thornton, Ian M.; Sereno, Margaret E.; Malony, Allen D.
1998-04-01
In visual scenes, depth information is derived from a variety of monocular and binocular cues. When in conflict, a monocular cue is sometimes able to override the binocular information. We examined the accuracy of relative depth judgments in orthographic, stereoscopic displays and found that perceived relative size can override binocular disparity as a depth cue in a situation where the relative size information is itself generated from disparity information, not from retinal size difference. A size discrimination task confirmed the assumption that disparity information was perceived and used to generate apparent size differences. The tendency for the apparent size cue to override disparity information can be modulated by varying the strength of the apparent size cue. In addition, an analysis of reaction times provides supporting evidence for this novel depth reversal effect. We believe that human perception must be regarded as an important component of stereoscopic applications. Hence, if applications are to be effective and accurate, it is necessary to take into account the richness and complexity of the human visual perceptual system that interacts with them. We discuss implications of this and similar research for human performance in virtual environments, the design of visual presentations for virtual worlds, and the design of visualization tools.
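For reference, the small-angle geometry underlying such displays: an object offset in depth by $\Delta d$ at viewing distance $D$ produces a binocular disparity of approximately

$$\eta \approx \frac{I \, \Delta d}{D^{2}},$$

where $I$ is the interocular separation. Under size-distance scaling, two stimuli with identical retinal size but different disparity-specified distances are perceived as differing in size, which is presumably how these displays generate apparent-size differences from disparity alone.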
Taylor, Ryan C.; Buchanan, Bryant W.; Doherty, Jessie L.
2007-01-01
Anuran amphibians have provided an excellent system for the study of animal communication and sexual selection. Studies of female mate choice in anurans, however, have focused almost exclusively on the role of auditory signals. In this study, we examined the effect of both auditory and visual cues on female choice in the squirrel treefrog. Our experiments used a two-choice protocol in which we varied male vocalization properties, visual cues, or both, to assess female preferences for the different cues. Females discriminated against high-frequency calls and expressed a strong preference for calls that contained more energy per unit time (faster call rate). Females expressed a preference for the visual stimulus of a model of a calling male when call properties at the two speakers were held the same. They also showed a significant attraction to a model possessing a relatively large lateral body stripe. These data indicate that visual cues do play a role in mate attraction in this nocturnal frog species. Furthermore, this study adds to a growing body of evidence that suggests that multimodal signals play an important role in sexual selection.
Modeling human perception and estimation of kinematic responses during aircraft landing
NASA Technical Reports Server (NTRS)
Schmidt, David K.; Silk, Anthony B.
1988-01-01
The thrust of this research is to determine estimation accuracy of aircraft responses based on observed cues. By developing the geometric relationships between the outside visual scene and the kinematics during landing, visual and kinesthetic cues available to the pilot were modeled. Both foveal and peripheral vision were examined. The objective was to first determine estimation accuracy in a variety of flight conditions, and second to ascertain which parameters are most important and lead to the best achievable accuracy in estimating the actual vehicle response. It was found that altitude estimation was very sensitive to the FOV. For this model, the motion cue of perceived vertical acceleration was shown to be less important than the visual cues. The inclusion of runway geometry in the visual scene increased estimation accuracy in most cases. Finally, it was shown that for this model, if the pilot has an incorrect internal model of the system kinematics, the choice of observations thought to be 'optimal' may in fact be suboptimal.
Age-related changes in event-cued visual and auditory prospective memory proper.
Uttl, Bob
2006-06-01
We rely upon prospective memory proper (ProMP) to bring back to awareness previously formed plans and intentions at the right place and time, and to enable us to act upon those plans and intentions. To examine age-related changes in ProMP, younger and older participants made decisions about simple stimuli (ongoing task) and at the same time were required to respond to a ProM cue, either a picture (visually cued ProM test) or a sound (auditorily cued ProM test), embedded in a simultaneously presented series of similar stimuli (either pictures or sounds). The cue display size or loudness increased across trials until a response was made. The cue size and cue loudness at the time of response indexed ProMP. The main results showed that both visual and auditory ProMP declined with age, and that such declines were mediated by age declines in sensory functions (visual acuity and hearing level), processing resources, working memory, intelligence, and ongoing task resource allocation.
Deployment of spatial attention to words in central and peripheral vision.
Ducrot, Stéphanie; Grainger, Jonathan
2007-05-01
Four perceptual identification experiments examined the influence of spatial cues on the recognition of words presented in central vision (with fixation on either the first or last letter of the target word) and in peripheral vision (displaced left or right of a central fixation point). Stimulus location had a strong effect on word identification accuracy in both central and peripheral vision, showing a strong right visual field superiority that did not depend on eccentricity. Valid spatial cues improved word identification for peripherally presented targets but were largely ineffective for centrally presented targets. Effects of spatial cuing interacted with visual field effects in Experiment 1, with valid cues reducing the right visual field superiority for peripherally located targets, but this interaction was shown to depend on the type of neutral cue. These results provide further support for the role of attentional factors in visual field asymmetries obtained with targets in peripheral vision but not with centrally presented targets.
Setting and changing feature priorities in visual short-term memory.
Kalogeropoulou, Zampeta; Jagadeesh, Akshay V; Ohl, Sven; Rolfs, Martin
2017-04-01
Many everyday tasks require prioritizing some visual features over competing ones, both during the selection from the rich sensory input and while maintaining information in visual short-term memory (VSTM). Here, we show that observers can change priorities in VSTM when, initially, they attended to a different feature. Observers reported from memory the orientation of one of two spatially interspersed groups of black and white gratings. Using colored pre-cues (presented before stimulus onset) and retro-cues (presented after stimulus offset) predicting the to-be-reported group, we manipulated observers' feature priorities independently during stimulus encoding and maintenance, respectively. Valid pre-cues reliably increased observers' performance (reduced guessing, increased report precision) as compared to neutral ones; invalid pre-cues had the opposite effect. Valid retro-cues also consistently improved performance (by reducing random guesses), even if the unexpected group suddenly became relevant (invalid-valid condition). Thus, feature-based attention can reshape priorities in VSTM protecting information that would otherwise be forgotten.
Can You Hear That Peak? Utilization of Auditory and Visual Feedback at Peak Limb Velocity.
Loria, Tristan; de Grosbois, John; Tremblay, Luc
2016-09-01
At rest, the central nervous system combines and integrates multisensory cues to yield an optimal percept. When engaging in action, the relative weighing of sensory modalities has been shown to be altered. Because the timing of peak velocity is the critical moment in some goal-directed movements (e.g., overarm throwing), the current study sought to test whether visual and auditory cues are optimally integrated at that specific kinematic marker when it is the critical part of the trajectory. Participants performed an upper-limb movement in which they were required to reach their peak limb velocity when the right index finger intersected a virtual target (i.e., a flinging movement). Brief auditory, visual, or audiovisual feedback (i.e., 20 ms in duration) was provided to participants at peak limb velocity. Performance was assessed primarily through the resultant position of peak limb velocity and the variability of that position. Relative to when no feedback was provided, auditory feedback significantly reduced the resultant endpoint variability of the finger position at peak limb velocity. However, no such reductions were found for the visual or audiovisual feedback conditions. Further, providing both auditory and visual cues concurrently also failed to yield the theoretically predicted improvements in endpoint variability. Overall, the central nervous system can make significant use of an auditory cue but may not optimally integrate a visual and auditory cue at peak limb velocity, when peak velocity is the critical part of the trajectory.
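The "theoretically predicted improvements" refer to the standard maximum-likelihood cue-combination benchmark: for independent Gaussian estimates with variances $\sigma_A^2$ (auditory) and $\sigma_V^2$ (visual), the optimal combined estimate has variance

$$\sigma_{AV}^{2} = \frac{\sigma_A^{2}\,\sigma_V^{2}}{\sigma_A^{2} + \sigma_V^{2}} \le \min\left(\sigma_A^{2}, \sigma_V^{2}\right),$$

so audiovisual feedback should, in principle, produce lower endpoint variability than either modality alone; the reported data did not show this reduction.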
Pitts, Brandon J; Sarter, Nadine
2018-06-01
Objective: This research sought to determine whether people can perceive and process three nonredundant (and unrelated) signals in vision, hearing, and touch at the same time and how aging and concurrent task demands affect this ability. Background: Multimodal displays have been shown to improve multitasking and attention management; however, their potential limitations are not well understood. The majority of studies on multimodal information presentation have focused on the processing of only two concurrent and, most often, redundant cues by younger participants. Method: Two experiments were conducted in which younger and older adults detected and responded to a series of singles, pairs, and triplets of visual, auditory, and tactile cues in the absence (Experiment 1) and presence (Experiment 2) of an ongoing simulated driving task. Detection rates, response times, and driving task performance were measured. Results: Compared to younger participants, older adults showed longer response times and higher error rates in response to cues/cue combinations. Older participants often missed the tactile cue when three cues were combined. They sometimes falsely reported the presence of a visual cue when presented with a pair of auditory and tactile signals. Driving performance suffered most in the presence of cue triplets. Conclusion: People are more likely to miss information if more than two concurrent nonredundant signals are presented to different sensory channels. Application: The findings from this work help inform the design of multimodal displays and ensure their usefulness across different age groups and in various application domains.
Motor (but not auditory) attention affects syntactic choice.
Pokhoday, Mikhail; Scheepers, Christoph; Shtyrov, Yury; Myachykov, Andriy
2018-01-01
Understanding the determinants of syntactic choice in sentence production is a salient topic in psycholinguistics. Existing evidence suggests that syntactic choice results from an interplay between linguistic and non-linguistic factors, and a speaker's attention to the elements of a described event represents one such factor. Whereas multimodal accounts of attention suggest a role for different modalities in this process, existing studies examining attention effects in syntactic choice are primarily based on visual cueing paradigms. Hence, it remains unclear whether attentional effects on syntactic choice are limited to the visual modality or are indeed more general. This issue is addressed by the current study. Native English participants viewed and described line drawings of simple transitive events while their attention was directed to the location of the agent or the patient of the depicted event by means of either an auditory (monaural beep) or a motor (unilateral key press) lateral cue. Our results show an effect of cue location, with participants producing more passive-voice descriptions in the patient-cued conditions. Crucially, this cue location effect emerged in the motor-cue but not (or substantially less so) in the auditory-cue condition, as confirmed by a reliable interaction between cue location (agent vs. patient) and cue type (auditory vs. motor). Our data suggest that attentional effects on the speaker's syntactic choices are modality-specific and limited to the visual and motor, but not the auditory, domain.
Neural coding underlying the cue preference for celestial orientation
el Jundi, Basil; Warrant, Eric J.; Byrne, Marcus J.; Khaldy, Lana; Baird, Emily; Smolka, Jochen; Dacke, Marie
2015-01-01
Diurnal and nocturnal African dung beetles use celestial cues, such as the sun, the moon, and the polarization pattern, to roll dung balls along straight paths across the savanna. Although nocturnal beetles move in the same manner through the same environment as their diurnal relatives, they do so when light conditions are at least 1 million-fold dimmer. Here, we show, for the first time to our knowledge, that the celestial cue preference differs between nocturnal and diurnal beetles in a manner that reflects their contrasting visual ecologies. We also demonstrate how these cue preferences are reflected in the activity of compass neurons in the brain. At night, polarized skylight is the dominant orientation cue for nocturnal beetles. However, if we coerce them to roll during the day, they instead use a celestial body (the sun) as their primary orientation cue. Diurnal beetles, however, persist in using a celestial body for their compass, day or night. Compass neurons in the central complex of diurnal beetles are tuned only to the sun, whereas the same neurons in the nocturnal species switch exclusively to polarized light at lunar light intensities. Thus, these neurons encode the preferences for particular celestial cues and alter their weighting according to ambient light conditions. This flexible encoding of celestial cue preferences relative to the prevailing visual scenery provides a simple, yet effective, mechanism for enabling visual orientation at any light intensity. PMID:26305929
Ogden, Ruth S; Jones, Luke A
2009-05-01
The ability of the perturbation model (Jones & Wearden, 2003) to account for reference memory function in a visual temporal generalization task and auditory and visual reproduction tasks was examined. In all tasks the number of presentations of the standard was manipulated (1, 3, or 5), and its effect on performance was compared. In visual temporal generalization, the number of presentations of the standard did not affect the number of times the standard was correctly identified, nor did it affect the overall temporal generalization gradient. In auditory reproduction, there was no effect of the number of times the standard was presented on mean reproductions. In visual reproduction, mean reproductions were shorter when the standard was only presented once; however, this effect was reduced when a visual cue was provided before the first presentation of the standard. Whilst the results of all experiments are best accounted for by the perturbation model, there appears to be some attentional benefit to multiple presentations of the standard in visual reproduction.
Cognitive processes facilitated by contextual cueing: evidence from event-related brain potentials.
Schankin, Andrea; Schubö, Anna
2009-05-01
Finding a target in repeated search displays is faster than finding the same target in novel ones (contextual cueing). It is assumed that the visual context (the arrangement of the distracting objects) is used to guide attention efficiently to the target location. Alternatively, other factors, e.g., facilitation in early visual processing or in response selection, may play a role as well. In a contextual cueing experiment, participants' electrophysiological brain activity was recorded. Participants identified the target faster and more accurately in repeatedly presented displays. In this condition, the N2pc, a component reflecting the allocation of visual-spatial attention, was enhanced, indicating that attention was allocated more efficiently to those targets. However, response-related processes, reflected by the LRP, were also facilitated, indicating that guidance of attention cannot account for the entire contextual cueing benefit.
Central and peripheral vision loss differentially affects contextual cueing in visual search.
Geringswald, Franziska; Pollmann, Stefan
2015-09-01
Visual search for targets in repeated displays is more efficient than search for the same targets in random distractor layouts. Previous work has shown that this contextual cueing is severely impaired under central vision loss. Here, we investigated whether central vision loss, simulated with gaze-contingent displays, prevents the incidental learning of contextual cues or the expression of learning, that is, the guidance of search by learned target-distractor configurations. Visual search with a central scotoma reduced contextual cueing both with respect to search times and gaze parameters. However, when the scotoma was subsequently removed, contextual cueing was observed in a comparable magnitude as for controls who had searched without scotoma simulation throughout the experiment. This indicated that search with a central scotoma did not prevent incidental context learning, but interfered with search guidance by learned contexts. We discuss the role of visuospatial working memory load as a source of this interference. In contrast to central vision loss, peripheral vision loss was expected to prevent spatial configuration learning itself, because the restricted search window did not allow the integration of invariant local configurations with the global display layout. This expectation was confirmed in that visual search with a simulated peripheral scotoma eliminated contextual cueing not only in the initial learning phase with scotoma, but also in the subsequent test phase without scotoma. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
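A gaze-contingent scotoma of the kind simulated here simply re-renders the display on each frame with a mask locked to the current gaze position. A minimal, self-contained sketch of the masking step (our parameterization, not the authors' software):

```python
import numpy as np

def apply_scotoma(frame, gaze_xy, radius_px, central=True):
    """Simulate vision loss on an (H, W) grayscale image.
    central=True blanks a disc at the gaze position (central scotoma);
    central=False blanks everything outside it (peripheral loss)."""
    h, w = frame.shape
    yy, xx = np.ogrid[:h, :w]
    disc = (xx - gaze_xy[0]) ** 2 + (yy - gaze_xy[1]) ** 2 <= radius_px ** 2
    out = frame.copy()
    out[disc if central else ~disc] = frame.mean()  # uniform gray mask
    return out
```

In the actual experiments this masking step would be applied on every display refresh, with gaze coordinates streamed from an eye tracker.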
The Role of Color in Search Templates for Real-world Target Objects.
Nako, Rebecca; Smith, Tim J; Eimer, Martin
2016-11-01
During visual search, target representations (attentional templates) control the allocation of attention to template-matching objects. The activation of new attentional templates can be prompted by verbal or pictorial target specifications. We measured the N2pc component of the ERP as a temporal marker of attentional target selection to determine the role of color signals in search templates for real-world search target objects that are set up in response to word or picture cues. On each trial run, a word cue (e.g., "apple") was followed by three search displays that contained the cued target object among three distractors. The selection of the first target was based on the word cue only, whereas selection of the two subsequent targets could be controlled by templates set up after the first visual presentation of the target (picture cue). In different trial runs, search displays either contained objects in their natural colors or monochromatic objects. These two display types were presented in different blocks (Experiment 1) or in random order within each block (Experiment 2). RTs were faster, and target N2pc components emerged earlier for the second and third display of each trial run relative to the first display, demonstrating that pictures are more effective than word cues in guiding search. N2pc components were triggered more rapidly for targets in the second and third display in trial runs with colored displays. This demonstrates that when visual target attributes are fully specified by picture cues, the additional presence of color signals in target templates facilitates the speed with which attention is allocated to template-matching objects. No such selection benefits for colored targets were found when search templates were set up in response to word cues. Experiment 2 showed that color templates activated by word cues can even impair the attentional selection of noncolored targets. Results provide new insights into the status of color during the guidance of visual search for real-world target objects. Color is a powerful guiding feature when the precise visual properties of these objects are known but seems to be less important when search targets are specified by word cues.
Lower region: a new cue for figure-ground assignment.
Vecera, Shaun P; Vogel, Edward K; Woodman, Geoffrey F
2002-06-01
Figure-ground assignment is an important visual process; humans recognize, attend to, and act on figures, not backgrounds. There are many visual cues for figure-ground assignment. A new cue to figure-ground assignment, called lower region, is presented: Regions in the lower portion of a stimulus array appear more figurelike than regions in the upper portion of the display. This phenomenon was explored, and it was demonstrated that the lower-region preference is not influenced by contrast, eye movements, or voluntary spatial attention. It was found that the lower region is defined relative to the stimulus display, linking the lower-region preference to pictorial depth perception cues. The results are discussed in terms of the environmental regularities that this new figure-ground cue may reflect.
Takao, Saki; Yamani, Yusuke; Ariga, Atsunori
2018-01-01
The direction of gaze automatically and exogenously guides visual spatial attention, a phenomenon termed the gaze-cueing effect. Although this effect arises when the duration of stimulus onset asynchrony (SOA) between a non-predictive gaze cue and the target is relatively long, no empirical research has examined the factors underlying this extended cueing effect. Two experiments compared the gaze-cueing effect at longer SOAs (700 ms) in Japanese and American participants. Cross-cultural studies on cognition suggest that Westerners tend to use a context-independent analytical strategy to process visual environments, whereas Asians use a context-dependent holistic approach. We hypothesized that Japanese participants would not demonstrate the gaze-cueing effect at longer SOAs because they are more sensitive to contextual information, such as the knowledge that the direction of a gaze is not predictive. Furthermore, we hypothesized that American participants would demonstrate the gaze-cueing effect at the long SOAs because they tend to follow gaze direction whether it is predictive or not. In Experiment 1, American participants demonstrated the gaze-cueing effect at the long SOA, indicating that their attention was driven by the central non-predictive gaze direction regardless of the SOAs. In Experiment 2, Japanese participants demonstrated no gaze-cueing effect at the long SOA, suggesting that the Japanese participants exercised voluntary control of their attention, which inhibited the gaze-cueing effect with the long SOA. Our findings suggest that the control of visual spatial attention elicited by social stimuli systematically differs between American and Japanese individuals. PMID:29379457
Sensory modality of smoking cues modulates neural cue reactivity.
Yalachkov, Yavor; Kaiser, Jochen; Görres, Andreas; Seehaus, Arne; Naumer, Marcus J
2013-01-01
Behavioral experiments have demonstrated that the sensory modality of presentation modulates drug cue reactivity. The present study on nicotine addiction tested whether neural responses to smoking cues are modulated by the sensory modality of stimulus presentation. We measured brain activation using functional magnetic resonance imaging (fMRI) in 15 smokers and 15 nonsmokers while they viewed images of smoking paraphernalia and control objects and while they touched the same objects without seeing them. Haptically presented, smoking-related stimuli induced more pronounced neural cue reactivity than visual cues in the left dorsal striatum in smokers compared to nonsmokers. The severity of nicotine dependence correlated positively with the preference for haptically explored smoking cues in the left inferior parietal lobule/somatosensory cortex, right fusiform gyrus/inferior temporal cortex/cerebellum, hippocampus/parahippocampal gyrus, posterior cingulate cortex, and supplementary motor area. These observations are in line with the hypothesized role of the dorsal striatum for the expression of drug habits and the well-established concept of drug-related automatized schemata, since haptic perception is more closely linked to the corresponding object-specific action pattern than visual perception. Moreover, our findings demonstrate that with the growing severity of nicotine dependence, brain regions involved in object perception, memory, self-processing, and motor control exhibit an increasing preference for haptic over visual smoking cues. This difference was not found for control stimuli. Considering the sensory modality of the presented cues could serve to develop more reliable fMRI-specific biomarkers, more ecologically valid experimental designs, and more effective cue-exposure therapies of addiction.
Estimated capacity of object files in visual short-term memory is not improved by retrieval cueing.
Saiki, Jun; Miyatsuji, Hirofumi
2009-03-23
Visual short-term memory (VSTM) has been claimed to maintain three to five feature-bound object representations. Some results showing smaller capacity estimates for feature binding memory have been interpreted as the effects of interference in memory retrieval. However, change-detection tasks may not properly evaluate complex feature-bound representations such as triple conjunctions in VSTM. To understand the general type of feature-bound object representation, evaluation of triple conjunctions is critical. To test whether interference occurs in memory retrieval for complete object file representations in a VSTM task, we cued retrieval in novel paradigms that directly evaluate the memory for triple conjunctions, in comparison with a simple change-detection task. In our multiple object permanence tracking displays, observers monitored for a switch in feature combination between objects during an occlusion period, and we found that a retrieval cue provided no benefit with the triple conjunction tasks, but significant facilitation with the change-detection task, suggesting that low capacity estimates of object file memory in VSTM reflect a limit on maintenance, not retrieval.
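The "capacity estimates" at issue are conventionally derived from change-detection accuracy via Cowan's K. A minimal sketch of the standard formula (not the authors' code):

```python
def cowans_k(hit_rate, false_alarm_rate, set_size):
    """Cowan's K capacity estimate for a single-probe change-detection
    task: K = set_size * (hits - false alarms)."""
    return set_size * (hit_rate - false_alarm_rate)

# e.g., 85% hits and 10% false alarms with 4 objects -> K = 3.0
print(cowans_k(0.85, 0.10, 4))
```

Estimates of three to five objects from such tasks motivate the capacity claims discussed above.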
Pursey, Kirrilly M.; Stanwell, Peter; Callister, Robert J.; Brain, Katherine; Collins, Clare E.; Burrows, Tracy L.
2014-01-01
Emerging evidence from recent neuroimaging studies suggests that specific food-related behaviors contribute to the development of obesity. The aim of this review was to report the neural responses to visual food cues, as assessed by functional magnetic resonance imaging (fMRI), in humans of differing weight status. Published studies to 2014 were retrieved and included if they used visual food cues, studied humans >18 years old, reported weight status, and included fMRI outcomes. Sixty studies were identified that investigated the neural responses of healthy weight participants (n = 26), healthy weight compared to obese participants (n = 17), and weight-loss interventions (n = 12). High-calorie food images were used in the majority of studies (n = 36), however, image selection justification was only provided in 19 studies. Obese individuals had increased activation of reward-related brain areas including the insula and orbitofrontal cortex in response to visual food cues compared to healthy weight individuals, and this was particularly evident in response to energy dense cues. Additionally, obese individuals were more responsive to food images when satiated. Meta-analysis of changes in neural activation post-weight loss revealed small areas of convergence across studies in brain areas related to emotion, memory, and learning, including the cingulate gyrus, lentiform nucleus, and precuneus. Differential activation patterns to visual food cues were observed between obese, healthy weight, and weight-loss populations. Future studies require standardization of nutrition variables and fMRI outcomes to enable more direct comparisons between studies. PMID:25988110
Fetsch, Christopher R; Deangelis, Gregory C; Angelaki, Dora E
2010-05-01
The perception of self-motion is crucial for navigation, spatial orientation and motor control. In particular, estimation of one's direction of translation, or heading, relies heavily on multisensory integration in most natural situations. Visual and nonvisual (e.g., vestibular) information can be used to judge heading, but each modality alone is often insufficient for accurate performance. It is not surprising, then, that visual and vestibular signals converge frequently in the nervous system, and that these signals interact in powerful ways at the level of behavior and perception. Early behavioral studies of visual-vestibular interactions consisted mainly of descriptive accounts of perceptual illusions and qualitative estimation tasks, often with conflicting results. In contrast, cue integration research in other modalities has benefited from the application of rigorous psychophysical techniques, guided by normative models that rest on the foundation of ideal-observer analysis and Bayesian decision theory. Here we review recent experiments that have attempted to harness these so-called optimal cue integration models for the study of self-motion perception. Some of these studies used nonhuman primate subjects, enabling direct comparisons between behavioral performance and simultaneously recorded neuronal activity. The results indicate that humans and monkeys can integrate visual and vestibular heading cues in a manner consistent with optimal integration theory, and that single neurons in the dorsal medial superior temporal area show striking correlates of the behavioral effects. This line of research and other applications of normative cue combination models should continue to shed light on mechanisms of self-motion perception and the neuronal basis of multisensory integration.
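A minimal numerical sketch of the optimal-integration benchmark these studies test against (reliability-weighted averaging of heading estimates; the numbers are illustrative only):

```python
def integrate_heading(mu_vis, var_vis, mu_vest, var_vest):
    """Maximum-likelihood combination of two independent Gaussian
    heading estimates; weights are proportional to reliability (1/variance)."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_vest)
    mu = w_vis * mu_vis + (1 - w_vis) * mu_vest
    var = 1 / (1 / var_vis + 1 / var_vest)   # always <= min of the unimodal variances
    return mu, var

# e.g., visual heading 2.0 deg (variance 4.0), vestibular 6.0 deg (variance 9.0)
print(integrate_heading(2.0, 4.0, 6.0, 9.0))   # -> (approx. 3.23 deg, approx. 2.77 deg^2)
```

The behavioral signature of optimality is precisely this predicted variance reduction in bimodal relative to unimodal conditions.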
Cross-modal links among vision, audition, and touch in complex environments.
Ferris, Thomas K; Sarter, Nadine B
2008-02-01
This study sought to determine whether performance effects of cross-modal spatial links that were observed in earlier laboratory studies scale to more complex environments and need to be considered in multimodal interface design. It also revisits the unresolved issue of cross-modal cuing asymmetries. Previous laboratory studies employing simple cues, tasks, and/or targets have demonstrated that the efficiency of processing visual, auditory, and tactile stimuli is affected by the modality, lateralization, and timing of surrounding cues. Very few studies have investigated these cross-modal constraints in the context of more complex environments to determine whether they scale and how complexity affects the nature of cross-modal cuing asymmetries. A microworld simulation of battlefield operations with a complex task set and meaningful visual, auditory, and tactile stimuli was used to investigate cuing effects for all cross-modal pairings. Significant asymmetric performance effects of cross-modal spatial links were observed. Auditory cues shortened response latencies for collocated visual targets but visual cues did not do the same for collocated auditory targets. Responses to contralateral (rather than ipsilateral) targets were faster for tactually cued auditory targets and each visual-tactile cue-target combination, suggesting an inhibition-of-return effect. The spatial relationships between multimodal cues and targets significantly affect target response times in complex environments. The performance effects of cross-modal links and the observed cross-modal cuing asymmetries need to be examined in more detail and considered in future interface design. The findings from this study have implications for the design of multimodal and adaptive interfaces and for supporting attention management in complex, data-rich domains.
Dale, Corby L; Simpson, Gregory V; Foxe, John J; Luks, Tracy L; Worden, Michael S
2008-06-01
Brain-based models of visual attention hypothesize that attention-related benefits afforded to imperative stimuli occur via enhancement of neural activity associated with relevant spatial and non-spatial features. When relevant information is available in advance of a stimulus, anticipatory deployment processes are likely to facilitate allocation of attention to stimulus properties prior to its arrival. The current study recorded EEG from humans during a centrally-cued covert attention task. Cues indicated relevance of left or right visual field locations for an upcoming motion or orientation discrimination. During a 1 s delay between cue and S2, multiple attention-related events occurred at frontal, parietal and occipital electrode sites. Differences in anticipatory activity associated with the non-spatial task properties were found late in the delay, while spatially-specific modulation of activity occurred during both early and late periods and continued during S2 processing. The magnitude of anticipatory activity preceding the S2 at frontal scalp sites (and not occipital) was predictive of the magnitude of subsequent selective attention effects on the S2 event-related potentials observed at occipital electrodes. Results support the existence of multiple anticipatory attention-related processes, some with differing specificity for spatial and non-spatial task properties, and the hypothesis that levels of activity in anterior areas are important for effective control of subsequent S2 selective attention.
Luiselli, J K
2000-07-01
A 3-year-old child with multiple medical disorders and chronic food refusal was treated successfully using a program that incorporated antecedent control procedures combined with positive reinforcement. The antecedent manipulations included visual cueing of a criterion number of self-feeding responses that were required during meals to receive reinforcement and a gradual increase in the imposed criterion (demand fading) that was based on improved frequency of oral consumption. As evaluated in a changing criterion design, the child learned to feed himself as an outcome of treatment. One year following intervention, he was consuming a variety of foods and had gained weight. Advantages of antecedent control methods for the treatment of chronic food refusal are discussed.
Short-term visual memory for location in depth: A U-shaped function of time.
Reeves, Adam; Lei, Quan
2017-10-01
Short-term visual memory was studied by displaying arrays of four or five numerals, each numeral in its own depth plane, followed after various delays by an arrow cue shown in one of the depth planes. Subjects reported the numeral at the depth cued by the arrow. Accuracy fell with increasing cue delay for the first 500 ms or so, and then recovered almost fully. This dipping pattern contrasts with the usual iconic decay observed for memory traces. The dip occurred with or without a verbal or color-shape retention load on working memory. In contrast, accuracy did not change with delay when a tonal cue replaced the arrow cue. We hypothesized that information concerning the depths of the numerals decays over time in sensory memory, but that cued recall is aided later on by transfer to a visual memory specialized for depth. This transfer is sufficiently rapid with a tonal cue to compensate for the sensory decay, but it is slowed by the need to tag the arrow cue's depth relative to the depths of the numerals, exposing a dip when sensation has decayed and transfer is not yet complete. A model with a fixed rate of sensory decay and varied transfer rates across individuals captures the dip as well as the cue modality effect.
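The two-store account sketched in this abstract (a fast-decaying sensory trace plus a slower transfer into a depth-specialized store) can be illustrated with a toy model; the functional forms and every parameter value below are illustrative assumptions, not the authors' fitted estimates:

    import numpy as np

    def recall_accuracy(t, decay_rate=2.0, transfer_rate=1.0,
                        floor=0.25, ceiling=0.95):
        # Cued recall relies on whichever route is stronger at delay t (s):
        # a sensory trace that decays exponentially, or a depth-specific
        # store that fills in as transfer completes.
        sensory = np.exp(-decay_rate * t)
        transferred = 1.0 - np.exp(-transfer_rate * t)
        return floor + (ceiling - floor) * np.maximum(sensory, transferred)

    for t in np.linspace(0.0, 2.0, 9):
        print(f"delay {t:4.2f} s -> accuracy {recall_accuracy(t):.2f}")

The maximum of the two exponentials dips near 500 ms and then recovers, mirroring the U-shaped function in the title; making the transfer rate large (as hypothesized for the tonal cue) flattens the dip.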
Eye Contact Is Crucial for Referential Communication in Pet Dogs.
Savalli, Carine; Resende, Briseida; Gaunet, Florence
2016-01-01
Dogs discriminate human direction of attention cues, such as body, gaze, head and eye orientation, in several circumstances. Eye contact in particular seems to provide information on human readiness to communicate; when such an ostensive cue is present, dogs tend to follow human communicative gestures more often. However, little is known about how such cues influence the production of communicative signals (e.g. gaze alternation and sustained gaze) in dogs. In the current study, in order to get unreachable food, dogs needed to communicate with their owners in several conditions that differed according to the direction of the owners' visual cues, namely gaze, head, eyes, and availability to make eye contact. Results provided evidence that pet dogs did not rely on details of their owners' direction of visual attention. Instead, they relied on the whole combination of visual cues and especially on the owners' availability to make eye contact. Dogs increased visual communicative behaviors when they established eye contact with their owners, a different strategy from that of apes and baboons, which intensify vocalizations and gestures when the human is not visually attending. This difference in strategy is possibly due to their distinct status: domesticated versus wild. Results are discussed taking into account the ecological relevance of the task, since pet dogs live in a human environment and face similar situations on a daily basis throughout their lives.
Social Beliefs and Visual Attention: How the Social Relevance of a Cue Influences Spatial Orienting.
Gobel, Matthias S; Tufft, Miles R A; Richardson, Daniel C
2018-05-01
We are highly tuned to each other's visual attention. Perceiving the eye or hand movements of another person can influence the timing of a saccade or the reach of our own. However, the explanation for such spatial orienting in interpersonal contexts remains disputed. Is it due to the social appearance of the cue (a hand or an eye) or due to its social relevance (a cue that is connected to another person with attentional and intentional states)? We developed an interpersonal version of the Posner spatial cueing paradigm. Participants saw a cue and detected a target at the same or a different location, while interacting with an unseen partner. Participants were led to believe that the cue was either connected to the gaze location of their partner or was generated randomly by a computer (Experiment 1), and that their partner had higher or lower social rank while engaged in the same or a different task (Experiment 2). We found that spatial cue-target compatibility effects were greater when the cue related to a partner's gaze. This effect was amplified by the partner's social rank, but only when participants believed their partner was engaged in the same task. Taken together, this is strong evidence in support of the idea that spatial orienting is interpersonally attuned to the social relevance of the cue (whether the cue is connected to another person, who this person is, and what this person is doing) and does not rely exclusively on the social appearance of the cue. Visual attention is not only guided by the physical salience of one's environment but also by the mental representation of its social relevance. © 2017 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
Object-based spatial attention when objects have sufficient depth cues.
Takeya, Ryuji; Kasai, Tetsuko
2015-01-01
Attention directed to a part of an object tends to obligatorily spread over all of the spatial regions that belong to the object, which may be critical for rapid object recognition in cluttered visual scenes. Previous studies have generally used simple rectangles as objects and have shown that attention spreading is reflected by amplitude modulation in the posterior N1 component (150-200 ms poststimulus) of event-related potentials, although other interpretations (i.e., rectangular holes) may arise implicitly in early visual processing stages. By using modified Kanizsa-type stimuli that provided less ambiguity of depth ordering, the present study examined early event-related potential spatial-attention effects for connected and separated objects, both of which were perceived either in front of (Experiment 1) or behind (Experiment 2) the surroundings. Typical P1 (100-140 ms) and N1 (150-220 ms) ERP attention effects in response to unilateral probes were observed in both experiments. Importantly, the P1 attention effect was decreased for connected objects compared to separated objects only in Experiment 1, and the typical object-based modulations of N1 were not observed in either experiment. These results suggest that spatial attention spreads over a figural object at earlier stages of processing than previously indicated, in three-dimensional visual scenes with multiple depth cues.
Single-unit analysis of somatosensory processing in the core auditory cortex of hearing ferrets.
Meredith, M Alex; Allman, Brian L
2015-03-01
The recent findings in several species that the primary auditory cortex processes non-auditory information have largely overlooked the possibility of somatosensory effects. Therefore, the present investigation examined the core auditory cortices (anterior auditory field and primary auditory cortex) for tactile responsivity. Multiple single-unit recordings from anesthetised ferret cortex yielded histologically verified neurons (n = 311) tested with electronically controlled auditory, visual and tactile stimuli, and their combinations. Of the auditory neurons tested, a small proportion (17%) was influenced by visual cues, but a somewhat larger number (23%) was affected by tactile stimulation. Tactile effects rarely occurred alone and spiking responses were observed in bimodal auditory-tactile neurons. However, the broadest tactile effect that was observed, which occurred in all neuron types, was that of suppression of the response to a concurrent auditory cue. The presence of tactile effects in the core auditory cortices was supported by a substantial anatomical projection from the rostral suprasylvian sulcal somatosensory area. Collectively, these results demonstrate that crossmodal effects in the auditory cortex are not exclusively visual and that somatosensation plays a significant role in modulation of acoustic processing, and indicate that crossmodal plasticity following deafness may unmask these existing non-auditory functions. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Magnitude and duration of cue-induced craving for marijuana in volunteers with cannabis use disorder
Lundahl, Leslie H.; Greenwald, Mark K.
2016-01-01
Aims: To evaluate the magnitude and duration of subjective and physiologic responses to neutral and marijuana (MJ)-related cues in cannabis-dependent volunteers. Methods: 33 volunteers (17 male) who met DSM-IV criteria for Cannabis Abuse or Dependence were exposed to neutral (first) and then MJ-related visual, auditory, olfactory and tactile cues. Mood, drug craving and physiology were assessed at baseline, post-neutral, post-MJ and 15 min post-MJ cue exposure to determine the magnitude of cue responses. For a subset of participants (n=15; 9 male), measures of craving and physiology were also collected at 30, 90, and 150 min post-MJ cue to examine the duration of cue effects. Results: In cue-response magnitude analyses, visual analog scale (VAS) items craving for, urge to use, and desire to smoke MJ, Total and Compulsivity subscale scores of the Marijuana Craving Questionnaire, anxiety ratings, and diastolic blood pressure (BP) were significantly elevated following MJ vs. neutral cue exposure. In cue-response duration analyses, desire and urge to use MJ remained significantly elevated at 30, 90 and 150 min post-MJ cue exposure, relative to baseline and neutral cues. Conclusions: Presentation of polysensory MJ cues increased MJ craving, anxiety and diastolic BP relative to baseline and neutral cues. MJ craving remained elevated up to 150 min after MJ cue presentation. This finding confirms that carry-over effects from drug cue presentation must be considered in cue-reactivity studies. PMID:27436749
Freezing of Gait in Parkinson's Disease: An Overload Problem?
Beck, Eric N; Ehgoetz Martens, Kaylena A; Almeida, Quincy J
2015-01-01
Freezing of gait (FOG) is arguably the most severe symptom associated with Parkinson's disease (PD), and often occurs while performing dual tasks or approaching narrowed and cluttered spaces. While it is well known that visual cues alleviate FOG, it is not clear if this effect may be the result of cognitive or sensorimotor mechanisms. Nevertheless, the role of vision may be a critical link that might allow us to disentangle this question. Gaze behaviour has yet to be carefully investigated while freezers approach narrow spaces; thus, the overall objective of this study was to explore the interaction between cognitive and sensory-perceptual influences on FOG. In experiment #1, if cognitive load is the underlying factor leading to FOG, then one might expect that a dual-task would elicit FOG episodes even in the presence of visual cues, since the load on attention would interfere with utilization of visual cues. Alternatively, if visual cues alleviate gait despite performance of a dual-task, then it may be more probable that sensory mechanisms are at play. In complement to this, the aim of experiment #2 was to further challenge the sensory systems by removing vision of the lower limbs, thereby forcing participants to rely on forms of sensory feedback other than vision while walking toward the narrow space. Spatiotemporal aspects of gait, percentage of gaze fixation frequency and duration, as well as skin conductance levels were measured in freezers and non-freezers across both experiments. Results from experiment #1 indicated that although freezers and non-freezers both walked with worse gait while performing the dual-task, in freezers, gait was relieved by visual cues regardless of whether the cognitive demands of the dual-task were present. At baseline and while dual-tasking, freezers demonstrated a gaze behaviour that neglected the doorway and instead focused primarily on the pathway, a strategy that non-freezers adopted only when performing the dual-task. Interestingly, with the combination of visual cues and dual-task, freezers increased the frequency and duration of fixations toward the doorway, compared to non-freezers. These results suggest that although increasing demand on attention does significantly deteriorate gait in freezers, an increase in cognitive demand is not exclusively responsible for freezing (since visual cues were able to overcome any interference elicited by the dual-task). When vision of the lower limbs was removed in experiment #2, only the freezers' gait was affected. However, when visual cues were present, freezers' gait improved regardless of the dual-task. This gait behaviour was accompanied by a greater amount of time spent looking at the visual cues irrespective of the dual-task. Since removing vision of the lower limbs hindered gait even under low attentional demand, restricted sensory feedback may be an important factor in the mechanisms underlying FOG.
Gagliardo, A.; Odetti, F.; Ioalè, P.
2001-01-01
Whether pigeons use visual landmarks for orientation from familiar locations has been a subject of debate. By recording the directional choices of both anosmic and control pigeons while exiting from a circular arena we were able to assess the relevance of olfactory and visual cues for orientation from familiar sites. When the birds could see the surroundings, both anosmic and control pigeons were homeward oriented. When the view of the landscape was prevented by screens that surrounded the arena, the control pigeons exited from the arena approximately in the home direction, while the anosmic pigeons' distribution was not different from random. Our data suggest that olfactory and visual cues play a critical, but interchangeable, role for orientation at familiar sites. PMID:11571054
Alcohol-cue exposure effects on craving and attentional bias in underage college-student drinkers.
Ramirez, Jason J; Monti, Peter M; Colwill, Ruth M
2015-06-01
The effect of alcohol-cue exposure in eliciting craving has been well documented, and numerous theoretical models assert that craving is a clinically significant construct central to the motivation and maintenance of alcohol-seeking behavior. Furthermore, some theories propose a relationship between craving and attention, such that cue-induced increases in craving bias attention toward alcohol cues, which, in turn, perpetuates craving. This study examined the extent to which alcohol cues induce craving and bias attention toward alcohol cues among underage college-student drinkers. We designed within-subject cue-reactivity and visual-probe tasks to assess in vivo alcohol-cue exposure effects on craving and attentional bias in 39 undergraduate college drinkers (ages 18-20). Participants expressed greater subjective craving to drink alcohol following in vivo cue exposure to a commonly consumed beer compared with water exposure. Furthermore, following alcohol-cue exposure, participants exhibited greater attentional biases toward alcohol cues as measured by a visual-probe task. In addition to the cue-exposure effects on craving and attentional bias, within-subject differences in craving across sessions marginally predicted within-subject differences in attentional bias. Implications for both theory and practice are discussed. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
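The visual-probe index in studies of this kind is conventionally a difference score: mean reaction time when the probe replaces the neutral image minus mean reaction time when it replaces the alcohol image, so positive values indicate attention drawn toward alcohol cues. A minimal sketch with invented reaction times (not data from this study):

    import numpy as np

    # Hypothetical per-trial reaction times (ms).
    rt_probe_at_alcohol = np.array([512.0, 498.0, 530.0, 505.0, 521.0])
    rt_probe_at_neutral = np.array([534.0, 529.0, 541.0, 518.0, 537.0])

    # Faster responses at the alcohol location imply attention was already there.
    bias = rt_probe_at_neutral.mean() - rt_probe_at_alcohol.mean()
    print(f"attentional bias score: {bias:.1f} ms")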
A bilateral advantage for maintaining objects in visual short term memory.
Holt, Jessica L; Delvenne, Jean-François
2015-01-01
Research has shown that attentional pre-cues can subsequently influence the transfer of information into visual short term memory (VSTM) (Schmidt, B., Vogel, E., Woodman, G., & Luck, S. (2002). Voluntary and automatic attentional control of visual working memory. Perception & Psychophysics, 64(5), 754-763). However, studies also suggest that those effects are constrained by the hemifield alignment of the pre-cues (Holt, J. L., & Delvenne, J.-F. (2014). A bilateral advantage in controlling access to visual short-term memory. Experimental Psychology, 61(2), 127-133), revealing better recall when distributed across hemifields relative to within a single hemifield (otherwise known as a bilateral field advantage). By manipulating the duration of the retention interval in a colour change detection task (1s, 3s), we investigated whether selective pre-cues can also influence how information is later maintained in VSTM. The results revealed that the pre-cues influenced the maintenance of the colours in VSTM, promoting consistent performance across retention intervals (Experiments 1 & 4). However, those effects were only shown when the pre-cues were directed to stimuli displayed across hemifields relative to stimuli within a single hemifield. Importantly, the results were not replicated when participants were required to memorise colours (Experiment 2) or locations (Experiment 3) in the absence of spatial pre-cues. Those findings strongly suggest that attentional pre-cues have a strong influence on both the transfer of information in VSTM and its subsequent maintenance, allowing bilateral items to better survive decay. Copyright © 2014 Elsevier B.V. All rights reserved.
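Accuracy in change-detection designs like this one is often converted into a visual short term memory capacity estimate using Cowan's K; the formula is standard for single-probe change detection, and the numbers below are purely illustrative rather than values from the study:

    def cowans_k(set_size, hit_rate, false_alarm_rate):
        # K = N * (H - FA): estimated items held in memory out of N displayed.
        return set_size * (hit_rate - false_alarm_rate)

    # Hypothetical condition: 6 items, 80% hits, 15% false alarms.
    print(cowans_k(6, 0.80, 0.15))  # -> 3.9 items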
Plotnik, Joshua M.; Pokorny, Jennifer J.; Keratimanochaya, Titiporn; Webb, Christine; Beronja, Hana F.; Hennessy, Alice; Hill, James; Hill, Virginia J.; Kiss, Rebecca; Maguire, Caitlin; Melville, Beckett L.; Morrison, Violet M. B.; Seecoomar, Dannah; Singer, Benjamin; Ukehaxhaj, Jehona; Vlahakis, Sophia K.; Ylli, Dora; Clayton, Nicola S.; Roberts, John; Fure, Emilie L.; Duchatelier, Alicia P.; Getz, David
2013-01-01
Recent research suggests that domesticated species – due to artificial selection by humans for specific, preferred behavioral traits – are better than wild animals at responding to visual cues given by humans about the location of hidden food. Although this seems to be supported by studies on a range of domesticated (including dogs, goats and horses) and wild (including wolves and chimpanzees) animals, there is also evidence that exposure to humans positively influences the ability of both wild and domesticated animals to follow these same cues. Here, we test the performance of Asian elephants (Elephas maximus) on an object choice task that provides them with visual-only cues given by humans about the location of hidden food. Captive elephants are interesting candidates for investigating how both domestication and human exposure may impact cue-following as they represent a non-domesticated species with almost constant human interaction. As a group, the elephants (n = 7) in our study were unable to follow pointing, body orientation or a combination of both as honest signals of food location. They were, however, able to follow vocal commands with which they were already familiar in a novel context, suggesting the elephants are able to follow cues if they are sufficiently salient. Although the elephants’ inability to follow the visual cues provides partial support for the domestication hypothesis, an alternative explanation is that elephants may rely more heavily on other sensory modalities, specifically olfaction and audition. Further research will be needed to rule out this alternative explanation. PMID:23613804
Dynamics of the spatial scale of visual attention revealed by brain event-related potentials
NASA Technical Reports Server (NTRS)
Luo, Y. J.; Greenwood, P. M.; Parasuraman, R.
2001-01-01
The temporal dynamics of the spatial scaling of attention during visual search were examined by recording event-related potentials (ERPs). A total of 16 young participants performed a search task in which the search array was preceded by valid cues that varied in size and hence in precision of target localization. The effects of cue size on short-latency (P1 and N1) ERP components, and the time course of these effects with variation in cue-target stimulus onset asynchrony (SOA), were examined. Reaction time (RT) to discriminate a target was prolonged as cue size increased. The amplitudes of the posterior P1 and N1 components of the ERP evoked by the search array were affected in opposite ways by the size of the precue: P1 amplitude increased whereas N1 amplitude decreased as cue size increased, particularly following the shortest SOA. The results show that when top-down information about the region to be searched is less precise (larger cues), RT is slowed and the neural generators of P1 become more active, reflecting the additional computations required in changing the spatial scale of attention to the appropriate element size to facilitate target discrimination. In contrast, the decrease in N1 amplitude with cue size may reflect a broadening of the spatial gradient of attention. The results provide electrophysiological evidence that changes in the spatial scale of attention modulate neural activity in early visual cortical areas and activate at least two temporally overlapping component processes during visual search.
Proximal versus distal cue utilization in spatial navigation: the role of visual acuity?
Carman, Heidi M; Mactutus, Charles F
2002-09-01
Proximal versus distal cue use in the Morris water maze is a widely accepted strategy for the dissociation of various problems affecting spatial navigation in rats such as aging, head trauma, lesions, and pharmacological or hormonal agents. Of the limited number of ontogenetic rat studies conducted, the majority have approached the problem of preweanling spatial navigation through a similar proximal-distal dissociation. An implicit assumption among all of these studies has been that the animal's visual system is sufficient to permit robust spatial navigation. We challenged this assumption and have addressed the role of visual acuity in spatial navigation in the preweanling Fischer 344-N rat by training animals to locate a visible (proximal) or hidden (distal) platform using double or null extramaze cues within the testing environment. All pups demonstrated improved performance across training, but animals presented with a visible platform, regardless of extramaze cues, simultaneously reached asymptotic performance levels; animals presented with a hidden platform, dependent upon location of extramaze cues, differentially reached asymptotic performance levels. Probe trial performance, defined by quadrant time and platform crossings, revealed that distal-double-cue pups demonstrated spatial navigational ability superior to that of the remaining groups. These results suggest that a pup's ability to spatially navigate a hidden platform is dependent on not only its response repertoire and task parameters, but also its visual acuity, as determined by the extramaze cue location within the testing environment. The standard hidden versus visible platform dissociation may not be a satisfactory strategy for the control of potential sensory deficits.
ERIC Educational Resources Information Center
Gawryszewski, Luiz G.; Carreiro, Luiz Renato R.; Magalhaes, Fabio V.
2005-01-01
A non-informative cue (C) elicits an inhibition of manual reaction time (MRT) to a visual target (T). We report an experiment to examine if the spatial distribution of this inhibitory effect follows Polar or Cartesian coordinate systems. C appeared at one out of 8 isoeccentric (7°) positions, the C-T angular distances (in polar…
Visual attention to food cues in obesity: an eye-tracking study.
Doolan, Katy J; Breslin, Gavin; Hanna, Donncha; Murphy, Kate; Gallagher, Alison M
2014-12-01
Based on the theory of incentive sensitization, the aim of this study was to investigate differences in attentional processing of food-related visual cues between normal-weight and overweight/obese males and females. Twenty-six normal-weight (14M, 12F) and 26 overweight/obese (14M, 12F) adults completed a visual probe task and an eye-tracking paradigm. Reaction times and eye movements to food and control images were collected during both a fasted and fed condition in a counterbalanced design. Participants had greater visual attention towards high-energy-density food images compared to low-energy-density food images regardless of hunger condition. This was most pronounced in overweight/obese males who had significantly greater maintained attention towards high-energy-density food images when compared with their normal-weight counterparts however no between weight group differences were observed for female participants. High-energy-density food images appear to capture visual attention more readily than low-energy-density food images. Results also suggest the possibility of an altered visual food cue-associated reward system in overweight/obese males. Attentional processing of food cues may play a role in eating behaviors thus should be taken into consideration as part of an integrated approach to curbing obesity. © 2014 The Obesity Society.
Gharat, Amol; Baker, Curtis L
2017-01-25
Many of the neurons in early visual cortex are selective for the orientation of boundaries defined by first-order cues (luminance) as well as second-order cues (contrast, texture). The neural circuit mechanism underlying this selectivity is still unclear, but some studies have proposed that it emerges from spatial nonlinearities of subcortical Y cells. To understand how inputs from the Y-cell pathway might be pooled to generate cue-invariant receptive fields, we recorded visual responses from single neurons in cat Area 18 using linear multielectrode arrays. We measured responses to drifting and contrast-reversing luminance gratings as well as contrast modulation gratings. We found that a large fraction of these neurons have nonoriented responses to gratings, similar to those of subcortical Y cells: they respond at the second harmonic (F2) to high-spatial frequency contrast-reversing gratings and at the first harmonic (F1) to low-spatial frequency drifting gratings ("Y-cell signature"). For a given neuron, spatial frequency tuning for linear (F1) and nonlinear (F2) responses is quite distinct, similar to orientation-selective cue-invariant neurons. Also, these neurons respond to contrast modulation gratings with selectivity for the carrier (texture) spatial frequency and, in some cases, orientation. Their receptive field properties suggest that they could serve as building blocks for orientation-selective cue-invariant neurons. We propose a circuit model that combines ON- and OFF-center cortical Y-like cells in an unbalanced push-pull manner to generate orientation-selective, cue-invariant receptive fields. A significant fraction of neurons in early visual cortex have specialized receptive fields that allow them to selectively respond to the orientation of boundaries that are invariant to the cue (luminance, contrast, texture, motion) that defines them. However, the neural mechanism to construct such versatile receptive fields remains unclear. Using multielectrode recording, we found a large fraction of neurons in early visual cortex with receptive fields not selective for orientation that have spatial nonlinearities like those of subcortical Y cells. These are strong candidates for building cue-invariant orientation-selective neurons; we present a neural circuit model that pools such neurons in an imbalanced "push-pull" manner, to generate orientation-selective cue-invariant receptive fields. Copyright © 2017 the authors 0270-6474/17/370998-16$15.00/0.
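The circuit idea in this abstract (pooling ON- and OFF-center Y-like subunits with unequal push-pull weights) can be caricatured in one dimension as a filter-rectify-pool cascade. The toy sketch below is not the authors' model; it only illustrates why the imbalance matters, since perfectly balanced weights would cancel the rectified energy and leave no envelope response:

    import numpy as np

    x = np.linspace(0.0, 1.0, 2048, endpoint=False)
    carrier = np.sin(2 * np.pi * 64 * x)              # high-SF texture carrier
    envelope = 0.5 * (1 + np.sin(2 * np.pi * 4 * x))  # low-SF contrast modulation
    stim = envelope * carrier                         # contrast-defined (second-order) stimulus

    on = np.maximum(stim, 0.0)    # ON-center Y-like subunit (half-wave rectified)
    off = np.maximum(-stim, 0.0)  # OFF-center subunit rectifies the opposite polarity

    # Unbalanced push-pull: with equal weights the sum reduces to the raw
    # stimulus and the envelope is lost after pooling; the imbalance keeps
    # a term proportional to |stim|, whose low frequencies carry the envelope.
    pooled = 1.0 * on - 0.4 * off

    kernel = np.ones(128) / 128.0                     # spatial pooling (low-pass)
    demodulated = np.convolve(pooled, kernel, mode="same")

    trim = slice(256, -256)                           # discard convolution edge effects
    r = np.corrcoef(envelope[trim], demodulated[trim])[0, 1]
    print(f"envelope recovery (correlation): {r:.2f}")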
Crossmodal and Incremental Perception of Audiovisual Cues to Emotional Speech
ERIC Educational Resources Information Center
Barkhuysen, Pashiera; Krahmer, Emiel; Swerts, Marc
2010-01-01
In this article we report on two experiments about the perception of audiovisual cues to emotional speech. The article addresses two questions: (1) how do visual cues from a speaker's face to emotion relate to auditory cues, and (2) what is the recognition speed for various facial cues to emotion? Both experiments reported below are based on tests…
Visual cues that are effective for contextual saccade adaptation.
Azadi, Reza; Harwood, Mark R
2014-06-01
The accuracy of saccades, as maintained by saccade adaptation, has been shown to be context dependent: able to have different amplitude movements to the same retinal displacement dependent on motor contexts such as orbital starting location. There is conflicting evidence as to whether purely visual cues also effect contextual saccade adaptation and, if so, what function this might serve. We tested what visual cues might evoke contextual adaptation. Over 5 experiments, 78 naive subjects made saccades to circularly moving targets, which stepped outward or inward during the saccade depending on target movement direction, speed, or color and shape. To test if the movement or context postsaccade were critical, we stopped the postsaccade target motion (experiment 4) or neutralized the contexts by equating postsaccade target speed to an intermediate value (experiment 5). We found contextual adaptation in all conditions except those defined by color and shape. We conclude that some, but not all, visual cues before the saccade are sufficient for contextual adaptation. We conjecture that this visual contextuality functions to allow for different motor states for different coordinated movement patterns, such as coordinated saccade and pursuit motor planning. Copyright © 2014 the American Physiological Society.
Motion cue effects on human pilot dynamics in manual control
NASA Technical Reports Server (NTRS)
Washizu, K.; Tanaka, K.; Endo, S.; Itoko, T.
1977-01-01
Two experiments were conducted to study the effects of motion cues on human pilots during tracking tasks. The moving-base simulator of the National Aerospace Laboratory was employed as the motion cue device, and the attitude director indicator or the projected visual field was employed as the visual cue device. The chosen controlled elements were second-order unstable systems. It was confirmed that with the aid of motion cues the pilot workload was lessened and consequently the human controllability limits were enlarged. In order to clarify the mechanism of these effects, the describing functions of the human pilots were identified using spectral and time-domain analyses. The results suggest that the sensory system for motion cues can effectively extract rate (derivative) information from the signal, which agrees with existing physiological knowledge.
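The describing-function identification mentioned here reduces, in its spectral form, to the standard cross-spectrum ratio H(f) = S_xy(f) / S_xx(f) between the signal the pilot sees and the control output. The sketch below uses a synthetic stand-in operator (a pure gain with a short lag plus remnant noise), so the dynamics and noise levels are placeholders rather than anything from the experiments:

    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(0)
    fs = 100.0                                  # sample rate, Hz
    n = int(60 * fs)                            # 60 s run
    x = rng.standard_normal(n)                  # stand-in tracking-error signal

    # Placeholder "pilot": gain of 2.0 with a 50 ms lag, plus remnant noise.
    lag = int(0.05 * fs)
    y = 2.0 * np.roll(x, lag) + 0.5 * rng.standard_normal(n)

    f, Sxy = signal.csd(x, y, fs=fs, nperseg=1024)   # cross-spectral density
    _, Sxx = signal.welch(x, fs=fs, nperseg=1024)    # input power spectral density
    H = Sxy / Sxx                                    # describing-function estimate

    k = np.argmin(np.abs(f - 1.0))
    print(f"gain near 1 Hz: {np.abs(H[k]):.2f}, "
          f"phase: {np.degrees(np.angle(H[k])):.0f} deg")

The phase of H(f), whose slope over frequency reflects the operator's effective time delay, is the quantity usually inspected to separate rate-sensing (lead) from lag behavior.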
NASA Technical Reports Server (NTRS)
Carr, Peter C.; Mckissick, Burnell T.
1988-01-01
A joint experiment to investigate simulator validation and cue fidelity was conducted by the Dryden Flight Research Facility of NASA Ames Research Center (Ames-Dryden) and NASA Langley Research Center. The primary objective was to validate the use of a closed-loop pilot-vehicle mathematical model as an analytical tool for optimizing the tradeoff between simulator fidelity requirements and simulator cost. The validation process includes comparing model predictions with simulation and flight test results to evaluate various hypotheses for differences in motion and visual cues and information transfer. A group of five pilots flew air-to-air tracking maneuvers in the Langley differential maneuvering simulator and visual motion simulator and in an F-14 aircraft at Ames-Dryden. The simulators used motion and visual cueing devices including a g-seat, a helmet loader, wide field-of-view horizon, and a motion base platform.
Hollands, Kristen L; Pelton, Trudy A; Wimperis, Andrew; Whitham, Diane; Tan, Wei; Jowett, Sue; Sackley, Catherine M; Wing, Alan M; Tyson, Sarah F; Mathias, Jonathan; Hensman, Marianne; van Vliet, Paulette M
2015-01-01
Given the importance of vision in the control of walking and evidence indicating that varied practice of walking improves mobility outcomes, this study sought to examine the feasibility and preliminary efficacy of varied walking practice in response to visual cues for the rehabilitation of walking following stroke. This 3-arm parallel, multi-centre, assessor-blind, randomised controlled trial was conducted within outpatient neurorehabilitation services. Participants were community-dwelling stroke survivors with walking speed <0.8 m/s, lower limb paresis and no severe visual impairments. The interventions were over-ground visual cue training (O-VCT), treadmill-based visual cue training (T-VCT), and usual care (UC), delivered by physiotherapists twice weekly for 8 weeks. Participants were randomised using computer-generated, randomly permuted balanced blocks of randomly varying size. Recruitment, retention, adherence, adverse events, and mobility and balance were measured before randomisation, post-intervention and at four weeks' follow-up. Fifty-six participants were randomised (18 T-VCT, 19 O-VCT, 19 UC). Thirty-four completed treatment and follow-up assessments. Among those who completed, adherence was good, with 16 treatments provided over a median of 8.4, 7.5 and 9 weeks for T-VCT, O-VCT and UC, respectively. No adverse events were reported. Post-treatment improvements in walking speed, symmetry, balance and functional mobility were seen in all treatment arms. Outpatient-based treadmill and over-ground walking adaptability practice using visual cues are feasible and may improve mobility and balance. Future studies should continue a carefully phased approach using identified methods to improve retention. Clinicaltrials.gov NCT01600391.
Heuristics of reasoning and analogy in children's visual perspective taking.
Yaniv, I; Shatz, M
1990-10-01
We propose that children's reasoning about others' visual perspectives is guided by simple heuristics based on a perceiver's line of sight and salient features of the object met by that line. In 3 experiments employing a 2-perceiver analogy task, children aged 3-6 were generally better able to reproduce a perceiver's perspective if a visual cue in the perceiver's line of sight sufficed to distinguish it from alternatives. Children had greater difficulty when the task hinged on attending to configural cues. The availability of distinctive cues affixed to the objects' sides facilitated solutions for symmetrical orientations. These and several other related findings reported in the literature are traced to children's reliance on heuristics of reasoning.
Inhibition of return shortens perceived duration of a brief visual event.
Osugi, Takayuki; Takeda, Yuji; Murakami, Ikuya
2016-11-01
We investigated the influence of attentional inhibition on the perceived duration of a brief visual event. Although attentional capture by an exogenous cue is known to prolong the perceived duration of an attended visual event, it remains unclear whether time perception is also affected by subsequent attentional inhibition at the location previously indicated by an exogenous cue, an attentional phenomenon known as inhibition of return. In this study, we combined spatial cuing and duration judgment. One second after the appearance of an uninformative peripheral cue on either the left or the right, a target appeared at the cued side in one-third of the trials (a delay that indeed yielded inhibition of return) and at the opposite side in another one-third of the trials. In the remaining trials, a cue appeared at a central box and one second later a target appeared at either the left or the right side. The target at the previously cued location was perceived to last shorter than the target presented at the opposite location, and shorter than the target presented after the central cue. Therefore, attentional inhibition produced by a classical inhibition-of-return paradigm decreased the perceived duration of a brief visual event. Copyright © 2016 Elsevier Ltd. All rights reserved.
Altvater-Mackensen, Nicole; Mani, Nivedita; Grossmann, Tobias
2016-02-01
Recent studies suggest that infants' audiovisual speech perception is influenced by articulatory experience (Mugitani et al., 2008; Yeung & Werker, 2013). The current study extends these findings by testing if infants' emerging ability to produce native sounds in babbling impacts their audiovisual speech perception. We tested 44 6-month-olds on their ability to detect mismatches between concurrently presented auditory and visual vowels and related their performance to their productive abilities and later vocabulary size. Results show that infants' ability to detect mismatches between auditory and visually presented vowels differs depending on the vowels involved. Furthermore, infants' sensitivity to mismatches is modulated by their current articulatory knowledge and correlates with their vocabulary size at 12 months of age. This suggests that, aside from infants' ability to match nonnative audiovisual cues (Pons et al., 2009), their ability to match native auditory and visual cues continues to develop during the first year of life. Our findings point to a potential role of salient vowel cues and productive abilities in the development of audiovisual speech perception, and further indicate a relation between infants' early sensitivity to audiovisual speech cues and their later language development. PsycINFO Database Record (c) 2016 APA, all rights reserved.
Age Differences in Visual-Auditory Self-Motion Perception during a Simulated Driving Task
Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L.
2016-01-01
Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion. PMID:27199829
Harrison, Stephanie L; Laver, Kate E; Ninnis, Kayla; Rowett, Cherie; Lannin, Natasha A; Crotty, Maria
2018-03-09
To examine which method(s) of providing external cues are most effective at improving task performance in people with neurological disorders. Medline, EMBASE, and PsycINFO were systematically searched. Two reviewers independently screened, extracted data, and assessed the quality of the evidence using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach. Twenty-six studies were included. Studies examined a wide range of cues, including visual, tactile, auditory, verbal, and multi-component cues. Cueing (any type) improved walking speed when comparing cues to no cues (mean difference (95% confidence interval): 0.08 m/s (0.06-0.10), I² = 68%, low quality of evidence). The remaining evidence was analysed narratively; evidence that cueing improves activity-related outcomes was inconsistent and rated as very low quality. It was not possible to determine which form of cueing may be more effective than others. Providing cues to encourage successful task performance is a core component of rehabilitation; however, there is limited evidence on the type of cueing or on which tasks benefit most from external cueing. Low-quality evidence suggests there may be a beneficial effect of cueing (any type) on walking speed. Sufficiently powered randomised controlled trials are needed to inform therapists of the most effective cueing strategies to improve activity performance in populations with a neurological disorder. Implications for rehabilitation: Providing cues is a core component of rehabilitation and may improve successful task performance and activities in people with neurological conditions including stroke, Parkinson's disease, Alzheimer's disease, traumatic brain injury, and multiple sclerosis, but evidence is limited for most neurological conditions, with much research focusing on stroke and Parkinson's disease. Therapists should consider using a range of different types of cues depending on the aims of treatment and the neurological condition. There is currently insufficient evidence to suggest that one form of cueing is superior to other forms. Therapists should appreciate that responding optimally to cues may take many sessions to have an effect on activities such as walking. Further studies should be conducted over a longer timeframe to examine the effects of different types of cues on task performance and activities in people with neurological conditions.
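The pooled walking-speed effect quoted above is an inverse-variance meta-analytic mean difference; the mechanics are simple to state, and the study-level values below are invented solely to demonstrate the computation (they are not the review's data):

    import numpy as np

    md = np.array([0.06, 0.10, 0.05, 0.12])   # hypothetical per-study MDs (m/s)
    se = np.array([0.02, 0.03, 0.02, 0.04])   # hypothetical standard errors

    w = 1.0 / se**2                           # inverse-variance (fixed-effect) weights
    pooled = np.sum(w * md) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

    q = np.sum(w * (md - pooled) ** 2)        # Cochran's Q
    i2 = max(0.0, (q - (len(md) - 1)) / q) * 100.0   # I^2 heterogeneity statistic

    print(f"pooled MD = {pooled:.2f} m/s (95% CI {lo:.2f} to {hi:.2f}), I^2 = {i2:.0f}%")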
Multimodal cuing of autobiographical memory in semantic dementia.
Greenberg, Daniel L; Ogar, Jennifer M; Viskontas, Indre V; Gorno Tempini, Maria Luisa; Miller, Bruce; Knowlton, Barbara J
2011-01-01
Individuals with semantic dementia (SD) have impaired autobiographical memory (AM), but the extent of the impairment has been controversial. According to one report (Westmacott, Leach, Freedman, & Moscovitch, 2001), patient performance was better when visual cues were used instead of verbal cues; however, the visual cues used in that study (family photographs) provided more retrieval support than do the word cues that are typically used in AM studies. In the present study, we sought to disentangle the effects of retrieval support and cue modality. We cued AMs of 5 patients with SD and 5 controls with words, simple pictures, and odors. Memories were elicited from childhood, early adulthood, and recent adulthood; they were scored for level of detail and episodic specificity. The patients were impaired across all time periods and stimulus modalities. Within the patient group, words and pictures were equally effective as cues (Friedman test; χ² = 0.25, p = .61), whereas odors were less effective than both words and pictures (for words vs. odors, χ² = 7.83, p = .005; for pictures vs. odors, χ² = 6.18, p = .01). There was no evidence of a temporal gradient in either group (for patients with SD, χ² = 0.24, p = .89; for controls, χ² < 2.07, p = .35). Once the effect of retrieval support is equated across stimulus modalities, there is no evidence for an advantage of visual cues over verbal cues. The greater impairment for olfactory cues presumably reflects degeneration of anterior temporal regions that support olfactory memory. (c) 2010 APA, all rights reserved.
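The Friedman test reported above is a nonparametric repeated-measures rank test, a natural choice for small within-subject samples like these; with scipy it is a single call, and the scores below are fabricated purely to show usage:

    from scipy.stats import friedmanchisquare

    # Hypothetical episodic-detail scores for 5 participants under three cue types.
    word_cued    = [12, 9, 15, 7, 11]
    picture_cued = [13, 8, 14, 7, 12]
    odor_cued    = [ 8, 6, 10, 5,  7]

    stat, p = friedmanchisquare(word_cued, picture_cued, odor_cued)
    print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")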
Different effects of color-based and location-based selection on visual working memory.
Li, Qi; Saiki, Jun
2015-02-01
In the present study, we investigated how feature- and location-based selection influences visual working memory (VWM) encoding and maintenance. In Experiment 1, cue type (color, location) and cue timing (precue, retro-cue) were manipulated in a change detection task. The stimuli were color-location conjunction objects, and binding memory was tested. We found a significantly greater effect for color precues than for either color retro-cues or location precues, but no difference between location pre- and retro-cues, consistent with previous studies (e.g., Griffin & Nobre in Journal of Cognitive Neuroscience, 15, 1176-1194, 2003). We also found no difference between location and color retro-cues. Experiment 2 replicated the color precue advantage with more complex color-shape-location conjunction objects. Only one retro-cue effect was different from that in Experiment 1: Color retro-cues were significantly less effective than location retro-cues in Experiment 2, which may relate to a structural property of multidimensional VWM representations. In Experiment 3, a visual search task was used, and the result of a greater location than color precue effect suggests that the color precue advantage in a memory task is related to the modulation of VWM encoding rather than of sensation and perception. Experiment 4, using a task that required only memory for individual features but not for feature bindings, further confirmed that the color precue advantage is specific to binding memory. Together, these findings reveal new aspects of the interaction between attention and VWM and provide potentially important implications for the structural properties of VWM representations.
Nesterova, Anna P; Chiffard, Jules; Couchoux, Charline; Bonadonna, Francesco
2013-04-15
King penguins (Aptenodytes patagonicus) live in large and densely populated colonies, where navigation can be challenging because of the presence of many conspecifics that could obstruct locally available cues. Our previous experiments demonstrated that visual cues were important but not essential for king penguin chicks' homing. The main objective of this study was to investigate the importance of non-visual cues, such as magnetic and acoustic cues, for chicks' orientation and short-range navigation. In a series of experiments, the chicks were individually displaced from the colony to an experimental arena where they were released under different conditions. In the magnetic experiments, a strong magnet was attached to the chicks' heads. Trials were conducted in daylight and at night to test the relative importance of visual and magnetic cues. Our results showed that when the geomagnetic field around the chicks was modified, their orientation in the arena and the overall ability to home was not affected. In a low sound experiment we limited the acoustic cues available to the chicks by putting ear pads over their ears, and in a loud sound experiment we provided additional acoustic cues by broadcasting colony sounds on the opposite side of the arena to the real colony. In the low sound experiment, the behavior of the chicks was not affected by the limited sound input. In the loud sound experiment, the chicks reacted strongly to the colony sound. These results suggest that king penguin chicks may use the sound of the colony while orienting towards their home.
A designated odor-language integration system in the human brain.
Olofsson, Jonas K; Hurley, Robert S; Bowman, Nicholas E; Bao, Xiaojun; Mesulam, M-Marsel; Gottfried, Jay A
2014-11-05
Odors are surprisingly difficult to name, but the mechanism underlying this phenomenon is poorly understood. In experiments using event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI), we investigated the physiological basis of odor naming with a paradigm where olfactory and visual object cues were followed by target words that either matched or mismatched the cue. We hypothesized that word processing would not only be affected by its semantic congruency with the preceding cue, but would also depend on the cue modality (olfactory or visual). Performance was slower and less precise when linking a word to its corresponding odor than to its picture. The ERP index of semantic incongruity (N400), reflected in the comparison of nonmatching versus matching target words, was more constrained to posterior electrode sites and lasted longer on odor-cue (vs picture-cue) trials. In parallel, fMRI cross-adaptation in the right orbitofrontal cortex (OFC) and the left anterior temporal lobe (ATL) was observed in response to words when preceded by matching olfactory cues, but not by matching visual cues. Time-series plots demonstrated increased fMRI activity in OFC and ATL at the onset of the odor cue itself, followed by response habituation after processing of a matching (vs nonmatching) target word, suggesting that predictive perceptual representations in these regions are already established before delivery and deliberation of the target word. Together, our findings underscore the modality-specific anatomy and physiology of object identification in the human brain. Copyright © 2014 the authors 0270-6474/14/3414864-10$15.00/0.
"Tunnel Vision": A Possible Keystone Stimulus Control Deficit in Autistic Children.
ERIC Educational Resources Information Center
Rincover, Arnold; And Others
1986-01-01
Three autistic boys (ages 9-13) were trained to select a card containing a stimulus array comprising three visual cues. Decreased distance between cues resulted in responses to more cues; increased distance, to fewer cues. Distances did not affect the responding of children matched for mental and chronological age. (Author/JW)
Cue competition affects temporal dynamics of edge-assignment in human visual cortex.
Brooks, Joseph L; Palmer, Stephen E
2011-03-01
Edge-assignment determines the perception of relative depth across an edge and the shape of the closer side. Many cues determine edge-assignment, but relatively little is known about the neural mechanisms involved in combining these cues. Here, we manipulated extremal edge and attention cues to bias edge-assignment such that these two cues either cooperated or competed. To index their neural representations, we flickered figure and ground regions at different frequencies and measured the corresponding steady-state visual-evoked potentials (SSVEPs). Figural regions had stronger SSVEP responses than ground regions, independent of whether they were attended or unattended. In addition, competition and cooperation between the two edge-assignment cues significantly affected the temporal dynamics of edge-assignment processes. The figural SSVEP response peaked earlier when the cues causing it cooperated than when they competed, but sustained edge-assignment effects were equivalent for cooperating and competing cues, consistent with a winner-take-all outcome. These results provide physiological evidence that figure-ground organization involves competitive processes that can affect the latency of figural assignment.
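The frequency-tagging method named here (flickering figure and ground at distinct rates and reading out SSVEP amplitude) can be sketched with a simulated signal. The sampling rate and tag frequencies below are illustrative assumptions, not values from the study.

```python
# Minimal sketch of SSVEP frequency tagging (frequencies and sampling rate
# are illustrative, not taken from the study): flicker figure and ground
# regions at distinct rates, then read out spectral amplitude at each rate.
import numpy as np

fs = 500.0                      # sampling rate in Hz (assumed)
f_figure, f_ground = 7.5, 12.0  # hypothetical tag frequencies in Hz

t = np.arange(0, 10, 1 / fs)    # 10 s of simulated EEG
eeg = (1.5 * np.sin(2 * np.pi * f_figure * t)    # stronger figure response
       + 0.6 * np.sin(2 * np.pi * f_ground * t)  # weaker ground response
       + np.random.randn(t.size))                # noise

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

amp_fig = spectrum[np.argmin(np.abs(freqs - f_figure))]
amp_gnd = spectrum[np.argmin(np.abs(freqs - f_ground))]
print(amp_fig > amp_gnd)  # figure tag should dominate, as reported
```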
Drosophila increase exploration after visually detecting predators.
de la Flor, Miguel; Chen, Lijian; Manson-Bishop, Claire; Chu, Tzu-Chun; Zamora, Kathya; Robbins, Danielle; Gunaratne, Gemunu; Roman, Gregg
2017-01-01
Novel stimuli elicit behaviors that are collectively known as specific exploration. These behaviors allow the animal to become more familiar with the novel objects within its environment. Specific exploration is frequently suppressed by defensive reactions to predator cues. Herein, we examine whether this suppression occurs in Drosophila melanogaster by measuring the response of these flies to wild-harvested predators. The flies used in our experiments had been cultured without predator threat for multiple decades. In a circular arena with centrally-caged predators, wild type Drosophila actively avoided the pantropical jumping spider, Plexippus paykulli, and the Texas unicorn mantis, Phyllovates chlorophaena, indicating an innate defensive reaction to these predators. Interestingly, wild type Drosophila males also avoided a centrally-caged mock spider, and the avoidance of the mock spider became exaggerated when it was made to move within the cage. Visually impaired Drosophila failed to detect and avoid the Plexippus paykulli and the moving mock spider, while the broadly anosmic orco2 mutants were fully capable of detecting and avoiding Plexippus paykulli, indicating that these flies principally relied upon vision to perceive the predator stimuli. During early exploration of the arena, exploratory activity increased in the presence of Plexippus paykulli and the moving mock spider. The elevated activity induced by Plexippus paykulli disappeared after the fly had finished exploring, suggesting that the flies were capable of habituating to the predator cues. Taken together, these results indicate that, despite being isolated from predators for decades, Drosophila will visually detect these predators, retain innate defensive behaviors, respond by increasing exploratory activity in the arena rather than suppressing activity, and may habituate to predator cues.
Late development of cue integration is linked to sensory fusion in cortex.
Dekker, Tessa M; Ban, Hiroshi; van der Velde, Bauke; Sereno, Martin I; Welchman, Andrew E; Nardini, Marko
2015-11-02
Adults optimize perceptual judgements by integrating different types of sensory information [1, 2]. This engages specialized neural circuits that fuse signals from the same [3-5] or different [6] modalities. Whereas young children can use sensory cues independently, adult-like precision gains from cue combination only emerge around ages 10 to 11 years [7-9]. Why does it take so long to make best use of sensory information? Existing data cannot distinguish whether this (1) reflects surprisingly late changes in sensory processing (sensory integration mechanisms in the brain are still developing) or (2) depends on post-perceptual changes (integration in sensory cortex is adult-like, but higher-level decision processes do not access the information) [10]. We tested visual depth cue integration in the developing brain to distinguish these possibilities. We presented children aged 6-12 years with displays depicting depth from binocular disparity and relative motion and made measurements using psychophysics, retinotopic mapping, and pattern classification fMRI. Older children (>10.5 years) showed clear evidence for sensory fusion in V3B, a visual area thought to integrate depth cues in the adult brain [3-5]. By contrast, in younger children (<10.5 years), there was no evidence for sensory fusion in any visual area. This significant age difference was paired with a shift in perceptual performance around ages 10 to 11 years and could not be explained by motion artifacts, visual attention, or signal quality differences. Thus, whereas many basic visual processes mature early in childhood [11, 12], the brain circuits that fuse cues take a very long time to develop. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
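The "precision gains from cue combination" tested here follow the textbook ideal-observer prediction: the fused estimate's variance should fall below either single cue's variance. A minimal sketch, with illustrative thresholds:

```python
# Minimal sketch of the standard ideal-observer (MLE) prediction used in
# cue-integration studies; the single-cue thresholds below are invented.

def fused_sigma(sigma_disparity, sigma_motion):
    """Predicted threshold for optimal combination of two depth cues."""
    var_d, var_m = sigma_disparity ** 2, sigma_motion ** 2
    return (var_d * var_m / (var_d + var_m)) ** 0.5

# Hypothetical single-cue thresholds for one child:
print(fused_sigma(2.0, 3.0))  # ~1.66, below both single-cue thresholds
```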
Milet-Pinheiro, Paulo; Ayasse, Manfred; Dötterl, Stefan
2015-01-01
Oligolectic bees collect pollen from a few plants within a genus or family to rear their offspring, and are known to rely on visual and olfactory floral cues to recognize host plants. However, studies investigating whether oligolectic bees recognize distinct host plants by using shared floral cues are scarce. In the present study, we investigated in a comparative approach the visual and olfactory floral cues of six Campanula species, of which only Campanula lactiflora has never been reported as a pollen source of the oligolectic bee Chelostoma rapunculi. We hypothesized that the flowers of Campanula species visited by Ch. rapunculi share visual (i.e., color) and/or olfactory cues (scents) that give them a host-specific signature. To test this hypothesis, floral color and scent were studied by spectrophotometric and chemical analyses, respectively. Additionally, we performed bioassays within a flight cage to test the innate color preference of Ch. rapunculi. Our results show that Campanula flowers reflect light predominantly in the UV-blue/blue bee-color space and that Ch. rapunculi displays a strong innate preference for these two colors. Furthermore, we recorded spiroacetals in the floral scent of all Campanula species except Ca. lactiflora. Spiroacetals, rarely found as floral scent constituents but quite common among Campanula species, were recently shown to play a key role in host-flower recognition by Ch. rapunculi. We conclude that Campanula species share some visual and olfactory floral cues, and that neurological adaptations (i.e., vision and olfaction) of Ch. rapunculi innately drive their foraging flights toward host flowers. The significance of our findings for the evolution of pollen diet breadth in bees is discussed. PMID:26060994
Multimedia instructions and cognitive load theory: effects of modality and cueing.
Tabbers, Huib K; Martens, Rob L; van Merriënboer, Jeroen J G
2004-03-01
Recent research on the influence of presentation format on the effectiveness of multimedia instructions has yielded some interesting results. According to cognitive load theory (Sweller, Van Merriënboer, & Paas, 1998) and Mayer's theory of multimedia learning (Mayer, 2001), replacing visual text with spoken text (the modality effect) and adding visual cues relating elements of a picture to the text (the cueing effect) both increase the effectiveness of multimedia instructions in terms of better learning results or less mental effort spent. The aim of this study was to test the generalisability of the modality and cueing effects in a classroom setting. The participants were 111 second-year students from the Department of Education at the University of Gent in Belgium (age between 19 and 25 years). The participants studied a web-based multimedia lesson on instructional design for about one hour. Afterwards they completed a retention and a transfer test. During both the instruction and the tests, self-report measures of mental effort were administered. Adding visual cues to the pictures resulted in higher retention scores, while replacing visual text with spoken text resulted in lower retention and transfer scores. Thus, only a weak cueing effect and even a reversed modality effect were found, indicating that both effects do not easily generalise to non-laboratory settings. A possible explanation for the reversed modality effect is that the multimedia instructions in this study were learner-paced, as opposed to the system-paced instructions used in earlier research.
Heni, Martin; Kullmann, Stephanie; Ketterer, Caroline; Guthoff, Martina; Bayer, Margarete; Staiger, Harald; Machicao, Fausto; Häring, Hans-Ulrich; Preissl, Hubert; Veit, Ralf; Fritsche, Andreas
2014-03-01
Eating behavior is crucial in the development of obesity and Type 2 diabetes. To further investigate its regulation, we studied the effects of glucose versus water ingestion on the neural processing of visual high and low caloric food cues in 12 lean and 12 overweight subjects by functional magnetic resonance imaging. We found body weight to substantially impact the brain's response to visual food cues after glucose versus water ingestion. Specifically, there was a significant interaction between body weight, condition (water versus glucose), and caloric content of food cues. Although overweight subjects showed a generalized reduced response to food objects in the fusiform gyrus and precuneus, the lean group showed a differential pattern to high versus low caloric foods depending on glucose versus water ingestion. Furthermore, we observed plasma insulin and glucose associated effects. The hypothalamic response to high caloric food cues negatively correlated with changes in blood glucose 30 min after glucose ingestion, while especially brain regions in the prefrontal cortex showed a significant negative relationship with increases in plasma insulin 120 min after glucose ingestion. We conclude that the postprandial neural processing of food cues is highly influenced by body weight especially in visual areas, potentially altering visual attention to food. Furthermore, our results underline that insulin markedly influences prefrontal activity to high caloric food cues after a meal, indicating that postprandial hormones may be potential players in modulating executive control. Copyright © 2013 Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Geyer, Thomas; Shi, Zhuanghua; Muller, Hermann J.
2010-01-01
Three experiments examined memory-based guidance of visual search using a modified version of the contextual-cueing paradigm (Jiang & Chun, 2001). The target, if present, was a conjunction of color and orientation, with target (and distractor) features randomly varying across trials (multiconjunction search). Under these conditions, reaction times…
Wingbeat frequency-sweep and visual stimuli for trapping male Aedes aegypti (Diptera: Culicidae)
USDA-ARS?s Scientific Manuscript database
Combinations of female wingbeat acoustic cues and visual cues were evaluated to determine their potential for use in male Aedes aegypti (L.) traps in peridomestic environments. A modified Centers for Disease Control (CDC) light trap using a 350-500 Hz frequency-sweep broadcast from a speaker as an a...
USDA-ARS?s Scientific Manuscript database
This paper reviews the literature and reports on the current state of knowledge regarding the potential for managers to use visual (VC), auditory (AC), and olfactory (OC) cues to manage foraging behavior and spatial distribution of rangeland livestock. We present evidence that free-ranging livestock...
Visual Cues Generated during Action Facilitate 14-Month-Old Infants' Mental Rotation
ERIC Educational Resources Information Center
Antrilli, Nick K.; Wang, Su-hua
2016-01-01
Although action experience has been shown to enhance the development of spatial cognition, the mechanism underlying the effects of action is still unclear. The present research examined the role of visual cues generated during action in promoting infants' mental rotation. We sought to clarify the underlying mechanism by decoupling different…
Categorically Defined Targets Trigger Spatiotemporal Visual Attention
ERIC Educational Resources Information Center
Wyble, Brad; Bowman, Howard; Potter, Mary C.
2009-01-01
Transient attention to a visually salient cue enhances processing of a subsequent target in the same spatial location between 50 and 150 ms after cue onset (K. Nakayama & M. Mackeben, 1989). Do stimuli from a categorically defined target set, such as letters or digits, also generate transient attention? Participants reported digit targets among…
Visual Cues and Listening Effort: Individual Variability
ERIC Educational Resources Information Center
Picou, Erin M.; Ricketts, Todd A; Hornsby, Benjamin W. Y.
2011-01-01
Purpose: To investigate the effect of visual cues on listening effort as well as whether predictive variables such as working memory capacity (WMC) and lipreading ability affect the magnitude of listening effort. Method: Twenty participants with normal hearing were tested using a paired-associates recall task in 2 conditions (quiet and noise) and…
Visual Sonority Modulates Infants' Attraction to Sign Language
ERIC Educational Resources Information Center
Stone, Adam; Petitto, Laura-Ann; Bosworth, Rain
2018-01-01
The infant brain may be predisposed to identify perceptually salient cues that are common to both signed and spoken languages. Recent theory based on spoken languages has advanced sonority as one of these potential language acquisition cues. Using a preferential looking paradigm with an infrared eye tracker, we explored visual attention of hearing…
Impaired Visual Attention in Children with Dyslexia.
ERIC Educational Resources Information Center
Heiervang, Einar; Hugdahl, Kenneth
2003-01-01
A cue-target visual attention task was administered to 25 children (ages 10-12) with dyslexia. Results showed a general pattern of slower responses in the children with dyslexia compared to controls. Subjects also had longer reaction times in the short and long cue-target interval conditions (covert and overt shift of attention). (Contains…
Visual Cues, Student Sex, Material Taught, and the Magnitude of Teacher Expectancy Effects.
ERIC Educational Resources Information Center
Badini, Aldo A.; Rosenthal, Robert
1989-01-01
Conducts an experiment on teacher expectancy effects to investigate the simultaneous effects of student gender, communication channel, and type of material taught (vocabulary and reasoning). Finds that the magnitude of teacher expectation effects was greater when students had access to visual cues, especially when the students were female. (MS)
Central and Peripheral Vision Loss Differentially Affects Contextual Cueing in Visual Search
ERIC Educational Resources Information Center
Geringswald, Franziska; Pollmann, Stefan
2015-01-01
Visual search for targets in repeated displays is more efficient than search for the same targets in random distractor layouts. Previous work has shown that this contextual cueing is severely impaired under central vision loss. Here, we investigated whether central vision loss, simulated with gaze-contingent displays, prevents the incidental…
Speaker Identity Supports Phonetic Category Learning
ERIC Educational Resources Information Center
Mani, Nivedita; Schneider, Signe
2013-01-01
Visual cues from the speaker's face, such as the discriminable mouth movements used to produce speech sounds, improve discrimination of these sounds by adults. The speaker's face, however, provides more information than just the mouth movements used to produce speech--it also provides a visual indexical cue of the identity of the speaker. The…
Superior Temporal Activation in Response to Dynamic Audio-Visual Emotional Cues
ERIC Educational Resources Information Center
Robins, Diana L.; Hunyadi, Elinora; Schultz, Robert T.
2009-01-01
Perception of emotion is critical for successful social interaction, yet the neural mechanisms underlying the perception of dynamic, audio-visual emotional cues are poorly understood. Evidence from language and sensory paradigms suggests that the superior temporal sulcus and gyrus (STS/STG) play a key role in the integration of auditory and visual…
Enhancing Visual Search Abilities of People with Intellectual Disabilities
ERIC Educational Resources Information Center
Li-Tsang, Cecilia W. P.; Wong, Jackson K. K.
2009-01-01
This study aimed to evaluate the effects of cueing in a visual search paradigm for people with and without intellectual disabilities (ID). A total of 36 subjects (18 persons with ID and 18 persons with normal intelligence) were recruited using a convenience sampling method. A series of experiments were conducted to compare guided cue strategies using…
Heightened attentional capture by visual food stimuli in anorexia nervosa.
Neimeijer, Renate A M; Roefs, Anne; de Jong, Peter J
2017-08-01
The present study was designed to test the hypothesis that anorexia nervosa (AN) patients are relatively insensitive to the attentional capture of visual food stimuli. Attentional avoidance of food might help AN patients to prevent more elaborate processing of food stimuli and the subsequent generation of craving, which might enable AN patients to maintain their strict diet. Participants were 66 restrictive AN spectrum patients and 55 healthy controls. A single-target rapid serial visual presentation task was used with food and disorder-neutral cues as critical distracter stimuli and disorder-neutral pictures as target stimuli. AN spectrum patients showed diminished task performance when visual food cues were presented in close temporal proximity of the to-be-identified target. In contrast to our hypothesis, results indicate that food cues automatically capture AN spectrum patients' attention. One explanation could be that the enhanced attentional capture of food cues in AN is driven by the relatively high threat value of food items in AN. Implications and suggestions for future research are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Okada, Takashi; Sato, Wataru; Kubota, Yasutaka; Toichi, Motomi; Murai, Toshiya
2012-03-01
The neural substrate for the processing of gaze remains unknown. The aim of the present study was to clarify which hemisphere dominantly processes gaze cues, and whether the two hemispheres cooperate with each other, in gaze-triggered reflexive shifts of attention. Twenty-eight normal subjects were tested. Non-predictive gaze cues were presented in either unilateral or bilateral visual fields. The subjects localized the target as soon as possible. Reaction times (RTs) were shorter when gaze cues pointed toward rather than away from targets, whichever visual field they were presented in. RTs were shorter for left than for right visual field presentations. RTs for mono-directional bilateral presentations were shorter than those for either left or right unilateral presentations. When bi-directional bilateral cues were presented, RTs were faster when valid cues appeared in the left rather than the right visual field. The right hemisphere appears to be dominant, and there is interhemispheric cooperation, in gaze-triggered reflexive shifts of attention. © 2012 The Authors. Psychiatry and Clinical Neurosciences © 2012 Japanese Society of Psychiatry and Neurology.
Subjective scaling of spatial room acoustic parameters influenced by visual environmental cues
Valente, Daniel L.; Braasch, Jonas
2010-01-01
Although there have been numerous studies investigating subjective spatial impression in rooms, only a few of those studies have addressed the influence of visual cues on the judgment of auditory measures. In the psychophysical study presented here, video footage of five solo music/speech performers was shown for four different listening positions within a general-purpose space. The videos were presented in addition to the acoustic signals, which were auralized using binaural room impulse responses (BRIR) that were recorded in the same general-purpose space. The participants were asked to adjust the direct-to-reverberant energy ratio (D/R ratio) of the BRIR according to their expectation considering the visual cues. They were also directed to rate the apparent source width (ASW) and listener envelopment (LEV) for each condition. Visual cues generated by changing the sound-source position in the multi-purpose space, as well as the makeup of the sound stimuli affected the judgment of spatial impression. Participants also scaled the direct-to-reverberant energy ratio with greater direct sound energy than was measured in the acoustical environment. PMID:20968367
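The D/R ratio being adjusted here is the ratio of direct to reverberant energy in the impulse response. A minimal sketch of its computation follows; the few-millisecond direct-sound window is a common convention, not a value stated in the paper.

```python
# Minimal sketch (window length is a common convention, not the paper's
# stated value): direct-to-reverberant energy ratio of a room impulse
# response, splitting a few milliseconds after the direct-sound peak.
import numpy as np

def dr_ratio_db(ir, fs, direct_ms=2.5):
    """D/R ratio in dB for a single-channel impulse response."""
    onset = int(np.argmax(np.abs(ir)))          # direct-sound arrival
    split = onset + int(direct_ms * 1e-3 * fs)  # end of direct window
    direct = np.sum(ir[:split] ** 2)
    reverberant = np.sum(ir[split:] ** 2)
    return 10 * np.log10(direct / reverberant)

# Toy exponentially decaying IR at fs = 48 kHz (illustrative only):
fs = 48000
ir = np.random.randn(fs) * np.exp(-np.linspace(0, 60, fs))
ir[0] = 10.0  # strong direct sound
print(dr_ratio_db(ir, fs))
```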
Brandão, Lenisa; Monção, Ana Maria; Andersson, Richard; Holmqvist, Kenneth
2014-01-01
The goal of this study was to investigate whether on-topic visual cues can serve as aids for the maintenance of discourse coherence and informativeness in autobiographical narratives of persons with Alzheimer's disease (AD). The experiment consisted of three randomized conversation conditions: one without prompts, showing a blank computer screen; an on-topic condition, showing a picture and a sentence about the conversation; and an off-topic condition, showing a picture and a sentence which were unrelated to the conversation. Speech was recorded while visual attention was examined using eye tracking to measure how long participants looked at cues and the face of the listener. Results suggest that interventions using visual cues in the form of images and written information are useful to improve discourse informativeness in AD. This study demonstrated the potential of using images and short written messages as means of compensating for the cognitive deficits which underlie uninformative discourse in AD. Future studies should further investigate the efficacy of language interventions based in the use of these compensation strategies for AD patients and their family members and friends.
Visually-guided attention enhances target identification in a complex auditory scene.
Best, Virginia; Ozmeral, Erol J; Shinn-Cunningham, Barbara G
2007-06-01
In auditory scenes containing many similar sound sources, sorting of acoustic information into streams becomes difficult, which can lead to disruptions in the identification of behaviorally relevant targets. This study investigated the benefit of providing simple visual cues for when and/or where a target would occur in a complex acoustic mixture. Importantly, the visual cues provided no information about the target content. In separate experiments, human subjects either identified learned birdsongs in the presence of a chorus of unlearned songs or recalled strings of spoken digits in the presence of speech maskers. A visual cue indicating which loudspeaker (from an array of five) would contain the target improved accuracy for both kinds of stimuli. A cue indicating which time segment (out of a possible five) would contain the target also improved accuracy, but much more for birdsong than for speech. These results suggest that in real world situations, information about where a target of interest is located can enhance its identification, while information about when to listen can also be helpful when targets are unfamiliar or extremely similar to their competitors.
Coherence of structural visual cues and pictorial gravity paves the way for interceptive actions.
Zago, Myrka; La Scaleia, Barbara; Miller, William L; Lacquaniti, Francesco
2011-09-20
Dealing with upside-down objects is difficult and takes time. Among the cues that are critical for defining object orientation, the visible influence of gravity on the object's motion has received limited attention. Here, we manipulated the alignment of visible gravity and structural visual cues between each other and relative to the orientation of the observer and physical gravity. Participants pressed a button triggering a hitter to intercept a target accelerated by a virtual gravity. A factorial design assessed the effects of scene orientation (normal or inverted) and target gravity (normal or inverted). We found that interception was significantly more successful when scene direction was concordant with target gravity direction, irrespective of whether both were upright or inverted. This was so independent of the hitter type and of whether performance feedback to the participants was available (Experiment 1) or unavailable (Experiment 2). These results show that the combined influence of visible gravity and structural visual cues can outweigh both physical gravity and viewer-centered cues, leading observers to rely instead on the congruence of the apparent physical forces acting on people and objects in the scene.
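Interception timing in such tasks hinges on predicting the target's arrival time under constant acceleration, from d = v0*t + a*t^2/2. A worked sketch with illustrative numbers (not the study's parameters) shows how a misestimated gravity magnitude shifts the predicted arrival time:

```python
# Minimal sketch (numbers are illustrative): arrival time of a target
# traversing distance d under constant acceleration a with initial speed
# v0. Assuming 1 g when the virtual gravity differs shifts the estimate,
# so a hitter triggered on the wrong assumption arrives early or late.
import math

def arrival_time(d, v0=0.0, a=9.81):
    """Solve d = v0*t + a*t**2/2 for t."""
    return (math.sqrt(v0 ** 2 + 2 * a * d) - v0) / a

t_normal = arrival_time(d=1.5)          # ~0.55 s under 1 g
t_slowed = arrival_time(d=1.5, a=6.0)   # slower virtual gravity
print(t_normal, t_slowed, t_slowed - t_normal)
```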
Novelty Enhances Visual Salience Independently of Reward in the Parietal Lobe
Foley, Nicholas C.; Jangraw, David C.; Peck, Christopher
2014-01-01
Novelty modulates sensory and reward processes, but it remains unknown how these effects interact, i.e., how the visual effects of novelty are related to its motivational effects. A widespread hypothesis, based on findings that novelty activates reward-related structures, is that all the effects of novelty are explained in terms of reward. According to this idea, a novel stimulus is by default assigned high reward value and hence high salience, but this salience rapidly decreases if the stimulus signals a negative outcome. Here we show that, contrary to this idea, novelty affects visual salience in the monkey lateral intraparietal area (LIP) in ways that are independent of expected reward. Monkeys viewed peripheral visual cues that were novel or familiar (received few or many exposures) and predicted whether the trial will have a positive or a negative outcome—i.e., end in a reward or a lack of reward. We used a saccade-based assay to detect whether the cues automatically attracted or repelled attention from their visual field location. We show that salience—measured in saccades and LIP responses—was enhanced by both novelty and positive reward associations, but these factors were dissociable and habituated on different timescales. The monkeys rapidly recognized that a novel stimulus signaled a negative outcome (and withheld anticipatory licking within the first few presentations), but the salience of that stimulus remained high for multiple subsequent presentations. Therefore, novelty can provide an intrinsic bonus for attention that extends beyond the first presentation and is independent of physical rewards. PMID:24899716
Wayfinding and Glaucoma: A Virtual Reality Experiment.
Daga, Fábio B; Macagno, Eduardo; Stevenson, Cory; Elhosseiny, Ahmed; Diniz-Filho, Alberto; Boer, Erwin R; Schulze, Jürgen; Medeiros, Felipe A
2017-07-01
Wayfinding, the process of determining and following a route between an origin and a destination, is an integral part of everyday tasks. The purpose of this study was to investigate the impact of glaucomatous visual field loss on wayfinding behavior using an immersive virtual reality (VR) environment. This cross-sectional study included 31 glaucoma patients and 20 healthy subjects without evidence of overall cognitive impairment. Wayfinding experiments were modeled after the Morris water maze navigation task and conducted in an immersive VR environment. Two rooms were built varying only in the complexity of the visual scene in order to promote allocentric-based (room A, with multiple visual cues) versus egocentric-based (room B, with a single visual cue) spatial representations of the environment. Wayfinding tasks in each room consisted of revisiting previously visible targets that subsequently became invisible. For room A, glaucoma patients spent on average 35.0 seconds to perform the wayfinding task, whereas healthy subjects spent an average of 24.4 seconds (P = 0.001). For room B, no statistically significant difference was seen in average time to complete the task (26.2 seconds versus 23.4 seconds, respectively; P = 0.514). For room A, each 1-dB worse binocular mean sensitivity was associated with a 3.4% increase in time to complete the task (P = 0.001). Glaucoma patients performed significantly worse on allocentric-based wayfinding tasks conducted in a VR environment, suggesting visual field loss may affect the construction of spatial cognitive maps relevant to successful wayfinding. VR environments may represent a useful approach for assessing functional vision endpoints for clinical trials of emerging therapies in ophthalmology.
Irsik, Vanessa C; Vanden Bosch der Nederlanden, Christina M; Snyder, Joel S
2016-11-01
Attention and other processing constraints limit the perception of objects in complex scenes, which has been studied extensively in the visual sense. We used a change deafness paradigm to examine how attention to particular objects helps and hurts the ability to notice changes within complex auditory scenes. In a counterbalanced design, we examined how cueing attention to particular objects affected performance in an auditory change-detection task through the use of valid cues, invalid cues, and trials without cues (Experiment 1). We further examined how successful encoding predicted change-detection performance using an object-encoding task, and we addressed whether performing the object-encoding task along with the change-detection task affected performance overall (Experiment 2). Participants made more errors on invalid than on valid and uncued trials, but this effect was reduced in Experiment 2 compared to Experiment 1. When the object-encoding task was present, listeners who completed the uncued condition first had less overall error than those who completed the cued condition first. All participants showed less change deafness when they successfully encoded change-relevant compared to irrelevant objects during valid and uncued trials. However, only participants who completed the uncued condition first also showed this effect during invalid-cue trials, suggesting a broader scope of attention. These findings provide converging evidence that attention to change-relevant objects is crucial for successful detection of acoustic changes and that encouraging broad attention to multiple objects is the best way to reduce change deafness. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Interaction of color and geometric cues in depth perception: when does "red" mean "near"?
Guibal, Christophe R C; Dresp, Birgitta
2004-12-01
Luminance and color are strong and self-sufficient cues to pictorial depth in visual scenes and images. The present study investigates the conditions under which luminance or color either strengthens or overrides geometric depth cues. We investigated how luminance contrast associated with the color red and color contrast interact with relative height in the visual field, partial occlusion, and interposition to determine the probability that a given figure presented in a pair is perceived as "nearer" than the other. Latencies of "near" responses were analyzed to test for effects of attentional selection. Figures in a pair were supported by luminance contrast (Experiment 1) or isoluminant color contrast (Experiment 2) and combined with one of the three geometric cues. The results of Experiment 1 show that the luminance contrast of a color (here red), when it does not interact with other colors, produces the same effects as achromatic luminance contrast: the probability of "near" increases with the luminance contrast of the color stimulus, and the latencies for "near" responses decrease with increasing luminance contrast. Partial occlusion is found to be a strong enough pictorial cue to support a weaker red luminance contrast. Interposition cues lose out against cues of spatial position and partial occlusion. The results of Experiment 2, with isoluminant displays of varying color contrast, reveal that red color contrast on a light background supported by any of the three geometric cues wins over green or white supported by any of the three geometric cues. On a dark background, red color contrast supported by the interposition cue loses out against green or white color contrast supported by partial occlusion. These findings reveal that color is not an independent depth cue, but is strongly influenced by luminance contrast and stimulus geometry. Systematically shorter response latencies for stronger "near" percepts demonstrate that selective visual attention reliably detects the most likely depth cue combination in a given configuration.
"On" freezing in Parkinson's disease: resistance to visual cue walking devices.
Kompoliti, K; Goetz, C G; Leurgans, S; Morrissey, M; Siegel, I M
2000-03-01
To measure "on" freezing during unassisted walking (UW) and test if two devices, a modified inverted stick (MIS) and a visual laser beam stick (LBS) improved walking speed and number of "on" freezing episodes in patients with Parkinson's disease (PD). Multiple visual cues can overcome "off' freezing episodes and can be useful in improving gait function in parkinsonian patients. These devices have not been specifically tested in "on" freezing, which is unresponsive to pharmacologic manipulations. Patients with PD, motor fluctuations and freezing while "on," attempted walking on a 60-ft track with each of three walking conditions in a randomized order: UW, MIS, and LBS. Total time to complete a trial, number of freezes, and the ratio of walking time to the number of freezes were compared using Friedman's test. Twenty-eight patients with PD, mean age 67.81 years (standard deviation [SD] 7.54), mean disease duration 13.04 years (SD 7.49), and mean motor Unified Parkinson's Disease Rating Scale score "on" 32.59 (SD 10.93), participated in the study. There was a statistically significant correlation of time needed to complete a trial and number of freezes for all three conditions (Spearman correlations: UW 0.973, LBS 0.0.930, and MIS 0.842). The median number of freezes, median time to walk in each condition, and median walking time per freeze were not significantly different in pairwise comparisons of the three conditions (Friedman's test). Of the 28 subjects, six showed improvement with the MIS and six with the LBS in at least one outcome measure. Assisting devices, specifically based on visual cues, are not consistently beneficial in overcoming "on" freezing in most patients with PD. Because this is an otherwise untreatable clinical problem and because occasional subjects do respond, cautious trials of such devices under the supervision of a health professional should be conducted to identify those patients who might benefit from their long-term use.
Camouflage, communication and thermoregulation: lessons from colour changing organisms.
Stuart-Fox, Devi; Moussalli, Adnan
2009-02-27
Organisms capable of rapid physiological colour change have become model taxa in the study of camouflage because they are able to respond dynamically to the changes in their visual environment. Here, we briefly review the ways in which studies of colour changing organisms have contributed to our understanding of camouflage and highlight some unique opportunities they present. First, from a proximate perspective, comparison of visual cues triggering camouflage responses and the visual perception mechanisms involved can provide insight into general visual processing rules. Second, colour changing animals can potentially tailor their camouflage response not only to different backgrounds but also to multiple predators with different visual capabilities. We present new data showing that such facultative crypsis may be widespread in at least one group, the dwarf chameleons. From an ultimate perspective, we argue that colour changing organisms are ideally suited to experimental and comparative studies of evolutionary interactions between the three primary functions of animal colour patterns: camouflage; communication; and thermoregulation.
Food portion size and energy density evoke different patterns of brain activation in children
Fearnbach, S Nicole; Wilson, Stephen J; Fisher, Jennifer O; Savage, Jennifer S; Rolls, Barbara J; Keller, Kathleen L
2017-01-01
Background: Large portions of food promote intake, but the mechanisms that drive this effect are unclear. Previous neuroimaging studies have identified the brain-reward and decision-making systems that are involved in the response to the energy density (ED) (kilocalories per gram) of foods, but few studies have examined the brain response to the food portion size (PS). Objective: We used functional MRI (fMRI) to determine the brain response to food images that differed in PSs (large and small) and ED (high and low). Design: Block-design fMRI was used to assess the blood oxygen level–dependent (BOLD) response to images in 36 children (7–10 y old; girls: 50%), which was tested after a 2-h fast. Pre-fMRI fullness and liking were rated on visual analog scales. A whole-brain cluster-corrected analysis was used to compare BOLD activation for main effects of the PS, ED, and their interaction. Secondary analyses were used to associate BOLD contrast values with appetitive traits and laboratory intake from meals for which the portions of all foods were increased. Results: Compared with small-PS cues, large-PS cues were associated with decreased activation in the inferior frontal gyrus (P < 0.01). Compared with low-ED cues, high-ED cues were associated with increased activation in multiple regions (e.g., in the caudate, cingulate, and precentral gyrus) and decreased activation in the insula and superior temporal gyrus (P < 0.01 for all). A PS × ED interaction was shown in the superior temporal gyrus (P < 0.01). BOLD contrast values for high-ED cues compared with low-ED cues in the insula, declive, and precentral gyrus were negatively related to appetitive traits (P < 0.05). There were no associations between the brain response to the PS and either appetitive traits or intake. Conclusions: Cues regarding food PS may be processed in the lateral prefrontal cortex, which is a region that is implicated in cognitive control, whereas ED activates multiple areas involved in sensory and reward processing. Possible implications include the development of interventions that target decision-making and reward systems differently to moderate overeating. PMID:27881393
NASA Technical Reports Server (NTRS)
Young, L. R.
1975-01-01
Preliminary tests and evaluations of pilot performance during landing (flight paths), using computer-generated images (video tapes), are presented. Psychophysiological factors affecting pilot visual perception were measured. A turning flight maneuver (pitch and roll) was specifically studied using a training device, and the scaling laws involved were determined. Also presented are medical studies (abstracts) on human responses to gravity variations without visual cues, the effects of acceleration stimuli on the semicircular canals, neurons affecting eye movements, and vestibular tests.
Web Video Event Recognition by Semantic Analysis From Ubiquitous Documents.
Yu, Litao; Yang, Yang; Huang, Zi; Wang, Peng; Song, Jingkuan; Shen, Heng Tao
2016-12-01
In recent years, the task of event recognition from videos has attracted increasing interest in the multimedia area. While most existing research has focused mainly on exploring visual cues to handle relatively small-granular events, it is difficult to directly analyze video content without any prior knowledge. Therefore, synthesizing both visual and semantic analysis is a natural way to approach video event understanding. In this paper, we study the problem of Web video event recognition, where Web videos often describe large-granular events and carry limited textual information. Key challenges include how to accurately represent event semantics from incomplete textual information and how to effectively explore the correlation between visual and textual cues for video event understanding. We propose a novel framework to perform complex event recognition from Web videos. In order to compensate for the insufficient expressive power of visual cues, we construct an event knowledge base by deeply mining semantic information from ubiquitous Web documents. This event knowledge base is capable of describing each event with comprehensive semantics. By utilizing this base, the textual cues for a video can be significantly enriched. Furthermore, we introduce a two-view adaptive regression model, which explores the intrinsic correlation between the visual and textual cues of videos to learn reliable classifiers. Extensive experiments on two real-world video data sets show the effectiveness of our proposed framework and prove that the event knowledge base indeed helps improve the performance of Web video event recognition.
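For orientation only: the sketch below is not the authors' two-view adaptive regression model, just a generic late-fusion baseline showing the basic idea of combining visual and textual views for event classification. All features, labels, and dimensions are hypothetical.

```python
# Minimal generic sketch (not the paper's model): late fusion of a visual
# and a textual classifier by averaging their predicted probabilities.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
X_visual = rng.normal(size=(n, 64))   # hypothetical video features
X_textual = rng.normal(size=(n, 32))  # hypothetical document features
y = rng.integers(0, 2, size=n)        # toy binary event label

clf_v = LogisticRegression(max_iter=1000).fit(X_visual, y)
clf_t = LogisticRegression(max_iter=1000).fit(X_textual, y)

scores = (0.5 * clf_v.predict_proba(X_visual)[:, 1]
          + 0.5 * clf_t.predict_proba(X_textual)[:, 1])
pred = (scores > 0.5).astype(int)
print((pred == y).mean())  # training accuracy of the fused baseline
```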
Gait parameter control timing with dynamic manual contact or visual cues
Shi, Peter; Werner, William
2016-01-01
We investigated the timing of gait parameter changes (stride length, peak toe velocity, and double-, single-support, and complete step duration) to control gait speed. Eleven healthy participants adjusted their gait speed on a treadmill to maintain a constant distance between them and a fore-aft oscillating cue (a place on a conveyor belt surface). The experimental design balanced conditions of cue modality (vision: eyes-open; manual contact: eyes-closed while touching the cue); treadmill speed (0.2, 0.4, 0.85, and 1.3 m/s); and cue motion (none, ±10 cm at 0.09, 0.11, and 0.18 Hz). Correlation analyses revealed a number of temporal relationships between gait parameters and cue speed. The results suggest that neural control ranged from feedforward to feedback. Specifically, step length preceded cue velocity during double-support duration suggesting anticipatory control. Peak toe velocity nearly coincided with its most-correlated cue velocity during single-support duration. The toe-off concluding step and double-support durations followed their most-correlated cue velocity, suggesting feedback control. Cue-tracking accuracy and cue velocity correlations with timing parameters were higher with the manual contact cue than visual cue. The cue/gait timing relationships generalized across cue modalities, albeit with greater delays of step-cycle events relative to manual contact cue velocity. We conclude that individual kinematic parameters of gait are controlled to achieve a desired velocity at different specific times during the gait cycle. The overall timing pattern of instantaneous cue velocities associated with different gait parameters is conserved across cues that afford different performance accuracies. This timing pattern may be temporally shifted to optimize control. Different cue/gait parameter latencies in our nonadaptation paradigm provide general-case evidence of the independent control of gait parameters previously demonstrated in gait adaptation paradigms. PMID:26936979
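The lagged-correlation logic in this record (a gait parameter leading the cue suggests anticipatory control; lagging it suggests feedback) can be sketched with simulated signals. Sampling rate, cue frequency, and the injected delay below are assumptions for illustration.

```python
# Minimal sketch (signals are simulated): find the lag at which a gait
# parameter best correlates with cue velocity. A positive best lag means
# the gait parameter follows the cue (feedback); negative means it leads
# the cue (anticipatory control).
import numpy as np

fs = 100.0                                       # samples/s (assumed)
t = np.arange(0, 60, 1 / fs)
cue_velocity = np.sin(2 * np.pi * 0.11 * t)      # 0.11 Hz cue motion
gait_param = np.roll(cue_velocity, int(0.4 * fs))  # follows cue by 0.4 s
gait_param = gait_param + 0.2 * np.random.randn(t.size)

def lagged_corr(x, y, k):
    """Pearson r between x(t) and y(t + k samples), k >= 0."""
    return np.corrcoef(x[:x.size - k] if k else x, y[k:])[0, 1]

max_lag = int(3 * fs)                            # search +/- 3 s
lags = range(-max_lag, max_lag + 1)
corrs = [lagged_corr(cue_velocity, gait_param, k) if k >= 0
         else lagged_corr(gait_param, cue_velocity, -k)
         for k in lags]
best_lag_s = lags[int(np.argmax(corrs))] / fs
print(best_lag_s)  # ~0.4 s: the gait parameter lags the cue
```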
van Hoesel, Richard J M
2015-04-01
One of the key benefits of using cochlear implants (CIs) in both ears rather than just one is improved localization. It is likely that in complex listening scenes, improved localization allows bilateral CI users to orient toward talkers to improve signal-to-noise ratios and gain access to visual cues, but to date, that conjecture has not been tested. To obtain an objective measure of that benefit, seven bilateral CI users were assessed for both auditory-only and audio-visual speech intelligibility in noise using a novel dynamic spatial audio-visual test paradigm. For each trial conducted in spatially distributed noise, first, an auditory-only cueing phrase that was spoken by one of four talkers was selected and presented from one of four locations. Shortly afterward, a target sentence was presented that was either audio-visual or, in another test configuration, audio-only and was spoken by the same talker and from the same location as the cueing phrase. During the target presentation, visual distractors were added at other spatial locations. Results showed that in terms of speech reception thresholds (SRTs), the average improvement for bilateral listening over the better performing ear alone was 9 dB for the audio-visual mode, and 3 dB for audition-alone. Comparison of bilateral performance for audio-visual and audition-alone showed that inclusion of visual cues led to an average SRT improvement of 5 dB. For unilateral device use, no such benefit arose, presumably due to the greatly reduced ability to localize the target talker to acquire visual information. The bilateral CI speech intelligibility advantage over the better ear in the present study is much larger than that previously reported for static talker locations and indicates greater everyday speech benefits and improved cost-benefit than estimated to date.
Optical methods for enabling focus cues in head-mounted displays for virtual and augmented reality
NASA Astrophysics Data System (ADS)
Hua, Hong
2017-05-01
Developing head-mounted displays (HMDs) that offer uncompromised optical pathways to both digital and physical worlds without encumbrance and discomfort confronts many grand challenges, from both technological and human-factors perspectives. Among these challenges, minimizing visual discomfort is one of the key obstacles. A key contributing factor to visual discomfort is the inability to render proper focus cues in HMDs to stimulate natural eye accommodation responses, which leads to the well-known accommodation-convergence cue discrepancy problem. In this paper, I provide a summary of the various optical approaches to enabling focus cues in HMDs for both virtual reality (VR) and augmented reality (AR).
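The accommodation-convergence discrepancy mentioned here is commonly quantified in diopters, as the gap between the display's fixed focal distance and the vergence distance of the rendered object. A small sketch with a hypothetical focal plane:

```python
# Minimal sketch: accommodation-vergence conflict in diopters for a
# conventional fixed-focus HMD. The 2 m focal plane is illustrative.

def conflict_diopters(focal_distance_m, vergence_distance_m):
    return abs(1 / focal_distance_m - 1 / vergence_distance_m)

display_focus = 2.0  # hypothetical fixed focal plane at 2 m
for obj in (0.3, 0.5, 1.0, 2.0, 10.0):
    print(obj, round(conflict_diopters(display_focus, obj), 2))
# Near virtual objects (0.3 m -> ~2.8 D) create large conflicts; objects
# close to the focal plane create almost none.
```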
Byrne, Patrick A; Crawford, J Douglas
2010-06-01
It is not known how egocentric visual information (location of a target relative to the self) and allocentric visual information (location of a target relative to external landmarks) are integrated to form reach plans. Based on behavioral data from rodents and humans we hypothesized that the degree of stability in visual landmarks would influence the relative weighting. Furthermore, based on numerous cue-combination studies we hypothesized that the reach system would act like a maximum-likelihood estimator (MLE), where the reliability of both cues determines their relative weighting. To predict how these factors might interact we developed an MLE model that weighs egocentric and allocentric information based on their respective reliabilities, and also on an additional stability heuristic. We tested the predictions of this model in 10 human subjects by manipulating landmark stability and reliability (via variable amplitude vibration of the landmarks and variable amplitude gaze shifts) in three reach-to-touch tasks: an egocentric control (reaching without landmarks), an allocentric control (reaching relative to landmarks), and a cue-conflict task (involving a subtle landmark "shift" during the memory interval). Variability from all three experiments was used to derive parameters for the MLE model, which was then used to simulate egocentric-allocentric weighting in the cue-conflict experiment. As predicted by the model, landmark vibration--despite its lack of influence on pointing variability (and thus allocentric reliability) in the control experiment--had a strong influence on egocentric-allocentric weighting. A reduced model without the stability heuristic was unable to reproduce this effect. These results suggest heuristics for extrinsic cue stability are at least as important as reliability for determining cue weighting in memory-guided reaching.
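The inverse-variance weighting at the core of the MLE account can be sketched compactly. The stability multiplier below is a simplified stand-in for the authors' heuristic, and all variances are invented for illustration:

```python
# Minimal sketch of reliability-based (MLE) cue weighting with a crude
# stability term; numbers are illustrative, not fitted values.

def mle_weight(var_ego, var_allo, stability=1.0):
    """Weight given to the allocentric cue (0..1).

    Reliability = 1/variance; 'stability' in 0..1 down-weights the
    allocentric landmarks when they appear unstable (e.g., vibrating).
    """
    r_ego, r_allo = 1 / var_ego, stability * (1 / var_allo)
    return r_allo / (r_ego + r_allo)

w_stable = mle_weight(var_ego=4.0, var_allo=2.0)                 # ~0.67
w_vibrate = mle_weight(var_ego=4.0, var_allo=2.0, stability=0.4) # ~0.44
print(w_stable, w_vibrate)
# Combined reach estimate: x_hat = (1 - w) * x_ego + w * x_allo
```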
Do preschool children learn to read words from environmental prints?
Zhao, Jing; Zhao, Pei; Weng, Xuchu; Li, Su
2014-01-01
Parents and teachers worldwide believe that a visual environment rich with print can contribute to young children's literacy. Children seem to recognize words in familiar logos at an early age. However, most previous studies were carried out with alphabetic scripts. Alphabetic letters regularly correspond to phonological segments in a word and provide strong cues about the identity of the whole word. It was therefore unclear whether children can learn to read words by extracting visual word form information from environmental print. To exclude this phonological-cue confound, this study tested children's knowledge of Chinese words embedded in familiar logos. Four environmental logos were employed and transformed into four versions in which the contextual cues (i.e., information apart from the words themselves, such as color, logo shape, and font type) were gradually minimized. Children aged 3 to 5 were tested. Children of all ages performed better when words were presented in highly familiar logos than when they were presented in a plain fashion, devoid of context. This advantage for familiar logos was also present when the contextual information was only partial. However, the role of the various cues in learning words changed with age. The color and logo cues had a larger effect in 3- and 4-year-olds than in 5-year-olds, while the font-type cue played a greater role in 5-year-olds than in the other two groups. Our findings demonstrate that young children do not easily learn words by extracting their visual form information, even from familiar environmental print. However, children aged 5 begin to pay more attention to the visual form of words in highly familiar logos than those aged 3 and 4.
Davidson, Melanie M; Butler, Ruth C; Teulon, David A J
2006-07-01
The effects of starvation or age on the walking or flying response of female Frankliniella occidentalis to visual and/or odor cues were examined in the laboratory using two types of olfactometer. The response of walking thrips starved for 0, 1, 4, or 24 h to an odor cue (1 µl of 10% p-anisaldehyde) was examined in a Y-tube olfactometer. The take-off and landing response of thrips (of unknown age) starved for 0, 1, 4, 24, 48 or 72 h, or of thrips of different ages (2-3 days or 10-13 days post-adult emergence) starved for 24 h, to a visual cue (98 cm² yellow sticky trap) and/or an odor cue (0.5 or 1.0 ml p-anisaldehyde) was examined in a wind tunnel. More thrips walked up the odor-laden arm of the Y-tube when starved for at least 4 h (76%) than when satiated (58.7%) or starved for 1 h (62.7%; P<0.05). In the wind tunnel experiments, the percentage of thrips that flew or landed on the sticky trap increased from satiated thrips (7.3% flew, 3.3% landed on the trap) to those starved for 4 h (81.2% flew, 29% landed) and decreased from thrips starved for 48 h (74.5% flew, 23% landed) to those starved for 72 h (56.5% flew, 15.5% landed; P<0.05). Of the thrips that flew, fewer younger thrips (38.8%) than older thrips (70.4%) landed on a sticky trap bearing a yellow visual cue (P<0.05), although a similar percentage of thrips flew regardless of age or the type of cue present in the wind tunnel (average 44%; P>0.05).
The Role of Visual Cues in Microgravity Spatial Orientation
NASA Technical Reports Server (NTRS)
Oman, Charles M.; Howard, Ian P.; Smith, Theodore; Beall, Andrew C.; Natapoff, Alan; Zacher, James E.; Jenkin, Heather L.
2003-01-01
In weightlessness, astronauts must rely on vision to remain spatially oriented. Although gravitational down cues are missing, most astronauts maintain a subjective vertical: a subjective sense of which way is up. This is evidenced by anecdotal reports of crewmembers feeling upside down (inversion illusions) or feeling that a floor has become a ceiling and vice versa (visual reorientation illusions). Instability in the subjective vertical direction can trigger disorientation and space motion sickness. On Neurolab, a virtual environment display system was used to conduct five interrelated experiments, which quantified: (a) how the direction of each person's subjective vertical depends on the orientation of the surrounding visual environment, (b) whether rolling the virtual visual environment produces stronger illusions of circular self-motion (circular vection) and more visual reorientation illusions than on Earth, (c) whether a virtual scene moving past the subject produces a stronger linear self-motion illusion (linear vection), and (d) whether deliberate manipulation of the subjective vertical changes a crewmember's interpretation of shading or the ability to recognize objects. None of the crew's subjective vertical indications became more independent of environmental cues in weightlessness. Three subjects who were either strongly dependent on or independent of stationary visual cues in preflight tests remained so inflight. One other became more visually dependent inflight, but recovered postflight. Susceptibility to illusions of circular self-motion increased in flight. The time to onset of linear self-motion illusions decreased and the illusion magnitude significantly increased for most subjects while free-floating in weightlessness. These decreased toward one-g levels when the subject 'stood up' in weightlessness by wearing constant-force springs. For several subjects, changing the relative direction of the subjective vertical in weightlessness, either by body rotation or by simply cognitively initiating a visual reorientation, altered the illusion of convexity produced when viewing a flat, shaded disc. It changed at least one person's ability to recognize previously presented two-dimensional shapes. Overall, the results show that most astronauts become more dependent on dynamic visual motion cues, and some become responsive to stationary orientation cues. The direction of the subjective vertical is labile in the absence of gravity, which can interfere with the ability to properly interpret shading or to recognize complex objects in different orientations.
Ridley-Siegert, Thomas L; Crombag, Hans S; Yeomans, Martin R
2015-12-01
There is a wealth of data showing a large impact of food cues on human ingestion, yet most studies use pictures of food, where the precise nature of the association between cue and food is unclear. To test whether novel cues associated with the opportunity to win access to food images could also affect ingestion, 63 participants played a game in which novel visual cues signalled whether responding on a keyboard would win (a picture of) chocolate, crisps, or nothing. Thirty minutes later, participants were given an ad libitum snack-intake test during which the chocolate-paired cue, the crisp-paired cue, the non-winning cue, or no cue was presented as a label on the food containers. The presence of these cues significantly altered overall intake of the snack foods: participants given food labelled with the cue that had been associated with winning chocolate ate significantly more than participants given the same products labelled with the cue associated with winning nothing, and in the presence of the cue signalling the absence of food reward, participants tended to eat less than in all other conditions. Surprisingly, these cue-dependent changes in food consumption were unaffected by participants' level of contingency awareness. These results suggest that visual cues pre-associated with winning, but not consuming, a liked food reward modify food intake, consistent with current ideas that the abundance of food-associated cues may be one factor underlying the 'obesogenic environment'. Copyright © 2015 Elsevier Inc. All rights reserved.
Gettig, Jacob P
2006-04-01
To determine the prevalence of established multiple-choice test-taking cues (correct- and incorrect-answer cues) in the American College of Clinical Pharmacy's Updates in Therapeutics: The Pharmacotherapy Preparatory Course, 2005 Edition, as an equal or lesser surrogate for the prevalence of such cues in the Pharmacotherapy board certification examination. All self-assessment and patient-case question-and-answer sets were assessed individually to determine whether they were subject to selected correct- and incorrect-answer cues commonly seen in multiple-choice question writing. If a question was considered evaluable, correct-answer cues (longest answer, mid-range number, one of two similar choices, and one of two opposite choices) were tallied. Incorrect-answer cues (inclusionary language and grammatical mismatch) were also tallied. Each cue was counted whether it did what was expected or the opposite of what was expected, and multiple cues could be identified in each question. A total of 237 (47.7%) of 497 questions in the manual were deemed evaluable. A total of 325 correct-answer cues and 35 incorrect-answer cues were identified in the 237 evaluable questions. Most evaluable questions contained one or two correct and/or incorrect answer cues. Longest answer was the most frequently identified correct-answer cue; however, it was the least likely to identify the correct answer. Inclusionary language was the most frequently identified incorrect-answer cue. Incorrect-answer cues were considerably more likely to identify incorrect answer choices than correct-answer cues were to identify correct answers. The use of established multiple-choice test-taking cues is therefore unlikely to be of significant help when taking the Pharmacotherapy board certification examination, primarily because of the scarcity of questions subject to such cues and the inability of correct-answer cues to accurately identify correct answers. Incorrect-answer cues, especially inclusionary language, will almost always accurately identify an incorrect answer choice. Assuming that questions in the preparatory course manual are equal or lesser surrogates of those in the board certification examination, it is unlikely that intuition alone can replace adequate preparation and studying as the determinant of examination success.
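For illustration, two of the tallied cues can be expressed as simple heuristics. The word list and the example answer choices below are hypothetical, not drawn from the course manual.

```python
# Toy detectors for two of the tallied cues. The absolute-word list and the
# tie-breaking behavior (first longest choice wins) are assumptions.
ABSOLUTE_WORDS = {"always", "never", "all", "none", "only", "must"}

def longest_answer_cue(choices: list[str]) -> int:
    """Flag the single longest choice (a classic 'correct answer' cue)."""
    lengths = [len(c) for c in choices]
    return lengths.index(max(lengths))

def inclusionary_language_cue(choices: list[str]) -> list[int]:
    """Flag choices containing absolute terms (a classic 'incorrect answer' cue)."""
    return [i for i, c in enumerate(choices)
            if ABSOLUTE_WORDS & set(c.lower().split())]

choices = ["Increase the dose",
           "Always discontinue therapy immediately",
           "Monitor levels and adjust the dose based on renal function"]
print(longest_answer_cue(choices))         # 2 (the longest choice)
print(inclusionary_language_cue(choices))  # [1] ('Always')
```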
Eckstein, Miguel P; Mack, Stephen C; Liston, Dorion B; Bogush, Lisa; Menzel, Randolf; Krauzlis, Richard J
2013-06-07
Visual attention is commonly studied by using visuo-spatial cues that indicate probable locations of a target and assessing the effect of cue validity on perceptual performance and its neural correlates. Here, we adapt a cueing task to measure spatial cueing effects on the decisions of honeybees and compare their behavior to that of humans and monkeys in a similarly structured two-alternative forced-choice perceptual task. Unlike the typical cueing paradigm, in which the stimulus strength remains unchanged within a block of trials, for the monkey and human studies we randomized the contrast of the signal to simulate more realistic conditions in which the organism is uncertain about the strength of the signal. A Bayesian ideal observer that weights sensory evidence from cued and uncued locations based on the cue validity to maximize overall performance is used as a benchmark against the three animals and against suboptimal models: probability matching, ignore the cue, always follow the cue, and an additive-bias/single-decision-threshold model. We find that the cueing effect is pervasive across all three species but is smaller than that shown by the Bayesian ideal observer. Humans show a larger cueing effect than monkeys, and bees show the smallest effect. The cueing effect and overall performance of the honeybees allow rejection of models in which the bees ignore the cue, follow the cue while disregarding the stimuli to be discriminated, or adopt a probability-matching strategy. Stimulus-strength uncertainty also reduces the theoretically predicted variation in cueing effect with stimulus strength for an optimal Bayesian observer and diminishes the size of the cueing effect when stimulus strength is low. A more biologically plausible model that adds a bias to the sensory response from the cued location, although not mathematically equivalent to the optimal observer under stimulus-strength uncertainty, can approximate the benefits of the more computationally complex optimal Bayesian model. We discuss the implications of our findings for the field's common conceptualization of covert visual attention in the cueing task and what aspects, if any, might be unique to humans. Copyright © 2013 Elsevier Ltd. All rights reserved.
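A minimal sketch of such a Bayesian ideal observer for a two-location cueing task follows, assuming unit-variance Gaussian noise and illustrative parameter values (the paper's actual stimuli and parameters are not reproduced here).

```python
import numpy as np
rng = np.random.default_rng(1)

def ideal_observer_cueing(signal=1.5, cue_validity=0.8, n=200_000):
    """Bayesian ideal observer for a 2-location, two-alternative task:
    a cue marks the likely target location with probability `cue_validity`;
    noisy evidence x ~ N(signal, 1) arises at the target, N(0, 1) elsewhere.
    The observer adds the log prior odds implied by the cue to the log
    likelihood ratio and picks the more probable location."""
    target_cued = rng.random(n) < cue_validity        # True: target at cued location
    x_cued = rng.normal(0.0, 1.0, n) + signal * target_cued
    x_uncued = rng.normal(0.0, 1.0, n) + signal * ~target_cued
    # For unit-variance Gaussians, log LR (cued vs uncued) = signal * (x_cued - x_uncued)
    log_odds = np.log(cue_validity / (1 - cue_validity)) + signal * (x_cued - x_uncued)
    choose_cued = log_odds > 0
    valid_acc = np.mean(choose_cued[target_cued])     # accuracy on valid-cue trials
    invalid_acc = np.mean(~choose_cued[~target_cued]) # accuracy on invalid-cue trials
    return valid_acc, invalid_acc

valid, invalid = ideal_observer_cueing()
print(f"valid: {valid:.2f}  invalid: {invalid:.2f}  cueing effect: {valid - invalid:+.2f}")
```

The valid-minus-invalid accuracy difference printed at the end is the "cueing effect" against which the animals' behavior is benchmarked; setting cue_validity to 0.5 removes the prior term and the effect vanishes.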
An Investigation of Visual, Aural, Motion and Control Movement Cues.
ERIC Educational Resources Information Center
Matheny, W. G.; And Others
A study was conducted to determine the ways in which multi-sensory cues can be simulated and effectively used in the training of pilots. Two analytical bases, one called the stimulus environment approach and the other an information array approach, are developed along with a cue taxonomy. Cues are postulated on the basis of information gained from…
Horwood, Anna M; Riddell, Patricia M
2009-01-01
Binocular disparity, blur, and proximal cues drive convergence and accommodation. Disparity is considered to be the main vergence cue and blur the main accommodation cue. We have developed a remote haploscopic photorefractor to measure simultaneous vergence and accommodation objectively in a wide range of participants of all ages while fixating targets between 0.3 and 2 m. By separating the three main near cues, we can explore their relative weighting in three-, two-, one-, and zero-cue conditions. Disparity can be manipulated by remote occlusion; blur cues manipulated by using either a Gabor patch or a detailed picture target; looming cues by either scaling or not scaling target size with distance. In normal orthophoric, emmetropic, symptom-free, naive visually mature participants, disparity was by far the most significant cue to both vergence and accommodation. Accommodation responses dropped dramatically if disparity was not available. Blur only had a clinically significant effect when disparity was absent. Proximity had very little effect. There was considerable interparticipant variation. We predict that the relative weighting of near cue use is likely to vary between clinical groups and present some individual cases as examples. We are using this naturalistic tool to research strabismus, vergence and accommodation development, and emmetropization.
Mergner, T; Schweigart, G; Maurer, C; Blümle, A
2005-12-01
The role of visual orientation cues for human control of upright stance is still not well understood. We therefore investigated stance control during motion of a visual scene as stimulus, varying the stimulus parameters and the contribution from other senses (vestibular and leg proprioceptive cues present or absent). Eight normal subjects and three patients with chronic bilateral loss of vestibular function participated. They stood on a motion platform inside a cabin with an optokinetic pattern on its interior walls. The cabin was sinusoidally rotated in the anterior-posterior (a-p) direction with the horizontal rotation axis through the ankle joints (f = 0.05-0.4 Hz; A_max = 0.25°-4°; v_max = 0.08-10°/s). The subjects' centre of mass (COM) angular position was calculated from opto-electronically measured body sway parameters. The platform was either kept stationary or moved by coupling its position 1:1 to a-p hip position ('body sway referenced', BSR, platform condition), by which proprioceptive feedback of ankle joint angle became inactivated. The visual stimulus evoked in-phase COM excursions (visual responses) in all subjects. (1) In normal subjects on a stationary platform, the visual responses showed saturation with both increasing velocity and displacement of the visual stimulus. The saturation showed up abruptly when visually evoked COM velocity and displacement reached approximately 0.1°/s and 0.1°, respectively. (2) In normal subjects on a BSR platform (proprioceptive feedback disabled), the visual responses showed similar saturation characteristics, but at clearly higher COM velocity and displacement values (approximately 1°/s and 1°, respectively). (3) In patients on a stationary platform (no vestibular cues), the visual responses were basically similar to those of the normal subjects, apart from somewhat higher gain values and less pronounced saturation effects. (4) In patients on a BSR platform (no vestibular and proprioceptive cues, presumably only somatosensory graviceptive and visual cues), the visual responses showed an abnormal increase in gain with increasing stimulus frequency, in addition to a displacement saturation. In additional experiments on the normal subjects we varied the gain of the visual response by using a 'virtual reality' visual stimulus or by applying small lateral platform tilts; this did not considerably affect the saturation characteristics of the visual response. We compared the present results to previous psychophysical findings on motion perception, noting similarities of the saturation characteristics in (1) with leg proprioceptive detection thresholds of approximately 0.1°/s and 0.1°, and of those in (2) with vestibular detection thresholds of 1°/s and 1°, respectively. From the psychophysical data one might hypothesise that a proprioceptive postural mechanism limits the visually evoked body excursions once they exceed 0.1°/s and 0.1° in condition (1), and that a vestibular mechanism does so at 1°/s and 1° in (2). To better understand this, we performed computer simulations using a posture control model with multiple sensory feedbacks. We had recently designed the model to describe postural responses to body pull and platform tilt stimuli. Here, we added a visual input and adjusted its gain to fit the simulated data to the experimental data.
The saturation characteristics of the normal subjects' visual responses were well mimicked by the simulations. They arose from central thresholds on the proprioceptive, vestibular and somatosensory signals in the model, which, however, differed from the psychophysical thresholds. Yet we demonstrate in a theoretical approach that for condition (1) the model can be made monomodal proprioceptive with the psychophysical 0.1°/s and 0.1° thresholds, and for (2) monomodal vestibular with the psychophysical 1°/s and 1° thresholds, and still show the corresponding saturation characteristics (whereas our original model covers both conditions without adjustments). The model simulations also predicted the almost normal visual responses of patients on a stationary platform and their clearly abnormal responses on a BSR platform.
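As a toy illustration of how a central threshold (dead zone) on a corrective feedback channel produces this kind of abrupt response saturation, the sketch below uses a single static loop: a visual channel drives sway toward the scene tilt, and a corrective channel (standing in for the proprioceptive or vestibular loop) opposes sway only once it exceeds the threshold. This is not the authors' multi-loop dynamic model; all gains and the single-loop structure are invented, and only the displacement-saturation mechanism is shown.

```python
def dead_zone(x: float, threshold: float) -> float:
    """Central threshold: only the part of the signal that exceeds the
    threshold is passed on to the corrective controller."""
    excess = abs(x) - threshold
    return 0.0 if excess <= 0 else excess * (1.0 if x > 0 else -1.0)

def evoked_sway(scene_tilt_deg: float, vis_gain: float = 0.5,
                fb_gain: float = 20.0, threshold_deg: float = 0.1) -> float:
    """Steady-state visually evoked body excursion for a given scene tilt,
    found by damped fixed-point iteration (the damping is purely for
    numerical stability). All parameter values are toy assumptions."""
    sway = 0.0
    for _ in range(5000):
        drive = vis_gain * scene_tilt_deg - fb_gain * dead_zone(sway, threshold_deg)
        sway += 0.01 * (drive - sway)
    return sway

# Below the threshold the response grows linearly with the stimulus;
# above it the corrective loop clamps the excursion near 0.1 deg,
# qualitatively echoing the saturation reported for condition (1).
for tilt in (0.1, 0.2, 0.5, 1.0, 4.0):
    print(f"scene tilt {tilt:4.1f} deg -> evoked sway {evoked_sway(tilt):.3f} deg")
```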
NASA Technical Reports Server (NTRS)
Young, L. R.; Oman, C. M.; Curry, R. E.
1977-01-01
Vestibular perception and the integration of several sensory inputs in simulation were studied. The relationship between tilt sensations induced by moving visual fields and those produced by actual body tilt is discussed. Linear vection studies were included, and the application of the vestibular model to the perception of orientation based on motion cues is presented. Other areas examined include visual cues in the approach to landing and a comparison of linear and nonlinear washout filters using a model of the human vestibular system.
The role of visual and mechanosensory cues in structuring forward flight in Drosophila melanogaster.
Budick, Seth A; Reiser, Michael B; Dickinson, Michael H
2007-12-01
It has long been known that many flying insects use visual cues to orient with respect to the wind and to control their groundspeed in the face of varying wind conditions. Much less explored has been the role of mechanosensory cues in orienting insects relative to the ambient air. Here we show that Drosophila melanogaster, magnetically tethered so as to be able to rotate about their yaw axis, are able to detect and orient into a wind, as would be experienced during forward flight. Further, this behavior is velocity dependent and is likely subserved, at least in part, by Johnston's organs, the antennal chordotonal organs also involved in near-field sound detection. These wind-mediated responses may help to explain how flies are able to fly forward despite visual responses that might otherwise inhibit this behavior. Expanding visual stimuli, such as those encountered during forward flight, are the most potent aversive visual cues known for D. melanogaster flying in a tethered paradigm. Accordingly, tethered flies strongly orient towards a focus of contraction, a problematic situation for any animal attempting to fly forward. We show in this study that wind stimuli, transduced via mechanosensory means, can compensate for the aversion to visual expansion and thus may help to explain how these animals are indeed able to maintain forward flight.