Murphy, Kathleen M; Saunders, Muriel D; Saunders, Richard R; Olswang, Lesley B
2004-01-01
The effects of different types and amounts of environmental stimuli (visual and auditory) on microswitch use and behavioral states of three individuals with profound multiple impairments were examined. Each individual's switch use and behavioral states were measured under three setting conditions: natural stimuli (typical visual and auditory stimuli in a recreational situation), reduced visual stimuli, and reduced visual and auditory stimuli. Results demonstrated differential switch use in all participants across the varying environmental setting conditions. No consistent effects of environmental condition on behavioral state were observed. Predominant behavioral state scores and switch use did not systematically covary for any participant. Results suggest the importance of considering environmental stimuli in relation to switch use when working with individuals with profound multiple impairments.
Effect of negative emotions evoked by light, noise and taste on trigeminal thermal sensitivity.
Yang, Guangju; Baad-Hansen, Lene; Wang, Kelun; Xie, Qiu-Fei; Svensson, Peter
2014-11-07
Patients with migraine often have impaired somatosensory function and experience headache attacks triggered by exogenous stimuli, such as light, sound or taste. This study aimed to assess the influence of three controlled conditioning stimuli (visual, auditory and gustatory), presented alone and in combination, on affective state and thermal sensitivity in healthy human participants. All participants attended four experimental sessions, with visual, auditory and gustatory conditioning stimuli and a combination of all stimuli, in a randomized sequence. In each session, somatosensory sensitivity was tested in the perioral region using thermal stimuli with and without the conditioning stimuli. Positive and negative affect states (Positive and Negative Affect Schedule, PANAS) were assessed before and after the tests. Subject-based ratings of the conditioning and test stimuli, in addition to skin temperature and heart rate as indicators of arousal responses, were collected in real time during the tests. The three conditioning stimuli all induced significant increases in negative PANAS scores (paired t-test, P ≤ 0.016). Compared with baseline, the increases were nearly dose-dependent during visual and auditory conditioning stimulation. No significant effects of any single conditioning stimulus were observed on trigeminal thermal sensitivity (P ≥ 0.051) or arousal parameters (P ≥ 0.057). The effects of combined conditioning stimuli on subjective ratings (P ≤ 0.038) and negative affect (P = 0.011) were stronger than those of single stimuli. All three conditioning stimuli provided a simple way to evoke a negative affective state without physical arousal or influence on trigeminal thermal sensitivity. Multisensory conditioning had stronger effects but also failed to modulate thermal sensitivity, suggesting that so-called exogenous trigger stimuli (e.g., bright light, noise, unpleasant taste) may require a predisposed or sensitized nervous system to act as triggers in patients with migraine.
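The within-subject comparison reported above (negative-affect scores before vs. after conditioning, paired t-test) can be sketched with the standard formula t = mean(d) / (sd(d) / √n); the score vectors below are hypothetical illustrations, not the study's data:

```python
import math
import statistics

def paired_t(before, after):
    """Paired t-statistic for within-subject before/after scores."""
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)        # sample SD of the differences
    t = mean_d / (sd_d / math.sqrt(n))
    return t, n - 1                       # t-statistic and degrees of freedom

# Hypothetical negative-affect scores for 8 participants (not the study's data)
before = [12, 14, 11, 13, 15, 12, 14, 13]
after  = [16, 18, 14, 17, 19, 15, 18, 16]
t, df = paired_t(before, after)
print(f"t({df}) = {t:.2f}")  # t(7) = 19.81
```

In practice one would look up (or compute) the p-value for the resulting t against the t-distribution with n-1 degrees of freedom.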
Visual-auditory integration during speech imitation in autism.
Williams, Justin H G; Massaro, Dominic W; Peel, Natalie J; Bosseler, Alexis; Suddendorf, Thomas
2004-01-01
Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional 'mirror neuron' systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a 'virtual' head (Baldi), delivered speech stimuli for identification in auditory, visual or bimodal conditions. Children with ASD were poorer than controls at recognizing stimuli in the unimodal conditions, but once performance on this measure was controlled for, no group difference was found in the bimodal condition. A group of participants with ASD were also trained to develop their speech-reading ability. Training improved visual accuracy and this also improved the children's ability to utilize visual information in their processing of speech. Overall results were compared to predictions from mathematical models based on integration and non-integration, and were most consistent with the integration model. We conclude that, whilst they are less accurate in recognizing stimuli in the unimodal condition, children with ASD show normal integration of visual and auditory speech stimuli. Given that training in recognition of visual speech was effective, children with ASD may benefit from multi-modal approaches in imitative therapy and language training.
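In Massaro's framework, the integration model against which performance is typically compared is the fuzzy logical model of perception (FLMP), which combines unimodal support multiplicatively. A minimal sketch of that combination rule, with illustrative support values rather than fitted parameters:

```python
def flmp(audio_support, visual_support):
    """Fuzzy Logical Model of Perception: multiplicative integration of
    unimodal evidence for one alternative, normalized against the
    complementary alternative."""
    num = audio_support * visual_support
    den = num + (1 - audio_support) * (1 - visual_support)
    return num / den

# Two weakly informative cues (0.6 each) yield stronger bimodal support:
print(round(flmp(0.6, 0.6), 3))  # 0.692
# An uninformative cue (0.5) leaves the other cue's support unchanged:
print(round(flmp(0.5, 0.8), 3))  # 0.8
```

The second property (a 0.5 cue is neutral) is what lets the model predict bimodal performance from unimodal accuracies once those are controlled for.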
Adaptation in human visual cortex as a mechanism for rapid discrimination of aversive stimuli.
Keil, Andreas; Stolarova, Margarita; Moratti, Stephan; Ray, William J
2007-06-01
The ability to react rapidly and efficiently to adverse stimuli is crucial for survival. Neuroscience and behavioral studies have converged to show that visual information associated with aversive content is processed quickly and accurately and is associated with rapid amplification of the neural responses. In particular, unpleasant visual information has repeatedly been shown to evoke increased cortical activity during early visual processing between 60 and 120 ms following the onset of a stimulus. However, the nature of these early responses is not well understood. Using neutral versus unpleasant colored pictures, the current report examines the time course of short-term changes in the human visual cortex when a subject is repeatedly exposed to simple grating stimuli in a classical conditioning paradigm. We analyzed changes in amplitude and synchrony of large-scale oscillatory activity across 2 days of testing, which included baseline measurements, 2 conditioning sessions, and a final extinction session. We found a gradual increase in amplitude and synchrony of very early cortical oscillations in the 20-35 Hz range across conditioning sessions, specifically for conditioned stimuli predicting aversive visual events. This increase for conditioned stimuli affected stimulus-locked cortical oscillations at a latency of around 60-90 ms and disappeared during extinction. Our findings suggest that reorganization of neural connectivity on the level of the visual cortex acts to optimize early perception of specific features indicative of emotional relevance.
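The 20-35 Hz amplitude measure can be illustrated with a stdlib-only discrete Fourier transform that returns the peak spectral amplitude inside a band. This is only the amplitude-at-band step on a synthetic signal; the study's actual pipeline (sensor data, windowing, synchrony measures, across-session statistics) is not reproduced:

```python
import math

def band_amplitude(signal, fs, f_lo, f_hi):
    """Peak amplitude among DFT bins in [f_lo, f_hi] Hz (rectangular window)."""
    n = len(signal)
    amps = []
    for k in range(n // 2 + 1):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = -sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            amps.append(2 * math.hypot(re, im) / n)  # scale to sine amplitude
    return max(amps)

# Synthetic 1-s epoch: a pure 28 Hz oscillation, inside the 20-35 Hz band
fs = 200
epoch = [math.sin(2 * math.pi * 28 * t / fs) for t in range(fs)]
print(round(band_amplitude(epoch, fs, 20, 35), 2))  # 1.0
```

A real analysis would use an FFT library and average amplitudes over trials per session before comparing conditioning to extinction.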
Effects of Visual Speech on Early Auditory Evoked Fields - From the Viewpoint of Individual Variance
Yahata, Izumi; Kawase, Tetsuaki; Kanno, Akitake; Hidaka, Hiroshi; Sakamoto, Shuichi; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio
2017-01-01
The effects of visual speech (the moving image of the speaker’s face uttering speech sound) on early auditory evoked fields (AEFs) were examined using a helmet-shaped magnetoencephalography system in 12 healthy volunteers (9 males, mean age 35.5 years). AEFs (N100m) in response to the monosyllabic sound /be/ were recorded and analyzed under three different visual stimulus conditions, the moving image of the same speaker’s face uttering /be/ (congruent visual stimuli) or uttering /ge/ (incongruent visual stimuli), and visual noise (still image processed from speaker’s face using a strong Gaussian filter: control condition). On average, latency of N100m was significantly shortened in the bilateral hemispheres for both congruent and incongruent auditory/visual (A/V) stimuli, compared to the control A/V condition. However, the degree of N100m shortening was not significantly different between the congruent and incongruent A/V conditions, despite the significant differences in psychophysical responses between these two A/V conditions. Moreover, analysis of the magnitudes of these visual effects on AEFs in individuals showed that the lip-reading effects on AEFs tended to be well correlated between the two different audio-visual conditions (congruent vs. incongruent visual stimuli) in the bilateral hemispheres but were not significantly correlated between right and left hemisphere. On the other hand, no significant correlation was observed between the magnitudes of visual speech effects and psychophysical responses. These results may indicate that the auditory-visual interaction observed on the N100m is a fundamental process which does not depend on the congruency of the visual information. PMID:28141836
[Intermodal timing cues for audio-visual speech recognition].
Hashimoto, Masahiro; Kumashiro, Masaharu
2004-06-01
The purpose of this study was to investigate the limits of the lip-reading advantage for Japanese young adults by desynchronizing visual and auditory information in speech. In the experiment, audio-visual speech stimuli were presented under six test conditions: audio-alone, and audio-visual with either 0, 60, 120, 240 or 480 ms of audio delay. The stimuli were video recordings of the face of a female Japanese speaker producing long and short Japanese sentences. The intelligibility of the audio-visual stimuli was measured as a function of audio delay in sixteen untrained young subjects. Speech intelligibility under audio-delay conditions of less than 120 ms was significantly better than that under the audio-alone condition. Notably, the 120-ms delay corresponded to the mean mora duration measured for the audio stimuli. The results imply that audio delays of up to 120 ms do not disrupt the lip-reading advantage, because visual and auditory information in speech appear to be integrated on a syllabic time scale. Potential applications of this research include noisy workplaces in which workers must extract relevant speech from competing noise.
Honeybees in a virtual reality environment learn unique combinations of colour and shape.
Rusch, Claire; Roth, Eatai; Vinauger, Clément; Riffell, Jeffrey A
2017-10-01
Honeybees are well-known models for the study of visual learning and memory. Whereas most of our knowledge of learned responses comes from experiments using free-flying bees, a tethered preparation would allow fine-scale control of the visual stimuli as well as accurate characterization of the learned responses. Unfortunately, conditioning procedures using visual stimuli in tethered bees have been limited in their efficacy. In this study, using a novel virtual reality environment and a differential training protocol in tethered walking bees, we show that the majority of honeybees learn visual stimuli, and need only six paired training trials to learn the stimulus. We found that bees readily learn visual stimuli that differ in both shape and colour. However, bees learn certain components over others (colour versus shape), and visual stimuli are learned in a non-additive manner with the interaction of specific colour and shape combinations being crucial for learned responses. To better understand which components of the visual stimuli the bees learned, the shape-colour association of the stimuli was reversed either during or after training. Results showed that maintaining the visual stimuli in training and testing phases was necessary to elicit visual learning, suggesting that bees learn multiple components of the visual stimuli. Together, our results demonstrate a protocol for visual learning in restrained bees that provides a powerful tool for understanding how components of a visual stimulus elicit learned responses as well as elucidating how visual information is processed in the honeybee brain. © 2017. Published by The Company of Biologists Ltd.
Walter, Sabrina; Keitel, Christian; Müller, Matthias M
2016-01-01
Visual attention can be focused concurrently on two stimuli at noncontiguous locations while intermediate stimuli remain ignored. Nevertheless, behavioral performance in multifocal attention tasks falters when attended stimuli fall within one visual hemifield as opposed to when they are distributed across left and right hemifields. This "different-hemifield advantage" has been ascribed to largely independent processing capacities of each cerebral hemisphere in early visual cortices. Here, we investigated how this advantage influences the sustained division of spatial attention. We presented six isoeccentric light-emitting diodes (LEDs) in the lower visual field, each flickering at a different frequency. Participants attended to two LEDs that were spatially separated by an intermediate LED and responded to synchronous events at to-be-attended LEDs. Task-relevant pairs of LEDs were either located in the same hemifield ("within-hemifield" conditions) or separated by the vertical meridian ("across-hemifield" conditions). Flicker-driven brain oscillations, steady-state visual evoked potentials (SSVEPs), indexed the allocation of attention to individual LEDs. Both behavioral performance and SSVEPs indicated enhanced processing of attended LED pairs during "across-hemifield" relative to "within-hemifield" conditions. Moreover, SSVEPs demonstrated effective filtering of intermediate stimuli in "across-hemifield" condition only. Thus, despite identical physical distances between LEDs of attended pairs, the spatial profiles of gain effects differed profoundly between "across-hemifield" and "within-hemifield" conditions. These findings corroborate that early cortical visual processing stages rely on hemisphere-specific processing capacities and highlight their limiting role in the concurrent allocation of visual attention to multiple locations.
ERIC Educational Resources Information Center
Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn
2011-01-01
Several studies, using eye tracking methodology, suggest that different visual strategies in persons with autism spectrum conditions, compared with controls, are applied when viewing facial stimuli. Most eye tracking studies are, however, made in laboratory settings with either static (photos) or non-interactive dynamic stimuli, such as video…
Interpersonal touch suppresses visual processing of aversive stimuli
Kawamichi, Hiroaki; Kitada, Ryo; Yoshihara, Kazufumi; Takahashi, Haruka K.; Sadato, Norihiro
2015-01-01
Social contact is essential for survival in human society. A previous study demonstrated that interpersonal contact alleviates pain-related distress by suppressing the activity of its underlying neural network. One explanation for this is that attention is shifted from the cause of distress to interpersonal contact. To test this hypothesis, we conducted a functional MRI (fMRI) study wherein eight pairs of close female friends rated the aversiveness of aversive and non-aversive visual stimuli under two conditions: joining hands either with a rubber model (rubber-hand condition) or with a close friend (human-hand condition). Subsequently, participants rated the overall comfortableness of each condition. The rating result after fMRI indicated that participants experienced greater comfortableness during the human-hand compared to the rubber-hand condition, whereas aversiveness ratings during fMRI were comparable across conditions. The fMRI results showed that the two conditions commonly produced aversive-related activation in both sides of the visual cortex (including V1, V2, and V5). An interaction between aversiveness and hand type showed rubber-hand-specific activation for (aversive > non-aversive) in other visual areas (including V1, V2, V3, and V4v). The effect of interpersonal contact on the processing of aversive stimuli was negatively correlated with the increment of attentional focus to aversiveness measured by a pain-catastrophizing scale. These results suggest that interpersonal touch suppresses the processing of aversive visual stimuli in the occipital cortex. This effect covaried with aversiveness-insensitivity, such that aversive-insensitive individuals might require a lesser degree of attentional capture to aversive-stimulus processing. As joining hands did not influence the subjective ratings of aversiveness, interpersonal touch may operate by redirecting excessive attention away from aversive characteristics of the stimuli. PMID:25904856
The Influence of Selective and Divided Attention on Audiovisual Integration in Children.
Yang, Weiping; Ren, Yanna; Yang, Dan Ou; Yuan, Xue; Wu, Jinglong
2016-01-24
This article aims to investigate whether audiovisual integration in school-aged children (aged 6 to 13 years; mean age = 9.9 years) differs between selective attention and divided attention conditions. We designed a visual and/or auditory detection task that included three blocks (divided attention, visual-selective attention, and auditory-selective attention). The results showed that responses to bimodal audiovisual stimuli were faster than to unimodal auditory or visual stimuli under both the divided attention and auditory-selective attention conditions. However, in the visual-selective attention condition, no significant difference in response speed was found between the unimodal visual and bimodal audiovisual stimuli. Moreover, audiovisual behavioral facilitation effects were compared between divided attention and selective attention (auditory or visual attention). In doing so, we found that audiovisual behavioral facilitation differed significantly between divided attention and selective attention. The results indicated that audiovisual integration was stronger in the divided attention condition than in the selective attention condition in children. Our findings objectively support the notion that attention can modulate audiovisual integration in school-aged children. Our study might offer a new perspective for identifying children with conditions that are associated with sustained attention deficit, such as attention-deficit hyperactivity disorder. © The Author(s) 2016.
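A common way to quantify such audiovisual facilitation (a standard analysis in this literature, not necessarily the exact one used in this study) is Miller's race-model inequality, which bounds how fast redundant-target responses can be without true integration; a sketch on hypothetical reaction times:

```python
def ecdf(rts, t):
    """Empirical probability of having responded by time t."""
    return sum(rt <= t for rt in rts) / len(rts)

def race_violation(rt_av, rt_a, rt_v, t):
    """Miller's race-model inequality: integration is suggested when
    P(RT_AV <= t) exceeds P(RT_A <= t) + P(RT_V <= t)."""
    return ecdf(rt_av, t) - min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))

# Hypothetical RTs in ms (not the study's data)
rt_a  = [320, 340, 360, 380, 400]   # auditory-only
rt_v  = [330, 350, 370, 390, 410]   # visual-only
rt_av = [260, 280, 300, 320, 340]   # audiovisual
print(race_violation(rt_av, rt_a, rt_v, 300) > 0)  # True: AV responses beat the race bound
```

A positive violation at early quantiles is taken as behavioral evidence of multisensory integration rather than mere statistical facilitation.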
Decreased visual detection during subliminal stimulation.
Bareither, Isabelle; Villringer, Arno; Busch, Niko A
2014-10-17
What is the perceptual fate of invisible stimuli: are they processed at all, and does their processing have consequences for the perception of other stimuli? As has been shown previously in the somatosensory system, even stimuli that are too weak to be consciously detected can influence our perception: subliminal stimulation impairs perception of near-threshold stimuli and causes a functional deactivation in the somatosensory cortex. In a recent study, we showed that subliminal visual stimuli lead to similar responses, indicated by an increase in alpha-band power as measured with electroencephalography (EEG). In the current study, we investigated whether a behavioral inhibitory mechanism also exists within the visual system. We tested the detection of peripheral visual target stimuli under three different conditions: target stimuli were presented alone or embedded in a concurrent train of subliminal stimuli either at the same location as the target or in the opposite hemifield. Subliminal stimuli were invisible due to their low contrast, not due to a masking procedure. We demonstrate that target detection was impaired by the subliminal stimuli, but only when they were presented at the same location as the target. This finding indicates that subliminal, low-intensity stimuli induce a similar inhibitory effect in the visual system as has been observed in the somatosensory system. In line with previous reports, we propose that the function underlying this effect is the inhibition of spurious noise by the visual system. © 2014 ARVO.
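Detection performance in tasks like this is often summarized as the sensitivity index d' computed from hit and false-alarm rates; a sketch with hypothetical rates (the specific numbers and the use of d' here are illustrative assumptions, not the study's reported measures):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: z(hits) - z(false alarms)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical rates: target alone vs. with a same-location subliminal train
print(round(d_prime(0.80, 0.10), 2))  # 2.12
print(round(d_prime(0.65, 0.10), 2))  # 1.67, i.e. impaired detection
```

Separating sensitivity from response bias this way helps distinguish genuine perceptual impairment from a shift in the observer's criterion.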
Accessory stimulus modulates executive function during stepping task
Watanabe, Tatsunori; Koyama, Soichiro; Tanabe, Shigeo
2015-01-01
When multiple sensory modalities are presented simultaneously, reaction times can be shortened, but interference can increase. The purpose of this research was to examine the effects of task-irrelevant acoustic accessory stimuli, presented simultaneously with visual imperative stimuli, on executive function during stepping. Executive functions were assessed by analyzing temporal events and errors in the initial weight transfer of the postural responses prior to a step (anticipatory postural adjustment errors). Eleven healthy young adults stepped forward in response to a visual stimulus. We applied a choice reaction time task and the Simon task, which consisted of congruent and incongruent conditions. Accessory stimuli were randomly presented with the visual stimuli. Compared with trials without accessory stimuli, anticipatory postural adjustment error rates were higher in trials with accessory stimuli in the incongruent condition, and reaction times were shorter in trials with accessory stimuli in all task conditions. Analyses after dividing trials according to whether an anticipatory postural adjustment error occurred revealed that reaction times in trials with anticipatory postural adjustment errors were reduced more than those in trials without such errors in the incongruent condition. These results suggest that accessory stimuli modulate the initial motor programming of stepping by lowering the decision threshold and, exclusively under spatial incompatibility, facilitate automatic response activation. The present findings advance knowledge of intersensory judgment processes during stepping and may aid in the development of intervention and evaluation tools for individuals at risk of falls. PMID:25925321
Song, Jae-Jin; Lee, Hyo-Jeong; Kang, Hyejin; Lee, Dong Soo; Chang, Sun O; Oh, Seung Ha
2015-03-01
While deafness-induced plasticity has been investigated in the visual and auditory domains, not much is known about language processing in audiovisual multimodal environments for patients with restored hearing via cochlear implant (CI) devices. Here, we examined the effect of agreeing or conflicting visual inputs on auditory processing in deaf patients equipped with degraded artificial hearing. Ten post-lingually deafened CI users with good performance, along with matched control subjects, underwent H 2 (15) O-positron emission tomography scans while carrying out a behavioral task requiring the extraction of speech information from unimodal auditory stimuli, bimodal audiovisual congruent stimuli, and incongruent stimuli. Regardless of congruency, the control subjects demonstrated activation of the auditory and visual sensory cortices, as well as the superior temporal sulcus, the classical multisensory integration area, indicating a bottom-up multisensory processing strategy. Compared to CI users, the control subjects exhibited activation of the right ventral premotor-supramarginal pathway. In contrast, CI users activated primarily the visual cortices more in the congruent audiovisual condition than in the null condition. In addition, compared to controls, CI users displayed an activation focus in the right amygdala for congruent audiovisual stimuli. The most notable difference between the two groups was an activation focus in the left inferior frontal gyrus in CI users confronted with incongruent audiovisual stimuli, suggesting top-down cognitive modulation for audiovisual conflict. Correlation analysis revealed that good speech performance was positively correlated with right amygdala activity for the congruent condition, but negatively correlated with bilateral visual cortices regardless of congruency. 
Taken together these results suggest that for multimodal inputs, cochlear implant users are more vision-reliant when processing congruent stimuli and are disturbed more by visual distractors when confronted with incongruent audiovisual stimuli. To cope with this multimodal conflict, CI users activate the left inferior frontal gyrus to adopt a top-down cognitive modulation pathway, whereas normal hearing individuals primarily adopt a bottom-up strategy.
Preschoolers' speed of locating a target symbol under different color conditions.
Wilkinson, Krista M; Carlin, Michael; Jagaroo, Vinoth
2006-06-01
A pressing decision in AAC concerns the organization of aided visual symbols. One recent proposal suggested that basic principles of visual processing may be important determinants of how easily a symbol is found in an array, and that this, in turn will influence more functional outcomes like symbol identification or use. This study examined the role of color on accuracy and speed of symbol location by 16 preschool children without disabilities. Participants searched for a target stimulus in an array of eight stimuli. In the same-color condition, the eight stimuli were all red; in the guided search condition, four of the stimuli were red and four were yellow; in the unique-color condition, all stimuli were unique colors. Accuracy was higher and reaction time was faster when stimuli were unique colors than when they were all one color. Reaction time and accuracy did not differ under the guided search and the color-unique conditions. The implications for AAC are discussed.
ERIC Educational Resources Information Center
Baeken, Chris; Van Schuerbeek, Peter; De Raedt, Rudi; Vanderhasselt, Marie-Anne; De Mey, Johan; Bossuyt, Axel; Luypaert, Robert
2012-01-01
The amygdalae are key players in the processing of a variety of emotional stimuli. Especially aversive visual stimuli have been reported to attract attention and activate the amygdalae. However, as it has been argued that passively viewing withdrawal-related images could attenuate instead of activate amygdalae neuronal responses, its role under…
Body Context and Posture Affect Mental Imagery of Hands
Ionta, Silvio; Perruchoud, David; Draganski, Bogdan; Blanke, Olaf
2012-01-01
Different visual stimuli have been shown to recruit different mental imagery strategies. However the role of specific visual stimuli properties related to body context and posture in mental imagery is still under debate. Aiming to dissociate the behavioural correlates of mental processing of visual stimuli characterized by different body context, in the present study we investigated whether the mental rotation of stimuli showing either hands as attached to a body (hands-on-body) or not (hands-only), would be based on different mechanisms. We further examined the effects of postural changes on the mental rotation of both stimuli. Thirty healthy volunteers verbally judged the laterality of rotated hands-only and hands-on-body stimuli presented from the dorsum- or the palm-view, while positioning their hands on their knees (front postural condition) or behind their back (back postural condition). Mental rotation of hands-only, but not of hands-on-body, was modulated by the stimulus view and orientation. Additionally, only the hands-only stimuli were mentally rotated at different speeds according to the postural conditions. This indicates that different stimulus-related mechanisms are recruited in mental rotation by changing the bodily context in which a particular body part is presented. The present data suggest that, with respect to hands-only, mental rotation of hands-on-body is less dependent on biomechanical constraints and proprioceptive input. We interpret our results as evidence for preferential processing of visual- rather than kinesthetic-based mechanisms during mental transformation of hands-on-body and hands-only, respectively. PMID:22479618
Development of a Bayesian Estimator for Audio-Visual Integration: A Neurocomputational Study
Ursino, Mauro; Crisafulli, Andrea; di Pellegrino, Giuseppe; Magosso, Elisa; Cuppini, Cristiano
2017-01-01
The brain integrates information from different sensory modalities to generate a coherent and accurate percept of external events. Several experimental studies suggest that this integration follows the principle of Bayesian estimation. However, the neural mechanisms responsible for this behavior, and its development in a multisensory environment, are still insufficiently understood. We recently presented a neural network model of audio-visual integration (Neural Computation, 2017) to investigate how a Bayesian estimator can spontaneously develop from the statistics of external stimuli. The model assumes the presence of two topologically organized unimodal areas (auditory and visual). Neurons in each area receive an input from the external environment, computed as the inner product of the sensory-specific stimulus and the receptive field synapses, and a cross-modal input from neurons of the other modality. Based on sensory experience, synapses were trained via Hebbian potentiation and a decay term. The aim of this work is to improve the previous model by including a more realistic distribution of visual stimuli: visual stimuli have a higher spatial accuracy at the central azimuthal coordinate and a lower accuracy at the periphery. Moreover, their prior probability is higher at the center and decreases toward the periphery. Simulations show that, after training, the receptive fields of visual and auditory neurons shrink to reproduce the accuracy of the input (both at the center and at the periphery in the visual case), thus realizing the likelihood estimate of unimodal spatial position. Moreover, the preferred positions of visual neurons contract toward the center, thus encoding the prior probability of the visual input. Finally, a prior probability of the co-occurrence of audio-visual stimuli is encoded in the cross-modal synapses. The model is able to simulate the main properties of a Bayesian estimator and to reproduce behavioral data in all conditions examined. 
In particular, in unisensory conditions the visual estimates exhibit a bias toward the fovea, which increases with the level of noise. In cross modal conditions, the SD of the estimates decreases when using congruent audio-visual stimuli, and a ventriloquism effect becomes evident in case of spatially disparate stimuli. Moreover, the ventriloquism decreases with the eccentricity. PMID:29046631
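The behavior described above (reduced SD for congruent audio-visual stimuli, and a ventriloquism shift toward the more reliable cue) is what reliability-weighted fusion of Gaussian cues predicts in closed form; a sketch with illustrative numbers, not the network's trained parameters:

```python
def fuse(mu_a, var_a, mu_v, var_v):
    """Maximum-likelihood fusion of two Gaussian cues:
    each cue is weighted by its inverse variance (reliability)."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_a)
    mu = w_v * mu_v + (1 - w_v) * mu_a
    var = 1 / (1 / var_v + 1 / var_a)
    return mu, var

# Auditory cue at 10 deg (variance 4), visual cue at 0 deg (variance 1):
# the fused estimate is pulled toward the more reliable visual cue
# (ventriloquism), and its variance falls below either unimodal variance.
mu, var = fuse(mu_a=10.0, var_a=4.0, mu_v=0.0, var_v=1.0)
print(round(mu, 2), round(var, 2))  # 2.0 0.8
```

Adding an eccentricity-dependent visual variance to this rule would likewise predict the reported decrease of the ventriloquism effect toward the fovea.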
Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B
2012-07-16
Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another is less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another. Copyright © 2012 Elsevier Inc. All rights reserved.
Semantic congruency and the (reversed) Colavita effect in children and adults.
Wille, Claudia; Ebersbach, Mirjam
2016-01-01
When presented with auditory, visual, or bimodal audiovisual stimuli in a discrimination task, adults tend to ignore the auditory component in bimodal stimuli and respond to the visual component only (i.e., Colavita visual dominance effect). The same is true for older children, whereas young children are dominated by the auditory component of bimodal audiovisual stimuli. This suggests a change of sensory dominance during childhood. The aim of the current study was to investigate, in three experimental conditions, whether children and adults show sensory dominance when presented with complex semantic stimuli and whether this dominance can be modulated by stimulus characteristics such as semantic (in)congruency, frequency of bimodal trials, and color information. Semantic (in)congruency did not affect the magnitude of the auditory dominance effect in 6-year-olds or the visual dominance effect in adults, but it was a modulating factor of the visual dominance in 9-year-olds (Conditions 1 and 2). Furthermore, the absence of color information (Condition 3) did not affect auditory dominance in 6-year-olds and hardly affected visual dominance in adults, whereas the visual dominance in 9-year-olds disappeared. Our results suggest that (a) sensory dominance in children and adults is not restricted to simple lights and sounds, as used in previous research, but can be extended to semantically meaningful stimuli and that (b) sensory dominance is more robust in 6-year-olds and adults than in 9-year-olds, implying a transitional stage around this age. Copyright © 2015 Elsevier Inc. All rights reserved.
Infant Attention to Dynamic Audiovisual Stimuli: Look Duration from 3 to 9 Months of Age
ERIC Educational Resources Information Center
Reynolds, Greg D.; Zhang, Dantong; Guy, Maggie W.
2013-01-01
The goal of this study was to examine developmental change in visual attention to dynamic visual and audiovisual stimuli in 3-, 6-, and 9-month-old infants. Infant look duration was measured during exposure to dynamic geometric patterns and Sesame Street video clips under three different stimulus modality conditions: unimodal visual, synchronous…
Villena-González, Mario; López, Vladimir; Rodríguez, Eugenio
2016-05-15
When attention is oriented toward inner thoughts, as spontaneously occurs during mind wandering, the processing of external information is attenuated. However, the potential effects of thought content on sensory attenuation are still unknown. The present study aims to assess whether the representational format of thoughts, such as visual imagery or inner speech, might differentially affect the sensory processing of external stimuli. We recorded the brain activity of 20 participants (12 women) while they were exposed to a probe visual stimulus in three different conditions: executing a task on the visual probe (externally oriented attention), and two conditions involving inward-turned attention, i.e., generating inner speech and performing visual imagery. Event-related potential results showed that the P1 amplitude, related to sensory response, was significantly attenuated during both tasks involving inward attention compared with the external task. When the two representational formats were compared, the visual imagery condition showed stronger attenuation of sensory processing than the inner speech condition. Alpha power in visual areas was measured as an index of cortical inhibition. Larger alpha amplitude was found when participants engaged in internal thought compared with the external task, with visual imagery showing even more alpha power than the inner speech condition. Our results show, for the first time to our knowledge, that visual attentional processing of external stimuli during self-generated thoughts is differentially affected by the representational format of the ongoing train of thoughts. Copyright © 2016 Elsevier Inc. All rights reserved.
Moors, Pieter; Huygelier, Hanne; Wagemans, Johan; de-Wit, Lee; van Ee, Raymond
2015-01-01
Previous studies using binocular rivalry have shown that signals in a modality other than the visual can bias dominance durations depending on their congruency with the rivaling stimuli. More recently, studies using continuous flash suppression (CFS) have reported that multisensory integration influences how long visual stimuli remain suppressed. In this study, using CFS, we examined whether the contrast thresholds for detecting visual looming stimuli are influenced by a congruent auditory stimulus. In Experiment 1, we show that a looming visual stimulus can result in lower detection thresholds compared to a static concentric grating, but that auditory tone pips congruent with the looming stimulus did not lower suppression thresholds any further. In Experiments 2, 3, and 4, we again observed no advantage for congruent multisensory stimuli. These results add to our understanding of the conditions under which multisensory integration is possible, and suggest that certain forms of multisensory integration are not evident when the visual stimulus is suppressed from awareness using CFS.
Aurally aided visual search performance in a dynamic environment
NASA Astrophysics Data System (ADS)
McIntire, John P.; Havig, Paul R.; Watamaniuk, Scott N. J.; Gilkey, Robert H.
2008-04-01
Previous research has repeatedly shown that people can find a visual target significantly faster if spatial (3D) auditory displays direct attention to the corresponding spatial location. However, previous research has only examined searches for static (non-moving) targets in static visual environments. Since motion has been shown to affect visual acuity, auditory acuity, and visual search performance, it is important to characterize aurally-aided search performance in environments that contain dynamic (moving) stimuli. In the present study, visual search performance in both static and dynamic environments is investigated with and without 3D auditory cues. Eight participants searched for a single visual target hidden among 15 distracting stimuli. In the baseline audio condition, no auditory cues were provided. In the 3D audio condition, a virtual 3D sound cue originated from the same spatial location as the target. In the static search condition, the target and distractors did not move. In the dynamic search condition, all stimuli moved on various trajectories at 10 deg/s. The results showed a clear benefit of 3D audio that was present in both static and dynamic environments, suggesting that spatial auditory displays continue to be an attractive option for a variety of aircraft, motor vehicle, and command & control applications.
The primate amygdala represents the positive and negative value of visual stimuli during learning
Paton, Joseph J.; Belova, Marina A.; Morrison, Sara E.; Salzman, C. Daniel
2008-01-01
Visual stimuli can acquire positive or negative value through their association with rewards and punishments, a process called reinforcement learning. Although we now know a great deal about how the brain analyses visual information, we know little about how visual representations become linked with values. To study this process, we turned to the amygdala, a brain structure implicated in reinforcement learning [1-5]. We recorded the activity of individual amygdala neurons in monkeys while abstract images acquired either positive or negative value through conditioning. After monkeys had learned the initial associations, we reversed image value assignments. We examined neural responses in relation to these reversals in order to estimate the relative contribution to neural activity of the sensory properties of images and their conditioned values. Here we show that changes in the values of images modulate neural activity, and that this modulation occurs rapidly enough to account for, and correlates with, monkeys' learning. Furthermore, distinct populations of neurons encode the positive and negative values of visual stimuli. Behavioural and physiological responses to visual stimuli may therefore be based in part on the plastic representation of value provided by the amygdala. PMID:16482160
Fear conditioning to subliminal fear relevant and non fear relevant stimuli.
Lipp, Ottmar V; Kempnich, Clare; Jee, Sang Hoon; Arnold, Derek H
2014-01-01
A growing body of evidence suggests that conscious visual awareness is not a prerequisite for human fear learning. For instance, humans can learn to be fearful of subliminal fear relevant images--images depicting stimuli thought to have been fear relevant in our evolutionary context, such as snakes, spiders, and angry human faces. Such stimuli could have a privileged status in relation to manipulations used to suppress usually salient images from awareness, possibly due to the existence of a designated sub-cortical 'fear module'. Here we assess this proposition, and find it wanting. We use binocular masking to suppress awareness of images of snakes and wallabies (particularly cute, non-threatening marsupials). We find that subliminal presentations of both classes of image can induce differential fear conditioning. These data show that learning, as indexed by fear conditioning, is neither contingent on conscious visual awareness nor on subliminal conditional stimuli being fear relevant.
Spatial decoupling of targets and flashing stimuli for visual brain-computer interfaces
NASA Astrophysics Data System (ADS)
Waytowich, Nicholas R.; Krusienski, Dean J.
2015-06-01
Objective. Recently, paradigms using code-modulated visual evoked potentials (c-VEPs) have proven to achieve among the highest information transfer rates for noninvasive brain-computer interfaces (BCIs). One issue with current c-VEP paradigms, and visual-evoked paradigms in general, is that they require direct foveal fixation of the flashing stimuli. These interfaces are often visually unpleasant and can be irritating and fatiguing to the user, thus adversely impacting practical performance. In this study, a novel c-VEP BCI paradigm is presented that attempts to perform spatial decoupling of the targets and flashing stimuli using two distinct concepts: spatial separation and boundary positioning. Approach. For the paradigm, the flashing stimuli form a ring that encompasses the intended non-flashing targets, which are spatially separated from the stimuli. The user fixates on the desired target, which is classified using the changes to the EEG induced by the flashing stimuli located in the non-foveal visual field. Additionally, a subset of targets is also positioned at or near the stimulus boundaries, which decouples targets from direct association with a single stimulus. This allows a greater number of target locations for a fixed number of flashing stimuli. Main results. Results from 11 subjects showed practical classification accuracies for the non-foveal condition, with comparable performance to the direct-foveal condition for longer observation lengths. Online results from 5 subjects confirmed the offline results with an average accuracy across subjects of 95.6% for a 4-target condition. The offline analysis also indicated that targets positioned at or near the boundaries of two stimuli could be classified with the same accuracy as traditional superimposed (non-boundary) targets. Significance. 
The implications of this research are that c-VEPs can be detected and accurately classified to achieve comparable BCI performance without requiring potentially irritating direct foveation of flashing stimuli. Furthermore, this study shows that it is possible to increase the number of targets beyond the number of stimuli without degrading performance. Given the superior information transfer rate of c-VEP paradigms, these results can lead to the development of more practical and ergonomic BCIs.
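As background to the paradigm above: c-VEP BCIs commonly assign each target a circular shift of a binary m-sequence and classify fixation by correlating the recorded EEG epoch with the shifted templates. The following is a schematic sketch on simulated data; the 63-bit code, the 4-target layout, and the additive-noise "EEG" are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

def m_sequence(taps=(6, 5), nbits=6):
    """63-bit maximal-length sequence from a 6-bit Fibonacci LFSR
    (feedback polynomial x^6 + x^5 + 1, which is primitive)."""
    state = [1] * nbits
    seq = []
    for _ in range(2 ** nbits - 1):
        seq.append(state[-1])
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]
    return np.array(seq, dtype=float)

code = m_sequence()
n_targets = 4
shift = len(code) // n_targets
templates = [np.roll(code, k * shift) for k in range(n_targets)]

# Simulated single-trial "EEG": the attended target's template plus noise.
true_target = 2
trial = templates[true_target] + rng.normal(0, 0.5, len(code))

# Classify by maximum correlation with the circularly shifted templates.
scores = [np.corrcoef(trial, t)[0, 1] for t in templates]
predicted = int(np.argmax(scores))
print(predicted)
```

Because distinct circular shifts of an m-sequence are nearly uncorrelated, the correlation with the attended shift dominates, which is what makes the shift-based target coding in c-VEP paradigms work.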
Visual and vestibular components of motion sickness.
Eyeson-Annan, M; Peterken, C; Brown, B; Atchison, D
1996-10-01
The relative importance of visual and vestibular information in the etiology of motion sickness (MS) is not well understood, but these factors can be manipulated by inducing Coriolis and pseudo-Coriolis effects in experimental subjects. We hypothesized that visual and vestibular information are equivalent in producing MS. The experiments reported here aim, in part, to examine the relative influence of Coriolis and pseudo-Coriolis effects in inducing MS. We induced MS symptoms by combinations of whole-body rotation and tilt, and environment rotation and tilt, in 22 volunteer subjects. Subjects participated in all of the experiments with at least 2 days between each experiment to dissipate after-effects. We recorded MS signs and symptoms when only visual stimulation was applied, when only vestibular stimulation was applied, and when both visual and vestibular stimulation were applied under specific conditions of whole-body and environmental tilt. Visual stimuli produced more symptoms of MS than vestibular stimuli when only visual or vestibular stimuli were used (ANOVA: F = 7.94, df = 1, 21, p = 0.01), but there was no significant difference in MS production when combined visual and vestibular stimulation was used to produce the Coriolis effect or pseudo-Coriolis effect (ANOVA: F = 0.40, df = 1, 21, p = 0.53). This was further confirmed by examination of the order in which the symptoms occurred and the lack of a correlation between previous experience and visually induced MS. Visual information is more important than vestibular input in causing MS when these stimuli are presented in isolation. In conditions where both visual and vestibular information are present, cross-coupling appears to occur between the pseudo-Coriolis effect and the Coriolis effect, as these two conditions are not significantly different in producing MS symptoms.
Effects of auditory and visual modalities in recall of words.
Gadzella, B M; Whitehead, D A
1975-02-01
Ten experimental conditions were used to study the effects of auditory and visual (printed words, uncolored and colored pictures) modalities and their various combinations with college students. A recall paradigm was employed in which subjects responded in a written test. Analysis of the data showed that the auditory modality was superior to the visual (picture) modalities but was not significantly different from the visual (printed words) modality. Within the visual modalities, printed words were superior to colored pictures. Generally, recall for conditions with multiple modes of stimulus representation was significantly higher than for conditions with single modes. Multiple modalities, consisting of two or three modes, did not differ significantly from each other. It was concluded that any two modalities of the stimuli presented simultaneously were just as effective as three in recall of stimulus words.
Inhibition of Return in the Visual Field
Bao, Yan; Lei, Quan; Fang, Yuan; Tong, Yu; Schill, Kerstin; Pöppel, Ernst; Strasburger, Hans
2013-01-01
Inhibition of return (IOR) as an indicator of attentional control is characterized by an eccentricity effect, that is, the more peripheral visual field shows a stronger IOR magnitude relative to the perifoveal visual field. However, it could be argued that this eccentricity effect may not be an attention effect, but due to cortical magnification. To test this possibility, we examined this eccentricity effect in two conditions: the same-size condition in which identical stimuli were used at different eccentricities, and the size-scaling condition in which stimuli were scaled according to the cortical magnification factor (M-scaling), thus stimuli being larger at the more peripheral locations. The results showed that the magnitude of IOR was significantly stronger in the peripheral relative to the perifoveal visual field, and this eccentricity effect was independent of the manipulation of stimulus size (same-size or size-scaling). These results suggest a robust eccentricity effect of IOR which cannot be eliminated by M-scaling. Underlying neural mechanisms of the eccentricity effect of IOR are discussed with respect to both cortical and subcortical structures mediating attentional control in the perifoveal and peripheral visual field. PMID:23820946
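The size-scaling condition above enlarges peripheral stimuli so that their projection onto visual cortex roughly matches that of a perifoveal stimulus. A commonly used linear approximation of the cortical magnification factor is M(E) = M0 / (1 + E / E2); keeping the cortical extent constant then means multiplying stimulus size by (1 + E / E2). The exact formula and the E2 constant below are illustrative assumptions, not the values used in the study:

```python
def m_scaled_size(base_size_deg, eccentricity_deg, e2=2.5):
    """Stimulus size (deg) at eccentricity E with approximately the same
    cortical extent as base_size_deg at the fovea, under the common
    approximation M(E) = M0 / (1 + E / e2). e2 ~ 2.5 deg is assumed."""
    return base_size_deg * (1.0 + eccentricity_deg / e2)

# Under this approximation, a 1-deg stimulus would be enlarged to
# 1 * (1 + 15 / 2.5) = 7 deg at 15 deg eccentricity.
print(m_scaled_size(1.0, 15.0))  # 7.0
```

The logic of the control condition is that if the eccentricity effect survived this compensation, as reported, it cannot be attributed to cortical magnification alone.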
Affective and physiological correlates of the perception of unimodal and bimodal emotional stimuli.
Rosa, Pedro J; Oliveira, Jorge; Alghazzawi, Daniyal; Fardoun, Habib; Gamito, Pedro
2017-08-01
Despite the multisensory nature of perception, previous research on emotions has focused on unimodal emotional cues with visual stimuli. To the best of our knowledge, there is no evidence on the extent to which incongruent emotional cues from visual and auditory sensory channels affect pupil size. The aims were to investigate the effects of perceiving audiovisual emotional information on physiological and affective responses, and to determine the impact of mismatched cues in emotional perception on these physiological indexes. Pupil size, electrodermal activity and affective subjective responses were recorded while 30 participants were exposed to visual and auditory stimuli with varied emotional content in three different experimental conditions: pictures and sounds presented alone (unimodal), emotionally matched audio-visual stimuli (bimodal congruent) and emotionally mismatched audio-visual stimuli (bimodal incongruent). The data revealed no effect of emotional incongruence on physiological and affective responses. On the other hand, pupil size covaried with skin conductance response (SCR), but the subjective experience was partially dissociated from autonomic responses. Emotional stimuli are able to trigger physiological responses regardless of valence, sensory modality or level of emotional congruence.
Facilitation of listening comprehension by visual information under noisy listening condition
NASA Astrophysics Data System (ADS)
Kashimada, Chiho; Ito, Takumi; Ogita, Kazuki; Hasegawa, Hiroshi; Kamata, Kazuo; Ayama, Miyoshi
2009-02-01
Comprehension of a sentence under a wide range of delay conditions between auditory and visual stimuli was measured in an environment with low auditory clarity (-10 dB and -15 dB pink noise). Results showed that the image was helpful for comprehension of the noise-obscured voice stimulus when the delay between the auditory and visual stimuli was 4 frames (132 ms) or less, and was not helpful when the delay was 8 frames (264 ms) or more; in some cases of the largest delay (32 frames), the video image interfered with comprehension.
Bechara, Antoine; Tranel, Daniel; Damasio, Hanna; Adolphs, Ralph; Rockland, Charles; Damasio, Antonio R.
1995-08-01
A patient with selective bilateral damage to the amygdala did not acquire conditioned autonomic responses to visual or auditory stimuli but did acquire the declarative facts about which visual or auditory stimuli were paired with the unconditioned stimulus. By contrast, a patient with selective bilateral damage to the hippocampus failed to acquire the facts but did acquire the conditioning. Finally, a patient with bilateral damage to both amygdala and hippocampal formation acquired neither the conditioning nor the facts. These findings demonstrate a double dissociation of conditioning and declarative knowledge relative to the human amygdala and hippocampus.
Effects of set-size and lateral masking in visual search.
Põder, Endel
2004-01-01
In the present research, the roles of lateral masking and central processing limitations in visual search were studied. Two search conditions were used: (1) target differed from distractors by presence/absence of a simple feature; (2) target differed by relative position of the same components only. The number of displayed stimuli (set-size) and the distance between neighbouring stimuli were varied as independently as possible in order to measure the effect of both. The effect of distance between stimuli (lateral masking) was found to be similar in both conditions. The effect of set-size was much larger for relative position stimuli. The results support the view that perception of relative position of stimulus components is limited mainly by the capacity of central processing.
Attentional load modulates responses of human primary visual cortex to invisible stimuli.
Bahrami, Bahador; Lavie, Nilli; Rees, Geraint
2007-03-20
Visual neuroscience has long sought to determine the extent to which stimulus-evoked activity in visual cortex depends on attention and awareness. Some influential theories of consciousness maintain that the allocation of attention is restricted to conscious representations [1, 2]. However, in the load theory of attention [3], competition between task-relevant and task-irrelevant stimuli for limited-capacity attention does not depend on conscious perception of the irrelevant stimuli. The critical test is whether the level of attentional load in a relevant task would determine unconscious neural processing of invisible stimuli. Human participants were scanned with high-field fMRI while they performed a foveal task of low or high attentional load. Irrelevant, invisible monocular stimuli were simultaneously presented peripherally and were continuously suppressed by a flashing mask in the other eye [4]. Attentional load in the foveal task strongly modulated retinotopic activity evoked in primary visual cortex (V1) by the invisible stimuli. Contrary to traditional views [1, 2, 5, 6], we found that availability of attentional capacity determines neural representations related to unconscious processing of continuously suppressed stimuli in human primary visual cortex. Spillover of attention to cortical representations of invisible stimuli (under low load) cannot be a sufficient condition for their awareness.
Visual memories for perceived length are well preserved in older adults.
Norman, J Farley; Holmin, Jessica S; Bartholomew, Ashley N
2011-09-15
Three experiments compared younger (mean age 23.7 years) and older (mean age 72.1 years) observers' ability to visually discriminate line length using both explicit and implicit standard stimuli. In Experiment 1, the method of constant stimuli (with an explicit standard) was used to determine difference thresholds, whereas the method of single stimuli (where knowledge of the standard length was only implicit and learned from previous test stimuli) was used in Experiments 2 and 3. The study evaluated whether increases in age affect older observers' ability to learn, retain, and utilize effective implicit visual standards. Overall, the observers' length difference thresholds were 5.85% of the standard when the method of constant stimuli was used and improved to 4.39% of the standard for the method of single stimuli (a decrease of 25%). Both age groups performed similarly in all conditions. The results demonstrate that older observers retain the ability to create, remember, and utilize effective implicit standards from a series of visual stimuli. Copyright © 2011 Elsevier Ltd. All rights reserved.
Boosting pitch encoding with audiovisual interactions in congenital amusia.
Albouy, Philippe; Lévêque, Yohana; Hyde, Krista L; Bouchet, Patrick; Tillmann, Barbara; Caclin, Anne
2015-01-01
The combination of information across senses can enhance perception, as revealed for example by decreased reaction times or improved stimulus detection. Interestingly, these facilitatory effects have been shown to be maximal when responses to unisensory modalities are weak. The present study investigated whether audiovisual facilitation can be observed in congenital amusia, a music-specific disorder primarily ascribed to impairments of pitch processing. Amusic individuals and their matched controls performed two tasks. In Task 1, they were required to detect auditory, visual, or audiovisual stimuli as rapidly as possible. In Task 2, they were required to detect as accurately and as rapidly as possible a pitch change within an otherwise monotonic 5-tone sequence that was presented either only auditorily (A condition), or simultaneously with a temporally congruent, but otherwise uninformative visual stimulus (AV condition). Results of Task 1 showed that amusics exhibit typical auditory and visual detection, and typical audiovisual integration capacities: both amusics and controls exhibited shorter response times for audiovisual stimuli than for either auditory stimuli or visual stimuli. Results of Task 2 revealed that both groups benefited from simultaneous uninformative visual stimuli to detect pitch changes: accuracy was higher and response times shorter in the AV condition than in the A condition. The audiovisual improvements of response times were observed for different pitch interval sizes depending on the group. These results suggest that both typical listeners and amusic individuals can benefit from multisensory integration to improve their pitch processing abilities and that this benefit varies as a function of task difficulty. 
These findings constitute a first step toward exploiting multisensory paradigms to reduce pitch-related deficits in congenital amusia, notably by suggesting that audiovisual paradigms are effective within an appropriate range of unimodal performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
Liu, Baolin; Meng, Xianyao; Wang, Zhongning; Wu, Guangning
2011-11-14
In the present study, we used event-related potentials (ERPs) to examine whether semantic integration occurs for ecologically unrelated audio-visual information. Videos with synchronous audio-visual information were used as stimuli, where the auditory stimuli were sine-wave sounds with different sound levels, and the visual stimuli were simple geometric figures with different areas. In the experiment, participants were shown an initial display containing a single shape (drawn from a set of 6 shapes) with a fixed size (14 cm²) simultaneously with a 3500 Hz tone of a fixed intensity (80 dB). Following a short delay, another shape/tone pair was presented, and the relationship between the size of the shape and the intensity of the tone varied across trials: in the V+A- condition, a large shape was paired with a soft tone; in the V+A+ condition, a large shape was paired with a loud tone, and so forth. The ERP results revealed that an N400 effect was elicited in the VA- conditions (V+A- and V-A+) as compared to the VA+ conditions (V+A+ and V-A-). This showed that semantic integration can occur when simultaneous, ecologically unrelated auditory and visual stimuli enter the human brain. We considered that this semantic integration was based on semantic constraints of audio-visual information, which might come from long-term learned associations stored in the human brain and short-term experience of incoming information. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Probing the influence of unconscious fear-conditioned visual stimuli on eye movements.
Madipakkam, Apoorva Rajiv; Rothkirch, Marcus; Wilbertz, Gregor; Sterzer, Philipp
2016-11-01
Efficient threat detection from the environment is critical for survival. Accordingly, fear-conditioned stimuli receive prioritized processing and capture overt and covert attention. However, it is unknown whether eye movements are influenced by unconscious fear-conditioned stimuli. We performed a classical fear-conditioning procedure and subsequently recorded participants' eye movements while they were exposed to fear-conditioned stimuli that were rendered invisible using interocular suppression. Chance-level performance in a forced-choice task demonstrated unawareness of the stimuli. Differential skin conductance responses and a change in participants' fearfulness ratings of the stimuli indicated the effectiveness of conditioning. However, eye movements were not biased towards the fear-conditioned stimulus. Preliminary evidence suggests a relation between the strength of conditioning and the saccadic bias to the fear-conditioned stimulus. Our findings provide no strong evidence for a saccadic bias towards unconscious fear-conditioned stimuli, but tentative evidence suggests that such an effect may depend on the strength of the conditioned response. Copyright © 2016 Elsevier Inc. All rights reserved.
Stekelenburg, Jeroen J; Vroomen, Jean
2012-01-01
In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV - V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that this N1 suppression was greater for the spatially congruent stimuli. A very early audiovisual interaction was also found at 40-60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.
Sanfratello, Lori; Aine, Cheryl; Stephen, Julia
2018-05-25
Impairments in auditory and visual processing are common in schizophrenia (SP). In the unisensory realm, visual deficits are primarily noted for the dorsal visual stream. In addition, insensitivity to timing offsets between stimuli is widely reported for SP. The aim of the present study was to test, at the physiological level, differences in dorsal/ventral stream visual processing and timing sensitivity between SP and healthy controls (HC) using MEG and a simple auditory/visual task utilizing a variety of multisensory conditions. The paradigm included all combinations of synchronous/asynchronous and central/peripheral stimuli, yielding 4 task conditions. Both HC and SP groups showed activation in parietal areas (dorsal visual stream) during all multisensory conditions, with parietal areas showing decreased activation for SP relative to HC, and a significantly delayed peak of activation for SP in the intraparietal sulcus (IPS). We also observed a differential effect of stimulus synchrony on HC and SP parietal responses. Furthermore, a (negative) correlation was found between SP positive symptoms and activity in IPS. Taken together, our results provide evidence of impairment of the dorsal visual stream in SP during a multisensory task, along with an altered response to timing offsets between presented multisensory stimuli. Copyright © 2018 Elsevier B.V. All rights reserved.
Multisensory integration across the senses in young and old adults
Mahoney, Jeannette R.; Li, Po Ching Clara; Oh-Park, Mooyeon; Verghese, Joe; Holtzer, Roee
2011-01-01
Stimuli are processed concurrently and across multiple sensory inputs. Here we directly compared the effect of multisensory integration (MSI) on reaction time across three paired sensory inputs in eighteen young (M = 19.17 yrs) and eighteen old (M = 76.44 yrs) individuals. Participants were determined to be non-demented and without any medical or psychiatric conditions that would affect their performance. Participants responded to randomly presented unisensory (auditory, visual, somatosensory) stimuli and three paired sensory inputs consisting of auditory-somatosensory (AS), auditory-visual (AV), and visual-somatosensory (VS) stimuli. Results revealed that reaction time (RT) to all multisensory pairings was significantly faster than to the constituent unisensory conditions across age groups; findings that could not be accounted for by simple probability summation. Both young and old participants responded fastest to multisensory pairings containing somatosensory input. Compared to younger adults, older adults demonstrated a significantly greater RT benefit when processing concurrent VS information. In terms of co-activation, older adults demonstrated a significant increase in the magnitude of visual-somatosensory co-activation (i.e., multisensory integration), while younger adults demonstrated a significant increase in the magnitude of auditory-visual and auditory-somatosensory co-activation. This study provides first evidence in support of the facilitative effect of pairing somatosensory with visual stimuli in older adults. PMID:22024545
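The probability-summation check mentioned above is usually carried out with Miller's race model inequality: P(RT ≤ t | AV) ≤ P(RT ≤ t | A) + P(RT ≤ t | V), and violations indicate co-activation rather than a mere statistical race between channels. A minimal sketch on synthetic reaction times (the RT distributions and the time grid are illustrative assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic reaction times (ms); multisensory trials are made faster than
# either unisensory condition to mimic co-activation (illustrative data only).
rt_a  = rng.normal(320, 40, 500)   # auditory
rt_v  = rng.normal(340, 40, 500)   # visual
rt_av = rng.normal(260, 35, 500)   # audiovisual

def ecdf(rts, t):
    """Empirical cumulative probability P(RT <= t)."""
    return np.mean(rts <= t)

# Race model inequality: P(RT<=t | AV) <= P(RT<=t | A) + P(RT<=t | V).
ts = np.linspace(150, 450, 61)
violation = [ecdf(rt_av, t) - min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
             for t in ts]
max_violation = max(violation)
print(max_violation > 0)  # positive values violate the race model
```

A positive `max_violation` at any quantile is the kind of evidence reported above for rejecting simple probability summation in favor of genuine multisensory integration.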
Age-related differences in audiovisual interactions of semantically different stimuli.
Viggiano, Maria Pia; Giovannelli, Fabio; Giganti, Fiorenza; Rossi, Arianna; Metitieri, Tiziana; Rebai, Mohamed; Guerrini, Renzo; Cincotta, Massimo
2017-01-01
Converging results have shown that adults benefit from congruent multisensory stimulation in the identification of complex stimuli, whereas the developmental trajectory of the ability to integrate multisensory inputs in children is less well understood. In this study we explored the effects of audiovisual semantic congruency on the identification of visually presented stimuli belonging to different categories, using a cross-modal approach. Four groups of children ranging in age from 6 to 13 years and adults were administered an object identification task of visually presented pictures belonging to living and nonliving entities. Stimuli were presented in visual, congruent audiovisual, incongruent audiovisual, and noise conditions. Results showed that children under 12 years of age did not benefit from multisensory presentation in speeding up identification. In children the incongruent audiovisual condition had an interfering effect, especially for the identification of living things. These data suggest that the facilitating effect of audiovisual interaction on semantic processing undergoes developmental changes and that the consolidation of adult-like processing of multisensory stimuli begins in late childhood.
Truppa, Valentina; Carducci, Paola; Trapanese, Cinzia; Hanus, Daniel
2015-01-01
Most experimental paradigms to study visual cognition in humans and non-human species are based on discrimination tasks involving the choice between two or more visual stimuli. To this end, different types of stimuli and procedures for stimuli presentation are used, which highlights the necessity to compare data obtained with different methods. The present study assessed whether, and to what extent, capuchin monkeys’ ability to solve a size discrimination problem is influenced by the type of procedure used to present the problem. Capuchins’ ability to generalise knowledge across different tasks was also evaluated. We trained eight adult tufted capuchin monkeys to select the larger of two stimuli of the same shape and different sizes by using pairs of food items (Experiment 1), computer images (Experiment 1) and objects (Experiment 2). Our results indicated that monkeys achieved the learning criterion faster with food stimuli compared to both images and objects. They also required consistently fewer trials with objects than with images. Moreover, female capuchins had higher levels of acquisition accuracy with food stimuli than with images. Finally, capuchins did not immediately transfer the solution of the problem acquired in one task condition to the other conditions. Overall, these findings suggest that – even in relatively simple visual discrimination problems where a single perceptual dimension (i.e., size) has to be judged – learning speed strongly depends on the mode of presentation. PMID:25927363
Takano, Kouji; Komatsu, Tomoaki; Hata, Naoki; Nakajima, Yasoichi; Kansaku, Kenji
2009-08-01
The white/gray flicker matrix has been used as a visual stimulus for the so-called P300 brain-computer interface (BCI), but the white/gray flash stimuli might induce discomfort. In this study, we investigated the effectiveness of green/blue flicker matrices as visual stimuli. Ten able-bodied, non-trained subjects performed alphabet spelling (Japanese alphabet: Hiragana) using an 8 x 10 matrix with three types of intensification/rest flicker combinations (L, luminance; C, chromatic; LC, luminance and chromatic); both online and offline performances were evaluated. The accuracy rate under the online LC condition was 80.6%. Offline analysis showed that the LC condition was associated with significantly higher accuracy than the L or C condition (Tukey-Kramer, p < 0.05). No significant difference was observed between the L and C conditions. The LC condition, which used the green/blue flicker matrix, was associated with better performance in the P300 BCI. The green/blue chromatic flicker matrix can be an efficient tool for practical BCI applications.
Bao, Yan; Lei, Quan; Fang, Yuan; Tong, Yu; Schill, Kerstin; Pöppel, Ernst; Strasburger, Hans
2013-01-01
Inhibition of return (IOR) as an indicator of attentional control is characterized by an eccentricity effect, that is, the more peripheral visual field shows a stronger IOR magnitude relative to the perifoveal visual field. However, it could be argued that this eccentricity effect may not be an attention effect, but due to cortical magnification. To test this possibility, we examined this eccentricity effect in two conditions: the same-size condition in which identical stimuli were used at different eccentricities, and the size-scaling condition in which stimuli were scaled according to the cortical magnification factor (M-scaling), thus stimuli being larger at the more peripheral locations. The results showed that the magnitude of IOR was significantly stronger in the peripheral relative to the perifoveal visual field, and this eccentricity effect was independent of the manipulation of stimulus size (same-size or size-scaling). These results suggest a robust eccentricity effect of IOR which cannot be eliminated by M-scaling. Underlying neural mechanisms of the eccentricity effect of IOR are discussed with respect to both cortical and subcortical structures mediating attentional control in the perifoveal and peripheral visual field.
Audiovisual Rehabilitation in Hemianopia: A Model-Based Theoretical Investigation
Magosso, Elisa; Cuppini, Cristiano; Bertini, Caterina
2017-01-01
Hemianopic patients exhibit visual detection improvement in the blind field when audiovisual stimuli are given in spatiotemporal coincidence. Beyond this “online” multisensory improvement, there is evidence of long-lasting, “offline” effects induced by audiovisual training: patients show improved visual detection and orientation after they were trained to detect and saccade toward visual targets given in spatiotemporal proximity with auditory stimuli. These effects are ascribed to the Superior Colliculus (SC), which is spared in these patients and plays a pivotal role in audiovisual integration and oculomotor behavior. Recently, we developed a neural network model of audiovisual cortico-collicular loops, including interconnected areas representing the retina, striate and extrastriate visual cortices, auditory cortex, and SC. The network simulated a unilateral V1 lesion with possibly spared tissue and reproduced the “online” effects. Here, we extend the previous network to shed light on the circuits, plastic mechanisms, and synaptic reorganization that can mediate the training effects and functionally implement visual rehabilitation. The network is enriched by the oculomotor SC-brainstem route and Hebbian mechanisms of synaptic plasticity, and is used to test different training paradigms (audiovisual/visual stimulation in eye-movements/fixed-eyes conditions) on simulated patients. Results predict different training effects and associate them with synaptic changes in specific circuits. Thanks to SC multisensory enhancement, audiovisual training is able to effectively strengthen the retina-SC route, which in turn can foster reinforcement of the SC-brainstem route (this occurs only in the eye-movements condition) and reinforcement of the SC-extrastriate route (this occurs in the presence of spared V1 tissue, regardless of eye condition).
The retina-SC-brainstem circuit may mediate compensatory effects: the model assumes that reinforcement of this circuit can translate visual stimuli into short-latency saccades, possibly moving the stimuli into visual detection regions. The retina-SC-extrastriate circuit is related to restitutive effects: visual stimuli can directly elicit visual detection with no need for eye movements. Model predictions and assumptions are critically discussed in view of existing behavioral and neurophysiological data, forecasting that other oculomotor compensatory mechanisms, beyond short-latency saccades, are likely involved, and stimulating future experimental and theoretical investigations. PMID:29326578
Vibrotactile timing: Are vibrotactile judgements of duration affected by repetitive stimulation?
Jones, Luke A; Ogden, Ruth S
2016-01-01
Timing in the vibrotactile modality was explored. Previous research has shown that repetitive auditory stimulation (in the form of click-trains) and visual stimulation (in the form of flickers) can alter duration judgements in a manner consistent with a "speeding up" of an internal clock. In Experiments 1 and 2 we investigated whether repetitive vibrotactile stimulation in the form of vibration trains would also alter duration judgements of either vibrotactile stimuli or visual stimuli. Participants gave verbal estimates of the duration of vibrotactile and visual stimuli that were preceded either by five seconds of 5-Hz vibration trains, or, by a five-second period of no vibrotactile stimulation, the end of which was signalled by a single vibration pulse (control condition). The results showed that durations were overestimated in the vibrotactile train conditions relative to the control condition; however, the effects were not multiplicative (did not increase with increasing stimulus duration) and as such were not consistent with a speeding up of the internal clock, but rather with an additive attentional effect. An additional finding was that the slope of the vibrotactile psychometric (control condition) function was not significantly different from that of the visual (control condition) function, which replicates a finding from a previous cross-modal comparison of timing.
Filbrich, Lieve; Alamia, Andrea; Burns, Soline; Legrain, Valéry
2017-07-01
Despite their high relevance for defending the integrity of the body, crossmodal links between nociception, the neural system specifically coding potentially painful information, and vision are still poorly studied, especially the effects of nociception on visual perception. This study investigated whether, and in which time window, a nociceptive stimulus can attract attention to its location on the body, independently of voluntary control, to facilitate the processing of visual stimuli occurring in the same side of space as the limb on which the nociceptive stimulus was applied. In a temporal order judgment task based on an adaptive procedure, participants judged which of two visual stimuli, one presented next to each hand on either side of space, had been perceived first. Each pair of visual stimuli was preceded (by 200, 400, or 600 ms) by a nociceptive stimulus applied either unilaterally, on one single hand, or bilaterally, on both hands simultaneously. Results show that, as compared to the bilateral condition, participants' judgments were biased to the advantage of the visual stimuli that occurred in the same side of space as the hand on which a unilateral nociceptive stimulus was applied. This effect was present in a time window ranging from 200 to 600 ms, but importantly, the bias increased with decreasing time interval. These results suggest that nociceptive stimuli can affect the perceptual processing of spatially congruent visual inputs.
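Temporal order judgment studies like this one typically summarize the attentional bias as a shift in the point of subjective simultaneity (PSS): the SOA at which both visual stimuli are equally likely to be judged first. The sketch below estimates a PSS by linear interpolation of the psychometric function; the function name, sign convention, and interpolation method are illustrative assumptions, not details reported in the abstract.

```python
import numpy as np

def pss_from_toj(soas, p_first):
    """Estimate the point of subjective simultaneity (PSS).

    soas:    signed stimulus-onset asynchronies in ms (here, negative
             values mean the stimulus on the cued side led).
    p_first: proportion of trials on which that stimulus was judged
             to have appeared first, one value per SOA.

    The PSS is the SOA at which the psychometric function crosses 0.5,
    found by linear interpolation between the bracketing SOAs; a
    nonzero PSS quantifies a prior-entry bias, e.g. toward the hand
    that received the nociceptive stimulus.
    """
    soas = np.asarray(soas, dtype=float)
    p = np.asarray(p_first, dtype=float)
    order = np.argsort(soas)
    soas, p = soas[order], p[order]
    # Find the first interval where the function crosses 0.5.
    for i in range(len(p) - 1):
        lo, hi = p[i], p[i + 1]
        if (lo - 0.5) * (hi - 0.5) <= 0 and lo != hi:
            return soas[i] + (0.5 - lo) * (soas[i + 1] - soas[i]) / (hi - lo)
    raise ValueError("psychometric function never crosses 0.5")
```

In this framing, a PSS shifted away from 0 ms toward the uncued side means the cued-side stimulus needed a head start from its competitor to be perceived as simultaneous, i.e. the cued stimulus enjoyed prioritized processing.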
Neural responses to salient visual stimuli.
Morris, J S; Friston, K J; Dolan, R J
1997-01-01
The neural mechanisms involved in the selective processing of salient or behaviourally important stimuli are uncertain. We used an aversive conditioning paradigm in human volunteer subjects to manipulate the salience of visual stimuli (emotionally expressive faces) presented during positron emission tomography (PET) neuroimaging. Increases in salience, and conflicts between the innate and acquired value of the stimuli, produced augmented activation of the pulvinar nucleus of the right thalamus. Furthermore, this pulvinar activity correlated positively with responses in structures hypothesized to mediate value in the brain: the right amygdala and basal forebrain (including the cholinergic nucleus basalis of Meynert). The results provide evidence that the pulvinar nucleus of the thalamus plays a crucial modulatory role in selective visual processing, and that changes in perceptual salience are mediated by value-dependent plasticity in pulvinar responses. PMID:9178546
Pillai, Roshni; Yathiraj, Asha
2017-09-01
The study evaluated whether four different memory skills (memory score, sequencing score, memory span, & sequencing span) are processed differently through the auditory modality, the visual modality, and the combined modalities. The four memory skills were evaluated in 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality than through the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores or on the memory and sequencing spans. Good agreement was seen between the different modality conditions studied (auditory, visual, & auditory-visual) for the different memory skill measures (memory scores, sequencing scores, memory span, & sequencing span). Relatively lower agreement, measured using Bland-Altman plots, was noted only between the auditory and visual modalities and between the visual and auditory-visual modality conditions for the memory scores. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual, and combined modalities, and supports the view that children perform better on different memory skills through the auditory modality than through the visual modality.
Chen, Yi-Chuan; Spence, Charles
2018-04-30
We examined the time-courses and categorical specificity of the crossmodal semantic congruency effects elicited by naturalistic sounds and spoken words on the processing of visual pictures (Experiment 1) and printed words (Experiment 2). Auditory cues were presented at 7 different stimulus onset asynchronies (SOAs) with respect to the visual targets, and participants made speeded categorization judgments (living vs. nonliving). Three common effects were observed across 2 experiments: Both naturalistic sounds and spoken words induced a slowly emerging congruency effect when leading by 250 ms or more in the congruent compared with the incongruent condition, and a rapidly emerging inhibitory effect when leading by 250 ms or less in the incongruent condition as opposed to the noise condition. Only spoken words that did not match the visual targets elicited an additional inhibitory effect when leading by 100 ms or when presented simultaneously. Compared with nonlinguistic stimuli, the crossmodal congruency effects associated with linguistic stimuli occurred over a wider range of SOAs and occurred at a more specific level of the category hierarchy (i.e., the basic level) than was required by the task. A comprehensive framework is proposed to provide a dynamic view regarding how meaning is extracted during the processing of visual or auditory linguistic and nonlinguistic stimuli, therefore contributing to our understanding of multisensory semantic processing in humans.
Cortical Integration of Audio-Visual Information
Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A.
2013-01-01
We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multimodal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked. PMID:20709442
Visual evoked potential assessment of the effects of glaucoma on visual subsystems.
Greenstein, V C; Seliger, S; Zemon, V; Ritch, R
1998-06-01
The purpose of this study was to test the hypothesis that glaucoma leads to selective deficits in parallel pathways or channels. Sweep VEPs were obtained to isolated-check stimuli that were modulated sinusoidally either in isoluminant chromatic contrast or in positive and negative luminance contrast. Response functions were obtained from 14 control subjects, 15 patients with open-angle glaucoma (OAG), and seven glaucoma suspects. For all three groups of subjects we found characteristic differences between the VEP response functions to isoluminant chromatic contrast stimuli and those to luminance contrast stimuli. The isoluminant chromatic stimulus conditions appeared to favor activity of the P-pathway, whereas the luminance contrast stimuli at low depths of modulation favored M-pathway activity. VEP responses for patients with OAG were significantly reduced for both the chromatic contrast and luminance contrast conditions, whereas VEP responses for glaucoma suspects were significantly reduced only for the 15-Hz positive luminance contrast condition. Our results suggest that both M- and P-pathways are affected by glaucoma.
Evidence for arousal-biased competition in perceptual learning.
Lee, Tae-Ho; Itti, Laurent; Mather, Mara
2012-01-01
Arousal-biased competition theory predicts that arousal biases competition in favor of perceptually salient stimuli and against non-salient stimuli (Mather and Sutherland, 2011). The current study tested this hypothesis by having observers complete many trials in a visual search task in which the target either always was salient (a 55° tilted line among 80° distractors) or non-salient (a 55° tilted line among 50° distractors). Each participant completed one session in an emotional condition, in which visual search trials were preceded by negative arousing images, and one session in a non-emotional condition, in which the arousing images were replaced with neutral images (with session order counterbalanced). Test trials in which the target line had to be selected from among a set of lines with different tilts revealed that the emotional condition enhanced identification of the salient target line tilt but impaired identification of the non-salient target line tilt. Thus, arousal enhanced perceptual learning of salient stimuli but impaired perceptual learning of non-salient stimuli.
Visually cued motor synchronization: modulation of fMRI activation patterns by baseline condition.
Cerasa, Antonio; Hagberg, Gisela E; Bianciardi, Marta; Sabatini, Umberto
2005-01-03
A well-known issue in functional neuroimaging studies of motor synchronization is designing suitable control tasks able to discriminate between the brain structures involved in primary time-keeper functions and those related to other processes such as attentional effort. The aim of this work was to investigate how the predictability of stimulus onsets in the baseline condition modulates the activity in brain structures related to processes involved in time-keeper functions during the performance of a visually cued motor synchronization task (VM). The rationale behind this choice derives from the notion that varying stimulus predictability can vary the subject's attention and, consequently, the neural activity. For this purpose, baseline levels of BOLD activity were obtained from 12 subjects during a conventional-baseline condition: maintained fixation of the visual rhythmic stimuli presented in the VM task, and a random-baseline condition: maintained fixation of visual stimuli occurring randomly. fMRI analysis demonstrated that while brain areas with a documented role in basic time processing were detected independent of the baseline condition (right cerebellum, bilateral putamen, left thalamus, left superior temporal gyrus, left sensorimotor cortex, left dorsal premotor cortex, and supplementary motor area), the ventral premotor cortex, caudate nucleus, insula, and inferior frontal gyrus exhibited baseline-dependent activation. We conclude that maintained fixation of unpredictable visual stimuli can be employed to reduce or eliminate neural activity related to attentional components present in the synchronization task.
De Lillo, Carlo; Spinozzi, Giovanna; Truppa, Valentina; Naylor, Donna M
2005-05-01
Results obtained with preschool children (Homo sapiens) were compared with results previously obtained from capuchin monkeys (Cebus apella) in matching-to-sample tasks featuring hierarchical visual stimuli. In Experiment 1, monkeys, in contrast with children, showed an advantage in matching the stimuli on the basis of their local features. These results were replicated in a 2nd experiment in which control trials enabled the authors to rule out that children used spurious cues to solve the matching task. In a 3rd experiment featuring conditions in which the density of the stimuli was manipulated, monkeys' accuracy in the processing of the global shape of the stimuli was negatively affected by the separation of the local elements, whereas children's performance was robust across testing conditions. Children's response latencies revealed a global precedence in the 2nd and 3rd experiments. These results show differences in the processing of hierarchical stimuli by humans and monkeys that emerge early during childhood.
Is improved contrast sensitivity a natural consequence of visual training?
Levi, Aaron; Shaked, Danielle; Tadin, Duje; Huxlin, Krystel R.
2015-01-01
Many studies have shown that training and testing conditions modulate specificity of visual learning to trained stimuli and tasks. In visually impaired populations, generalizability of visual learning to untrained stimuli/tasks is almost always reported, with contrast sensitivity (CS) featuring prominently among these collaterally-improved functions. To understand factors underlying this difference, we measured CS for direction and orientation discrimination in the visual periphery of three groups of visually-intact subjects. Group 1 trained on an orientation discrimination task with static Gabors whose luminance contrast was decreased as performance improved. Group 2 trained on a global direction discrimination task using high-contrast random dot stimuli previously used to recover motion perception in cortically blind patients. Group 3 underwent no training. Both forms of training improved CS with some degree of specificity for basic attributes of the trained stimulus/task. Group 1's largest enhancement was in CS around the trained spatial/temporal frequencies; similarly, Group 2's largest improvements occurred in CS for discriminating moving and flickering stimuli. Group 3 saw no significant CS changes. These results indicate that CS improvements may be a natural consequence of multiple forms of visual training in visually intact humans, albeit with some specificity to the trained visual domain(s). PMID:26305736
Binocular coordination in response to stereoscopic stimuli
NASA Astrophysics Data System (ADS)
Liversedge, Simon P.; Holliman, Nicolas S.; Blythe, Hazel I.
2009-02-01
Humans actively explore their visual environment by moving their eyes. Precise coordination of the eyes during visual scanning underlies the experience of a unified perceptual representation and is important for the perception of depth. We report data from three psychological experiments investigating human binocular coordination during visual processing of stereoscopic stimuli. In the first experiment participants were required to read sentences that contained a stereoscopically presented target word. Half of the word was presented exclusively to one eye and half exclusively to the other eye. Eye movements were recorded and showed that saccadic targeting was uninfluenced by the stereoscopic presentation, strongly suggesting that complementary retinal stimuli are perceived as a single, unified input prior to saccade initiation. In a second eye movement experiment we presented words stereoscopically to measure Panum's Fusional Area for linguistic stimuli. In the final experiment we compared binocular coordination during saccades between simple dot stimuli under 2D, stereoscopic 3D and real 3D viewing conditions. Results showed that depth-appropriate vergence movements were made during saccades and fixations to real 3D stimuli, but only during fixations on stereoscopic 3D stimuli. 2D stimuli did not induce depth vergence movements. Together, these experiments indicate that stereoscopic visual stimuli are fused when they fall within Panum's Fusional Area, and that saccade metrics are computed on the basis of a unified percept. Also, there is sensitivity to non-foveal retinal disparity in real 3D stimuli, but not in stereoscopic 3D stimuli, and the system responsible for binocular coordination responds to this during saccades as well as fixations.
Liu, Ying; Hu, Huijing; Jones, Jeffery A; Guo, Zhiqiang; Li, Weifeng; Chen, Xi; Liu, Peng; Liu, Hanjun
2015-08-01
Speakers rapidly adjust their ongoing vocal productions to compensate for errors they hear in their auditory feedback. It is currently unclear what role attention plays in these vocal compensations. This event-related potential (ERP) study examined the influence of selective and divided attention on the vocal and cortical responses to pitch errors heard in auditory feedback regarding ongoing vocalisations. During the production of a sustained vowel, participants briefly heard their vocal pitch shifted up two semitones while they actively attended to auditory or visual events (selective attention), or both auditory and visual events (divided attention), or were not told to attend to either modality (control condition). The behavioral results showed that attending to the pitch perturbations elicited larger vocal compensations than attending to the visual stimuli. Moreover, ERPs were likewise sensitive to the attentional manipulations: P2 responses to pitch perturbations were larger when participants attended to the auditory stimuli compared to when they attended to the visual stimuli, and compared to when they were not explicitly told to attend to either the visual or auditory stimuli. By contrast, dividing attention between the auditory and visual modalities caused suppressed P2 responses relative to all the other conditions and caused enhanced N1 responses relative to the control condition. These findings provide strong evidence for the influence of attention on the mechanisms underlying the auditory-vocal integration in the processing of pitch feedback errors. In addition, selective attention and divided attention appear to modulate the neurobehavioral processing of pitch feedback errors in different ways.
Fixating at far distance shortens reaction time to peripheral visual stimuli at specific locations.
Kokubu, Masahiro; Ando, Soichi; Oda, Shingo
2018-01-18
The purpose of the present study was to examine whether the fixation distance in real three-dimensional space affects manual reaction time to peripheral visual stimuli. Light-emitting diodes were used for presenting a fixation point and four peripheral visual stimuli. The visual stimuli were located at a distance of 45 cm and at 25° in the left, right, upper, and lower directions from the sagittal axis including the fixation point. Near (30 cm), Middle (45 cm), Far (90 cm), and Very Far (300 cm) fixation distance conditions were used. When one of the four visual stimuli was randomly illuminated, the participants released a button as quickly as possible. Results showed that overall peripheral reaction time decreased as the fixation distance increased. The significant interaction between fixation distance and stimulus location indicated that the effect of fixation distance on reaction time was observed at the left, right, and upper locations but not at the lower location. These results suggest that fixating at far distance would contribute to faster reaction and that the effect is specific to locations in the peripheral visual field. The present findings are discussed in terms of viewer-centered representation, the focus of attention in depth, and visual field asymmetry related to neurological and psychological aspects. Copyright © 2017 Elsevier B.V. All rights reserved.
Left hemispheric advantage for numerical abilities in the bottlenose dolphin.
Kilian, Annette; von Fersen, Lorenzo; Güntürkün, Onur
2005-02-28
In a two-choice discrimination paradigm, a bottlenose dolphin discriminated relational dimensions between visual numerosity stimuli under monocular viewing conditions. After prior binocular acquisition of the task, two monocular test series with different number stimuli were conducted. In accordance with recent studies on visual lateralization in the bottlenose dolphin, our results revealed an overall advantage of the right visual field. Due to the complete decussation of the optic nerve fibers, this suggests a specialization of the left hemisphere for analysing relational features between stimuli as required in tests for numerical abilities. These processes are typically right hemisphere-based in other mammals (including humans) and birds. The present data provide further evidence for a general right visual field advantage in bottlenose dolphins for visual information processing. It is thus assumed that dolphins possess a unique functional architecture of their cerebral asymmetries. (c) 2004 Elsevier B.V. All rights reserved.
Kattoor, Joswin; Gizewski, Elke R.; Kotsis, Vassilios; Benson, Sven; Gramsch, Carolin; Theysohn, Nina; Maderwald, Stefan; Forsting, Michael; Schedlowski, Manfred; Elsenbruch, Sigrid
2013-01-01
Fear conditioning is relevant for elucidating the pathophysiology of anxiety, but may also be useful in the context of chronic pain syndromes which often overlap with anxiety. Thus far, no fear conditioning studies have employed aversive visceral stimuli from the lower gastrointestinal tract. Therefore, we implemented a fear conditioning paradigm to analyze the conditioned response to rectal pain stimuli using fMRI during associative learning, extinction and reinstatement. In N = 21 healthy humans, visual conditioned stimuli (CS+) were paired with painful rectal distensions as unconditioned stimuli (US), while different visual stimuli (CS−) were presented without US. During extinction, all CSs were presented without US, whereas during reinstatement, a single, unpaired US was presented. In region-of-interest analyses, conditioned anticipatory neural activation was assessed along with perceived CS-US contingency and CS unpleasantness. Fear conditioning resulted in significant contingency awareness and valence change, i.e., learned unpleasantness of a previously neutral stimulus. This was paralleled by anticipatory activation of the anterior cingulate cortex, the somatosensory cortex and precuneus (all during early acquisition) and the amygdala (late acquisition) in response to the CS+. During extinction, anticipatory activation of the dorsolateral prefrontal cortex to the CS− was observed. In the reinstatement phase, a tendency for parahippocampal activation was found. Fear conditioning with rectal pain stimuli is feasible and leads to learned unpleasantness of previously neutral stimuli. Within the brain, conditioned anticipatory activations are seen in core areas of the central fear network including the amygdala and the anterior cingulate cortex. During extinction, conditioned responses quickly disappear, and learning of new predictive cue properties is paralleled by prefrontal activation. 
A tendency for parahippocampal activation during reinstatement could indicate a reactivation of the old memory trace. Together, these findings contribute to our understanding of aversive visceral learning and memory processes relevant to the pathophysiology of chronic abdominal pain. PMID:23468832
Patt, Joseph M.; Stockton, Dara; Meikle, William G.; Sétamou, Mamoudou; Mafra-Neto, Agenor; Adamczyk, John J.
2014-01-01
Asian citrus psyllid (Diaphorina citri) transmits Huanglongbing, a devastating disease that threatens citrus trees worldwide. A better understanding of the psyllid’s host-plant selection process may lead to the development of more efficient means of monitoring it and predicting its movements. Since behavioral adaptations, such as associative learning, may facilitate recognition of suitable host-plants, we examined whether adult D. citri could be conditioned to visual and chemosensory stimuli from host and non-host-plant sources. Response was measured as the frequency of salivary sheaths, the residue of psyllid probing activity, in a line of emulsified wax on the surface of a test arena. The psyllids displayed both appetitive and aversive conditioning to two different chemosensory stimuli. They could also be conditioned to recognize a blue-colored probing substrate and their response to neutral visual cues was enhanced by chemosensory stimuli. Conditioned psyllids were sensitive to the proportion of chemosensory components present in binary mixtures. Naïve psyllids displayed strong to moderate innate biases to several of the test compounds. While innate responses are probably the psyllid’s primary behavioral mechanism for selecting host-plants, conditioning may enhance its ability to select host-plants during seasonal transitions and dispersal. PMID:26462949
Neural mechanism for sensing fast motion in dim light.
Li, Ran; Wang, Yi
2013-11-07
Luminance is a fundamental property of visual scenes. A population of neurons in primary visual cortex (V1) is sensitive to uniform luminance. In natural vision, however, the retinal image often changes rapidly. Consequently, the luminance signals visual cells receive are transiently varying. How V1 neurons respond to such luminance changes is unknown. By applying large static uniform stimuli or grating stimuli alternating at 25 Hz that resemble the rapid luminance changes in the environment, we show that approximately 40% of V1 cells responded to rapid luminance changes of uniform stimuli. Most of them strongly preferred luminance decrements. Importantly, when tested with drifting gratings, the preferred speeds of these cells were significantly higher than cells responsive to static grating stimuli but not to uniform stimuli. This responsiveness can be accounted for by the preferences for low spatial frequencies and high temporal frequencies. These luminance-sensitive cells subserve the detection of fast motion under the conditions of dim illumination.
Peripheral visual response time and visual display layout
NASA Technical Reports Server (NTRS)
Haines, R. F.
1974-01-01
Experiments were performed on a group of 42 subjects in a study of their peripheral visual response time to visual signals under positive acceleration, during prolonged bedrest, at passive 70 deg headup body lift, under exposures to high air temperatures and high luminance levels, and under normal stress-free laboratory conditions. Diagrams are plotted for mean response times to white, red, yellow, green, and blue stimuli under different conditions.
Startle Auditory Stimuli Enhance the Performance of Fast Dynamic Contractions
Fernandez-Del-Olmo, Miguel; Río-Rodríguez, Dan; Iglesias-Soler, Eliseo; Acero, Rafael M.
2014-01-01
Fast reaction times and the ability to develop a high rate of force development (RFD) are crucial for sports performance. However, little is known regarding the relationship between these parameters. The aim of this study was to investigate the effects of auditory stimuli of different intensities on the performance of a concentric bench-press exercise. Concentric bench-presses were performed by thirteen trained subjects in response to three different conditions: a visual stimulus (VS); a visual stimulus accompanied by a non-startle auditory stimulus (AS); and a visual stimulus accompanied by a startle auditory stimulus (SS). Peak RFD, peak velocity, onset movement, movement duration and electromyography from pectoralis and triceps muscles were recorded. The SS condition induced an increase in the RFD and peak velocity and a reduction in the movement onset and duration, in comparison with the VS and AS conditions. The onset activation of the pectoralis and triceps muscles was shorter for the SS than for the VS and AS conditions. These findings point to specific enhancement effects of loud auditory stimulation on the rate of force development. This is of relevance since startle stimuli could be used to explore neural adaptations to resistance training. PMID:24489967
Effect of higher frequency on the classification of steady-state visual evoked potentials
NASA Astrophysics Data System (ADS)
Won, Dong-Ok; Hwang, Han-Jeong; Dähne, Sven; Müller, Klaus-Robert; Lee, Seong-Whan
2016-02-01
Objective. Most existing brain-computer interface (BCI) designs based on steady-state visual evoked potentials (SSVEPs) primarily use low frequency visual stimuli (e.g., <20 Hz) to elicit relatively high SSVEP amplitudes. While low frequency stimuli could evoke photosensitivity-based epileptic seizures, high frequency stimuli generally show less visual fatigue and no stimulus-related seizures. The fundamental objective of this study was to investigate the effect of stimulation frequency and duty-cycle on the usability of an SSVEP-based BCI system. Approach. We developed an SSVEP-based BCI speller using multiple LEDs flickering with low frequencies (6-14.9 Hz) with a duty-cycle of 50%, or higher frequencies (26-34.7 Hz) with duty-cycles of 50%, 60%, and 70%. The four different experimental conditions were tested with 26 subjects in order to investigate the impact of stimulation frequency and duty-cycle on performance and visual fatigue, and evaluated with a questionnaire survey. Resting state alpha powers were utilized to interpret our results from the neurophysiological point of view. Main results. The stimulation method employing higher frequencies not only showed less visual fatigue, but it also showed higher and more stable classification performance compared to that employing relatively lower frequencies. Different duty-cycles in the higher frequency stimulation conditions did not significantly affect visual fatigue, but a duty-cycle of 50% was a better choice with respect to performance. The performance of the higher frequency stimulation method was also less susceptible to resting state alpha powers, while that of the lower frequency stimulation method was negatively correlated with alpha powers. Significance. These results suggest that the use of higher frequency visual stimuli is more beneficial for performance improvement and stability as time passes when developing practical SSVEP-based BCI applications.
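The record above classifies SSVEP responses by the frequency at which an LED flickers. A minimal sketch of the underlying idea, on simulated data, is to pick the candidate flicker frequency whose FFT bin carries the most power; this is an illustrative toy, not the paper's method (practical SSVEP spellers typically use CCA or similar multichannel approaches), and the signal, sampling rate, and frequencies below are assumptions:

```python
import numpy as np

def ssvep_classify(eeg, fs, candidate_freqs):
    """Pick the candidate flicker frequency with the largest spectral power.

    A minimal single-channel power-spectrum classifier, for illustration only.
    """
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    powers = []
    for f in candidate_freqs:
        idx = np.argmin(np.abs(freqs - f))   # nearest FFT bin to f
        powers.append(spectrum[idx])
    return candidate_freqs[int(np.argmax(powers))]

# Simulate 2 s of a 30 Hz SSVEP response buried in noise (fs = 250 Hz).
fs = 250
t = np.arange(0, 2, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 30 * t) + 0.5 * rng.standard_normal(t.size)

print(ssvep_classify(eeg, fs, [26.0, 30.0, 34.5]))  # → 30.0
```

With a 2 s window the FFT resolution is 0.5 Hz, enough to separate the higher-frequency stimuli (26-34.7 Hz) the study compares.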
Emotional conditioning to masked stimuli and modulation of visuospatial attention.
Beaver, John D; Mogg, Karin; Bradley, Brendan P
2005-03-01
Two studies investigated the effects of conditioning to masked stimuli on visuospatial attention. During the conditioning phase, masked snakes and spiders were paired with a burst of white noise, or paired with an innocuous tone, in the conditioned stimulus (CS)+ and CS- conditions, respectively. Attentional allocation to the CSs was then assessed with a visual probe task, in which the CSs were presented unmasked (Experiment 1) or both unmasked and masked (Experiment 2), together with fear-irrelevant control stimuli (flowers and mushrooms). In Experiment 1, participants preferentially allocated attention to CS+ relative to control stimuli. Experiment 2 suggested that this attentional bias depended on the perceived aversiveness of the unconditioned stimulus and did not require conscious recognition of the CSs during both acquisition and expression. Copyright 2005 APA, all rights reserved.
Parkington, Karisa B; Clements, Rebecca J; Landry, Oriane; Chouinard, Philippe A
2015-10-01
We examined how performance on an associative learning task changes in a sample of undergraduate students as a function of their autism-spectrum quotient (AQ) score. The participants, without any prior knowledge of the Japanese language, learned to associate hiragana characters with button responses. In the novel condition, 50 participants learned visual-motor associations without any prior exposure to the stimuli's visual attributes. In the familiar condition, a different set of 50 participants completed a session in which they first became familiar with the stimuli's visual appearance prior to completing the visual-motor association learning task. Participants with higher AQ scores had a clear advantage in the novel condition; the amount of training required to reach learning criterion correlated negatively with AQ. In contrast, participants with lower AQ scores had a clear advantage in the familiar condition; the amount of training required to reach learning criterion correlated positively with AQ. An examination of how each of the AQ subscales correlated with these learning patterns revealed that abilities in visual discrimination-which is known to depend on the visual ventral-stream system-may have afforded an advantage in the novel condition for the participants with the higher AQ scores, whereas abilities in attention switching-which are known to require mechanisms in the prefrontal cortex-may have afforded an advantage in the familiar condition for the participants with the lower AQ scores.
A Gaze Independent Brain-Computer Interface Based on Visual Stimulation through Closed Eyelids
NASA Astrophysics Data System (ADS)
Hwang, Han-Jeong; Ferreria, Valeria Y.; Ulrich, Daniel; Kilic, Tayfun; Chatziliadis, Xenofon; Blankertz, Benjamin; Treder, Matthias
2015-10-01
A classical brain-computer interface (BCI) based on visual event-related potentials (ERPs) is of limited application value for paralyzed patients with severe oculomotor impairments. In this study, we introduce a novel gaze independent BCI paradigm that can be potentially used for such end-users because visual stimuli are administered on closed eyelids. The paradigm involved verbally presented questions with 3 possible answers. Online BCI experiments were conducted with twelve healthy subjects, where they selected one option by attending to one of three different visual stimuli. It was confirmed that typical cognitive ERPs can be clearly modulated by attention to a target stimulus in an eyes-closed and gaze independent condition, and further classified with high accuracy during online operation (74.58% ± 17.85 s.d.; chance level 33.33%), demonstrating the effectiveness of the proposed novel visual ERP paradigm. Also, stimulus-specific eye movements observed during stimulation were verified as reflex responses to light stimuli, and they did not contribute to classification. To the best of our knowledge, this study is the first to show the possibility of using a gaze independent visual ERP paradigm in an eyes-closed condition, thereby providing another communication option for severely locked-in patients suffering from complex ocular dysfunctions.
Shades of yellow: interactive effects of visual and odour cues in a pest beetle
Stevenson, Philip C.; Belmain, Steven R.
2016-01-01
Background: The visual ecology of pest insects is poorly studied compared to the role of odour cues in determining their behaviour. Furthermore, the combined effects of both odour and vision on insect orientation are frequently ignored, but could impact behavioural responses. Methods: A locomotion compensator was used to evaluate use of different visual stimuli by a major coleopteran pest of stored grains (Sitophilus zeamais), with and without the presence of host odours (known to be attractive to this species), in an open-loop setup. Results: Some visual stimuli—in particular, one shade of yellow, solid black and high-contrast black-against-white stimuli—elicited positive orientation behaviour from the beetles in the absence of odour stimuli. When host odours were also present, at 90° to the source of the visual stimulus, the beetles presented with yellow and vertical black-on-white grating patterns changed their walking course and typically adopted a path intermediate between the two stimuli. The beetles presented with a solid black-on-white target continued to orient more strongly towards the visual than the odour stimulus. Discussion: Visual stimuli can strongly influence orientation behaviour, even in species where use of visual cues is sometimes assumed to be unimportant, while the outcomes from exposure to multimodal stimuli are unpredictable and need to be determined under differing conditions. The importance of the two modalities of stimulus (visual and olfactory) in food location is likely to depend upon relative stimulus intensity and motivational state of the insect. PMID:27478707
Charbonneau, Geneviève; Véronneau, Marie; Boudrias-Fournier, Colin; Lepore, Franco; Collignon, Olivier
2013-10-28
The relative reliability of separate sensory estimates influences the way they are merged into a unified percept. We investigated how eccentricity-related changes in reliability of auditory and visual stimuli influence their integration across the entire frontal space. First, we surprisingly found that despite a strong decrease in auditory and visual unisensory localization abilities in periphery, the redundancy gain resulting from the congruent presentation of audio-visual targets was not affected by stimuli eccentricity. This result therefore contrasts with the common prediction that a reduction in sensory reliability necessarily induces an enhanced integrative gain. Second, we demonstrate that the visual capture of sounds observed with spatially incongruent audio-visual targets (ventriloquist effect) steadily decreases with eccentricity, paralleling a lowering of the relative reliability of unimodal visual over unimodal auditory stimuli in periphery. Moreover, at all eccentricities, the ventriloquist effect positively correlated with a weighted combination of the spatial resolution obtained in unisensory conditions. These findings support and extend the view that the localization of audio-visual stimuli relies on an optimal combination of auditory and visual information according to their respective spatial reliability. All together, these results evidence that the external spatial coordinates of multisensory events relative to an observer's body (e.g., eyes' or head's position) influence how this information is merged, and therefore determine the perceptual outcome.
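The record above frames audio-visual localization as an optimal, reliability-weighted combination of the two cues. The standard maximum-likelihood form of that idea (weights proportional to inverse variance) can be sketched as follows; the numbers are invented for illustration and this is the textbook model, not the authors' specific analysis:

```python
def fuse(visual_loc, visual_sigma, auditory_loc, auditory_sigma):
    """Reliability-weighted (maximum-likelihood) fusion of two spatial cues.

    Each cue is weighted by its inverse variance, so the less reliable
    cue contributes proportionally less to the fused estimate.
    """
    wv = 1.0 / visual_sigma ** 2
    wa = 1.0 / auditory_sigma ** 2
    return (wv * visual_loc + wa * auditory_loc) / (wv + wa)

# Centrally, vision is precise (small sigma): the fused location hugs the
# visual cue, producing a strong "ventriloquist" pull on the sound.
print(fuse(0.0, 1.0, 10.0, 4.0))   # → ~0.59
# In periphery visual reliability drops toward auditory levels, so the
# visual capture of the sound weakens, as the study reports.
print(fuse(0.0, 4.0, 10.0, 4.0))   # → 5.0
```

The model predicts exactly the pattern described: the ventriloquist effect shrinks as the relative reliability of vision over audition falls with eccentricity.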
2011-01-01
Background Anecdotal reports and a few scientific publications suggest that flyovers of helicopters at low altitude may elicit fear- or anxiety-related behavioral reactions in grazing feral and farm animals. We investigated the behavioral and physiological stress reactions of five individually housed dairy goats to different acoustic and visual stimuli from helicopters and to combinations of these stimuli under controlled environmental (indoor) conditions. The visual stimuli were helicopter animations projected on a large screen in front of the enclosures of the goats. Acoustic and visual stimuli of a tractor were also presented. On the final day of the study the goats were exposed to two flyovers (altitude 50 m and 75 m) of a Chinook helicopter while grazing in a pasture. Salivary cortisol, behavior, and heart rate of the goats were registered before, during and after stimulus presentations. Results The goats reacted alertly to the visual and/or acoustic stimuli presented in their room. They raised their heads and turned their ears forward in the direction of the stimuli. There was no statistically reliable rise in the goats' average movement velocity within their enclosure and no increase in the duration of movement during stimulus presentation. There was also no increase in heart rate or salivary cortisol concentration during the indoor test sessions. Surprisingly, no physiological and behavioral stress responses were observed during the flyover of a Chinook at 50 m, which produced a peak noise of 110 dB. Conclusions We conclude that the behavior and physiology of goats are unaffected by brief episodes of intense, adverse visual and acoustic stimulation such as the sight and noise of overflying helicopters. The absence of a physiological stress response and of elevated emotional reactivity of goats subjected to helicopter stimuli is discussed in relation to the design and testing schedule of this study. PMID:21496239
Aging and goal-directed emotional attention: distraction reverses emotional biases.
Knight, Marisa; Seymour, Travis L; Gaunt, Joshua T; Baker, Christopher; Nesmith, Kathryn; Mather, Mara
2007-11-01
Previous findings reveal that older adults favor positive over negative stimuli in both memory and attention (for a review, see Mather & Carstensen, 2005). This study used eye tracking to investigate the role of cognitive control in older adults' selective visual attention. Younger and older adults viewed emotional-neutral and emotional-emotional pairs of faces and pictures while their gaze patterns were recorded under full or divided attention conditions. Replicating previous eye-tracking findings, older adults allocated less of their visual attention to negative stimuli in negative-neutral stimulus pairings in the full attention condition than younger adults did. However, as predicted by a cognitive-control-based account of the positivity effect in older adults' information processing tendencies (Mather & Knight, 2005), older adults' tendency to avoid negative stimuli was reversed in the divided attention condition. Compared with younger adults, older adults' limited attentional resources were more likely to be drawn to negative stimuli when they were distracted. These findings indicate that emotional goals can have unintended consequences when cognitive control mechanisms are not fully available.
1993-08-01
presented emotional stimuli than for subliminally presented neutral stimuli. Emotional stimuli consisted of sexually charged photographs, and the neutral...behavior. In addition to research using visual stimuli, some 13 studies have been conducted using subliminal (masked by 40 dB white noise) auditory ...deactivating suggestions masked by a 40-dB white noise signal. For the deactivating subliminal auditory messages, suggestions of heaviness and warmth
Alderson, R Matt; Kasper, Lisa J; Patros, Connor H G; Hudec, Kristen L; Tarle, Stephanie J; Lea, Sarah E
2015-01-01
The episodic buffer component of working memory was examined in children with attention deficit/hyperactivity disorder (ADHD) and typically developing peers (TD). Thirty-two children (ADHD = 16, TD = 16) completed three versions of a phonological working memory task that varied with regard to stimulus presentation modality (auditory, visual, or dual auditory and visual), as well as a visuospatial task. Children with ADHD experienced the largest magnitude working memory deficits when phonological stimuli were presented via a unimodal, auditory format. Their performance improved during visual and dual modality conditions but remained significantly below the performance of children in the TD group. In contrast, the TD group did not exhibit performance differences between the auditory- and visual-phonological conditions but recalled significantly more stimuli during the dual-phonological condition. Furthermore, relative to TD children, children with ADHD recalled disproportionately fewer phonological stimuli as set sizes increased, regardless of presentation modality. Finally, an examination of working memory components indicated that the largest magnitude between-group difference was associated with the central executive. Collectively, these findings suggest that ADHD-related working memory deficits reflect a combination of impaired central executive and phonological storage/rehearsal processes, as well as an impaired ability to benefit from bound multimodal information processed by the episodic buffer.
Coordinates of Human Visual and Inertial Heading Perception.
Crane, Benjamin Thomas
2015-01-01
Heading estimation involves both inertial and visual cues. Inertial motion is sensed by the labyrinth, somatic sensation by the body, and optic flow by the retina. Because the eye and head are mobile these stimuli are sensed relative to different reference frames and it remains unclear whether perception occurs in a common reference frame. Recent neurophysiologic evidence has suggested the reference frames remain separate even at higher levels of processing but has not addressed the resulting perception. Seven human subjects experienced a 2 s, 16 cm/s translation and/or a visual stimulus corresponding with this translation. For each condition 72 stimuli (360° in 5° increments) were delivered in random order. After each stimulus the subject identified the perceived heading using a mechanical dial. Some trial blocks included interleaved conditions in which the influence of ±28° of gaze and/or head position were examined. The observations were fit using a two degree-of-freedom population vector decoder (PVD) model which considered the relative sensitivity to lateral motion and coordinate system offset. For visual stimuli gaze shifts caused shifts in perceived heading estimates in the direction opposite the gaze shift in all subjects. These perceptual shifts averaged 13 ± 2° for eye only gaze shifts and 17 ± 2° for eye-head gaze shifts. This finding indicates visual headings are biased towards retinal coordinates. Similar gaze and head direction shifts prior to inertial headings had no significant influence on heading direction. Thus inertial headings are perceived in body-centered coordinates. Combined visual and inertial stimuli yielded intermediate results.
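The qualitative result above (visual headings shift opposite a gaze shift, inertial headings do not) can be captured by a toy linear reference-frame model; this is a deliberately simplified illustration, not the authors' two degree-of-freedom PVD fit, and the weight below is merely back-computed from the reported averages:

```python
def perceived_heading(true_heading_deg, gaze_shift_deg, retinal_weight):
    """Toy linear model: a gaze shift pulls the perceived heading in the
    opposite direction by a fraction of the shift (the retinal weight).

    retinal_weight ≈ 0.46 reproduces the ~13° shift for a 28° gaze shift
    reported for visual headings; 0 corresponds to the body-centered
    inertial headings. Illustrative only, not the authors' PVD model.
    """
    return (true_heading_deg - retinal_weight * gaze_shift_deg) % 360

# Visual heading: a +28° gaze shift biases perception by ~13° the other way.
print(perceived_heading(90, 28, 13 / 28))   # → 77.0
# Inertial heading: unaffected by gaze (retinal weight 0).
print(perceived_heading(90, 28, 0.0))       # → 90.0
```

A retinal weight near 0.5 for vision and near 0 for inertial cues is one compact way to state the paper's conclusion that the two modalities are read out in different coordinates.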
Demonstrating the Potential for Dynamic Auditory Stimulation to Contribute to Motion Sickness
Keshavarz, Behrang; Hettinger, Lawrence J.; Kennedy, Robert S.; Campos, Jennifer L.
2014-01-01
Auditory cues can create the illusion of self-motion (vection) in the absence of visual or physical stimulation. The present study aimed to determine whether auditory cues alone can also elicit motion sickness and how auditory cues contribute to motion sickness when added to visual motion stimuli. Twenty participants were seated in front of a curved projection display and were exposed to a virtual scene that constantly rotated around the participant's vertical axis. The virtual scene contained either visual-only, auditory-only, or a combination of corresponding visual and auditory cues. All participants performed all three conditions in a counterbalanced order. Participants tilted their heads alternately towards the right or left shoulder in all conditions during stimulus exposure in order to create pseudo-Coriolis effects and to maximize the likelihood for motion sickness. Measurements of motion sickness (onset, severity), vection (latency, strength, duration), and postural steadiness (center of pressure) were recorded. Results showed that adding auditory cues to the visual stimuli did not, on average, affect motion sickness and postural steadiness, but it did reduce vection onset times and increased vection strength compared to pure visual or pure auditory stimulation. Eighteen of the 20 participants reported at least slight motion sickness in the two conditions including visual stimuli. More interestingly, six participants also reported slight motion sickness during pure auditory stimulation and two of the six participants stopped the pure auditory test session due to motion sickness. The present study is the first to demonstrate that motion sickness may be caused by pure auditory stimulation, which we refer to as “auditorily induced motion sickness”. PMID:24983752
Emotion based attentional priority for storage in visual short-term memory.
Simione, Luca; Calabrese, Lucia; Marucci, Francesco S; Belardinelli, Marta Olivetti; Raffone, Antonino; Maratos, Frances A
2014-01-01
A plethora of research demonstrates that the processing of emotional faces is prioritised over non-emotive stimuli when cognitive resources are limited (this is known as 'emotional superiority'). However, there is debate as to whether competition for processing resources results in emotional superiority per se, or more specifically, threat superiority. Therefore, to investigate prioritisation of emotional stimuli for storage in visual short-term memory (VSTM), we devised an original VSTM report procedure using schematic (angry, happy, neutral) faces in which processing competition was manipulated. In Experiment 1, display exposure time was manipulated to create competition between stimuli. Participants (n = 20) had to recall a probed stimulus from a set size of four under high (150 ms array exposure duration) and low (400 ms array exposure duration) perceptual processing competition. For the high competition condition (i.e. 150 ms exposure), results revealed an emotional superiority effect per se. In Experiment 2 (n = 20), we increased competition by manipulating set size (three versus five stimuli), whilst maintaining a constrained array exposure duration of 150 ms. Here, for the five-stimulus set size (i.e. maximal competition) only threat superiority emerged. These findings demonstrate attentional prioritisation for storage in VSTM for emotional faces. We argue that task demands modulated the availability of processing resources and consequently the relative magnitude of the emotional/threat superiority effect, with only threatening stimuli prioritised for storage in VSTM under more demanding processing conditions. Our results are discussed in light of models and theories of visual selection, and not only combine the two strands of research (i.e. visual selection and emotion), but highlight a critical factor in the processing of emotional stimuli is availability of processing resources, which is further constrained by task demands.
Shock-like haemodynamic responses induced in the primary visual cortex by moving visual stimuli
Robinson, P. A.
2016-01-01
It is shown that recently discovered haemodynamic waves can form shock-like fronts when driven by stimuli that excite the cortex in a patch that moves faster than the haemodynamic wave velocity. If stimuli are chosen in order to induce shock-like behaviour, the resulting blood oxygen level-dependent (BOLD) response is enhanced, thereby improving the signal to noise ratio of measurements made with functional magnetic resonance imaging. A spatio-temporal haemodynamic model is extended to calculate the BOLD response and determine the main properties of waves induced by moving stimuli. From this, the optimal conditions for stimulating shock-like responses are determined, and ways of inducing these responses in experiments are demonstrated in a pilot study. PMID:27974572
Matching voice and face identity from static images.
Mavica, Lauren W; Barenholtz, Elan
2013-04-01
Previous research has suggested that people are unable to correctly choose which unfamiliar voice and static image of a face belong to the same person. Here, we present evidence that people can perform this task with greater than chance accuracy. In Experiment 1, participants saw photographs of two same-gender models while simultaneously listening to a voice recording of one of the models pictured in the photographs, and chose which of the two faces they thought belonged to the same model as the recorded voice. We included three conditions: (a) the visual stimuli were frontal headshots (including the neck and shoulders) and the auditory stimuli were recordings of spoken sentences; (b) the visual stimuli contained only cropped faces and the auditory stimuli were full sentences; (c) we used the same pictures as Condition 1 but the auditory stimuli were recordings of a single word. In Experiment 2, participants performed the same task as in Condition 1 of Experiment 1 but with the stimuli presented in sequence. Participants also rated the models' faces and voices along multiple "physical" dimensions (e.g., weight) or "personality" dimensions (e.g., extroversion); the degree of agreement between the ratings for each model's face and voice was compared to performance for that model in the matching task. In all three conditions, we found that participants chose, at better than chance levels, which faces and voices belonged to the same person. Performance in the matching task was not correlated with the degree of agreement on any of the rated dimensions.
The dynamic-stimulus advantage of visual symmetry perception.
Niimi, Ryosuke; Watanabe, Katsumi; Yokosawa, Kazuhiko
2008-09-01
It has been speculated that visual symmetry perception from dynamic stimuli involves mechanisms different from those for static stimuli. However, previous studies found no evidence that dynamic stimuli lead to active temporal processing and improve symmetry detection. In this study, four psychophysical experiments investigated temporal processing in symmetry perception using both dynamic and static stimulus presentations of dot patterns. In Experiment 1, rapid successive presentations of symmetric patterns (e.g., 16 patterns per 853 ms) produced more accurate discrimination of orientations of symmetry axes than static stimuli (single pattern presented through 853 ms). In Experiments 2-4, we confirmed that the dynamic-stimulus advantage depended upon presentation of a large number of unique patterns within a brief period (853 ms) in the dynamic conditions. Evidently, human vision takes advantage of temporal processing for symmetry perception from dynamic stimuli.
Magnetic stimulation of visual cortex impairs perceptual learning.
Baldassarre, Antonello; Capotosto, Paolo; Committeri, Giorgia; Corbetta, Maurizio
2016-12-01
The ability to learn and process visual stimuli more efficiently is important for survival. Previous neuroimaging studies have shown that perceptual learning on a shape identification task differentially modulates activity in both frontal-parietal cortical regions and visual cortex (Sigman et al., 2005; Lewis et al., 2009). Specifically, fronto-parietal regions (i.e., the posterior intraparietal sulcus, pIPS) became less activated for trained than for untrained stimuli, while visual regions (i.e., V2d/V3 and LO) exhibited higher activation for familiar shapes. Here, after intensive training, we employed transcranial magnetic stimulation over the visual occipital and parietal regions previously shown to be modulated, to investigate their causal role in learning the shape identification task. We report that interference with V2d/V3 and LO increased reaction times to learned stimuli as compared to pIPS and a sham control condition. Moreover, the impairments observed after stimulation over the two visual regions were positively correlated. These results strongly support a causal role of the visual network in perceptual learning. Copyright © 2016 Elsevier Inc. All rights reserved.
Toward a reliable gaze-independent hybrid BCI combining visual and natural auditory stimuli.
Barbosa, Sara; Pires, Gabriel; Nunes, Urbano
2016-03-01
Brain computer interfaces (BCIs) are one of the last communication options for patients in the locked-in state (LIS). For complete LIS patients, interfaces must be gaze-independent due to their eye impairment. However, unimodal gaze-independent approaches typically perform substantially worse than gaze-dependent approaches. The combination of multimodal stimuli has been proposed as a viable way to increase users' performance. A hybrid visual and auditory (HVA) P300-based BCI combining simultaneous visual and auditory stimulation is proposed. Auditory stimuli are based on natural, meaningful spoken words, increasing stimulus discrimination and decreasing the user's mental effort in associating stimuli with symbols. The visual part of the interface is covertly controlled, ensuring gaze-independence. Four conditions were experimentally tested by 10 healthy participants: visual overt (VO), visual covert (VC), auditory (AU), and covert HVA. Average online accuracy for the hybrid approach was 85.3%, more than 32% above the VC and AU approaches. Questionnaire results indicate that the HVA approach was the least demanding gaze-independent interface. Interestingly, the P300 grand average for the HVA approach coincides with an almost perfect sum of the P300s evoked separately by the VC and AU tasks. The proposed HVA-BCI is the first solution to simultaneously embed natural spoken words and visual words to provide a communication lexicon. Online accuracy and task demand of the approach compare favorably with the state of the art. The proposed approach shows that the simultaneous combination of covert visual control and auditory modalities can effectively improve the performance of gaze-independent BCIs. Copyright © 2015 Elsevier B.V. All rights reserved.
Correa-Jaraba, Kenia S.; Cid-Fernández, Susana; Lindín, Mónica; Díaz, Fernando
2016-01-01
The main aim of this study was to examine the effects of aging on event-related brain potentials (ERPs) associated with the automatic detection of unattended infrequent deviant and novel auditory stimuli (Mismatch Negativity, MMN) and with the orienting to these stimuli (P3a component), as well as the effects on ERPs associated with reorienting to relevant visual stimuli (Reorienting Negativity, RON). Participants were divided into three age groups: (1) Young: 21–29 years old; (2) Middle-aged: 51–64 years old; and (3) Old: 65–84 years old. They performed an auditory-visual distraction-attention task in which they were asked to attend to visual stimuli (Go, NoGo) and to ignore auditory stimuli (S: standard, D: deviant, N: novel). Reaction times (RTs) to Go visual stimuli were longer in old and middle-aged than in young participants. In addition, in all three age groups, longer RTs were found when Go visual stimuli were preceded by novel relative to deviant and standard auditory stimuli, indicating a distraction effect provoked by novel stimuli. ERP components were identified in the Novel minus Standard (N-S) and Deviant minus Standard (D-S) difference waveforms. In the N-S condition, MMN latency was significantly longer in middle-aged and old participants than in young participants, indicating a slowing of automatic detection of changes. 
The following results were observed in both difference waveforms: (1) the P3a component comprised two consecutive phases in all three age groups—an early-P3a (e-P3a) that may reflect the orienting response toward the irrelevant stimulation and a late-P3a (l-P3a) that may be a correlate of subsequent evaluation of the infrequent unexpected novel or deviant stimuli; (2) the e-P3a, l-P3a, and RON latencies were significantly longer in the Middle-aged and Old groups than in the Young group, indicating delay in the orienting response to and the subsequent evaluation of unattended auditory stimuli, and in the reorienting of attention to relevant (Go) visual stimuli, respectively; and (3) a significantly smaller e-P3a amplitude in Middle-aged and Old groups, indicating a deficit in the orienting response to irrelevant novel and deviant auditory stimuli. PMID:27065004
Visual grouping under isoluminant condition: impact of mental fatigue
NASA Astrophysics Data System (ADS)
Pladere, Tatjana; Bete, Diana; Skilters, Jurgis; Krumina, Gunta
2016-09-01
Instead of selecting arbitrary elements, our visual perception prefers only certain groupings of information. There is ample evidence that visual attention and perception are substantially impaired in the presence of mental fatigue. The question is how visual grouping, which can be considered a bottom-up controlled neuronal gain mechanism, is influenced. The main purpose of our study was to determine the influence of mental fatigue on the visual grouping of specific information, namely the color and configuration of stimuli, in a psychophysical experiment. Individuals provided subjective data by filling in a questionnaire about their health and general feeling. The objective evidence was obtained in a specially designed visual search task in which achromatic and chromatic isoluminant stimuli were used in order to avoid the so-called pop-out effect due to differences in light intensity. Each individual was instructed to identify the symbols with the aperture in the same direction in four tasks. The color component differed across the visual search tasks according to the goals of the study. The results reveal that visual grouping is completed faster when visual stimuli have the same color and aperture direction. The shortest reaction times were observed in the evening. Moreover, the reaction time results suggest that two grouping processes compete for selective attention in the visual system when similarity in color conflicts with similarity in stimulus configuration. The described effect increases significantly in the presence of mental fatigue, but it does not strongly influence the accuracy of task accomplishment.
Visual-somatosensory integration in aging: Does stimulus location really matter?
MAHONEY, JEANNETTE R.; WANG, CUILING; DUMAS, KRISTINA; HOLTZER, ROEE
2014-01-01
Individuals are constantly bombarded by sensory stimuli across multiple modalities that must be integrated efficiently. Multisensory integration (MSI) is said to be governed by stimulus properties including space, time, and magnitude. While there is a paucity of research detailing MSI in aging, we have demonstrated that older adults reveal the greatest reaction time (RT) benefit when presented with simultaneous visual-somatosensory (VS) stimuli. To our knowledge, the differential RT benefit of visual and somatosensory stimuli presented within and across spatial hemifields has not been investigated in aging. Eighteen older adults (Mean = 74 years; 11 female), who were determined to be non-demented and without medical or psychiatric conditions that may affect their performance, participated in this study. Participants received eight randomly presented stimulus conditions (four unisensory and four multisensory) and were instructed to make speeded foot-pedal responses as soon as they detected any stimulation, regardless of stimulus type and location of unisensory inputs. Results from a linear mixed effect model, adjusted for speed of processing and other covariates, revealed that RTs to all multisensory pairings were significantly faster than those elicited to averaged constituent unisensory conditions (p < 0.01). Similarly, race model violation did not differ based on unisensory spatial location (p = 0.41). In summary, older adults demonstrate significant VS multisensory RT effects to stimuli both within and across spatial hemifields. PMID:24698637
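The race-model violation analysis this abstract refers to is conventionally based on Miller's inequality: under a parallel race with no integration, the multisensory RT distribution function can never exceed the sum of the two unisensory distribution functions. The sketch below (synthetic RTs and function names are our own illustrative assumptions, not the study's pipeline) computes the difference; positive values indicate violation, i.e. genuine multisensory facilitation:

```python
import numpy as np

def empirical_cdf(rts, t_grid):
    """Proportion of reaction times at or below each time point."""
    rts = np.asarray(rts, dtype=float)
    return np.mean(rts[:, None] <= np.asarray(t_grid)[None, :], axis=0)

def race_model_difference(rt_visual, rt_somato, rt_bimodal, t_grid):
    """CDF(bimodal) minus the race-model bound min(CDF_V + CDF_S, 1).
    Positive entries violate the race model."""
    bound = np.minimum(
        empirical_cdf(rt_visual, t_grid) + empirical_cdf(rt_somato, t_grid),
        1.0,
    )
    return empirical_cdf(rt_bimodal, t_grid) - bound
```

In practice the difference is evaluated at percentiles of the pooled RT distribution and tested against zero across participants.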
Measuring the effect of attention on simple visual search.
Palmer, J; Ames, C T; Lindsey, D T
1993-02-01
Set-size effects in visual search may be due to one or more of three factors: sensory processes such as lateral masking between stimuli, attentional processes limiting the perception of individual stimuli, or attentional processes affecting the decision rules for combining information from multiple stimuli. These possibilities were evaluated in tasks such as searching for a longer line among shorter lines. To evaluate sensory contributions, display set-size effects were compared with cuing conditions that held sensory phenomena constant. Similar effects for the display and cue manipulations suggested that sensory processes contributed little under the conditions of this experiment. To evaluate the contribution of decision processes, the set-size effects were modeled with signal detection theory. In these models, a decision effect alone was sufficient to predict the set-size effects without any attentional limitation on perception.
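The decision-only account described in this abstract can be illustrated with a minimal unlimited-capacity, max-rule simulation in the spirit of signal detection theory (the parameter values and simulation details below are our own illustrative assumptions): every display item contributes an independent noisy observation, the observer reports the location with the largest value, and accuracy falls with set size purely because more noise draws compete, with no attentional limit on perception.

```python
import numpy as np

def max_rule_accuracy(dprime, set_size, n_trials=200_000, seed=0):
    """Monte Carlo accuracy of a max-rule observer: one noisy
    observation per item, target at index 0, respond with argmax."""
    rng = np.random.default_rng(seed)
    obs = rng.standard_normal((n_trials, set_size))
    obs[:, 0] += dprime  # item 0 carries the signal (e.g. the longer line)
    return float(np.mean(np.argmax(obs, axis=1) == 0))
```

Comparing, say, set sizes 2 and 8 at a fixed d' reproduces a set-size cost from the decision rule alone.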
Color categories affect pre-attentive color perception.
Clifford, Alexandra; Holmes, Amanda; Davies, Ian R L; Franklin, Anna
2010-10-01
Categorical perception (CP) of color is the faster and/or more accurate discrimination of colors from different categories than equivalently spaced colors from the same category. Here, we investigate whether color CP at early stages of chromatic processing is independent of top-down modulation from attention. A visual oddball task was employed where frequent and infrequent colored stimuli were either same- or different-category, with chromatic differences equated across conditions. Stimuli were presented peripheral to a central distractor task to elicit an event-related potential (ERP) known as the visual mismatch negativity (vMMN). The vMMN is an index of automatic and pre-attentive visual change detection arising from generating loci in visual cortices. The results revealed a greater vMMN for different-category than same-category change detection when stimuli appeared in the lower visual field, and an absence of attention-related ERP components. The findings provide the first clear evidence for an automatic and pre-attentive categorical code for color. Copyright © 2010 Elsevier B.V. All rights reserved.
Visual attention modulates brain activation to angry voices.
Mothes-Lasch, Martin; Mentzel, Hans-Joachim; Miltner, Wolfgang H R; Straube, Thomas
2011-06-29
In accordance with influential models proposing prioritized processing of threat, previous studies have shown automatic brain responses to angry prosody in the amygdala and the auditory cortex under auditory distraction conditions. However, it is unknown whether the automatic processing of angry prosody is also observed during cross-modal distraction. The current fMRI study investigated brain responses to angry versus neutral prosodic stimuli during visual distraction. During scanning, participants were exposed to angry or neutral prosodic stimuli while visual symbols were displayed simultaneously. By means of task requirements, participants either attended to the voices or to the visual stimuli. While the auditory task revealed pronounced activation in the auditory cortex and amygdala to angry versus neutral prosody, this effect was absent during the visual task. Thus, our results show a limitation of the automaticity of the activation of the amygdala and auditory cortex to angry prosody. The activation of these areas to threat-related voices depends on modality-specific attention.
Conditioned suppression, punishment, and aversion
NASA Technical Reports Server (NTRS)
Orme-Johnson, D. W.; Yarczower, M.
1974-01-01
The aversive action of visual stimuli was studied in two groups of pigeons which received response-contingent or noncontingent electric shocks in cages with translucent response keys. Presentation of grain for 3 sec, contingent on key pecking, was the visual stimulus associated with conditioned punishment or suppression. The responses of the pigeons in three different experiments are compared.
Xue, Gui; Jiang, Ting; Chen, Chuansheng; Dong, Qi
2008-02-15
How language experience affects visual word recognition has been a topic of intense interest. Using event-related potentials (ERPs), the present study compared the early electrophysiological responses (i.e., N1) to familiar and unfamiliar writings under different conditions. Thirteen native Chinese speakers (with English as their second language) were recruited to passively view four types of scripts: Chinese (familiar logographic writings), English (familiar alphabetic writings), Korean Hangul (unfamiliar logographic writings), and Tibetan (unfamiliar alphabetic writings). Stimuli also differed in lexicality (words vs. non-words, for familiar writings only), length (characters/letters vs. words), and presentation duration (100 ms vs. 750 ms). We found no significant differences between words and non-words, and the effect of language experience (familiar vs. unfamiliar) was significantly modulated by stimulus length and writing system, and to a lesser degree, by presentation duration. That is, the language experience effect (i.e., a stronger N1 response to familiar writings than to unfamiliar writings) was significant only for alphabetic letters, but not for alphabetic and logographic words. The difference between Chinese characters and unfamiliar logographic characters was significant under the condition of short presentation duration, but not under the condition of long presentation duration. Long stimuli elicited a stronger N1 response than did short stimuli, but this effect was significantly attenuated for familiar writings. These results suggest that the N1 response might not reliably differentiate familiar and unfamiliar writings. More importantly, our results suggest that N1 is modulated by visual, linguistic, and task factors, which has important implications for the visual expertise hypothesis.
Gamma band activity and the P3 reflect post-perceptual processes, not visual awareness
Pitts, Michael A.; Padwal, Jennifer; Fennelly, Daniel; Martínez, Antígona; Hillyard, Steven A.
2014-01-01
A primary goal in cognitive neuroscience is to identify neural correlates of conscious perception (NCC). By contrasting conditions in which subjects are aware versus unaware of identical visual stimuli, a number of candidate NCCs have emerged, among them induced gamma band activity in the EEG and the P3 event-related potential. In most previous studies, however, the critical stimuli were always directly relevant to the subjects’ task, such that aware versus unaware contrasts may well have included differences in post-perceptual processing in addition to differences in conscious perception per se. Here, in a series of EEG experiments, visual awareness and task relevance were manipulated independently. Induced gamma activity and the P3 were absent for task-irrelevant stimuli regardless of whether subjects were aware of such stimuli. For task-relevant stimuli, gamma and the P3 were robust and dissociable, indicating that each reflects distinct post-perceptual processes necessary for carrying-out the task but not for consciously perceiving the stimuli. Overall, this pattern of results challenges a number of previous proposals linking gamma band activity and the P3 to conscious perception. PMID:25063731
Context generalization in Drosophila visual learning requires the mushroom bodies
NASA Astrophysics Data System (ADS)
Liu, Li; Wolf, Reinhard; Ernst, Roman; Heisenberg, Martin
1999-08-01
The world is permanently changing. Laboratory experiments on learning and memory normally minimize this feature of reality, keeping all conditions except the conditioned and unconditioned stimuli as constant as possible. In the real world, however, animals need to extract from the universe of sensory signals the actual predictors of salient events by separating them from non-predictive stimuli (context). In principle, this can be achieved if only those sensory inputs that resemble the reinforcer in their temporal structure are taken as predictors. Here we study visual learning in the fly Drosophila melanogaster, using a flight simulator, and show that memory retrieval is, indeed, partially context-independent. Moreover, we show that the mushroom bodies, which are required for olfactory but not visual or tactile learning, effectively support context generalization. In visual learning in Drosophila, it appears that a facilitating effect of context cues for memory retrieval is the default state, whereas making recall context-independent requires additional processing.
Harrison, Neil R; Witheridge, Sian; Makin, Alexis; Wuerger, Sophie M; Pegna, Alan J; Meyer, Georg F
2015-11-01
Motion is represented by low-level signals, such as size-expansion in vision or loudness changes in the auditory modality. The visual and auditory signals from the same object or event may be integrated and facilitate detection. We explored behavioural and electrophysiological correlates of congruent and incongruent audio-visual depth motion in conditions where auditory level changes, visual expansion, and visual disparity cues were manipulated. In Experiment 1 participants discriminated auditory motion direction whilst viewing looming or receding, 2D or 3D, visual stimuli. Responses were faster and more accurate for congruent than for incongruent audio-visual cues, and the congruency effect (i.e., difference between incongruent and congruent conditions) was larger for visual 3D cues compared to 2D cues. In Experiment 2, event-related potentials (ERPs) were collected during presentation of the 2D and 3D, looming and receding, audio-visual stimuli, while participants detected an infrequent deviant sound. Our main finding was that audio-visual congruity was affected by retinal disparity at an early processing stage (135-160ms) over occipito-parietal scalp. Topographic analyses suggested that similar brain networks were activated for the 2D and 3D congruity effects, but that cortical responses were stronger in the 3D condition. Differences between congruent and incongruent conditions were observed between 140-200ms, 220-280ms, and 350-500ms after stimulus onset. Copyright © 2015 Elsevier Ltd. All rights reserved.
The Extraction of Information From Visual Persistence
ERIC Educational Resources Information Center
Erwin, Donald E.
1976-01-01
This research sought to distinguish among three concepts of visual persistence by sustaining the physical presence of the target stimulus while simultaneously inhibiting the formation of a persisting representation. Reportability of information about the stimuli was compared to a condition in which visual persistence was allowed to fully develop…
Using complex auditory-visual samples to produce emergent relations in children with autism.
Groskreutz, Nicole C; Karsina, Allen; Miguel, Caio F; Groskreutz, Mark P
2010-03-01
Six participants with autism learned conditional relations between complex auditory-visual sample stimuli (dictated words and pictures) and simple visual comparisons (printed words) using matching-to-sample training procedures. Pre- and posttests examined potential stimulus control by each element of the complex sample when presented individually and emergence of additional conditional relations and oral labeling. Tests revealed class-consistent performance for all participants following training.
Realigning thunder and lightning: temporal adaptation to spatiotemporally distant events.
Navarra, Jordi; Fernández-Prieto, Irune; Garcia-Morera, Joel
2013-01-01
The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises in these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment task (SJ) was administered right afterwards. Temporal realignment between vision and audition was observed, in both Experiment 1 and 2, when comparing the participants' SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events).
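Temporal realignment of the kind measured in this study is typically quantified as a shift in the point of subjective simultaneity (PSS) extracted from the simultaneity-judgment curve. The sketch below uses a response-weighted mean SOA as the PSS estimator; this moment-based shortcut is our own simplification (SJ studies more commonly fit a Gaussian or other psychometric function to the 'simultaneous' responses):

```python
import numpy as np

def pss_weighted_mean(soas_ms, p_simultaneous):
    """Point of subjective simultaneity as the weighted mean of the
    tested SOAs, weighted by the proportion of 'simultaneous' responses.
    Positive SOAs denote vision leading."""
    soas = np.asarray(soas_ms, dtype=float)
    w = np.asarray(p_simultaneous, dtype=float)
    return float(np.sum(soas * w) / np.sum(w))
```

Comparing the PSS after exposure to vision-leading asynchrony against the synchrony baseline gives the size of the realignment.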
Auditory enhancement of visual perception at threshold depends on visual abilities.
Caclin, Anne; Bouchet, Patrick; Djoulah, Farida; Pirat, Elodie; Pernier, Jacques; Giard, Marie-Hélène
2011-06-17
Whether or not multisensory interactions can improve detection thresholds, and thus widen the range of perceptible events is a long-standing debate. Here we revisit this question, by testing the influence of auditory stimuli on visual detection threshold, in subjects exhibiting a wide range of visual-only performance. Above the perceptual threshold, crossmodal interactions have indeed been reported to depend on the subject's performance when the modalities are presented in isolation. We thus tested normal-seeing subjects and short-sighted subjects wearing their usual glasses. We used a paradigm limiting potential shortcomings of previous studies: we chose a criterion-free threshold measurement procedure and precluded exogenous cueing effects by systematically presenting a visual cue whenever a visual target (a faint Gabor patch) might occur. Using this carefully controlled procedure, we found that concurrent sounds only improved visual detection thresholds in the sub-group of subjects exhibiting the poorest performance in the visual-only conditions. In these subjects, for oblique orientations of the visual stimuli (but not for vertical or horizontal targets), the auditory improvement was still present when visual detection was already helped with flanking visual stimuli generating a collinear facilitation effect. These findings highlight that crossmodal interactions are most efficient to improve perceptual performance when an isolated modality is deficient. Copyright © 2011 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Mills, Monique T.
2015-01-01
Purpose: This study investigated the fictional narrative performance of school-age African American children across 3 elicitation contexts that differed in the type of visual stimulus presented. Method: A total of 54 children in Grades 2 through 5 produced narratives across 3 different visual conditions: no visual, picture sequence, and single…
Audio-visual synchrony and feature-selective attention co-amplify early visual processing.
Keitel, Christian; Müller, Matthias M
2016-05-01
Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when respective stimuli were attended versus unattended. We found that both attending to the colour of a stimulus and its synchrony with the tone enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
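The frequency-tagging logic described above (quantifying steady-state responses at each stimulus's pulse rate in the spectral domain) can be illustrated with a minimal sketch. The signal, amplitudes, and noise level below are synthetic stand-ins, not the study's data; only the two tagging frequencies come from the abstract.

```python
import math
import random

fs, T = 100.0, 100.0        # sampling rate (Hz) and duration (s)
n_samples = int(fs * T)
f1, f2 = 3.14, 3.63         # pulse rates of the two tagged stimuli (Hz)

# Synthetic "EEG": steady-state responses at the two tagging
# frequencies (amplitudes 1.0 and 0.5) plus Gaussian noise.
random.seed(0)
signal = [math.sin(2 * math.pi * f1 * n / fs)
          + 0.5 * math.sin(2 * math.pi * f2 * n / fs)
          + 0.2 * random.gauss(0.0, 1.0)
          for n in range(n_samples)]

def amplitude_at(freq):
    # Single-bin DFT: correlate the signal with sine/cosine at the
    # tagging frequency (a whole number of cycles fits into T, so the
    # two tags fall on orthogonal spectral bins).
    re = sum(x * math.cos(2 * math.pi * freq * n / fs)
             for n, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * n / fs)
             for n, x in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n_samples

print(amplitude_at(f1), amplitude_at(f2))  # close to 1.0 and 0.5
```

Because each tagged response occupies its own spectral bin, the processing gain for each stimulus can be read out independently, which is what makes this design attractive for comparing attended versus unattended stimuli.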
Stone, David B.; Urrea, Laura J.; Aine, Cheryl J.; Bustillo, Juan R.; Clark, Vincent P.; Stephen, Julia M.
2011-01-01
In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder. PMID:21807011
Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale
2017-04-01
There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times to auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may have important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017.
Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel
2017-04-01
Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we evaluated TTC estimates by using a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
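The optical τ variable described above is the ratio of an object's instantaneous visual angle to that angle's rate of change; for a constant-velocity approach it equals the true TTC. A minimal sketch, with a hypothetical object size, distance, and speed (the study's actual stimulus parameters are not given here):

```python
import math

def optical_angle(size_m, dist_m):
    # Visual angle (radians) subtended by an object of physical
    # size size_m at distance dist_m.
    return 2.0 * math.atan(size_m / (2.0 * dist_m))

def tau_estimate(size_m, dist_m, speed_mps, dt=0.01):
    # tau = theta / (d(theta)/dt), with the rate of change estimated
    # from two successive "frames" dt seconds apart.
    theta_now = optical_angle(size_m, dist_m)
    theta_next = optical_angle(size_m, dist_m - speed_mps * dt)
    theta_dot = (theta_next - theta_now) / dt
    return theta_now / theta_dot

# Hypothetical scene: a 2-m-wide vehicle, 100 m away, closing at 20 m/s.
# True TTC is 5 s; the tau estimate lands very close to it.
print(tau_estimate(2.0, 100.0, 20.0))
```

The point of τ is that it can be computed from the optic (or acoustic) array alone, without knowing the object's physical size, distance, or speed, which is why it is a candidate cue for TTC judgments.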
Dorsal hippocampus is necessary for visual categorization in rats.
Kim, Jangjin; Castro, Leyre; Wasserman, Edward A; Freeman, John H
2018-02-23
The hippocampus may play a role in categorization because of the need to differentiate stimulus categories (pattern separation) and to recognize category membership of stimuli from partial information (pattern completion). We hypothesized that the hippocampus would be more crucial for categorization of low-density (few relevant features) stimuli-due to the higher demand on pattern separation and pattern completion-than for categorization of high-density (many relevant features) stimuli. Using a touchscreen apparatus, rats were trained to categorize multiple abstract stimuli into two different categories. Each stimulus was a pentagonal configuration of five visual features; some of the visual features were relevant for defining the category whereas others were irrelevant. Two groups of rats were trained with either a high (dense, n = 8) or low (sparse, n = 8) number of category-relevant features. Upon reaching criterion discrimination (≥75% correct, on 2 consecutive days), bilateral cannulas were implanted in the dorsal hippocampus. The rats were then given either vehicle or muscimol infusions into the hippocampus just prior to various testing sessions. They were tested with: the previously trained stimuli (trained), novel stimuli involving new irrelevant features (novel), stimuli involving relocated features (relocation), and a single relevant feature (singleton). In training, the dense group reached criterion faster than the sparse group, indicating that the sparse task was more difficult than the dense task. In testing, accuracy of both groups was equally high for trained and novel stimuli. However, both groups showed impaired accuracy in the relocation and singleton conditions, with a greater deficit in the sparse group. The testing data indicate that rats encode both the relevant features and the spatial locations of the features. 
Hippocampal inactivation impaired visual categorization regardless of the density of the category-relevant features for the trained, novel, relocation, and singleton stimuli. Hippocampus-mediated pattern completion and pattern separation mechanisms may be necessary for visual categorization involving overlapping irrelevant features.
Miller, J
1991-03-01
When subjects must respond to a relevant center letter and ignore irrelevant flanking letters, the identities of the flankers produce a response compatibility effect, indicating that they are processed semantically at least to some extent. Because this effect decreases as the separation between target and flankers increases, the effect appears to result from imperfect early selection (attenuation). In the present experiments, several features of the focused attention paradigm were examined, in order to determine whether they might produce the flanker compatibility effect by interfering with the operation of an early selective mechanism. Specifically, the effect might be produced because the paradigm requires subjects to (1) attend exclusively to stimuli within a very small visual angle, (2) maintain a long-term attentional focus on a constant display location, (3) focus attention on an empty display location, (4) exclude onset-transient flankers from semantic processing, or (5) ignore some of the few stimuli in an impoverished visual field. The results indicate that none of these task features is required for semantic processing of unattended stimuli to occur. In fact, visual angle is the only one of the task features that clearly has a strong influence on the size of the flanker compatibility effect. The invariance of the flanker compatibility effect across these conditions suggests that the mechanism for early selection rarely, if ever, completely excludes unattended stimuli from semantic analysis. In addition, it shows that selective mechanisms are relatively insensitive to several factors that might be expected to influence them, thereby supporting the view that spatial separation has a special status for visual selective attention.
Reaching to virtual targets: The oblique effect reloaded in 3-D.
Kaspiris-Rousellis, Christos; Siettos, Constantinos I; Evdokimidis, Ioannis; Smyrnis, Nikolaos
2017-02-20
Perceiving and reproducing direction of visual stimuli in 2-D space produces the visual oblique effect, which manifests as increased precision in the reproduction of cardinal compared to oblique directions. A second cognitive oblique effect emerges when stimulus information is degraded (such as when reproducing stimuli from memory) and manifests as a systematic distortion where reproduced directions close to the cardinal axes deviate toward the oblique, leading to space expansion at cardinal and contraction at oblique axes. We studied the oblique effect in 3-D using a virtual reality system to present a large number of stimuli, covering the surface of an imaginary half sphere, to which subjects had to reach. We used two conditions, one with no delay (no-memory condition) and one where a three-second delay intervened between stimulus presentation and movement initiation (memory condition). A visual oblique effect was observed for the reproduction of cardinal directions compared to oblique, which did not differ with memory condition. A cognitive oblique effect also emerged, which was significantly larger in the memory compared to the no-memory condition, leading to distortion of directional space with expansion near the cardinal axes and compression near the oblique axes on the hemispherical surface. This effect provides evidence that existing models of 2-D directional space categorization could be extended in the natural 3-D space.
Davis, Chris; Kislyuk, Daniel; Kim, Jeesun; Sams, Mikko
2008-11-25
We used whole-head magnetoencephalography (MEG) to record changes in neuromagnetic N100m responses generated in the left and right auditory cortex as a function of the match between visual and auditory speech signals. Stimuli were auditory-only (AO) and auditory-visual (AV) presentations of /pi/, /ti/ and /vi/. Three types of intensity-matched auditory stimuli were used: intact speech (Normal), frequency band filtered speech (Band) and speech-shaped white noise (Noise). The behavioural task was to detect the /vi/ syllables, which comprised 12% of stimuli. N100m responses were measured to averaged /pi/ and /ti/ stimuli. Behavioural data showed that identification of the stimuli was faster and more accurate for Normal than for Band stimuli, and for Band than for Noise stimuli. Reaction times were faster for AV than AO stimuli. MEG data showed that in the left hemisphere, N100m to both AO and AV stimuli was largest for the Normal, smaller for Band and smallest for Noise stimuli. In the right hemisphere, Normal and Band AO stimuli elicited N100m responses of quite similar amplitudes, but the N100m amplitude to Noise was about half of that. There was a reduction in N100m for the AV compared to the AO conditions. The size of this reduction for each stimulus type was the same in the left hemisphere but graded in the right (being largest for the Normal, smaller for the Band and smallest for the Noise stimuli). The N100m decrease for the Normal stimuli was significantly larger in the right than in the left hemisphere. We suggest that the effect of processing visual speech seen in the right hemisphere likely reflects suppression of the auditory response based on AV cues for place of articulation.
Breaking continuous flash suppression: competing for consciousness on the pre-semantic battlefield
Gayet, Surya; Van der Stigchel, Stefan; Paffen, Chris L. E.
2014-01-01
Traditionally, interocular suppression is believed to disrupt high-level (i.e., semantic or conceptual) processing of the suppressed visual input. The development of a new experimental paradigm, breaking continuous flash suppression (b-CFS), has caused a resurgence of studies demonstrating high-level processing of visual information in the absence of visual awareness. In this method the time it takes for interocularly suppressed stimuli to breach the threshold of visibility, is regarded as a measure of access to awareness. The aim of the current review is twofold. First, we provide an overview of the literature using this b-CFS method, while making a distinction between two types of studies: those in which suppression durations are compared between different stimulus classes (such as upright faces versus inverted faces), and those in which suppression durations are compared for stimuli that either match or mismatch concurrently available information (such as a colored target that either matches or mismatches a color retained in working memory). Second, we aim at dissociating high-level processing from low-level (i.e., crude visual) processing of the suppressed stimuli. For this purpose, we include a thorough review of the control conditions that are used in these experiments. Additionally, we provide recommendations for proper control conditions that we deem crucial for disentangling high-level from low-level effects. Based on this review, we argue that crude visual processing suffices for explaining differences in breakthrough times reported using b-CFS. As such, we conclude that there is as yet no reason to assume that interocularly suppressed stimuli receive full semantic analysis. PMID:24904476
Raymond, Jane E; O'Brien, Jennifer L
2009-08-01
Learning to associate the probability and value of behavioral outcomes with specific stimuli (value learning) is essential for rational decision making. However, in demanding cognitive conditions, access to learned values might be constrained by limited attentional capacity. We measured recognition of briefly presented faces seen previously in a value-learning task involving monetary wins and losses; the recognition task was performed both with and without constraints on available attention. Regardless of available attention, recognition was substantially enhanced for motivationally salient stimuli (i.e., stimuli highly predictive of outcomes), compared with equally familiar stimuli that had weak or no motivational salience, and this effect was found regardless of valence (win or loss). However, when attention was constrained (because stimuli were presented during an attentional blink, AB), valence determined recognition; win-associated faces showed no AB, but all other faces showed large ABs. Motivational salience acts independently of attention to modulate simple perceptual decisions, but when attention is limited, visual processing is biased in favor of reward-associated stimuli.
Smets, Karolien; Moors, Pieter; Reynvoet, Bert
2016-01-01
Performance in a non-symbolic comparison task in which participants are asked to indicate the larger numerosity of two dot arrays, is assumed to be supported by the Approximate Number System (ANS). This system allows participants to judge numerosity independently from other visual cues. Supporting this idea, previous studies indicated that numerosity can be processed when visual cues are controlled for. Consequently, distinct types of visual cue control are assumed to be interchangeable. However, a previous study showed that the type of visual cue control affected performance using a simultaneous presentation of the stimuli in numerosity comparison. In the current study, we explored whether the influence of the type of visual cue control on performance disappeared when sequentially presenting each stimulus in numerosity comparison. While the influence of the applied type of visual cue control was significantly more evident in the simultaneous condition, sequentially presenting the stimuli did not completely exclude the influence of distinct types of visual cue control. Altogether, these results indicate that the implicit assumption that it is possible to compare performances across studies with a differential visual cue control is unwarranted and that the influence of the type of visual cue control partly depends on the presentation format of the stimuli. PMID:26869967
Estimation of stereovision in conditions of blurring simulation
NASA Astrophysics Data System (ADS)
Krumina, Gunta; Ozolinsh, Maris; Lacis, Ivazs; Lyakhovetskii, Vsevolod
2005-08-01
The aim of this study was to evaluate the simulation of eye pathologies, such as amblyopia and cataract, to estimate stereovision under artificial conditions, and to compare stereothreshold results obtained under artificial and real pathologic conditions. A blurred image in one of the eyes is characteristic of these real-life forms of reduced vision. The blurring was simulated by (i) defocusing, (ii) blurred stimuli on the screen, and (iii) occluding one eye with PLZT or PDLC plates. When comparing the methods, two parameters were used: the subject's visual acuity and the modulation depth of the image. The eye-occluder method systematically yielded higher stereothreshold values than the other methods. The PLZT and PDLC plates scattered more light in the blue and decreased the contrast of the stimuli as the degree of blurring was increased. In the eye-occluder method, the stereothreshold increased faster than in the defocus and monitor-stimuli methods when the visual acuity difference exceeded 0.4. It has been shown that the PLZT and PDLC plates are good optical phantoms for simulating a cataract, while the defocus and monitor-stimuli methods are more suitable for amblyopia.
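Modulation depth, one of the two comparison parameters above, is conventionally computed as Michelson contrast, (Lmax − Lmin)/(Lmax + Lmin), and blurring attenuates it. A toy 1-D sketch with a synthetic grating and Gaussian blur (not the study's actual stimuli or optics):

```python
import math

def michelson(xs):
    # Modulation depth as Michelson contrast: (Lmax - Lmin) / (Lmax + Lmin).
    return (max(xs) - min(xs)) / (max(xs) + min(xs))

def gaussian_blur(xs, sigma):
    # Circular convolution with a normalised Gaussian kernel
    # (radius 3*sigma) -- a crude stand-in for optical defocus.
    r = int(3 * sigma)
    kernel = [math.exp(-k * k / (2.0 * sigma * sigma)) for k in range(-r, r + 1)]
    total = sum(kernel)
    kernel = [w / total for w in kernel]
    return [sum(w * xs[(n + k - r) % len(xs)] for k, w in enumerate(kernel))
            for n in range(len(xs))]

P, N = 60, 360  # grating period and width, in samples (illustrative values)
grating = [0.5 + 0.5 * math.cos(2 * math.pi * n / P) for n in range(N)]
blurred = gaussian_blur(grating, sigma=10)

print(michelson(grating), michelson(blurred))  # blur lowers the contrast
```

Tracking how much each simulation method lowers modulation depth for a given acuity loss is what allows the three blurring techniques to be compared on a common footing.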
Exploring biased attention towards body-related stimuli and its relationship with body awareness.
Salvato, Gerardo; De Maio, Gabriele; Bottini, Gabriella
2017-12-08
Stimuli of great social relevance exogenously capture attention. Here we explored the impact of body-related stimuli on endogenous attention. Additionally, we investigated the influence of internal states on biased attention towards this class of stimuli. Participants were presented with a body, face, or chair cue to hold in memory (Memory task) or to merely attend to (Priming task) and, subsequently, they were asked to find a circle in an unrelated visual search task. In the valid condition, the circle was flanked by the cue. In the invalid condition, the pre-cued picture re-appeared flanking the distracter. In the neutral condition, the cue item did not re-appear in the search display. We found that although bodies and faces benefited from generally faster visual processing compared to chairs, holding them in memory did not produce any additional advantage for attention compared to when they were merely attended. Furthermore, face cues generated a larger orienting effect than body and chair cues in both the Memory and Priming tasks. Importantly, the results showed that individual sensitivity to internal bodily responses predicted the magnitude of the memory-based orienting of attention to bodies, shedding new light on the relationship between body awareness and visuo-spatial attention.
Effect of attentional load on audiovisual speech perception: evidence from ERPs.
Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E; Soto-Faraco, Salvador; Tiippana, Kaisa
2014-01-01
Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.
Potentiation of the early visual response to learned danger signals in adults and adolescents
Howsley, Philippa; Jordan, Jeff; Johnston, Pat
2015-01-01
The reinforcing effects of aversive outcomes on avoidance behaviour are well established. However, their influence on perceptual processes is less well explored, especially during the transition from adolescence to adulthood. Using electroencephalography, we examined whether learning to actively or passively avoid harm can modulate early visual responses in adolescents and adults. The task included two avoidance conditions, active and passive, where two different warning stimuli predicted the imminent, but avoidable, presentation of an aversive tone. To avoid the aversive outcome, participants had to learn to emit an action (active avoidance) for one of the warning stimuli and omit an action for the other (passive avoidance). Both adults and adolescents performed the task with a high degree of accuracy. For both adolescents and adults, increased N170 event-related potential amplitudes were found for both the active and the passive warning stimuli compared with control conditions. Moreover, the potentiation of the N170 to the warning stimuli was stable and long lasting. Developmental differences were also observed; adolescents showed greater potentiation of the N170 component to danger signals. These findings demonstrate, for the first time, that learned danger signals in an instrumental avoidance task can influence early visual sensory processes in both adults and adolescents. PMID:24652856
Perceptual load corresponds with factors known to influence visual search
Roper, Zachary J. J.; Cosman, Joshua D.; Vecera, Shaun P.
2014-01-01
One account of the early versus late selection debate in attention proposes that perceptual load determines the locus of selection. Attention selects stimuli at a late processing level under low-load conditions but selects stimuli at an early level under high-load conditions. Despite the successes of perceptual load theory, a non-circular definition of perceptual load remains elusive. We investigated the factors that influence perceptual load by using manipulations that have been studied extensively in visual search, namely target-distractor similarity and distractor-distractor similarity. Consistent with previous work, search was most efficient when targets and distractors were dissimilar and the displays contained homogeneous distractors; search became less efficient when target-distractor similarity increased irrespective of display heterogeneity. Importantly, we used these same stimuli in a typical perceptual load task that measured attentional spill-over to a task-irrelevant flanker. We found a strong correspondence between search efficiency and perceptual load; stimuli that generated efficient searches produced flanker interference effects, suggesting that such displays involved low perceptual load. Flanker interference effects were reduced in displays that produced less efficient searches. Furthermore, our results demonstrate that search difficulty, as measured by search intercept, has little bearing on perceptual load. These results suggest that perceptual load might be defined in part by well-characterized, continuous factors that influence visual search. PMID:23398258
Visual Presentation Effects on Identification of Multiple Environmental Sounds
Masakura, Yuko; Ichikawa, Makoto; Shimono, Koichi; Nakatsuka, Reio
2016-01-01
This study examined how the contents and timing of a visual stimulus affect the identification of mixed sounds recorded in a daily life environment. For experiments, we presented four environment sounds as auditory stimuli for 5 s along with a picture or a written word as a visual stimulus that might or might not denote the source of one of the four sounds. Three conditions of temporal relations between the visual stimuli and sounds were used. The visual stimulus was presented either: (a) for 5 s simultaneously with the sound; (b) for 5 s, 1 s before the sound (SOA between the audio and visual stimuli was 6 s); or (c) for 33 ms, 1 s before the sound (SOA was 1033 ms). Participants reported all identifiable sounds for those audio–visual stimuli. To characterize the effects of visual stimuli on sound identification, the following were used: the identification rates of sounds for which the visual stimulus denoted its sound source, the rates of other sounds for which the visual stimulus did not denote the sound source, and the frequency of false hearing of a sound that was not presented for each sound set. Results of the four experiments demonstrated that a picture or a written word promoted identification of the sound when it was related to the sound, particularly when the visual stimulus was presented for 5 s simultaneously with the sounds. However, a visual stimulus preceding the sounds had a benefit only for the picture, not for the written word. Furthermore, presentation with a picture denoting a sound simultaneously with the sound reduced the frequency of false hearing. These results suggest three ways that presenting a visual stimulus affects identification of the auditory stimulus. First, activation of the visual representation extracted directly from the picture promotes identification of the denoted sound and suppresses the processing of sounds for which the visual stimulus did not denote the sound source. 
Second, effects based on processing of the conceptual information promote identification of the denoted sound and suppress the processing of sounds for which the visual stimulus did not denote the sound source. Third, processing of the concurrent visual representation suppresses false hearing. PMID:26973478
The Role of Temporal Disparity on Audiovisual Integration in Low-Vision Individuals.
Targher, Stefano; Micciolo, Rocco; Occelli, Valeria; Zampini, Massimiliano
2017-12-01
Recent findings have shown that sounds improve visual detection in low vision individuals when the pairs of audiovisual stimuli are presented simultaneously and from the same spatial position. The present study aimed to investigate the temporal aspects of the audiovisual enhancement effect previously reported. Low vision participants were asked to detect the presence of a visual stimulus (yes/no task) presented either alone or together with an auditory stimulus at different stimulus onset asynchronies (SOAs). In the first experiment, the sound was presented either simultaneously with or before the visual stimulus (i.e., SOAs of 0, 100, 250, and 400 ms). The results show that the presence of a task-irrelevant auditory stimulus produced a significant visual detection enhancement in all the conditions. In the second experiment, the sound was either synchronized with, or randomly preceded/lagged behind, the visual stimulus (i.e., SOAs of 0, ± 250, and ± 400 ms). The visual detection enhancement was reduced in magnitude and limited to the synchronous condition and to the condition in which the sound was presented 250 ms before the visual stimulus. Taken together, the evidence of the present study suggests that audiovisual interaction in low vision individuals is highly modulated by top-down mechanisms.
Conditioning with compound stimuli in Drosophila melanogaster in the flight simulator.
Brembs, B; Heisenberg, M
2001-08-01
Short-term memory in Drosophila melanogaster operant visual learning in the flight simulator is explored using patterns and colours as a compound stimulus. Presented together during training, the two stimuli accrue the same associative strength whether or not a prior training phase rendered one of the two stimuli a stronger predictor for the reinforcer than the other (no blocking). This result adds Drosophila to the list of other invertebrates that do not exhibit the robust vertebrate blocking phenomenon. Other forms of higher-order learning, however, were detected: a solid sensory preconditioning and a small second-order conditioning effect imply that associations between the two stimuli can be formed, even if the compound is not reinforced.
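The blocking effect that the flies fail to show is the signature prediction of error-correction learning models in the Rescorla-Wagner tradition: a pretrained stimulus absorbs the prediction error, leaving its compound partner with little associative strength. A minimal sketch of that prediction (the learning rate, asymptote, and trial counts are illustrative, not taken from the study):

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    # trials: sequence of (stimuli_present, reinforced) pairs.
    # All present stimuli share one prediction error, so a stimulus
    # that already predicts the reinforcer "blocks" learning about
    # a newly added partner.
    V = {}
    for stimuli, reinforced in trials:
        prediction = sum(V.get(s, 0.0) for s in stimuli)
        error = (lam if reinforced else 0.0) - prediction
        for s in stimuli:
            V[s] = V.get(s, 0.0) + alpha * error
    return V

# Phase 1: stimulus A alone is reinforced (50 trials), so A becomes a
# near-perfect predictor. Phase 2: the compound AB is reinforced
# (50 trials); error correction predicts B stays near zero ("blocking").
history = [(("A",), True)] * 50 + [(("A", "B"), True)] * 50
V = rescorla_wagner(history)
print(V["A"], V["B"])  # A near 1.0, B near 0.0 under this model
```

The Drosophila result reported above runs counter to this prediction: both compound elements accrued similar associative strength regardless of pretraining, which is why the authors conclude the flies show no blocking.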
Subliminal and supraliminal processing of reward-related stimuli in anorexia nervosa.
Boehm, I; King, J A; Bernardoni, F; Geisler, D; Seidel, M; Ritschel, F; Goschke, T; Haynes, J-D; Roessner, V; Ehrlich, S
2018-04-01
Previous studies have highlighted the role of the brain reward and cognitive control systems in the etiology of anorexia nervosa (AN). In an attempt to disentangle the relative contribution of these systems to the disorder, we used functional magnetic resonance imaging (fMRI) to investigate hemodynamic responses to reward-related stimuli presented both subliminally and supraliminally in acutely underweight AN patients and age-matched healthy controls (HC). fMRI data were collected from a total of 35 AN patients and 35 HC, while they passively viewed subliminally and supraliminally presented streams of food, positive social, and neutral stimuli. Activation patterns of the group × stimulation condition × stimulus type interaction were interrogated to investigate potential group differences in processing different stimulus types under the two stimulation conditions. Moreover, changes in functional connectivity were investigated using generalized psychophysiological interaction analysis. AN patients showed a generally increased response to supraliminally presented stimuli in the inferior frontal junction (IFJ), but no alterations within the reward system. Increased activation during supraliminal stimulation with food stimuli was observed in the AN group in visual regions including the superior occipital gyrus and the fusiform gyrus/parahippocampal gyrus. No group difference was found with respect to the subliminal stimulation condition and functional connectivity. Increased IFJ activation in AN during supraliminal stimulation may indicate hyperactive cognitive control, which resonates with the clinical presentation of excessive self-control in AN patients. Increased activation to food stimuli in visual regions may be interpreted in light of an attentional food bias in AN.
Gola, Mateusz; Wordecha, Małgorzata; Marchewka, Artur; Sescousse, Guillaume
2016-01-01
There is an increasing number of neuroimaging studies using visual sexual stimuli (VSS), especially within the emerging field of research on compulsive sexual behaviors (CSB). A central question in this field is whether behaviors such as excessive pornography consumption share common brain mechanisms with widely studied substance and behavioral addictions. Depending on how VSS are conceptualized, different predictions can be formulated within the frameworks of Reinforcement Learning or Incentive Salience Theory, where a crucial distinction is made between conditioned and unconditioned stimuli (related to reward anticipation vs. reward consumption, respectively). Surveying 40 recent human neuroimaging studies, we show that there is ambiguity about the conceptualization of VSS. Therefore, we feel that it is important to address the question of whether VSS should be considered as conditioned stimuli (cue) or unconditioned stimuli (reward). Here we present our own perspective, which is that in most laboratory settings VSS play the role of a reward, as evidenced by: (1) experience of pleasure while watching VSS, possibly accompanied by genital reaction; (2) reward-related brain activity correlated with these pleasurable feelings in response to VSS; (3) a willingness to exert effort to view VSS similar to that for other rewarding stimuli such as money; and (4) conditioning for cues predictive of VSS. We hope that this perspective article will initiate a scientific discussion on this important and overlooked topic and increase attention for appropriate interpretations of results of human neuroimaging studies using VSS.
Psychophysiological responses to drug-associated stimuli in chronic heavy cannabis use.
Wölfling, Klaus; Flor, Herta; Grüsser, Sabine M
2008-02-01
Due to learning processes, originally neutral stimuli become drug-associated and can activate an implicit drug memory, which leads to a conditioned arousing 'drug-seeking' state. This condition is accompanied by specific psychophysiological responses. The goal of the present study was the analysis of changes in cortical and peripheral reactivity to cannabis- as well as alcohol-associated pictures compared with emotionally significant drug-unrelated and neutral pictures in long-term heavy cannabis users. Participants were 15 chronic heavy cannabis users and 15 healthy controls. Verbal reports as well as event-related potentials of the electroencephalogram and skin conductance responses were assessed in a cue-reactivity paradigm to determine the psychophysiological effects caused by drug-related visual stimulus material. The evaluation of self-reported craving and emotional processing showed that cannabis stimuli were perceived as more arousing and pleasant and elicited significantly more cannabis craving in cannabis users than in healthy controls. Cannabis users also demonstrated higher cannabis stimulus-induced arousal, as indicated by significantly increased skin conductance and a larger late positivity of the visual event-related brain potential. These findings support the assumption that drug-associated stimuli acquire increased incentive salience over the course of an addiction history and induce conditioned physiological patterns, which lead to craving and potentially to drug intake. The potency of visual drug-associated cues to capture attention and to activate drug-specific memory traces and accompanying physiological symptoms embedded in a cycle of abstinence and relapse, even for a so-called 'soft' drug, was assessed for the first time.
Simon, Sharon S.; Tusch, Erich S.; Holcomb, Phillip J.; Daffner, Kirk R.
2016-01-01
The classic account of the load theory (LT) of attention suggests that increasing cognitive load leads to greater processing of task-irrelevant stimuli due to competition for limited executive resource that reduces the ability to actively maintain current processing priorities. Studies testing this hypothesis have yielded widely divergent outcomes. The inconsistent results may, in part, be related to variability in executive capacity (EC) and task difficulty across subjects in different studies. Here, we used a cross-modal paradigm to investigate whether augmented working memory (WM) load leads to increased early distracter processing, and controlled for the potential confounders of EC and task difficulty. Twenty-three young subjects were engaged in a primary visual WM task, under high and low load conditions, while instructed to ignore irrelevant auditory stimuli. Demands of the high load condition were individually titrated to make task difficulty comparable across subjects with differing EC. Event-related potentials (ERPs) were used to measure neural activity in response to stimuli presented in both the task relevant modality (visual) and task-irrelevant modality (auditory). Behavioral results indicate that the load manipulation and titration procedure of the primary visual task were successful. ERPs demonstrated that in response to visual target stimuli, there was a load-related increase in the posterior slow wave, an index of sustained attention and effort. Importantly, under high load, there was a decrease of the auditory N1 in response to distracters, a marker of early auditory processing. These results suggest that increased WM load is associated with enhanced attentional engagement and protection from distraction in a cross-modal setting, even after controlling for task difficulty and EC. Our findings challenge the classic LT and offer support for alternative models. PMID:27536226
Schindler, Andreas; Bartels, Andreas
2018-05-15
Our phenomenological experience of the stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head-motion. We here circumvented these limitations and let participants move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire neural signals related to head motion after the observer's head was stabilized by inflatable aircushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head-motion that was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head-rotation. By comparing congruent with incongruent conditions we found evidence consistent with the multi-modal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human regions MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv) and a region in precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent versus incongruent visual and non-visual cues during physical head-movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation. Copyright © 2018 Elsevier Inc. All rights reserved.
Paladini, Rebecca E.; Diana, Lorenzo; Zito, Giuseppe A.; Nyffeler, Thomas; Wyss, Patric; Mosimann, Urs P.; Müri, René M.; Nef, Tobias
2018-01-01
Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention. Specifically, a facilitation has been observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and the right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained no auditory cues (i.e., unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants’ accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an additional incongruent, as well as a spatially non-informative, auditory cue resulted in lateral asymmetries: search times were increased for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition. However, participants’ performance in the congruent condition was modulated by their tone localisation accuracy.
The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues, and occur under specific attentional conditions, i.e., when visual attention has to be reoriented towards the left hemifield. PMID:29293637
Brain-computer interface on the basis of EEG system Encephalan
NASA Astrophysics Data System (ADS)
Maksimenko, Vladimir; Badarin, Artem; Nedaivozov, Vladimir; Kirsanov, Daniil; Hramov, Alexander
2018-04-01
We propose a brain-computer interface (BCI) for estimating the brain response to presented visual tasks. The proposed BCI is based on the EEG recorder Encephalan-EEGR-19/26 (Medicom MTD, Russia) supplemented by custom-developed acquisition software. The BCI was tested during experimental sessions in which subjects perceived bistable visual stimuli and classified them according to their interpretation. We subjected the participants to different external conditions and observed a significant decrease in the response associated with perceiving the bistable visual stimuli when a distraction was present. Based on these results, we propose that the BCI can be used to estimate human alertness while subjects solve tasks requiring substantial visual attention.
Prediction and Uncertainty in Human Pavlovian to Instrumental Transfer
ERIC Educational Resources Information Center
Trick, Leanne; Hogarth, Lee; Duka, Theodora
2011-01-01
Attentional capture and behavioral control by conditioned stimuli have been dissociated in animals. The current study assessed this dissociation in humans. Participants were trained on a Pavlovian schedule in which 3 visual stimuli, A, B, and C, predicted the occurrence of an aversive noise with 90%, 50%, or 10% probability, respectively.…
Contextual Control by Function and Form of Transfer of Functions
ERIC Educational Resources Information Center
Perkins, David R.; Dougher, Michael J.; Greenway, David E.
2007-01-01
This study investigated conditions leading to contextual control by stimulus topography over transfer of functions. Three 4-member stimulus equivalence classes, each consisting of four (A, B, C, D) topographically distinct visual stimuli, were established for 5 college students. Across classes, designated A stimuli were open-ended linear figures,…
Audiovisual integration of emotional signals in voice and face: an event-related fMRI study.
Kreifelts, Benjamin; Ethofer, Thomas; Grodd, Wolfgang; Erb, Michael; Wildgruber, Dirk
2007-10-01
In a natural environment, non-verbal emotional communication is multimodal (i.e. speech melody, facial expression) and multifaceted concerning the variety of expressed emotions. Understanding these communicative signals and integrating them into a common percept is paramount to successful social behaviour. While many previous studies have focused on the neurobiology of emotional communication in the auditory or visual modality alone, far less is known about multimodal integration of auditory and visual non-verbal emotional information. The present study investigated this process using event-related fMRI. Behavioural data revealed that audiovisual presentation of non-verbal emotional information resulted in a significant increase in correctly classified stimuli when compared with visual and auditory stimulation. This behavioural gain was paralleled by enhanced activation in bilateral posterior superior temporal gyrus (pSTG) and right thalamus, when contrasting audiovisual to auditory and visual conditions. Further, a characteristic of these brain regions, substantiating their role in the emotional integration process, is a linear relationship between the gain in classification accuracy and the strength of the BOLD response during the bimodal condition. Additionally, enhanced effective connectivity between audiovisual integration areas and associative auditory and visual cortices was observed during audiovisual stimulation, offering further insight into the neural process accomplishing multimodal integration. Finally, we were able to document an enhanced sensitivity of the putative integration sites to stimuli with emotional non-verbal content as compared to neutral stimuli.
Realigning Thunder and Lightning: Temporal Adaptation to Spatiotemporally Distant Events
Navarra, Jordi; Fernández-Prieto, Irune; Garcia-Morera, Joel
2013-01-01
The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises in these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment task (SJ) was administered right afterwards. Temporal realignment between vision and audition was observed, in both Experiments 1 and 2, when comparing the participants’ SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events). PMID:24391928
Stone, David B; Urrea, Laura J; Aine, Cheryl J; Bustillo, Juan R; Clark, Vincent P; Stephen, Julia M
2011-10-01
In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder. Copyright © 2011 Elsevier Ltd. All rights reserved.
Affective learning modulates spatial competition during low-load attentional conditions.
Lim, Seung-Lark; Padmala, Srikanth; Pessoa, Luiz
2008-04-01
It has been hypothesized that the amygdala mediates the processing advantage of emotional items. In the present study, we employed functional magnetic resonance imaging (fMRI) to investigate how fear conditioning affected the visual processing of task-irrelevant faces. We hypothesized that faces previously paired with shock (threat faces) would more effectively vie for processing resources during conditions involving spatial competition. To investigate this question, following conditioning, participants performed a letter-detection task on an array of letters that was superimposed on task-irrelevant faces. Attentional resources were manipulated by having participants perform an easy or a difficult search task. Our findings revealed that threat fearful faces evoked stronger responses in the amygdala and fusiform gyrus relative to safe fearful faces during low-load attentional conditions, but not during high-load conditions. Consistent with the increased processing of shock-paired stimuli during the low-load condition, such stimuli exhibited increased behavioral priming and fMRI repetition effects relative to unpaired faces during a subsequent implicit-memory task. Overall, our results suggest a competition model in which affective significance signals from the amygdala may constitute a key modulatory factor determining the neural fate of visual stimuli. In addition, it appears that such competitive advantage is only evident when sufficient processing resources are available to process the affective stimulus.
A simple automated system for appetitive conditioning of zebrafish in their home tanks.
Doyle, Jillian M; Merovitch, Neil; Wyeth, Russell C; Stoyek, Matthew R; Schmidt, Michael; Wilfart, Florentin; Fine, Alan; Croll, Roger P
2017-01-15
We describe here an automated apparatus that permits rapid conditioning paradigms for zebrafish. Arduino microcontrollers were used to control the delivery of auditory or visual stimuli to groups of adult or juvenile zebrafish in their home tanks in a conventional zebrafish facility. An automatic feeder dispensed precise amounts of food immediately after the conditioned stimuli, or at variable delays for controls. Responses were recorded using inexpensive cameras, with the video sequences analysed with ImageJ or Matlab. Fish showed significant conditioned responses in as few as 5 trials, learning that the conditioned stimulus was a predictor of food presentation at the water surface and at the end of the tank where the food was dispensed. Memories of these conditioned associations persisted for at least 2 days after training when fish were tested either as groups or as individuals. Control fish, for which the auditory or visual stimuli were specifically unpaired with food, showed no comparable responses. This simple, low-cost, automated system permits scalable conditioning of zebrafish with minimal human intervention, greatly reducing both variability and labour-intensiveness. It will be useful for studies of the neural basis of learning and memory, and for high-throughput screening of compounds modifying those processes. Copyright © 2016 Elsevier B.V. All rights reserved.
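The pairing contingency such an apparatus implements, food dispensed immediately after the conditioned stimulus for the trained group, or at a random delay for the explicitly unpaired controls, can be sketched as a simple trial scheduler. This is a minimal illustration in Python; the function name and all timing parameters are assumptions for illustration, not values taken from the apparatus described above:

```python
import random

def make_schedule(n_trials, paired=True, cs_duration=5.0,
                  min_delay=30.0, max_delay=120.0, iti=300.0, seed=0):
    """Build a list of (cs_onset, feed_time) pairs, in seconds.

    paired=True  -> food is dispensed immediately after the conditioned
                    stimulus (CS) ends, so the CS predicts food.
    paired=False -> food follows the CS at a random, variable delay,
                    breaking the predictive relationship (control group).
    All timing parameters are illustrative assumptions.
    """
    rng = random.Random(seed)
    schedule = []
    t = 0.0
    for _ in range(n_trials):
        cs_onset = t
        if paired:
            feed_time = cs_onset + cs_duration
        else:
            feed_time = cs_onset + cs_duration + rng.uniform(min_delay, max_delay)
        schedule.append((cs_onset, feed_time))
        t += iti  # fixed inter-trial interval between CS onsets
    return schedule
```

In the paired schedule the food reliably follows the stimulus at a fixed interval, so the stimulus becomes predictive; in the control schedule the variable delay removes that predictive value while keeping overall food delivery comparable.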
Wunsch, Annabel; Philippot, Pierre; Plaghki, Léon
2003-03-01
The present experiment examined the possibility of changing the sensory and/or the affective perception of thermal stimuli by an emotional associative learning procedure known to operate without participants' awareness (evaluative conditioning). In a mixed design, an aversive conditioning procedure was compared between subjects to an appetitive conditioning procedure. Both groups were also compared within-subject to a control condition (neutral conditioning). The aversive conditioning was induced by associating non-painful and painful thermal stimuli - delivered on the right forearm - with unpleasant slides. The appetitive conditioning consisted of an association between thermal stimuli - also delivered on the right forearm - and pleasant slides. The control condition consisted of an association between thermal stimuli - delivered for all participants on the left forearm - and neutral slides. The effects of the conditioning procedures on the sensory and affective dimensions were evaluated with visual analogue scale (VAS)-intensity and VAS-unpleasantness. Startle reflex was used as a physiological index of emotional valence disposition. Results confirmed that no participants were aware of the conditioning procedure. After unpleasant slides (aversive conditioning), non-painful and painful thermal stimuli were judged more intense and more unpleasant than when preceded by neutral slides (control condition) or pleasant slides (appetitive conditioning). Despite a strong correlation between the intensity and the unpleasantness scales, effects were weaker for the affective scale and became statistically non-significant when VAS-intensity was used as covariate. This experiment shows that it is possible to modify the perception of intensity of thermal stimuli by a non-conscious learning procedure based on the transfer of the valence of the unconditioned stimuli (pleasant or unpleasant slides) towards the conditioned stimuli (non-painful and painful thermal stimuli).
These results argue for a conception of pain as the conscious output of complex informational processes, not all of which are accessible to participants' awareness. Mechanisms by which affective input may influence sensory experience, and the clinical implications of the present study, are discussed.
Stein, Timo; Hebart, Martin N.; Sterzer, Philipp
2011-01-01
Until recently, it has been thought that under interocular suppression high-level visual processing is strongly inhibited if not abolished. With the development of continuous flash suppression (CFS), a variant of binocular rivalry, this notion has now been challenged by a number of reports showing that even high-level aspects of visual stimuli, such as familiarity, affect the time stimuli need to overcome CFS and emerge into awareness. In this “breaking continuous flash suppression” (b-CFS) paradigm, differential unconscious processing during suppression is inferred when (a) speeded detection responses to initially invisible stimuli differ, and (b) no comparable differences are found in non-rivalrous control conditions supposed to measure non-specific threshold differences between stimuli. The aim of the present study was to critically evaluate these assumptions. In six experiments we compared the detection of upright and inverted faces. We found that not only under CFS, but also in control conditions upright faces were detected faster and more accurately than inverted faces, although the effect was larger during CFS. However, reaction time (RT) distributions indicated critical differences between the CFS and the control condition. When RT distributions were matched, similar effect sizes were obtained in both conditions. Moreover, subjective ratings revealed that CFS and control conditions are not perceptually comparable. These findings cast doubt on the usefulness of non-rivalrous control conditions to rule out non-specific threshold differences as a cause of shorter detection latencies during CFS. Thus, at least in its present form, the b-CFS paradigm cannot provide unequivocal evidence for unconscious processing under interocular suppression. Nevertheless, our findings also demonstrate that the b-CFS paradigm can be fruitfully applied as a highly sensitive device to probe differences between stimuli in their potency to gain access to awareness. PMID:22194718
Shielding cognition from nociception with working memory.
Legrain, Valéry; Crombez, Geert; Plaghki, Léon; Mouraux, André
2013-01-01
Because pain often signals the occurrence of potential tissue damage, nociceptive stimuli have the capacity to capture attention and interfere with ongoing cognitive activities. Working memory is known to guide the orientation of attention by maintaining goal priorities active during the achievement of a task. This study investigated whether the cortical processing of nociceptive stimuli and their ability to capture attention are under the control of working memory. Event-related brain potentials (ERPs) were recorded while participants performed primary tasks on visual targets that required or did not require rehearsal in working memory (1-back vs 0-back conditions). The visual targets were shortly preceded by task-irrelevant tactile stimuli. Occasionally, in order to distract the participants, the tactile stimuli were replaced by novel nociceptive stimuli. In the 0-back conditions, task performance was disrupted by the occurrence of the nociceptive distracters, as reflected by the increased reaction times in trials with novel nociceptive distracters as compared to trials with standard tactile distracters. In the 1-back conditions, this difference disappeared, suggesting that attentional capture and task disruption induced by nociceptive distracters were suppressed by working memory, regardless of task demands. Most importantly, in the conditions involving working memory, the magnitude of nociceptive ERPs, including ERP components at early latency, was significantly reduced. This indicates that working memory is able to modulate the cortical processing of nociceptive input already at its earliest stages, and could explain why working memory consequently reduces the ability of nociceptive stimuli to capture attention and disrupt performance of the primary task. It is concluded that protecting cognitive processing against pain interference is best guaranteed by keeping pain-related information out of working memory. Copyright © 2012 Elsevier Ltd. All rights reserved.
Wang, Wuyi; Viswanathan, Shivakumar; Lee, Taraz; Grafton, Scott T
2016-01-01
Cortical theta band oscillations (4-8 Hz) in EEG signals have been shown to be important for a variety of different cognitive control operations in visual attention paradigms. However, the synchronization source of these signals as defined by fMRI BOLD activity, and the extent to which theta oscillations play a role in multimodal attention, remain unknown. Here we investigated the extent to which cross-modal visual and auditory attention impacts theta oscillations. Using a simultaneous EEG-fMRI paradigm, healthy human participants performed an attentional vigilance task with six cross-modal conditions using naturalistic stimuli. To assess supramodal mechanisms, modulation of theta oscillation amplitude for attention to either visual or auditory stimuli was correlated with BOLD activity by conjunction analysis. Negative correlations were localized to cortical regions associated with the default mode network (DMN), and positive correlations to ventral premotor areas. Modality-associated attention to visual stimuli was marked by a positive correlation of theta and BOLD activity in fronto-parietal areas that was not observed in the auditory condition. A positive correlation of theta and BOLD activity was observed in auditory cortex, while a negative correlation of theta and BOLD activity was observed in visual cortex during auditory attention. The data support a supramodal interaction of theta activity with DMN function, and modality-associated processes within fronto-parietal networks related to top-down, theta-mediated cognitive control in cross-modal visual attention. On the other hand, in sensory cortices there are opposing effects of theta activity during cross-modal auditory attention.
White, Thomas E; Rojas, Bibiana; Mappes, Johanna; Rautiala, Petri; Kemp, Darrell J
2017-09-01
Much of what we know about human colour perception has come from psychophysical studies conducted in tightly controlled laboratory settings. An enduring challenge, however, lies in extrapolating this knowledge to the noisy conditions that characterize our actual visual experience. Here we combine statistical models of visual perception with empirical data to explore how chromatic (hue/saturation) and achromatic (luminance) information underpins the detection and classification of stimuli in a complex forest environment. The data best support a simple linear model of stimulus detection as an additive function of both luminance and saturation contrast. The strength of each predictor is modest yet consistent across gross variation in viewing conditions, which accords with expectations based upon general primate psychophysics. Our findings implicate simple visual cues in the guidance of perception amidst natural noise, and highlight the potential for informing human vision via a fusion of psychophysical modelling and real-world behaviour. © 2017 The Author(s).
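The additive detection model described in this abstract can be sketched as an ordinary least-squares fit with two contrast predictors. This is a minimal illustration on synthetic data, not the authors' analysis; the coefficient values and variable names are hypothetical.

```python
import numpy as np

# Hypothetical synthetic data: each row is one stimulus/background pairing.
rng = np.random.default_rng(0)
luminance_contrast = rng.uniform(0, 1, 200)
saturation_contrast = rng.uniform(0, 1, 200)

# Assumed "true" additive model (illustrative coefficients, not the paper's):
# detection = b0 + b1 * luminance contrast + b2 * saturation contrast
detection = 0.1 + 0.4 * luminance_contrast + 0.3 * saturation_contrast

# Design matrix with an intercept column, solved by least squares.
X = np.column_stack([np.ones_like(luminance_contrast),
                     luminance_contrast, saturation_contrast])
coef, *_ = np.linalg.lstsq(X, detection, rcond=None)
print(np.round(coef, 3))  # recovers [0.1, 0.4, 0.3] on noiseless data
```

On real detection data the fit would of course include noise, and the relative sizes of the two slope estimates would quantify the "modest yet consistent" contribution of each cue.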
Effect of eye position during human visual-vestibular integration of heading perception.
Crane, Benjamin T
2017-09-01
Visual and inertial stimuli provide heading discrimination cues. Integration of these multisensory stimuli has been demonstrated to depend on their relative reliability. However, the reference frame of visual stimuli is eye centered while inertia is head centered, and it remains unclear how these are reconciled with combined stimuli. Seven human subjects completed a heading discrimination task consisting of a 2-s translation with a peak velocity of 16 cm/s. Eye position was varied between 0° and ±25° left/right. Experiments were done with inertial motion, visual motion, or a combined visual-inertial motion. Visual motion coherence varied between 35% and 100%. Subjects reported whether their perceived heading was left or right of the midline in a forced-choice task. With the inertial stimulus the eye position had an effect such that the point of subjective equality (PSE) shifted 4.6 ± 2.4° in the gaze direction. With the visual stimulus the PSE shift was 10.2 ± 2.2° opposite the gaze direction, consistent with retinotopic coordinates. Thus with eccentric eye positions the perceived inertial and visual headings were offset ~15°. During the visual-inertial conditions the PSE varied consistently with the relative reliability of these stimuli such that at low visual coherence the PSE was similar to that of the inertial stimulus and at high coherence it was closer to the visual stimulus. On average, the inertial stimulus was weighted near Bayesian ideal predictions, but there was significant deviation from ideal in individual subjects. These findings support visual and inertial cue integration occurring in independent coordinate systems. NEW & NOTEWORTHY In multiple cortical areas visual heading is represented in retinotopic coordinates while inertial heading is in body coordinates. It remains unclear whether multisensory integration occurs in a common coordinate system. 
The experiments address this using a multisensory integration task with eccentric gaze positions making the effect of coordinate systems clear. The results indicate that the coordinate systems remain separate to the perceptual level and that during the multisensory task the perception depends on relative stimulus reliability. Copyright © 2017 the American Physiological Society.
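The "Bayesian ideal" weighting this abstract tests can be written down directly: each cue contributes in proportion to its inverse variance, so the fused point of subjective equality slides toward the more reliable cue. A minimal sketch follows; the numeric inputs are illustrative stand-ins, not the paper's fitted values.

```python
import numpy as np

def fuse_headings(mu_visual, sigma_visual, mu_inertial, sigma_inertial):
    """Reliability-weighted (Bayesian ideal) fusion of two heading cues.
    Each cue is weighted by its inverse variance (its reliability)."""
    r_v = 1.0 / sigma_visual**2      # reliability of the visual cue
    r_i = 1.0 / sigma_inertial**2    # reliability of the inertial cue
    w_v = r_v / (r_v + r_i)          # visual weight; inertial gets 1 - w_v
    mu = w_v * mu_visual + (1 - w_v) * mu_inertial
    sigma = np.sqrt(1.0 / (r_v + r_i))  # fused estimate is more reliable
    return mu, sigma

# Illustrative example: gaze-eccentricity offsets the two cues' PSEs.
# A high-coherence (low-sigma) visual cue pulls the fused percept toward it.
mu, sigma = fuse_headings(mu_visual=-10.2, sigma_visual=2.0,
                          mu_inertial=4.6, sigma_inertial=6.0)
print(round(mu, 2), round(sigma, 2))
```

Lowering visual coherence (raising `sigma_visual`) moves the fused PSE back toward the inertial value, which is the qualitative pattern the combined visual-inertial conditions showed.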
Kreplin, Ute; Fairclough, Stephen H
2013-01-01
The contemplation of visual art requires attention to be directed to both external stimulus properties and internally generated thoughts. It has been proposed that the medial rostral prefrontal cortex (rPFC; BA10) plays a role in the maintenance of attention on external stimuli, whereas the lateral rPFC is associated with the preservation of attention on internal cognitions. An alternative hypothesis associates activation of the medial rPFC with internal cognitions related to the self during emotion regulation. The aim of the current study was to differentiate activation within the rPFC using functional near-infrared spectroscopy (fNIRS) during the viewing of visual art selected to induce positive and negative valence, viewed under two conditions: (1) emotional introspection and (2) external object identification. Thirty participants (15 female) were recruited. Sixteen pre-rated images representing either positive or negative valence were selected from an existing database of visual art. In one condition, participants were directed to engage in emotional introspection during picture viewing. The second condition involved a spot-the-difference task in which participants compared two almost identical images, a viewing strategy that directed attention to external properties of the stimuli. The analysis revealed a significant increase in oxygenated haemoglobin in the medial rPFC during viewing of positive images compared to negative images. This finding suggests that the rPFC is involved during positive evaluations of visual art, possibly related to judgments of pleasantness or attraction. The fNIRS data revealed no significant main effect between the two viewing conditions, which seemed to indicate that the emotional impact of the stimuli remained unaffected by viewing strategy.
Magosso, Elisa; Bertini, Caterina; Cuppini, Cristiano; Ursino, Mauro
2016-10-01
Hemianopic patients retain some abilities to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, while preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network is able to reproduce the audiovisual phenomena in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contributions of specific neural circuits and areas via sensitivity analyses. The study suggests i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena; ii) a different role of the visual cortices in the two phenomena: auditory enhancement of conscious visual detection is conditional on surviving V1 islands, whereas visual enhancement of auditory localization persists even after complete V1 damage.
The present study may contribute to advance understanding of the audiovisual dialogue between cortical and subcortical structures in healthy and unisensory deficit conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Liefting, Maartje; Hoedjes, Katja M; Lann, Cécile Le; Smid, Hans M; Ellers, Jacintha
2018-05-16
We are only starting to understand how variation in cognitive ability can result from local adaptations to environmental conditions. A major question in this regard is to what extent selection on cognitive ability in a specific context affects that ability in general through correlated evolution. To address this question we performed artificial selection on visual associative learning in female Nasonia vitripennis wasps. Using appetitive conditioning in which a visual stimulus was offered in association with a host reward, the ability to learn visual associations was enhanced within 10 generations of selection. To test for correlated evolution affecting this form of learning, the ability to readily form learned associations in females was also tested using an olfactory instead of a visual stimulus in the appetitive conditioning. Additionally, we assessed whether the improved associative learning ability was expressed across sexes by colour-conditioning males with a mating reward. Both females and males from the selected lines consistently demonstrated an increased associative learning ability compared to the control lines, independent of learning context or conditioned stimulus. No difference in relative volume of brain neuropils was detected between the selected and control lines. This article is protected by copyright. All rights reserved.
Lazar, Aurel A; Slutskiy, Yevgeniy B; Zhou, Yiyin
2015-03-01
Past work demonstrated how monochromatic visual stimuli could be faithfully encoded and decoded under Nyquist-type rate conditions. Color visual stimuli were then traditionally encoded and decoded in multiple separate monochromatic channels. The brain, however, appears to mix information about color channels at the earliest stages of the visual system, including the retina itself. If information about color is mixed and encoded by a common pool of neurons, how can colors be demixed and perceived? We present Color Video Time Encoding Machines (Color Video TEMs) for encoding color visual stimuli that take into account a variety of color representations within a single neural circuit. We then derive a Color Video Time Decoding Machine (Color Video TDM) algorithm for color demixing and reconstruction of color visual scenes from spikes produced by a population of visual neurons. In addition, we formulate Color Video Channel Identification Machines (Color Video CIMs) for functionally identifying color visual processing performed by a spiking neural circuit. Furthermore, we derive a duality between TDMs and CIMs that unifies the two and leads to a general theory of neural information representation for stereoscopic color vision. We provide examples demonstrating that a massively parallel color visual neural circuit can be first identified with arbitrary precision and its spike trains can be subsequently used to reconstruct the encoded stimuli. We argue that evaluation of the functional identification methodology can be effectively and intuitively performed in the stimulus space. In this space, a signal reconstructed from spike trains generated by the identified neural circuit can be compared to the original stimulus. Copyright © 2014 Elsevier Ltd. All rights reserved.
Effect of attentional load on audiovisual speech perception: evidence from ERPs
Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E.; Soto-Faraco, Salvador; Tiippana, Kaisa
2014-01-01
Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech. PMID:25076922
How visual timing and form information affect speech and non-speech processing.
Kim, Jeesun; Davis, Chris
2014-10-01
Auditory speech processing is facilitated when the talker's face/head movements are seen. This effect is typically explained in terms of visual speech providing form and/or timing information. We determined the effect of both types of information on a speech/non-speech task (non-speech stimuli were spectrally rotated speech). All stimuli were presented paired with the talker's static or moving face. Two types of moving face stimuli were used: full-face versions (both spoken form and timing information available) and modified face versions (only timing information provided by peri-oral motion available). The results showed that the peri-oral timing information facilitated response time for speech and non-speech stimuli compared to a static face. An additional facilitatory effect was found for full-face versions compared to the timing condition; this effect only occurred for speech stimuli. We propose the timing effect was due to cross-modal phase resetting; the form effect to cross-modal priming. Copyright © 2014 Elsevier Inc. All rights reserved.
Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O
2008-09-16
Event-related potential studies revealed an early posterior negativity (EPN) for emotional compared to neutral pictures. Exploring the emotion-attention relationship, a previous study observed that a primary visual discrimination task interfered with the emotional modulation of the EPN component. To specify the locus of interference, the present study assessed the fate of selective visual emotion processing while attention is directed towards the auditory modality. While simply viewing a rapid and continuous stream of pleasant, neutral, and unpleasant pictures in one experimental condition, processing demands of a concurrent auditory target discrimination task were systematically varied in three further experimental conditions. Participants successfully performed the auditory task as revealed by behavioral performance and selected event-related potential components. Replicating previous results, emotional pictures were associated with a larger posterior negativity compared to neutral pictures. Of main interest, increasing demands of the auditory task did not modulate the selective processing of emotional visual stimuli. With regard to the locus of interference, selective emotion processing as indexed by the EPN does not seem to reflect shared processing resources of visual and auditory modality.
Electrophysiological spatiotemporal dynamics during implicit visual threat processing.
DeLaRosa, Bambi L; Spence, Jeffrey S; Shakal, Scott K M; Motes, Michael A; Calley, Clifford S; Calley, Virginia I; Hart, John; Kraut, Michael A
2014-11-01
Numerous studies have found evidence for corticolimbic theta band electroencephalographic (EEG) oscillations in the neural processing of visual stimuli perceived as threatening. However, varying temporal and topographical patterns have emerged, possibly due to varying arousal levels of the stimuli. In addition, recent studies suggest neural oscillations in delta, theta, alpha, and beta-band frequencies play a functional role in information processing in the brain. This study implemented a data-driven PCA based analysis investigating the spatiotemporal dynamics of electroencephalographic delta, theta, alpha, and beta-band frequencies during an implicit visual threat processing task. While controlling for the arousal dimension (the intensity of emotional activation), we found several spatial and temporal differences for threatening compared to nonthreatening visual images. We detected an early posterior increase in theta power followed by a later frontal increase in theta power, greatest for the threatening condition. There was also a consistent left lateralized beta desynchronization for the threatening condition. Our results provide support for a dynamic corticolimbic network, with theta and beta band activity indexing processes pivotal in visual threat processing. Published by Elsevier Inc.
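The band definitions this abstract works with (delta, theta, alpha, beta) can be made concrete with a simple periodogram-based band-power computation. This is a minimal stand-in for the study's PCA-based pipeline, on synthetic data; the function and band edges are illustrative conventions, not the authors' exact parameters.

```python
import numpy as np

# Conventional EEG band edges in Hz (boundaries vary slightly across labs).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power(signal, fs):
    """Mean spectral power of a 1-D epoch in each canonical EEG band,
    from a plain FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# Synthetic 1-s epoch: a 6 Hz (theta) oscillation plus weak noise.
fs = 256
t = np.arange(fs) / fs
epoch = (np.sin(2 * np.pi * 6 * t)
         + 0.05 * np.random.default_rng(1).standard_normal(fs))
powers = band_power(epoch, fs)
print(max(powers, key=powers.get))  # theta dominates this epoch
```

Tracking such band powers per electrode over time is the raw material from which topographic effects like the early posterior and later frontal theta increases are read out.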
Hollingworth, Andrew; Hwang, Seongmin
2013-10-19
We examined the conditions under which a feature value in visual working memory (VWM) recruits visual attention to matching stimuli. Previous work has suggested that VWM supports two qualitatively different states of representation: an active state that interacts with perceptual selection and a passive (or accessory) state that does not. An alternative hypothesis is that VWM supports a single form of representation, with the precision of feature memory controlling whether or not the representation interacts with perceptual selection. The results of three experiments supported the dual-state hypothesis. We established conditions under which participants retained a relatively precise representation of a particular colour. If the colour was immediately task relevant, it reliably recruited attention to matching stimuli. However, if the colour was not immediately task relevant, it failed to interact with perceptual selection. Feature maintenance in VWM is not necessarily equivalent to feature-based attentional selection.
Selective attention to visual compound stimuli in squirrel monkeys (Saimiri sciureus).
Ploog, Bertram O
2011-05-01
Five squirrel monkeys served under a simultaneous discrimination paradigm with visual compound stimuli that allowed measurement of excitatory and inhibitory control exerted by individual stimulus components (form and luminance/"color"), which could not be presented in isolation (i.e., form could not be presented without color). After performance exceeded a criterion of 75% correct during training, unreinforced test trials with stimuli comprising recombined training stimulus components were interspersed while the overall reinforcement rate remained constant for training and testing. The training-testing series was then repeated with reversed reinforcement contingencies. The findings were that color acquired greater excitatory control than form under the original condition, that no such difference was found for the reversal condition or for inhibitory control under either condition, and that overall inhibitory control was less pronounced than excitatory control. The remarkably accurate performance throughout suggested that a forced 4-s delay between the stimulus presentation and the opportunity to respond was effective in reducing "impulsive" responding, which has implications for suppressing impulsive responding in children with autism and with attention deficit disorder. Copyright © 2011 Elsevier B.V. All rights reserved.
Perceptual load corresponds with factors known to influence visual search.
Roper, Zachary J J; Cosman, Joshua D; Vecera, Shaun P
2013-10-01
One account of the early versus late selection debate in attention proposes that perceptual load determines the locus of selection. Attention selects stimuli at a late processing level under low-load conditions but selects stimuli at an early level under high-load conditions. Despite the successes of perceptual load theory, a noncircular definition of perceptual load remains elusive. We investigated the factors that influence perceptual load by using manipulations that have been studied extensively in visual search, namely target-distractor similarity and distractor-distractor similarity. Consistent with previous work, search was most efficient when targets and distractors were dissimilar and the displays contained homogeneous distractors; search became less efficient when target-distractor similarity increased irrespective of display heterogeneity. Importantly, we used these same stimuli in a typical perceptual load task that measured attentional spillover to a task-irrelevant flanker. We found a strong correspondence between search efficiency and perceptual load; stimuli that generated efficient searches produced flanker interference effects, suggesting that such displays involved low perceptual load. Flanker interference effects were reduced in displays that produced less efficient searches. Furthermore, our results demonstrate that search difficulty, as measured by search intercept, has little bearing on perceptual load. We conclude that rather than be arbitrarily defined, perceptual load might be defined by well-characterized, continuous factors that influence visual search. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Ren, Yanna; Yang, Weiping; Nakahashi, Kohei; Takahashi, Satoshi; Wu, Jinglong
2017-02-01
Although neuronal studies have shown that audiovisual integration is regulated by temporal factors, there is still little knowledge about the impact of temporal factors on audiovisual integration in older adults. To clarify how stimulus onset asynchrony (SOA) between auditory and visual stimuli modulates age-related audiovisual integration, 20 younger adults (21-24 years) and 20 older adults (61-80 years) were instructed to perform an auditory or visual stimulus discrimination experiment. The results showed that in younger adults, audiovisual integration was altered from an enhancement (AV, A ± 50 V) to a depression (A ± 150 V). In older adults, the pattern of alteration with expanding SOA was similar to that of younger adults; however, older adults showed significantly delayed onset of the time window of integration and delayed peak latency in all conditions, which further demonstrated that audiovisual integration was delayed more severely as SOA expanded, especially in the peak latency for V-preceded-A conditions. Our study suggested that audiovisual facilitative integration occurs only within a certain SOA range (e.g., -50 to 50 ms) in both younger and older adults. Moreover, our results confirm that responses in older adults were slowed and provide empirical evidence that integration ability is much more sensitive to the temporal alignment of audiovisual stimuli in older adults.
Brocher, Andreas; Harbecke, Raphael; Graf, Tim; Memmert, Daniel; Hüttermann, Stefanie
2018-03-07
We tested the link between pupil size and the task effort involved in covert shifts of visual attention. The goal of this study was to establish pupil size as a marker of attentional shifting in the absence of luminance manipulations. In three experiments, participants evaluated two stimuli that were presented peripherally, appearing equidistant from and on opposite sides of eye fixation. The angle between eye fixation and the peripherally presented target stimuli varied from 12.5° to 42.5°. The evaluation of more distant stimuli led to poorer performance than did the evaluation of more proximal stimuli throughout our study, confirming that the former required more effort than the latter. In addition, in Experiment 1 we found that pupil size increased with increasing angle and that this effect could not be reduced to the operation of low-level visual processes in the task. In Experiment 2 the pupil dilated more strongly overall when participants evaluated the target stimuli, which required shifts of attention, than when they merely reported on the target's presence versus absence. Both conditions yielded larger pupils for more distant than for more proximal stimuli, however. In Experiment 3, we manipulated task difficulty more directly, by changing the contrast at which the target stimuli were presented. We replicated the results from Experiment 1 only with the high-contrast stimuli. With stimuli of low contrast, ceiling effects in pupil size were observed. Our data show that the link between task effort and pupil size can be used to track the degree to which an observer covertly shifts attention to or detects stimuli in peripheral vision.
Conscious control over the content of unconscious cognition.
Kunde, Wilfried; Kiesel, Andrea; Hoffmann, Joachim
2003-06-01
Visual stimuli (primes) presented too briefly to be consciously identified can nevertheless affect responses to subsequent stimuli - an instance of unconscious cognition. There is a lively debate as to whether such priming effects originate from unconscious semantic processing of the primes or from reactivation of learned motor responses that conscious stimuli afford during preceding practice. In four experiments we demonstrate that unconscious stimuli owe their impact neither to automatic semantic categorization nor to memory traces of preceding stimulus-response episodes, but to their match with pre-specified cognitive action-trigger conditions. The intentional creation of such triggers allows actors to control the way unconscious stimuli bias their behaviour.
Crossmodal attention switching: auditory dominance in temporal discrimination tasks.
Lukas, Sarah; Philipp, Andrea M; Koch, Iring
2014-11-01
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are differentially effectively processed depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.
Cross-modal illusory conjunctions between vision and touch.
Cinel, Caterina; Humphreys, Glyn W; Poli, Riccardo
2002-10-01
Cross-modal illusory conjunctions (ICs) happen when, under conditions of divided attention, felt textures are reported as being seen or vice versa. Experiments provided evidence for these errors, demonstrated that ICs are more frequent if tactile and visual stimuli are in the same hemispace, and showed that ICs still occur under forced-choice conditions but do not occur when attention to the felt texture is increased. Cross-modal ICs were also found in a patient with parietal damage even with relatively long presentations of visual stimuli. The data are consistent with there being cross-modal integration of sensory information, with the modality of origin sometimes being misattributed when attention is constrained. The empirical conclusions from the experiments are supported by formal models.
Virtue, Sandra; Schutzenhofer, Michael; Tomkins, Blaine
2017-07-01
Although a left hemisphere advantage is usually evident during language processing, the right hemisphere is highly involved during the processing of weakly constrained inferences. However, currently little is known about how the emotional valence of environmental stimuli influences the hemispheric processing of these inferences. In the current study, participants read texts promoting either strongly or weakly constrained predictive inferences and performed a lexical decision task on inference-related targets presented to the left visual field-right hemisphere or the right visual field-left hemisphere. While reading these texts, participants either listened to dissonant music (i.e., the music condition) or did not listen to music (i.e., the no music condition). In the no music condition, the left hemisphere showed an advantage for strongly constrained inferences compared to weakly constrained inferences, whereas the right hemisphere showed high facilitation for both strongly and weakly constrained inferences. In the music condition, both hemispheres showed greater facilitation for strongly constrained inferences than for weakly constrained inferences. These results suggest that negatively valenced stimuli (such as dissonant music) selectively influence the right hemisphere's processing of weakly constrained inferences during reading.
NASA Technical Reports Server (NTRS)
Haines, R. F.; Fitzgerald, J. W.; Rositano, S. A. (Inventor)
1976-01-01
An automated visual examination apparatus for measuring visual sensitivity and mapping blind spot location, including a projection system that displays a series of visual stimuli to a patient, a response switch that enables the patient to indicate his or her reaction to the stimuli, and a recording system responsive to both the visual stimuli per se and the patient's response. The recording system thereby provides a correlated permanent record of both stimuli and responses from which a substantive and readily apparent visual evaluation can be made.
Structural imbalance promotes behavior analogous to aesthetic preference in domestic chicks.
Elliott, Mark A; Salva, Orsola Rosa; Mulcahy, Paul; Regolin, Lucia
2012-01-01
Visual images may be judged 'aesthetic' when their positioning appears imbalanced. An apparent imbalance may signify an as yet incomplete action or event requiring more detailed processing. As such, it may engage phylogenetically ancient stimulus-response mechanisms such as those mediating attentional deployment. We studied preferences for structural balance or imbalance in week-old domestic chicks (Gallus gallus), using a conditioning procedure to reinforce pecking at either "aligned" (balanced) or "misaligned" (imbalanced) training stimuli. A testing phase with novel balanced and imbalanced stimuli established whether chicks would retain their conditioned behavior or revert to chance responding. Whereas those trained on aligned stimuli were equally likely to choose aligned or misaligned stimuli, chicks trained on misaligned stimuli maintained the trained preference. Our results are consistent with the idea that the coding of structural imbalance is primary and even overrides classical conditioning. Generalized to humans, these results suggest aesthetic judgments based upon structural imbalance may rely on evolutionarily ancient mechanisms shared by different vertebrate species.
Dissociating verbal and nonverbal audiovisual object processing.
Hocking, Julia; Price, Cathy J
2009-02-01
This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.
Pleasant music improves visual attention in patients with unilateral neglect after stroke.
Chen, Mei-Ching; Tsai, Pei-Luen; Huang, Yu-Ting; Lin, Keh-Chung
2013-01-01
To investigate whether listening to pleasant music improves visual attention to and awareness of contralesional stimuli in patients with unilateral neglect after stroke. A within-subject design was used with 19 participants with unilateral neglect following a right hemisphere stroke. Participants were tested in three conditions (pleasant music, unpleasant music and white noise) within 1 week. All musical pieces were chosen by the participants. In each condition, participants were asked to complete three sub-tests of the Behavioural Inattention Test (the Star Cancellation Test, the Line Bisection Test and the Picture Scanning Test) and a visual exploration task with everyday scenes. Eye movements in the visual exploration task were recorded simultaneously. Mood and arousal induced by different auditory stimuli were assessed using visual analogue scales, heart rate and galvanic skin response. Compared with unpleasant music and white noise, participants rated their moods as more positive and arousal as higher with pleasant music, and also showed significant improvement on all tasks and in the eye movement data, except the Line Bisection Test. The findings suggest that pleasant music can improve visual attention in patients with unilateral neglect after stroke. Additional research using randomized controlled trials is required to validate these findings.
Visual and Spatial Mental Imagery: Dissociable Systems of Representation.
1987-08-07
identification of visual stimuli (the visual agnosias) could occur independently of impairments in their spatial localization (Pötzl, 1928; Lange, 1936). Patients... of brain damage that is generally associated with visual agnosia. Details of L.H.'s medical... This approach is nowhere more called for than in the study of subjects with visual object agnosia, a condition that is both extremely rare and somewhat
Gola, Mateusz; Wordecha, Małgorzata; Marchewka, Artur; Sescousse, Guillaume
2016-01-01
There is an increasing number of neuroimaging studies using visual sexual stimuli (VSS), especially within the emerging field of research on compulsive sexual behaviors (CSB). A central question in this field is whether behaviors such as excessive pornography consumption share common brain mechanisms with widely studied substance and behavioral addictions. Depending on how VSS are conceptualized, different predictions can be formulated within the frameworks of Reinforcement Learning or Incentive Salience Theory, where a crucial distinction is made between conditioned and unconditioned stimuli (related to reward anticipation vs. reward consumption, respectively). Surveying 40 recent human neuroimaging studies, we show that there is ambiguity about the conceptualization of VSS. Therefore, we feel that it is important to address the question of whether VSS should be considered as conditioned stimuli (cue) or unconditioned stimuli (reward). Here we present our own perspective, which is that in most laboratory settings VSS play the role of a reward, as evidenced by: (1) experience of pleasure while watching VSS, possibly accompanied by genital reaction; (2) reward-related brain activity correlated with these pleasurable feelings in response to VSS; (3) a willingness to exert effort to view VSS, similar to that for other rewarding stimuli such as money; and (4) conditioning for cues predictive of VSS. We hope that this perspective article will initiate a scientific discussion on this important and overlooked topic and increase attention to appropriate interpretation of the results of human neuroimaging studies using VSS. PMID:27574507
Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong
2013-01-01
This article aims to investigate whether auditory stimuli in the horizontal plane, particularly originating from behind the participant, affect audiovisual integration by using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli that included both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in front of or behind the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160-200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirmed that audiovisual integration was elicited even when auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than from either side.
Visual-Auditory Integration during Speech Imitation in Autism
ERIC Educational Resources Information Center
Williams, Justin H. G.; Massaro, Dominic W.; Peel, Natalie J.; Bosseler, Alexis; Suddendorf, Thomas
2004-01-01
Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional "mirror neuron" systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a "virtual" head (Baldi), delivered speech stimuli for…
NASA Technical Reports Server (NTRS)
Haines, R. F.; Fitzgerald, J. W.; Rositano, S. A. (Inventor)
1973-01-01
An automated visual examination apparatus for measuring visual sensitivity and mapping blind spot location is described. The apparatus includes a projection system for displaying to a patient a series of visual stimuli, a response switch enabling him to indicate his reaction to the stimuli, and a recording system responsive to both the visual stimuli per se and the patient's response. The recording system provides a correlated permanent record of both stimuli and response from which a substantive and readily apparent visual evaluation can be made.
Jao Keehn, R Joanne; Sanchez, Sandra S; Stewart, Claire R; Zhao, Weiqi; Grenesko-Stevens, Emily L; Keehn, Brandon; Müller, Ralph-Axel
2017-01-01
Autism spectrum disorders (ASD) are pervasive developmental disorders characterized by impairments in language development and social interaction, along with restricted and stereotyped behaviors. These behaviors often include atypical responses to sensory stimuli; some children with ASD are easily overwhelmed by sensory stimuli, while others may seem unaware of their environment. Vision and audition are two sensory modalities important for social interactions and language, and are differentially affected in ASD. In the present study, 16 children and adolescents with ASD and 16 typically developing (TD) participants matched for age, gender, nonverbal IQ, and handedness were tested using a mixed event-related/blocked functional magnetic resonance imaging paradigm to examine basic perceptual processes that may form the foundation for later-developing cognitive abilities. Auditory (high or low pitch) and visual conditions (dot located high or low in the display) were presented, and participants indicated whether the stimuli were "high" or "low." Results for the auditory condition showed downregulated activity of the visual cortex in the TD group, but upregulation in the ASD group. This atypical activity in visual cortex was associated with autism symptomatology. These findings suggest atypical crossmodal (auditory-visual) modulation linked to sociocommunicative deficits in ASD, in agreement with the general hypothesis of low-level sensorimotor impairments affecting core symptomatology. Autism Res 2017, 10: 130-143. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.
A sLORETA study for gaze-independent BCI speller.
Xingwei An; Jinwen Wei; Shuang Liu; Dong Ming
2017-07-01
EEG-based BCI (brain-computer interface) spellers, especially gaze-independent BCI spellers, have become a hot topic in recent years. They provide a direct spelling device via a non-muscular channel for people with severe motor impairments and limited gaze movement. The brain needs to conduct both stimulus-driven and stimulus-related attention in the rapidly presented paradigms used by such BCI speller applications. Few researchers have studied the mechanism of the brain's response to such rapidly presented BCI applications. In this study, we compared the distribution of brain activation in visual, auditory, and audio-visual combined stimulus paradigms using sLORETA (standardized low-resolution brain electromagnetic tomography). Between-group comparisons showed the importance of visual and auditory stimuli in the audio-visual combined paradigm. Both contribute to the activation of brain regions, with visual stimuli being the predominant stimuli. Visual-stimulus-related brain activity was mainly located in the parietal and occipital lobes, whereas responses in the frontal-temporal lobes might be caused by auditory stimuli. These regions played an important role in audio-visual bimodal paradigms. These new findings are important for future studies of ERP spellers as well as of the brain's processing of rapidly presented stimuli.
Auditory and visual spatial impression: Recent studies of three auditoria
NASA Astrophysics Data System (ADS)
Nguyen, Andy; Cabrera, Densil
2004-10-01
Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression, thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.
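The auralization step described above (convolving an anechoic recording with a measured binaural impulse response pair) reduces to one convolution per ear. A minimal sketch in Python; the signal names and normalization choice are illustrative, not taken from the study:

```python
import numpy as np
from scipy.signal import fftconvolve

def auralize(anechoic, brir_left, brir_right):
    """Convolve a mono anechoic signal with a binaural impulse
    response pair to simulate listening at the measured position.

    Returns a (2, n) array of left/right channels, peak-normalized
    so the result can be written to a sound file without clipping.
    """
    left = fftconvolve(anechoic, brir_left)
    right = fftconvolve(anechoic, brir_right)
    binaural = np.stack([left, right])
    return binaural / np.max(np.abs(binaural))
```

With a unit-impulse "room response," the output is simply the (normalized) input signal, which makes the function easy to sanity-check.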
Audio-Visual, Visuo-Tactile and Audio-Tactile Correspondences in Preschoolers.
Nava, Elena; Grassi, Massimo; Turati, Chiara
2016-01-01
Interest in crossmodal correspondences has recently seen a renaissance thanks to numerous studies in human adults. Yet, still very little is known about crossmodal correspondences in children, particularly in sensory pairings other than audition and vision. In the current study, we investigated whether 4-5-year-old children match auditory pitch to the spatial motion of visual objects (audio-visual condition). In addition, we investigated whether this correspondence extends to touch, i.e., whether children also match auditory pitch to the spatial motion of touch (audio-tactile condition) and the spatial motion of visual objects to touch (visuo-tactile condition). In two experiments, two different groups of children were asked to indicate which of two stimuli fitted best with a centrally located third stimulus (Experiment 1), or to report whether two presented stimuli fitted together well (Experiment 2). We found sensitivity to the congruency of all of the sensory pairings only in Experiment 2, suggesting that only under specific circumstances can these correspondences be observed. Our results suggest that pitch-height correspondences for audio-visual and audio-tactile combinations may still be weak in preschool children, and speculate that this could be due to immature linguistic and auditory cues that are still developing at age five.
Electrophysiological evidence for a self-processing advantage during audiovisual speech integration.
Treille, Avril; Vilain, Coriandre; Kandel, Sonia; Sato, Marc
2017-09-01
Previous electrophysiological studies have provided strong evidence for early multisensory integrative mechanisms during audiovisual speech perception. From these studies, one unanswered issue is whether hearing our own voice and seeing our own articulatory gestures facilitate speech perception, possibly through a better processing and integration of sensory inputs with our own sensory-motor knowledge. The present EEG study examined the impact of self-knowledge during the perception of auditory (A), visual (V) and audiovisual (AV) speech stimuli that were previously recorded from the participant or from a speaker he/she had never met. Audiovisual interactions were estimated by comparing N1 and P2 auditory evoked potentials during the bimodal condition (AV) with the sum of those observed in the unimodal conditions (A + V). In line with previous EEG studies, our results revealed an amplitude decrease of P2 auditory evoked potentials in AV compared to A + V conditions. Crucially, a temporal facilitation of N1 responses was observed during the visual perception of self speech movements compared to those of another speaker. This facilitation was negatively correlated with the saliency of visual stimuli. These results provide evidence for a temporal facilitation of the integration of auditory and visual speech signals when the visual situation involves our own speech gestures.
Mismatch Negativity with Visual-only and Audiovisual Speech
Ponton, Curtis W.; Bernstein, Lynne E.; Auer, Edward T.
2009-01-01
The functional organization of cortical speech processing is thought to be hierarchical, increasing in complexity and proceeding from primary sensory areas centrifugally. The current study used the mismatch negativity (MMN) obtained with electrophysiology (EEG) to investigate the early latency period of visual speech processing under both visual-only (VO) and audiovisual (AV) conditions. Current density reconstruction (CDR) methods were used to model the cortical MMN generator locations. MMNs were obtained with VO and AV speech stimuli at early latencies (approximately 82-87 ms peak in time waveforms relative to the acoustic onset) and in regions of the right lateral temporal and parietal cortices. Latencies were consistent with bottom-up processing of the visible stimuli. We suggest that a visual pathway extracts phonetic cues from visible speech, and that previously reported effects of AV speech in classical early auditory areas, given later reported latencies, could be attributable to modulatory feedback from visual phonetic processing. PMID:19404730
Singh, J Suzanne; Capozzoli, Michelle C; Dodd, Michael D; Hope, Debra A
2015-01-01
A growing theoretical and research literature suggests that trait and state social anxiety can predict attentional patterns in the presence of emotional stimuli. The current study adds to this literature by examining the effects of state anxiety on visual attention and testing the vigilance-avoidance hypothesis, using a method of continuous visual attentional assessment. Participants were 91 undergraduate college students with high or low trait fear of negative evaluation (FNE), a core aspect of social anxiety, who were randomly assigned to either a high or low state anxiety condition. Participants engaged in a free view task in which pairs of emotional facial stimuli were presented and eye movements were continuously monitored. Overall, participants with high FNE avoided angry stimuli and participants with high state anxiety attended to positive stimuli. Participants with high state anxiety and high FNE were avoidant of angry faces, whereas participants with low state and low FNE exhibited a bias toward angry faces. The study provided partial support for the vigilance-avoidance hypothesis. The findings add to the mixed results in the literature that suggest that both positive and negative emotional stimuli may be important in understanding the complex attention patterns associated with social anxiety. Clinical implications and suggestions for future research are discussed.
Hannon, Erin E; Schachner, Adena; Nave-Blodgett, Jessica E
2017-07-01
Movement to music is a universal human behavior, yet little is known about how observers perceive audiovisual synchrony in complex musical displays such as a person dancing to music, particularly during infancy and childhood. In the current study, we investigated how perception of musical audiovisual synchrony develops over the first year of life. We habituated infants to a video of a person dancing to music and subsequently presented videos in which the visual track was matched (synchronous) or mismatched (asynchronous) with the audio track. In a visual-only control condition, we presented the same visual stimuli with no sound. In Experiment 1, we found that older infants (8-12months) exhibited a novelty preference for the mismatched movie when both auditory information and visual information were available and showed no preference when only visual information was available. By contrast, younger infants (5-8months) in Experiment 2 did not discriminate matching stimuli from mismatching stimuli. This suggests that the ability to perceive musical audiovisual synchrony may develop during the second half of the first year of infancy. Copyright © 2017 Elsevier Inc. All rights reserved.
Evaluation of Postural Control in Patients with Glaucoma Using a Virtual Reality Environment.
Diniz-Filho, Alberto; Boer, Erwin R; Gracitelli, Carolina P B; Abe, Ricardo Y; van Driel, Nienke; Yang, Zhiyong; Medeiros, Felipe A
2015-06-01
To evaluate postural control using a dynamic virtual reality environment and the relationship between postural metrics and history of falls in patients with glaucoma. Cross-sectional study. The study involved 42 patients with glaucoma with repeatable visual field defects on standard automated perimetry (SAP) and 38 control healthy subjects. Patients underwent evaluation of postural stability by a force platform during presentation of static and dynamic visual stimuli on stereoscopic head-mounted goggles. The dynamic visual stimuli presented rotational and translational ecologically valid peripheral background perturbations. Postural stability was also tested in a completely dark field to assess somatosensory and vestibular contributions to postural control. History of falls was evaluated by a standard questionnaire. Torque moments around the center of foot pressure on the force platform were measured, and the standard deviations of the torque moments (STD) were calculated as a measurement of postural stability and reported in Newton meters (Nm). The association with history of falls was investigated using Poisson regression models. Age, gender, body mass index, severity of visual field defect, best-corrected visual acuity, and STD on dark field condition were included as confounding factors. Patients with glaucoma had larger overall STD than controls during both translational (5.12 ± 2.39 Nm vs. 3.85 ± 1.82 Nm, respectively; P = 0.005) and rotational stimuli (5.60 ± 3.82 Nm vs. 3.93 ± 2.07 Nm, respectively; P = 0.022). Postural metrics obtained during dynamic visual stimuli performed better in explaining history of falls compared with those obtained in static and dark field condition. In the multivariable model, STD values in the mediolateral direction during translational stimulus were significantly associated with a history of falls in patients with glaucoma (incidence rate ratio, 1.85; 95% confidence interval, 1.30-2.63; P = 0.001). 
The study presented and validated a novel paradigm for evaluation of balance control in patients with glaucoma on the basis of the assessment of postural reactivity to dynamic visual stimuli using a virtual reality environment. The newly developed metrics were associated with a history of falls and may help to provide a better understanding of balance control in patients with glaucoma. Copyright © 2015 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
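The main outcome measure here, the standard deviation (STD) of torque moments around the center of foot pressure, is a direct computation over the force-platform time series. A minimal sketch, with the sample data purely illustrative:

```python
import numpy as np

def sway_std(torque_nm):
    """Postural stability index: sample standard deviation (in Nm)
    of a torque-moment time series from a force platform. Larger
    values indicate greater sway, i.e. less stable posture."""
    return float(np.std(np.asarray(torque_nm, dtype=float), ddof=1))
```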
Shared and distinct factors driving attention and temporal processing across modalities
Berry, Anne S.; Li, Xu; Lin, Ziyong; Lustig, Cindy
2013-01-01
In addition to the classic finding that “sounds are judged longer than lights,” the timing of auditory stimuli is often more precise and accurate than is the timing of visual stimuli. In cognitive models of temporal processing, these modality differences are explained by positing that auditory stimuli more automatically capture and hold attention, more efficiently closing an attentional switch that allows the accumulation of pulses marking the passage of time (Block & Zakay, 1997; Meck, 1991; Penney, 2003). However, attention is a multifaceted construct, and there has been little attempt to determine which aspects of attention may be related to modality effects. We used visual and auditory versions of the Continuous Temporal Expectancy Task (CTET; O'Connell et al., 2009) a timing task previously linked to behavioral and electrophysiological measures of mind-wandering and attention lapses, and tested participants with or without the presence of a video distractor. Performance in the auditory condition was generally superior to that in the visual condition, replicating standard results in the timing literature. The auditory modality was also less affected by declines in sustained attention indexed by declines in performance over time. In contrast, distraction had an equivalent impact on performance in the two modalities. Analysis of individual differences in performance revealed further differences between the two modalities: Poor performance in the auditory condition was primarily related to boredom whereas poor performance in the visual condition was primarily related to distractibility. These results suggest that: 1) challenges to different aspects of attention reveal both modality-specific and nonspecific effects on temporal processing, and 2) different factors drive individual differences when testing across modalities. PMID:23978664
Reaching nearby sources: comparison between real and virtual sound and visual targets
Parseihian, Gaëtan; Jouffrais, Christophe; Katz, Brian F. G.
2014-01-01
Sound localization studies over the past century have predominantly been concerned with directional accuracy for far-field sources. Few studies have examined the condition of near-field sources and distance perception. The current study concerns localization and pointing accuracy by examining source positions in the peripersonal space, specifically those associated with a typical tabletop surface. Accuracy is studied with respect to the reporting hand (dominant or secondary) for auditory sources. Results show no effect on the reporting hand with azimuthal errors increasing equally for the most extreme source positions. Distance errors show a consistent compression toward the center of the reporting area. A second evaluation is carried out comparing auditory and visual stimuli to examine any bias in reporting protocol or biomechanical difficulties. No common bias error was observed between auditory and visual stimuli indicating that reporting errors were not due to biomechanical limitations in the pointing task. A final evaluation compares real auditory sources and anechoic condition virtual sources created using binaural rendering. Results showed increased azimuthal errors, with virtual source positions being consistently overestimated to more lateral positions, while no significant distance perception was observed, indicating a deficiency in the binaural rendering condition relative to the real stimuli situation. Various potential reasons for this discrepancy are discussed with several proposals for improving distance perception in peripersonal virtual environments. PMID:25228855
Kasties, Nils; Starosta, Sarah; Güntürkün, Onur; Stüttgen, Maik C.
2016-01-01
Animals exploit visual information to identify objects, form stimulus-reward associations, and prepare appropriate behavioral responses. The nidopallium caudolaterale (NCL), an associative region of the avian endbrain, contains neurons exhibiting prominent response modulation during presentation of reward-predicting visual stimuli, but it is unclear whether neural activity represents valuation signals, stimulus properties, or sensorimotor contingencies. To test the hypothesis that NCL neurons represent stimulus value, we subjected pigeons to a Pavlovian sign-tracking paradigm in which visual cues predicted rewards differing in magnitude (large vs. small) and delay to presentation (short vs. long). Subjects’ strength of conditioned responding to visual cues reliably differentiated between predicted reward types and thus indexed valuation. The majority of NCL neurons discriminated between visual cues, with discriminability peaking shortly after stimulus onset and being maintained at lower levels throughout the stimulus presentation period. However, while some cells’ firing rates correlated with reward value, such neurons were not more frequent than expected by chance. Instead, neurons formed discernible clusters which differed in their preferred visual cue. We propose that this activity pattern constitutes a prerequisite for using visual information in more complex situations e.g. requiring value-based choices. PMID:27762287
Using frequency tagging to quantify attentional deployment in a visual divided attention task.
Toffanin, Paolo; de Jong, Ritske; Johnson, Addie; Martens, Sander
2009-06-01
Frequency tagging is an EEG method based on the quantification of the steady state visual evoked potential (SSVEP) elicited from stimuli which flicker with a distinctive frequency. Because the amplitude of the SSVEP is modulated by attention such that attended stimuli elicit higher SSVEP amplitudes than do ignored stimuli, the method has been used to investigate the neural mechanisms of spatial attention. However, up to now it has not been shown whether the amplitude of the SSVEP is sensitive to gradations of attention, and there has been debate about whether attention effects on the SSVEP are dependent on the tagging frequency used. We thus compared attention effects on SSVEP across three attention conditions (focused, divided, and ignored) with six different tagging frequencies. Participants performed a visual detection task (respond to the digit 5 embedded in a stream of characters). Two stimulus streams, one to the left and one to the right of fixation, were displayed simultaneously, each with a background grey square whose hue was sine-modulated with one of the six tagging frequencies. At the beginning of each trial a cue indicated whether targets on the left, right, or both sides should be responded to. Accuracy was higher in the focused- than in the divided-attention condition. SSVEP amplitudes were greatest in the focused-attention condition, intermediate in the divided-attention condition, and smallest in the ignored-attention condition. The effect of attention on SSVEP amplitude did not depend on the tagging frequency used. Frequency tagging appears to be a flexible technique for studying attention.
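The core of the frequency-tagging analysis, reading out SSVEP amplitude at each stimulus's flicker frequency from the EEG spectrum, can be sketched as follows; the sampling rate and synthetic signal are illustrative only:

```python
import numpy as np

def ssvep_amplitude(eeg, fs, tag_freq):
    """Amplitude of the steady-state response at the tagging
    frequency, from the FFT of a single-channel EEG segment.

    eeg      : 1-D array of voltage samples.
    fs       : sampling rate in Hz.
    tag_freq : flicker (tagging) frequency in Hz.
    """
    n = len(eeg)
    # Scale so a pure sinusoid of amplitude A yields A at its bin.
    spectrum = np.abs(np.fft.rfft(eeg)) * 2 / n
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    # Read out the bin closest to the tagging frequency.
    return spectrum[np.argmin(np.abs(freqs - tag_freq))]
```

For a 2-second segment sampled at 250 Hz, a 12 Hz sinusoid of amplitude 3 returns approximately 3 at the 12 Hz bin and nearly 0 elsewhere, which is the attended-vs-ignored contrast the method relies on.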
Effects of audio-visual presentation of target words in word translation training
NASA Astrophysics Data System (ADS)
Akahane-Yamada, Reiko; Komaki, Ryo; Kubo, Rieko
2004-05-01
Komaki and Akahane-Yamada (Proc. ICA2004) used a 2AFC translation task in vocabulary training, in which the target word is presented visually in the orthographic form of one language, and the appropriate meaning in another language has to be chosen between two choices. The present paper examined the effect of audio-visual presentation of the target word when native speakers of Japanese learn to translate English words into Japanese. Pairs of English words contrasted in several phonemic distinctions (e.g., /r/-/l/, /b/-/v/, etc.) were used as word materials, and presented in three conditions: visual-only (V), audio-only (A), and audio-visual (AV) presentations. Identification accuracy of those words produced by two talkers was also assessed. During pretest, the accuracy for A stimuli was lowest, implying that insufficient translation ability and listening ability interact with each other when an aurally presented word has to be translated. However, there was no difference in accuracy between V and AV stimuli, suggesting that participants translate the words depending on visual information only. The effect of translation training using AV stimuli did not transfer to identification ability, showing that additional audio information during translation does not help improve speech perception. Further examination is necessary to determine the effective L2 training method. [Work supported by TAO, Japan.]
Wilbertz, Gregor; Sterzer, Philipp
2018-05-01
Alternating conscious visual perception of bistable stimuli is influenced by several factors. In order to understand the effect of negative valence, we tested the effect of two types of aversive conditioning on dominance durations in binocular rivalry. Participants received either aversive classical conditioning of the stimuli shown alone between rivalry blocks, or aversive percept conditioning of one of the two possible perceptual choices during rivalry. Both groups showed successful aversive conditioning according to skin conductance responses and affective valence ratings. However, while classical conditioning led to an immediate but short-lived increase in dominance durations of the conditioned stimulus, percept conditioning yielded no significant immediate effect but tended to decrease durations of the conditioned percept during extinction. These results show dissociable effects of value learning on perceptual inference in situations of perceptual conflict, depending on whether learning relates to the decision between conflicting perceptual choices or the sensory stimuli per se. Copyright © 2018 Elsevier Inc. All rights reserved.
Mirrored and rotated stimuli are not the same: A neuropsychological and lesion mapping study.
Martinaud, Olivier; Mirlink, Nicolas; Bioux, Sandrine; Bliaux, Evangéline; Champmartin, Cécile; Pouliquen, Dorothée; Cruypeninck, Yohann; Hannequin, Didier; Gérardin, Emmanuel
2016-05-01
Agnosia for mirrored stimuli is a rare clinical deficit. Only eight patients have been reported in the literature so far and little is known about the neural substrates of this agnosia. Using a previously developed experimental test designed to assess this agnosia, namely the Mirror and Orientation Agnosia Test (MOAT), as well as voxel-lesion symptom mapping (VLSM), we tested the hypothesis that focal brain-injured patients with right parietal damage would be impaired in the discrimination between the canonical view of a visual object and its mirrored and rotated images. Thirty-four consecutively recruited patients with a stroke involving the right or left parietal lobe were included: twenty patients (59%) had a deficit on at least one of the six conditions of the MOAT, fourteen patients (41%) had a deficit on the mirror condition, twelve patients (35%) had a deficit on at least one of the four rotated conditions, and one had a truly selective agnosia for mirrored stimuli. A lesion analysis showed that discrimination of mirrored stimuli was correlated to the mesial part of the posterior superior temporal gyrus and the lateral part of the inferior parietal lobule, while discrimination of rotated stimuli was correlated to the lateral part of the posterior superior temporal gyrus and the mesial part of the inferior parietal lobule, with only a small overlap between the two. These data suggest that the right visual 'dorsal' pathway is essential for accurate perception of mirrored and rotated stimuli, with a selective cognitive process and anatomical network underlying our ability to discriminate between mirrored images, different from the process of discriminating between rotated images. Copyright © 2016 Elsevier Ltd. All rights reserved.
Freezing behavior as a response to sexual visual stimuli as demonstrated by posturography.
Mouras, Harold; Lelard, Thierry; Ahmaidi, Said; Godefroy, Olivier; Krystkowiak, Pierre
2015-01-01
Posturographic changes in motivational conditions remain largely unexplored in the context of embodied cognition. Over the last decade, sexual motivation has been used as a good canonical working model to study motivated social interactions. The objective of this study was to explore posturographic variations in response to visual sexual videos as compared to neutral videos. Our results demonstrate a freezing-type response to sexually explicit stimuli compared to other conditions, as shown by significantly decreased standard deviations for (i) the center of pressure displacement along the mediolateral and anteroposterior axes and (ii) the center of pressure's displacement surface. These results underscore the complexity of the motor correlates of sexual motivation, considered to be a canonical functional context for studying the motor correlates of motivated social interactions.
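The posturographic measures reported here (per-axis standard deviations of center-of-pressure displacement and its displacement surface) can be computed directly from COP traces. A hypothetical sketch, not the authors' pipeline; the 95%-confidence-ellipse formula is one common operationalization of "displacement surface":

```python
import numpy as np

def sway_measures(cop_ml, cop_ap):
    """Per-axis SDs of center-of-pressure displacement, plus the area of
    the 95% confidence ellipse of the trace (one way to quantify the COP
    "displacement surface")."""
    cop = np.column_stack([cop_ml, cop_ap])
    sd_ml, sd_ap = cop.std(axis=0)
    cov = np.cov(cop.T)
    # 95% ellipse area: pi * chi2(0.95, df=2) * sqrt(det(covariance))
    area95 = np.pi * 5.991 * np.sqrt(np.linalg.det(cov))
    return sd_ml, sd_ap, area95

# Synthetic sway traces: "freezing" halves sway amplitude on both axes.
rng = np.random.default_rng(3)
quiet = sway_measures(rng.normal(0, 4.0, 3000), rng.normal(0, 6.0, 3000))
frozen = sway_measures(rng.normal(0, 2.0, 3000), rng.normal(0, 3.0, 3000))
```

A freezing-type response would show up exactly as in the synthetic example: smaller per-axis SDs and a smaller sway area in the sexual-stimulus condition.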
Attention Priority Map of Face Images in Human Early Visual Cortex.
Mo, Ce; He, Dongjun; Fang, Fang
2018-01-03
Attention priority maps are topographic representations that are used for attention selection and guidance of task-related behavior during visual processing. Previous studies have identified attention priority maps of simple artificial stimuli in multiple cortical and subcortical areas, but investigating neural correlates of priority maps of natural stimuli is complicated by the complexity of their spatial structure and the difficulty of behaviorally characterizing their priority map. To overcome these challenges, we reconstructed the topographic representations of upright/inverted face images from fMRI BOLD signals in human early visual areas primary visual cortex (V1) and the extrastriate cortex (V2 and V3) based on a voxelwise population receptive field model. We characterized the priority map behaviorally as the first saccadic eye movement pattern when subjects performed a face-matching task relative to the condition in which subjects performed a phase-scrambled face-matching task. We found that the differential first saccadic eye movement pattern between upright/inverted and scrambled faces could be predicted from the reconstructed topographic representations in V1-V3 in humans of either sex. The coupling between the reconstructed representation and the eye movement pattern increased from V1 to V2/3 for the upright faces, whereas no such effect was found for the inverted faces. Moreover, face inversion modulated the coupling in V2/3, but not in V1. Our findings provide new evidence for priority maps of natural stimuli in early visual areas and extend traditional attention priority map theories by revealing another critical factor that affects priority maps in extrastriate cortex in addition to physical salience and task goal relevance: image configuration. 
SIGNIFICANCE STATEMENT Prominent theories of attention posit that attention sampling of visual information is mediated by a series of interacting topographic representations of visual space known as attention priority maps. Until now, neural evidence of attention priority maps has been limited to studies involving simple artificial stimuli and much remains unknown about the neural correlates of priority maps of natural stimuli. Here, we show that attention priority maps of face stimuli could be found in primary visual cortex (V1) and the extrastriate cortex (V2 and V3). Moreover, representations in extrastriate visual areas are strongly modulated by image configuration. These findings extend our understanding of attention priority maps significantly by showing that they are modulated, not only by physical salience and task-goal relevance, but also by the configuration of stimulus images. Copyright © 2018 the authors 0270-6474/18/380149-09$15.00/0.
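The voxelwise population receptive field (pRF) model underlying the reconstruction above treats each voxel's response as the overlap between the stimulus aperture and a 2-D Gaussian receptive field. A toy sketch of that forward model (an assumption-laden illustration, not the study's actual fitting code; grid size and RF parameters are invented):

```python
import numpy as np

def prf_response(stim_frames, x0, y0, sigma, grid):
    """Predicted response of one voxel under a 2-D Gaussian pRF model.

    stim_frames: (T, H, W) binary stimulus-aperture movie; grid: (xs, ys)
    coordinate arrays in degrees of visual angle. The response at each time
    point is the fraction of the voxel's receptive-field mass the stimulus
    covers (hemodynamic convolution is omitted here).
    """
    xs, ys = grid
    rf = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2.0 * sigma ** 2))
    rf /= rf.sum()
    return np.tensordot(stim_frames, rf, axes=([1, 2], [0, 1]))

# Toy 21x21-degree field sampled every 0.5 degrees; RF centered at fixation.
xs, ys = np.meshgrid(np.linspace(-10, 10, 41), np.linspace(-10, 10, 41))
full = np.ones((1, 41, 41))                 # full-field stimulus
blank = np.zeros((1, 41, 41))               # blank screen
half = np.zeros((1, 41, 41))
half[:, :, :20] = 1.0                       # left hemifield only
r_full = prf_response(full, 0.0, 0.0, 2.0, (xs, ys))
r_blank = prf_response(blank, 0.0, 0.0, 2.0, (xs, ys))
r_half = prf_response(half, 0.0, 0.0, 2.0, (xs, ys))
```

Inverting this forward model across many voxels is what allows a topographic representation of the stimulus (here, a face image) to be reconstructed from BOLD signals.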
Finding and Not Finding Rat Perirhinal Neuronal Responses to Novelty
Muller, Robert U.; Brown, Malcolm W.
2016-01-01
There is much evidence that the perirhinal cortex of both rats and monkeys is important for judging the relative familiarity of visual stimuli. In monkeys many studies have found that a proportion of perirhinal neurons respond more to novel than familiar stimuli. There are fewer studies of perirhinal neuronal responses in rats, and those studies, based on exploration of objects, have called into question the encoding of stimulus familiarity by rat perirhinal neurons. For this reason, recordings of single neuronal activity were made from the perirhinal cortex of rats so as to compare responsiveness to novel and familiar stimuli in two different behavioral situations. The first situation was based upon that used in “paired viewing” experiments that have established rat perirhinal differences in immediate early gene expression for novel and familiar visual stimuli displayed on computer monitors. The second situation was similar to that used in the spontaneous object recognition test that has been widely used to establish the involvement of rat perirhinal cortex in familiarity discrimination. In the first condition 30 (25%) of 120 perirhinal neurons were visually responsive; of these responsive neurons 19 (63%) responded significantly differently to novel and familiar stimuli. In the second condition eight (53%) of 15 perirhinal neurons changed activity significantly in the vicinity of objects (had “object fields”); however, for none (0%) of these was there a significant activity change related to the familiarity of an object, an incidence significantly lower than for the first condition. Possible reasons for the difference are discussed. It is argued that the failure to find recognition-related neuronal responses while exploring objects is related to the detectability of such responses by the measures used, rather than the absence of all such signals in perirhinal cortex. Indeed, as shown by the results, such signals are found when a different methodology is used.
© 2016 The Authors Hippocampus Published by Wiley Periodicals, Inc. PMID:26972751
Acquisition of Conditioning between Methamphetamine and Cues in Healthy Humans
Mayo, Leah M.; de Wit, Harriet
2016-01-01
Environmental stimuli repeatedly paired with drugs of abuse can elicit conditioned responses that are thought to promote future drug seeking. We recently showed that healthy volunteers acquired conditioned responses to auditory and visual stimuli after just two pairings with methamphetamine (MA, 20 mg, oral). This study extended these findings by systematically varying the number of drug-stimuli pairings. We expected that more pairings would result in stronger conditioning. Three groups of healthy adults were randomly assigned to receive 1, 2 or 4 pairings (Groups P1, P2 and P4, Ns = 13, 16, 16, respectively) of an auditory-visual stimulus with MA, and another stimulus with placebo (PBO). Drug-cue pairings were administered in an alternating, counterbalanced order, under double-blind conditions, during 4 hr sessions. MA produced prototypic subjective effects (mood, ratings of drug effects) and alterations in physiology (heart rate, blood pressure). Although subjects did not exhibit increased behavioral preference for, or emotional reactivity to, the MA-paired cue after conditioning, they did exhibit an increase in attentional bias (initial gaze) toward the drug-paired stimulus. Further, subjects who had four pairings reported “liking” the MA-paired cue more than the PBO cue after conditioning. Thus, the number of drug-stimulus pairings, varying from one to four, had only modest effects on the strength of conditioned responses. Further studies investigating the parameters under which drug conditioning occurs will help to identify risk factors for developing drug abuse, and provide new treatment strategies. PMID:27548681
Ogawa, Akitoshi; Bordier, Cecile; Macaluso, Emiliano
2013-01-01
The use of naturalistic stimuli to probe sensory functions in the human brain is gaining increasing interest. Previous imaging studies examined brain activity associated with the processing of cinematographic material using both standard “condition-based” designs, as well as “computational” methods based on the extraction of time-varying features of the stimuli (e.g. motion). Here, we exploited both approaches to investigate the neural correlates of complex visual and auditory spatial signals in cinematography. In the first experiment, the participants watched a piece of a commercial movie presented in four blocked conditions: 3D vision with surround sounds (3D-Surround), 3D with monaural sound (3D-Mono), 2D-Surround, and 2D-Mono. In the second experiment, they watched two different segments of the movie both presented continuously in 3D-Surround. The blocked presentation served for standard condition-based analyses, while all datasets were submitted to computation-based analyses. The latter assessed where activity co-varied with visual disparity signals and the complexity of auditory multi-sources signals. The blocked analyses associated 3D viewing with the activation of the dorsal and lateral occipital cortex and superior parietal lobule, while the surround sounds activated the superior and middle temporal gyri (S/MTG). The computation-based analyses revealed the effects of absolute disparity in dorsal occipital and posterior parietal cortices and of disparity gradients in the posterior middle temporal gyrus plus the inferior frontal gyrus. The complexity of the surround sounds was associated with activity in specific sub-regions of S/MTG, even after accounting for changes of sound intensity. These results demonstrate that the processing of naturalistic audio-visual signals entails an extensive set of visual and auditory areas, and that computation-based analyses can track the contribution of complex spatial aspects characterizing such life-like stimuli. 
PMID:24194828
Cross-modal enhancement of speech detection in young and older adults: does signal content matter?
Tye-Murray, Nancy; Spehar, Brent; Myerson, Joel; Sommers, Mitchell S; Hale, Sandra
2011-01-01
The purpose of the present study was to examine the effects of age and visual content on cross-modal enhancement of auditory speech detection. Visual content consisted of three clearly distinct types of visual information: an unaltered video clip of a talker's face, a low-contrast version of the same clip, and a mouth-like Lissajous figure. It was hypothesized that both young and older adults would exhibit reduced enhancement as visual content diverged from the original clip of the talker's face, but that the decrease would be greater for older participants. Nineteen young adults and 19 older adults were asked to detect a single spoken syllable (/ba/) in speech-shaped noise, and the level of the signal was adaptively varied to establish the signal-to-noise ratio (SNR) at threshold. There was an auditory-only baseline condition and three audiovisual conditions in which the syllable was accompanied by one of the three visual signals (the unaltered clip of the talker's face, the low-contrast version of that clip, or the Lissajous figure). For each audiovisual condition, the SNR at threshold was compared with the SNR at threshold for the auditory-only condition to measure the amount of cross-modal enhancement. Young adults exhibited significant cross-modal enhancement with all three types of visual stimuli, with the greatest amount of enhancement observed for the unaltered clip of the talker's face. Older adults, in contrast, exhibited significant cross-modal enhancement only with the unaltered face. Results of this study suggest that visual signal content affects cross-modal enhancement of speech detection in both young and older adults. They also support a hypothesized age-related deficit in processing low-contrast visual speech stimuli, even in older adults with normal contrast sensitivity.
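The adaptive procedure described (varying signal level to find the SNR at detection threshold) is typically implemented as an up-down staircase. A minimal simulation of one common variant, the 1-up/2-down rule, which converges near the 70.7%-correct point (illustrative only; the study's exact tracking rule is not specified in the abstract, and the simulated observer below is hypothetical):

```python
import math
import random

def staircase_threshold(detect, start_snr=0.0, step=2.0, n_reversals=12):
    """1-up/2-down adaptive staircase for a detection threshold (dB SNR).

    detect(snr) -> bool runs one trial; the rule lowers the SNR after two
    consecutive hits and raises it after any miss, so the track oscillates
    around the 70.7%-correct point. Threshold = mean of the last 8 reversals.
    """
    snr, streak, direction, reversals = start_snr, 0, 0, []
    while len(reversals) < n_reversals:
        if detect(snr):
            streak += 1
            if streak == 2:                # two hits in a row: make it harder
                streak = 0
                if direction == +1:
                    reversals.append(snr)  # downward turn = a reversal
                direction = -1
                snr -= step
        else:                              # any miss: make it easier
            streak = 0
            if direction == -1:
                reversals.append(snr)      # upward turn = a reversal
            direction = +1
            snr += step
    return sum(reversals[-8:]) / 8.0

# Hypothetical observer: detection probability is logistic in SNR, with a
# "true" threshold near -10 dB.
rng = random.Random(7)
def observer(snr):
    p = 1.0 / (1.0 + math.exp(-(snr + 10.0)))
    return rng.random() < p

est = staircase_threshold(observer)
```

Cross-modal enhancement is then the difference between the threshold estimated in an audiovisual condition and the auditory-only baseline.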
Context processing in adolescents with autism spectrum disorder: How complex could it be?
Ben-Yosef, Dekel; Anaki, David; Golan, Ofer
2017-03-01
The ability of individuals with Autism Spectrum Disorder (ASD) to process context has long been debated: According to the Weak Central Coherence theory, ASD is characterized by poor global processing and, consequently, poor context processing. In contrast, the Social Cognition theory argues that individuals with ASD will present difficulties only in social context processing. The complexity theory of autism suggests context processing in ASD will depend on task complexity. The current study examined this controversy through two priming tasks, one presenting human stimuli (facial expressions) and the other presenting non-human stimuli (animal faces). Both tasks presented visual targets, preceded by congruent, incongruent, or neutral auditory primes. Local and global processing were examined by presenting the visual targets in three spatial frequency conditions: high frequency, low frequency, and broadband. Tasks were administered to 16 adolescents with high functioning ASD and 16 matched typically developing adolescents. Reaction time and accuracy were measured for each task in each condition. Results indicated that individuals with ASD processed context for both human and non-human stimuli, except in one condition, in which human stimuli had to be processed globally (i.e., target presented in low frequency). The task demands presented in this condition, and the performance deficit shown in the ASD group as a result, could be understood in terms of cognitive overload. These findings provide support for the complexity theory of autism and extend it. Our results also demonstrate how associative priming could support intact context processing of human and non-human stimuli in individuals with ASD. Autism Res 2017, 10: 520-530. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.
The Hidden Snake in the Grass: Superior Detection of Snakes in Challenging Attentional Conditions
Soares, Sandra C.; Lindström, Björn; Esteves, Francisco; Öhman, Arne
2014-01-01
Snakes have provided a serious threat to primates throughout evolution. Furthermore, bites by venomous snakes still cause significant morbidity and mortality in tropical regions of the world. According to the Snake Detection Theory (SDT Isbell, 2006; 2009), the vital need to detect camouflaged snakes provided strong evolutionary pressure to develop astute perceptual capacity in animals that were potential targets for snake attacks. We performed a series of behavioral tests that assessed snake detection under conditions that may have been critical for survival. We used spiders as the control stimulus because they are also a common object of phobias and rated negatively by the general population, thus commonly lumped together with snakes as “evolutionary fear-relevant”. Across four experiments (N = 205) we demonstrate an advantage in snake detection, which was particularly obvious under visual conditions known to impede detection of a wide array of common stimuli, for example brief stimulus exposures, stimuli presentation in the visual periphery, and stimuli camouflaged in a cluttered environment. Our results demonstrate a striking independence of snake detection from ecological factors that impede the detection of other stimuli, which suggests that, consistent with the SDT, they reflect a specific biological adaptation. Nonetheless, the empirical tests we report are limited to only one aspect of this rich theory, which integrates findings across a wide array of scientific disciplines. PMID:25493937
Extinction of Conditioned Responses to Methamphetamine-Associated Stimuli in Healthy Humans.
Cavallo, Joel S; Ruiz, Nicholas A; de Wit, Harriet
2016-07-01
Contextual stimuli present during drug experiences become associated with the drug through Pavlovian conditioning and are thought to sustain drug-seeking behavior. Thus, extinction of conditioned responses is an important target for treatment. To date, acquisition and extinction to drug-paired cues have been studied in animal models or drug-dependent individuals, but rarely in non-drug users. We have recently developed a procedure to study acquisition of conditioned responses after single doses of methamphetamine (MA) in healthy volunteers. Here, we examined extinction of these responses and their persistence after conditioning. Healthy adults (18-35 years; N = 20) received two pairings of audio-visual stimuli with MA (20 mg oral) or placebo. Responses to stimuli were assessed before and after conditioning, using three tasks: behavioral preference, attentional bias, and subjective "liking." Subjects exhibited behavioral preference for the drug-paired stimuli at the first post-conditioning test, but this declined rapidly on subsequent extinction tests. They also exhibited a bias to initially look towards the drug-paired stimuli at the first post-test session, but not thereafter. Subjects who experienced more positive subjective drug effects during conditioning exhibited a smaller decline in preference during the extinction phase. Further, longer inter-session intervals during the extinction phase were associated with less extinction of the behavioral preference measure. Conditioned responses after two pairings with MA extinguish quickly, and are influenced by both subjective drug effects and the extinction interval. Characterizing and refining this conditioning procedure will aid in understanding the acquisition and extinction processes of drug-related conditioned responses in humans.
Walsh, Kyle P.; Pasanen, Edward G.; McFadden, Dennis
2014-01-01
Human subjects performed in several behavioral conditions requiring, or not requiring, selective attention to visual stimuli. Specifically, the attentional task was to recognize strings of digits that had been presented visually. A nonlinear version of the stimulus-frequency otoacoustic emission (SFOAE), called the nSFOAE, was collected during the visual presentation of the digits. The segment of the physiological response discussed here occurred during brief silent periods immediately following the SFOAE-evoking stimuli. For all subjects tested, the physiological-noise magnitudes were substantially weaker (less noisy) during the tasks requiring the most visual attention. Effect sizes for the differences were >2.0. Our interpretation is that cortico-olivo influences adjusted the magnitude of efferent activation during the SFOAE-evoking stimulation depending upon the attention task in effect, and then that magnitude of efferent activation persisted throughout the silent period where it also modulated the physiological noise present. Because the results were highly similar to those obtained when the behavioral conditions involved auditory attention, similar mechanisms appear to operate both across modalities and within modalities. Supplementary measurements revealed that the efferent activation was spectrally global, as it was for auditory attention. PMID:24732070
Birkett, Emma E; Talcott, Joel B
2012-01-01
Motor timing tasks have been employed in studies of neurodevelopmental disorders such as developmental dyslexia and ADHD, where they provide an index of temporal processing ability. Investigations of these disorders have used different stimulus parameters within the motor timing tasks that are likely to affect performance measures. Here we assessed the effect of auditory and visual pacing stimuli on synchronised motor timing performance and its relationship with cognitive and behavioural predictors that are commonly used in the diagnosis of these highly prevalent developmental disorders. Twenty-one children (mean age 9.6 years) completed a finger tapping task in two stimulus conditions, together with additional psychometric measures. As anticipated, synchronisation to the beat (ISI 329 ms) was less accurate in the visually paced condition. Decomposition of timing variance indicated that this effect resulted from differences in the way that visual and auditory paced tasks are processed by central timekeeping and associated peripheral implementation systems. The ability to utilise an efficient processing strategy on the visual task correlated with both reading and sustained attention skills. Dissociations between these patterns of relationship across task modality suggest that not all timing tasks are equivalent.
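The "decomposition of timing variance" into central timekeeping and peripheral implementation components is classically done with the Wing-Kristofferson model, which uses the lag-1 autocovariance of the inter-tap intervals. A sketch under that assumption (the abstract does not name the exact method; the synthetic numbers below are invented):

```python
import numpy as np

def wing_kristofferson(intervals):
    """Split inter-tap-interval variance into central ("clock") and motor
    ("implementation") components (Wing & Kristofferson, 1973).

    Model: I_n = C_n + M_(n+1) - M_n, which implies
      Var(I) = var_clock + 2 * var_motor,   lag-1 autocov(I) = -var_motor.
    """
    x = np.asarray(intervals, dtype=float) - np.mean(intervals)
    total_var = x.var()
    lag1_cov = np.mean(x[:-1] * x[1:])
    var_motor = max(-lag1_cov, 0.0)          # clip: estimates can be noisy
    var_clock = max(total_var - 2.0 * var_motor, 0.0)
    return var_clock, var_motor

# Synthetic taps at the 329 ms target interval used in the study:
# clock sd 10 ms, motor-delay sd 5 ms (both values invented).
rng = np.random.default_rng(2)
n = 5000
clock = 329.0 + rng.normal(0.0, 10.0, n)
motor = rng.normal(0.0, 5.0, n + 1)
intervals = clock + motor[1:] - motor[:-1]
vc, vm = wing_kristofferson(intervals)
```

Comparing the two components across auditory- and visually-paced conditions is what lets modality effects be attributed to the central timekeeper versus the peripheral implementation system.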
Forghieri, M; Monzani, D; Mackinnon, A; Ferrari, S; Gherpelli, C; Galeazzi, G M
2016-08-26
Human postural control is dependent on the central integration of vestibular, visual and proprioceptive inputs. Psychological states can affect balance control: anxiety, in particular, has been shown to influence balance mediated by visual stimuli. We hypothesized that patients with eating disorders would show postural destabilization when exposed to their image in a mirror and to the image of a fashion model representing their body ideal, in comparison to body-neutral stimuli. Seventeen female patients attending a day centre for the treatment of eating disorders were administered psychometric measures of body dissatisfaction, anxiety and depression, and underwent posturographic assessment with eyes closed, with eyes open, while watching a neutral stimulus, while exposed to a full-length mirror, and while viewing an image of a fashion model corresponding to their body ideal. Results were compared to those obtained by eighteen healthy subjects. Eating disordered patients showed higher levels of body dissatisfaction and higher postural destabilization than controls, but this was limited to the conditions in which they were exposed to their mirror image or a fashion model image. Postural destabilization under these conditions correlated with measures of body dissatisfaction. In eating disordered patients, body-related stimuli seem to act as phobic stimuli in the posturographic paradigm used. If confirmed, this has the potential to be developed for diagnostic and therapeutic purposes. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Mohan, Kathleen M; Miller, Joseph M; Harvey, Erin M; Gerhart, Kimberly D; Apple, Howard P; Apple, Deborah; Smith, Jordana M; Davis, Amy L; Leonard-Green, Tina; Campus, Irene; Dennis, Leslie K
2016-01-01
To determine if testing binocular visual acuity in infants and toddlers using the Acuity Card Procedure (ACP) with electronic grating stimuli yields clinically useful data. Participants were infants and toddlers ages 5 to 36.7 months referred by pediatricians due to failed automated vision screening. The ACP was used to test binocular grating acuity. Stimuli were presented on the Dobson Card. The Dobson Card consists of a handheld matte-black plexiglass frame with two flush-mounted tablet computers and is similar in size and form to commercially available printed grating acuity testing stimuli (Teller Acuity Cards II [TACII]; Stereo Optical, Inc., Chicago, IL). On each trial, one tablet displayed a square-wave grating and the other displayed a luminance-matched uniform gray patch. Stimuli were roughly equivalent to the stimuli available in the printed TACII stimuli. After acuity testing, each child received a cycloplegic eye examination. Based on cycloplegic retinoscopy, patients were categorized as having high or low refractive error per American Association for Pediatric Ophthalmology and Strabismus vision screening referral criteria. Mean acuities for high and low refractive error groups were compared using analysis of covariance, controlling for age. Mean visual acuity was significantly poorer in children with high refractive error than in those with low refractive error (P = .015). Electronic stimuli presented using the ACP can yield clinically useful measurements of grating acuity in infants and toddlers. Further research is needed to determine the optimal conditions and procedures for obtaining accurate and clinically useful automated measurements of visual acuity in infants and toddlers. Copyright 2016, SLACK Incorporated.
The human mirror neuron system: A link between action observation and social skills
Pineda, Jaime A.; Ramachandran, Vilayanur S.
2007-01-01
The discovery of the mirror neuron system (MNS) has led researchers to speculate that this system evolved from an embodied visual recognition apparatus in monkey to a system critical for social skills in humans. It is accepted that the MNS is specialized for processing animate stimuli, although the degree to which social interaction modulates the firing of mirror neurons has not been investigated. In the current study, EEG mu wave suppression was used as an index of MNS activity. Data were collected while subjects viewed four videos: (1) Visual White Noise: baseline, (2) Non-interacting: three individuals tossed a ball up in the air to themselves, (3) Social Action, Spectator: three individuals tossed a ball to each other and (4) Social Action, Interactive: similar to video 3 except occasionally the ball would be thrown off the screen toward the viewer. The mu wave was modulated by the degree of social interaction, with the Non-interacting condition showing the least suppression, followed by the Social Action, Spectator condition and the Social Action, Interactive condition showing the most suppression. These data suggest that the human MNS is specialized not only for processing animate stimuli, but specifically stimuli with social relevance. PMID:18985120
Li, Siyao; Cai, Ying; Liu, Jing; Li, Dawei; Feng, Zifang; Chen, Chuansheng; Xue, Gui
2017-04-01
Mounting evidence suggests that multiple mechanisms underlie working memory capacity. Using transcranial direct current stimulation (tDCS), the current study aimed to provide causal evidence for the neural dissociation of two mechanisms underlying visual working memory (WM) capacity, namely, the scope and control of attention. A change detection task with distractors was used, where a number of colored bars (i.e., two red bars, four red bars, or two red plus two blue bars) were presented on both sides (Experiment 1) or the center (Experiment 2) of the screen for 100ms, and participants were instructed to remember the red bars and to ignore the blue bars (in both Experiments), as well as to ignore the stimuli on the un-cued side (Experiment 1 only). In both experiments, participants finished three sessions of the task after 15min of 1.5mA anodal tDCS administered on the right prefrontal cortex (PFC), the right posterior parietal cortex (PPC), and the primary visual cortex (VC), respectively. The VC stimulation served as an active control condition. We found that compared to stimulation on the VC, stimulation on the right PPC specifically increased the visual WM capacity under the no-distractor condition (i.e., 4 red bars), whereas stimulation on the right PFC specifically increased the visual WM capacity under the distractor condition (i.e., 2 red bars plus 2 blue bars). These results suggest that the PPC and PFC are involved in the scope and control of attention, respectively. We further showed that compared to central presentation of the stimuli (Experiment 2), bilateral presentation of the stimuli (on both sides of the fixation in Experiment 1) led to an additional demand for attention control. Our results emphasize the dissociated roles of the frontal and parietal lobes in visual WM capacity, and provide a deeper understanding of the neural mechanisms of WM. Copyright © 2017 Elsevier Inc. All rights reserved.
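The abstract measures visual working memory capacity from a change detection task but does not spell out the estimator; the conventional choice for this paradigm is Cowan's K, which corrects the hit rate by the false alarm rate. A sketch with illustrative numbers:

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K for single-probe change detection:
    K = N * (H - FA), where N is the number of to-be-remembered items,
    H the hit rate, and FA the false alarm rate."""
    return set_size * (hit_rate - false_alarm_rate)

# Example: remembering 4 red bars with 85% hits and 15% false alarms
# yields an estimated capacity of 4 * 0.70 = 2.8 items.
```

In the distractor condition described above, K would be computed over the two relevant (red) items only, so any capacity change reflects filtering of the blue distractors rather than raw storage.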
Gaglianese, A; Costagli, M; Ueno, K; Ricciardi, E; Bernardi, G; Pietrini, P; Cheng, K
2015-01-22
The main visual pathway that conveys motion information to the middle temporal complex (hMT+) originates from the primary visual cortex (V1), which, in turn, receives spatial and temporal features of the perceived stimuli from the lateral geniculate nucleus (LGN). In addition, visual motion information reaches hMT+ directly from the thalamus, bypassing V1, through a direct pathway. We aimed to elucidate whether this direct route between the LGN and hMT+ represents a 'fast lane' reserved for high-speed motion, as proposed previously, or whether it is merely involved in processing motion information irrespective of speed. We evaluated functional magnetic resonance imaging (fMRI) responses elicited by moving visual stimuli and applied connectivity analyses to investigate the effect of motion speed on the causal influence between LGN and hMT+, independent of V1, using the Conditional Granger Causality (CGC) in the presence of slow and fast visual stimuli. Our results showed that at least part of the visual motion information from LGN reaches hMT+, bypassing V1, in response to both slow and fast motion speeds of the perceived stimuli. We also investigated whether motion speeds have different effects on the connections between LGN and functional subdivisions within hMT+: direct connections between LGN and MT-proper carry mainly slow motion information, while connections between LGN and MST carry mainly fast motion information. The existence of a parallel pathway that connects the LGN directly to hMT+ in response to both slow and fast speeds may explain why MT and MST can still respond in the presence of V1 lesions. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
Eberhardt, Silvio P; Auer, Edward T; Bernstein, Lynne E
2014-01-01
In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee's primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee's lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT).
Does bimodal stimulus presentation increase ERP components usable in BCIs?
NASA Astrophysics Data System (ADS)
Thurlings, Marieke E.; Brouwer, Anne-Marie; Van Erp, Jan B. F.; Blankertz, Benjamin; Werkhoven, Peter J.
2012-08-01
Event-related potential (ERP)-based brain-computer interfaces (BCIs) employ differences in brain responses to attended and ignored stimuli. Typically, visual stimuli are used. Tactile stimuli have recently been suggested as a gaze-independent alternative. Bimodal stimuli could evoke additional brain activity due to multisensory integration which may be of use in BCIs. We investigated the effect of visual-tactile stimulus presentation on the chain of ERP components, BCI performance (classification accuracies and bitrates) and participants’ task performance (counting of targets). Ten participants were instructed to navigate a visual display by attending (spatially) to targets in sequences of either visual, tactile or visual-tactile stimuli. We observe that attending to visual-tactile (compared to either visual or tactile) stimuli results in an enhanced early ERP component (N1). This bimodal N1 may enhance BCI performance, as suggested by a nonsignificant positive trend in offline classification accuracies. A late ERP component (P300) is reduced when attending to visual-tactile compared to visual stimuli, which is consistent with the nonsignificant negative trend of participants’ task performance. We discuss these findings in the light of affected spatial attention at high-level compared to low-level stimulus processing. Furthermore, we evaluate bimodal BCIs from a practical perspective and for future applications.
Hollingworth, Andrew; Hwang, Seongmin
2013-01-01
We examined the conditions under which a feature value in visual working memory (VWM) recruits visual attention to matching stimuli. Previous work has suggested that VWM supports two qualitatively different states of representation: an active state that interacts with perceptual selection and a passive (or accessory) state that does not. An alternative hypothesis is that VWM supports a single form of representation, with the precision of feature memory controlling whether or not the representation interacts with perceptual selection. The results of three experiments supported the dual-state hypothesis. We established conditions under which participants retained a relatively precise representation of a particular colour. If the colour was immediately task relevant, it reliably recruited attention to matching stimuli. However, if the colour was not immediately task relevant, it failed to interact with perceptual selection. Feature maintenance in VWM is not necessarily equivalent to feature-based attentional selection. PMID:24018723
Grassini, Simone; Holm, Suvi K; Railo, Henry; Koivisto, Mika
2016-12-01
Snakes were probably one of the earliest predators of primates, and snake images produce specific behavioral and electrophysiological reactions in humans. Pictures of snakes evoke enhanced activity over the occipital cortex, indexed by the "early posterior negativity" (EPN), as compared with pictures of other dangerous or non-dangerous animals. The present study investigated the possibility that the response to snake images is independent from visual awareness. The observers watched images of threatening and non-threatening animals presented in random order during rapid serial visual presentation. Four different masking conditions were used to manipulate awareness of the images. Electrophysiological results showed that the EPN was larger for snake images than for the other images employed in the unmasked condition. However, the difference disappeared when awareness of the stimuli decreased. Behavioral results on the effects of awareness did not show any advantage for snake images. Copyright © 2016 Elsevier B.V. All rights reserved.
Like a rolling stone: naturalistic visual kinematics facilitate tracking eye movements.
Souto, David; Kerzel, Dirk
2013-02-06
Newtonian physics constrains object kinematics in the real world. We asked whether eye movements towards tracked objects depend on their compliance with those constraints. In particular, the force of gravity constrains round objects to roll on the ground with a particular rotational and translational motion. We measured tracking eye movements towards rolling objects. We found that objects whose rotational and translational motion was congruent with an object rolling on the ground elicited faster tracking eye movements during pursuit initiation than incongruent stimuli. Relative to a condition without a rotational component, we essentially obtained benefits of congruence and, to a lesser extent, costs from incongruence. Anticipatory pursuit responses showed no congruence effect, suggesting that the effect is based on visually driven predictions, not on velocity storage. We suggest that the eye movement system incorporates information about object kinematics acquired by a lifetime of experience with visual stimuli obeying the laws of Newtonian physics.
Ivanova, Maria V.; Hallowell, Brooke
2017-01-01
Purpose: Language comprehension in people with aphasia (PWA) is frequently evaluated using multiple-choice displays: PWA are asked to choose the image that best corresponds to the verbal stimulus in a display. When a nontarget image is selected, comprehension failure is assumed. However, stimulus-driven factors unrelated to linguistic comprehension may influence performance. In this study we explore the influence of physical image characteristics of multiple-choice image displays on visual attention allocation by PWA. Method: Eye fixations of 41 PWA were recorded while they viewed 40 multiple-choice image sets presented with and without verbal stimuli. Within each display, 3 images (majority images) were the same and 1 (singleton image) differed in terms of 1 image characteristic. The mean proportion of fixation duration (PFD) allocated across majority images was compared against the PFD allocated to singleton images. Results: PWA allocated significantly greater PFD to the singleton than to the majority images in both nonverbal and verbal conditions. Those with greater severity of comprehension deficits allocated greater PFD to nontarget singleton images in the verbal condition. Conclusion: When using tasks that rely on multiple-choice displays and verbal stimuli, one cannot assume that verbal stimuli will override the effect of visual-stimulus characteristics. PMID:28520866
The effect of changing the secondary task in dual-task paradigms for measuring listening effort.
Picou, Erin M; Ricketts, Todd A
2014-01-01
The purpose of this study was to evaluate the effect of changing the secondary task in dual-task paradigms that measure listening effort. Specifically, the effects of increasing the secondary task complexity or the depth of processing on a paradigm's sensitivity to changes in listening effort were quantified in a series of two experiments. Specific factors investigated within each experiment were background noise and visual cues. Participants in Experiment 1 were adults with normal hearing (mean age 23 years) and participants in Experiment 2 were adults with mild sloping to moderately severe sensorineural hearing loss (mean age 60.1 years). In both experiments, participants were tested using three dual-task paradigms. These paradigms had identical primary tasks, which were always monosyllable word recognition. The secondary tasks were all physical reaction time measures. The stimulus for the secondary task varied by paradigm and was (1) a simple visual probe, (2) a complex visual probe, or (3) the category of word presented. In this way, the secondary tasks mainly varied from the simple paradigm by either complexity or depth of speech processing. Using all three paradigms, participants were tested in four conditions: (1) auditory-only stimuli in quiet, (2) auditory-only stimuli in noise, (3) auditory-visual stimuli in quiet, and (4) auditory-visual stimuli in noise. During auditory-visual conditions, the talker's face was visible. Signal-to-noise ratios used during conditions with background noise were set individually so word recognition performance was matched in auditory-only and auditory-visual conditions. In noise, word recognition performance was approximately 80% and 65% for Experiments 1 and 2, respectively. For both experiments, word recognition performance was stable across the three paradigms, confirming that none of the secondary tasks interfered with the primary task.
In Experiment 1 (listeners with normal hearing), analysis of median reaction times revealed a significant main effect of background noise on listening effort only with the paradigm that required deep processing. Visual cues did not change listening effort as measured with any of the three dual-task paradigms. In Experiment 2 (listeners with hearing loss), analysis of median reaction times revealed expected significant effects of background noise using all three paradigms, but no significant effects of visual cues. None of the dual-task paradigms were sensitive to the effects of visual cues. Furthermore, changing the complexity of the secondary task did not change dual-task paradigm sensitivity to the effects of background noise on listening effort for either group of listeners. However, the paradigm whose secondary task involved deeper processing was more sensitive to the effects of background noise for both groups of listeners. While this paradigm differed from the others in several respects, depth of processing may be partially responsible for the increased sensitivity. Therefore, this paradigm may be a valuable tool for evaluating other factors that affect listening effort.
Visual processing of moving and static self body-parts.
Frassinetti, Francesca; Pavani, Francesco; Zamagni, Elisa; Fusaroli, Giulia; Vescovi, Massimo; Benassi, Mariagrazia; Avanzi, Stefano; Farnè, Alessandro
2009-07-01
Humans' ability to recognize static images of self body-parts can be lost following a lesion of the right hemisphere [Frassinetti, F., Maini, M., Romualdi, S., Galante, E., & Avanzi, S. (2008). Is it mine? Hemispheric asymmetries in corporeal self-recognition. Journal of Cognitive Neuroscience, 20, 1507-1516]. Here we investigated whether the visual information provided by the movement of self body-parts may be separately processed by right brain-damaged (RBD) patients and constitute a valuable cue to reduce their deficit in self body-parts processing. To pursue these aims, neurological healthy subjects and RBD patients were submitted to a matching-task of a pair of subsequent visual stimuli, in two conditions. In the dynamic condition, participants were shown movies of moving body-parts (hand, foot, arm and leg); in the static condition, participants were shown still images of the same body-parts. In each condition, on half of the trials at least one stimulus in the pair was from the participant's own body ('Self' condition), whereas on the remaining half of the trials both stimuli were from another person ('Other' condition). Results showed that in healthy participants the self-advantage was present when processing both static and dynamic body-parts, but it was more important in the latter condition. In RBD patients, however, the self-advantage was absent in the static, but present in the dynamic body-parts condition. These findings suggest that visual information from self body-parts in motion may be processed independently in patients with impaired static self-processing, thus pointing to a modular organization of the mechanisms responsible for the self/other distinction.
Elevated audiovisual temporal interaction in patients with migraine without aura
2014-01-01
Background: Photophobia and phonophobia are the most prominent symptoms in patients with migraine without aura. Hypersensitivity to visual stimuli can lead to greater hypersensitivity to auditory stimuli, which suggests that the interaction between visual and auditory stimuli may play an important role in the pathogenesis of migraine. However, audiovisual temporal interactions in migraine have not been well studied. Therefore, our aim was to examine auditory and visual interactions in migraine. Methods: In this study, visual, auditory, and audiovisual stimuli with different temporal intervals between the visual and auditory stimuli were randomly presented to the left or right hemispace. During this time, the participants were asked to respond promptly to target stimuli. We used cumulative distribution functions to analyze the response times as a measure of audiovisual integration. Results: Our results showed that audiovisual integration was significantly elevated in the migraineurs compared with the normal controls (p < 0.05); however, audiovisual suppression was weaker in the migraineurs compared with the normal controls (p < 0.05). Conclusions: Our findings further objectively support the notion that migraineurs without aura are hypersensitive to external visual and auditory stimuli. Our study offers a new quantitative and objective method to evaluate hypersensitivity to audiovisual stimuli in patients with migraine. PMID:24961903
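The cumulative-distribution-function analysis of response times mentioned above is commonly operationalized with Miller's race-model inequality, under which bimodal responses faster than the summed unimodal CDFs allow indicate genuine audiovisual integration. A hedged sketch of that standard test, not the study's actual code:

```python
import numpy as np

def race_model_violation(rt_av, rt_a, rt_v, t_grid):
    """Miller's race-model inequality: under independent auditory and
    visual channels, F_AV(t) <= F_A(t) + F_V(t) for every time t.
    Returns F_AV(t) - min(F_A(t) + F_V(t), 1) on a grid of times;
    positive values indicate integration beyond statistical facilitation."""
    def ecdf(rts, t):
        # Empirical CDF: fraction of response times at or below each t
        rts = np.sort(np.asarray(rts, dtype=float))
        return np.searchsorted(rts, t, side="right") / len(rts)

    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return ecdf(rt_av, t_grid) - bound
```

In practice the difference is evaluated at the fast quantiles of the response-time distribution, where violations of the inequality are expected to appear.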
Auditory emotional cues enhance visual perception.
Zeelenberg, René; Bocanegra, Bruno R
2010-04-01
Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by emotional cues as compared to neutral cues. When the cue was presented visually we replicated the emotion-induced impairment found in other studies. Our results suggest emotional stimuli have a twofold effect on perception. They impair perception by reflexively attracting attention at the expense of competing stimuli. However, emotional stimuli also induce a nonspecific perceptual enhancement that carries over onto other stimuli when competition is reduced, for example, by presenting stimuli in different modalities. Copyright 2009 Elsevier B.V. All rights reserved.
Gender differences in identifying emotions from auditory and visual stimuli.
Waaramaa, Teija
2017-12-01
The present study focused on gender differences in emotion identification from auditory and visual stimuli produced by two male and two female actors. Differences in emotion identification from nonsense samples, language samples and prolonged vowels were investigated. The study also examined whether auditory stimuli can convey the emotional content of speech without visual stimuli, and whether visual stimuli can convey the emotional content of speech without auditory stimuli. The aim was to gain a better knowledge of vocal attributes and a more holistic understanding of the nonverbal communication of emotion. Females tended to be more accurate in emotion identification than males. Voice quality parameters played a role in emotion identification in both genders. The emotional content of the samples was best conveyed by nonsense sentences, better than by prolonged vowels or a shared native language of the speakers and participants. Thus, vocal nonverbal communication tends to affect the interpretation of emotion even in the absence of language. The emotional stimuli were better recognized from visual than from auditory stimuli by both genders. Visual information about speech may not be connected to the language; instead, it may be based on the human ability to understand the kinetic movements in speech production more readily than the characteristics of the acoustic cues.
Effects of Visual and Verbal Stimuli on Children's Learning of Concrete and Abstract Prose.
ERIC Educational Resources Information Center
Hannafin, Michael J.; Carey, James O.
A total of 152 fourth grade students participated in a study examining the effects of visual-only, verbal-only, and combined audiovisual prose presentations and different elaboration strategy conditions on student learning of abstract and concrete prose. The students saw and/or heard a short animated story, during which they were instructed to…
Is Visually Guided Reaching in Early Infancy a Myth?
ERIC Educational Resources Information Center
Clifton, Rachel K.; And Others
1993-01-01
Seven infants were tested between the ages of 6 and 25 weeks to see how they would grasp objects presented in full light and glowing or sounding objects presented in total darkness. In all three conditions, the infants first grasped the objects at nearly the same time, suggesting that internal stimuli, not visual guidance, directed their actions.…
Eye Movements Affect Postural Control in Young and Older Females
Thomas, Neil M.; Bampouras, Theodoros M.; Donovan, Tim; Dewhurst, Susan
2016-01-01
Visual information is used for postural stabilization in humans. However, little is known about how eye movements prevalent in everyday life interact with the postural control system in older individuals. Therefore, the present study assessed the effects of stationary gaze fixations, smooth pursuits, and saccadic eye movements, with combinations of absent, fixed and oscillating large-field visual backgrounds to generate different forms of retinal flow, on postural control in healthy young and older females. Participants were presented with computer generated visual stimuli, whilst postural sway and gaze fixations were simultaneously assessed with a force platform and eye tracking equipment, respectively. The results showed that fixed backgrounds and stationary gaze fixations attenuated postural sway. In contrast, oscillating backgrounds and smooth pursuits increased postural sway. There were no differences regarding saccades. There were also no differences in postural sway or gaze errors between age groups in any visual condition. The stabilizing effect of the fixed visual stimuli show how retinal flow and extraocular factors guide postural adjustments. The destabilizing effect of oscillating visual backgrounds and smooth pursuits may be related to more challenging conditions for determining body shifts from retinal flow, and more complex extraocular signals, respectively. Because the older participants matched the young group's performance in all conditions, decreases of posture and gaze control during stance may not be a direct consequence of healthy aging. Further research examining extraocular and retinal mechanisms of balance control and the effects of eye movements, during locomotion, is needed to better inform fall prevention interventions. PMID:27695412
Brain reactivity to visual food stimuli after moderate-intensity exercise in children.
Masterson, Travis D; Kirwan, C Brock; Davidson, Lance E; Larson, Michael J; Keller, Kathleen L; Fearnbach, S Nicole; Evans, Alyssa; LeCheminant, James D
2017-09-19
Exercise may play a role in moderating eating behaviors. The purpose of this study was to examine the effect of an acute bout of exercise on neural responses to visual food stimuli in children ages 8-11 years. We hypothesized that acute exercise would result in reduced activity in reward areas of the brain. Using a randomized cross-over design, 26 healthy weight children completed two separate laboratory conditions (exercise; sedentary). During the exercise condition, each participant completed a 30-min bout of exercise at moderate-intensity (~ 67% HR maximum) on a motor-driven treadmill. During the sedentary session, participants sat continuously for 30 min. Neural responses to high- and low-calorie pictures of food were determined immediately following each condition using functional magnetic resonance imaging. There was a significant exercise condition*stimulus-type (high- vs. low-calorie pictures) interaction in the left hippocampus and right medial temporal lobe (p < 0.05). Main effects of exercise condition were observed in the left posterior central gyrus (reduced activation after exercise) (p < 0.05) and the right anterior insula (greater activation after exercise) (p < 0.05). The left hippocampus, right medial temporal lobe, left posterior central gyrus, and right anterior insula appear to be activated by visual food stimuli differently following an acute bout of exercise compared to a non-exercise sedentary session in 8-11 year-old children. Specifically, an acute bout of exercise results in greater activation to high-calorie and reduced activation to low-calorie pictures of food in both the left hippocampus and right medial temporal lobe. This study shows that response to external food cues can be altered by exercise and understanding this mechanism will inform the development of future interventions aimed at altering energy intake in children.
Freezing Behavior as a Response to Sexual Visual Stimuli as Demonstrated by Posturography
Mouras, Harold; Lelard, Thierry; Ahmaidi, Said; Godefroy, Olivier; Krystkowiak, Pierre
2015-01-01
Posturographic changes in motivational conditions remain largely unexplored in the context of embodied cognition. Over the last decade, sexual motivation has been used as a canonical working model to study motivated social interactions. The objective of this study was to explore posturographic variations in response to visual sexual videos as compared to neutral videos. Our results demonstrate a freezing-type response to sexually explicit stimuli compared to the other conditions, as shown by significantly decreased standard deviations of (i) the center of pressure displacement along the mediolateral and anteroposterior axes and (ii) the center of pressure's displacement surface. These results underscore the complexity of the motor correlates of sexual motivation, considered a canonical functional context for studying the motor correlates of motivated social interactions. PMID:25992571
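As a hedged aside (not code from the study), sway measures of the kind reported above, standard deviations of center-of-pressure (COP) displacement plus a displacement-surface area, can be sketched in Python. The COP arrays and the 95% confidence ellipse used as a surface proxy are illustrative assumptions, since the paper's exact surface computation is not given here:

```python
import numpy as np

def cop_sway_metrics(cop_x, cop_y):
    """Summarize sway from center-of-pressure (COP) samples.

    cop_x: mediolateral COP displacement per frame (e.g. in cm)
    cop_y: anteroposterior COP displacement per frame
    Returns (SD mediolateral, SD anteroposterior, 95% ellipse area),
    the ellipse area serving as a proxy for the displacement surface.
    """
    x = np.asarray(cop_x, dtype=float)
    y = np.asarray(cop_y, dtype=float)
    sd_ml = x.std(ddof=1)  # sway variability, side to side
    sd_ap = y.std(ddof=1)  # sway variability, front to back
    # 95% confidence ellipse area from covariance eigenvalues;
    # 5.991 is the chi-square 95th percentile with 2 degrees of freedom
    eigvals = np.linalg.eigvalsh(np.cov(x, y))
    area_95 = np.pi * 5.991 * np.sqrt(eigvals[0] * eigvals[1])
    return sd_ml, sd_ap, area_95
```

Lower values on all three measures would correspond to the freezing-type (reduced sway) response described in the abstract.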
When do letter features migrate? A boundary condition for feature-integration theory.
Butler, B E; Mewhort, D J; Browse, R A
1991-01-01
Feature-integration theory postulates that a lapse of attention will allow letter features to change position and to recombine as illusory conjunctions (Treisman & Paterson, 1984). To study such errors, we used a set of uppercase letters known to yield illusory conjunctions in each of three tasks. The first, a bar-probe task, showed whole-character mislocations but not errors based on feature migration and recombination. The second, a two-alternative forced-choice detection task, allowed subjects to focus on the presence or absence of subletter features and showed illusory conjunctions based on feature migration and recombination. The third was also a two-alternative forced-choice detection task, but we manipulated the subjects' knowledge of the shape of the stimuli: In the case-certain condition, the stimuli were always in uppercase, but in the case-uncertain condition, the stimuli could appear in either upper- or lowercase. Subjects in the case-certain condition produced illusory conjunctions based on feature recombination, whereas subjects in the case-uncertain condition did not. The results suggest that when subjects can view the stimuli as feature groups, letter features regroup as illusory conjunctions; when subjects encode the stimuli as letters, whole items may be mislocated, but subletter features are not. Thus, illusory conjunctions reflect the subject's processing strategy, rather than the architecture of the visual system.
Aoyama, Atsushi; Haruyama, Tomohiro; Kuriki, Shinya
2013-09-01
Unconscious monitoring of multimodal stimulus changes enables humans to effectively sense the external environment. Such automatic change detection is thought to be reflected in auditory and visual mismatch negativity (MMN) and mismatch negativity fields (MMFs). These are event-related potentials and magnetic fields, respectively, evoked by deviant stimuli within a sequence of standard stimuli, and both are typically studied during irrelevant visual tasks that cause the stimuli to be ignored. Due to the sensitivity of MMN/MMF to potential effects of explicit attention to vision, however, it is unclear whether multisensory co-occurring changes can purely facilitate early sensory change detection reciprocally across modalities. We adopted a tactile task involving the reading of Braille patterns as a neutral ignore condition, while measuring magnetoencephalographic responses to concurrent audiovisual stimuli that were infrequently deviated either in auditory, visual, or audiovisual dimensions; 1000-Hz standard tones were switched to 1050-Hz deviant tones and/or two-by-two standard check patterns displayed on both sides of visual fields were switched to deviant reversed patterns. The check patterns were set to be faint enough so that the reversals could be easily ignored even during Braille reading. While visual MMFs were virtually undetectable even for visual and audiovisual deviants, significant auditory MMFs were observed for auditory and audiovisual deviants, originating from bilateral supratemporal auditory areas. Notably, auditory MMFs were significantly enhanced for audiovisual deviants from about 100 ms post-stimulus, as compared with the summation responses for auditory and visual deviants or for each of the unisensory deviants recorded in separate sessions. 
Evidenced by high tactile task performance with unawareness of visual changes, we conclude that Braille reading can successfully suppress explicit attention and that simultaneous multisensory changes can implicitly strengthen automatic change detection from an early stage in a cross-sensory manner, at least in the vision to audition direction.
Ten Brink, Antonia F.; Nijboer, Tanja C. W.; Bergsma, Douwe P.; Barton, Jason J. S.; Van der Stigchel, Stefan
2015-01-01
In patients with visual hemifield defects residual visual functions may be present, a phenomenon called blindsight. The superior colliculus (SC) is part of the spared pathway that is considered to be responsible for this phenomenon. Given that the SC processes input from different modalities and is involved in the programming of saccadic eye movements, the aim of the present study was to examine whether multimodal integration can modulate oculomotor competition in the damaged hemifield. We conducted two experiments with eight patients who had visual field defects due to lesions that affected the retinogeniculate pathway but spared the direct retinotectal SC pathway. They had to make saccades to an auditory target that was presented alone or in combination with a visual stimulus. The visual stimulus could either be spatially coincident with the auditory target (possibly enhancing the auditory target signal), or spatially disparate to the auditory target (possibly competing with the auditory target signal). For each patient we compared the saccade endpoint deviation in these two bimodal conditions with the endpoint deviation in the unimodal condition (auditory target alone). In all seven hemianopic patients, saccade accuracy was affected only by visual stimuli in the intact, but not in the blind visual field. In one patient with a more limited quadrantanopia, a facilitation effect of the spatially coincident visual stimulus was observed. We conclude that multisensory integration is infrequent in the blind field of patients with hemianopia. PMID:25835952
Surround-Masking Affects Visual Estimation Ability
Jastrzebski, Nicola R.; Hugrass, Laila E.; Crewther, Sheila G.; Crewther, David P.
2017-01-01
Visual estimation of numerosity involves the discrimination of magnitude between two distributions or perceptual sets that vary in number of elements. How performance on such estimation depends on peripheral sensory stimulation is unclear, even in typically developing adults. Here, we varied the central and surround contrast of stimuli that comprised a visual estimation task in order to determine whether mechanisms involved with the removal of unessential visual input functionally contributes toward number acuity. The visual estimation judgments of typically developed adults were significantly impaired for high but not low contrast surround stimulus conditions. The center and surround contrasts of the stimuli also differentially affected the accuracy of numerosity estimation depending on whether fewer or more dots were presented. Remarkably, observers demonstrated the highest mean percentage accuracy across stimulus conditions in the discrimination of more elements when the surround contrast was low and the background luminance of the central region containing the elements was dark (black center). Conversely, accuracy was severely impaired during the discrimination of fewer elements when the surround contrast was high and the background luminance of the central region was mid level (gray center). These findings suggest that estimation ability is functionally related to the quality of low-order filtration of unessential visual information. These surround masking results may help understanding of the poor visual estimation ability commonly observed in developmental dyscalculia. PMID:28360845
Effects of visual familiarity for words on interhemispheric cooperation for lexical processing.
Yoshizaki, K
2001-12-01
The purpose of this study was to examine the effects of the visual familiarity of words on interhemispheric lexical processing. Words and pseudowords were tachistoscopically presented in the left, the right, or both visual fields. Two types of words, Katakana-familiar and Hiragana-familiar, were used as the word stimuli. The former refers to words more frequently written in Katakana script, and the latter to words written predominantly in Hiragana script. Two conditions were set up in terms of the visual familiarity of a word: in the visually familiar condition, words were presented in their familiar script form, and in the visually unfamiliar condition, in the less familiar script form. Thirty-two right-handed Japanese students were asked to make lexical decisions. Results showed that a bilateral gain, i.e., superior performance in the bilateral visual field condition relative to the unilateral conditions, was obtained only in the visually familiar condition, not in the visually unfamiliar condition. These results suggest that the visual familiarity of a word influences interhemispheric lexical processing.
Gravity and perceptual stability during translational head movement on earth and in microgravity.
Jaekl, P; Zikovitz, D C; Jenkin, M R; Jenkin, H L; Zacher, J E; Harris, L R
2005-01-01
We measured the amount of visual movement judged consistent with translational head movement under normal and microgravity conditions. Subjects wore a virtual reality helmet in which the ratio of the movement of the world to the movement of the head (visual gain) was variable. Using the method of adjustment under normal gravity 10 subjects adjusted the visual gain until the visual world appeared stable during head movements that were either parallel or orthogonal to gravity. Using the method of constant stimuli under normal gravity, seven subjects moved their heads and judged whether the virtual world appeared to move "with" or "against" their movement for several visual gains. One subject repeated the constant stimuli judgements in microgravity during parabolic flight. The accuracy of judgements appeared unaffected by the direction or absence of gravity. Only the variability appeared affected by the absence of gravity. These results are discussed in relation to discomfort during head movements in microgravity. Copyright © 2005 Elsevier Ltd. All rights reserved.
The threshold for conscious report: Signal loss and response bias in visual and frontal cortex.
van Vugt, Bram; Dagnino, Bruno; Vartak, Devavrat; Safaai, Houman; Panzeri, Stefano; Dehaene, Stanislas; Roelfsema, Pieter R
2018-05-04
Why are some visual stimuli consciously detected, whereas others remain subliminal? We investigated the fate of weak visual stimuli in the visual and frontal cortex of awake monkeys trained to report stimulus presence. Reported stimuli were associated with strong sustained activity in the frontal cortex, and frontal activity was weaker and quickly decayed for unreported stimuli. Information about weak stimuli could be lost at successive stages en route from the visual to the frontal cortex, and these propagation failures were confirmed through microstimulation of area V1. Fluctuations in response bias and sensitivity during perception of identical stimuli were traced back to prestimulus brain-state markers. A model in which stimuli become consciously reportable when they elicit a nonlinear ignition process in higher cortical areas explained our results. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
The Benefit of a Visually Guided Beamformer in a Dynamic Speech Task
Roverud, Elin; Streeter, Timothy; Mason, Christine R.; Kidd, Gerald
2017-01-01
The aim of this study was to evaluate the performance of a visually guided hearing aid (VGHA) under conditions designed to capture some aspects of “real-world” communication settings. The VGHA uses eye gaze to steer the acoustic look direction of a highly directional beamforming microphone array. Although the VGHA has been shown to enhance speech intelligibility for fixed-location, frontal targets, it is currently not known whether these benefits persist in the face of frequent changes in location of the target talker that are typical of conversational turn-taking. Participants were 14 young adults, 7 with normal hearing and 7 with bilateral sensorineural hearing impairment. Target stimuli were sequences of 12 question–answer pairs that were embedded in a mixture of competing conversations. The participant’s task was to respond via a key press after each answer indicating whether it was correct or not. Spatialization of the stimuli and microphone array processing were done offline using recorded impulse responses, before presentation over headphones. The look direction of the array was steered according to the eye movements of the participant as they followed a visual cue presented on a widescreen monitor. Performance was compared for a “dynamic” condition in which the target stimulus moved between three locations, and a “fixed” condition with a single target location. The benefits of the VGHA over natural binaural listening observed in the fixed condition were reduced in the dynamic condition, largely because visual fixation was less accurate. PMID:28758567
Effects of spatial cues on color-change detection in humans
Herman, James P.; Bogadhi, Amarender R.; Krauzlis, Richard J.
2015-01-01
Studies of covert spatial attention have largely used motion, orientation, and contrast stimuli as these features are fundamental components of vision. The feature dimension of color is also fundamental to visual perception, particularly for catarrhine primates, and yet very little is known about the effects of spatial attention on color perception. Here we present results using novel dynamic color stimuli in both discrimination and color-change detection tasks. We find that our stimuli yield comparable discrimination thresholds to those obtained with static stimuli. Further, we find that an informative spatial cue improves performance and speeds response time in a color-change detection task compared with an uncued condition, similar to what has been demonstrated for motion, orientation, and contrast stimuli. Our results demonstrate the use of dynamic color stimuli for an established psychophysical task and show that color stimuli are well suited to the study of spatial attention. PMID:26047359
Audiovisual Interval Size Estimation Is Associated with Early Musical Training.
Abel, Mary Kathryn; Li, H Charles; Russo, Frank A; Schlaug, Gottfried; Loui, Psyche
2016-01-01
Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants' ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants' ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception.
Increased Early Processing of Task-Irrelevant Auditory Stimuli in Older Adults
Tusch, Erich S.; Alperin, Brittany R.; Holcomb, Phillip J.; Daffner, Kirk R.
2016-01-01
The inhibitory deficit hypothesis of cognitive aging posits that older adults’ inability to adequately suppress processing of irrelevant information is a major source of cognitive decline. Prior research has demonstrated that in response to task-irrelevant auditory stimuli there is an age-associated increase in the amplitude of the N1 wave, an ERP marker of early perceptual processing. Here, we tested predictions derived from the inhibitory deficit hypothesis that the age-related increase in N1 would be 1) observed under an auditory-ignore, but not auditory-attend condition, 2) attenuated in individuals with high executive capacity (EC), and 3) augmented by increasing cognitive load of the primary visual task. ERPs were measured in 114 well-matched young, middle-aged, young-old, and old-old adults, designated as having high or average EC based on neuropsychological testing. Under the auditory-ignore (visual-attend) task, participants ignored auditory stimuli and responded to rare target letters under low and high load. Under the auditory-attend task, participants ignored visual stimuli and responded to rare target tones. Results confirmed an age-associated increase in N1 amplitude to auditory stimuli under the auditory-ignore but not auditory-attend task. Contrary to predictions, EC did not modulate the N1 response. The load effect was the opposite of expectation: the N1 to task-irrelevant auditory events was smaller under high load. Finally, older adults did not simply fail to suppress the N1 to auditory stimuli in the task-irrelevant modality; they generated a larger response than to identical stimuli in the task-relevant modality. In summary, several of the study’s findings do not fit the inhibitory-deficit hypothesis of cognitive aging, which may need to be refined or supplemented by alternative accounts. PMID:27806081
Temporal Influence on Awareness
1995-12-01
[List-of-figures residue from the source document: entries describing test setup timing (measured vs. expected modal delays, in ms); Experiment I, in which visual and auditory stimuli were presented simultaneously (visual-auditory delay = 0 ms, visual-visual delay = 0 ms); and Experiment II, in which visual and auditory stimuli were presented in order (visual-auditory delay = 0 ms, visual-visual delay = variable).]
Electrophysiological correlates of looking at paintings and its association with art expertise.
Pang, C Y; Nadal, M; Müller-Paul, J S; Rosenberg, R; Klein, C
2013-04-01
This study investigated the electrocortical correlates of art expertise, as defined by a newly developed, content-valid and internally consistent 23-item art expertise questionnaire, in N=27 participants who varied in their degree of art expertise. Participants each viewed 50 paintings, filtering-distorted versions of these paintings, and plain colour stimuli under free-viewing conditions whilst the EEG was recorded from 64 channels. Results revealed P3b-/LPC-like bilateral posterior event-related potentials (ERPs) that were larger over the right hemisphere than over the left. Art expertise correlated negatively with the amplitude of the ERP responses to paintings and control stimuli. We conclude that art expertise is associated with reduced ERP responses to visual stimuli in general, which can be considered to reflect increased neural efficiency due to extensive practice in the contemplation of visual art. Copyright © 2012 Elsevier B.V. All rights reserved.
On the role of covarying functions in stimulus class formation and transfer of function.
Markham, Rebecca G; Markham, Michael R
2002-01-01
This experiment investigated whether directly trained covarying functions are necessary for stimulus class formation and transfer of function in humans. Initial class training was designed to establish two respondent-based stimulus classes by pairing two visual stimuli with shock and two other visual stimuli with no shock. Next, two operant discrimination functions were trained to one stimulus of each putative class. The no-shock group received the same training and testing in all phases, except no stimuli were ever paired with shock. The data indicated that skin conductance response conditioning did not occur for the shock groups or for the no-shock group. Tests showed transfer of the established discriminative functions, however, only for the shock groups, indicating the formation of two stimulus classes only for those participants who received respondent class training. The results suggest that transfer of function does not depend on first covarying the stimulus class functions. PMID:12507017
Hedger, Nicholas; Gray, Katie L H; Garner, Matthew; Adams, Wendy J
2016-09-01
Given capacity limits, only a subset of stimuli give rise to a conscious percept. Neurocognitive models suggest that humans have evolved mechanisms that operate without awareness and prioritize threatening stimuli over neutral stimuli in subsequent perception. In this meta-analysis, we review evidence for this 'standard hypothesis' emanating from 3 widely used, but rather different experimental paradigms that have been used to manipulate awareness. We found a small pooled threat-bias effect in the masked visual probe paradigm, a medium effect in the binocular rivalry paradigm and highly inconsistent effects in the breaking continuous flash suppression paradigm. Substantial heterogeneity was explained by the stimulus type: the only threat stimuli that were robustly prioritized across all 3 paradigms were fearful faces. Meta regression revealed that anxiety may modulate threat-biases, but only under specific presentation conditions. We also found that insufficiently rigorous awareness measures, inadequate control of response biases and low level confounds may undermine claims of genuine unconscious threat processing. Considering the data together, we suggest that uncritical acceptance of the standard hypothesis is premature: current behavioral evidence for threat-sensitive visual processing that operates without awareness is weak. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
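The pooled threat-bias effects described above come from a meta-analysis. As a minimal illustrative sketch only (the authors' actual model likely uses random-effects pooling and meta-regression, which are not reproduced here), fixed-effect inverse-variance pooling of per-study effect sizes can be written as:

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooling of study effect sizes.

    effects: per-study standardized effects (e.g. Hedges' g)
    variances: their sampling variances
    Returns (pooled effect, standard error of the pooled effect).
    """
    weights = [1.0 / v for v in variances]  # precision weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se
```

Studies with smaller sampling variance receive larger weights, which is why a few precise studies can dominate a pooled estimate and why heterogeneity across stimulus types matters for interpretation.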
3D graphics, virtual reality, and motion-onset visual evoked potentials in neurogaming.
Beveridge, R; Wilson, S; Coyle, D
2016-01-01
A brain-computer interface (BCI) offers movement-free control of a computer application, achieved by reading and translating the cortical activity of the brain into semantic control signals. Motion-onset visual evoked potentials (mVEP) are neural potentials employed in BCIs and occur when motion-related stimuli are attended visually. mVEP dynamics are correlated with the position and timing of the moving stimuli. To investigate the feasibility of utilizing the mVEP paradigm with video games of various graphical complexities, including those of commercial quality, we conducted three studies over four separate sessions comparing the performance of classifying five mVEP responses under variations in graphical complexity and style, in-game distractions, and display parameters surrounding mVEP stimuli. To investigate the feasibility of utilizing contemporary presentation modalities in neurogaming, one of the studies compared mVEP classification performance when stimuli were presented using the Oculus Rift virtual reality headset. Results from 31 independent subjects were analyzed offline. Classification performances ranged up to 90%, with variations in graphical complexity having limited effect on mVEP performance, thus demonstrating the feasibility of using the mVEP paradigm within BCI-based neurogaming. © 2016 Elsevier B.V. All rights reserved.
Comparative psychophysics of bumblebee and honeybee colour discrimination and object detection.
Dyer, Adrian G; Spaethe, Johannes; Prack, Sabina
2008-07-01
Bumblebee (Bombus terrestris) discrimination of targets with broadband reflectance spectra was tested using simultaneous viewing conditions, enabling an accurate determination of the perceptual limit of colour discrimination excluding confounds from memory coding (experiment 1). The level of colour discrimination in bumblebees, and honeybees (Apis mellifera) (based upon previous observations), exceeds predictions of models considering receptor noise in the honeybee. Bumblebee and honeybee photoreceptors are similar in spectral shape and spacing, but bumblebees exhibit significantly poorer colour discrimination in behavioural tests, suggesting possible differences in spatial or temporal signal processing. Detection of stimuli in a Y-maze was evaluated for bumblebees (experiment 2) and honeybees (experiment 3). Honeybees detected stimuli containing both green-receptor contrast and colour contrast at a visual angle of approximately 5 degrees, whilst stimuli that contained only colour contrast were only detected at a visual angle of 15 degrees. Bumblebees were able to detect these stimuli at visual angles of 2.3 degrees and 2.7 degrees, respectively. A comparison of the experiments suggests a tradeoff between colour discrimination and colour detection in these two species, limited by the need to pool colour signals to overcome receptor noise. We discuss the colour processing differences and possible adaptations to specific ecological habitats.
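A hedged note on the detection thresholds above: the visual angle a stimulus subtends follows the standard geometry angle = 2·atan(size / (2·distance)). A minimal Python helper (function and parameter names are illustrative, not from the paper):

```python
import math

def visual_angle_deg(stimulus_size, viewing_distance):
    """Visual angle subtended by a stimulus, in degrees.

    Standard geometry: angle = 2 * atan(size / (2 * distance)).
    stimulus_size and viewing_distance must use the same unit.
    """
    return math.degrees(2 * math.atan(stimulus_size / (2 * viewing_distance)))
```

For example, a target seen at a 5-degree visual angle from a given distance is one whose diameter equals 2 · distance · tan(2.5°); halving that diameter roughly halves the angle for small angles.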
Early visual processing is enhanced in the midluteal phase of the menstrual cycle.
Lusk, Bethany R; Carr, Andrea R; Ranson, Valerie A; Bryant, Richard A; Felmingham, Kim L
2015-12-01
Event-related potential (ERP) studies have revealed an early attentional bias in the processing of unpleasant emotional images in women. Recent neuroimaging data suggest there are significant differences in cortical emotional processing according to menstrual phase. This study examined the impact of menstrual phase on visual emotional processing in women compared to men. ERPs were recorded from 28 early follicular women, 29 midluteal women, and 27 men while they completed a passive viewing task of neutral and low- and high-arousing pleasant and unpleasant images. There was a significant effect of menstrual phase on early visual processing, as midluteal women displayed significantly greater P1 amplitude at occipital regions to all visual images compared to men. Both midluteal and early follicular women displayed larger N1 amplitudes than men to the visual images, although this reached significance only for the midluteal group. No sex or menstrual phase differences were apparent in the later N2, P3, or LPP. A condition effect demonstrated greater P3 and LPP amplitude to highly arousing unpleasant images relative to all other stimulus conditions. These results indicate that women show greater early automatic visual processing than men, and suggest that this effect is particularly strong in midluteal women at the earliest stage of visual attention processing. Our findings highlight the importance of considering menstrual phase when examining sex differences in the cortical processing of visual stimuli. Copyright © 2015 Elsevier Ltd. All rights reserved.
The effects of auditive and visual settings on perceived restoration likelihood
Jahncke, Helena; Eriksson, Karolina; Naula, Sanna
2015-01-01
Research has so far paid little attention to how environmental sounds might affect restorative processes. The aim of the present study was to investigate the effects of auditive and visual stimuli on perceived restoration likelihood and attitudes towards varying environmental resting conditions. Assuming a condition of cognitive fatigue, all participants (N = 40) were presented with images of an open plan office and urban nature, each under four sound conditions (nature sound, quiet, broadband noise, office noise). After the presentation of each setting/sound combination, the participants assessed it according to restorative qualities, restoration likelihood and attitude. The results mainly showed predicted effects of the sound manipulations on the perceived restorative qualities of the settings. Further, significant interactions between auditive and visual stimuli were found for all measures. Both nature sounds and quiet more positively influenced evaluations of the nature setting compared to the office setting. When office noise was present, both settings received poor evaluations. The results agree with expectations that nature sounds and quiet areas support restoration, while office noise and broadband noise (e.g. ventilation, traffic noise) do not. The findings illustrate the significance of environmental sound for restorative experience. PMID:25599752
Effect of visual distortion on postural balance in a full immersion stereoscopic environment
NASA Astrophysics Data System (ADS)
Faubert, Jocelyn; Allard, Remy
2004-05-01
This study attempted to determine the influence of non-linear visual movements on our capacity to maintain postural control. An 8x8x8 foot CAVE immersive virtual environment was used. Body sway recordings were obtained for both head and lower back (lumbar 2-3) positions. The subjects were presented with visual stimuli for periods of 62.5 seconds. Subjects were asked to stand still on one foot while viewing stimuli consisting of multiplied sine waves generating movement undulation of a textured surface (waves moving in a checkerboard pattern). Three wave amplitudes were tested: 4 feet, 2 feet, and 1 foot. Two viewing conditions were also used: observers looking at a point 36 inches in front of their feet, and observers looking at a distance near the horizon. The results were compiled using an instability index, and the data showed a profound and consistent effect of visual disturbances on postural balance, particularly for the x (side-to-side) movement. We have demonstrated that non-linear visual distortions, similar to those generated by progressive ophthalmic lenses of the kind used for presbyopia correction, can generate significant postural instability. This instability is particularly evident for side-to-side body movement and is most evident in the near viewing condition.
Silent music reading: auditory imagery and visuotonal modality transfer in singers and non-singers.
Hoppe, Christian; Splittstößer, Christoph; Fliessbach, Klaus; Trautner, Peter; Elger, Christian E; Weber, Bernd
2014-11-01
In daily life, responses are often facilitated by anticipatory imagery of expected targets which are announced by associated stimuli from different sensory modalities. Silent music reading represents an intriguing case of visuotonal modality transfer in working memory as it induces highly defined auditory imagery on the basis of presented visuospatial information (i.e. musical notes). Using functional MRI and a delayed sequence matching-to-sample paradigm, we compared brain activations during retention intervals (10s) of visual (VV) or tonal (TT) unimodal maintenance versus visuospatial-to-tonal modality transfer (VT) tasks. Visual or tonal sequences were comprised of six elements, white squares or tones, which were low, middle, or high regarding vertical screen position or pitch, respectively (presentation duration: 1.5s). For the cross-modal condition (VT, session 3), the visuospatial elements from condition VV (session 1) were re-defined as low, middle or high "notes" indicating low, middle or high tones from condition TT (session 2), respectively, and subjects had to match tonal sequences (probe) to previously presented note sequences. Tasks alternately had low or high cognitive load. To evaluate possible effects of music reading expertise, 15 singers and 15 non-musicians were included. Scanner task performance was excellent in both groups. Despite identity of applied visuospatial stimuli, visuotonal modality transfer versus visual maintenance (VT>VV) induced "inhibition" of visual brain areas and activation of primary and higher auditory brain areas which exceeded auditory activation elicited by tonal stimulation (VT>TT). This transfer-related visual-to-auditory activation shift occurred in both groups but was more pronounced in experts. Frontoparietal areas were activated by higher cognitive load but not by modality transfer. 
The auditory brain showed a potential to anticipate expected auditory target stimuli on the basis of non-auditory information and sensory brain activation rather mirrored expectation than stimulation. Silent music reading probably relies on these basic neurocognitive mechanisms. Copyright © 2014 Elsevier Inc. All rights reserved.
Age, familiarity, and visual processing schemes.
De Haven, D T; Roberts-Gray, C
1978-10-01
In a partial-report task adults and 5-yr.-old children identified stimuli of two types (common objects and familiar common objects) in two representations (black-and-white line drawings or full color photographs). It was hypothesized that familiar items and photographic representation would enhance the children's accuracy. Although both children and adults were more accurate when the stimuli were from the familiar set, children performed poorly in all stimulus conditions. Results suggest that the age difference in this task reflects the "concrete" nature of the perceptual process in children.
The effect of encoding conditions on learning in the prototype distortion task.
Lee, Jessica C; Livesey, Evan J
2017-06-01
The prototype distortion task demonstrates that it is possible to learn about a category of physically similar stimuli through mere observation. However, there have been few attempts to test whether different encoding conditions affect learning in this task. This study compared prototypicality gradients produced under incidental learning conditions in which participants performed a visual search task, with those produced under intentional learning conditions in which participants were required to memorize the stimuli. Experiment 1 showed that similar prototypicality gradients could be obtained for category endorsement and familiarity ratings, but also found (weaker) prototypicality gradients in the absence of exposure. In Experiments 2 and 3, memorization was found to strengthen prototypicality gradients in familiarity ratings in comparison to visual search, but there were no group differences in participants' ability to discriminate between novel and presented exemplars. Although the Search groups in Experiments 2 and 3 produced prototypicality gradients, they were no different in magnitude to those produced in the absence of stimulus exposure in Experiment 1, suggesting that incidental learning during visual search was not conducive to producing prototypicality gradients. This study suggests that learning in the prototype distortion task is not implicit in the sense of resulting automatically from exposure, is affected by the nature of encoding, and should be considered in light of potential learning-at-test effects.
Papera, Massimiliano; Richards, Anne
2016-05-01
Exogenous allocation of attentional resources allows the visual system to encode and maintain representations of stimuli in visual working memory (VWM). However, limits in the processing capacity to allocate resources can prevent unexpected visual stimuli from gaining access to VWM and thereby to consciousness. Using a novel approach to create unbiased stimuli of increasing saliency, we investigated visual processing during a visual search task in individuals who show a high or low propensity to neglect unexpected stimuli. When propensity to inattention is high, ERP recordings show a diminished amplification concomitantly with a decrease in theta band power during the N1 latency, followed by poor target enhancement during the N2 latency. Furthermore, a later modulation in the P3 latency was also found in individuals showing a propensity to visual neglect, suggesting that more effort is required for conscious maintenance of visual information in VWM. Effects during early stages of processing (N80 and P1) were also observed, suggesting that sensitivity to contrasts and medium-to-high spatial frequencies may be modulated by low-level saliency (albeit no statistical group differences were found). In accordance with the Global Workspace Model, our data indicate that a lack of resources in low-level processors and visual attention may be responsible for the failure to "ignite" a state of high-level activity spread across several brain areas that is necessary for stimuli to access awareness. These findings may aid in the development of diagnostic tests and interventions to detect and reduce the propensity to neglect unexpected visual stimuli. © 2016 Society for Psychophysiological Research.
Modality-dependent effect of motion information in sensory-motor synchronised tapping.
Ono, Kentaro
2018-05-14
Synchronised action is important for everyday life. Generally, the auditory domain is more sensitive for coding temporal information, and previous studies have shown that auditory-motor synchronisation is much more precise than visuo-motor synchronisation. Interestingly, adding motion information improves synchronisation with visual stimuli, and the advantage of the auditory modality seems to diminish. However, whether adding motion information also improves auditory-motor synchronisation remains unknown. This study compared tapping accuracy with a stationary or moving stimulus in both auditory and visual modalities. Participants were instructed to tap in synchrony with the onset of a sound or flash in the stationary condition, while these stimuli were perceived as moving from side to side in the motion condition. The results demonstrated that synchronised tapping with a moving visual stimulus was significantly more accurate than tapping with a stationary visual stimulus, as previous studies have shown. However, tapping with a moving auditory stimulus was significantly poorer than tapping with a stationary auditory stimulus. Although motion information impaired auditory-motor synchronisation, an advantage of the auditory modality over the visual modality still existed. These findings are likely the result of the higher temporal resolution of the auditory domain, which may stem from physiological and structural differences between the auditory and visual pathways in the brain. Copyright © 2018 Elsevier B.V. All rights reserved.
TVA-Based Assessment of Visual Attention Using Line-Drawings of Fruits and Vegetables
Wang, Tianlu; Gillebert, Celine R.
2018-01-01
Visuospatial attention and short-term memory allow us to prioritize, select, and briefly maintain part of the visual information that reaches our senses. These cognitive abilities are quantitatively accounted for by Bundesen’s theory of visual attention (TVA; Bundesen, 1990). Previous studies have suggested that TVA-based assessments are sensitive to inter-individual differences in spatial bias, visual short-term memory capacity, top-down control, and processing speed in healthy volunteers as well as in patients with various neurological and psychiatric conditions. However, most neuropsychological assessments of attention and executive functions, including TVA-based assessment, make use of alphanumeric stimuli and/or are performed verbally, which can pose difficulties for individuals who have troubles processing letters or numbers. Here we examined the reliability of TVA-based assessments when stimuli are used that are not alphanumeric, but instead based on line-drawings of fruits and vegetables. We compared five TVA parameters quantifying the aforementioned cognitive abilities, obtained by modeling accuracy data on a whole/partial report paradigm using conventional alphabet stimuli versus the food stimuli. Significant correlations were found for all TVA parameters, indicating a high parallel-form reliability. Split-half correlations assessing internal reliability, and correlations between predicted and observed data assessing goodness-of-fit were both significant. Our results provide an indication that line-drawings of fruits and vegetables can be used for a reliable assessment of attention and short-term memory. PMID:29535660
Preattentive binding of auditory and visual stimulus features.
Winkler, István; Czigler, István; Sussman, Elyse; Horváth, János; Balázs, Lászlo
2005-02-01
We investigated the role of attention in feature binding in the auditory and the visual modality. One auditory and one visual experiment used the mismatch negativity (MMN and vMMN, respectively) event-related potential to index the memory representations created from stimulus sequences, which were either task-relevant and, therefore, attended or task-irrelevant and ignored. In the latter case, the primary task was a continuous demanding within-modality task. The test sequences were composed of two frequently occurring stimuli, which differed from each other in two stimulus features (standard stimuli) and two infrequently occurring stimuli (deviants), which combined one feature from one standard stimulus with the other feature of the other standard stimulus. Deviant stimuli elicited MMN responses of similar parameters across the different attentional conditions. These results suggest that the memory representations involved in the MMN deviance detection response encoded the frequently occurring feature combinations whether or not the test sequences were attended. A possible alternative to the memory-based interpretation of the visual results, the elicitation of the McCollough color-contingent aftereffect, was ruled out by the results of our third experiment. The current results are compared with those supporting the attentive feature integration theory. We conclude that (1) with comparable stimulus paradigms, similar results have been obtained in the two modalities, (2) there exist preattentive processes of feature binding, however, (3) conjoining features within rich arrays of objects under time pressure and/or long-term retention of the feature-conjoined memory representations may require attentive processes.
Odors Bias Time Perception in Visual and Auditory Modalities
Yue, Zhenzhu; Gao, Tianyu; Chen, Lihan; Wu, Jiashuang
2016-01-01
Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than for the visual modality. Moreover, by comparing performance across the initial and final blocks of the experiment, we found that odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, with a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and these were constrained by sensory modality, the valence of the emotional events, and target duration.
Biases in time perception could be accounted for by a framework of attentional deployment between the inducers (odors) and emotionally neutral stimuli (visual dots and sound beeps). PMID:27148143
Audio-visual integration through the parallel visual pathways.
Kaposvári, Péter; Csete, Gergő; Bognár, Anna; Csibri, Péter; Tóth, Eszter; Szabó, Nikoletta; Vécsei, László; Sáry, Gyula; Tamás Kincses, Zsigmond
2015-10-22
Audio-visual integration has been shown to be present in a wide range of different conditions, some of which are processed through the dorsal, and others through the ventral visual pathway. Whereas neuroimaging studies have revealed integration-related activity in the brain, there has been no imaging study of the possible role of segregated visual streams in audio-visual integration. We set out to determine how the different visual pathways participate in this communication. We investigated how audio-visual integration can be supported through the dorsal and ventral visual pathways during the double flash illusion. Low-contrast and chromatic isoluminant stimuli were used to drive preferentially the dorsal and ventral pathways, respectively. In order to identify the anatomical substrates of the audio-visual interaction in the two conditions, the psychophysical results were correlated with white matter integrity as measured by diffusion tensor imaging. The psychophysical data revealed a robust double flash illusion in both conditions. A correlation between the psychophysical results and local fractional anisotropy was found in the occipito-parietal white matter in the low-contrast condition, while a similar correlation was found in the infero-temporal white matter in the chromatic isoluminant condition. Our results indicate that both of the parallel visual pathways may play a role in the audio-visual interaction. Copyright © 2015. Published by Elsevier B.V.
Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses
Molloy, Katharine; Griffiths, Timothy D.; Lavie, Nilli
2015-01-01
Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying “inattentional deafness”—the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼100 ms from tone onset) auditory evoked activity in superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 “awareness” response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness. SIGNIFICANCE STATEMENT The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity. 
Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in sensory processing of auditory stimuli, resulting in inattentional deafness. The dynamic “push–pull” pattern of load effects on visual and auditory processing furthers our understanding of both the neural mechanisms of attention and of cross-modal effects across visual and auditory processing. These results also offer an explanation for many previous failures to find cross-modal effects in experiments where the visual load effects may not have coincided directly with auditory sensory processing. PMID:26658858
Dissociating emotion-induced blindness and hypervision.
Bocanegra, Bruno R; Zeelenberg, René
2009-12-01
Previous findings suggest that emotional stimuli sometimes improve (emotion-induced hypervision) and sometimes impair (emotion-induced blindness) the visual perception of subsequent neutral stimuli. We hypothesized that these differential carryover effects might be due to 2 distinct emotional influences in visual processing. On the one hand, emotional stimuli trigger a general enhancement in the efficiency of visual processing that can carry over onto other stimuli. On the other hand, emotional stimuli benefit from a stimulus-specific enhancement in later attentional processing at the expense of competing visual stimuli. We investigated whether detrimental (blindness) and beneficial (hypervision) carryover effects of emotion in perception can be dissociated within a single experimental paradigm. In 2 experiments, we manipulated the temporal competition for attention between an emotional cue word and a subsequent neutral target word by varying cue-target interstimulus interval (ISI) and cue visibility. Interestingly, emotional cues impaired target identification at short ISIs but improved target identification when competition was diminished by either increasing ISI or reducing cue visibility, suggesting that emotional significance of stimuli can improve and impair visual performance through distinct perceptual mechanisms.
Hayne, Harlene; Jaeger, Katja; Sonne, Trine; Gross, Julien
2016-11-01
The visual recognition memory (VRM) paradigm has been widely used to measure memory during infancy and early childhood; it has also been used to study memory in human and nonhuman adults. Typically, participants are familiarized with stimuli that have no special significance to them. Under these conditions, greater attention to the novel stimulus during the test (i.e., novelty preference) is used as the primary index of memory. Here, we took a novel approach to the VRM paradigm and tested 1-, 2-, and 3-year-olds using photos of meaningful stimuli that were drawn from the participants' own environment (e.g., photos of their mother, father, siblings, house). We also compared their performance to that of participants of the same age who were tested in an explicit pointing version of the VRM task. Two- and 3-year-olds exhibited a strong familiarity preference for some, but not all, of the meaningful stimuli; 1-year-olds did not. At no age did participants exhibit the kind of novelty preference that is commonly used to define memory in the VRM task. Furthermore, when compared to pointing, looking measures provided a rough approximation of recognition memory, but in some instances, the looking measure underestimated retention. The use of meaningful stimuli raises important questions about the way in which visual attention is interpreted in the VRM paradigm, and may provide new opportunities to measure memory during infancy and early childhood. © 2016 Wiley Periodicals, Inc.
Weidemann, Gabrielle; Satkunarajah, Michelle; Lovibond, Peter F.
2016-01-01
Can conditioning occur without conscious awareness of the contingency between the stimuli? We trained participants on two separate reaction time tasks that ensured attention to the experimental stimuli. The tasks were then interleaved to create a differential Pavlovian contingency between visual stimuli from one task and an airpuff stimulus from the other. Many participants were unaware of the contingency and failed to show differential eyeblink conditioning, despite attending to a salient stimulus that was contingently and contiguously related to the airpuff stimulus over many trials. Manipulation of awareness by verbal instruction dramatically increased awareness and differential eyeblink responding. These findings cast doubt on dual-system theories, which propose an automatic associative system independent of cognition, and provide strong evidence that cognitive processes associated with awareness play a causal role in learning. PMID:26905277
Li, Yuanqing; Wang, Fangyi; Chen, Yongbin; Cichocki, Andrzej; Sejnowski, Terrence
2017-09-25
At cocktail parties, our brains often simultaneously receive visual and auditory information. Although the cocktail party problem has been widely investigated under auditory-only settings, the effects of audiovisual inputs have not. This study explored the effects of audiovisual inputs in a simulated cocktail party. In our fMRI experiment, each congruent audiovisual stimulus was a synthesis of 2 facial movie clips, each of which could be classified into 1 of 2 emotion categories (crying and laughing). Visual-only (faces) and auditory-only stimuli (voices) were created by extracting the visual and auditory contents from the synthesized audiovisual stimuli. Subjects were instructed to selectively attend to 1 of the 2 objects contained in each stimulus and to judge its emotion category in the visual-only, auditory-only, and audiovisual conditions. The neural representations of the emotion features were assessed by calculating decoding accuracy and brain pattern-related reproducibility index based on the fMRI data. We compared the audiovisual condition with the visual-only and auditory-only conditions and found that audiovisual inputs enhanced the neural representations of emotion features of the attended objects instead of the unattended objects. This enhancement might partially explain the benefits of audiovisual inputs for the brain to solve the cocktail party problem. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Brodeur, Mathieu B.; Dionne-Dostie, Emmanuelle; Montreuil, Tina; Lepage, Martin
2010-01-01
There are currently stimuli with published norms available to study several psychological aspects of language and visual cognition. Norms represent valuable information that can be used as experimental variables or systematically controlled to limit their potential influence on another experimental manipulation. The present work proposes 480 photo stimuli that have been normalized for name, category, familiarity, visual complexity, object agreement, viewpoint agreement, and manipulability. Stimuli are also available in grayscale, blurred, scrambled, and line-drawn versions. This set of objects, the Bank Of Standardized Stimuli (BOSS), was created specifically to meet the needs of scientists in cognition, vision and psycholinguistics who work with photo stimuli. PMID:20532245
Peyrin, C; Démonet, J F; N'Guyen-Morel, M A; Le Bas, J F; Valdois, S
2011-09-01
A visual attention (VA) span disorder has been reported in dyslexic children as potentially responsible for their poor reading outcome. The purpose of the current paper was to identify the cerebral correlates of this VA span disorder. For this purpose, 12 French dyslexic children with severe reading and VA span disorders and 12 age-matched control children performed a categorisation task under fMRI. Two conditions, flanked and isolated, were designed; both involved simultaneous processing of multiple visual elements but taxed visual attention differently. For skilled readers, flanked stimulus processing activated a large bilateral cortical network comprising the superior and inferior parietal cortex, the inferior temporal cortex, the striate and extrastriate visual cortex, the middle frontal cortex and the anterior cingulate cortex, while the less attention-demanding isolated-stimulus task activated only the inferior occipito-temporal cortex bilaterally. With respect to controls, the dyslexic children showed significantly reduced activation within bilateral parietal and temporal areas during flanked processing, but no difference during the isolated condition. The neural correlates of the processes involved in attention-demanding multi-element processing tasks were more specifically addressed by contrasting the flanked and isolated conditions. This contrast elicited activation of the left precuneus/superior parietal lobule in the controls, but not in the dyslexic children. These findings provide new insights into the role of parietal regions, in particular the left superior parietal lobule, in the visual attention span and in developmental dyslexia. Copyright © 2010 Elsevier Inc. All rights reserved.
The perception of isoluminant coloured stimuli of amblyopic eye and defocused eye
NASA Astrophysics Data System (ADS)
Krumina, Gunta; Ozolinsh, Maris; Ikaunieks, Gatis
2008-09-01
In routine eye examination, visual acuity is usually determined using standard charts with black letters on a white background; however, contrast and colour are important characteristics of visual perception. The purpose of this research was to study the perception of isoluminant coloured stimuli in cases of true and simulated amblyopia. We estimated the difference in visual acuity between isoluminant coloured stimuli and high-contrast black-and-white stimuli for true and simulated amblyopia. Tests were generated on a computer screen. Visual acuity was measured using two kinds of charts: standard achromatic stimuli (black symbols on a white background) and isoluminant coloured stimuli (white symbols on a yellow background; grey symbols on a blue, green or red background). The isoluminant tests thus had colour contrast only, with no luminance contrast. Visual acuity with the standard method and with the colour tests was measured for subjects with good visual acuity, using the best vision correction where necessary. The same was performed for subjects with a defocused eye and with true amblyopia. Defocus was produced with optical lenses placed in front of the normal eye. The results obtained with the isoluminant colour charts revealed worse visual acuity compared with that estimated with the standard high-contrast method (black symbols on a white background).
Schwartzman, José Salomão; Velloso, Renata de Lima; D'Antino, Maria Eloísa Famá; Santos, Silvana
2015-05-01
To compare visual fixation on social stimuli in Rett syndrome (RS) and autism spectrum disorder (ASD) patients. Visual fixation on social stimuli was analyzed in 14 female RS patients (age range 4-30 years), 11 male ASD patients (age range 4-20 years), and 17 children with typical development (TD). Patients were exposed to three different pictures (two of human faces and one with social and non-social stimuli), each presented for 8 seconds on the screen of a computer attached to eye-tracking equipment. The percentage of visual fixation on social stimuli was significantly higher in the RS group than in the ASD and even the TD groups. Visual fixation on social stimuli appears to be one more endophenotype distinguishing RS from ASD.
Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli
Störmer, Viola S.; McDonald, John J.; Hillyard, Steven A.
2009-01-01
The question of whether attention makes sensory impressions appear more intense has been a matter of debate for over a century. Recent psychophysical studies have reported that attention increases apparent contrast of visual stimuli, but the issue continues to be debated. We obtained converging neurophysiological evidence from human observers as they judged the relative contrast of visual stimuli presented to the left and right visual fields following a lateralized auditory cue. Cross-modal cueing of attention boosted the apparent contrast of the visual target in association with an enlarged neural response in the contralateral visual cortex that began within 100 ms after target onset. The magnitude of the enhanced neural response was positively correlated with perceptual reports of the cued target being higher in contrast. The results suggest that attention increases the perceived contrast of visual stimuli by boosting early sensory processing in the visual cortex. PMID:20007778
Focused and shifting attention in children with heavy prenatal alcohol exposure.
Mattson, Sarah N; Calarco, Katherine E; Lang, Aimée R
2006-05-01
Attention deficits are a hallmark of the teratogenic effects of alcohol. However, characterization of these deficits remains inconclusive. Children with heavy prenatal alcohol exposure and nonexposed controls were evaluated using a paradigm consisting of three conditions: visual focus, auditory focus, and auditory-visual shift of attention. For the focus conditions, participants responded manually to visual or auditory targets. For the shift condition, participants alternated responses between visual targets and auditory targets. For the visual focus condition, alcohol-exposed children had lower accuracy and slower reaction time for all intertarget intervals (ITIs), while on the auditory focus condition, alcohol-exposed children were less accurate but displayed slower reaction time only on the longest ITI. Finally, for the shift condition, the alcohol-exposed group was as accurate as controls but had slower reaction times. These results indicate that children with heavy prenatal alcohol exposure have pervasive deficits in visual focused attention and deficits in maintaining auditory attention over time. However, no deficits were noted in the ability to disengage and reengage attention when required to shift attention between visual and auditory stimuli, although reaction times to shift were slower. Copyright (c) 2006 APA, all rights reserved.
De Loof, Esther; Van Opstal, Filip; Verguts, Tom
2016-04-01
Theories on visual awareness claim that predicted stimuli reach awareness faster than unpredicted ones. In the current study, we disentangle whether prior information about the upcoming stimulus affects visual awareness of stimulus location (i.e., individuation) by modulating processing efficiency or threshold setting. Analogous research on stimulus identification revealed that prior information modulates threshold setting. However, as identification and individuation are two functionally and neurally distinct processes, the mechanisms underlying identification cannot simply be extrapolated to individuation. The goal of this study was therefore to investigate how individuation is influenced by prior information about the upcoming stimulus. To do so, a drift diffusion model was fitted to estimate the processing efficiency and threshold setting for predicted versus unpredicted stimuli in a cued individuation paradigm. Participants were asked to locate a picture, following a cue that was congruent, incongruent or neutral with respect to the picture's identity. Pictures were individuated faster in the congruent and neutral conditions than in the incongruent condition. In the diffusion model analysis, processing efficiency was not significantly different across conditions. However, the threshold setting was significantly higher following an incongruent cue compared to both congruent and neutral cues. Our results indicate that predictive information about the upcoming stimulus influences visual awareness by shifting the threshold for individuation rather than by enhancing processing efficiency. Copyright © 2016 Elsevier Ltd. All rights reserved.
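The processing-efficiency versus threshold-setting distinction in this abstract maps onto the drift rate and boundary separation parameters of the drift diffusion model. A minimal simulation sketch (all parameter values are arbitrary illustrations, not fitted estimates from the study) shows how raising the decision threshold alone lengthens response times while drift rate, i.e. processing efficiency, is held constant:

```python
import random

def simulate_ddm_trial(drift, threshold, noise=1.0, dt=0.005,
                       non_decision=0.3, rng=None):
    """Euler simulation of one drift diffusion trial. Evidence starts
    at 0 and accumulates until it hits +threshold (correct response)
    or -threshold (error). Returns (correct, rt_in_seconds)."""
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
    return x > 0, t + non_decision

def mean_rt(drift, threshold, n=1000, seed=1):
    """Mean response time over n simulated trials."""
    rng = random.Random(seed)
    rts = [simulate_ddm_trial(drift, threshold, rng=rng)[1] for _ in range(n)]
    return sum(rts) / n

# Raising the boundary (as after an incongruent cue) slows responses
# even though the drift rate (processing efficiency) is unchanged.
slow = mean_rt(drift=2.0, threshold=1.5)
fast = mean_rt(drift=2.0, threshold=1.0)
```

In a fit to real data the inference runs the other way: slower responses explained by a higher estimated boundary, with drift rate equal across conditions, point to threshold setting rather than efficiency.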
Altered prefrontal function with aging: insights into age-associated performance decline.
Solbakk, Anne-Kristin; Fuhrmann Alpert, Galit; Furst, Ansgar J; Hale, Laura A; Oga, Tatsuhide; Chetty, Sundari; Pickard, Natasha; Knight, Robert T
2008-09-26
We examined the effects of aging on visuo-spatial attention. Participants performed a bi-field visual selective attention task consisting of infrequent target and task-irrelevant novel stimuli randomly embedded among repeated standards in either attended or unattended visual fields. Blood oxygenation level dependent (BOLD) responses to the different classes of stimuli were measured using functional magnetic resonance imaging. The older group had slower reaction times to targets, and committed more false alarms but had comparable detection accuracy to young controls. Attended target and novel stimuli activated comparable widely distributed attention networks, including anterior and posterior association cortex, in both groups. The older group had reduced spatial extent of activation in several regions, including prefrontal, basal ganglia, and visual processing areas. In particular, the anterior cingulate and superior frontal gyrus showed more restricted activation in older compared with young adults across all attentional conditions and stimulus categories. The spatial extent of activations correlated with task performance in both age groups, but the regional pattern of association between hemodynamic responses and behavior differed between the groups. Whereas the young subjects relied on posterior regions, the older subjects engaged frontal areas. The results indicate that aging alters the functioning of neural networks subserving visual attention, and that these changes are related to cognitive performance.
Working memory enhances visual perception: evidence from signal detection analysis.
Soto, David; Wriglesworth, Alice; Bahrami-Balani, Alex; Humphreys, Glyn W
2010-03-01
We show that perceptual sensitivity to visual stimuli can be modulated by matches between the contents of working memory (WM) and stimuli in the visual field. Observers were presented with an object cue (to hold in WM or to merely attend) and subsequently had to identify a brief target presented within a colored shape. The cue could be re-presented in the display, where it surrounded either the target (on valid trials) or a distractor (on invalid trials). Perceptual identification of the target, as indexed by A', was enhanced on valid relative to invalid trials but only when the cue was kept in WM. There was minimal effect of the cue when it was merely attended and not kept in WM. Verbal cues were as effective as visual cues at modulating perceptual identification, and the effects were independent of the effects of target saliency. Matches to the contents of WM influenced perceptual sensitivity even under conditions that minimized competition for selecting the target. WM cues were also effective when targets were less likely to fall in a repeated WM stimulus than in other stimuli in the search display. There were no effects of WM on decisional criteria, in contrast to sensitivity. The findings suggest that reentrant feedback from WM can affect early stages of perceptual processing.
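The sensitivity index A' used in this study is Grier's standard nonparametric analogue of d'. A minimal sketch of its computation (the hit and false-alarm rates below are invented for illustration, not taken from the study):

```python
def a_prime(hit_rate, fa_rate):
    """Grier's nonparametric sensitivity index A'.
    0.5 = chance discrimination, 1.0 = perfect discrimination."""
    if hit_rate >= fa_rate:
        return 0.5 + ((hit_rate - fa_rate) * (1 + hit_rate - fa_rate)) \
                     / (4 * hit_rate * (1 - fa_rate))
    return 0.5 - ((fa_rate - hit_rate) * (1 + fa_rate - hit_rate)) \
                 / (4 * fa_rate * (1 - hit_rate))

# Hypothetical valid vs invalid WM-cue trials: with the same
# false-alarm rate, a higher hit rate on valid trials yields higher A'.
valid = a_prime(0.85, 0.20)
invalid = a_prime(0.70, 0.20)
```

Because A' combines hit and false-alarm rates, a cue-validity effect on A' reflects a change in perceptual sensitivity rather than a shift in the decision criterion, which is exactly the dissociation the abstract reports.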
Serchi, V; Peruzzi, A; Cereatti, A; Della Croce, U
2016-01-01
The knowledge of the visual strategies adopted while walking in cognitively engaging environments is extremely valuable. Analyzing gaze when a treadmill and a virtual reality environment are used as motor rehabilitation tools is therefore critical. Being completely unobtrusive, remote eye-trackers are the most appropriate way to measure the point of gaze. Still, point-of-gaze measurements are affected by experimental conditions such as head range of motion and visual stimuli. This study assesses the usability limits and measurement reliability of a remote eye-tracker during treadmill walking while visual stimuli are projected. During treadmill walking, the head remained within the remote eye-tracker workspace. Generally, the quality of the point-of-gaze measurements declined as the distance from the remote eye-tracker increased, and data loss occurred for large gaze angles. The stimulus location (a dot-target) did not influence point-of-gaze accuracy, precision, or trackability during either standing or walking. Similar results were obtained when the dot-target was replaced by a static or moving 2D target and "region of interest" analysis was applied. These findings support the feasibility of using a remote eye-tracker for the analysis of gaze during treadmill walking in virtual reality environments.
Tyndall, Ian; Ragless, Liam; O'Hora, Denis
2018-04-01
The present study examined whether increasing visual perceptual load differentially affected both Socially Meaningful and Non-socially Meaningful auditory stimulus awareness in neurotypical (NT, n = 59) adults and Autism Spectrum Disorder (ASD, n = 57) adults. On a target trial, an unexpected critical auditory stimulus (CAS), either a Non-socially Meaningful ('beep' sound) or Socially Meaningful ('hi') stimulus, was played concurrently with the presentation of the visual task. Under conditions of low visual perceptual load both NT and ASD samples reliably noticed the CAS at similar rates (77-81%), whether the CAS was Socially Meaningful or Non-socially Meaningful. However, during high visual perceptual load NT and ASD participants reliably noticed the meaningful CAS (NT = 71%, ASD = 67%), but NT participants were unlikely to notice the Non-meaningful CAS (20%), whereas ASD participants reliably noticed it (80%), suggesting an inability to engage selective attention to ignore non-salient irrelevant distractor stimuli in ASD. Copyright © 2018 Elsevier Inc. All rights reserved.
fMRI during natural sleep as a method to study brain function during early childhood.
Redcay, Elizabeth; Kennedy, Daniel P; Courchesne, Eric
2007-12-01
Many techniques to study early functional brain development lack the whole-brain spatial resolution that is available with fMRI. We utilized a relatively novel method in which fMRI data were collected from children during natural sleep. Stimulus-evoked responses to auditory and visual stimuli as well as stimulus-independent functional networks were examined in typically developing 2-4-year-old children. Reliable fMRI data were collected from 13 children during presentation of auditory stimuli (tones, vocal sounds, and nonvocal sounds) in a block design. Twelve children were presented with visual flashing lights at 2.5 Hz. When analyses combined all three types of auditory stimulus conditions as compared to rest, activation included bilateral superior temporal gyri/sulci (STG/S) and right cerebellum. Direct comparisons between conditions revealed significantly greater responses to nonvocal sounds and tones than to vocal sounds in a number of brain regions including superior temporal gyrus/sulcus, medial frontal cortex and right lateral cerebellum. The response to visual stimuli was localized to occipital cortex. Furthermore, stimulus-independent functional connectivity MRI analyses (fcMRI) revealed functional connectivity between STG and other temporal regions (including contralateral STG) and medial and lateral prefrontal regions. Functional connectivity with an occipital seed was localized to occipital and parietal cortex. In sum, 2-4 year olds showed a differential fMRI response both between stimulus modalities and between stimuli in the auditory modality. Furthermore, superior temporal regions showed functional connectivity with numerous higher-order regions during sleep. We conclude that the use of sleep fMRI may be a valuable tool for examining functional brain organization in young children.
Representation of visual symbols in the visual word processing network.
Muayqil, Taim; Davies-Thompson, Jodie; Barton, Jason J S
2015-03-01
Previous studies have shown that word processing involves a predominantly left-sided occipitotemporal network. Words are a form of symbolic representation, in that they are arbitrary perceptual stimuli that represent other objects, actions or concepts. Lesions of parts of the visual word processing network can cause alexia, which can be associated with difficulty processing other types of symbols such as musical notation or road signs. We investigated whether components of the visual word processing network were also activated by other types of symbols. In 16 music-literate subjects, we defined the visual word network using fMRI and examined responses to four symbolic categories: visual words, musical notation, instructive symbols (e.g. traffic signs), and flags and logos. For each category we compared responses not only to scrambled stimuli, but also to similar stimuli that lacked symbolic meaning. The left visual word form area and a homologous right fusiform region responded similarly to all four categories, but equally to both symbolic and non-symbolic equivalents. Greater response to symbolic than non-symbolic stimuli occurred only in the left inferior frontal and middle temporal gyri, but only for words, and in the case of the left inferior frontal gyri, also for musical notation. A whole-brain analysis comparing symbolic versus non-symbolic stimuli revealed a distributed network of inferior temporooccipital and parietal regions that differed for different symbols. The fusiform gyri are involved in processing the form of many symbolic stimuli, but not specifically for stimuli with symbolic content. Selectivity for stimuli with symbolic content only emerges in the visual word network at the level of the middle temporal and inferior frontal gyri, but is specific for words and musical notation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Auditory presentation and synchronization in Adobe Flash and HTML5/JavaScript Web experiments.
Reimers, Stian; Stewart, Neil
2016-09-01
Substantial recent research has examined the accuracy of presentation durations and response time measurements for visually presented stimuli in Web-based experiments, with a general conclusion that accuracy is acceptable for most kinds of experiments. However, many areas of behavioral research use auditory stimuli instead of, or in addition to, visual stimuli. Much less is known about auditory accuracy using standard Web-based testing procedures. We used a millisecond-accurate Black Box Toolkit to measure the actual durations of auditory stimuli and the synchronization of auditory and visual presentation onsets. We examined the distribution of timings for 100 presentations of auditory and visual stimuli across two computers with different specs, three commonly used browsers, and code written in either Adobe Flash or JavaScript. We also examined different coding options for attempting to synchronize the auditory and visual onsets. Overall, we found that auditory durations were very consistent, but that the lags between visual and auditory onsets varied substantially across browsers and computer systems.
Hogarth, Lee; Dickinson, Anthony; Duka, Theodora
2003-08-01
Incentive salience theory states that acquired bias in selective attention for stimuli associated with tobacco-smoke reinforcement controls the selective performance of tobacco-seeking and tobacco-taking behaviour. To support this theory, we assessed whether a stimulus that had acquired control of a tobacco-seeking response in a discrimination procedure would command the focus of visual attention in a subsequent test phase. Smokers received discrimination training in which an instrumental key-press response was followed by tobacco-smoke reinforcement when one visual discriminative stimulus (S+) was present, but not when another stimulus (S-) was present. The skin conductance response to the S+ and S- assessed whether Pavlovian conditioning to the S+ had taken place. In a subsequent test phase, the S+ and S- were presented in the dot-probe task and the allocation of the focus of visual attention to these stimuli was measured. Participants learned to perform the instrumental tobacco-seeking response selectively in the presence of the S+ relative to the S-, and showed a greater skin conductance response to the S+ than the S-. In the subsequent test phase, participants allocated the focus of visual attention to the S+ in preference to the S-. Correlation analysis revealed that the visual attentional bias for the S+ was positively associated with the number of times the S+ had been paired with tobacco-smoke in training, with the skin conductance response to the S+, and with subjective craving to smoke. Furthermore, increased exposure to tobacco-smoke in the natural environment was associated with reduced discrimination learning. These data demonstrate that discriminative stimuli signalling that tobacco-smoke reinforcement is available acquire the capacity to command selective attention and to elicit instrumental tobacco-seeking behaviour.
Compatibility of motion facilitates visuomotor synchronization.
Hove, Michael J; Spivey, Michael J; Krumhansl, Carol L
2010-12-01
Prior research indicates that synchronized tapping performance is very poor with flashing visual stimuli compared with auditory stimuli. Three finger-tapping experiments compared flashing visual metronomes with visual metronomes containing a spatial component, either compatible, incompatible, or orthogonal to the tapping action. In Experiment 1, synchronization success rates increased dramatically for spatiotemporal sequences of both geometric and biological forms over flashing sequences. In Experiment 2, synchronization performance was best when target sequences and movements were directionally compatible (i.e., simultaneously down), followed by orthogonal stimuli, and was poorest for incompatible moving stimuli and flashing stimuli. In Experiment 3, synchronization performance was best with auditory sequences, followed by compatible moving stimuli, and was worst for flashing and fading stimuli. Results indicate that visuomotor synchronization improves dramatically with compatible spatial information. However, an auditory advantage in sensorimotor synchronization persists.
Synchronization with competing visual and auditory rhythms: bouncing ball meets metronome.
Hove, Michael J; Iversen, John R; Zhang, Allen; Repp, Bruno H
2013-07-01
Synchronization of finger taps with periodically flashing visual stimuli is known to be much more variable than synchronization with an auditory metronome. When one of these rhythms is the synchronization target and the other serves as a distracter at various temporal offsets, strong auditory dominance is observed. However, it has recently been shown that visuomotor synchronization improves substantially with moving stimuli such as a continuously bouncing ball. The present study pitted a bouncing ball against an auditory metronome in a target-distracter synchronization paradigm, with the participants being auditory experts (musicians) and visual experts (video gamers and ball players). Synchronization was still less variable with auditory than with visual target stimuli in both groups. For musicians, auditory stimuli tended to be more distracting than visual stimuli, whereas the opposite was the case for the visual experts. Overall, there was no main effect of distracter modality. Thus, a distracting spatiotemporal visual rhythm can be as effective as a distracting auditory rhythm in its capacity to perturb synchronous movement, but its effectiveness also depends on modality-specific expertise.
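In tapping studies like the two above, "variability" standardly means the standard deviation of tap-to-target asynchronies, with the mean asynchrony (typically slightly negative, taps leading the beat) reported alongside it. A minimal sketch of that analysis (the tap times below are fabricated for illustration):

```python
def asynchronies(tap_times, target_times):
    """Signed asynchrony of each tap relative to its nearest target
    onset (negative = tap precedes the target, the typical pattern)."""
    return [tap - min(target_times, key=lambda t: abs(t - tap))
            for tap in tap_times]

def sync_stats(tap_times, target_times):
    """Return (mean asynchrony, SD of asynchronies); the SD is the
    usual index of synchronization variability."""
    asyncs = asynchronies(tap_times, target_times)
    mean = sum(asyncs) / len(asyncs)
    var = sum((a - mean) ** 2 for a in asyncs) / len(asyncs)
    return mean, var ** 0.5

# Hypothetical taps around a 500 ms (2 Hz) metronome, each leading the
# beat by about 30 ms with a little trial-to-trial jitter.
targets = [i * 0.5 for i in range(8)]
jitter = [0.01, -0.02, 0.0, 0.02, -0.01, 0.01, 0.0, -0.01]
taps = [t - 0.03 + d for t, d in zip(targets, jitter)]
mean_async, sd_async = sync_stats(taps, targets)
```

Comparing `sd_async` across auditory, moving-visual, and flashing-visual target conditions is the kind of contrast on which the auditory-dominance conclusion rests.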
Tang, Xiaoyu; Li, Chunlin; Li, Qi; Gao, Yulin; Yang, Weiping; Yang, Jingjing; Ishikawa, Soushirou; Wu, Jinglong
2013-10-11
Utilizing the high temporal resolution of event-related potentials (ERPs), we examined how visual spatial or temporal cues modulate auditory stimulus processing. The visual spatial cue (VSC) induces orienting of attention to spatial locations; the visual temporal cue (VTC) induces orienting of attention to temporal intervals. Participants were instructed to respond to auditory targets. Behavioral responses to auditory stimuli following the VSC were faster and more accurate than those following the VTC. VSC and VTC had the same effect on the auditory N1 (150-170 ms after stimulus onset). The mean amplitude of the auditory P1 (90-110 ms) in the VSC condition was larger than that in the VTC condition, and the mean amplitude of the late positivity (300-420 ms) in the VTC condition was larger than that in the VSC condition. These findings suggest that the modulations of auditory stimulus processing by visually induced spatial and temporal orienting of attention are different, but partially overlapping. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Impairment in Emotional Modulation of Attention and Memory in Schizophrenia
Walsh-Messinger, Julie; Ramirez, Paul Michael; Wong, Philip; Antonius, Daniel; Aujero, Nicole; McMahon, Kevin; Opler, Lewis A.; Malaspina, Dolores
2014-01-01
Emotion plays a critical role in cognition and goal-directed behavior via complex interconnections between the emotional and motivational systems. It has been hypothesized that the impairment in goal-directed behavior widely noted in schizophrenia may result from defects in the interaction between the neural (ventral) emotional system and (rostral) cortical processes. The present study examined the impact of emotion on attention and memory in schizophrenia. Twenty-five individuals with schizophrenia-related psychosis and 25 healthy control subjects were administered a computerized task in which they were asked to search for target images during a rapid serial visual presentation of pictures. Target stimuli were either positive, negative, or neutral images presented at either 200ms or 700ms lag. Additionally, a visual hedonic task was used to assess differences between the schizophrenia group and controls on ratings of valence and arousal from the picture stimuli. Compared to controls, individuals with schizophrenia detected fewer emotional images under both the 200ms and 700ms lag conditions. Multivariate analyses showed that the schizophrenia group also detected fewer positive images under the 700ms lag condition and fewer negative images under the 200ms lag condition. Individuals with schizophrenia reported higher pleasantness and unpleasantness ratings than controls in response to neutral stimuli, while controls reported higher arousal ratings for neutral and positive stimuli compared to the schizophrenia group. These results highlight dysfunction in the neural modulation of emotion, attention, and cortical processing in schizophrenia, adding to the growing but mixed body of literature on emotion processing in the disorder. PMID:24910446
Caffeine Improves Left Hemisphere Processing of Positive Words
Kuchinke, Lars; Lux, Vanessa
2012-01-01
A positivity advantage is known in emotional word recognition in that positive words are consistently processed faster and with fewer errors compared to emotionally neutral words. A similar advantage is not evident for negative words. Results of divided visual field studies, where stimuli are presented in either the left or right visual field and are initially processed by the contra-lateral brain hemisphere, point to a specificity of the language-dominant left hemisphere. The present study examined this effect by showing that the intake of caffeine further enhanced the recognition performance of positive, but not negative or neutral stimuli compared to a placebo control group. Because this effect was only present in the right visual field/left hemisphere condition, and based on the close link between caffeine intake and dopaminergic transmission, this result points to a dopaminergic explanation of the positivity advantage in emotional word recognition. PMID:23144893
Processing of voices in deafness rehabilitation by auditory brainstem implant.
Coez, Arnaud; Zilbovicius, Monica; Ferrary, Evelyne; Bouccara, Didier; Mosnier, Isabelle; Ambert-Dahan, Emmanuèle; Kalamarides, Michel; Bizaguet, Eric; Syrota, André; Samson, Yves; Sterkers, Olivier
2009-10-01
The superior temporal sulcus (STS) is specifically involved in processing the human voice. Profound acquired deafness due to post-meningitis ossified cochlea, and bilateral vestibular schwannoma in neurofibromatosis type 2 patients, are two indications for auditory brainstem implantation (ABI). In order to objectively measure cortical voice processing in a group of ABI patients, we studied the activation of the human temporal voice areas (TVA) by H(2)(15)O PET, performed in a group of implanted deaf adults (n=7) with more than two years of auditory brainstem implant experience and an average intelligibility score of 17%+/-17 [mean+/-SD]. Relative cerebral blood flow (rCBF) was measured in the three following conditions: during silence, during passive listening to human voice, and during passive listening to non-voice stimuli. Compared to silence, the activations induced by voice and non-voice stimuli were bilaterally located in the superior temporal regions. However, compared to non-voice stimuli, the voice stimuli did not induce specific supplementary activation of the TVA along the STS. Comparison of the ABI group with a normal-hearing control group (n=7) showed that TVA activations were significantly greater in the control group. ABI allowed the transmission of sound stimuli to temporal brain regions but failed to transmit the specific cues of the human voice to the TVA. Moreover, during the silent condition, brain visual regions showed higher rCBF in the ABI group, whereas temporal brain regions showed higher rCBF in the control group. ABI patients had consequently developed enhanced visual strategies to keep interacting with their environment.
Implications of differences of echoic and iconic memory for the design of multimodal displays
NASA Astrophysics Data System (ADS)
Glaser, Daniel Shields
It has been well documented that dual-task performance is more accurate when each task is based on a different sensory modality. It is also well documented that the memory for each sense has an unequal duration, particularly visual (iconic) and auditory (echoic) sensory memory. In this dissertation I address whether differences in sensory memory duration (e.g. iconic vs. echoic) have implications for the design of a multimodal display. Since echoic memory persists for seconds, in contrast to iconic memory which persists only for milliseconds, one of my hypotheses was that in a visual-auditory dual task condition, performance will be better if the visual task is completed before the auditory task than vice versa. In Experiment 1 I investigated whether the ability to recall multi-modal stimuli is affected by recall order, with each mode being responded to separately. In Experiment 2, I investigated the effects of stimulus order and recall order on the ability to recall information from a multi-modal presentation. In Experiment 3 I investigated the effect of presentation order using a more realistic task. In Experiment 4 I investigated whether manipulating the presentation order of stimuli of different modalities improves humans' ability to combine the information from the two modalities in order to make decisions based on pre-learned rules. As hypothesized, accuracy was greater when visual stimuli were responded to first and auditory stimuli second. Also as hypothesized, performance was improved by not presenting both sequences at the same time, limiting the perceptual load. Contrary to my expectations, overall performance was better when a visual sequence was presented before the audio sequence. Though presenting a visual sequence prior to an auditory sequence lengthens the visual retention interval, it also provides time for visual information to be recoded to a more robust form without disruption.
Experiment 4 demonstrated that decision making requiring the integration of visual and auditory information is enhanced by reducing workload and promoting a strategic use of echoic memory. A framework for predicting the results of Experiments 1-4 is proposed and evaluated.
ERIC Educational Resources Information Center
Teubert, Manuel; Lohaus, Arnold; Fassbender, Ina; Vierhaus, Marc; Spangler, Sibylle; Borchert, Sonja; Freitag, Claudia; Goertz, Claudia; Graf, Frauke; Gudi, Helene; Kolling, Thorsten; Lamm, Bettina; Keller, Heidi; Knopf, Monika; Schwarzer, Gudrun
2012-01-01
This longitudinal study examined the influence of stimulus material on attention and expectation learning in the visual expectation paradigm. Female faces were used as attention-attracting stimuli, and non-meaningful visual stimuli of comparable complexity (Greebles) were used as low attention-attracting stimuli. Expectation learning performance…
Thinking about touch facilitates tactile but not auditory processing.
Anema, Helen A; de Haan, Alyanne M; Gebuis, Titia; Dijkerman, H Chris
2012-05-01
Mental imagery is considered to be important for normal conscious experience. It is most frequently investigated in the visual, auditory and motor domain (imagination of movement), while the studies on tactile imagery (imagination of touch) are scarce. The current study investigated the effect of tactile and auditory imagery on the left/right discriminations of tactile and auditory stimuli. In line with our hypothesis, we observed that after tactile imagery, tactile stimuli were responded to faster as compared to auditory stimuli and vice versa. On average, tactile stimuli were responded to faster as compared to auditory stimuli, and stimuli in the imagery condition were on average responded to slower as compared to baseline performance (left/right discrimination without imagery assignment). The former is probably due to the spatial and somatotopic proximity of the fingers receiving the taps and the thumbs performing the response (button press), the latter to a dual task cost. Together, these results provide the first evidence of a behavioural effect of a tactile imagery assignment on the perception of real tactile stimuli.
Accuracy and Precision of Visual Stimulus Timing in PsychoPy: No Timing Errors in Standard Usage
Garaizar, Pablo; Vadillo, Miguel A.
2014-01-01
In a recent report published in PLoS ONE, we found that the performance of PsychoPy degraded with very short timing intervals, suggesting that it might not be perfectly suitable for experiments requiring the presentation of very brief stimuli. The present study aims to provide an updated performance assessment for the most recent version of PsychoPy (v1.80) under different hardware/software conditions. Overall, the results show that PsychoPy can achieve high levels of precision and accuracy in the presentation of brief visual stimuli. Although occasional timing errors were found in very demanding benchmarking tests, there is no reason to think that they can pose any problem for standard experiments developed by researchers. PMID:25365382
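The benchmarking logic summarized above reduces to comparing achieved inter-frame intervals against the nominal refresh period. A minimal pure-Python sketch of that check (illustrative only: it operates on a simulated list of logged frame timestamps rather than calling PsychoPy's display API, and the function name is hypothetical):

```python
def timing_errors(frame_times, refresh_hz=60.0, tolerance_ms=1.0):
    """Flag inter-frame intervals that deviate from the nominal period.

    frame_times: increasing timestamps in seconds, one per screen flip.
    Returns (frame_index, deviation_ms) pairs where the interval differs
    from the nominal refresh period by more than tolerance_ms.
    """
    period_ms = 1000.0 / refresh_hz
    errors = []
    for i in range(1, len(frame_times)):
        deviation = (frame_times[i] - frame_times[i - 1]) * 1000.0 - period_ms
        if abs(deviation) > tolerance_ms:
            errors.append((i, deviation))
    return errors

# Simulated 60 Hz log with one dropped frame (a doubled interval at the end).
log = [0.0, 1 / 60, 2 / 60, 3 / 60, 5 / 60]
print(timing_errors(log))  # one entry at frame 4, roughly +16.7 ms
```

A real assessment would use hardware photodiode measurements, as in the study; timestamp screening of this kind only catches errors the software itself can see.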
NASA Astrophysics Data System (ADS)
Nasaruddin, N. H.; Yusoff, A. N.; Kaur, S.
2014-11-01
The objective of this multiple-subject functional magnetic resonance imaging (fMRI) study was to identify the common brain areas activated when viewing black-and-white checkerboard stimuli of various shapes, patterns and sizes, and to investigate the specific brain areas involved in processing static and moving visual stimuli. Sixteen participants viewed moving (expanding ring, rotating wedge, flipping hourglass and bowtie, and arc quadrant) and static (full checkerboard) stimuli during an fMRI scan. All stimuli had a black-and-white checkerboard pattern. Statistical parametric mapping (SPM) was used to generate brain activation maps. Differential analyses were implemented to separately search for areas involved in processing static and moving stimuli. In general, the stimuli of various shapes, patterns and sizes activated multiple brain areas, mostly in the left hemisphere. Activation in the right middle temporal gyrus (MTG) was significantly higher when processing moving visual stimuli than the static stimulus. In contrast, activation in the left calcarine sulcus and left lingual gyrus was significantly higher for the static stimulus than for moving stimuli. Visual stimulation of various shapes, patterns and sizes thus indicated left lateralization of activation. The involvement of the right MTG in processing moving visual information was evident from the differential analysis, while the left calcarine sulcus and left lingual gyrus were the areas involved in processing the static visual stimulus.
The grouping benefit in extinction: overcoming the temporal order bias.
Rappaport, Sarah J; Riddoch, M Jane; Humphreys, Glyn W
2011-01-01
Grouping between contra- and ipsilesional stimuli can alleviate the lateralised bias in spatial extinction (Gilchrist, Humphreys, & Riddoch, 1996; Ward, Goodrich, & Driver, 1994). In the current study we demonstrate for the first time that perceptual grouping can also modulate the spatio/temporal biases in temporal order judgements affecting the temporal as well as the spatial coding of stimuli. Perceived temporal order was assessed by presenting two coloured letter stimuli in either hemi-field temporally segregated by a range of onset-intervals. Items were either identical (grouping condition) or differed in both shape and colour (non-grouping condition). Observers were required to indicate which item appeared second. Patients with visual extinction had a bias against the contralesional item appearing first, but this was modulated by perceptual grouping. When both items were identical in shape and colour the temporal bias against reporting the contralesional item was reduced. The results suggest that grouping can alter the coding of temporal relations between stimuli. Copyright © 2010 Elsevier Ltd. All rights reserved.
Disappearance of the inversion effect during memory-guided tracking of scrambled biological motion.
Jiang, Changhao; Yue, Guang H; Chen, Tingting; Ding, Jinhong
2016-08-01
The human visual system is highly sensitive to biological motion. Even when a point-light walker is temporarily occluded from view by other objects, our eyes are still able to maintain tracking continuity. To investigate how the visual system establishes a correspondence between the biological-motion stimuli visible before and after the disruption, we used the occlusion paradigm with biological-motion stimuli that were intact or scrambled. The results showed that during visually guided tracking, both the observers' predicted times and predictive smooth pursuit were more accurate for upright biological motion (intact and scrambled) than for inverted biological motion. During memory-guided tracking, however, the processing advantage for upright as compared with inverted biological motion was not found in the scrambled condition, but in the intact condition only. This suggests that spatial location information alone is not sufficient to build and maintain the representational continuity of the biological motion across the occlusion, and that the object identity may act as an important information source in visual tracking. The inversion effect disappeared when the scrambled biological motion was occluded, which indicates that when biological motion is temporarily occluded and there is a complete absence of visual feedback signals, an oculomotor prediction is executed to maintain the tracking continuity, which is established not only by updating the target's spatial location, but also by the retrieval of identity information stored in long-term memory.
Mangun, G R; Buck, L A
1998-03-01
This study investigated the simple reaction time (RT) and event-related potential (ERP) correlates of biasing attention towards a location in the visual field. RTs and ERPs were recorded to stimuli flashed randomly and with equal probability to the left and right visual hemifields in the three blocked, covert attention conditions: (i) attention divided equally to left and right hemifield locations; (ii) attention biased towards the left location; or (iii) attention biased towards the right location. Attention was biased towards left or right by instructions to the subjects, and responses were required to all stimuli. Relative to the divided attention condition, RTs were significantly faster for targets occurring where more attention was allocated (benefits), and slower to targets where less attention was allocated (costs). The early P1 (100-140 msec) component over the lateral occipital scalp regions showed attentional benefits. There were no amplitude modulations of the occipital N1 (125-180 msec) component with attention. Between 200 and 500 msec latency, a late positive deflection (LPD) showed both attentional costs and benefits. The behavioral findings show that when sufficiently induced to bias attention, human observers demonstrate RT benefits as well as costs. The corresponding P1 benefits suggest that the RT benefits of spatial attention may arise as the result of modulations of visual information processing in the extrastriate visual cortex.
Modulation of early cortical processing during divided attention to non-contiguous locations
Frey, Hans-Peter; Schmid, Anita M.; Murphy, Jeremy W.; Molholm, Sophie; Lalor, Edmund C.; Foxe, John J.
2015-01-01
We often face the challenge of simultaneously attending to multiple non-contiguous regions of space. There is ongoing debate as to how spatial attention is divided under these situations. While for several years the predominant view was that humans could divide the attentional spotlight, several recent studies argue in favor of a unitary spotlight that rhythmically samples relevant locations. Here, this issue was addressed using high-density electrophysiology in concert with the multifocal m-sequence technique to examine visual evoked responses to multiple simultaneous streams of stimulation. Concurrently, we assayed the topographic distribution of alpha-band oscillatory mechanisms, a measure of attentional suppression. Participants performed a difficult detection task that required simultaneous attention to two stimuli in contiguous (undivided) or non-contiguous parts of space. In the undivided condition, the classical pattern of attentional modulation was observed, with increased amplitude of the early visual evoked response and increased alpha amplitude ipsilateral to the attended hemifield. For the divided condition, early visual responses to attended stimuli were also enhanced and the observed multifocal topographic distribution of alpha suppression was in line with the divided attention hypothesis. These results support the existence of divided attentional spotlights, providing evidence that the corresponding modulation occurs during initial sensory processing timeframes in hierarchically early visual regions and that suppressive mechanisms of visual attention selectively target distracter locations during divided spatial attention. PMID:24606564
Ito, Rutsuko; Everitt, Barry J; Robbins, Trevor W
2005-01-01
The hippocampus (HPC) is known to be critically involved in the formation of associations between contextual/spatial stimuli and behaviorally significant events, playing a pivotal role in learning and memory. However, increasing evidence indicates that the HPC is also essential for more basic motivational processes. The amygdala, by contrast, is important for learning about the motivational significance of discrete cues. This study investigated the effects of excitotoxic lesions of the rat HPC and the basolateral amygdala (BLA) on the acquisition of a number of appetitive behaviors known to be dependent on the formation of Pavlovian associations between a reward (food) and discrete stimuli or contexts: (1) conditioned/anticipatory locomotor activity to food delivered in a specific context and (2) autoshaping, where rats learn to show conditioned discriminated approach to a discrete visual CS+. While BLA lesions had minimal effects on conditioned locomotor activity, hippocampal lesions facilitated the development of both conditioned activity to food and autoshaping behavior, suggesting that hippocampal lesions may have increased the incentive motivational properties of food and associated conditioned stimuli, consistent with the hypothesis that the HPC is involved in inhibitory processes in appetitive conditioning. (c) 2005 Wiley-Liss, Inc.
Visual discrimination transfer and modulation by biogenic amines in honeybees.
Vieira, Amanda Rodrigues; Salles, Nayara; Borges, Marco; Mota, Theo
2018-05-10
For more than a century, visual learning and memory have been studied in the honeybee Apis mellifera using operant appetitive conditioning. Although honeybees show impressive visual learning capacities in this well-established protocol, operant training of free-flying animals cannot be combined with invasive protocols for studying the neurobiological basis of visual learning. In view of this, different attempts have been made to develop new classical conditioning protocols for studying visual learning in harnessed honeybees, though learning performance remains considerably poorer than that for free-flying animals. Here, we investigated the ability of honeybees to use visual information acquired during classical conditioning in a new operant context. We performed differential visual conditioning of the proboscis extension reflex (PER) followed by visual orientation tests in a Y-maze. Classical conditioning and Y-maze retention tests were performed using the same pair of perceptually isoluminant chromatic stimuli, to avoid the influence of phototaxis during free-flying orientation. Visual discrimination transfer was clearly observed, with pre-trained honeybees significantly orienting their flights towards the former positive conditioned stimulus (CS+), thus showing that visual memories acquired by honeybees are resistant to context changes between conditioning and the retention test. We combined this visual discrimination approach with selective pharmacological injections to evaluate the effect of dopamine and octopamine in appetitive visual learning. Both octopaminergic and dopaminergic antagonists impaired visual discrimination performance, suggesting that both these biogenic amines modulate appetitive visual learning in honeybees. Our study brings new insight into cognitive and neurobiological mechanisms underlying visual learning in honeybees. © 2018. Published by The Company of Biologists Ltd.
Muiños, Mónica; Ballesteros, Soledad
2014-11-01
The present study investigated peripheral vision (PV) and perceptual asymmetries in young and older martial arts athletes (judo and karate athletes) and compared their performance with that of young and older nonathletes. Stimuli were dots presented at three different eccentricities along the horizontal, oblique, and vertical diameters and three interstimulus intervals. Experiment 1 showed that although the two athlete groups were faster in almost all conditions, karate athletes performed significantly better than nonathlete participants when stimuli were presented in the peripheral visual field. Experiment 2 showed that older participants who had practiced a martial art at a competitive level when they were young were significantly faster than sedentary older adults of the same age. The practiced sport (judo or karate) did not affect performance differentially, suggesting that it is the practice of martial arts that is the crucial factor, rather than the type of martial art. Importantly, older athletes lose their PV advantage, as compared with young athletes. Finally, we found that physical activity (young and older athletes) and age (young and older adults) did not alter the visual asymmetries that vary as a function of spatial location; all participants were faster for stimuli presented along the horizontal than for those presented along the vertical meridian and for those presented at the lower rather than at the upper locations within the vertical meridian. These results indicate that the practice of these martial arts is an effective way of counteracting the processing speed decline of visual stimuli appearing at any visual location and speed.
Morey, R A; Dunsmoor, J E; Haswell, C C; Brown, V M; Vora, A; Weiner, J; Stjepanovic, D; Wagner, H R; Brancu, Mira; Marx, Christine E; Naylor, Jennifer C; Van Voorhees, Elizabeth; Taber, Katherine H; Beckham, Jean C; Calhoun, Patrick S; Fairbank, John A; Szabo, Steven T; LaBar, K S
2015-01-01
Fear conditioning is an established model for investigating posttraumatic stress disorder (PTSD). However, symptom triggers may vaguely resemble the initial traumatic event, differing on a variety of sensory and affective dimensions. We extended the fear-conditioning model to assess generalization of conditioned fear on fear processing neurocircuitry in PTSD. Military veterans (n=67) consisting of PTSD (n=32) and trauma-exposed comparison (n=35) groups underwent functional magnetic resonance imaging during fear conditioning to a low fear-expressing face while a neutral face was explicitly unreinforced. Stimuli that varied along a neutral-to-fearful continuum were presented before conditioning to assess baseline responses, and after conditioning to assess experience-dependent changes in neural activity. Compared with trauma-exposed controls, PTSD patients exhibited greater post-study memory distortion of the fear-conditioned stimulus toward the stimulus expressing the highest fear intensity. PTSD patients exhibited biased neural activation toward high-intensity stimuli in fusiform gyrus (P<0.02), insula (P<0.001), primary visual cortex (P<0.05), locus coeruleus (P<0.04), thalamus (P<0.01), and at the trend level in inferior frontal gyrus (P=0.07). All regions except fusiform were moderated by childhood trauma. Amygdala–calcarine (P=0.01) and amygdala–thalamus (P=0.06) functional connectivity selectively increased in PTSD patients for high-intensity stimuli after conditioning. In contrast, amygdala–ventromedial prefrontal cortex (P=0.04) connectivity selectively increased in trauma-exposed controls compared with PTSD patients for low-intensity stimuli after conditioning, representing safety learning. In summary, fear generalization in PTSD is biased toward stimuli with higher emotional intensity than the original conditioned-fear stimulus. 
Functional brain differences provide a putative neurobiological model for fear generalization whereby PTSD symptoms are triggered by threat cues that merely resemble the index trauma. PMID:26670285
Affective ERP Processing in a Visual Oddball Task: Arousal, Valence, and Gender
Rozenkrants, Bella; Polich, John
2008-01-01
Objective: To assess affective event-related brain potentials (ERPs) using visual pictures that were highly distinct on arousal level/valence category ratings and a response task. Methods: Images from the International Affective Pictures System (IAPS) were selected to obtain distinct affective arousal (low, high) and valence (negative, positive) rating levels. The pictures were used as target stimuli in an oddball paradigm, with a visual pattern as the standard stimulus. Participants were instructed to press a button whenever a picture occurred and to ignore the standard. Task performance and response time did not differ across conditions. Results: High-arousal compared to low-arousal stimuli produced larger amplitudes for the N2, P3, early slow wave, and late slow wave components. Valence amplitude effects were weak overall and originated primarily from the later waveform components and interactions with electrode position. Gender differences were negligible. Conclusion: The findings suggest that arousal level is the primary determinant of affective oddball processing, and valence minimally influences ERP amplitude. Significance: Affective processing engages selective attentional mechanisms that are primarily sensitive to the arousal properties of emotional stimuli. The application and nature of task demands are important considerations for interpreting these effects. PMID:18783987
Park, Jason C.; McAnany, J. Jason
2015-01-01
This study determined if the pupillary light reflex (PLR) driven by brief stimulus presentations can be accounted for by the product of stimulus luminance and area (i.e., corneal flux density, CFD) under conditions biased toward the rod, cone, and melanopsin pathways. Five visually normal subjects participated in the study. Stimuli consisted of 1-s short- and long-wavelength flashes that spanned a large range of luminance and angular subtense. The stimuli were presented in the central visual field in the dark (rod and melanopsin conditions) and against a rod-suppressing short-wavelength background (cone condition). Rod- and cone-mediated PLRs were measured at the maximum constriction after stimulus onset whereas the melanopsin-mediated PLR was measured 5–7 s after stimulus offset. The rod- and melanopsin-mediated PLRs were well accounted for by CFD, such that doubling the stimulus luminance had the same effect on the PLR as doubling the stimulus area. Melanopsin-mediated PLRs were elicited only by short-wavelength, large (>16°) stimuli with luminance greater than 10 cd/m2, but when present, the melanopsin-mediated PLR was well accounted for by CFD. In contrast, CFD could not account for the cone-mediated PLR because the PLR was approximately independent of stimulus size but strongly dependent on stimulus luminance. These findings highlight important differences in how stimulus luminance and size combine to govern the PLR elicited by brief flashes under rod-, cone-, and melanopsin-mediated conditions. PMID:25788707
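The corneal flux density rule described above is simply the product of stimulus luminance and area, so trading luminance for area should leave the PLR unchanged under rod- and melanopsin-mediated conditions. A hypothetical helper illustrating that equivalence (the circular-stimulus geometry and units are simplifications for illustration, not the study's code):

```python
import math

def corneal_flux_density(luminance_cd_m2, diameter_deg):
    """CFD = luminance x area, for a circular stimulus of given diameter.

    Illustrative units: cd/m^2 and degrees of visual angle (area in deg^2).
    """
    area_deg2 = math.pi * (diameter_deg / 2.0) ** 2
    return luminance_cd_m2 * area_deg2

# Doubling luminance at a fixed 16-deg size...
a = corneal_flux_density(20.0, 16.0)
# ...gives the same CFD as doubling the area instead (diameter x sqrt(2)).
b = corneal_flux_density(10.0, 16.0 * math.sqrt(2.0))
print(abs(a - b) < 1e-6)  # True: equal CFD, hence an equal predicted PLR
```

The study's key contrast is that this equivalence held for rod- and melanopsin-mediated responses but not for cone-mediated ones, where luminance dominated regardless of area.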
The retention and disruption of color information in human short-term visual memory.
Nemes, Vanda A; Parry, Neil R A; Whitaker, David; McKeefry, Declan J
2012-01-27
Previous studies have demonstrated that the retention of information in short-term visual perceptual memory can be disrupted by the presentation of masking stimuli during interstimulus intervals (ISIs) in delayed discrimination tasks (S. Magnussen & W. W. Greenlee, 1999). We have exploited this effect in order to determine to what extent short-term perceptual memory is selective for stimulus color. We employed a delayed hue discrimination paradigm to measure the fidelity with which color information was retained in short-term memory. The task required 5 color normal observers to discriminate between spatially non-overlapping colored reference and test stimuli that were temporally separated by an ISI of 5 s. The points of subjective equality (PSEs) on the resultant psychometric matching functions provided an index of performance. Measurements were made in the presence and absence of mask stimuli presented during the ISI, which varied in hue around the equiluminant plane in DKL color space. For all reference stimuli, we found a consistent mask-induced, hue-dependent shift in PSE compared to the "no mask" conditions. These shifts were found to be tuned in color space, only occurring for a range of mask hues that fell within bandwidths of 29-37 deg. Outside this range, masking stimuli had little or no effect on measured PSEs. The results demonstrate that memory masking for color exhibits selectivity similar to that which has already been demonstrated for other visual attributes. The relatively narrow tuning of these interference effects suggests that short-term perceptual memory for color is based on higher order, non-linear color coding. © ARVO
Gherri, Elena; Eimer, Martin
2011-04-01
The ability to drive safely is disrupted by cell phone conversations, and this has been attributed to a diversion of attention from the visual environment. We employed behavioral and ERP measures to study whether the attentive processing of spoken messages is, in itself, sufficient to produce visual-attentional deficits. Participants searched for visual targets defined by a unique feature (Experiment 1) or feature conjunction (Experiment 2), and simultaneously listened to narrated text passages that had to be recalled later (encoding condition), or heard backward-played speech sounds that could be ignored (control condition). Responses to targets were slower in the encoding condition, and ERPs revealed that the visual processing of search arrays and the attentional selection of target stimuli were less efficient in the encoding relative to the control condition. Results demonstrate that the attentional processing of visual information is impaired when concurrent spoken messages are encoded and maintained, in line with cross-modal links in selective attention, but inconsistent with the view that attentional resources are modality-specific. The distraction of visual attention by active listening could contribute to the adverse effects of cell phone use on driving performance.
Wijesekara Witharanage, Randika; Rosa, Marcello G. P.
2012-01-01
Background: Recent studies on colour discrimination suggest that experience is an important factor in how a visual system processes spectral signals. In insects it has been shown that differential conditioning is important for processing fine colour discriminations. However, the visual system of many insects, including the honeybee, has a complex set of neural pathways, in which input from the long wavelength sensitive (‘green’) photoreceptor may be processed either as an independent achromatic signal or as part of a trichromatic opponent-colour system. Thus, a potential confound of colour learning in insects is the possibility that modulation of the ‘green’ photoreceptor could underlie observations. Methodology/Principal Findings: We tested honeybee vision using light emitting diodes centered on 414 and 424 nm wavelengths, which limit activation to the short-wavelength-sensitive (‘UV’) and medium-wavelength-sensitive (‘blue’) photoreceptors. The absolute irradiance spectra of the stimuli were measured and modelled at both receptor and colour processing levels, and stimuli were then presented to the bees in a Y-maze at a large visual angle (26°), to ensure chromatic processing. Sixteen bees were trained over 50 trials, using either appetitive differential conditioning (N = 8), or aversive-appetitive differential conditioning (N = 8). In both cases the bees slowly learned to discriminate between the target and distractor with significantly better accuracy than would be expected by chance. Control experiments confirmed that changing stimulus intensity in transfer tests does not significantly affect bee performance, and it was possible to replicate previous findings that bees do not learn similar colour stimuli with absolute conditioning. Conclusion: Our data indicate that honeybee colour vision can be tuned to relatively small spectral differences, independent of ‘green’ photoreceptor contrast and brightness cues.
We thus show that colour vision is at least partly experience dependent, and behavioural plasticity plays an important role in how bees exploit colour information. PMID:23155394
Crossmodal processing of emotions in alcohol-dependence and Korsakoff syndrome.
Brion, Mélanie; D'Hondt, Fabien; Lannoy, Séverine; Pitel, Anne-Lise; Davidoff, Donald A; Maurage, Pierre
2017-09-01
Decoding emotional information from faces and voices is crucial for efficient interpersonal communication. Emotional decoding deficits have been found in alcohol-dependence (ALC), particularly in crossmodal situations (with simultaneous stimulations from different modalities), but are still underexplored in Korsakoff syndrome (KS). The aim of this study is to determine whether the continuity hypothesis, postulating a gradual worsening of cognitive and brain impairments from ALC to KS, is valid for emotional crossmodal processing. Sixteen KS, 17 ALC and 19 matched healthy controls (CP) had to detect the emotion (anger or happiness) displayed by auditory, visual or crossmodal auditory-visual stimuli. Crossmodal stimuli were either emotionally congruent (leading to a facilitation effect, i.e. enhanced performance for crossmodal condition compared to unimodal ones) or incongruent (leading to an interference effect, i.e. decreased performance for crossmodal condition due to discordant information across modalities). Reaction times and accuracy were recorded. Crossmodal integration for congruent information was dampened only in ALC, while both ALC and KS demonstrated, compared to CP, decreased performance for decoding emotional facial expressions in the incongruent condition. The crossmodal integration appears impaired in ALC but preserved in KS. Both alcohol-related disorders present an increased interference effect. These results show the interest of more ecological designs, using crossmodal stimuli, to explore emotional decoding in alcohol-related disorders. They also suggest that the continuum hypothesis cannot be generalised to emotional decoding abilities.
Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity
Gibney, Kyla D.; Aligbe, Enimielen; Eggleston, Brady A.; Nunes, Sarah R.; Kerkhoff, Willa G.; Dean, Cassandra L.; Kwakye, Leslie D.
2017-01-01
The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective of this study was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and McGurk task under various perceptual load conditions: no load (multisensory task while visual distractors present), low load (multisensory task while detecting the presence of a yellow letter in the visual distractors), and high load (multisensory task while detecting the presence of a number in the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, thus confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than unisensory trials. However, the increase in multisensory response times violated the race model for no and low perceptual load conditions only. Additionally, a geometric measure of Miller’s inequality showed a decrease in multisensory integration for the speeded detection task with increasing perceptual load. Surprisingly, we found diverging changes in multisensory integration with increasing load for participants who did not show integration for the no load condition: no changes in integration for the McGurk task with increasing load but increases in integration for the detection task. 
The results of this study indicate that attention plays a crucial role in multisensory integration for both highly complex and simple multisensory tasks and that attention may interact differently with multisensory processing in individuals who do not strongly integrate multisensory information. PMID:28163675
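The race model test used in the detection task above compares the multisensory RT distribution against Miller's bound, F_AV(t) ≤ F_A(t) + F_V(t); a positive exceedance is taken as evidence of genuine integration rather than statistical facilitation. A toy sketch with made-up RTs (not the study's data or analysis pipeline, which bins quantiles and aggregates across participants):

```python
def ecdf(rts, t):
    """Empirical probability of a response having occurred by time t."""
    return sum(rt <= t for rt in rts) / len(rts)

def race_model_violation(av, a, v, times):
    """Maximum exceedance of the multisensory CDF over Miller's bound.

    Miller's inequality: F_AV(t) <= F_A(t) + F_V(t). A positive value
    means the audiovisual RTs are faster than any parallel race between
    the unisensory channels allows, i.e., evidence of integration.
    """
    return max(ecdf(av, t) - min(1.0, ecdf(a, t) + ecdf(v, t)) for t in times)

# Toy RTs (ms): audiovisual responses faster than either unisensory channel.
av_rts = [180, 190, 200, 210]
a_rts = [240, 260, 280, 300]
v_rts = [250, 270, 290, 310]
print(race_model_violation(av_rts, a_rts, v_rts, range(150, 351, 10)) > 0)  # True
```

The study's load manipulation corresponds to comparing this exceedance measure across no-, low-, and high-load conditions, where it shrank as load increased.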
Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity.
Gibney, Kyla D; Aligbe, Enimielen; Eggleston, Brady A; Nunes, Sarah R; Kerkhoff, Willa G; Dean, Cassandra L; Kwakye, Leslie D
2017-01-01
The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective of this study was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and McGurk task under various perceptual load conditions: no load (multisensory task while visual distractors were present), low load (multisensory task while detecting the presence of a yellow letter in the visual distractors), and high load (multisensory task while detecting the presence of a number in the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, thus confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than unisensory trials. However, the increase in multisensory response times violated the race model for the no and low perceptual load conditions only. Additionally, a geometric measure of Miller's inequality showed a decrease in multisensory integration for the speeded detection task with increasing perceptual load. Surprisingly, we found diverging changes in multisensory integration with increasing load for participants who did not show integration in the no load condition: no changes in integration for the McGurk task with increasing load, but increases in integration for the detection task. The results of this study indicate that attention plays a crucial role in multisensory integration for both highly complex and simple multisensory tasks and that attention may interact differently with multisensory processing in individuals who do not strongly integrate multisensory information.
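The race-model analysis described above can be sketched numerically. Miller's inequality bounds the audiovisual RT distribution by the sum of the two unisensory RT distributions, and a geometric measure of violation is the area by which the empirical audiovisual CDF exceeds that bound. A minimal illustration in Python (function names are ours, not taken from the study):

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative distribution of reaction times on a time grid."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t_grid, side="right") / len(rts)

def race_model_violation(rt_av, rt_a, rt_v, t_grid):
    """Area by which the audiovisual CDF exceeds the race-model bound
    min(1, F_A(t) + F_V(t)); a positive area indicates facilitation
    beyond what parallel unisensory processing can explain."""
    f_av = ecdf(rt_av, t_grid)
    bound = np.minimum(1.0, ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid))
    violation = np.clip(f_av - bound, 0.0, None)
    return np.trapz(violation, t_grid)
```

Under a perceptual-load manipulation, this area would be computed separately per load condition; a shrinking area with increasing load mirrors the decrease in integration reported above.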
Statistical Regularities Attract Attention when Task-Relevant.
Alamia, Andrea; Zénon, Alexandre
2016-01-01
Visual attention seems essential for learning the statistical regularities in our environment, a process known as statistical learning. However, how attention is allocated when exploring a novel visual scene whose statistical structure is unknown remains unclear. In order to address this question, we investigated visual attention allocation during a task in which we manipulated the conditional probability of occurrence of colored stimuli, unbeknown to the subjects. Participants were instructed to detect a target colored dot among two dots moving along separate circular paths. We evaluated implicit statistical learning, i.e., the effect of color predictability on reaction times (RTs), and recorded eye position concurrently. Attention allocation was indexed by comparing the Mahalanobis distance between the position, velocity and acceleration of the eyes and the two colored dots. We found that learning the conditional probabilities occurred very early during the course of the experiment as shown by the fact that, starting already from the first block, predictable stimuli were detected with shorter RT than unpredictable ones. In terms of attentional allocation, we found that the predictive stimulus attracted gaze only when it was informative about the occurrence of the target but not when it predicted the occurrence of a task-irrelevant stimulus. This suggests that attention allocation was influenced by regularities only when they were instrumental in performing the task. Moreover, we found that the attentional bias towards task-relevant predictive stimuli occurred at a very early stage of learning, concomitantly with the first effects of learning on RT. In conclusion, these results show that statistical regularities capture visual attention only after a few occurrences, provided these regularities are instrumental to perform the task.
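The gaze-based attention index used above can be illustrated with a short sketch: for each stimulus, one computes the Mahalanobis distance between the kinematic state of the eyes (position, velocity, acceleration) and that of the dot, the stimulus with the smaller distance being the one gaze is tracking. The covariance choice below (covariance of the eye-stimulus difference) is our assumption, not a detail taken from the study:

```python
import numpy as np

def mahalanobis_distance(eye_states, stim_states):
    """Mean Mahalanobis distance between eye kinematics and a stimulus's
    kinematics; rows are time samples, columns are features such as
    x/y position, velocity, and acceleration."""
    diff = eye_states - stim_states
    cov = np.cov(diff, rowvar=False)   # assumption: covariance of the difference
    inv = np.linalg.pinv(cov)
    sq = np.einsum("ij,jk,ik->i", diff, inv, diff)  # squared distance per sample
    return np.sqrt(np.clip(sq, 0.0, None)).mean()
```

Comparing this distance across the two colored dots then indexes which stimulus attracted gaze.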
Neural correlates of tactile perception during pre-, peri-, and post-movement.
Juravle, Georgiana; Heed, Tobias; Spence, Charles; Röder, Brigitte
2016-05-01
Tactile information is differentially processed over the various phases of goal-directed movements. Here, event-related potentials (ERPs) were used to investigate the neural correlates of tactile and visual information processing during movement. Participants performed goal-directed reaches for an object placed centrally on the table in front of them. Tactile and visual stimulation (100 ms) was presented in separate trials during the different phases of the movement (i.e. preparation, execution, and post-movement). These stimuli were independently delivered to either the moving or resting hand. In a control condition, the participants only performed the movement, while omission (i.e. movement-only) ERPs were recorded. Participants were instructed to ignore the presence or absence of any sensory events and to concentrate solely on the execution of the movement. Enhanced ERPs were observed 80-200 ms after tactile stimulation, as well as 100-250 ms after visual stimulation: These modulations were greatest during the execution of the goal-directed movement, and they were effector based (i.e. significantly more negative for stimuli presented to the moving hand). Furthermore, ERPs revealed enhanced sensory processing during goal-directed movements for visual stimuli as well. Such enhanced processing of both tactile and visual information during the execution phase suggests that incoming sensory information is continuously monitored for a potential adjustment of the current motor plan. Furthermore, the results reported here also highlight a tight coupling between spatial attention and the execution of motor actions.
Leybaert, Jacqueline; Macchi, Lucie; Huyse, Aurélie; Champoux, François; Bayard, Clémence; Colin, Cécile; Berthommier, Frédéric
2014-01-01
Audiovisual speech perception of children with specific language impairment (SLI) and children with typical language development (TLD) was compared in two experiments using /aCa/ syllables presented in the context of a masking release paradigm. Children had to repeat syllables presented in auditory alone, visual alone (speechreading), audiovisual congruent and incongruent (McGurk) conditions. Stimuli were masked by either stationary (ST) or amplitude modulated (AM) noise. Although children with SLI were less accurate in auditory and audiovisual speech perception, they showed an auditory masking release effect similar to that of children with TLD. Children with SLI also had fewer correct responses in speechreading than children with TLD, indicating impairment in phonemic processing of visual speech information. In response to McGurk stimuli, children with TLD showed more fusions in AM noise than in ST noise, a consequence of the auditory masking release effect and of the influence of visual information. Children with SLI did not show this effect systematically, suggesting they were less influenced by visual speech. However, when the visual cues were easily identified, the profile of responses to McGurk stimuli was similar in both groups, suggesting that children with SLI do not suffer from an impairment of audiovisual integration. An analysis of percent of information transmitted revealed a deficit in the children with SLI, particularly for the place of articulation feature. Taken together, the data support the hypothesis of an intact peripheral processing of auditory speech information, coupled with a supramodal deficit of phonemic categorization in children with SLI. Clinical implications are discussed.
Interaction between visual and chemical cues in a Liolaemus lizard: a multimodal approach.
Vicente, Natalin S; Halloy, Monique
2017-12-01
Multimodal communication involves the use of signals and cues across two or more sensory modalities. The genus Liolaemus (Iguania: Liolaemidae) offers a great potential for studies on the ecology and evolution of multimodal communication, including visual and chemical signals. In this study, we analyzed the response of male and female Liolaemus pacha to chemical, visual and combined (multimodal) stimuli. Using cue-isolation tests, we registered the number of tongue flicks and headbob displays from exposure to signals in each modality. Number of tongue flicks was greater when a chemical stimulus was presented alone than in the presence of visual or multimodal stimuli. In contrast, headbob displays were fewer in number with visual and chemical stimuli alone, but significantly higher in number when combined. Female signallers triggered significantly more tongue flicks than male signallers, suggesting that chemical cues are involved in sexual recognition. We did not find an inhibition between chemical and visual cues. On the contrary, we observed a dominance of the chemical modality, because when presented with visual stimuli, lizards also responded with more tongue flicks than headbob displays. The total response produced by multimodal stimuli was similar to that of the chemical stimuli alone, possibly suggesting non-redundancy. We discuss whether the visual component of a multimodal signal could attract attention at a distance, increasing the effectiveness of transmission and reception of the information in chemical cues.
Tanaka, Hideaki
2016-01-01
Cosmetic makeup significantly influences facial perception. Because faces consist of similar physical structures, cosmetic makeup is typically used to highlight individual features, particularly those of the eyes (i.e., eye shadow) and mouth (i.e., lipstick). Though event-related potentials have been utilized to study various aspects of facial processing, the influence of cosmetics on specific ERP components remains unclear. The present study aimed to investigate the relationship between the application of cosmetic makeup and the amplitudes of the P1 and N170 event-related potential components during facial perception tasks. Moreover, the influence of visual perception on N170 amplitude was evaluated under three makeup conditions: Eye Shadow, Lipstick, and No Makeup. Electroencephalography was used to monitor 17 participants who were exposed to visual stimuli under each of these three makeup conditions. The results of the present study subsequently demonstrated that the Lipstick condition elicited a significantly greater N170 amplitude than the No Makeup condition, while P1 amplitude was unaffected by any of the conditions. Such findings indicate that the application of cosmetic makeup alters general facial perception but exerts no influence on the perception of low-level visual features. Collectively, these results support the notion that the application of makeup induces subtle alterations in the processing of facial stimuli, with a particular effect on the processing of specific facial components (i.e., the mouth), as reflected by changes in N170 amplitude.
Duration estimates within a modality are integrated sub-optimally
Cai, Ming Bo; Eagleman, David M.
2015-01-01
Perceived duration can be influenced by various properties of sensory stimuli. For example, visual stimuli of higher temporal frequency are perceived to last longer than those of lower temporal frequency. How does the brain form a representation of duration when each of two simultaneously presented stimuli influences perceived duration in a different way? To answer this question, we investigated the perceived duration of a pair of dynamic visual stimuli of different temporal frequencies in comparison to that of a single visual stimulus of either low or high temporal frequency. We found that the duration representation of simultaneously occurring visual stimuli is best described by weighting the estimates of duration based on each individual stimulus. However, the weighting performance deviates from the prediction of statistically optimal integration. In addition, we provided a Bayesian account to explain a difference in the apparent sensitivity of the psychometric curves introduced by the order in which the two stimuli are displayed in a two-alternative forced-choice task.
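The "statistically optimal integration" benchmark against which the weighting performance is compared is the standard maximum-likelihood cue-combination rule: each estimate is weighted by its inverse variance, and the combined variance is the reciprocal of the summed precisions. A minimal sketch (illustrative only, not the authors' code):

```python
def optimal_integration(means, sds):
    """Maximum-likelihood (inverse-variance weighted) cue combination:
    returns the combined estimate and its variance."""
    precisions = [1.0 / s ** 2 for s in sds]
    total = sum(precisions)
    weights = [p / total for p in precisions]
    mean = sum(w * m for w, m in zip(weights, means))
    return mean, 1.0 / total  # combined variance is lower than either cue's
```

A sub-optimal observer, as reported here, uses weights that deviate from these inverse-variance weights.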
Qian, Ning; Dayan, Peter
2013-01-01
A wealth of studies has found that adapting to second-order visual stimuli has little effect on the perception of first-order stimuli. This is physiologically and psychologically troubling, since many cells show similar tuning to both classes of stimuli, and since adapting to first-order stimuli leads to aftereffects that do generalize to second-order stimuli. Focusing on high-level visual stimuli, we recently proposed the novel explanation that the lack of transfer arises partially from the characteristically different backgrounds of the two stimulus classes. Here, we consider the effect of stimulus backgrounds in the far more prevalent, lower-level, case of the orientation tilt aftereffect. Using a variety of first- and second-order oriented stimuli, we show that we could increase or decrease both within- and cross-class adaptation aftereffects by increasing or decreasing the similarity of the otherwise apparently uninteresting or irrelevant backgrounds of adapting and test patterns. Our results suggest that similarity between background statistics of the adapting and test stimuli contributes to low-level visual adaptation, and that these backgrounds are thus not discarded by visual processing but provide contextual modulation of adaptation. Null cross-adaptation aftereffects must also be interpreted cautiously. These findings reduce the apparent inconsistency between psychophysical and neurophysiological data about first- and second-order stimuli.
Marini, Francesco; Marzi, Carlo A.
2016-01-01
The visual system leverages organizational regularities of perceptual elements to create meaningful representations of the world. One clear example of such function, which has been formalized in the Gestalt psychology principles, is the perceptual grouping of simple visual elements (e.g., lines and arcs) into unitary objects (e.g., forms and shapes). The present study sought to characterize automatic attentional capture and related cognitive processing of Gestalt-like visual stimuli at the psychophysiological level by using event-related potentials (ERPs). We measured ERPs during a simple visual reaction time task with bilateral presentations of physically matched elements with or without a Gestalt organization. Results showed that Gestalt (vs. non-Gestalt) stimuli are characterized by a larger N2pc together with enhanced ERP amplitudes of non-lateralized components (N1, N2, P3) starting around 150 ms post-stimulus onset. Thus, we conclude that Gestalt stimuli capture attention automatically and entail characteristic psychophysiological signatures at both early and late processing stages.
Integrative cortical dysfunction and pervasive motion perception deficit in fragile X syndrome.
Kogan, C S; Bertone, A; Cornish, K; Boutet, I; Der Kaloustian, V M; Andermann, E; Faubert, J; Chaudhuri, A
2004-11-09
Fragile X syndrome (FXS) is associated with neurologic deficits recently attributed to the magnocellular pathway of the lateral geniculate nucleus. We tested the hypotheses that FXS individuals 1) have a pervasive visual motion perception impairment affecting neocortical circuits in the parietal lobe and 2) have deficits in integrative neocortical mechanisms necessary for perception of complex stimuli. Psychophysical tests of visual motion and form perception defined by either first-order (luminance) or second-order (texture) attributes were used to probe early and later occipito-temporal and occipito-parietal functioning. When compared to developmental- and age-matched controls, FXS individuals displayed severe impairments in first- and second-order motion perception. This deficit was accompanied by near normal perception for first-order form stimuli but not second-order form stimuli. Impaired visual motion processing for first- and second-order stimuli suggests that both early- and later-level neurologic function of the parietal lobe are affected in FXS. Furthermore, this deficit likely stems from abnormal input from the magnocellular compartment of the lateral geniculate nucleus. Impaired visual form and motion processing for complex visual stimuli, with normal processing for simple (i.e., first-order) form stimuli, suggests that FXS individuals have normal early form processing accompanied by a generalized impairment in neurologic mechanisms necessary for integrating all early visual input.
Barban, Francesco; Zannino, Gian Daniele; Macaluso, Emiliano; Caltagirone, Carlo; Carlesimo, Giovanni A
2013-06-01
Iconic memory is a high-capacity low-duration visual memory store that allows the persistence of a visual stimulus after its offset. The categorical nature of this store has been extensively debated. This study provides functional magnetic resonance imaging evidence for brain regions underlying the persistence of postcategorical representations of visual stimuli. In a partial report paradigm, subjects matched a cued row of a 3 × 3 array of letters (postcategorical stimuli) or false fonts (precategorical stimuli) with a subsequent triplet of stimuli. The cued row was indicated by two visual flankers presented at the onset (physical stimulus readout) or after the offset of the array (iconic memory readout). The left planum temporale showed a greater modulation of the source of readout (iconic memory vs. physical stimulus) when letters were presented compared to false fonts. This is a multimodal brain region responsible for matching incoming acoustic and visual patterns with acoustic pattern templates. These findings suggest that letters persist after their physical offset in an abstract postcategorical representation. A targeted region of interest analysis revealed a similar pattern of activation in the Visual Word Form Area. These results suggest that multiple higher-order visual areas mediate iconic memory for postcategorical stimuli.
Fujisawa, Junya; Touyama, Hideaki; Hirose, Michitaka
2008-01-01
In this paper, we focus on alpha band modulation during visual spatial attention in the absence of visual stimuli. Visual spatial attention has been expected to provide a new channel for non-invasive independent brain-computer interfaces (BCI), but little work has been done on this interfacing method. The flickering stimuli used in previous work reduce the independence of the interface and are difficult to use in practice. We therefore investigated whether visual spatial attention could be detected without such stimuli. Furthermore, the common spatial patterns (CSP) method was applied for the first time to brain states during visual spatial attention. The performance evaluation was based on three brain states: attention to the left, right, and center directions. Thirty-channel scalp electroencephalographic (EEG) signals over occipital cortex were recorded from five subjects. Without CSP, the analyses yielded an average classification accuracy of 66.44% (range 55.42 to 72.27%) in discriminating the left and right attention classes. With CSP, the average classification accuracy was 75.39% (range 63.75 to 86.13%). These results suggest that CSP is useful in the context of visual spatial attention, and that alpha band modulation during visual spatial attention without flickering stimuli is a possible new channel for independent BCI, alongside motor imagery.
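The CSP method referred to above finds spatial filters that maximize band-power variance for one class while minimizing it for the other, typically by whitening the composite covariance and then diagonalizing one class covariance in the whitened space. A compact sketch of the standard algorithm (our illustration, not the authors' implementation):

```python
import numpy as np

def csp_filters(cov_a, cov_b, n_pairs=2):
    """Common Spatial Patterns from two class-covariance matrices.
    Returns 2*n_pairs spatial filters (rows): the first n_pairs give
    minimal variance for class A (maximal for B), the last n_pairs
    the reverse."""
    d, u = np.linalg.eigh(cov_a + cov_b)
    p = (u / np.sqrt(d)).T                      # whitening of the composite covariance
    lam, b = np.linalg.eigh(p @ cov_a @ p.T)    # eigenvalues ascending, in [0, 1]
    w = b.T @ p
    picks = np.r_[:n_pairs, len(lam) - n_pairs:len(lam)]
    return w[picks]
```

Log-variances of CSP-filtered EEG trials are then typically fed to a linear classifier to discriminate, e.g., left- versus right-attention trials.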
Kraft, Antje; Dyrholm, Mads; Kehrer, Stefanie; Kaufmann, Christian; Bruening, Jovita; Kathmann, Norbert; Bundesen, Claus; Irlbacher, Kerstin; Brandt, Stephan A
2015-01-01
Several studies have demonstrated a bilateral field advantage (BFA) in early visual attentional processing, that is, enhanced visual processing when stimuli are spread across both visual hemifields. The results are reminiscent of a hemispheric resource model of parallel visual attentional processing, suggesting more attentional resources on an early level of visual processing for bilateral displays [e.g. Sereno AB, Kosslyn SM. Discrimination within and between hemifields: a new constraint on theories of attention. Neuropsychologia 1991;29(7):659-75.]. Several studies have shown that the BFA extends beyond early stages of visual attentional processing, demonstrating that visual short term memory (VSTM) capacity is higher when stimuli are distributed bilaterally rather than unilaterally. Here we examine whether hemisphere-specific resources are also evident on later stages of visual attentional processing. Based on the Theory of Visual Attention (TVA) [Bundesen C. A theory of visual attention. Psychol Rev 1990;97(4):523-47.] we used a whole report paradigm that allows investigating visual attention capacity variability in unilateral and bilateral displays during navigated repetitive transcranial magnetic stimulation (rTMS) of the precuneus region. A robust BFA in VSTM storage capacity was apparent after rTMS over the left precuneus and in the control condition without rTMS. In contrast, the BFA diminished with rTMS over the right precuneus. This finding indicates that the right precuneus plays a causal role in VSTM capacity, particularly in bilateral visual displays.
Zold, Camila L.
2015-01-01
The primary visual cortex (V1) is widely regarded as faithfully conveying the physical properties of visual stimuli. Thus, experience-induced changes in V1 are often interpreted as improving visual perception (i.e., perceptual learning). Here we describe how, with experience, cue-evoked oscillations emerge in V1 to convey expected reward time as well as to relate experienced reward rate. We show, in chronic multisite local field potential recordings from rat V1, that repeated presentation of visual cues induces the emergence of visually evoked oscillatory activity. Early in training, the visually evoked oscillations relate to the physical parameters of the stimuli. However, with training, the oscillations evolve to relate the time in which those stimuli foretell expected reward. Moreover, the oscillation prevalence reflects the reward rate recently experienced by the animal. Thus, training induces experience-dependent changes in V1 activity that relate to what those stimuli have come to signify behaviorally: when to expect future reward and at what rate.
Segalowitz, Sidney J; Sternin, Avital; Lewis, Terri L; Dywan, Jane; Maurer, Daphne
2017-04-01
We examined the role of early visual input in visual system development by testing adults who had been born with dense bilateral cataracts that blocked all patterned visual input during infancy until the cataractous lenses were removed surgically and the eyes fitted with compensatory contact lenses. Patients viewed checkerboards and textures to explore early processing regions (V1, V2), Glass patterns to examine global form processing (V4), and moving stimuli to explore global motion processing (V5). Patients' ERPs differed from those of controls in that (1) the V1 component was much smaller for all but the simplest stimuli and (2) extrastriate components did not differentiate amongst texture stimuli, Glass patterns, or motion stimuli. The results indicate that early visual deprivation contributes to permanent abnormalities at early and mid levels of visual processing, consistent with enduring behavioral deficits in the ability to process complex textures, global form, and global motion.
Beck, Joy E; Lipani, Tricia A; Baber, Kari F; Dufton, Lynette; Garber, Judy; Smith, Craig A; Walker, Lynn S
2011-05-01
This study investigated attentional biases for pain and social threat versus neutral stimuli in 54 youth with functional abdominal pain (FAP) and 53 healthy control subjects (ages 10 to 16 years). We assessed attentional bias using a visual probe detection task (PDT) that presented pain and social threat words in comparison to neutral words at conscious (1250 ms) and preconscious (20 ms) presentation rates. We administered the PDT before and after random assignment of participants to a laboratory stressor--failure versus success feedback regarding their performance on a challenging computer game. All analyses controlled for trait anxiety. At the conscious rate of stimulus presentation, FAP patients exhibited preferential attention toward pain compared with neutral stimuli and compared with the control group. FAP patients maintained preferential attention toward conscious pain stimuli after performance feedback in both failure and success conditions. At the preconscious rate of stimulus presentation, FAP patients' attention was neutral at baseline but increased significantly toward pain stimuli after performance feedback in both failure and success conditions. FAP patients' somatic symptoms increased in both failure and success conditions; control youth's somatic symptoms only increased after failure. Regarding social threat, neither FAP nor control youth exhibited attentional bias toward social threat compared with neutral stimuli at baseline, but both FAP and control youth in the failure condition significantly increased attention away from social threat after failure feedback. Results suggest that FAP patients preferentially attend to pain stimuli in conscious awareness. Moreover, performance evaluation may activate their preconscious attention to pain stimuli.
Rotenstreich, Ygal; Chibel, Ron; Haj Yahia, Soad; Achiron, Asaf; Mahajna, Mohamad; Belkin, Michael; Sher, Ifat
2015-03-01
We recently demonstrated the feasibility of quantifying pupil responses (PR) to multifocal chromatic light stimuli for objectively assessing the visual field (VF). Here we assessed a second-generation chromatic multifocal pupillometer device with 76 LEDs covering an 18-degree visual field and a smaller spot size (2 mm diameter), aimed at achieving better perimetric resolution. A computerized infrared pupillometer was used to record PR to short- and long-wavelength stimuli (peak 485 nm and 640 nm, respectively) presented by 76 LEDs, 1.8 mm spot size, at light intensities of 10-1000 cd/m2 at different points of the 18-degree VF. PR amplitude was measured in 11 retinitis pigmentosa (RP) patients and 20 normal age-matched controls. RP patients demonstrated statistically significantly reduced pupil contraction amplitude in the majority of perimetric locations under testing conditions that emphasized rod contribution (short-wavelength stimuli at 200 cd/m2) in peripheral locations (p<0.05). By contrast, the amplitudes of pupillary responses under testing conditions that emphasized cone cell contribution (long-wavelength stimuli at 1000 cd/m2) were not significantly different between the groups in the majority of perimetric locations, particularly in central locations. Minimal pupil contraction was recorded in areas that were not detected by chromatic Goldmann perimetry. This study demonstrates the feasibility of using pupillometer-based chromatic perimetry for objectively assessing VF defects and retinal function in patients with retinal degeneration. This method may be used to distinguish between the damaged cells underlying the VF defect.
Hommuk, Karita; Bachmann, Talis
2009-01-01
The problem of feature binding has been examined under conditions of distributed attention or with spatially dispersed stimuli. We studied binding by asking whether selective attention to a feature of a masked object enables perceptual access to the other features of that object using conditions in which spatial attention was directed at a single…
[Sound improves distinction of low intensities of light in the visual cortex of a rabbit].
Polianskiĭ, V B; Alymkulov, D E; Evtikhin, D V; Chernyshev, B V
2011-01-01
Electrodes were implanted into the cranium above the primary visual cortex of four rabbits (Oryctolagus cuniculus). At the first stage, visual evoked potentials (VEPs) were recorded in response to substitution of threshold visual stimuli (0.28 and 0.31 cd/m2). Then a sound (2000 Hz, 84 dB, duration 40 ms) was added simultaneously to every visual stimulus. Sounds alone (without visual stimuli) did not produce a VEP response. It was found that the amplitude of the VEP component N1 (85-110 ms) in response to complex stimuli (visual and sound) increased 1.6-fold as compared to "simple" visual stimulation. At the second stage, paired substitutions of 8 different visual stimuli (range 0.38-20.2 cd/m2) by each other were performed. Sensory spaces of intensity were reconstructed on the basis of factor analysis. Sensory spaces of complexes were reconstructed in a similar way for simultaneous visual and sound stimulation. Comparison of the vectors representing the stimuli in the spaces showed that the addition of a sound led to a 1.4-fold expansion of the space occupied by the smaller intensities (0.28; 1.02; 3.05; 6.35 cd/m2). The addition of the sound also led to an arrangement of the intensities in ascending order. At the same time, the sound narrowed the space of the larger intensities (8.48; 13.7; 16.8; 20.2 cd/m2) 1.33-fold. It is suggested that the addition of a sound improves the distinction of smaller intensities and impairs the distinction of larger intensities. The sensory spaces revealed by complex stimuli were two-dimensional. This fact can be a consequence of the integration of sound and light into a unified complex during simultaneous stimulation.
Effects of aging on audio-visual speech integration.
Huyse, Aurélie; Leybaert, Jacqueline; Berthommier, Frédéric
2014-10-01
This study investigated the impact of aging on audio-visual speech integration. A syllable identification task was presented in auditory-only, visual-only, and audio-visual congruent and incongruent conditions. Visual cues were either degraded or unmodified. Stimuli were embedded in stationary noise alternating with modulated noise. Fifteen young adults and 15 older adults participated in this study. Results showed that older adults had preserved lipreading abilities when the visual input was clear but not when it was degraded. The impact of aging on audio-visual integration also depended on the quality of the visual cues. In the visual clear condition, the audio-visual gain was similar in both groups and analyses in the framework of the fuzzy-logical model of perception confirmed that older adults did not differ from younger adults in their audio-visual integration abilities. In the visual reduction condition, the audio-visual gain was reduced in the older group, but only when the noise was stationary, suggesting that older participants could compensate for the loss of lipreading abilities by using the auditory information available in the valleys of the noise. The fuzzy-logical model of perception confirmed the significant impact of aging on audio-visual integration by showing an increased weight of audition in the older group.
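The fuzzy-logical model of perception (FLMP) used in these analyses predicts response probabilities by multiplying the per-alternative support from each modality and normalizing across alternatives. A minimal sketch of the prediction rule (illustrative only; fitting the support values to data is the substantive part of such an analysis):

```python
def flmp_response(auditory, visual):
    """FLMP prediction: probability of each response alternative given
    per-alternative auditory and visual support values (each in [0, 1])."""
    support = [a * v for a, v in zip(auditory, visual)]
    total = sum(support)
    return [s / total for s in support]
```

In such analyses, group differences (e.g., between younger and older adults) can be localized to the modality-specific support values while the multiplicative combination rule is held fixed.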
Differential Visual Processing of Animal Images, with and without Conscious Awareness
Zhu, Weina; Drewes, Jan; Peatfield, Nicholas A.; Melcher, David
2016-01-01
The human visual system can quickly and efficiently extract categorical information from a complex natural scene. The rapid detection of animals in a scene is one compelling example of this phenomenon, and it suggests the automatic processing of at least some types of categories with little or no attentional requirements (Li et al., 2002, 2005). The aim of this study is to investigate whether this remarkable capability to categorize complex natural scenes exists in the absence of awareness, based on recent reports that "invisible" stimuli, which do not reach conscious awareness, can still be processed by the human visual system (Pasley et al., 2004; Williams et al., 2004; Fang and He, 2005; Jiang et al., 2006, 2007; Kaunitz et al., 2011a). In two experiments, we recorded event-related potentials (ERPs) in response to animal and non-animal/vehicle stimuli in both aware and unaware conditions in a continuous flash suppression (CFS) paradigm. Our results indicate that even in the "unseen" condition, the brain responds differently to animal and non-animal/vehicle images, consistent with rapid activation of animal-selective feature detectors prior to, or outside of, suppression by the CFS mask.
Visual working memory capacity for color is independent of representation resolution.
Ye, Chaoxiong; Zhang, Lingcong; Liu, Taosheng; Li, Hong; Liu, Qiang
2014-01-01
The relationship between visual working memory (VWM) capacity and the resolution of its representations has been extensively investigated. Several recent ERP studies using orientation (or arrow) stimuli suggest an inverse relationship between VWM capacity and representation resolution. However, different results have been obtained in studies using color stimuli, possibly because of important differences in the experimental paradigms used in previous studies. We examined whether the same relationship between capacity and resolution holds for color information. Participants performed a color change detection task while their electroencephalography was recorded. We manipulated representation resolution by asking participants to detect either a salient change (low resolution) or a subtle change (high resolution) in color. We used an ERP component known as the contralateral delay activity (CDA) to index the amount of information maintained in VWM. The results demonstrated the same pattern for both the low- and high-resolution conditions, with no difference between them. This result suggests that VWM always represents a fixed number of approximately 3-4 colors regardless of the resolution of representation.
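The CDA index used above is conventionally computed as the contralateral-minus-ipsilateral voltage difference over posterior electrodes during the retention interval. A minimal sketch, with hypothetical waveforms and a hypothetical analysis window (not the authors' exact pipeline):

```python
import numpy as np

def cda_amplitude(contra, ipsi, times, window=(0.3, 0.9)):
    """Mean contralateral-minus-ipsilateral difference (microvolts) within a
    retention-interval window; `contra` and `ipsi` are trial-averaged
    waveforms sampled at `times` (seconds)."""
    mask = (times >= window[0]) & (times <= window[1])
    return float(np.mean(contra[mask] - ipsi[mask]))

# Hypothetical waveforms: a sustained -1 microvolt contralateral negativity
times = np.linspace(0.0, 1.0, 1001)
ipsi = np.zeros_like(times)
contra = np.where(times >= 0.3, -1.0, 0.0)
print(cda_amplitude(contra, ipsi, times))  # -1.0
```

In a capacity study, this amplitude is compared across set sizes; a plateau around 3-4 items in both resolution conditions is the pattern the abstract reports.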
Exploring the perceptual biases associated with believing and disbelieving in paranormal phenomena.
Simmonds-Moore, Christine
2014-08-01
Ninety-five participants (32 believers, 30 disbelievers and 33 neutral believers in the paranormal) participated in an experiment comprising one visual and one auditory block of trials. Each block included one ESP trial, two degraded-stimulus trials and one random trial. Each trial included 8 screens or epochs of "random" noise. Participants entered a guess if they perceived a stimulus or changed their mind about stimulus identity, rated guesses for confidence and made notes during each trial. Believers and disbelievers did not differ in the number of guesses made, or in their ability to detect degraded stimuli. Believers displayed a trend toward making faster guesses in some conditions and showed significantly higher confidence and more misidentifications concerning guesses than disbelievers. Guesses, misidentifications and faster response latencies were generally more likely in the visual than the auditory conditions. ESP performance was no different from chance and did not differ between belief groups or sensory modalities.
Huyse, Aurélie; Berthommier, Frédéric; Leybaert, Jacqueline
2013-01-01
The aim of the present study was to examine audiovisual speech integration in cochlear-implanted children and in normally hearing children exposed to degraded auditory stimuli. Previous studies have shown that speech perception in cochlear-implant users is biased toward the visual modality when audition and vision provide conflicting information. Our main question was whether an experimentally designed degradation of the visual speech cue would increase the importance of audition in the response pattern. The impact of auditory proficiency was also investigated. A group of 31 children with cochlear implants and a group of 31 normally hearing children matched for chronological age were recruited. All children with cochlear implants had profound congenital deafness and had used their implants for at least 2 years. Participants had to perform an /aCa/ consonant-identification task in which stimuli were presented randomly in three conditions: auditory only, visual only, and audiovisual (congruent and incongruent McGurk stimuli). In half of the experiment the visual speech cue was normal; in the other half (visual reduction) a degraded visual signal was presented, aimed at preventing good-quality lipreading. The normally hearing children received a spectrally reduced speech signal (simulating the input delivered by a cochlear implant). First, performance in the visual-only and congruent audiovisual modalities was decreased, showing that the visual reduction technique used here was efficient at degrading lipreading. Second, in the incongruent audiovisual trials, visual reduction led to a major increase in the number of auditory-based responses in both groups. Differences between proficient and nonproficient children were found in both groups, with nonproficient children's responses being more visual and less auditory than those of proficient children.
Further analysis revealed that differences between visually clear and visually reduced conditions and between groups were not only because of differences in unisensory perception but also because of differences in the process of audiovisual integration per se. Visual reduction led to an increase in the weight of audition, even in cochlear-implanted children, whose perception is generally dominated by vision. This result suggests that the natural bias in favor of vision is not immutable. Audiovisual speech integration partly depends on the experimental situation, which modulates the informational content of the sensory channels and the weight that is awarded to each of them. Consequently, participants, whether deaf with cochlear implants or having normal hearing, not only base their perception on the most reliable modality but also award it an additional weight.
Crown-of-thorns starfish have true image forming vision.
Petie, Ronald; Garm, Anders; Hall, Michael R
2016-01-01
Photoreceptors have evolved numerous times, giving organisms the ability to detect light and respond to specific visual stimuli. Studies of the visual abilities of the Asteroidea (Echinodermata) have recently shown that species within this class have a more developed visual sense than previously thought, and it has been demonstrated that starfish use visual information for orientation within their habitat. Whereas image-forming eyes have been suggested for starfish, direct experimental proof of true spatial vision had not yet been obtained. The behavioural response of the coral-reef-inhabiting crown-of-thorns starfish (Acanthaster planci) was tested in controlled aquarium experiments using an array of stimuli to examine their visual performance. We presented starfish with various black-and-white shapes against a mid-intensity grey background, designed such that the animals would need true spatial vision to detect them. Starfish responded to black-and-white rectangles, but no directional response was found to black-and-white circles, despite equal areas of black and white. Additionally, we confirmed that starfish were attracted to black circles on a white background when the visual angle was larger than 14°. When changing the grey tone of the largest circle from black to white, we found responses to contrasts of 0.5 and above. The starfish were attracted to the dark areas of the visual stimuli and were found to be both attracted and repelled by the visual targets. For crown-of-thorns starfish, visual cues are essential for close-range orientation towards objects, such as coral boulders, in the wild. These visually guided behaviours can be replicated in aquarium conditions. Our observation that crown-of-thorns starfish respond to black-and-white shapes on a mid-intensity grey background is the first direct proof of true spatial vision in starfish and in the phylum Echinodermata.
Repetition Blindness for Natural Images of Objects with Viewpoint Changes
Buffat, Stéphane; Plantier, Justin; Roumes, Corinne; Lorenceau, Jean
2013-01-01
When stimuli are repeated in a rapid serial visual presentation (RSVP), observers sometimes fail to report the second occurrence of a target. This phenomenon is referred to as “repetition blindness” (RB). We report an RSVP experiment with photographs in which we manipulated object viewpoint between the first and second occurrences of a target (0°, 45°, or 90° changes), and spatial frequency (SF) content. Natural images were spatially filtered to produce low, medium, or high SF stimuli. RB was observed for all filtering conditions. Surprisingly, for full-spectrum (FS) images, RB increased significantly as the viewpoint change reached 90°. For filtered images, a similar pattern of results was found for all conditions except the medium SF stimuli. These findings suggest that object recognition in RSVP is subserved by viewpoint-specific representations for all spatial frequencies except medium ones.
Seno, Takeharu; Fukuda, Haruaki
2012-01-01
Over the last 100 years, numerous studies have examined the effective visual stimulus properties for inducing illusory self-motion (known as vection). This vection is often experienced more strongly in daily life than under controlled experimental conditions. One well-known example of vection in real life is the so-called 'train illusion'. In the present study, we showed that this train illusion can also be generated in the laboratory using virtual computer graphics-based motion stimuli. We also demonstrated that this vection can be modified by altering the meaning of the visual stimuli (i.e., top down effects). Importantly, we show that the semantic meaning of a stimulus can inhibit or facilitate vection, even when there is no physical change to the stimulus.
Ellenbogen, Mark A; Schwartzman, Alex E
2009-02-01
Although it is well established that attentional biases exist in anxious populations, the specific components of visual orienting towards and away from emotional stimuli are not well delineated. The present study was designed to examine these processes. We used a modified spatial cueing task to assess the speed of engagement with and disengagement from supraliminal and masked pictorial cues depicting threat, dysphoria, or neutral content in 36 clinically anxious, 41 depressed and 41 control participants. Participants were randomly assigned to a stress or neutral condition. During stress, anxious participants were slow to disengage from masked left-hemifield pictures depicting threat or dysphoria, but were quick to disengage from supraliminal threat pictures. Information processing in anxious participants during stress was characterized by early selective attention to emotional stimuli, occurring prior to full conscious awareness, followed by effortful avoidance of threat. Depressed participants were distinct from the anxious group, displaying selective attention to stimuli depicting dysphoria, but not threat, during the neutral condition. In sum, attentional biases in clinical populations are associated with difficulties in the disengagement component of visual orienting. Further, a vigilant-avoidant pattern of attentional bias may represent a strategic attempt to compensate for the early activation of a fear response.
Hess, R F; Mansouri, B; Thompson, B
2010-01-01
The present treatments for amblyopia are predominantly monocular, aiming to improve vision in the amblyopic eye through either patching of the fellow fixing eye or visual training of the amblyopic eye. This approach is problematic, not least because it rarely results in the establishment of binocular function. Recently it has been shown that amblyopes possess binocular cortical mechanisms for both threshold and suprathreshold stimuli. We outline a novel procedure for measuring the extent to which the fixing eye suppresses the fellow amblyopic eye, rendering what is a structurally binocular system functionally monocular. Here we show that prolonged periods of viewing (under the artificial conditions of stimuli of different contrast in each eye) during which information from the two eyes is combined lead to a strengthening of binocular vision in strabismic amblyopes and eventual combination of binocular information under natural viewing conditions (stimuli of the same contrast in each eye). Concomitant improvement in monocular acuity of the amblyopic eye occurs with this reduction in suppression and strengthening of binocular fusion. Furthermore, in a majority of patients tested, stereoscopic function is established. This provides the basis for a new treatment of amblyopia, one that is purely binocular and aimed at reducing suppression as a first step.
Wills, A J; Lea, Stephen E G; Leaver, Lisa A; Osthaus, Britta; Ryan, Catriona M E; Suret, Mark B; Bryant, Catherine M L; Chapman, Sue J A; Millar, Louise
2009-11-01
Pigeons (Columba livia), gray squirrels (Sciurus carolinensis), and undergraduates (Homo sapiens) learned discrimination tasks involving multiple mutually redundant dimensions. First, pigeons and undergraduates learned conditional discriminations between stimuli composed of three spatially separated dimensions, after first learning to discriminate the individual elements of the stimuli. When subsequently tested with stimuli in which one of the dimensions took an anomalous value, the majority of both species categorized test stimuli by their overall similarity to training stimuli; however, some individuals of both species categorized them according to a single dimension. In a second set of experiments, squirrels, pigeons, and undergraduates learned go/no-go discriminations using multiple simultaneous presentations of stimuli composed of three spatially integrated, highly salient dimensions. The tendency to categorize test stimuli including anomalous dimension values unidimensionally was higher than in the first set of experiments and did not differ significantly between species. The authors conclude that unidimensional categorization of multidimensional stimuli is not diagnostic for analytic cognitive processing, and that any differences between humans' and pigeons' behavior in such tasks are not due to special features of avian visual cognition.
Pearce, John M; Redhead, Edward S; George, David N
2002-04-01
Pigeons received autoshaping with 2 stimuli, A and B, presented in adjacent regions on a television screen. Conditioning with each stimulus was therefore accompanied by stimulation that was displaced from the screen whenever the other stimulus was presented. Test trials with AB revealed stronger responding if this displaced stimulation was similar to, rather than different from, A and B. For a further experiment the training just described included trials with A and B accompanied by an additional, similar, stimulus. Responding during test trials with AB was stronger if the additional trials signaled the presence rather than the absence of food. The results are explained with a configural theory of conditioning.
Andersen, Søren K; Müller, Matthias M; Hillyard, Steven A
2015-07-08
Experiments that study feature-based attention have often examined situations in which selection is based on a single feature (e.g., the color red). However, in more complex situations relevant stimuli may not be set apart from other stimuli by a single defining property but by a specific combination of features. Here, we examined sustained attentional selection of stimuli defined by conjunctions of color and orientation. Human observers attended to one out of four concurrently presented superimposed fields of randomly moving horizontal or vertical bars of red or blue color to detect brief intervals of coherent motion. Selective stimulus processing in early visual cortex was assessed by recordings of steady-state visual evoked potentials (SSVEPs) elicited by each of the flickering fields of stimuli. We directly contrasted attentional selection of single features and feature conjunctions and found that SSVEP amplitudes in conditions in which selection was based on a single feature only (color or orientation) exactly predicted the magnitude of attentional enhancement of SSVEPs when attending to a conjunction of both features. Furthermore, enhanced SSVEP amplitudes elicited by attended stimuli were accompanied by equivalent reductions of SSVEP amplitudes elicited by unattended stimuli in all cases. We conclude that attentional selection of a feature-conjunction stimulus is accomplished by the parallel and independent facilitation of its constituent feature dimensions in early visual cortex. The ability to perceive the world is limited by the brain's processing capacity. Attention affords adaptive behavior by selectively prioritizing processing of relevant stimuli based on their features (location, color, orientation, etc.).
We found that attentional mechanisms for selection of different features belonging to the same object operate independently and in parallel: concurrent attentional selection of two stimulus features is simply the sum of attending to each of those features separately. This result is key to understanding attentional selection in complex (natural) scenes, where relevant stimuli are likely to be defined by a combination of stimulus features.
Relativistic compression and expansion of experiential time in the left and right space.
Vicario, Carmelo Mario; Pecoraro, Patrizia; Turriziani, Patrizia; Koch, Giacomo; Caltagirone, Carlo; Oliveri, Massimiliano
2008-03-05
Time, space and numbers are closely linked in the physical world. However, the relativistic-like effects of spatial and magnitude factors on time perception remain poorly investigated. Here we investigated whether duration judgments of digit visual stimuli are biased depending on the side of space where the stimuli are presented and on the magnitude of the stimulus itself. Different groups of healthy subjects performed duration judgment tasks on various types of visual stimuli. In the first two experiments visual stimuli consisted of digit pairs (1 and 9), presented in the centre of the screen or in the right and left space. In a third experiment visual stimuli consisted of black circles. The duration of the reference stimulus was fixed at 300 ms. Subjects had to indicate the relative duration of the test stimulus compared with the reference one. The main results showed that, regardless of digit magnitude, the duration of stimuli presented in the left hemispace is underestimated and that of stimuli presented in the right hemispace is overestimated. On the other hand, in the midline position, duration judgments are affected by the numerical magnitude of the presented stimulus, with time underestimation for stimuli of low magnitude and time overestimation for stimuli of high magnitude. These results argue for the presence of strict interactions between space, time and magnitude representation in the human brain.
Stimulus Dependence of Correlated Variability across Cortical Areas
Cohen, Marlene R.
2016-01-01
The way that correlated trial-to-trial variability between pairs of neurons in the same brain area (termed spike count or noise correlation, rSC) depends on stimulus or task conditions can constrain models of cortical circuits and of the computations performed by networks of neurons (Cohen and Kohn, 2011). In visual cortex, rSC tends not to depend on stimulus properties (Kohn and Smith, 2005; Huang and Lisberger, 2009) but does depend on cognitive factors like visual attention (Cohen and Maunsell, 2009; Mitchell et al., 2009). However, neurons across visual areas respond to any visual stimulus or contribute to any perceptual decision, and the way that information from multiple areas is combined to guide perception is unknown. To gain insight into these issues, we recorded simultaneously from neurons in two areas of visual cortex (primary visual cortex, V1, and the middle temporal area, MT) while rhesus monkeys viewed different visual stimuli in different attention conditions. We found that correlations between neurons in different areas depend on stimulus and attention conditions in very different ways than do correlations within an area. Correlations across, but not within, areas depend on stimulus direction and the presence of a second stimulus, and attention has opposite effects on correlations within and across areas. This observed pattern of cross-area correlations is predicted by a normalization model where MT units sum V1 inputs that are passed through a divisive nonlinearity. Together, our results provide insight into how neurons in different areas interact and constrain models of the neural computations performed across cortical areas. SIGNIFICANCE STATEMENT Correlations in the responses of pairs of neurons within the same cortical area have been a subject of growing interest in systems neuroscience. However, correlated variability between different cortical areas is likely just as important. 
We recorded simultaneously from neurons in primary visual cortex and the middle temporal area while rhesus monkeys viewed different visual stimuli in different attention conditions. We found that correlations between neurons in different areas depend on stimulus and attention conditions in very different ways than do correlations within an area. The observed pattern of cross-area correlations was predicted by a simple normalization model. Our results provide insight into how neurons in different areas interact and constrain models of the neural computations performed across cortical areas.
Tien, Nai-Wen; Pearson, James T.; Heller, Charles R.; Demas, Jay
2015-01-01
Spike trains of retinal ganglion cells (RGCs) are the sole source of visual information to the brain; and understanding how the ∼20 RGC types in mammalian retinae respond to diverse visual features and events is fundamental to understanding vision. Suppressed-by-contrast (SbC) RGCs stand apart from all other RGC types in that they reduce rather than increase firing rates in response to light increments (ON) and decrements (OFF). Here, we genetically identify and morphologically characterize SbC-RGCs in mice, and target them for patch-clamp recordings under two-photon guidance. We find that strong ON inhibition (glycine > GABA) outweighs weak ON excitation, and that inhibition (glycine > GABA) coincides with decreases in excitation at light OFF. These input patterns explain the suppressive spike responses of SbC-RGCs, which are observed in dim and bright light conditions. Inhibition to SbC-RGC is driven by rectified receptive field subunits, leading us to hypothesize that SbC-RGCs could signal pattern-independent changes in the retinal image. Indeed, we find that shifts of random textures matching saccade-like eye movements in mice elicit robust inhibitory inputs and suppress spiking of SbC-RGCs over a wide range of texture contrasts and spatial frequencies. Similarly, stimuli based on kinematic analyses of mouse blinking consistently suppress SbC-RGC spiking. Receiver operating characteristics show that SbC-RGCs are reliable indicators of self-generated visual stimuli that may contribute to central processing of blinks and saccades. SIGNIFICANCE STATEMENT This study genetically identifies and morphologically characterizes suppressed-by-contrast retinal ganglion cells (SbC-RGCs) in mice. Targeted patch-clamp recordings from SbC-RGCs under two-photon guidance elucidate the synaptic mechanisms mediating spike suppression to contrast steps, and reveal that SbC-RGCs respond reliably to stimuli mimicking saccade-like eye movements and blinks. 
The similarity of responses to saccade-like eye movements and blinks suggests that SbC-RGCs may provide a unified signal for self-generated visual stimuli.
Neural oscillatory deficits in schizophrenia predict behavioral and neurocognitive impairments
Martínez, Antígona; Gaspar, Pablo A.; Hillyard, Steven A.; Bickel, Stephan; Lakatos, Peter; Dias, Elisa C.; Javitt, Daniel C.
2015-01-01
Paying attention to visual stimuli is typically accompanied by event-related desynchronizations (ERD) of ongoing alpha (7–14 Hz) activity in visual cortex. The present study used time-frequency based analyses to investigate the role of impaired alpha ERD in visual processing deficits in schizophrenia (Sz). Subjects viewed sinusoidal gratings of high (HSF) and low (LSF) spatial frequency (SF) designed to test functioning of the parvo- vs. magnocellular pathways, respectively. Patients with Sz and healthy controls paid attention selectively to either the LSF or HSF gratings which were presented in random order. Event-related brain potentials (ERPs) were recorded to all stimuli. As in our previous study, it was found that Sz patients were selectively impaired at detecting LSF target stimuli and that ERP amplitudes to LSF stimuli were diminished, both for the early sensory-evoked components and for the attend minus unattend difference component (the Selection Negativity), which is generally regarded as a specific index of feature-selective attention. In the time-frequency domain, the differential ERP deficits to LSF stimuli were echoed in a virtually absent theta-band phase locked response to both unattended and attended LSF stimuli (along with relatively intact theta-band activity for HSF stimuli). In contrast to the theta-band evoked responses which were tightly stimulus locked, stimulus-induced desynchronizations of ongoing alpha activity were not tightly stimulus locked and were apparent only in induced power analyses. Sz patients were significantly impaired in the attention-related modulation of ongoing alpha activity for both HSF and LSF stimuli. These deficits correlated with patients’ behavioral deficits in visual information processing as well as with visually based neurocognitive deficits. These findings suggest an additional, pathway-independent, mechanism by which deficits in early visual processing contribute to overall cognitive impairment in Sz. 
Examining the cognitive demands of analogy instructions compared to explicit instructions.
Tse, Choi Yeung Andy; Wong, Andus; Whitehill, Tara; Ma, Estella; Masters, Rich
2016-10-01
In many learning domains, instructions are presented explicitly despite high cognitive demands associated with their processing. This study examined cognitive demands imposed on working memory by different types of instruction to speak with maximum pitch variation: visual analogy, verbal analogy and explicit verbal instruction. Forty participants were asked to memorise a set of 16 visual and verbal stimuli while reading aloud a Cantonese paragraph with maximum pitch variation. Instructions about how to achieve maximum pitch variation were presented via visual analogy, verbal analogy, explicit rules or no instruction. Pitch variation was assessed off-line, using standard deviation of fundamental frequency. Immediately after reading, participants recalled as many stimuli as possible. Analogy instructions resulted in significantly increased pitch variation compared to explicit instructions or no instructions. Explicit instructions resulted in poorest recall of stimuli. Visual analogy instructions resulted in significantly poorer recall of visual stimuli than verbal stimuli. The findings suggest that non-propositional instructions presented via analogy may be less cognitively demanding than instructions that are presented explicitly. Processing analogy instructions that are presented as a visual representation is likely to load primarily visuospatial components of working memory rather than phonological components. The findings are discussed with reference to speech therapy and human cognition.
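The outcome measure above, pitch variation as the standard deviation of fundamental frequency (F0), reduces to a one-liner once an F0 contour has been extracted. A minimal sketch; the contour values are invented for illustration, and the NaN convention for unvoiced frames is an assumption rather than anything specified in the study:

```python
import numpy as np

def pitch_variation(f0_hz):
    """Standard deviation of the voiced F0 samples (Hz), the
    pitch-variation index; unvoiced frames (NaN) are ignored."""
    return float(np.nanstd(np.asarray(f0_hz, dtype=float)))

# Hypothetical F0 contour (Hz) with unvoiced gaps marked as NaN
contour = [180, 220, np.nan, 260, 200, np.nan, 240]
print(round(pitch_variation(contour), 1))  # 28.3
```

A larger value indicates livelier intonation, which is why the analogy-instruction groups, who showed greater pitch variation, score higher on this index.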
Postural time-to-contact as a precursor of visually induced motion sickness.
Li, Ruixuan; Walter, Hannah; Curry, Christopher; Rath, Ruth; Peterson, Nicolette; Stoffregen, Thomas A
2018-06-01
The postural instability theory of motion sickness predicts that subjective symptoms of motion sickness will be preceded by unstable control of posture. In previous studies, this prediction has been confirmed with measures of the spatial magnitude and the temporal dynamics of postural activity. In the present study, we examine whether precursors of visually induced motion sickness might exist in postural time-to-contact, a measure of postural activity that is related to the risk of falling. Standing participants were exposed to oscillating visual motion stimuli in a standard laboratory protocol. Both before and during exposure to visual motion stimuli, we monitored the kinematics of the body's center of pressure. We predicted that postural activity would differ between participants who reported motion sickness and those who did not, and that these differences would exist before participants experienced subjective symptoms of motion sickness. During exposure to visual motion stimuli, the multifractality of sway differed between the Well and Sick groups. Postural time-to-contact differed between the Well and Sick groups during exposure to visual motion stimuli, but also before exposure to any motion stimuli. The results provide a qualitatively new type of support for the postural instability theory of motion sickness.
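Postural time-to-contact can be sketched under simplifying assumptions: one-dimensional center-of-pressure motion toward a fixed stability boundary with constant-velocity extrapolation. Published analyses typically work in two dimensions and may also use acceleration, so this is an illustrative reduction, not the authors' method:

```python
def time_to_contact(pos, vel, boundary):
    """Time for the center of pressure at position `pos` (m), moving at
    velocity `vel` (m/s), to reach the stability boundary (m) under a
    constant-velocity extrapolation; infinite if moving away."""
    gap = boundary - pos
    if vel <= 0 or gap <= 0:
        return float('inf')   # moving away from (or already past) the boundary
    return gap / vel

# Hypothetical sample: COP 2 cm from center, boundary at 12 cm, drifting at 5 cm/s
print(time_to_contact(pos=0.02, vel=0.05, boundary=0.12))
```

Shorter times-to-contact indicate the body is approaching the limits of its base of support more quickly, which is the sense in which the measure indexes the risk of instability that the theory links to later motion-sickness symptoms.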
Spatial Scaling of the Profile of Selective Attention in the Visual Field.
Gannon, Matthew A; Knapp, Ashley A; Adams, Thomas G; Long, Stephanie M; Parks, Nathan A
2016-01-01
Neural mechanisms of selective attention must be capable of adapting to variation in the absolute size of an attended stimulus in the ever-changing visual environment. To date, little is known regarding how attentional selection interacts with fluctuations in the spatial expanse of an attended object. Here, we use event-related potentials (ERPs) to investigate the scaling of attentional enhancement and suppression across the visual field. We measured ERPs while participants performed a task at fixation that varied in its attentional demands (attentional load) and visual angle (1.0° or 2.5°). Observers were presented with a stream of task-relevant stimuli while foveal, parafoveal, and peripheral visual locations were probed by irrelevant distractor stimuli. We found two important effects in the N1 component of visual ERPs. First, N1 modulations to task-relevant stimuli indexed attentional selection of stimuli during the load task and further correlated with task performance. Second, with increased task size, attentional modulation of the N1 to distractor stimuli showed a differential pattern consistent with a scaling of attentional selection. Together, these results demonstrate that the size of an attended stimulus scales the profile of attentional selection across the visual field and provide insight into the attentional mechanisms associated with such spatial scaling.
Visual field asymmetries in visual evoked responses
Hagler, Donald J.
2014-01-01
Behavioral responses to visual stimuli exhibit visual field asymmetries, but cortical folding and the close proximity of visual cortical areas make electrophysiological comparisons between different stimulus locations problematic. Retinotopy-constrained source estimation (RCSE) uses distributed dipole models simultaneously constrained by multiple stimulus locations to provide separation between individual visual areas that is not possible with conventional source estimation methods. Magnetoencephalography and RCSE were used to estimate time courses of activity in V1, V2, V3, and V3A. Responses to left and right hemifield stimuli were not significantly different. Peak latencies for peripheral stimuli were significantly shorter than those for perifoveal stimuli in V1, V2, and V3A, likely related to the greater proportion of magnocellular input to V1 in the periphery. Consistent with previous results, sensor magnitudes for lower field stimuli were about twice as large as for upper field, which is only partially explained by the proximity to sensors for lower field cortical sources in V1, V2, and V3. V3A exhibited both latency and amplitude differences for upper and lower field responses. There were no differences for V3, consistent with previous suggestions that dorsal and ventral V3 are two halves of a single visual area, rather than distinct areas V3 and VP. PMID:25527151
Mirror me: Imitative responses in adults with autism.
Schunke, Odette; Schöttle, Daniel; Vettorazzi, Eik; Brandt, Valerie; Kahl, Ursula; Bäumer, Tobias; Ganos, Christos; David, Nicole; Peiker, Ina; Engel, Andreas K; Brass, Marcel; Münchau, Alexander
2016-02-01
Dysfunctions of the human mirror neuron system have been postulated to underlie some deficits in autism spectrum disorders, including poor imitative performance and impaired social skills. Using three reaction time experiments addressing mirror neuron system functions under simple and complex conditions, we examined 20 adult autism spectrum disorder participants and 20 healthy controls matched for age, gender and education. Participants performed simple finger-lifting movements in response to (1) biological finger and non-biological dot movement stimuli, (2) acoustic stimuli and (3) combined visual-acoustic stimuli with different contextual (compatible/incompatible) and temporal (simultaneous/asynchronous) relation. Mixed model analyses revealed slower reaction times in autism spectrum disorder. Both groups responded faster to biological compared to non-biological stimuli (Experiment 1), implying an intact processing advantage for biological stimuli in autism spectrum disorder. In Experiment 3, both groups had similar 'interference effects' when stimuli were presented simultaneously. However, autism spectrum disorder participants had abnormally slow responses, particularly when incompatible stimuli were presented consecutively. Our results suggest imitative control deficits rather than global imitative system impairments. © The Author(s) 2015.
Ouimet, Tia; Foster, Nicholas E V; Tryfon, Ana; Hyde, Krista L
2012-04-01
Autism spectrum disorder (ASD) is a complex neurodevelopmental condition characterized by atypical social and communication skills, repetitive behaviors, and atypical visual and auditory perception. Studies in vision have reported enhanced detailed ("local") processing but diminished holistic ("global") processing of visual features in ASD. Individuals with ASD also show enhanced processing of simple visual stimuli but diminished processing of complex visual stimuli. Relative to the visual domain, auditory global-local distinctions, and the effects of stimulus complexity on auditory processing in ASD, are less clear. However, one remarkable finding is that many individuals with ASD have enhanced musical abilities, such as superior pitch processing. This review provides a critical evaluation of behavioral and brain imaging studies of auditory processing with respect to current theories in ASD. We have focused on auditory-musical processing in terms of global versus local processing and simple versus complex sound processing. This review contributes to a better understanding of auditory processing differences in ASD. A deeper comprehension of sensory perception in ASD is key to better defining ASD phenotypes and, in turn, may lead to better interventions. © 2012 New York Academy of Sciences.
NASA Technical Reports Server (NTRS)
Clark, B.; Stewart, J. D.
1974-01-01
This experiment was concerned with the effects of rotary acceleration on choice reaction time (RTc) to the motion of a luminous line on a cathode-ray tube. Specifically, it compared the RTc to rotary acceleration alone, visual acceleration alone, and simultaneous, double stimulation by both rotary and visual acceleration. Thirteen airline pilots were rotated about an earth-vertical axis in a precision rotation device while they observed a vertical line. The stimuli were 7 rotary and visual accelerations matched for rise time. The pilots responded as quickly as possible by displacing a vertical controller to the right or left. The results showed a decreasing RTc with increasing acceleration for all conditions, while the RTc to rotary motion alone was substantially longer than for all other conditions. The RTc to the double stimulation was significantly longer than that for visual acceleration alone.
Residual attention guidance in blindsight monkeys watching complex natural scenes.
Yoshida, Masatoshi; Itti, Laurent; Berg, David J; Ikeda, Takuro; Kato, Rikako; Takaura, Kana; White, Brian J; Munoz, Douglas P; Isa, Tadashi
2012-08-07
Patients with damage to primary visual cortex (V1) demonstrate residual performance on laboratory visual tasks despite denial of conscious seeing (blindsight) [1]. After a period of recovery, which suggests a role for plasticity [2], visual sensitivity higher than chance is observed in humans and monkeys for simple luminance-defined stimuli, grating stimuli, moving gratings, and other stimuli [3-7]. Some residual cognitive processes including bottom-up attention and spatial memory have also been demonstrated [8-10]. To date, little is known about blindsight with natural stimuli and spontaneous visual behavior. In particular, is orienting attention toward salient stimuli during free viewing still possible? We used a computational saliency map model to analyze spontaneous eye movements of monkeys with blindsight from unilateral ablation of V1. Despite general deficits in gaze allocation, monkeys were significantly attracted to salient stimuli. The contribution of orientation features to salience was nearly abolished, whereas contributions of motion, intensity, and color features were preserved. Control experiments employing laboratory stimuli confirmed the free-viewing finding that lesioned monkeys retained color sensitivity. Our results show that attention guidance over complex natural scenes is preserved in the absence of V1, thereby directly challenging theories and models that crucially depend on V1 to compute the low-level visual features that guide attention. Copyright © 2012 Elsevier Ltd. All rights reserved.
Teramoto, Wataru; Honda, Keito; Furuta, Kento; Sekiyama, Kaoru
2017-08-01
Spatial proximity of signals from different sensory modalities is known to be a crucial factor in facilitating efficient multisensory processing in young adults. However, recent studies have demonstrated that older adults exhibit strong visuotactile interactions even when the visual stimuli were presented in a spatially disparate position from a tactile stimulus. This suggests that visuotactile peripersonal space differs between older and younger adults. In the present study, we investigated to what extent peripersonal space expands in the sagittal direction and whether this expansion was linked to the decline in sensorimotor functions in older adults. Vibrotactile stimuli were delivered either to the left or right index finger, while visual stimuli were presented at a distance of 5 cm (near), 37.5 cm (middle), or 70 cm (far) from each finger. The participants had to respond rapidly to a randomized sequence of unimodal (visual or tactile) and simultaneous visuotactile targets (i.e., a redundant target paradigm). Sensorimotor functions were independently assessed by the Timed Up and Go (TUG) and postural stability tests. Results showed that reaction times to the visuotactile bimodal stimuli were significantly faster than those to the unimodal stimuli, irrespective of age group [younger adults: 22.0 ± 0.6 years, older adults: 75.0 ± 3.3 years (mean ± SD)] and target distance. Of importance, a race model analysis revealed that the co-activation model (i.e., visuotactile multisensory integrative process) is supported in the far condition especially for older adults with relatively poor performance on the TUG or postural stability tests. These results suggest that aging can change visuotactile peripersonal space and that it may be closely linked to declines in sensorimotor functions related to gait and balance in older adults.
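The race model analysis mentioned above is commonly implemented as a test of Miller's race model inequality: if, at any time point, the cumulative RT distribution for bimodal targets exceeds the sum of the two unimodal distributions, a simple race between independent channels cannot explain the speed-up, and co-activation (multisensory integration) is inferred. A minimal sketch with fabricated RT values for illustration (the study's actual data and statistical procedure may differ):

```python
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative distribution of reaction times at time t (ms)."""
    return np.mean(np.asarray(rts) <= t)

def race_model_violation(rt_visual, rt_tactile, rt_bimodal, times):
    """Return the time points where the bimodal CDF exceeds the race model
    bound min(1, F_V(t) + F_T(t)) (Miller's inequality), indicating co-activation."""
    violations = []
    for t in times:
        bound = min(1.0, ecdf(rt_visual, t) + ecdf(rt_tactile, t))
        if ecdf(rt_bimodal, t) > bound:
            violations.append(t)
    return violations

# Illustrative (fabricated) RT samples in ms:
rt_v = [320, 340, 355, 360, 380, 400]        # unimodal visual
rt_t = [330, 345, 350, 370, 390, 410]        # unimodal tactile
rt_vt = [250, 260, 270, 280, 300, 320]       # markedly faster bimodal responses
print(race_model_violation(rt_v, rt_t, rt_vt, times=range(240, 420, 20)))
```

Any nonempty result indicates that the redundant-target speed-up is too large for a race between independent channels, which is the evidence pattern the abstract describes for the far condition in older adults.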
Modulation of early cortical processing during divided attention to non-contiguous locations.
Frey, Hans-Peter; Schmid, Anita M; Murphy, Jeremy W; Molholm, Sophie; Lalor, Edmund C; Foxe, John J
2014-05-01
We often face the challenge of simultaneously attending to multiple non-contiguous regions of space. There is ongoing debate as to how spatial attention is divided under these situations. Whereas, for several years, the predominant view was that humans could divide the attentional spotlight, several recent studies argue in favor of a unitary spotlight that rhythmically samples relevant locations. Here, this issue was addressed by the use of high-density electrophysiology in concert with the multifocal m-sequence technique to examine visual evoked responses to multiple simultaneous streams of stimulation. Concurrently, we assayed the topographic distribution of alpha-band oscillatory mechanisms, a measure of attentional suppression. Participants performed a difficult detection task that required simultaneous attention to two stimuli in contiguous (undivided) or non-contiguous parts of space. In the undivided condition, the classic pattern of attentional modulation was observed, with increased amplitude of the early visual evoked response and increased alpha amplitude ipsilateral to the attended hemifield. For the divided condition, early visual responses to attended stimuli were also enhanced, and the observed multifocal topographic distribution of alpha suppression was in line with the divided attention hypothesis. These results support the existence of divided attentional spotlights, providing evidence that the corresponding modulation occurs during initial sensory processing time-frames in hierarchically early visual regions, and that suppressive mechanisms of visual attention selectively target distracter locations during divided spatial attention. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
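The multifocal m-sequence technique referred to above tags each stimulus location with a pseudorandom maximal-length binary sequence; the near-zero cross-correlations between sequences allow evoked responses to simultaneous stimulation streams to be separated. As background only (not the authors' actual stimulus code), an m-sequence can be generated with a linear-feedback shift register; the tap positions here correspond to the primitive polynomial x^5 + x^3 + 1:

```python
def m_sequence(taps, nbits):
    """Generate one period (length 2**nbits - 1) of a maximal-length binary
    sequence with a Fibonacci LFSR; `taps` are 1-indexed feedback positions."""
    state = [1] * nbits          # any nonzero seed works
    seq = []
    for _ in range(2 ** nbits - 1):
        seq.append(state[-1])    # output the last register bit
        fb = 0
        for t in taps:
            fb ^= state[t - 1]   # XOR the tapped bits
        state = [fb] + state[:-1]
    return seq

seq = m_sequence(taps=(5, 3), nbits=5)   # period 31, balanced: 16 ones, 15 zeros
```

A maximal-length sequence visits every nonzero register state exactly once per period, which gives it the flat autocorrelation that makes the multifocal decomposition possible.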
Endogenous Sequential Cortical Activity Evoked by Visual Stimuli
Miller, Jae-eun Kang; Hamm, Jordan P.; Jackson, Jesse; Yuste, Rafael
2015-01-01
Although the functional properties of individual neurons in primary visual cortex have been studied intensely, little is known about how neuronal groups could encode changing visual stimuli using temporal activity patterns. To explore this, we used in vivo two-photon calcium imaging to record the activity of neuronal populations in primary visual cortex of awake mice in the presence and absence of visual stimulation. Multidimensional analysis of the network activity allowed us to identify neuronal ensembles defined as groups of cells firing in synchrony. These synchronous groups of neurons were themselves activated in sequential temporal patterns, which repeated at much higher proportions than chance and were triggered by specific visual stimuli such as natural visual scenes. Interestingly, sequential patterns were also present in recordings of spontaneous activity without any sensory stimulation and were accompanied by precise firing sequences at the single-cell level. Moreover, intrinsic dynamics could be used to predict the occurrence of future neuronal ensembles. Our data demonstrate that visual stimuli recruit similar sequential patterns to the ones observed spontaneously, consistent with the hypothesis that already existing Hebbian cell assemblies firing in predefined temporal sequences could be the microcircuit substrate that encodes visual percepts changing in time. PMID:26063915
Neural correlates of the food/non-food visual distinction.
Tsourides, Kleovoulos; Shariat, Shahriar; Nejati, Hossein; Gandhi, Tapan K; Cardinaux, Annie; Simons, Christopher T; Cheung, Ngai-Man; Pavlovic, Vladimir; Sinha, Pawan
2016-03-01
An evolutionarily ancient skill we possess is the ability to distinguish between food and non-food. Our goal here is to identify the neural correlates of visually driven 'edible-inedible' perceptual distinction. We also investigate correlates of the finer-grained likability assessment. Our stimuli depicted food or non-food items with sub-classes of appealing or unappealing exemplars. Using data-classification techniques drawn from machine-learning, as well as evoked-response analyses, we sought to determine whether these four classes of stimuli could be distinguished based on the patterns of brain activity they elicited. Subjects viewed 200 images while in a MEG scanner. Our analyses yielded two successes and a surprising failure. The food/non-food distinction had a robust neural counterpart and emerged as early as 85 ms post-stimulus onset. The likable/non-likable distinction too was evident in the neural signals when food and non-food stimuli were grouped together, or when only the non-food stimuli were included in the analyses. However, we were unable to identify any neural correlates of this distinction when limiting the analyses only to food stimuli. Taken together, these positive and negative results further our understanding of the substrates of a set of ecologically important judgments and have clinical implications for conditions like eating-disorders and anhedonia. Copyright © 2016 Elsevier B.V. All rights reserved.
Innes-Brown, Hamish; Barutchu, Ayla; Crewther, David P.
2013-01-01
The effect of multi-modal vs uni-modal prior stimuli on the subsequent processing of a simple flash stimulus was studied in the context of the audio-visual ‘flash-beep’ illusion, in which the number of flashes a person sees is influenced by accompanying beep stimuli. EEG recordings were made while combinations of simple visual and audio-visual stimuli were presented. The experiments found that the electric field strength related to a flash stimulus was stronger when it was preceded by a multi-modal flash/beep stimulus, compared to when it was preceded by another uni-modal flash stimulus. This difference was found to be significant in two distinct timeframes – an early timeframe, from 130–160 ms, and a late timeframe, from 300–320 ms. Source localisation analysis found that the increased activity in the early interval was localised to an area centred on the inferior and superior parietal lobes, whereas the later increase was associated with stronger activity in an area centred on primary and secondary visual cortex, in the occipital lobe. The results suggest that processing of a visual stimulus can be affected by the presence of an immediately prior multisensory event. Relatively long-lasting interactions generated by the initial auditory and visual stimuli altered the processing of a subsequent visual stimulus. PMID:24391939
Compatibility of Motion Facilitates Visuomotor Synchronization
ERIC Educational Resources Information Center
Hove, Michael J.; Spivey, Michael J.; Krumhansl, Carol L.
2010-01-01
Prior research indicates that synchronized tapping performance is very poor with flashing visual stimuli compared with auditory stimuli. Three finger-tapping experiments compared flashing visual metronomes with visual metronomes containing a spatial component, either compatible, incompatible, or orthogonal to the tapping action. In Experiment 1,…
Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu
2015-01-01
Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828
Kawase, Saya; Hannah, Beverly; Wang, Yue
2014-09-01
This study examines how visual speech information affects native judgments of the intelligibility of speech sounds produced by non-native (L2) speakers. Native Canadian English perceivers as judges perceived three English phonemic contrasts (/b-v, θ-s, l-ɹ/) produced by native Japanese speakers as well as native Canadian English speakers as controls. These stimuli were presented under audio-visual (AV, with speaker voice and face), audio-only (AO), and visual-only (VO) conditions. The results showed that, across conditions, the overall intelligibility of Japanese productions of the native (Japanese)-like phonemes (/b, s, l/) was significantly higher than the non-Japanese phonemes (/v, θ, ɹ/). In terms of visual effects, the more visually salient non-Japanese phonemes /v, θ/ were perceived as significantly more intelligible when presented in the AV compared to the AO condition, indicating enhanced intelligibility when visual speech information is available. However, the non-Japanese phoneme /ɹ/ was perceived as less intelligible in the AV compared to the AO condition. Further analysis revealed that, unlike the native English productions, the Japanese speakers produced /ɹ/ without visible lip-rounding, indicating that non-native speakers' incorrect articulatory configurations may decrease the degree of intelligibility. These results suggest that visual speech information may either positively or negatively affect L2 speech intelligibility.
Lasers' spectral and temporal profile can affect visual glare disability.
Beer, Jeremy M A; Freeman, David A
2012-12-01
Experiments measured the effects of laser glare on visual orientation and motion perception. Laser stimuli were varied according to spectral composition and temporal presentation as subjects identified targets' tilt (Experiment 1) and movement (Experiment 2). The objective was to determine whether the glare parameters would alter visual disruption. Three spectral profiles (monochromatic Green vs. polychromatic White vs. alternating Red-Green) were used to produce a ring of laser glare surrounding a target. Two experiments were performed to measure the minimum contrast required to report target orientation or motion direction. The temporal glare profile was also varied: the ring was illuminated either continuously or discontinuously. Time-averaged luminance of the glare stimuli was matched across all conditions. In both experiments, threshold (ΔL) values were approximately 0.15 log units higher in monochromatic Green than in polychromatic White conditions. In Experiment 2 (motion identification), thresholds were approximately 0.17 log units higher in rapidly flashing (6, 10, or 14 Hz) than in continuous exposure conditions. Monochromatic extended-source laser glare disrupted orientation and motion identification more than polychromatic glare. In the motion task, pulse trains faster than 6 Hz (but below flicker fusion) elevated thresholds more than continuous glare with the same time-averaged luminance. Under these conditions, alternating the wavelength of monochromatic glare over time did not aggravate disability relative to green-only glare. Repetitively flashing monochromatic laser glare induced occasional episodes of impaired motion identification, perhaps resulting from cognitive interference. Interference speckle might play a role in aggravating monochromatic glare effects.
Ohyanagi, Toshio; Sengoku, Yasuhito
2010-02-01
This article presents a new solution for measuring accurate reaction time (SMART) to visual stimuli. The SMART is a USB device realized with a Cypress Programmable System-on-Chip (PSoC) mixed-signal array programmable microcontroller. A brief overview of the hardware and firmware of the PSoC is provided, together with the results of three experiments. In Experiment 1, we investigated the timing accuracy of the SMART in measuring reaction time (RT) under different conditions of operating systems (OSs; Windows XP or Vista) and monitor displays (a CRT or an LCD). The results indicated that the timing error in measuring RT by the SMART was less than 2 msec, on average, under all combinations of OS and display and that the SMART was tolerant to jitter and noise. In Experiment 2, we tested the SMART with 8 participants. The results indicated that there was no significant difference among RTs obtained with the SMART under the different conditions of OS and display. In Experiment 3, we used Microsoft (MS) PowerPoint to present visual stimuli on the display. We found no significant difference in RTs obtained using MS DirectX technology versus using the PowerPoint file with the SMART. We are certain that the SMART is a simple and practical solution for measuring RTs accurately. Although there are some restrictions in using the SMART with RT paradigms, the SMART is capable of providing both researchers and health professionals working in clinical settings with new ways of using RT paradigms in their work.
Sex differences in adults' relative visual interest in female and male faces, toys, and play styles.
Alexander, Gerianne M; Charles, Nora
2009-06-01
An individual's reproductive potential appears to influence response to attractive faces of the opposite sex. Otherwise, relatively little is known about the characteristics of the adult observer that may influence his or her affective evaluation of male and female faces. An untested hypothesis (based on the proposed role of attractive faces in mate selection) is that most women would show greater interest in male faces whereas most men would show greater interest in female faces. Further, evidence from individuals with preferences for same-sex sexual partners suggests that response to attractive male and female faces may be influenced by gender-linked play preferences. To test these hypotheses, visual attention directed to sex-linked stimuli (faces, toys, play styles) was measured in 39 men and 44 women using eye tracking technology. Consistent with our predictions, men directed greater visual attention to all male-typical stimuli and visual attention to male and female faces was associated with visual attention to gender conforming or nonconforming stimuli in a manner consistent with previous research on sexual orientation. In contrast, women showed a visual preference for female-typical toys, but no visual preference for male faces or female-typical play styles. These findings indicate that sex differences in visual processing extend beyond stimuli associated with adult sexual behavior. We speculate that sex differences in visual processing are a component of the expression of gender phenotypes across the lifespan that may reflect sex differences in the motivational properties of gender-linked stimuli.
Discrepant visual speech facilitates covert selective listening in "cocktail party" conditions.
Williams, Jason A
2012-06-01
The presence of congruent visual speech information facilitates the identification of auditory speech, while the addition of incongruent visual speech information often impairs accuracy. This latter arrangement occurs naturally when one is being directly addressed in conversation but listens to a different speaker. Under these conditions, performance may diminish since: (a) one is bereft of the facilitative effects of the corresponding lip motion and (b) one becomes subject to visual distortion by incongruent visual speech; by contrast, speech intelligibility may be improved due to (c) bimodal localization of the central unattended stimulus. Participants were exposed to centrally presented visual and auditory speech while attending to a peripheral speech stream. In some trials, the lip movements of the central visual stimulus matched the unattended speech stream; in others, the lip movements matched the attended peripheral speech. Accuracy for the peripheral stimulus was nearly one standard deviation greater with incongruent visual information, compared to the congruent condition which provided bimodal pattern recognition cues. Likely, the bimodal localization of the central stimulus further differentiated the stimuli and thus facilitated intelligibility. Results are discussed with regard to similar findings in an investigation of the ventriloquist effect, and the relative strength of localization and speech cues in covert listening.
A noninvasive brain computer interface using visually-induced near-infrared spectroscopy responses.
Chen, Cheng-Hsuan; Ho, Ming-Shan; Shyu, Kuo-Kai; Hsu, Kou-Cheng; Wang, Kuo-Wei; Lee, Po-Lei
2014-09-19
Visually-induced near-infrared spectroscopy (NIRS) response was utilized to design a brain computer interface (BCI) system. Four circular checkerboards driven by distinct flickering sequences were displayed on an LCD screen as visual stimuli to induce subjects' NIRS responses. Each flickering sequence was a concatenation of alternating flickering segments and resting segments. The flickering segment had a fixed duration of 3 s, whereas the resting segment was chosen randomly within 15-20 s to create mutual independence among the different flickering sequences. Six subjects were recruited and requested to gaze at the four visual stimuli one after another in a random order. Since visual responses in the human brain are time-locked to the onsets of visual stimuli, and the flicker sequences of distinct visual stimuli were designed to be mutually independent, the NIRS responses induced by the user's gazed target can be discerned from non-gazed targets by applying a simple averaging process. The accuracies for the six subjects were higher than 90% after 10 or more epochs were averaged. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
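The "simple averaging process" described above exploits the fact that the brain response is time-locked to one stimulus's onsets while the other stimuli's onsets are independently timed: averaging signal epochs aligned to a given stimulus's onsets preserves that stimulus's response and washes the others out. A toy sketch with synthetic data (the sampling rate, response shape, and onset timing are assumptions, not the paper's parameters):

```python
import numpy as np

def onset_locked_average(signal, onsets, epoch_len):
    """Average signal epochs time-locked to one stimulus's onsets.
    Time-locked responses reinforce across epochs; responses to other,
    independently timed stimuli average toward their mean."""
    epochs = [signal[t:t + epoch_len] for t in onsets if t + epoch_len <= len(signal)]
    return np.mean(epochs, axis=0)

# Toy demo, assuming 10 Hz sampling and a canned 3 s response shape:
rng = np.random.default_rng(0)
epoch_len = 30                              # 3 s at 10 Hz
response = np.hanning(epoch_len)            # stand-in hemodynamic response
signal = rng.normal(0, 0.5, 3000)           # background noise
onsets = rng.choice(np.arange(0, 2900, 50), size=40, replace=False)
for t in onsets:
    signal[t:t + epoch_len] += response     # embed time-locked responses
avg = onset_locked_average(signal, onsets, epoch_len)
```

With enough epochs, `avg` closely tracks the embedded response shape, which is the mechanism behind the reported accuracy gain after 10 or more averaged epochs.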
Sekar, Krithiga; Findley, William M.; Poeppel, David; Llinás, Rodolfo R.
2013-01-01
At perceptual threshold, some stimuli are available for conscious access whereas others are not. Such threshold inputs are useful tools for investigating the events that separate conscious awareness from unconscious stimulus processing. Here, viewing unmasked, threshold-duration images was combined with recording magnetoencephalography to quantify differences among perceptual states, ranging from no awareness to ambiguity to robust perception. A four-choice scale was used to assess awareness: “didn’t see” (no awareness), “couldn’t identify” (awareness without identification), “unsure” (awareness with low certainty identification), and “sure” (awareness with high certainty identification). Stimulus-evoked neuromagnetic signals were grouped according to behavioral response choices. Three main cortical responses were elicited. The earliest response, peaking at ∼100 ms after stimulus presentation, showed no significant correlation with stimulus perception. A late response (∼290 ms) showed moderate correlation with stimulus awareness but could not adequately differentiate conscious access from its absence. By contrast, an intermediate response peaking at ∼240 ms was observed only for trials in which stimuli were consciously detected. That this signal was similar for all conditions in which awareness was reported is consistent with the hypothesis that conscious visual access is relatively sharply demarcated. PMID:23509248
Brooks, S J; Savov, V; Allzén, E; Benedict, C; Fredriksson, R; Schiöth, H B
2012-02-01
Functional Magnetic Resonance Imaging (fMRI) demonstrates that the subliminal presentation of arousing stimuli can activate subcortical brain regions independently of consciousness-generating top-down cortical modulation loops. Delineating these processes may elucidate mechanisms for arousal, aberration in which may underlie some psychiatric conditions. Here we are the first to review and discuss four Activation Likelihood Estimation (ALE) meta-analyses of fMRI studies using subliminal paradigms. We find a maximum of 9 out of 12 studies using subliminal presentation of faces contributing to activation of the amygdala, and also a significantly high number of studies reporting activation in the bilateral anterior cingulate, bilateral insular cortex, hippocampus and primary visual cortex. Subliminal faces are the strongest modality, whereas lexical stimuli are the weakest. Meta-analyses independent of studies using Regions of Interest (ROI) revealed no biasing effect. Core neuronal arousal in the brain, which may be at first independent of conscious processing, potentially involves a network incorporating primary visual areas, somatosensory, implicit memory and conflict monitoring regions. These data could provide candidate brain regions for the study of psychiatric disorders associated with aberrant automatic emotional processing. Copyright © 2011 Elsevier Inc. All rights reserved.
Are face representations depth cue invariant?
Dehmoobadsharifabadi, Armita; Farivar, Reza
2016-06-01
The visual system can process three-dimensional depth cues defining surfaces of objects, but it is unclear whether such information contributes to complex object recognition, including face recognition. The processing of different depth cues involves both dorsal and ventral visual pathways. We investigated whether facial surfaces defined by individual depth cues resulted in meaningful face representations: representations that maintain the relationship between the population of faces as defined in a multidimensional face space. We measured face identity aftereffects for facial surfaces defined by individual depth cues (Experiments 1 and 2) and tested whether the aftereffect transfers across depth cues (Experiments 3 and 4). Facial surfaces and their morphs to the average face were defined purely by one of shading, texture, motion, or binocular disparity. We obtained identification thresholds for matched (matched identity between adapting and test stimuli), non-matched (non-matched identity between adapting and test stimuli), and no-adaptation (showing only the test stimuli) conditions for each cue and across different depth cues. We found a robust face identity aftereffect in both experiments. Our results suggest that depth cues do contribute to forming meaningful face representations that are depth cue invariant. Depth cue invariance would require integration of information across different areas and different pathways for object recognition, and this in turn has important implications for cortical models of visual object recognition.
Response-specifying cue for action interferes with perception of feature-sharing stimuli.
Nishimura, Akio; Yokosawa, Kazuhiko
2010-06-01
Perceiving a visual stimulus is more difficult when a to-be-executed action is compatible with that stimulus, a phenomenon known as blindness to response-compatible stimuli. The present study explored how the factors constituting the action event (i.e., response-specifying cue, response intention, and response feature) affect the occurrence of this blindness effect. The response-specifying cue varied along the horizontal and vertical dimensions, while the response buttons were arranged diagonally. Participants responded based on one dimension randomly determined in a trial-by-trial manner. The response intention varied along a single dimension, whereas the response location and the response-specifying cue varied within both vertical and horizontal dimensions simultaneously. Moreover, the compatibility between the visual stimulus and the response location and the compatibility between that stimulus and the response-specifying cue were separately determined. The blindness effect emerged exclusively based on the feature correspondence between the response-specifying cue of the action task and the visual target of the perceptual task. The size of this stimulus-stimulus (S-S) blindness effect did not differ significantly across conditions, showing no effect of response intention or response location. This finding emphasizes the effect of stimulus factors, rather than response factors, of the action event as a source of the blindness to response-compatible stimuli.
The effect of non-visual working memory load on top-down modulation of visual processing
Rissman, Jesse; Gazzaley, Adam; D'Esposito, Mark
2009-01-01
While a core function of the working memory (WM) system is the active maintenance of behaviorally relevant sensory representations, it is also critical that distracting stimuli are appropriately ignored. We used functional magnetic resonance imaging to examine the role of domain-general WM resources in the top-down attentional modulation of task-relevant and irrelevant visual representations. In our dual-task paradigm, each trial began with the auditory presentation of six random (high load) or sequentially-ordered (low load) digits. Next, two relevant visual stimuli (e.g., faces), presented amongst two temporally interspersed visual distractors (e.g., scenes), were to be encoded and maintained across a 7-sec delay interval, after which memory for the relevant images and digits was probed. When taxed by high load digit maintenance, participants exhibited impaired performance on the visual WM task and a selective failure to attenuate the neural processing of task-irrelevant scene stimuli. The over-processing of distractor scenes under high load was indexed by elevated encoding activity in a scene-selective region-of-interest relative to low load and passive viewing control conditions, as well as by improved long-term recognition memory for these items. In contrast, the load manipulation did not affect participants' ability to upregulate activity in this region when scenes were task-relevant. These results highlight the critical role of domain-general WM resources in the goal-directed regulation of distractor processing. Moreover, the consequences of increased WM load in young adults closely resemble the effects of cognitive aging on distractor filtering [Gazzaley et al., (2005) Nature Neuroscience 8, 1298-1300], suggesting the possibility of a common underlying mechanism. PMID:19397858
Suárez, H; Musé, P; Suárez, A; Arocena, M
2001-01-01
In order to assess the influence of visual stimulation in the triggering of imbalance and falls in the elderly population, the postural responses of 18 elderly patients with central vestibular disorders and clinical evidence of instability and falls were studied while the patients received different types of visual stimuli. The stimulation conditions were: (i) no specific stimuli; (ii) smooth pursuit with pure sinusoids of 0.2 Hz as foveal stimulation; and (iii) optokinetic stimulation (OK) as retinal stimuli. Using an AMTI Accusway platform, the 95% confidence ellipse (CE) and sway velocity (SV) were evaluated with a scalogram using wavelets in order to assess the relationship between time and frequency in postural control. Velocity histograms were also constructed in order to observe the distribution of velocity values during the recording. Postural behavior after visual stimulation was non-homogeneous among this population. In five of the patients the OK stimulation generated: (i) significantly higher average values of CE (>3.4 ± 0.69 cm²); (ii) a significant increase in the average values of the SV (>3.89 ± 1.15 cm/s) and a velocity histogram with a homogeneous distribution between 0 and 18 cm/s; and (iii) a scalogram with sway frequencies of up to 4 Hz distributed in both the X and Y directions (anteroposterior and lateral) during visual stimulation, with arbitrary units of energy density >5. These three qualitative and quantitative aspects could be "markers" of visual dependence in the triggering of the mechanism of lack of equilibrium, and hence falls, in some elderly patients, and should be considered in order to prevent falls and also to assist in the rehabilitation program of these patients.
Auditory Emotional Cues Enhance Visual Perception
ERIC Educational Resources Information Center
Zeelenberg, Rene; Bocanegra, Bruno R.
2010-01-01
Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…
USDA-ARS?s Scientific Manuscript database
Drosophila suzukii (Matsumura) (Diptera: Drosophilidae) is an invasive pest in the United States that attacks soft-skinned ripening fruit such as raspberries, blackberries, and blueberries. Little is known regarding specific cues D. suzukii utilizes to locate and select host fruit, and inconsistenc...
Walter, Sabrina; Quigley, Cliodhna; Mueller, Matthias M
2014-05-01
Performing a task across the left and right visual hemifields results in better performance than a within-hemifield version of the task, termed the different-hemifield advantage. Whereas recent studies used transient stimuli presented with long interstimulus intervals (ISIs), here we used a continuous, objective electrophysiological (EEG) measure of competitive interactions for attentional processing resources in early visual cortex, the steady-state visual evoked potential (SSVEP). We frequency-tagged locations in each visual quadrant and at central fixation by flickering light-emitting diodes (LEDs) at different frequencies to elicit distinguishable SSVEPs. Stimuli were presented for several seconds, and participants were cued to attend to two LEDs either within one hemifield (Within) or distributed across the left and right visual hemifields (Across). In addition, we introduced two reference measures: one for suppressive interactions between the peripheral LEDs, using a task at fixation where attention was withdrawn from the periphery, and another estimating the upper bound of SSVEP amplitude by cueing participants to attend to only one of the peripheral LEDs. We found significantly greater SSVEP amplitude modulations in Across compared with Within hemifield conditions. No differences were found between SSVEP amplitudes elicited by the peripheral LEDs when participants attended to the centrally located LEDs compared with when peripheral LEDs had to be ignored in Across and Within trials. Attending to only one LED elicited the same SSVEP amplitude as Across conditions. Although behavioral data displayed a more complex pattern, SSVEP amplitudes were well in line with the predictions of the different-hemifield advantage account during sustained visuospatial attention.
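The frequency-tagging logic used above, where each LED flickers at its own rate and attention is read out as the spectral amplitude at that rate, can be sketched with synthetic data. Everything below (sampling rate, tagging frequencies of 8.5 and 12 Hz, signal amplitudes) is invented for illustration and is not the study's recording pipeline:

```python
import numpy as np

def ssvep_amplitudes(signal, fs, tag_freqs):
    """Return the amplitude-spectrum value at each tagged flicker frequency."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) * 2.0 / n          # amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # pick the FFT bin closest to each tagging frequency
    return {f: spectrum[np.argmin(np.abs(freqs - f))] for f in tag_freqs}

# Synthetic "EEG": an attended LED tagged at 8.5 Hz (larger response)
# and an ignored LED tagged at 12 Hz (smaller response), plus noise.
fs, dur = 500, 4.0                                            # Hz, seconds
t = np.arange(0, dur, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = (2.0 * np.sin(2 * np.pi * 8.5 * t)                      # attended: amplitude 2
       + 0.8 * np.sin(2 * np.pi * 12.0 * t)                   # ignored: amplitude 0.8
       + 0.3 * rng.standard_normal(len(t)))                   # broadband noise

amps = ssvep_amplitudes(eeg, fs, [8.5, 12.0])
print(amps)
```

With a 4 s window the frequency resolution is 0.25 Hz, so both tagging frequencies fall exactly on FFT bins and the recovered amplitudes closely match the simulated ones; in practice tag frequencies and epoch lengths are chosen with exactly this bin alignment in mind.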
Gast, Anne; Langer, Sebastian; Sengewald, Marie-Ann
2016-10-01
Evaluative conditioning (EC) is a change in valence that is due to pairing a conditioned stimulus (CS) with another, typically valent, unconditioned stimulus (US). This paper investigates how basic presentation parameters moderate EC effects. In two studies we tested the effectiveness of different temporal relations of the CS and the US, that is, the order in which the stimuli were presented and the temporal distance between them. Both studies showed that the size of EC effects was independent of the presentation order of CS and US within a stimulus pair. Contrary to classical conditioning effects, EC effects are thus not most pronounced after CS-first presentations. Furthermore, as shown in Experiment 2, EC effects increased in magnitude as the temporal interval between CS and US presentations decreased. Experiment 1 showed the largest EC effects in the condition with simultaneous presentations, which can be seen as the condition with the temporally closest presentation. In that experiment, stimuli were presented in two different modalities, which might have facilitated simultaneous processing. In Experiment 2, in which all stimuli were presented visually, this advantage of simultaneous presentation was not found. We discuss practical and theoretical implications of our findings. Copyright © 2016 Elsevier B.V. All rights reserved.
The Time Is Up: Compression of Visual Time Interval Estimations of Bimodal Aperiodic Patterns
Duarte, Fabiola; Lemus, Luis
2017-01-01
The ability to estimate time intervals subserves many of our behaviors and perceptual experiences. However, it is not clear how aperiodic (AP) stimuli affect our perception of time intervals across sensory modalities. To address this question, we evaluated the human capacity to discriminate between two acoustic (A), visual (V) or audiovisual (AV) time intervals of trains of scattered pulses. We first measured the periodicity of those stimuli and then sought correlations with the accuracy and reaction times (RTs) of the subjects. We found that, for all time intervals tested in our experiment, the visual system consistently perceived AP stimuli as being shorter than the periodic (P) ones. In contrast, such a compression phenomenon was not apparent during auditory trials. Our conclusions are: first, subjects exposed to P stimuli are more likely to estimate their durations accurately. Second, perceptual time compression occurs for AP visual stimuli. Lastly, AV discriminations are determined by A dominance rather than by AV enhancement. PMID:28848406
Sequential Ideal-Observer Analysis of Visual Discriminations.
ERIC Educational Resources Information Center
Geisler, Wilson S.
1989-01-01
A new analysis, based on the concept of the ideal observer in signal detection theory, is described. It allows: tracing of the flow of discrimination information through the initial physiological stages of visual processing for arbitrary spatio-chromatic stimuli, and measurement of the information content of said visual stimuli. (TJH)
Sex Differences in Response to Visual Sexual Stimuli: A Review
Rupp, Heather A.; Wallen, Kim
2009-01-01
This article reviews what is currently known about how men and women respond to the presentation of visual sexual stimuli. While the assumption that men respond more to visual sexual stimuli is generally empirically supported, previous reports of sex differences are confounded by the variable content of the stimuli presented and measurement techniques. We propose that the cognitive processing stage of responding to sexual stimuli is the first stage in which sex differences occur. The divergence between men and women is proposed to occur at this time, reflected in differences in neural activation, and to contribute to previously reported sex differences in downstream peripheral physiological responses and subjective reports of sexual arousal. Additionally, this review discusses factors that may contribute to the variability in sex differences observed in response to visual sexual stimuli. Factors include participant variables, such as hormonal state and socialized sexual attitudes, as well as variables specific to the content presented in the stimuli. Based on the literature reviewed, we conclude that content characteristics may differentially produce higher levels of sexual arousal in men and women. Specifically, men appear more influenced by the sex of the actors depicted in the stimuli while women's response may differ with the context presented. Sexual motivation, perceived gender role expectations, and sexual attitudes are possible influences. These differences are of practical importance to future research on sexual arousal that aims to use experimental stimuli comparably appealing to men and women and also for general understanding of cognitive sex differences. PMID:17668311
Physical Features of Visual Images Affect Macaque Monkey’s Preference for These Images
Funahashi, Shintaro
2016-01-01
Animals exhibit different degrees of preference toward various visual stimuli. In addition, it has been shown that strongly preferred stimuli can often act as a reward. The aim of the present study was to determine what features determine the strength of the preference for visual stimuli in order to examine neural mechanisms of preference judgment. We used 50 color photographs obtained from the Flickr Material Database (FMD) as original stimuli. Four macaque monkeys performed a simple choice task, in which two stimuli selected randomly from among the 50 stimuli were simultaneously presented on a monitor and monkeys were required to choose either stimulus by eye movements. We considered that a monkey preferred the chosen stimulus if it continued to look at that stimulus for an additional 6 s, and we calculated a choice ratio for each stimulus. Each monkey exhibited a different choice ratio for each of the original 50 stimuli. They tended to select clear, colorful and in-focus stimuli. Complexity and clarity were stronger determinants of preference than colorfulness. Images that included greater amounts of spatial frequency components were selected more frequently. These results indicate that particular physical features of the stimulus can affect the strength of a monkey’s preference and that the complexity, clarity and colorfulness of the stimulus are important determinants of this preference. Neurophysiological studies would be needed to examine whether these features of visual stimuli produce more activation in neurons that participate in this preference judgment. PMID:27853424
Graewe, Britta; De Weerd, Peter; Farivar, Reza; Castelo-Branco, Miguel
2012-01-01
Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200–250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components. PMID:22363479
The contribution of dynamic visual cues to audiovisual speech perception.
Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador
2015-08-01
Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this end, we measured word identification performance in noise using unimodal auditory stimuli and audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.
Colour cues proved to be more informative for dogs than brightness.
Kasparson, Anna A; Badridze, Jason; Maximov, Vadim V
2013-09-07
The results of early studies on colour vision in dogs led to the conclusion that chromatic cues are unimportant for dogs during their normal activities. Nevertheless, the canine retina possesses two cone types which provide at least the potential for colour vision. Recently, experiments controlling for the brightness information in visual stimuli demonstrated that dogs have the ability to perform chromatic discrimination. Here, we show that for eight previously untrained dogs colour proved to be more informative than brightness when choosing between visual stimuli differing both in brightness and chromaticity. Although brightness could have been used by the dogs in our experiments (unlike previous studies), it was not. Our results demonstrate that under natural photopic lighting conditions colour information may be predominant even for animals that possess only two spectral types of cone photoreceptors.
Cycowicz, Yael M; Friedman, David
2007-01-01
The orienting response, the brain's reaction to novel and/or out of context familiar events, is reflected by the novelty P3 of the ERP. Contextually novel events also engender high rates of recognition memory. We examined, under incidental and intentional conditions, the effects of visual symbol familiarity on the novelty P3 recorded during an oddball task and on the parietal episodic memory (EM) effect, an index of recollection. Repetition of familiar, but not unfamiliar, symbols elicited a reduction in the novelty P3. Better recognition performance for the familiar symbols was associated with a robust parietal EM effect, which was absent for the unfamiliar symbols in the incidental task. These data demonstrate that processing of novel events depends on expectation and whether stimuli have preexisting representations in long-term semantic memory.
Differences in apparent straightness of dot and line stimuli.
NASA Technical Reports Server (NTRS)
Parlee, M. B.
1972-01-01
An investigation has been made of anisotropic responses to contoured and noncontoured stimuli to obtain an insight into the way these stimuli are processed. For this purpose, eight subjects judged the alignment of minimally contoured (3 dot) and contoured (line) stimuli. Stimuli, presented to each eye separately, vertically subtended either 8 or 32 deg visual angle and were located 10 deg left, center, or 10 deg right in the visual field. Location-dependent deviations from physical straightness were larger for dot stimuli than for lines. The results were the same for the two eyes. In a second experiment, subjects judged the alignment of stimuli composed of different densities of dots. Apparent straightness for these stimuli was the same as for lines. The results are discussed in terms of alternative mechanisms for analysis of contoured and minimally contoured stimuli.
Evolutionary relevance facilitates visual information processing.
Jackson, Russell E; Calvillo, Dusti P
2013-11-03
Visual search of the environment is a fundamental human behavior that is powerfully affected by perceptual load. Previously investigated means for overcoming the inhibitions of high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and that evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component of the evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that pose significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the highest impact of perceptual load. Evolutionary relevance may importantly affect everyday visual information processing.
Top-down and bottom-up competition in visual stimuli processing.
Ligeza, Tomasz S; Tymorek, Agnieszka D; Wyczesany, Mirosław
2017-01-01
Because attention capacity is limited, not all stimuli present in the visual field are processed equally. While processing of salient stimuli is automatically boosted by bottom-up attention, processing of task-relevant stimuli can be boosted volitionally by top-down attention. Usually, both top-down and bottom-up influences are present simultaneously, which creates a competition between these two types of attention. We examined this competition using both behavioral and electrophysiological measures. Participants responded to letters superimposed on background pictures. We assumed that responding to different conditions of the letter task engages top-down attention to different extents, whereas processing of background pictures of varying salience engages bottom-up attention to different extents. To check how manipulation of top-down attention influences bottom-up processing, we measured event-related potentials (ERPs) in response to pictures (engaging mostly bottom-up attention) during three conditions of a letter task (different levels of top-down engagement). Conversely, to check how manipulation of bottom-up attention influences top-down processing, we measured ERP responses to letters (engaging mostly top-down attention) while manipulating the salience of background pictures (different levels of bottom-up engagement). Accuracy and reaction times in response to letters were also analyzed. As expected, most of the ERP and behavioral measures revealed a trade-off between the two types of processing: a decrease in bottom-up processing was associated with an increase in top-down processing and, similarly, a decrease in top-down processing was associated with an increase in bottom-up processing. These results demonstrate competition between the two types of attention.
Marschall-Lévesque, Shawn; Rouleau, Joanne-Lucine; Renaud, Patrice
2018-02-01
Penile plethysmography (PPG) is a measure of sexual interests that relies heavily on the stimuli it uses to generate valid results. Ethical considerations surrounding the use of real images in PPG have further limited the content admissible for these stimuli. To address this limitation, the current study aimed to combine audio and visual stimuli by incorporating computer-generated characters, creating new stimuli capable of accurately classifying sex offenders with child victims while also increasing the number of valid profiles. Three modalities (audio, visual, and audiovisual) were compared using two groups (15 sex offenders with child victims and 15 non-offenders). Both the new visual and audiovisual stimuli resulted in a 13% increase in the number of valid profiles at 2.5 mm, when compared to the standard audio stimuli. Furthermore, the new audiovisual stimuli generated a 34% increase in penile responses. All three modalities were able to discriminate between the two groups by their responses to the adult and child stimuli. Lastly, sexual interest indices for all three modalities could accurately classify participants into their appropriate groups, as demonstrated by ROC curve analysis (i.e., audio AUC = .81, 95% CI [.60, 1.00]; visual AUC = .84, 95% CI [.66, 1.00], and audiovisual AUC = .83, 95% CI [.63, 1.00]). Results suggest that computer-generated characters allow accurate discrimination of sex offenders with child victims and can be added to already validated stimuli to increase the number of valid profiles. The implications of audiovisual stimuli using computer-generated characters and their possible use in PPG evaluations are also discussed.
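The AUC values reported above have a simple interpretation: the AUC equals the probability that a randomly chosen member of one group scores higher on the index than a randomly chosen member of the other (the Mann-Whitney view of the ROC curve). A minimal sketch with invented index values, not the study's data:

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly drawn positive-group score exceeds a negative-group score
    (ties count as 0.5)."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Hypothetical sexual-interest indices for 15 members of each group;
# these numbers are invented for illustration only.
group_a = [0.9, 0.7, 0.8, 0.6, 0.95, 0.5, 0.75, 0.85, 0.65, 0.7,
           0.55, 0.8, 0.9, 0.6, 0.7]
group_b = [0.2, 0.4, 0.1, 0.5, 0.3, 0.6, 0.25, 0.35, 0.15, 0.45,
           0.3, 0.2, 0.55, 0.4, 0.1]
auc = roc_auc(group_a, group_b)
print(round(auc, 2))
```

A perfectly separating index gives AUC = 1.0, a useless one 0.5; the study's values around .8 indicate good but imperfect separation. Confidence intervals (as reported in the abstract) would typically be obtained by bootstrap resampling over participants.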
Short-term retention of pictures and words: evidence for dual coding systems.
Pellegrino, J W; Siegel, A W; Dhawan, M
1975-03-01
The recall of picture and word triads was examined in three experiments that manipulated the type of distraction in a Brown-Peterson short-term retention task. In all three experiments recall of pictures was superior to words under auditory distraction conditions. Visual distraction produced high performance levels with both types of stimuli, whereas combined auditory and visual distraction significantly reduced picture recall without further affecting word recall. The results were interpreted in terms of the dual coding hypothesis and indicated that pictures are encoded into separate visual and acoustic processing systems while words are primarily acoustically encoded.
Spiegel, Daniel P; Reynaud, Alexandre; Ruiz, Tatiana; Laguë-Beauvais, Maude; Hess, Robert; Farivar, Reza
2016-05-01
Vision is disrupted by traumatic brain injury (TBI), with vision-related complaints being amongst the most common in this population. Based on the neural responses of early visual cortical areas, injury to the visual cortex would be predicted to affect both first-order and second-order contrast sensitivity functions (CSFs): the height and/or the cut-off of the CSF are expected to be affected by TBI. Previous studies have reported disruptions only in second-order contrast sensitivity, but used a narrow range of parameters and divergent methodologies; no study has characterized the effect of TBI on the full CSF for both first- and second-order stimuli. Such information is needed to properly understand the effect of TBI on contrast perception, which underlies all visual processing. Using a unified framework based on the quick contrast sensitivity function, we measured full CSFs for static and dynamic first- and second-order stimuli. Our results provide a unique dataset showing alterations in sensitivity for both first- and second-order visual stimuli. In particular, we show that TBI patients have increased sensitivity for first-order motion stimuli and decreased sensitivity to orientation-defined and contrast-defined second-order stimuli. In addition, our data suggest that TBI patients' sensitivity for both first-order stimuli and second-order contrast-defined stimuli is shifted towards higher spatial frequencies. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
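The quick contrast sensitivity function mentioned above typically models the CSF as a truncated log-parabola with four parameters: peak gain, peak spatial frequency, bandwidth, and low-frequency truncation. Exact parameterizations vary across papers, so the form below is one common variant with hypothetical parameter values, not necessarily the one used in this study:

```python
import numpy as np

def truncated_log_parabola_csf(f, gamma_max, f_max, beta, delta):
    """Truncated log-parabola CSF.
    f: spatial frequency (cpd); gamma_max: peak sensitivity;
    f_max: peak frequency (cpd); beta: full bandwidth at half maximum
    (octaves); delta: low-frequency truncation (log10 units)."""
    f = np.asarray(f, dtype=float)
    kappa = np.log10(2.0)                    # half-maximum in log10 units
    beta_prime = beta * np.log10(2.0)        # bandwidth converted to log10 units
    log_s = np.log10(gamma_max) - kappa * (
        (np.log10(f) - np.log10(f_max)) / (beta_prime / 2.0)) ** 2
    # below f_max, the parabola is truncated at gamma_max / 10**delta
    floor = np.log10(gamma_max) - delta
    log_s = np.where((f < f_max) & (log_s < floor), floor, log_s)
    return 10.0 ** log_s

# Hypothetical parameters: peak gain 200, peak at 3 cpd,
# 3-octave bandwidth, 0.5 log10-unit truncation.
freqs = np.array([0.5, 1.0, 3.0, 8.0, 20.0])
print(truncated_log_parabola_csf(freqs, 200.0, 3.0, 3.0, 0.5))
```

In the qCSF procedure these four parameters are estimated adaptively trial by trial, which is what allows a full CSF (rather than sensitivity at a few fixed frequencies) to be measured efficiently; a raised peak corresponds to increased sensitivity, a rightward shift of f_max to the higher-spatial-frequency shift reported for the TBI group.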
Achilles' ear? Inferior human short-term and recognition memory in the auditory modality.
Bigelow, James; Poremba, Amy
2014-01-01
Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects' retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1-4 s). However, at longer retention intervals (8-32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. In the interest of extending the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli that would likely be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required; however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices.
Dores, A R; Almeida, I; Barbosa, F; Castelo-Branco, M; Monteiro, L; Reis, M; de Sousa, L; Caldas, A Castro
2013-01-01
Examining changes in brain activation linked with emotion-inducing stimuli is essential to the study of emotions. Given the ecological potential of techniques such as virtual reality (VR), it is important to inspect whether brain activation in response to emotional stimuli can be modulated by the three-dimensional (3D) properties of the images. The current study sought to test whether the activation of brain areas involved in the emotional processing of scenarios of different valences can be modulated by 3D. Therefore, the focus was on the interaction effect between emotion-inducing stimuli of different emotional valences (pleasant, unpleasant and neutral) and visualization types (2D, 3D). However, main effects were also analyzed. The effects of emotional valence and visualization type and their interaction were analyzed through a 3 × 2 repeated measures ANOVA. Post-hoc t-tests were performed under a ROI-analysis approach. The results show increased brain activation for the 3D affective-inducing stimuli in comparison with the same stimuli in 2D scenarios, mostly in cortical and subcortical regions that are related to emotional processing, in addition to visual processing regions. This study has the potential to clarify the brain mechanisms involved in the processing of emotional stimuli (scenarios' valence) and their interaction with three-dimensionality.
Mohamed, Saleh M H; Börger, Norbert A; Geuze, Reint H; van der Meere, Jaap J
2016-10-01
Many clinical studies have shown that performance of subjects with attention-deficit/hyperactivity disorder (ADHD) is impaired when stimuli are presented at a slow rate compared to a medium or fast rate. According to the cognitive-energetic model, this finding may reflect difficulty in allocating sufficient effort to regulate the motor activation state. Other studies have shown that the left hemisphere is largely responsible for keeping humans motivated and for allocating sufficient effort to complete tasks. This leads to the prediction that poor effort allocation might be associated with affected left-hemisphere functioning in ADHD. So far, this prediction has not been directly tested, which is the aim of the present study. Seventy-seven adults with various scores on the Conners' Adult ADHD Rating Scale performed a lateralized lexical decision task in three conditions, with stimuli presented at a fast, a medium, or a slow rate. Left-hemisphere functioning was measured in terms of visual field advantage (better performance for the right than for the left visual field). All subjects showed an increased right visual field advantage for word processing at the slow presentation rate compared to the fast and medium rates. Higher ADHD scores were related to a reduced right visual field advantage at the slow rate only. The present findings suggest that ADHD symptomatology is associated with less involvement of the left hemisphere when extra effort allocation is needed to optimize a low motor activation state.
Memory for pictures and sounds: independence of auditory and visual codes.
Thompson, V A; Paivio, A
1994-09-01
Three experiments examined the mnemonic independence of auditory and visual nonverbal stimuli in free recall. Stimulus lists consisted of (1) pictures, (2) the corresponding environmental sounds, or (3) picture-sound pairs. In Experiment 1, free recall was tested under three learning conditions: standard intentional, intentional with a rehearsal-inhibiting distracter task, or incidental with the distracter task. In all three groups, recall was best for the picture-sound items. In addition, recall for the picture-sound stimuli appeared to be additive relative to pictures or sounds alone when the distracter task was used. Experiment 2 included two additional groups: In one, two copies of the same picture were shown simultaneously; in the other, two different pictures of the same concept were shown. There was no difference in recall among any of the picture groups; in contrast, recall in the picture-sound condition was greater than recall in either single-modality condition. However, doubling the exposure time in a third experiment resulted in additively higher recall for repeated pictures with different exemplars than ones with identical exemplars. The results are discussed in terms of dual coding theory and alternative conceptions of the memory trace.
Simulating hemispatial neglect with virtual reality.
Baheux, Kenji; Yoshizawa, Makoto; Yoshida, Yasuko
2007-07-19
Hemispatial neglect is a cognitive disorder defined as a lack of attention to stimuli contralateral to the brain lesion. Assessment is traditionally done with basic pencil-and-paper tests, and rehabilitation programs are generally not well adapted. We propose a virtual reality system featuring an eye-tracking device for a better characterization of the neglect that will lead to new rehabilitation techniques. This paper presents a comparison of eye-gaze patterns of healthy subjects, patients, and healthy simulated patients on a virtual line bisection test. The task was also executed under a reduced visual field condition, in the hope that fewer stimuli would limit the neglect. We found that patients and healthy simulated patients had similar eye-gaze patterns. However, while the reduced visual field condition had no effect on the healthy simulated patients, it actually had a negative impact on the patients. We discuss the reasons for these differences and how they relate to the limitations of the neglect simulation. We argue that, with some improvements, the technique could be used to determine the potential of new rehabilitation techniques and also help the rehabilitation staff or the patient's relatives better understand the neglect condition.
Scholes, Kirsty E; Martin-Iverson, Mathew T
2010-03-01
Controversy exists as to the cause of disturbed prepulse inhibition (PPI) in patients with schizophrenia. This study aimed to clarify the nature of PPI in schizophrenia using improved methodology. Startle and PPI were measured in 44 patients with schizophrenia and 32 controls across a range of startling stimulus intensities under two conditions: one while participants were attending to the auditory stimuli (ATTEND condition) and one while participants completed a visual task in order to ensure they were ignoring the auditory stimuli (IGNORE condition). Patients showed reduced PPI of R(MAX) (reflex capacity) and increased PPI of Hillslope (reflex efficacy) only under the IGNORE condition, and failed to show the same pattern of attentional modulation of the reflex parameters as controls. In conclusion, disturbed PPI in schizophrenia appears to result from deficits in selective attention, rather than from preattentive dysfunction.
Order of stimulus presentation influences children's acquisition in receptive identification tasks.
Petursdottir, Anna Ingeborg; Aguilar, Gabriella
2016-03-01
Receptive identification is usually taught in matching-to-sample format, which entails the presentation of an auditory sample stimulus and several visual comparison stimuli in each trial. Conflicting recommendations exist regarding the order of stimulus presentation in matching-to-sample trials. The purpose of this study was to compare acquisition in receptive identification tasks under 2 conditions: when the sample was presented before the comparisons (sample first) and when the comparisons were presented before the sample (comparison first). Participants included 4 typically developing kindergarten-age boys. Stimuli, which included birds and flags, were presented on a computer screen. Acquisition in the 2 conditions was compared in an adapted alternating-treatments design combined with a multiple baseline design across stimulus sets. All participants took fewer trials to meet the mastery criterion in the sample-first condition than in the comparison-first condition. © 2015 Society for the Experimental Analysis of Behavior.
Language identification from visual-only speech signals
Ronquest, Rebecca E.; Levi, Susannah V.; Pisoni, David B.
2010-01-01
Our goal in the present study was to examine how observers identify English and Spanish from visual-only displays of speech. First, we replicated the recent findings of Soto-Faraco et al. (2007) with Spanish and English bilingual and monolingual observers using different languages and a different experimental paradigm (identification). We found that prior linguistic experience affected response bias but not sensitivity (Experiment 1). In two additional experiments, we investigated the visual cues that observers use to complete the language-identification task. The results of Experiment 2 indicate that some lexical information is available in the visual signal but that it is limited. Acoustic analyses confirmed that our Spanish and English stimuli differed acoustically with respect to linguistic rhythmic categories. In Experiment 3, we tested whether this rhythmic difference could be used by observers to identify the language when the visual stimuli are temporally reversed, thereby eliminating lexical information but retaining rhythmic differences. The participants performed above chance even in the backward condition, suggesting that the rhythmic differences between the two languages may aid language identification in visual-only speech signals. The results of Experiments 3A and 3B also confirm previous findings that increased stimulus length facilitates language identification. Taken together, the results of these three experiments replicate earlier findings and also show that prior linguistic experience, lexical information, rhythmic structure, and utterance length influence visual-only language identification. PMID:20675804
The relationship between level of autistic traits and local bias in the context of the McGurk effect
Ujiie, Yuta; Asai, Tomohisa; Wakabayashi, Akio
2015-01-01
The McGurk effect is a well-known illustration that demonstrates the influence of visual information on hearing in the context of speech perception. Some studies have reported that individuals with autism spectrum disorder (ASD) display abnormal processing of audio-visual speech integration, while other studies showed contradictory results. Based on the dimensional model of ASD, we administered two analog studies to examine the link between level of autistic traits, as assessed by the Autism Spectrum Quotient (AQ), and the McGurk effect among a sample of university students. In the first experiment, we found that autistic traits correlated negatively with fused (McGurk) responses. Then, we manipulated presentation types of visual stimuli to examine whether the local bias toward visual speech cues modulated individual differences in the McGurk effect. The presentation included four types of visual images, comprising no image, mouth only, mouth and eyes, and full face. The results revealed that global facial information facilitates the influence of visual speech cues on McGurk stimuli. Moreover, individual differences between groups with low and high levels of autistic traits appeared when the full-face visual speech cue with an incongruent voice condition was presented. These results suggest that individual differences in the McGurk effect might be due to a weak ability to process global facial information in individuals with high levels of autistic traits. PMID:26175705
Topographic brain mapping of emotion-related hemisphere asymmetries.
Roschmann, R; Wittling, W
1992-03-01
The study used topographic brain mapping of visual evoked potentials to investigate emotion-related hemisphere asymmetries. The stimulus material consisted of color photographs of human faces, grouped into two emotion-related categories: normal faces (neutral stimuli) and faces deformed by dermatological diseases (emotional stimuli). The pictures were presented tachistoscopically to 20 adult right-handed subjects. Brain activity was recorded by 30 EEG electrodes with linked ears as reference. The waveforms were averaged separately with respect to each of the two stimulus conditions. Statistical analysis by means of significance probability mapping revealed significant differences between stimulus conditions for two periods of time, indicating right hemisphere superiority in emotion-related processing. The results are discussed in terms of a 2-stage-model of emotional processing in the cerebral hemispheres.
Virtual reality stimuli for force platform posturography.
Tossavainen, Timo; Juhola, Martti; Ilmari, Pyykö; Aalto, Heikki; Toppila, Esko
2002-01-01
People who rely strongly on vision in the control of posture are known to have an elevated risk of falling. Dependence on visual control is an important parameter in the diagnosis of balance disorders. We have previously shown that virtual reality methods can be used to produce visual stimuli that affect balance, but suitable stimuli need to be found. In this study, the effect of six different virtual reality stimuli on the balance of 22 healthy test subjects was evaluated using force platform posturography. According to the tests, two of the stimuli had a significant effect on balance.
Neural Basis of Visual Attentional Orienting in Childhood Autism Spectrum Disorders.
Murphy, Eric R; Norr, Megan; Strang, John F; Kenworthy, Lauren; Gaillard, William D; Vaidya, Chandan J
2017-01-01
We examined spontaneous attention orienting to visual salience in stimuli without social significance using a modified Dot-Probe task during functional magnetic resonance imaging in high-functioning preadolescent children with Autism Spectrum Disorder (ASD) and age- and IQ-matched control children. While the magnitude of attentional bias (faster response to probes in the location of solid color patch) to visually salient stimuli was similar in the groups, activation differences in frontal and temporoparietal regions suggested hyper-sensitivity to visual salience or to sameness in ASD children. Further, activation in a subset of those regions was associated with symptoms of restricted and repetitive behavior. Thus, atypicalities in response to visual properties of stimuli may drive attentional orienting problems associated with ASD.
Locus and persistence of capacity limitations in visual information processing.
Kleiss, J A; Lane, D M
1986-05-01
Although there is considerable evidence that stimuli such as digits and letters are extensively processed in parallel and without capacity limitations, recent data suggest that only the features of stimuli are processed in parallel. In an attempt to reconcile this discrepancy, we used the simultaneous/successive detection paradigm with stimuli from experiments indicating parallel processing and with stimuli from experiments indicating that only features can be processed in parallel. In Experiment 1, large differences between simultaneous and successive presentations were obtained with an R target among P and Q distractors and among P and B distractors, but not with digit targets among letter distractors. As predicted by the feature integration theory of attention, false-alarm rates in the simultaneous condition were much higher than in the successive condition with the R/PQ stimuli. In Experiment 2, the possibility that attention is required for any difficult discrimination was ruled out as an explanation of the discrepancy between the digit/letter results and the R/PQ and R/PB results. Experiment 3A replicated the R/PQ and R/PB results of Experiment 1, and Experiment 3B extended these findings to a new set of stimuli. In Experiment 4, we found that large amounts of consistent practice did not generally eliminate capacity limitations. From this series of experiments we strongly conclude that the notion of capacity-free letter perception has limited generality.
Mundy, Matthew E
2014-01-01
Explanations for the cognitive basis of the Müller-Lyer illusion are still frustratingly mixed. To date, Day's (1989) theory of perceptual compromise has received little empirical attention. In this study, we examine the merit of Day's hypothesis for the Müller-Lyer illusion by biasing participants toward global or local visual processing through exposure to Navon (1977) stimuli, which are known to alter processing-level preference for a short time. Participants (N = 306) were randomly allocated to global, local, or control conditions. Those in the global or local conditions were exposed to Navon stimuli for 5 min and were required to report on the global or local stimulus features, respectively. Subsequently, participants completed a computerized Müller-Lyer experiment in which they adjusted the length of a line to match an illusory figure. The illusion was significantly stronger for participants with a global bias, and significantly weaker for those with a local bias, compared with the control condition. These findings provide empirical support for Day's "conflicting cues" theory of perceptual compromise in the Müller-Lyer illusion.
Rutishauser, Ueli; Kotowicz, Andreas; Laurent, Gilles
2013-01-01
Brain activity often consists of interactions between internal—or on-going—and external—or sensory—activity streams, resulting in complex, distributed patterns of neural activity. Investigation of such interactions could benefit from closed-loop experimental protocols in which one stream can be controlled depending on the state of the other. We describe here methods to present rapid and precisely timed visual stimuli to awake animals, conditional on features of the animal’s on-going brain state; those features are the presence, power and phase of oscillations in local field potentials (LFP). The system can process up to 64 channels in real time. We quantified its performance using simulations, synthetic data and animal experiments (chronic recordings in the dorsal cortex of awake turtles). The delay from detection of an oscillation to the onset of a visual stimulus on an LCD screen was 47.5 ms and visual-stimulus onset could be locked to the phase of ongoing oscillations at any frequency ≤40 Hz. Our software’s architecture is flexible, allowing on-the-fly modifications by experimenters and the addition of new closed-loop control and analysis components through plugins. The source code of our system “StimOMatic” is available freely as open-source. PMID:23473800
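The oscillation-conditional triggering described above can be sketched in a few lines. The following is a hypothetical illustration, not StimOMatic's actual code: it band-passes a signal, estimates instantaneous power and phase via the Hilbert transform, and applies a power-and-phase criterion of the kind used to gate stimulus onset. (A real closed-loop system would need causal filtering on streaming data; `sosfiltfilt` here is an offline, zero-phase stand-in.)

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

FS = 1000.0  # sampling rate in Hz (assumed)

def band_power_phase(lfp, lo=15.0, hi=25.0, fs=FS):
    """Band-pass the signal, then return instantaneous power and phase
    from the Hilbert analytic signal."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, lfp)
    analytic = hilbert(filtered)
    return np.abs(analytic) ** 2, np.angle(analytic)

def should_trigger(power, phase, power_thresh, target_phase=0.0, tol=0.3):
    """Fire the stimulus when band power exceeds threshold near the
    target phase of the ongoing oscillation."""
    return (power[-1] > power_thresh) and (abs(phase[-1] - target_phase) < tol)

# Synthetic example: a 20 Hz oscillation embedded in noise.
rng = np.random.default_rng(0)
t = np.arange(0, 1.0, 1.0 / FS)
lfp = np.sin(2 * np.pi * 20 * t) + 0.2 * rng.standard_normal(t.size)
power, phase = band_power_phase(lfp)
fire = should_trigger(power, phase, power_thresh=0.5)
```

The band edges, threshold, and phase tolerance are illustrative parameters, not values from the study.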
ERIC Educational Resources Information Center
Guo, Jing; McLeod, Poppy Lauretta
2014-01-01
Drawing upon the Search for Ideas in Associative Memory (SIAM) model as the theoretical framework, the impact of heterogeneity and topic relevance of visual stimuli on ideation performance was examined. Results from a laboratory experiment showed that visual stimuli increased productivity and diversity of idea generation, that relevance to the…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-25
.... Acoustic and visual stimuli generated by: (1) Helicopter landings/takeoffs; (2) noise generated during... minimize acoustic and visual disturbances) as described in NMFS' December 22, 2010 (75 FR 80471) notice of... Activity on Marine Mammals Acoustic and visual stimuli generated by: (1) Helicopter landings/ takeoffs; (2...
Stropahl, Maren; Schellhardt, Sebastian; Debener, Stefan
2017-06-01
The concurrent presentation of different auditory and visual syllables may result in the perception of a third syllable, reflecting an illusory fusion of visual and auditory information. This well-known McGurk effect is frequently used for the study of audio-visual integration. Recently, it was shown that the McGurk effect is strongly stimulus-dependent, which complicates comparisons across perceivers and inferences across studies. To overcome this limitation, we developed the freely available Oldenburg audio-visual speech stimuli (OLAVS), consisting of 8 different talkers and 12 different syllable combinations. The quality of the OLAVS set was evaluated with 24 normal-hearing subjects. All 96 stimuli were characterized based on their stimulus disparity, which was obtained from a probabilistic model (cf. Magnotti & Beauchamp, 2015). Moreover, the McGurk effect was studied in eight adult cochlear implant (CI) users. By applying the individual, stimulus-independent parameters of the probabilistic model, the predicted effect of stronger audio-visual integration in CI users could be confirmed, demonstrating the validity of the new stimulus material.
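The stimulus-disparity model referenced above (cf. Magnotti & Beauchamp, 2015) can be illustrated with a minimal sketch. Assuming a noisy-encoding-of-disparity form, in which a stimulus's audiovisual disparity D is encoded with perceiver-specific Gaussian noise sigma and a fused (McGurk) percept is reported when the encoded value falls below the perceiver's threshold tau, the fusion probability is the cumulative Gaussian of (tau - D)/sigma. The parameter values below are purely illustrative, not fitted values from the OLAVS evaluation:

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def p_fusion(disparity, threshold, sensory_noise):
    """Probability of a fused percept under the assumed model: disparity
    is encoded with Gaussian noise; fusion occurs when the encoded
    disparity falls below the perceiver's threshold."""
    return normal_cdf((threshold - disparity) / sensory_noise)

# A low-disparity stimulus should be fused more often than a
# high-disparity one for the same (hypothetical) perceiver.
low_disp = p_fusion(0.5, threshold=1.0, sensory_noise=0.5)
high_disp = p_fusion(2.0, threshold=1.0, sensory_noise=0.5)
```

Because the stimulus term (disparity) and the perceiver terms (threshold, noise) are separated, such a model yields the stimulus-independent individual parameters the abstract refers to.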
Mishra, Jyoti; Zanto, Theodore; Nilakantan, Aneesha; Gazzaley, Adam
2013-01-01
Intrasensory interference during visual working memory (WM) maintenance by object stimuli (such as faces and scenes) has been shown to negatively impact WM performance, with greater detrimental impacts of interference observed in aging. Here we assessed age-related impacts of intrasensory WM interference from lower-level stimulus features such as visual and auditory motion stimuli. We consistently found that interference in the form of ignored distractions and secondary task interruptions presented during a WM maintenance period degraded memory accuracy in both the visual and auditory domains. However, in contrast to prior studies assessing WM for visual object stimuli, feature-based interference effects were not observed to be significantly greater in older adults. Analyses of neural oscillations in the alpha frequency band further revealed preserved mechanisms of interference processing in terms of post-stimulus alpha suppression, which was observed maximally for secondary task interruptions in visual and auditory modalities in both younger and older adults. These results suggest that age-related sensitivity of WM to interference may be limited to complex object stimuli, at least at low WM loads. PMID:23791629
Brébion, Gildas; Stephan-Otto, Christian; Usall, Judith; Huerta-Ramos, Elena; Perez del Olmo, Mireia; Cuevas-Esteban, Jorge; Haro, Josep Maria; Ochoa, Susana
2015-09-01
A number of cognitive underpinnings of auditory hallucinations have been established in schizophrenia patients, but few have, as yet, been uncovered for visual hallucinations. In previous research, we unexpectedly observed that auditory hallucinations were associated with poor recognition of color, but not black-and-white (b/w), pictures. In this study, we attempted to replicate and explain this finding. Potential associations with visual hallucinations were explored. B/w and color pictures were presented to 50 schizophrenia patients and 45 healthy individuals under 2 conditions of visual context presentation corresponding to 2 levels of visual encoding complexity. Then, participants had to recognize the target pictures among distractors. Auditory-verbal hallucinations were inversely associated with the recognition of the color pictures presented under the most effortful encoding condition. This association was fully mediated by working-memory span. Visual hallucinations were associated with improved recognition of the color pictures presented under the less effortful condition. Patients suffering from visual hallucinations were not impaired, relative to the healthy participants, in the recognition of these pictures. Decreased working-memory span in patients with auditory-verbal hallucinations might impede the effortful encoding of stimuli. Visual hallucinations might be associated with facilitation in the visual encoding of natural scenes, or with enhanced color perception abilities. (c) 2015 APA, all rights reserved.
Iconic-Memory Processing of Unfamiliar Stimuli by Retarded and Nonretarded Individuals.
ERIC Educational Resources Information Center
Hornstein, Henry A.; Mosley, James L.
1979-01-01
The iconic-memory processing of unfamiliar stimuli by 11 mentally retarded males (mean age 22 years) was undertaken employing a visually cued partial-report procedure and a visual masking procedure. (Author/CL)
How bodies and voices interact in early emotion perception.
Jessen, Sarah; Obleser, Jonas; Kotz, Sonja A
2012-01-01
Successful social communication draws strongly on the correct interpretation of others' body and vocal expressions. Both can provide emotional information and often occur simultaneously. Yet their interplay has hardly been studied. Using electroencephalography, we investigated the temporal development underlying their neural interaction in auditory and visual perception. In particular, we tested whether this interaction qualifies as true integration following multisensory integration principles such as inverse effectiveness. Emotional vocalizations were embedded in either low or high levels of noise and presented with or without video clips of matching emotional body expressions. In both high and low noise conditions, a reduction in auditory N100 amplitude was observed for audiovisual stimuli. However, only under high noise did the N100 peak earlier in the audiovisual than in the auditory condition, suggesting facilitatory effects as predicted by the inverse effectiveness principle. Similarly, we observed earlier N100 peaks in response to emotional compared to neutral audiovisual stimuli. This was not the case in the unimodal auditory condition. Furthermore, suppression of beta-band oscillations (15-25 Hz), primarily reflecting biological motion perception, was modulated 200-400 ms after the vocalization. While larger differences in suppression between audiovisual and audio stimuli in high compared to low noise levels were found for emotional stimuli, no such difference was observed for neutral stimuli. This observation is in accordance with the inverse effectiveness principle and suggests a modulation of integration by emotional content. Overall, the results show that ecologically valid, complex stimuli such as joined body and vocal expressions are effectively integrated very early in processing.
Accurate or assumed: visual learning in children with ASD.
Trembath, David; Vivanti, Giacomo; Iacono, Teresa; Dissanayake, Cheryl
2015-10-01
Children with autism spectrum disorder (ASD) are often described as visual learners. We tested this assumption in an experiment in which 25 children with ASD, 19 children with global developmental delay (GDD), and 17 typically developing (TD) children were presented a series of videos via an eye tracker in which an actor instructed them to manipulate objects in speech-only and speech + pictures conditions. We found no group differences in visual attention to the stimuli. The GDD and TD groups performed better when pictures were available, whereas the ASD group did not. Performance of children with ASD and GDD was positively correlated with visual attention and receptive language. We found no evidence of a prominent visual learning style in the ASD group.
Wierzchoń, Michał; Wronka, Eligiusz; Paulewicz, Borysław; Szczepanowski, Remigiusz
2016-01-01
The present research investigated metacognitive awareness of emotional stimuli and its psychophysiological correlates. We used a backward masking task presenting participants with fearful or neutral faces. We asked participants for face discrimination and then probed their metacognitive awareness with confidence rating (CR) and post-decision wagering (PDW) scales. We also analysed psychophysiological correlates of awareness with event-related potential (ERP) components: P1, N170, early posterior negativity (EPN), and P3. We have not observed any differences between PDW and CR conditions in the emotion identification task. However, the "aware" ratings were associated with increased accuracy performance. This effect was more pronounced in PDW, especially for fearful faces, suggesting that emotional stimuli awareness may be enhanced by monetary incentives. EEG analysis showed larger N170, EPN and P3 amplitudes in aware compared to unaware trials. It also appeared that both EPN and P3 ERP components were more pronounced in the PDW condition, especially when emotional faces were presented. Taken together, our ERP findings suggest that metacognitive awareness of emotional stimuli depends on the effectiveness of both early and late visual information processing. Our study also indicates that awareness of emotional stimuli can be enhanced by the motivation induced by wagering. PMID:27490816
Prediction and uncertainty in human Pavlovian to instrumental transfer.
Trick, Leanne; Hogarth, Lee; Duka, Theodora
2011-05-01
Attentional capture and behavioral control by conditioned stimuli have been dissociated in animals. The current study assessed this dissociation in humans. Participants were trained on a Pavlovian schedule in which 3 visual stimuli, A, B, and C, predicted the occurrence of an aversive noise with 90%, 50%, or 10% probability, respectively. Participants then went on to separate instrumental training in which a key-press response canceled the aversive noise with a .5 probability on a variable interval schedule. Finally, in the transfer phase, the 3 Pavlovian stimuli were presented in this instrumental schedule and were no longer differentially predictive of the outcome. Observing times and gaze dwell time indexed attention to these stimuli in both training and transfer. Aware participants acquired veridical outcome expectancies in training--that is, A > B > C, and these expectancies persisted into transfer. Most important, the transfer effect accorded with these expectancies, A > B > C. By contrast, observing times accorded with uncertainty--that is, they showed B > A = C during training, and B < A = C in the transfer phase. Dwell time bias supported this association between attention and uncertainty, although these data showed a slightly more complicated pattern. Overall, the study suggests that transfer is linked to outcome prediction and is dissociated from attention to conditioned stimuli, which is linked to outcome uncertainty.
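The dissociation reported above, with attention tracking uncertainty (B > A = C) while transfer tracks prediction (A > B > C), is consistent with a simple information-theoretic reading: the 50% predictor carries maximal outcome entropy, while the 90% and 10% predictors are equally, and more, certain. A brief illustrative computation (not the authors' analysis):

```python
from math import log2

def outcome_entropy(p):
    """Shannon entropy (bits) of a binary outcome with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * log2(p) + (1 - p) * log2(1 - p))

# Cues A, B, C predicted the aversive noise with 90%, 50%, 10% probability.
h_a, h_b, h_c = (outcome_entropy(p) for p in (0.9, 0.5, 0.1))
```

Entropy is symmetric around 0.5, so A and C are matched in uncertainty despite their opposite predictive value, which is exactly the comparison the observing-time data exploit.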
Influences of selective adaptation on perception of audiovisual speech
Dias, James W.; Cook, Theresa C.; Rosenblum, Lawrence D.
2016-01-01
Research suggests that selective adaptation in speech is a low-level process dependent on sensory-specific information shared between the adaptor and test-stimuli. However, previous research has only examined how adaptors shift perception of unimodal test stimuli, either auditory or visual. In the current series of experiments, we investigated whether adaptation to cross-sensory phonetic information can influence perception of integrated audio-visual phonetic information. We examined how selective adaptation to audio and visual adaptors shift perception of speech along an audiovisual test continuum. This test-continuum consisted of nine audio-/ba/-visual-/va/ stimuli, ranging in visual clarity of the mouth. When the mouth was clearly visible, perceivers “heard” the audio-visual stimulus as an integrated “va” percept 93.7% of the time (e.g., McGurk & MacDonald, 1976). As visibility of the mouth became less clear across the nine-item continuum, the audio-visual “va” percept weakened, resulting in a continuum ranging in audio-visual percepts from /va/ to /ba/. Perception of the test-stimuli was tested before and after adaptation. Changes in audiovisual speech perception were observed following adaptation to visual-/va/ and audiovisual-/va/, but not following adaptation to auditory-/va/, auditory-/ba/, or visual-/ba/. Adaptation modulates perception of integrated audio-visual speech by modulating the processing of sensory-specific information. The results suggest that auditory and visual speech information are not completely integrated at the level of selective adaptation. PMID:27041781
Using Morphed Images to Study Visual Detection of Cutaneous Melanoma Symptom Evolution
ERIC Educational Resources Information Center
Dalianis, Elizabeth A.; Critchfield, Thomas S.; Howard, Niki L.; Jordan, J. Scott; Derenne, Adam
2011-01-01
Early detection attenuates otherwise high mortality from the skin cancer melanoma, and although major melanoma symptoms are well defined, little is known about how individuals detect them. Previous research has focused on identifying static stimuli as symptomatic vs. asymptomatic, whereas under natural conditions it is "changes" in skin lesions…
The wide window of face detection.
Hershler, Orit; Golan, Tal; Bentin, Shlomo; Hochstein, Shaul
2010-08-20
Faces are detected more rapidly than other objects in visual scenes and search arrays, but the cause for this face advantage has been contested. In the present study, we found that under conditions of spatial uncertainty, faces were easier to detect than control targets (dog faces, clocks and cars) even in the absence of surrounding stimuli, making an explanation based only on low-level differences unlikely. This advantage improved with eccentricity in the visual field, enabling face detection in wider visual windows, and pointing to selective sparing of face detection at greater eccentricities. This face advantage might be due to perceptual factors favoring face detection. In addition, the relative face advantage is greater under flanked than non-flanked conditions, suggesting an additional, possibly attention-related benefit enabling face detection in groups of distracters.
Visual search and contextual cueing: differential effects in 10-year-old children and adults.
Couperus, Jane W; Hunt, Ruskin H; Nelson, Charles A; Thomas, Kathleen M
2011-02-01
The development of contextual cueing specifically in relation to attention was examined in two experiments. Adult and 10-year-old participants completed a context cueing visual search task (Jiang & Chun, The Quarterly Journal of Experimental Psychology, 54A(4), 1105-1124, 2001) containing stimuli presented in an attended (e.g., red) and unattended (e.g., green) color. When the spatial configuration of stimuli in the attended and unattended color was invariant and consistently paired with the target location, adult reaction times improved, demonstrating learning. Learning also occurred if only the attended stimuli's configuration remained fixed. In contrast, while 10-year-olds, like adults, showed incrementally slower reaction times as the number of attended stimuli increased, they did not show learning in the standard paradigm. However, they did show learning when the ratio of attended to unattended stimuli was high, irrespective of the total number of attended stimuli. Findings suggest children show efficient attentional guidance by color in visual search but differences in contextual cueing.
Pan, Yi; Soto, David
2010-07-09
Recent research suggests that visual selection can be automatically biased toward those stimuli matching the contents of working memory (WM). However, a complete functional account of the interplay between WM and attention remains to be established. In particular, the boundary conditions of the WM effect on selection are unclear. Here, the authors investigate the influence of the focus of spatial attention (i.e., diffused vs. focused) by assessing the effect of spatial precues on attentional capture by WM. Experiments 1 and 2 showed that relative to a neutral condition without memory-matching stimuli, the presence of a memory distractor can trigger attentional capture despite being entirely irrelevant for the attention task, but this happened only when the item was actively maintained in WM, not when it was merely repeated. Experiments 3a, 3b and 3c showed that attentional capture by WM can be modulated by endogenous spatial pre-cueing of the incoming target of selection. The authors conclude that WM-driven capture of visual selection is dependent on the focus of spatial attention.
Sztarker, Julieta; Tomsic, Daniel
2008-06-01
When confronted with predators, animals are forced to take crucial decisions such as the timing and manner of escape. In the case of the crab Chasmagnathus, cumulative evidence suggests that the escape response to a visual danger stimulus (VDS) can be accounted for by the response of a group of lobula giant (LG) neurons. To further investigate this hypothesis, we examined the relationship between behavioral and neuronal activities within a variety of experimental conditions that affected the level of escape. The intensity of the escape response to VDS was influenced by seasonal variations, changes in stimulus features, and whether the crab perceived stimuli monocularly or binocularly. These experimental conditions consistently affected the response of LG neurons in a way that closely matched the effects observed at the behavioral level. In other words, the intensity of the stimulus-elicited spike activity of LG neurons faithfully reflected the intensity of the escape response. These results support the idea that the LG neurons from the lobula of crabs are deeply involved in the decision to escape from a VDS.
Raymond, J L; Lisberger, S G
1996-12-01
We characterized the dependence of motor learning in the monkey vestibulo-ocular reflex (VOR) on the duration, frequency, and relative timing of the visual and vestibular stimuli used to induce learning. The amplitude of the VOR was decreased or increased through training with paired head and visual stimulus motion in the same or opposite directions, respectively. For training stimuli that consisted of simultaneous pulses of head and target velocity 80-1000 msec in duration, brief stimuli caused small changes in the amplitude of the VOR, whereas long stimuli caused larger changes in amplitude as well as changes in the dynamics of the reflex. When the relative timing of the visual and vestibular stimuli was varied, brief image motion paired with the beginning of a longer vestibular stimulus caused changes in the amplitude of the reflex alone, but the same image motion paired with a later time in the vestibular stimulus caused changes in the dynamics as well as the amplitude of the VOR. For training stimuli that consisted of sinusoidal head and visual stimulus motion, low-frequency training stimuli induced frequency-selective changes in the VOR, as reported previously, whereas high-frequency training stimuli induced changes in the amplitude of the VOR that were more similar across test frequency. The results suggest that there are at least two distinguishable components of motor learning in the VOR. One component is induced by short-duration or high-frequency stimuli and involves changes in only the amplitude of the reflex. A second component is induced by long-duration or low-frequency stimuli and involves changes in the amplitude and dynamics of the VOR.
Multimodal emotion perception after anterior temporal lobectomy (ATL)
Milesi, Valérie; Cekic, Sezen; Péron, Julie; Frühholz, Sascha; Cristinzio, Chiara; Seeck, Margitta; Grandjean, Didier
2014-01-01
In the context of emotion information processing, several studies have demonstrated the involvement of the amygdala in emotion perception, for unimodal and multimodal stimuli. However, it seems that not only the amygdala, but several regions around it, may also play a major role in multimodal emotional integration. In order to investigate the contribution of these regions to multimodal emotion perception, five patients who had undergone unilateral anterior temporal lobe resection were exposed to both unimodal (vocal or visual) and audiovisual emotional and neutral stimuli. In a classic paradigm, participants were asked to rate the emotional intensity of angry, fearful, joyful, and neutral stimuli on visual analog scales. Compared with matched controls, patients exhibited impaired categorization of joyful expressions, whether the stimuli were auditory, visual, or audiovisual. Patients confused joyful faces with neutral faces, and joyful prosody with surprise. In the case of fear, unlike matched controls, patients provided lower intensity ratings for visual stimuli than for vocal and audiovisual ones. Fearful faces were frequently confused with surprised ones. When we controlled for lesion size, we no longer observed any overall difference between patients and controls in their ratings of emotional intensity on the target scales. Lesion size had the greatest effect on intensity perceptions and accuracy in the visual modality, irrespective of the type of emotion. These new findings suggest that a damaged amygdala, or a disrupted bundle between the amygdala and the ventral part of the occipital lobe, has a greater impact on emotion perception in the visual modality than it does in either the vocal or audiovisual one. We can surmise that patients are able to use the auditory information contained in multimodal stimuli to compensate for difficulty processing visually conveyed emotion. PMID:24839437
Vegetarianism and food perception. Selective visual attention to meat pictures.
Stockburger, Jessica; Renner, Britta; Weike, Almut I; Hamm, Alfons O; Schupp, Harald T
2009-04-01
Vegetarianism provides a model system to examine the impact of negative affect towards meat, based on ideational reasoning. It was hypothesized that meat stimuli are efficient attention catchers in vegetarians. Event-related brain potential recordings served to index selective attention processes at the level of initial stimulus perception. Consistent with the hypothesis, late positive potentials to meat pictures were enlarged in vegetarians compared to omnivores. This effect was specific for meat pictures and obtained during passive viewing and an explicit attention task condition. These findings demonstrate the attention capture of food stimuli, deriving affective salience from ideational reasoning and symbolic meaning.
Van De Poll, Matthew N; Zajaczkowski, Esmi L; Taylor, Gavin J; Srinivasan, Mandyam V; van Swinderen, Bruno
2015-11-01
Closed-loop paradigms provide an effective approach for studying visual choice behaviour and attention in small animals. Different flying and walking paradigms have been developed to investigate behavioural and neuronal responses to competing stimuli in insects such as bees and flies. However, the variety of stimulus choices that can be presented over one experiment is often limited. Current choice paradigms are mostly constrained as single binary choice scenarios that are influenced by the linear structure of classical conditioning paradigms. Here, we present a novel behavioural choice paradigm that allows animals to explore a closed geometry of interconnected binary choices by repeatedly selecting among competing objects, thereby revealing stimulus preferences in a historical context. We used our novel paradigm to investigate visual flicker preferences in honeybees (Apis mellifera) and found significant preferences for 20-25 Hz flicker and avoidance of higher (50-100 Hz) and lower (2-4 Hz) flicker frequencies. Similar results were found when bees were presented with three simultaneous choices instead of two, and when they were given the chance to select previously rejected choices. Our results show that honeybees can discriminate among different flicker frequencies and that their visual preferences are persistent even under different experimental conditions. Interestingly, avoided stimuli were more attractive if they were novel, suggesting that novelty salience can override innate preferences. Our recursive virtual reality environment provides a new approach to studying visual discrimination and choice behaviour in animals.
Carnaghi, Andrea; Mitrovic, Aleksandra; Leder, Helmut; Fantoni, Carlo; Silani, Giorgia
2018-01-01
A controversial hypothesis, named the Sexualized Body Inversion Hypothesis (SBIH), claims similar visual processing of sexually objectified women (i.e., with a focus on the sexual body parts) and inanimate objects, as indicated by an absence of the inversion effect for both types of stimuli. The current study aims to shed light on the mechanisms behind the SBIH in a series of 4 experiments. Using a modified version of Bernard et al.'s (2012) visual-matching task, we first tested the core assumption of the SBIH, namely that a similar processing style occurs for sexualized human bodies and objects. In Experiments 1 and 2 a non-sexualized (personalized) condition plus two object-control conditions (mannequins, and houses) were included in the experimental design. Results showed an inversion effect for images of personalized women and mannequins, but not for sexualized women and houses. Second, we explored whether this effect was driven by differences in stimulus asymmetry, by testing the mediating and moderating role of this visual feature. In Experiment 3, we provided the first evidence that not only the sexual attributes of the images but also additional perceptual features of the stimuli, such as their asymmetry, played a moderating role in shaping the inversion effect. Lastly, we investigated the strategy adopted in the visual-matching task by tracking eye movements of the participants. Results of Experiment 4 suggest an association between a specific pattern of visual exploration of the images and the presence of the inversion effect. Findings are discussed with respect to the literature on sexual objectification. PMID:29621249
NASA Technical Reports Server (NTRS)
Carpenter-Smith, Theodore R.; Futamura, Robert G.; Parker, Donald E.
1995-01-01
The present study focused on the development of a procedure to assess perceived self-motion induced by visual surround motion (vection). Using an apparatus that permitted independent control of visual and inertial stimuli, prone observers were translated along their head x-axis (fore/aft). The observers' task was to report the direction of self-motion during passive forward and backward translations of their bodies coupled with exposure to various visual surround conditions. The proportion of 'forward' responses was used to calculate each observer's point of subjective equality (PSE) for each surround condition. The results showed that the moving visual stimulus produced a significant shift in the PSE when data from the moving surround condition were compared with the stationary surround and no-vision conditions. Further, the results indicated that vection increased monotonically with surround velocities between 4 and 40°/s. It was concluded that linear vection can be measured in terms of changes in the amplitude of whole-body inertial acceleration required to elicit equivalent numbers of 'forward' and 'backward' self-motion reports.
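Computing a PSE from the proportion of 'forward' responses, as described above, is a standard psychometric-function fit. The following minimal Python sketch illustrates the idea; the acceleration values and response proportions are hypothetical, and the logistic form is one common choice rather than the study's stated method.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    """Psychometric function: P('forward') as a function of inertial
    acceleration amplitude x; pse is the 50% point."""
    return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

# Hypothetical data for one surround condition: signed acceleration
# amplitudes (m/s^2) and observed proportions of 'forward' reports.
amplitude = np.array([-0.4, -0.2, 0.0, 0.2, 0.4])
p_forward = np.array([0.05, 0.20, 0.55, 0.85, 0.97])

(pse, slope), _ = curve_fit(logistic, amplitude, p_forward, p0=(0.0, 0.1))
# pse is the acceleration at which 'forward' and 'backward' reports are
# equally likely; a shift in pse between surround conditions quantifies
# the vection induced by the moving visual stimulus.
```

A shift of the fitted PSE between the moving-surround and stationary-surround conditions would correspond to the effect reported in the abstract.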
Aging-related changes in auditory and visual integration measured with MEG
Stephen, Julia M.; Knoefel, Janice E.; Adair, John; Hart, Blaine; Aine, Cheryl J.
2010-01-01
As noted in the aging literature, processing delays often occur in the central nervous system with increasing age, which is often attributable in part to demyelination. In addition, differential slowing between sensory systems has been shown to be most discrepant between visual (up to 20 ms) and auditory systems (< 5 ms). Therefore, we used MEG to measure the multisensory integration response in auditory association cortex in young and elderly participants to better understand the effects of aging on multisensory integration abilities. Results show a main effect for reaction times (RTs); the mean RTs of the elderly were significantly slower than the young. In addition, in the young we found significant facilitation of RTs to the multisensory stimuli relative to both unisensory stimuli, when comparing the cumulative distribution functions, which was not evident for the elderly. We also identified a significant interaction between age and condition in the superior temporal gyrus. In particular, the elderly had larger amplitude responses (~100 ms) to auditory stimuli relative to the young when auditory stimuli alone were presented, whereas the amplitude of responses to the multisensory stimuli was reduced in the elderly, relative to the young. This suppressed cortical multisensory integration response in the elderly, which corresponded with slower RTs and reduced RT facilitation effects in the elderly, has not been reported previously and may be related to poor cortical integration based on timing changes in unisensory processing in the elderly. PMID:20713130
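The cumulative-distribution-function comparison mentioned above is commonly carried out against Miller's race-model bound, which asks whether multisensory responses are faster than probability summation of the unisensory responses can explain. A minimal sketch with synthetic RT samples follows; all values are illustrative and are not the study's data.

```python
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative distribution of reaction times evaluated at times t."""
    rts = np.asarray(rts)
    return np.mean(rts[:, None] <= np.atleast_1d(t), axis=0)

# Hypothetical RT samples (ms), for illustration only.
rng = np.random.default_rng(0)
rt_aud = rng.normal(320, 40, 500)   # auditory-only trials
rt_vis = rng.normal(340, 40, 500)   # visual-only trials
rt_av = rng.normal(290, 40, 500)    # audiovisual trials (faster on average)

t = np.arange(150, 500, 10)
# Race-model (Miller) bound: where the multisensory CDF exceeds the sum
# of the unisensory CDFs (capped at 1), facilitation goes beyond what
# probability summation of independent channels can produce.
bound = np.minimum(ecdf(rt_aud, t) + ecdf(rt_vis, t), 1.0)
violation = ecdf(rt_av, t) > bound
```

In the study's terms, the young group showed this kind of CDF-level facilitation for multisensory stimuli while the elderly group did not.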
Visual cortex in dementia with Lewy bodies: magnetic resonance imaging study
Taylor, John-Paul; Firbank, Michael J.; He, Jiabao; Barnett, Nicola; Pearce, Sarah; Livingstone, Anthea; Vuong, Quoc; McKeith, Ian G.; O’Brien, John T.
2012-01-01
Background: Visual hallucinations and visuoperceptual deficits are common in dementia with Lewy bodies, suggesting that cortical visual function may be abnormal. Aims: To investigate: (1) cortical visual function using functional magnetic resonance imaging (fMRI); and (2) the nature and severity of perfusion deficits in visual areas using arterial spin labelling (ASL)-MRI. Method: In total, 17 participants with dementia with Lewy bodies (DLB group) and 19 similarly aged controls were presented with simple visual stimuli (checkerboard, moving dots, and objects) during fMRI and subsequently underwent ASL-MRI (DLB group n = 15, control group n = 19). Results: Functional activations were evident in visual areas in both the DLB and control groups in response to checkerboard and objects stimuli but reduced visual area V5/MT (middle temporal) activation occurred in the DLB group in response to motion stimuli. Posterior cortical perfusion deficits occurred in the DLB group, particularly in higher visual areas. Conclusions: Higher visual areas, particularly occipito-parietal, appear abnormal in dementia with Lewy bodies, while there is a preservation of function in lower visual areas (V1 and V2/3). PMID:22500014
Peel, Hayden J.; Sperandio, Irene; Laycock, Robin; Chouinard, Philippe A.
2018-01-01
Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether or not the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined if visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make. Namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features. PMID:29725292
Cognitive workload modulation through degraded visual stimuli: a single-trial EEG study
NASA Astrophysics Data System (ADS)
Yu, K.; Prasad, I.; Mir, H.; Thakor, N.; Al-Nashash, H.
2015-08-01
Objective. Our experiments explored the effect of visual stimuli degradation on cognitive workload. Approach. We investigated the subjective assessment, event-related potentials (ERPs) as well as electroencephalogram (EEG) as measures of cognitive workload. Main results. These experiments confirm that degradation of visual stimuli increases cognitive workload as assessed by subjective NASA task load index and confirmed by the observed P300 amplitude attenuation. Furthermore, the single-trial multi-level classification using features extracted from ERPs and EEG is found to be promising. Specifically, the adopted single-trial oscillatory EEG/ERP detection method achieved an average accuracy of 85% for discriminating 4 workload levels. Additionally, we found from the spatial patterns obtained from EEG signals that the frontal parts carry information that can be used for differentiating workload levels. Significance. Our results show that visual stimuli can modulate cognitive workload, and the modulation can be measured by the single trial EEG/ERP detection method.
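Oscillatory EEG features of the kind used for the workload classification above are often band powers. The following minimal Python sketch computes band power from a synthetic trial; the signal, frequency bands, and sampling rate are illustrative assumptions, not the study's pipeline.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean spectral power of `signal` within the [f_lo, f_hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()

# Synthetic 2-s 'trial' sampled at 256 Hz: a 10 Hz (alpha-band)
# oscillation embedded in Gaussian noise.
fs = 256
t = np.arange(0, 2, 1.0 / fs)
rng = np.random.default_rng(1)
trial = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

alpha = band_power(trial, fs, 8, 12)   # band containing the oscillation
theta = band_power(trial, fs, 4, 7)    # band containing only noise
```

Feature vectors of such band powers (per channel and band), possibly combined with ERP amplitudes, could then be fed to any multi-class classifier to discriminate workload levels, analogous in spirit to the 4-level discrimination reported above.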
Lateral eye-movement responses to visual stimuli.
Wilbur, M P; Roberts-Wilbur, J
1985-08-01
The association of left lateral eye-movement with emotionality or arousal of affect and of right lateral eye-movement with cognitive/interpretive operations and functions was investigated. Participants were junior and senior students enrolled in an undergraduate course in developmental psychology. There were 37 women and 13 men, ranging from 19 to 45 years of age. Using videotaped lateral eye-movements of 50 participants' responses to 15 visually presented stimuli (precategorized as neutral, emotional, or intellectual), content and statistical analyses supported the association between left lateral eye-movement and emotional arousal and between right lateral eye-movement and cognitive functions. Precategorized visual stimuli included items such as a ball (neutral), gun (emotional), and calculator (intellectual). The findings are congruent with the existing lateral eye-movement literature and extend it by using visual stimuli that do not require the explicit response or implicit processing of verbal questioning.
Braun, Doris I; Schütz, Alexander C; Gegenfurtner, Karl R
2017-07-01
Visual sensitivity is dynamically modulated by eye movements. During saccadic eye movements, sensitivity is reduced selectively for low-spatial frequency luminance stimuli and largely unaffected for high-spatial frequency luminance and chromatic stimuli (Nature 371 (1994), 511-513). During smooth pursuit eye movements, sensitivity for low-spatial frequency luminance stimuli is moderately reduced while sensitivity for chromatic and high-spatial frequency luminance stimuli is even increased (Nature Neuroscience, 11 (2008), 1211-1216). Since these effects are at least partly of different polarity, we investigated the combined effects of saccades and smooth pursuit on visual sensitivity. For the time course of chromatic sensitivity, we found that detection rates increased slightly around pursuit onset. During saccades to static and moving targets, detection rates dropped briefly before the saccade and reached a minimum at saccade onset. This reduction of chromatic sensitivity was present whenever a saccade was executed and it was not modified by subsequent pursuit. We also measured contrast sensitivity for flashed high- and low-spatial frequency luminance and chromatic stimuli during saccades and pursuit. During saccades, the reduction of contrast sensitivity was strongest for low-spatial frequency luminance stimuli (about 90%). However, a significant reduction was also present for chromatic stimuli (about 58%). Chromatic sensitivity was increased during smooth pursuit (about 12%). These results suggest that the modulation of visual sensitivity during saccades and smooth pursuit is more complex than previously assumed.
NASA Astrophysics Data System (ADS)
Medina, José M.; Díaz, José A.
2006-05-01
Simple visual-reaction times (VRT) were measured for a variety of stimuli selected along red-green (L-M axis) and blue-yellow [S-(L+M) axis] directions in the isoluminant plane under different adaptation stimuli. Data were plotted in terms of the RMS cone contrast in contrast-threshold units. For each opponent system, a modified Piéron function was fitted in each experimental configuration and on all adaptation stimuli. A single function did not account for all the data, confirming the existence of separate postreceptoral adaptation mechanisms in each opponent system under suprathreshold conditions. The analysis of the VRT hazard functions suggested that both color-opponent mechanisms present a well-defined, transient-sustained structure under marked suprathreshold conditions. The influence of signal polarity and chromatic adaptation on each color axis reveals asymmetries in the integrated hazard functions, suggesting separate detection mechanisms for each pole (red, green, blue, and yellow detectors).
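A Piéron-type function of the kind fitted above relates reaction time to stimulus intensity as RT = t0 + k·C^(-beta), where t0 is the irreducible asymptotic latency. The following minimal Python sketch fits such a function to hypothetical contrast/VRT data; the values and the exact parameterization are illustrative, not the study's.

```python
import numpy as np
from scipy.optimize import curve_fit

def pieron(c, t0, k, beta):
    """Piéron function: reaction time (ms) as a function of cone
    contrast c in threshold units; t0 is the asymptotic latency."""
    return t0 + k * c ** (-beta)

# Hypothetical contrast levels (threshold units) and mean VRTs (ms).
contrast = np.array([1.5, 2.0, 3.0, 5.0, 8.0])
vrt = np.array([420.0, 390.0, 360.0, 335.0, 320.0])

(t0, k, beta), _ = curve_fit(pieron, contrast, vrt, p0=(300.0, 150.0, 1.0))
# The fitted curve decreases toward t0 as contrast grows; separate
# adaptation mechanisms would show up as different fitted parameters
# across adaptation conditions rather than one shared function.
```

The abstract's finding that no single function accounted for all the data corresponds, in this sketch, to the fitted (t0, k, beta) differing systematically across adaptation conditions.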
Visual cognition in disorders of consciousness: from V1 to top-down attention.
Monti, Martin M; Pickard, John D; Owen, Adrian M
2013-06-01
What is it like to be at the lower boundaries of consciousness? Disorders of consciousness such as coma, the vegetative state, and the minimally conscious state are among the most mysterious and least understood conditions of the human brain. Particularly complicated is the assessment of residual cognitive functioning and awareness for diagnostic, rehabilitative, legal, and ethical purposes. In this article, we present a novel functional magnetic resonance imaging exploration of visual cognition in a patient with a severe disorder of consciousness. This battery of tests, first developed in healthy volunteers, assesses increasingly complex transformations of visual information along a known caudal to rostral gradient from occipital to temporal cortex. In the first five levels, the battery assesses (passive) processing of light, color, motion, coherent shapes, and object categories (i.e., faces, houses). At the final level, the battery assesses the ability to voluntarily deploy visual attention in order to focus on one of two competing stimuli. In the patient, this approach revealed appropriate brain activations, indistinguishable from those seen in healthy and aware volunteers. In addition, the ability of the patient to focus on one of two competing stimuli, and to switch between them on command, also suggests that he retained the ability to access, to some degree, his own visual representations.
Constantinidou, Fofi; Evripidou, Christiana
2012-01-01
This study investigated the effects of stimulus presentation modality on working memory performance in children with reading disabilities (RD) and in typically developing children (TDC), all native speakers of Greek. It was hypothesized that the visual presentation of common objects would result in improved learning and recall performance as compared to the auditory presentation of stimuli. Twenty children, ages 10-12, diagnosed with RD were matched to 20 TDC age peers. The experimental tasks implemented a multitrial verbal learning paradigm incorporating three modalities: auditory, visual, and auditory plus visual. Significant group differences were noted on language, verbal and nonverbal memory, and measures of executive abilities. A mixed-model MANOVA indicated that children with RD had a slower learning curve and recalled fewer words than TDC across experimental modalities. Both groups of participants benefited from the visual presentation of objects; however, children with RD showed the greatest gains during this condition. In conclusion, working memory for common verbal items is impaired in children with RD; however, performance can be facilitated, and learning efficiency maximized, when information is presented visually. The results provide further evidence for the pictorial superiority hypothesis and the theory that pictorial presentation of verbal stimuli is adequate for dual coding.
Miskovic, Vladimir; Keil, Andreas
2015-01-01
The visual system is biased towards sensory cues that have been associated with danger or harm through temporal co-occurrence. An outstanding question about conditioning-induced changes in visuocortical processing is the extent to which they are driven primarily by top-down factors such as expectancy or by low-level factors such as the temporal proximity between conditioned stimuli and aversive outcomes. Here, we examined this question using two different differential aversive conditioning experiments: participants learned to associate a particular grating stimulus with an aversive noise that was presented either in close temporal proximity (delay conditioning experiment) or after a prolonged stimulus-free interval (trace conditioning experiment). In both experiments we probed cue-related cortical responses by recording steady-state visual evoked potentials (ssVEPs). Although behavioral ratings indicated that all participants successfully learned to discriminate between the grating patterns that predicted the presence versus absence of the aversive noise, selective amplification of population-level responses in visual cortex for the conditioned danger signal was observed only when the grating and the noise were temporally contiguous. Our findings are in line with notions purporting that changes in the electrocortical response of visual neurons induced by aversive conditioning are a product of Hebbian associations among sensory cell assemblies rather than being driven entirely by expectancy-based, declarative processes. PMID:23398582
Visual stimuli and written production of deaf signers.
Jacinto, Laís Alves; Ribeiro, Karen Barros; Soares, Aparecido José Couto; Cárnio, Maria Silvia
2012-01-01
To verify the interference of visual stimuli in the written production of deaf signers with no complaints regarding reading and writing. The research group consisted of 12 students with education between the 4th and 5th grade of elementary school, with severe or profound sensorineural hearing loss, users of LIBRAS and with alphabetical writing level. The evaluation was performed with pictures in a logical sequence and an action picture. The analysis used the communicative competence criteria. There were no differences in the written production of the subjects between the two stimuli. None of the texts had a title or punctuation; verbs appeared in the infinitive form, cohesive links were lacking, and invented words were included. The different visual stimuli did not affect the production of texts.
Do You "See'" What I "See"? Differentiation of Visual Action Words
ERIC Educational Resources Information Center
Dickinson, Joël; Cirelli, Laura; Szeligo, Frank
2014-01-01
Dickinson and Szeligo ("Can J Exp Psychol" 62(4):211--222, 2008) found that processing time for simple visual stimuli was affected by the visual action participants had been instructed to perform on these stimuli (e.g., see, distinguish). It was concluded that these effects reflected the differences in the durations of these various…
Neuronal Response Gain Enhancement prior to Microsaccades.
Chen, Chih-Yang; Ignashchenkova, Alla; Thier, Peter; Hafed, Ziad M
2015-08-17
Neuronal response gain enhancement is a classic signature of the allocation of covert visual attention without eye movements. However, microsaccades continuously occur during gaze fixation. Because these tiny eye movements are preceded by motor preparatory signals well before they are triggered, it may be the case that a corollary of such signals may cause enhancement, even without attentional cueing. In six different macaque monkeys and two different brain areas previously implicated in covert visual attention (superior colliculus and frontal eye fields), we show neuronal response gain enhancement for peripheral stimuli appearing immediately before microsaccades. This enhancement occurs both during simple fixation with behaviorally irrelevant peripheral stimuli and when the stimuli are relevant for the subsequent allocation of covert visual attention. Moreover, this enhancement occurs in both purely visual neurons and visual-motor neurons, and it is replaced by suppression for stimuli appearing immediately after microsaccades. Our results suggest that there may be an obligatory link between microsaccade occurrence and peripheral selective processing, even though microsaccades can be orders of magnitude smaller than the eccentricities of peripheral stimuli. Because microsaccades occur in a repetitive manner during fixation, and because these eye movements reset neurophysiological rhythms every time they occur, our results highlight a possible mechanism through which oculomotor events may aid periodic sampling of the visual environment for the benefit of perception, even when gaze is prevented from overtly shifting. One functional consequence of such periodic sampling could be the magnification of rhythmic fluctuations of peripheral covert visual attention. Copyright © 2015 Elsevier Ltd. All rights reserved.
Lee, Irene Eunyoung; Latchoumane, Charles-Francois V.; Jeong, Jaeseung
2017-01-01
Emotional visual music is a promising tool for the study of aesthetic perception in human psychology; however, the production of such stimuli and the mechanisms of auditory-visual emotion perception remain poorly understood. In Experiment 1, we suggested a literature-based, directive approach to emotional visual music design, and inspected the emotional meanings thereof using the self-rated psychometric and electroencephalographic (EEG) responses of the viewers. A two-dimensional (2D) approach to the assessment of emotion (the valence-arousal plane) with frontal alpha power asymmetry EEG (as a proposed index of valence) validated our visual music as an emotional stimulus. In Experiment 2, we used our synthetic stimuli to investigate possible underlying mechanisms of affective evaluation mechanisms in relation to audio and visual integration conditions between modalities (namely congruent, complementation, or incongruent combinations). In this experiment, we found that, when arousal information between auditory and visual modalities was contradictory [for example, active (+) on the audio channel but passive (−) on the video channel], the perceived emotion of cross-modal perception (visual music) followed the channel conveying the stronger arousal. Moreover, we found that an enhancement effect (heightened and compacted in subjects' emotional responses) in the aesthetic perception of visual music might occur when the two channels contained contradictory arousal information and positive congruency in valence and texture/control. To the best of our knowledge, this work is the first to propose a literature-based directive production of emotional visual music prototypes and the validations thereof for the study of cross-modally evoked aesthetic experiences in human subjects. PMID:28421007
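The frontal alpha power asymmetry index used above as a proposed index of valence is conventionally computed as the difference of log-transformed alpha power at homologous right and left frontal electrodes (e.g., F4 and F3). Because alpha power is inversely related to cortical activation, a positive score (less left alpha) is read as relatively greater left-frontal activation, associated with approach/positive valence. A minimal sketch of that convention; the electrode names and power values are illustrative assumptions, not details from the abstract:

```python
import math

def frontal_alpha_asymmetry(left_alpha_power, right_alpha_power):
    """ln(right) - ln(left) alpha power at homologous frontal sites.

    Alpha power is inversely related to cortical activation, so a
    positive index (less alpha on the left) indicates relatively
    greater left-frontal activation.
    """
    return math.log(right_alpha_power) - math.log(left_alpha_power)

# Hypothetical alpha-band power values (arbitrary units) at F3/F4
assert frontal_alpha_asymmetry(2.0, 4.0) > 0 > frontal_alpha_asymmetry(4.0, 2.0)
```

The log transform makes the index a ratio measure, so it is insensitive to overall (bilateral) changes in alpha power.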
Rosenblatt, Steven David; Crane, Benjamin Thomas
2015-01-01
A moving visual field can induce the feeling of self-motion or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear if such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing a fixed visual stimulus coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory motion controls were altered versions of the experimental image, which removed the illusory motion effect. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8 s viewing interval with the inertial stimulus occurring over the final 1 s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1 s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, the effect of each visual stimulus on the self-motion stimulus (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p<0.001), and arrows (p = 0.02). 
For the visual motion stimuli, inertial motion perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion perception driven by a minority of subjects. There was no significant effect of illusory motion on self-motion perception for either translation or rotation (p>0.1 for both). Thus, although a true moving visual field can induce self-motion, results of this study show that illusory motion does not.
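The velocity at which subjects are equally likely to report motion in either direction is standardly estimated by fitting a psychometric function to the proportion of, say, rightward reports at each inertial velocity; the bias (point of subjective equality, PSE) is where the fitted curve crosses 0.5. A hedged sketch using a logistic function and a brute-force grid fit; the data values and grid ranges are invented for illustration, not taken from the study:

```python
import math

def logistic(x, pse, slope):
    """Probability of a 'rightward' report at stimulus velocity x."""
    return 1.0 / (1.0 + math.exp(-(x - pse) / slope))

def fit_pse(velocities, prop_rightward):
    """Least-squares grid search for the point of subjective equality:
    the velocity at which either direction is reported equally often."""
    best_err, best_pse = float("inf"), 0.0
    for pse in (p / 100 for p in range(-300, 301)):   # -3.00 .. 3.00 cm/s
        for slope in (s / 10 for s in range(1, 51)):  # 0.1 .. 5.0
            err = sum((p - logistic(x, pse, slope)) ** 2
                      for x, p in zip(velocities, prop_rightward))
            if err < best_err:
                best_err, best_pse = err, pse
    return best_pse

# Invented example: responses consistent with a small rightward bias
velocities = [-2.0, -1.0, 0.0, 1.0, 2.0]
props = [0.05, 0.20, 0.45, 0.80, 0.95]
bias = fit_pse(velocities, props)
```

Comparing the fitted PSE with versus without a conditioning visual stimulus gives the perceptual shift the study reports; in practice a maximum-likelihood fit would replace the grid search.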
Heightened attentional capture by visual food stimuli in anorexia nervosa.
Neimeijer, Renate A M; Roefs, Anne; de Jong, Peter J
2017-08-01
The present study was designed to test the hypothesis that anorexia nervosa (AN) patients are relatively insensitive to the attentional capture of visual food stimuli. Attentional avoidance of food might help AN patients to prevent more elaborate processing of food stimuli and the subsequent generation of craving, which might enable AN patients to maintain their strict diet. Participants were 66 restrictive AN spectrum patients and 55 healthy controls. A single-target rapid serial visual presentation task was used with food and disorder-neutral cues as critical distracter stimuli and disorder-neutral pictures as target stimuli. AN spectrum patients showed diminished task performance when visual food cues were presented in close temporal proximity to the to-be-identified target. In contrast to our hypothesis, results indicate that food cues automatically capture AN spectrum patients' attention. One explanation could be that the enhanced attentional capture of food cues in AN is driven by the relatively high threat value of food items in AN. Implications and suggestions for future research are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Positive mood broadens visual attention to positive stimuli.
Wadlinger, Heather A; Isaacowitz, Derek M
2006-03-01
In an attempt to investigate the impact of positive emotions on visual attention within the context of Fredrickson's (1998) broaden-and-build model, eye tracking was used in two studies to measure visual attentional preferences of college students (n=58, n=26) to emotional pictures. Half of each sample experienced induced positive mood immediately before viewing slides of three similarly-valenced images, in varying central-peripheral arrays. Attentional breadth was determined by measuring the percentage viewing time to peripheral images as well as by the number of visual saccades participants made per slide. Consistent with Fredrickson's theory, the first study showed that individuals induced into positive mood fixated more on peripheral stimuli than did control participants; however, this only held true for highly-valenced positive stimuli. Participants under induced positive mood also made more frequent saccades for slides of neutral and positive valence. A second study showed that these effects were not simply due to differences in emotional arousal between stimuli. Selective attentional broadening to positive stimuli may act both to facilitate later building of resources as well as to maintain current positive affective states.
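Attentional breadth as operationalized above reduces to two per-slide quantities: the share of viewing time on the peripheral images and the number of saccades. A minimal sketch over hypothetical fixation records; the region labels, durations, and the saccades-as-transitions approximation are illustrative assumptions:

```python
def attentional_breadth(fixations):
    """fixations: sequence of (region, duration_ms) tuples, with region
    'central' or 'peripheral'. Returns (% of viewing time on peripheral
    images, saccade count approximated as fixation-to-fixation transitions)."""
    total = sum(d for _, d in fixations)
    peripheral = sum(d for r, d in fixations if r == "peripheral")
    pct = 100.0 * peripheral / total if total else 0.0
    return pct, max(len(fixations) - 1, 0)

# Invented fixation sequence for one slide
trial = [("central", 600), ("peripheral", 300),
         ("central", 400), ("peripheral", 700)]
pct_peripheral, saccades = attentional_breadth(trial)
```

Averaging these two measures per condition, and comparing induced-positive-mood participants against controls, mirrors the comparison reported in the abstract.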
MacDonald, Stuart W S; Hultsch, David F; Bunce, David
2006-07-01
Intraindividual performance variability, or inconsistency, has been shown to predict neurological status, physiological functioning, and age differences and declines in cognition. However, potential moderating factors of inconsistency are not well understood. The present investigation examined whether inconsistency in vigilance response latencies varied as a function of time-on-task and task demands by degrading visual stimuli in three separate conditions (10%, 20%, and 30%). Participants were 24 younger women aged 21 to 30 years (M = 24.04, SD = 2.51) and 23 older women aged 61 to 83 years (M = 68.70, SD = 6.38). A measure of within-person inconsistency, the intraindividual standard deviation (ISD), was computed for each individual across reaction time (RT) trials (3 blocks of 45 event trials) for each condition of the vigilance task. Greater inconsistency was observed with increasing stimulus degradation and age, even after controlling for group differences in mean RTs and physical condition. Further, older adults were more inconsistent than younger adults for similar degradation conditions, with ISD scores for younger adults in the 30% condition approximating estimates observed for older adults in the 10% condition. Finally, a measure of perceptual sensitivity shared increasing negative associations with ISDs, with this association further modulated as a function of age but to a lesser degree by degradation condition. Results support current hypotheses suggesting that inconsistency serves as a marker of neurological integrity and are discussed in terms of potential underlying mechanisms.
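The intraindividual standard deviation (ISD) described above is, at its core, the SD of one participant's reaction times across trials within a condition (the study additionally controls for group differences in mean RT, which this sketch omits; the RT values are invented):

```python
import statistics

def intraindividual_sd(reaction_times_ms):
    """Within-person inconsistency: the SD of one participant's RT trials."""
    return statistics.stdev(reaction_times_ms)

# Invented RT series (ms) under low vs. high stimulus degradation
low_degradation = [420, 430, 425, 435, 428, 422]
high_degradation = [380, 510, 400, 620, 390, 560]
assert intraindividual_sd(high_degradation) > intraindividual_sd(low_degradation)
```

Computing one ISD per participant per condition, then comparing them across age groups and degradation levels, reproduces the structure of the analysis the abstract reports.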
Representation and disconnection in imaginal neglect.
Rode, G; Cotton, F; Revol, P; Jacquin-Courtois, S; Rossetti, Y; Bartolomeo, P
2010-08-01
Patients with neglect fail to detect, orient, or respond to stimuli from a spatially confined region, usually on their left side. Often, the presence of perceptual input increases left omissions, while sensory deprivation decreases them, possibly by removing attention-catching right-sided stimuli (Bartolomeo, 2007). However, such an influence of visual deprivation on representational neglect was not observed in patients while they were imagining a map of France (Rode et al., 2007). Therefore, these patients with imaginal neglect either failed to generate the left side of mental images (Bisiach & Luzzatti, 1978), or suffered from a co-occurrence of deficits in automatic (bottom-up) and voluntary (top-down) orienting of attention. However, in Rode et al.'s experiment visual input was not directly relevant to the task; moreover, distraction from visual input might primarily manifest itself when representation guides somatomotor actions, beyond those involved in the generation and mental exploration of an internal map (Thomas, 1999). To explore these possibilities, we asked a patient with right hemisphere damage, R.D., to explore visual and imagined versions of a map of France in three conditions: (1) 'imagine the map in your mind' (imaginal); (2) 'describe a real map' (visual); and (3) 'list the names of French towns' (propositional). For the imaginal and visual conditions, verbal and manual pointing responses were collected; the task was also given before and after mental rotation of the map by 180 degrees. R.D. mentioned more towns on the right side of the map in the imaginal and visual conditions, but showed no representational deficit in the propositional condition. 
The rightward inner exploration bias in the imaginal and visual conditions was similar in magnitude and was not influenced by mental rotation or response type (verbal responses or manual pointing to locations on a map), thus suggesting that the representational deficit was robust and independent of perceptual input in R.D. Structural and diffusion MRI demonstrated damage to several white matter tracts in the right hemisphere and to the splenium of corpus callosum. A second right-brain damaged patient (P.P.), who showed signs of visual but not imaginal neglect, had damage to the same intra-hemispheric tracts, but the callosal connections were spared. Imaginal neglect in R.D. may result from fronto-parietal dysfunction impairing orientation towards left-sided items and posterior callosal disconnection preventing the symmetrical processing of spatial information from long-term memory. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Submillisecond unmasked subliminal visual stimuli evoke electrical brain responses.
Sperdin, Holger F; Spierer, Lucas; Becker, Robert; Michel, Christoph M; Landis, Theodor
2015-04-01
Subliminal perception is strongly associated with the processing of meaningful or emotional information and has mostly been studied using visual masking. In this study, we used high density 256-channel EEG coupled with a liquid crystal display (LCD) tachistoscope to characterize the spatio-temporal dynamics of the brain response to visual checkerboard stimuli (Experiment 1) or blank stimuli (Experiment 2) presented without a mask for 1 ms (visible), 500 µs (partially visible), and 250 µs (subliminal), by applying time-wise, assumption-free nonparametric randomization statistics on the strength and on the topography of the high-density scalp-recorded electric field. Stimulus visibility was assessed in a third separate behavioral experiment. Results revealed that unmasked checkerboards presented subliminally for 250 µs evoked weak but detectable visual evoked potential (VEP) responses. When the checkerboards were replaced by blank stimuli, there was no evidence for the presence of an evoked response anymore. Furthermore, the checkerboard VEPs were modulated topographically between 243 and 296 ms post-stimulus onset as a function of stimulus duration, indicative of the engagement of distinct configurations of active brain networks. A distributed electrical source analysis localized this modulation within the right superior parietal lobule near the precuneus. These results show the presence of a brain response to submillisecond unmasked subliminal visual stimuli independently of their emotional saliency or meaningfulness and open an avenue for new investigations of subliminal stimulation without using visual masking. © 2014 Wiley Periodicals, Inc.
Matsuzaki, Naoyuki; Schwarzlose, Rebecca F.; Nishida, Masaaki; Ofen, Noa; Asano, Eishi
2015-01-01
Behavioral studies demonstrate that a face presented in the upright orientation attracts attention more rapidly than an inverted face. Saccades toward an upright face take place in 100-140 ms following presentation. The present study using electrocorticography determined whether upright face-preferential neural activation, as reflected by augmentation of high-gamma activity at 80-150 Hz, involved the lower-order visual cortex within the first 100 ms post-stimulus presentation. Sampled lower-order visual areas were verified by the induction of phosphenes upon electrical stimulation. These areas resided in the lateral-occipital, lingual, and cuneus gyri along the calcarine sulcus, roughly corresponding to V1 and V2. Measurement of high-gamma augmentation during central (circular) and peripheral (annular) checkerboard reversal pattern stimulation indicated that central-field stimuli were processed by the more polar surface whereas peripheral-field stimuli by the more anterior medial surface. Upright face stimuli, compared to inverted ones, elicited up to 23% larger augmentation of high-gamma activity in the lower-order visual regions at 40-90 ms. Upright face-preferential high-gamma augmentation was more highly correlated with high-gamma augmentation for central than peripheral stimuli. Our observations are consistent with the hypothesis that lower-order visual regions, especially those for the central field, are involved in visual cues for rapid detection of upright face stimuli. PMID:25579446
Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Liu, Yongjian; Liang, Changhong; Sun, Pei
2015-02-01
Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). Furthermore, it has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we considered crossmodal integration in audiovisual facial perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual/auditory-only stimuli were created from these movie clips by removing the auditory/visual contents. The subjects needed to make a judgment about the gender/emotion category for each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition as functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using the decoding accuracy and the brain pattern-related reproducibility indices, obtained by a multivariate pattern analysis method from the fMRI data. In comparison to the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention might play a role of modulation in the audiovisual integration. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Rhythmic synchronization tapping to an audio–visual metronome in budgerigars
Hasegawa, Ai; Okanoya, Kazuo; Hasegawa, Toshikazu; Seki, Yoshimasa
2011-01-01
In all ages and countries, music and dance have constituted a central part in human culture and communication. Recently, vocal-learning animals such as parrots and elephants have been found to share rhythmic ability with humans. Thus, we investigated the rhythmic synchronization of budgerigars, a vocal-mimicking parrot species, under controlled conditions and a systematically designed experimental paradigm as a first step in understanding the evolution of musical entrainment. We trained eight budgerigars to perform isochronous tapping tasks in which they pecked a key to the rhythm of audio–visual metronome-like stimuli. The budgerigars showed evidence of entrainment to external stimuli over a wide range of tempos. They seemed to be inherently inclined to tap at fast tempos, which have a similar time scale to the rhythm of budgerigars' natural vocalizations. We suggest that vocal learning might have contributed to their performance, which resembled that of humans. PMID:22355637
The functional consequences of social distraction: Attention and memory for complex scenes.
Doherty, Brianna Ruth; Patai, Eva Zita; Duta, Mihaela; Nobre, Anna Christina; Scerif, Gaia
2017-01-01
Cognitive scientists have long proposed that social stimuli attract visual attention even when task irrelevant, but the consequences of this privileged status for memory are unknown. To address this, we combined computational approaches, eye-tracking methodology, and individual-differences measures. Participants searched for targets in scenes containing social or non-social distractors equated for low-level visual salience. Subsequent memory precision for target locations was tested. Individual differences in autistic traits and social anxiety were also measured. Eye-tracking revealed significantly more attentional capture to social compared to non-social distractors. Critically, memory precision for target locations was poorer for social scenes. This effect was moderated by social anxiety, with anxious individuals remembering target locations better under conditions of social distraction. These findings shed further light onto the privileged attentional status of social stimuli and its functional consequences on memory across individuals. Copyright © 2016. Published by Elsevier B.V.
The role of prestimulus activity in visual extinction
Urner, Maren; Sarri, Margarita; Grahn, Jessica; Manly, Tom; Rees, Geraint; Friston, Karl
2013-01-01
Patients with visual extinction following right-hemisphere damage sometimes see and sometimes miss stimuli in the left visual field, particularly when stimuli are presented simultaneously to both visual fields. Awareness of left visual field stimuli is associated with increased activity in bilateral parietal and frontal cortex. However, it is unknown why patients see or miss these stimuli. Previous neuroimaging studies in healthy adults show that prestimulus activity biases perceptual decisions, and biases in visual perception can be attributed to fluctuations in prestimulus activity in task relevant brain regions. Here, we used functional MRI to investigate whether prestimulus activity affected perception in the context of visual extinction following stroke. We measured prestimulus activity in stimulus-responsive cortical areas during an extinction paradigm in a patient with unilateral right parietal damage and visual extinction. This allowed us to compare prestimulus activity on physically identical bilateral trials that either did or did not lead to visual extinction. We found significantly increased activity prior to stimulus presentation in two areas that were also activated by visual stimulation: the left calcarine sulcus and right occipital inferior cortex. Using dynamic causal modelling (DCM) we found that both these differences in prestimulus activity and stimulus evoked responses could be explained by enhanced effective connectivity within and between visual areas, prior to stimulus presentation. Thus, we provide evidence for the idea that differences in ongoing neural activity in visually responsive areas prior to stimulus onset affect awareness in visual extinction, and that these differences are mediated by fluctuations in extrinsic and intrinsic connectivity. PMID:23680398
Self-organization of head-centered visual responses under ecological training conditions.
Mender, Bedeho M W; Stringer, Simon M
2014-01-01
We have studied the development of head-centered visual responses in an unsupervised self-organizing neural network model which was trained under ecological training conditions. Four independent spatio-temporal characteristics of the training stimuli were explored to investigate the feasibility of the self-organization under more ecological conditions. First, the number of head-centered visual training locations was varied over a broad range. Model performance improved as the number of training locations approached the continuous sampling of head-centered space. Second, the model depended on periods of time where visual targets remained stationary in head-centered space while it performed saccades around the scene, and the severity of this constraint was explored by introducing increasing levels of random eye movement and stimulus dynamics. Model performance was robust over a range of randomization. Third, the model was trained on visual scenes in which multiple simultaneous targets were always visible. Model self-organization was successful, despite the model never being exposed to a visual target in isolation. Fourth, the durations of fixations during training were made stochastic. With suitable changes to the learning rule, the model self-organized successfully. These findings suggest that the fundamental learning mechanism upon which the model rests is robust to the many forms of stimulus variability under ecological training conditions.
The role of visual and mechanosensory cues in structuring forward flight in Drosophila melanogaster.
Budick, Seth A; Reiser, Michael B; Dickinson, Michael H
2007-12-01
It has long been known that many flying insects use visual cues to orient with respect to the wind and to control their groundspeed in the face of varying wind conditions. Much less explored has been the role of mechanosensory cues in orienting insects relative to the ambient air. Here we show that Drosophila melanogaster, magnetically tethered so as to be able to rotate about their yaw axis, are able to detect and orient into a wind, as would be experienced during forward flight. Further, this behavior is velocity dependent and is likely subserved, at least in part, by the Johnston's organs, chordotonal organs in the antennae also involved in near-field sound detection. These wind-mediated responses may help to explain how flies are able to fly forward despite visual responses that might otherwise inhibit this behavior. Expanding visual stimuli, such as are encountered during forward flight, are the most potent aversive visual cues known for D. melanogaster flying in a tethered paradigm. Accordingly, tethered flies strongly orient towards a focus of contraction, a problematic situation for any animal attempting to fly forward. We show in this study that wind stimuli, transduced via mechanosensory means, can compensate for the aversion to visual expansion and thus may help to explain how these animals are indeed able to maintain forward flight.
Sensory organization for balance: specific deficits in Alzheimer's but not in Parkinson's disease.
Chong, R K; Horak, F B; Frank, J; Kaye, J
1999-03-01
The cause of frequent falling in patients with dementia of the Alzheimer type (AD) is not well understood. Distraction from incongruent visual stimuli may be an important factor as suggested by their poor performance in tests of shifting visual attention in other studies. The purpose of this study was to determine whether AD patients have difficulty maintaining upright balance under absent and/or incongruent visual and other sensory conditions compared to nondemented healthy elderly persons and individuals with Parkinson's disease (PD). Seventeen healthy older adults, 15 medicated PD subjects, and 11 AD subjects underwent the Sensory Organization Test protocol. The incidence of loss of balance ("falls"), and the peak-to-peak amplitude of body center of mass sway during stance in the six sensory conditions were used to infer the ability to use visual, somatosensory, and vestibular signals when they provided useful information for balance, and to suppress them when they were incongruent as an orientation reference. Vestibular reflex tests were conducted to ensure normal vestibular function in the subjects. AD subjects had normal vestibular function but had trouble using it in condition 6, where they had to concurrently suppress both incongruent visual and somatosensory inputs. All 11 AD subjects fell in the first trial of this condition. With repeated trials, only three AD subjects were able to stay balanced. AD subjects were able to keep their balance when only somatosensory input was incongruent. In this condition, all AD subjects were able to maintain balance whereas some falls occurred in the other groups. In all conditions, when AD subjects did not fall, they were able to control as large a sway as the healthy controls, except when standing with eyes closed in condition 2: AD subjects did not increase their sway whereas the other groups did. 
In the PD group, the total fall incidence was similar to the AD group, but the distribution was generalized across more sensory conditions. PD subjects were also able to improve with repeated trials in condition 6. Patients with dementia of the Alzheimer type have decreased ability to suppress incongruent visual stimuli when trying to maintain balance. However, they did not seem to be dependent on vision for balance because they did not increase their sway when vision was absent. Parkinsonian patients have a more general balance control problem in the sensory organization test, possibly related to difficulty changing set.
How does cognitive load influence speech perception? An encoding hypothesis.
Mitterer, Holger; Mattys, Sven L
2017-01-01
Two experiments investigated the conditions under which cognitive load exerts an effect on the acuity of speech perception. These experiments extend earlier research by using a different speech perception task (four-interval oddity task) and by implementing cognitive load through a task often thought to be modular, namely, face processing. In the cognitive-load conditions, participants were required to remember two faces presented before the speech stimuli. In Experiment 1, performance in the speech-perception task under cognitive load was not impaired in comparison to a no-load baseline condition. In Experiment 2, we modified the load condition minimally such that it required encoding of the two faces simultaneously with the speech stimuli. As a reference condition, we also used a visual search task that in earlier experiments had led to poorer speech perception. Both concurrent tasks led to decrements in the speech task. The results suggest that speech perception is affected even by loads thought to be processed modularly, and that, critically, encoding in working memory might be the locus of interference.
Neuronal Representation of Ultraviolet Visual Stimuli in Mouse Primary Visual Cortex
Tan, Zhongchao; Sun, Wenzhi; Chen, Tsai-Wen; Kim, Douglas; Ji, Na
2015-01-01
The mouse has become an important model for understanding the neural basis of visual perception. Although it has long been known that the mouse lens transmits ultraviolet (UV) light and mouse opsins have absorption in the UV band, little is known about how UV visual information is processed in the mouse brain. Using a custom UV stimulation system and in vivo calcium imaging, we characterized the feature selectivity of layer 2/3 neurons in mouse primary visual cortex (V1). In adult mice, a comparable percentage of the neuronal population responds to UV and visible stimuli, with similar pattern selectivity and receptive field properties. In young mice, orientation selectivity for UV stimuli increased steadily during development, whereas direction selectivity did not. Our results suggest that, by expanding the spectral window through which the mouse can acquire visual information, UV sensitivity provides an important component for mouse vision. PMID:26219604
Visual Attention in Flies-Dopamine in the Mushroom Bodies Mediates the After-Effect of Cueing.
Koenig, Sebastian; Wolf, Reinhard; Heisenberg, Martin
2016-01-01
Visual environments may simultaneously comprise stimuli of different significance. Often such stimuli require incompatible responses. Selective visual attention allows an animal to respond exclusively to the stimuli at a certain location in the visual field. In the process of establishing its focus of attention the animal can be influenced by external cues. Here we characterize the behavioral properties and neural mechanism of cueing in the fly Drosophila melanogaster. A cue can be attractive, repulsive or ineffective depending upon, for example, its visual properties and location in the visual field. Dopamine signaling in the brain is required to maintain the effect of cueing once the cue has disappeared. Raising or lowering dopamine at the synapse abolishes this after-effect. Specifically, dopamine is necessary and sufficient in the αβ-lobes of the mushroom bodies. Evidence is provided for an involvement of the αβ-posterior Kenyon cells.
The Multisensory Attentional Consequences of Tool Use: A Functional Magnetic Resonance Imaging Study
Holmes, Nicholas P.; Spence, Charles; Hansen, Peter C.; Mackay, Clare E.; Calvert, Gemma A.
2008-01-01
Background: Tool use in humans requires that multisensory information is integrated across different locations, from objects seen to be distant from the hand, but felt indirectly at the hand via the tool. We tested the hypothesis that using a simple tool to perceive vibrotactile stimuli results in the enhanced processing of visual stimuli presented at the distal, functional part of the tool. Such a finding would be consistent with a shift of spatial attention to the location where the tool is used. Methodology/Principal Findings: We tested this hypothesis by scanning healthy human participants' brains using functional magnetic resonance imaging, while they used a simple tool to discriminate between target vibrations, accompanied by congruent or incongruent visual distractors, on the same or opposite side to the tool. The attentional hypothesis was supported: BOLD response in occipital cortex, particularly in the right hemisphere lingual gyrus, varied significantly as a function of tool position, increasing contralaterally, and decreasing ipsilaterally to the tool. Furthermore, these modulations occurred despite the fact that participants were repeatedly instructed to ignore the visual stimuli, to respond only to the vibrotactile stimuli, and to maintain visual fixation centrally. In addition, the magnitude of multisensory (visual-vibrotactile) interactions in participants' behavioural responses significantly predicted the BOLD response in occipital cortical areas that were also modulated as a function of both visual stimulus position and tool position. Conclusions/Significance: These results show that using a simple tool to locate and to perceive vibrotactile stimuli is accompanied by a shift of spatial attention to the location where the functional part of the tool is used, resulting in enhanced processing of visual stimuli at that location, and decreased processing at other locations. This was most clearly observed in the right hemisphere lingual gyrus.
Such modulations of visual processing may reflect the functional importance of visuospatial information during human tool use. PMID:18958150
Simultaneous face and voice processing in schizophrenia.
Liu, Taosheng; Pinheiro, Ana P; Zhao, Zhongxin; Nestor, Paul G; McCarley, Robert W; Niznikiewicz, Margaret
2016-05-15
While several studies have consistently demonstrated abnormalities in the unisensory processing of face and voice in schizophrenia (SZ), the extent of abnormalities in the simultaneous processing of both types of information remains unclear. To address this issue, we used event-related potentials (ERP) methodology to probe the multisensory integration of face and non-semantic sounds in schizophrenia. EEG was recorded from 18 schizophrenia patients and 19 healthy control (HC) subjects in three conditions: neutral faces (visual condition-VIS); neutral non-semantic sounds (auditory condition-AUD); neutral faces presented simultaneously with neutral non-semantic sounds (audiovisual condition-AUDVIS). When compared with HC, the schizophrenia group showed less negative N170 to both face and face-voice stimuli; later P270 peak latency in the multimodal condition of face-voice relative to unimodal condition of face (the reverse was true in HC); reduced P400 amplitude and earlier P400 peak latency in the face but not in the voice-face condition. Thus, the analysis of ERP components suggests that deficits in the encoding of facial information extend to multimodal face-voice stimuli and that delays exist in feature extraction from multimodal face-voice stimuli in schizophrenia. In contrast, categorization processes seem to benefit from the presentation of simultaneous face-voice information. Timepoint by timepoint tests of multimodal integration did not suggest impairment in the initial stages of processing in schizophrenia. Published by Elsevier B.V.
Retinotopy and attention to the face and house images in the human visual cortex.
Wang, Bin; Yan, Tianyi; Ohno, Seiichiro; Kanazawa, Susumu; Wu, Jinglong
2016-06-01
Attentional modulation of the neural activities in human visual areas has been well demonstrated. However, the retinotopic activities that are driven by face and house images and by attention to face and house images remain unknown. In the present study, we used images of faces and houses to estimate the retinotopic activities that were driven by both the images and attention to the images, driven by attention to the images, and driven by the images alone. Generally, our results show that both face and house images produced similar retinotopic activities in visual areas, which were only observed in the attention + stimulus and the attention conditions, but not in the stimulus condition. The fusiform face area (FFA) responded to faces that were presented on the horizontal meridian, whereas the parahippocampal place area (PPA) rarely responded to houses at any visual field location. We further analyzed the amplitudes of the neural responses to the target wedge. In V1, V2, V3, V3A, lateral occipital area 1 (LO-1), and hV4, the neural responses to the attended target wedge were significantly greater than those to the unattended target wedge. However, in LO-2, ventral occipital areas 1 and 2 (VO-1 and VO-2), FFA, and PPA, the differences were not significant. We proposed that these areas likely have large fields of attentional modulation for face and house images and exhibit responses to both the target wedge and the background stimuli. In addition, we proposed that the absence of retinotopic activity in the stimulus condition might imply no perceived difference between the target wedge and the background stimuli.
Ueno, Daisuke; Masumoto, Kouhei; Sutani, Kouichi; Iwaki, Sunao
2015-04-15
This study used magnetoencephalography (MEG) to examine the latency of modality-specific reactivation in the visual and auditory cortices during a recognition task to determine the effects of reactivation on episodic memory retrieval. Nine right-handed healthy young adults participated in the experiment. The experiment consisted of a word-encoding phase and two recognition phases. Three encoding conditions were included: encoding words alone (word-only) and encoding words presented with either related pictures (visual) or related sounds (auditory). The recognition task was conducted in the MEG scanner 15 min after the completion of the encoding phase. After the recognition test, a source-recognition task was given, in which participants were required to indicate whether each recognition word had not been presented or, if it had, with which type of information it had been presented during the encoding phase. Word recognition in the auditory condition was higher than that in the word-only condition. Confidence-of-recognition scores (d') and the source-recognition test showed superior performance in both the visual and the auditory conditions compared with the word-only condition. An equivalent current dipole analysis of MEG data indicated higher equivalent current dipole amplitudes in the right fusiform gyrus during the visual condition and in the superior temporal auditory cortices during the auditory condition, both 450-550 ms after onset of the recognition stimuli. Results suggest that reactivation of visual and auditory brain regions during recognition binds language with modality-specific information and that reactivation enhances confidence in one's recognition performance.
Audiovisual speech integration in the superior temporal region is dysfunctional in dyslexia.
Ye, Zheng; Rüsseler, Jascha; Gerth, Ivonne; Münte, Thomas F
2017-07-25
Dyslexia is an impairment of reading and spelling that affects both children and adults even after many years of schooling. Dyslexic readers have deficits in the integration of auditory and visual inputs but the neural mechanisms of the deficits are still unclear. This fMRI study examined the neural processing of auditorily presented German numbers 0-9 and videos of lip movements of a German native speaker voicing numbers 0-9 in unimodal (auditory or visual) and bimodal (always congruent) conditions in dyslexic readers and their matched fluent readers. We confirmed results of previous studies that the superior temporal gyrus/sulcus plays a critical role in audiovisual speech integration: fluent readers showed greater superior temporal activations for combined audiovisual stimuli than auditory-/visual-only stimuli. Importantly, such an enhancement effect was absent in dyslexic readers. Moreover, the auditory network (bilateral superior temporal regions plus medial PFC) was dynamically modulated during audiovisual integration in fluent, but not in dyslexic readers. These results suggest that superior temporal dysfunction may underlie poor audiovisual speech integration in readers with dyslexia. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
Blur adaptation: contrast sensitivity changes and stimulus extent.
Venkataraman, Abinaya Priya; Winter, Simon; Unsbo, Peter; Lundström, Linda
2015-05-01
A prolonged exposure to foveal defocus is well known to affect visual functions in the fovea. However, the effects of peripheral blur adaptation on foveal vision, or vice versa, are still unclear. In this study, we therefore examined the changes in the contrast sensitivity function from baseline, following blur adaptation to small as well as laterally extended stimuli in four subjects. The small-field stimulus (7.5° visual field) was a 30-min video of forest scenery projected on a screen, and the large-field stimulus consisted of seven tiles of the 7.5° stimulus stacked horizontally. Both stimuli were used for adaptation with optical blur (+2.00 D trial lens) as well as for clear control conditions. After small-field blur adaptation, foveal contrast sensitivity improved in the mid spatial frequency region. However, these changes neither spread to the periphery nor occurred for the large-field blur adaptation. To conclude, visual performance after adaptation is dependent on the lateral extent of the adaptation stimulus. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Working Memory Enhances Visual Perception: Evidence from Signal Detection Analysis
ERIC Educational Resources Information Center
Soto, David; Wriglesworth, Alice; Bahrami-Balani, Alex; Humphreys, Glyn W.
2010-01-01
We show that perceptual sensitivity to visual stimuli can be modulated by matches between the contents of working memory (WM) and stimuli in the visual field. Observers were presented with an object cue (to hold in WM or to merely attend) and subsequently had to identify a brief target presented within a colored shape. The cue could be…
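The signal detection analysis referred to in this study's title separates perceptual sensitivity (d') from response bias. A minimal Python sketch of the standard d' computation, using the stdlib normal distribution (the log-linear correction and the example counts below are illustrative assumptions, not the study's data):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each count) avoids infinite
    z-scores when a rate would otherwise be exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse cumulative standard normal
    return z(hit_rate) - z(fa_rate)
```

A d' of 0 means targets and non-targets are indistinguishable; an increase in d' with valid WM cues, independent of criterion shifts, is what licenses the claim that WM modulates perceptual sensitivity rather than mere response bias.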
Explaining the Colavita visual dominance effect.
Spence, Charles
2009-01-01
The last couple of years have seen a resurgence of interest in the Colavita visual dominance effect. In the basic experimental paradigm, a random series of auditory, visual, and audiovisual stimuli are presented to participants who are instructed to make one response whenever they see a visual target and another response whenever they hear an auditory target. Many studies have now shown that participants sometimes fail to respond to auditory targets when they are presented at the same time as visual targets (i.e., on the bimodal trials), despite the fact that they have no problems in responding to the auditory and visual stimuli when they are presented individually. The existence of the Colavita visual dominance effect provides an intriguing contrast with the results of the many other recent studies showing the superiority of multisensory (over unisensory) information processing in humans. Various accounts have been put forward over the years in order to try and explain the effect, including the suggestion that it reflects nothing more than an underlying bias to attend to the visual modality. Here, the empirical literature on the Colavita visual dominance effect is reviewed and some of the key factors modulating the effect highlighted. The available research has now provided evidence against all previous accounts of the Colavita effect. A novel explanation of the Colavita effect is therefore put forward here, one that is based on the latest findings highlighting the asymmetrical effect that auditory and visual stimuli exert on people's responses to stimuli presented in the other modality.
Dynamics of normalization underlying masking in human visual cortex.
Tsai, Jeffrey J; Wade, Alex R; Norcia, Anthony M
2012-02-22
Stimulus visibility can be reduced by other stimuli that overlap the same region of visual space, a process known as masking. Here we studied the neural mechanisms of masking in humans using source-imaged steady-state visual evoked potentials and frequency-domain analysis over a wide range of relative stimulus strengths of test and mask stimuli. Test and mask stimuli were tagged with distinct temporal frequencies, and we quantified spectral response components associated with the individual stimuli (self terms) and responses due to interaction between stimuli (intermodulation terms). In early visual cortex, masking alters the self terms in a manner consistent with a reduction of input contrast. We also identify a novel signature of masking: a robust intermodulation term that peaks when the test and mask stimuli have equal contrast and disappears when they are widely different. We fit all of our data simultaneously with a family of divisive gain control models that differed only in their dynamics. Models with either very short or very long temporal integration constants for the gain pool performed worse than a model with an integration time of ∼30 ms. Finally, the absolute magnitudes of the response were controlled by the ratio of the stimulus contrasts, not their absolute values. This contrast-contrast invariance suggests that many neurons in early visual cortex code relative rather than absolute contrast. Together, these results provide a more complete description of masking within the normalization framework of contrast gain control and suggest that contrast normalization accomplishes multiple functional goals.
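The divisive gain control (normalization) framework invoked here models the response to a test stimulus as its excitatory drive divided by a pooled term that includes the mask. A minimal static sketch in Python (the exponent and semisaturation constant are illustrative assumptions; the models fitted in the study also include the ∼30 ms gain-pool dynamics, omitted here):

```python
def normalization_response(c_test, c_mask, r_max=1.0, n=2.0, sigma=0.1):
    """Divisive normalization:
    R = r_max * c_test**n / (sigma**n + c_test**n + c_mask**n)

    The mask contrast enters only the denominator (the gain pool), so a
    strong mask suppresses the test response without driving it directly.
    """
    return r_max * c_test**n / (sigma**n + c_test**n + c_mask**n)
```

In this scheme a mask reduces the test's self term much as lowering the test's input contrast would, consistent with the self-term behaviour reported above.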
Distributed Fading Memory for Stimulus Properties in the Primary Visual Cortex
Singer, Wolf; Maass, Wolfgang
2009-01-01
It is currently not known how distributed neuronal responses in early visual areas carry stimulus-related information. We made multielectrode recordings from cat primary visual cortex and applied methods from machine learning in order to analyze the temporal evolution of stimulus-related information in the spiking activity of large ensembles of around 100 neurons. We used sequences of up to three different visual stimuli (letters of the alphabet) presented for 100 ms and with intervals of 100 ms or larger. Most of the information about visual stimuli extractable by sophisticated methods of machine learning, i.e., support vector machines with nonlinear kernel functions, was also extractable by simple linear classification such as can be achieved by individual neurons. New stimuli did not erase information about previous stimuli. The responses to the most recent stimulus contained about equal amounts of information about both this and the preceding stimulus. This information was encoded both in the discharge rates (response amplitudes) of the ensemble of neurons and, when using short time constants for integration (e.g., 20 ms), in the precise timing of individual spikes (≤∼20 ms), and persisted for several hundred milliseconds beyond the offset of stimuli. The results indicate that the network from which we recorded is endowed with fading memory and is capable of performing online computations utilizing information about temporally sequential stimuli. This result challenges models assuming frame-by-frame analyses of sequential inputs. PMID:20027205
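The simple linear classification mentioned above can be sketched as a perceptron-style readout of a population spike-count vector. The training loop and synthetic data below are illustrative assumptions; the study used support vector machines on recorded ensembles, not this toy decoder:

```python
import random

def train_linear_decoder(trials, labels, epochs=50, lr=0.1):
    """Train a perceptron (w, b) so that sign(w . x + b) separates two stimuli.

    Each trial is a vector of spike counts, one entry per neuron;
    each label is +1 or -1 for the two stimulus classes.
    """
    n = len(trials[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(trials, labels):
            # Update weights only on misclassified trials.
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def decode(w, b, x):
    """Classify one trial's spike-count vector as +1 or -1."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

On linearly separable population responses the perceptron converges to a separating hyperplane, mimicking what a single downstream neuron with suitable synaptic weights could compute.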
Black–white asymmetry in visual perception
Lu, Zhong-Lin; Sperling, George
2012-01-01
With eleven different types of stimuli that exercise a wide gamut of spatial and temporal visual processes, negative perturbations from mean luminance are found to be typically 25% more effective visually than positive perturbations of the same magnitude (range 8–67%). In Experiment 12, the magnitude of the black–white asymmetry is shown to be a saturating function of stimulus contrast. Experiment 13 shows black–white asymmetry primarily involves a nonlinearity in the visual representation of decrements. Black–white asymmetry in early visual processing produces even-harmonic distortion frequencies in all ordinary stimuli and in illusions such as the perceived asymmetry of optically perfect sine wave gratings. In stimuli intended to stimulate exclusively second-order processing in which motion or shape are defined not by luminance differences but by differences in texture contrast, the black–white asymmetry typically generates artifactual luminance (first-order) motion and shape components. Because black–white asymmetry pervades psychophysical and neurophysiological procedures that utilize spatial or temporal variations of luminance, it frequently needs to be considered in the design and evaluation of experiments that involve visual stimuli. Simple procedures to compensate for black–white asymmetry are proposed. PMID:22984221
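One simple compensation procedure of the kind proposed above would attenuate luminance decrements relative to increments before display. In the Python sketch below, the 25% asymmetry factor comes from the abstract, but the pixel-level pipeline itself is an illustrative assumption:

```python
def compensate_black_white(pixels, mean_luminance, asymmetry=1.25):
    """Rescale negative perturbations from mean luminance by 1/asymmetry,
    so that decrements and increments become equally effective visually.

    `pixels` holds luminance values; increments are left untouched.
    """
    out = []
    for p in pixels:
        d = p - mean_luminance
        if d < 0:
            d /= asymmetry  # shrink decrements by the asymmetry factor
        out.append(mean_luminance + d)
    return out
```

Applied to a nominally symmetric sine wave grating, such a correction would make the bright and dark half-cycles perceptually balanced, removing the even-harmonic distortion the asymmetry otherwise introduces.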
Influence of auditory and audiovisual stimuli on the right-left prevalence effect.
Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim
2014-01-01
When auditory stimuli are used in two-dimensional spatial compatibility tasks, where the stimulus and response configurations vary along the horizontal and vertical dimensions simultaneously, a right-left prevalence effect occurs in which horizontal compatibility dominates over vertical compatibility. The right-left prevalence effects obtained with auditory stimuli are typically larger than that obtained with visual stimuli even though less attention should be demanded from the horizontal dimension in auditory processing. In the present study, we examined whether auditory or visual dominance occurs when the two-dimensional stimuli are audiovisual, as well as whether there will be cross-modal facilitation of response selection for the horizontal and vertical dimensions. We also examined whether there is an additional benefit of adding a pitch dimension to the auditory stimulus to facilitate vertical coding through use of the spatial-musical association of response codes (SMARC) effect, where pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than visual stimuli. Neutral, non-pitch coded, audiovisual stimuli did not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension was not in terms of influencing response selection on a trial-to-trial basis, but in terms of altering the salience of the task environment. Taken together, these findings indicate that in the absence of salient vertical cues, auditory and audiovisual stimuli tend to be coded along the horizontal dimension and vision tends to dominate audition in this two-dimensional spatial stimulus-response task.
Image jitter enhances visual performance when spatial resolution is impaired.
Watson, Lynne M; Strang, Niall C; Scobie, Fraser; Love, Gordon D; Seidel, Dirk; Manahilov, Velitchko
2012-09-06
Visibility of low-spatial frequency stimuli improves when their contrast is modulated at 5 to 10 Hz compared with stationary stimuli. Therefore, temporal modulations of visual objects could enhance the performance of low vision patients who primarily perceive images of low-spatial frequency content. We investigated the effect of retinal-image jitter on word recognition speed and facial emotion recognition in subjects with central visual impairment. Word recognition speed and accuracy of facial emotion discrimination were measured in volunteers with AMD under stationary and jittering conditions. Computer-driven and optoelectronic approaches were used to induce retinal-image jitter with durations of 100 or 166 ms and amplitudes within the range of 0.5 to 2.6° visual angle. Word recognition speed was also measured for participants with simulated (Bangerter filters) visual impairment. Text jittering markedly enhanced word recognition speed for people with severe visual loss (101 ± 25%), while for those with moderate visual impairment, this effect was weaker (19 ± 9%). The ability of low vision patients to discriminate the facial emotions of jittering images improved by a factor of 2. A prototype of optoelectronic jitter goggles produced similar improvement in facial emotion discrimination. Word recognition speed in participants with simulated visual impairment was enhanced for interjitter intervals over 100 ms and reduced for shorter intervals. Results suggest that retinal-image jitter with optimal frequency and amplitude is an effective strategy for enhancing visual information processing in the absence of spatial detail. These findings will enable the development of novel tools to improve the quality of life of low vision patients.
NASA Astrophysics Data System (ADS)
Ramirez, Joshua; Mann, Virginia
2005-08-01
Both dyslexics and auditory neuropathy (AN) subjects show inferior consonant-vowel (CV) perception in noise, relative to controls. To better understand these impairments, natural acoustic speech stimuli that were masked in speech-shaped noise at various intensities were presented to dyslexic, AN, and control subjects either in isolation or accompanied by visual articulatory cues. AN subjects were expected to benefit from the pairing of visual articulatory cues and auditory CV stimuli, provided that their speech perception impairment reflects a relatively peripheral auditory disorder. Assuming that dyslexia reflects a general impairment of speech processing rather than a disorder of audition, dyslexics were not expected to similarly benefit from an introduction of visual articulatory cues. The results revealed an increased effect of noise masking on the perception of isolated acoustic stimuli by both dyslexic and AN subjects. More importantly, dyslexics showed less effective use of visual articulatory cues in identifying masked speech stimuli and lower visual baseline performance relative to AN subjects and controls. Last, a significant positive correlation was found between reading ability and the ameliorating effect of visual articulatory cues on speech perception in noise. These results suggest that some reading impairments may stem from a central deficit of speech processing.
Chuen, Lorraine; Schutz, Michael
2016-07-01
An observer's inference that multimodal signals originate from a common underlying source facilitates cross-modal binding. This 'unity assumption' causes asynchronous auditory and visual speech streams to seem simultaneous (Vatakis & Spence, Perception & Psychophysics, 69(5), 744-756, 2007). Subsequent tests of non-speech stimuli such as musical and impact events found no evidence for the unity assumption, suggesting the effect is speech-specific (Vatakis & Spence, Acta Psychologica, 127(1), 12-23, 2008). However, the role of amplitude envelope (the changes in energy of a sound over time) was not previously appreciated within this paradigm. Here, we explore whether previous findings suggesting speech-specificity of the unity assumption were confounded by similarities in the amplitude envelopes of the contrasted auditory stimuli. Experiment 1 used natural events with clearly differentiated envelopes: single notes played on either a cello (bowing motion) or marimba (striking motion). Participants performed an unspeeded temporal order judgment task: they viewed audio-visually matched (e.g., marimba audio with marimba video) and mismatched (e.g., cello audio with marimba video) versions of stimuli at various stimulus onset asynchronies and were required to indicate which modality was presented first. As predicted, participants were less sensitive to temporal order in matched conditions, demonstrating that the unity assumption can facilitate the perception of synchrony outside of speech stimuli. Results from Experiments 2 and 3 revealed that when spectral information was removed from the original auditory stimuli, amplitude envelope alone could not facilitate the influence of audiovisual unity. We propose that both amplitude envelope and spectral acoustic cues affect the percept of audiovisual unity, working in concert to help an observer determine when to integrate across modalities.
Montijn, Jorrit S; Goltstein, Pieter M; Pennartz, Cyriel MA
2015-01-01
Previous studies have demonstrated the importance of the primary sensory cortex for the detection, discrimination, and awareness of visual stimuli, but it is unknown how neuronal populations in this area process detected and undetected stimuli differently. Critical differences may reside in the mean strength of responses to visual stimuli, as reflected in bulk signals detectable in functional magnetic resonance imaging, electro-encephalogram, or magnetoencephalography studies, or may be more subtly composed of differentiated activity of individual sensory neurons. Quantifying single-cell Ca2+ responses to visual stimuli recorded with in vivo two-photon imaging, we found that visual detection correlates more strongly with population response heterogeneity than with overall response strength. Moreover, neuronal populations showed consistencies in activation patterns across temporally spaced trials in association with hit responses, but not during nondetections. Contrary to models relying on temporally stable networks or bulk signaling, these results suggest that detection depends on transient differentiation in neuronal activity within cortical populations. DOI: http://dx.doi.org/10.7554/eLife.10163.001 PMID:26646184
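The distinction this abstract draws, between mean population response strength (the bulk-signal analogue) and response heterogeneity across neurons, can be illustrated with a toy calculation. This is a hypothetical sketch, not the authors' actual metric; the response matrix and the use of the across-neuron standard deviation as the heterogeneity measure are illustrative assumptions.

```python
import numpy as np

def population_measures(responses):
    """Contrast two trial-wise population measures.

    responses: (n_trials, n_neurons) array of single-cell response
    amplitudes (e.g., dF/F from two-photon Ca2+ imaging).
    Returns the per-trial mean strength (what a bulk signal would see)
    and the per-trial heterogeneity (spread across neurons).
    """
    responses = np.asarray(responses, dtype=float)
    mean_strength = responses.mean(axis=1)   # bulk-signal analogue
    heterogeneity = responses.std(axis=1)    # differentiation across cells
    return mean_strength, heterogeneity

# Two toy trials with identical mean response but different heterogeneity:
# a bulk measure cannot tell them apart, a heterogeneity measure can.
trials = np.array([
    [1.0, 1.0, 1.0, 1.0],   # uniform activation
    [2.0, 0.0, 2.0, 0.0],   # differentiated activation, same mean
])
strength, hetero = population_measures(trials)
```

Both toy trials have the same mean strength, but only the second has nonzero heterogeneity, which is the kind of difference the study links to detection.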
Orienting Attention in Audition and between Audition and Vision: Young and Elderly Subjects.
ERIC Educational Resources Information Center
Robin, Donald A.; Rizzo, Matthew
1992-01-01
Thirty young and 10 elderly adults were assessed on orienting auditory attention and in a mixed-modal condition in which stimuli were either auditory or visual. Findings suggest that the mechanisms involved in orienting attention operate in audition and that individuals may allocate their processing resources among multiple sensory pools. (Author/JDD)
Galvez-Pol, A; Calvo-Merino, B; Capilla, A; Forster, B
2018-07-01
Working memory (WM) supports temporary maintenance of task-relevant information. This process is associated with persistent activity in the sensory cortex processing the information (e.g., visual stimuli activate visual cortex). However, we argue here that more multifaceted stimuli moderate this sensory-locked activity and recruit distinctive cortices. Specifically, perception of bodies recruits somatosensory cortex (SCx) beyond early visual areas (suggesting embodiment processes). Here we explore persistent activation in processing areas beyond the sensory cortex initially relevant to the modality of the stimuli. Using visual and somatosensory evoked-potentials in a visual WM task, we isolated different levels of visual and somatosensory involvement during encoding of body and non-body-related images. Persistent activity increased in SCx only when maintaining body images in WM, whereas visual/posterior regions' activity increased significantly when maintaining non-body images. Our results bridge WM and embodiment frameworks, supporting a dynamic WM process where the nature of the information summons specific processing resources. Copyright © 2018 Elsevier Inc. All rights reserved.
Butts, Daniel A; Weng, Chong; Jin, Jianzhong; Alonso, Jose-Manuel; Paninski, Liam
2011-08-03
Visual neurons can respond with extremely precise temporal patterning to visual stimuli that change on much slower time scales. Here, we investigate how the precise timing of cat thalamic spike trains, which can have timing as precise as 1 ms, is related to the stimulus, in the context of both artificial noise and natural visual stimuli. Using a nonlinear modeling framework applied to extracellular data, we demonstrate that the precise timing of thalamic spike trains can be explained by the interplay between an excitatory input and a delayed suppressive input that resembles inhibition, such that neuronal responses only occur in brief windows where excitation exceeds suppression. The resulting description of thalamic computation resembles earlier models of contrast adaptation, suggesting a more general role for mechanisms of contrast adaptation in visual processing. Thus, we describe a more complex computation underlying thalamic responses to artificial and natural stimuli that has implications for understanding how visual information is represented in the early stages of visual processing.
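The computation described here, responses occurring only in brief windows where excitation exceeds delayed suppression, can be sketched as a rectified difference of two inputs. This is an illustrative toy model: the identical pathway gains and the two-sample delay are invented parameters, not values fitted in the study.

```python
import numpy as np

def thalamic_response(stimulus, delay=2):
    """Toy version of the excitation/delayed-suppression interplay.

    Both pathways simply copy the stimulus here; suppression arrives
    `delay` samples later. Output is the rectified difference, so
    responses occur only in the brief window where excitation exceeds
    suppression (e.g., right at stimulus onset).
    """
    stimulus = np.asarray(stimulus, dtype=float)
    excitation = stimulus
    suppression = np.concatenate([np.zeros(delay), stimulus[:-delay]])
    return np.maximum(excitation - suppression, 0.0)

# A sustained step stimulus yields only a brief onset transient,
# whose width is set by the suppression delay.
step = np.ones(20)
response = thalamic_response(step, delay=2)
```

With a sustained step input, the response is nonzero only for the first `delay` samples, a crude analogue of the temporally precise firing windows the study describes.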
Dong, Guangheng; Yang, Lizhu; Shen, Yue
2009-08-21
The present study investigated the course of visual searching to a target in a fixed location, using an emotional flanker task. Event-related potentials (ERPs) were recorded while participants performed the task. Emotional facial expressions were used as emotion-eliciting triggers. The course of visual searching was analyzed through the emotional effects arising from these emotion-eliciting stimuli. The flanker stimuli showed effects at about 150-250 ms after stimulus onset, while the target stimuli showed effects at about 300-400 ms. The visual search sequence in an emotional flanker task moved from a whole overview to a specific target, even if the target always appeared at a known location. The processing sequence was "parallel" in this task. The results supported the feature integration theory of visual search.
NASA Astrophysics Data System (ADS)
Pardo, P. J.; Pérez, A. L.; Suero, M. I.
2004-01-01
An old fluorescence spectrophotometer was recycled to make a three-channel colorimeter. The various modifications involved in its design and implementation are described. An optical system was added that allows the fusion of two visual stimuli coming from the two monochromators of the spectrofluorimeter. Each of these stimuli has a wavelength and bandwidth control, and a third visual stimulus may be taken from a monochromator, a cathode ray tube, a thin film transistor screen, or any other light source. This freedom in the choice of source of the third chromatic channel, together with the characteristics of the visual stimuli from the spectrofluorimeter, gives this design great versatility in its application to novel visual experiments on color vision.
Verhoef, Bram-Ernst; Bohon, Kaitlin S.
2015-01-01
Binocular disparity is a powerful depth cue for object perception. The computations for object vision culminate in inferior temporal cortex (IT), but the functional organization for disparity in IT is unknown. Here we addressed this question by measuring fMRI responses in alert monkeys to stimuli that appeared in front of (near), behind (far), or at the fixation plane. We discovered three regions that showed preferential responses for near and far stimuli, relative to zero-disparity stimuli at the fixation plane. These “near/far” disparity-biased regions were located within dorsal IT, as predicted by microelectrode studies, and on the posterior inferotemporal gyrus. In a second analysis, we instead compared responses to near stimuli with responses to far stimuli and discovered a separate network of “near” disparity-biased regions that extended along the crest of the superior temporal sulcus. We also measured in the same animals fMRI responses to faces, scenes, color, and checkerboard annuli at different visual field eccentricities. Disparity-biased regions defined in either analysis did not show a color bias, suggesting that disparity and color contribute to different computations within IT. Scene-biased regions responded preferentially to near and far stimuli (compared with stimuli without disparity) and had a peripheral visual field bias, whereas face patches had a marked near bias and a central visual field bias. These results support the idea that IT is organized by a coarse eccentricity map, and show that disparity likely contributes to computations associated with both central (face processing) and peripheral (scene processing) visual field biases, but likely does not contribute much to computations within IT that are implicated in processing color. PMID:25926470
The role of early visual cortex in visual short-term memory and visual attention.
Offen, Shani; Schluppeck, Denis; Heeger, David J
2009-06-01
We measured cortical activity with functional magnetic resonance imaging to probe the involvement of early visual cortex in visual short-term memory and visual attention. In four experimental tasks, human subjects viewed two visual stimuli separated by a variable delay period. The tasks placed differential demands on short-term memory and attention, but the stimuli were visually identical until after the delay period. Early visual cortex exhibited sustained responses throughout the delay when subjects performed attention-demanding tasks, but delay-period activity was not distinguishable from zero when subjects performed a task that required short-term memory. This dissociation reveals different computational mechanisms underlying the two processes.
Binocular eye movement control and motion perception: what is being tracked?
van der Steen, Johannes; Dits, Joyce
2012-10-19
We investigated under what conditions humans can make independent slow phase eye movements. The ability to make independent movements of the two eyes generally is attributed to a few specialized lateral-eyed animal species, for example chameleons. In our study, we showed that humans also can move the eyes in different directions: to maintain binocular retinal correspondence, independent slow phase movements of each eye are produced. We used the scleral search coil method to measure binocular eye movements in response to dichoptically viewed visual stimuli oscillating in orthogonal directions. Correlated stimuli led to orthogonal slow eye movements, while the binocularly perceived motion was the vector sum of the motion presented to each eye. The influence of binocular fusion on the independence of the movements of the two eyes was investigated with anti-correlated stimuli. Anti-correlated dichoptic stimuli were perceived as an oblique oscillatory motion and elicited a conjugate oblique motion of the eyes. We propose that the ability to make independent slow phase eye movements in humans serves to maintain binocular retinal correspondence. Eye-of-origin and binocular information are both used during the processing of binocular visual information, and it is decided at an early stage whether binocular or monocular motion information drives the independent slow phase eye movements of each eye during binocular tracking.
A Neural Network Approach to fMRI Binocular Visual Rivalry Task Analysis
Bertolino, Nicola; Ferraro, Stefania; Nigri, Anna; Bruzzone, Maria Grazia; Ghielmetti, Francesco; Leonardi, Matilde; Agostino Parati, Eugenio; Franceschetti, Silvana; Caldiroli, Dario; Sattin, Davide; Giovannetti, Ambra; Pagani, Marco; Covelli, Venusia; Ciaraffa, Francesca; Vela Gomez, Jesus; Reggiori, Barbara; D'Incerti, Ludovico; Minati, Ludovico; Andronache, Adrian; Rosazza, Cristina; Fazio, Patrik; Rossi, Davide; Varotto, Giulia; Panzica, Ferruccio; Benti, Riccardo; Marotta, Giorgio; Molteni, Franco
2014-01-01
The purpose of this study was to investigate whether artificial neural networks (ANN) are able to decode participants’ conscious experience perception from brain activity alone, using complex and ecological stimuli. To reach the aim we conducted pattern recognition data analysis on fMRI data acquired during the execution of a binocular visual rivalry paradigm (BR). Twelve healthy participants underwent fMRI during the execution of a binocular non-rivalry (BNR) and a BR paradigm in which two classes of stimuli (faces and houses) were presented. During the binocular rivalry paradigm, behavioral responses related to the switching between consciously perceived stimuli were also collected. First, we used the BNR paradigm as a functional localizer to identify the brain areas involved in the processing of the stimuli. Second, we trained the ANN on the BNR fMRI data restricted to these regions of interest. Third, we applied the trained ANN to the BR data as a ‘brain reading’ tool to discriminate the pattern of neural activity between the two stimuli. Fourth, we verified the consistency of the ANN outputs with the collected behavioral indicators of which stimulus was consciously perceived by the participants. Our main results showed that the trained ANN was able to generalize across the two different tasks (i.e. BNR and BR) and to identify with high accuracy the cognitive state of the participants (i.e. which stimulus was consciously perceived) during the BR condition. The behavioral response, employed as a control parameter, was compared with the network output, and a statistically significant percentage of correspondences (p-value <0.05) was obtained for all subjects. In conclusion, the present study provides a method based on multivariate pattern analysis to investigate the neural basis of visual consciousness during the BR phenomenon when behavioral indicators are lacking or inconsistent, as in disorders of consciousness or in sedated patients. PMID:25121595
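The cross-paradigm decoding pipeline described (train a classifier on non-rivalry localizer data, then read out the consciously perceived category during rivalry) can be sketched with synthetic data. Everything here is an illustrative assumption: a single logistic unit stands in for the paper's ANN, and the "voxel" patterns, ROI size, and training settings are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "voxel" patterns: faces activate the first half of the ROI,
# houses the second half (a cartoon of category-selective regions).
def make_patterns(n, label):
    base = np.zeros(20)
    base[:10] = 1.0 if label == 0 else 0.0
    base[10:] = 1.0 if label == 1 else 0.0
    return base + 0.1 * rng.standard_normal((n, 20))

# Step 1-2: localizer (BNR) data define the training set.
X_loc = np.vstack([make_patterns(30, 0), make_patterns(30, 1)])
y_loc = np.array([0] * 30 + [1] * 30)

# A minimal logistic unit trained by gradient descent (ANN stand-in).
w, b = np.zeros(20), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X_loc @ w + b)))
    grad = p - y_loc
    w -= 0.1 * X_loc.T @ grad / len(y_loc)
    b -= 0.1 * grad.mean()

# Step 3: "brain reading" of unseen rivalry-epoch patterns.
X_riv = np.vstack([make_patterns(10, 0), make_patterns(10, 1)])
pred = (1.0 / (1.0 + np.exp(-(X_riv @ w + b))) > 0.5).astype(int)
```

Step 4 in the study would then compare `pred` against the behavioral reports of which stimulus was consciously perceived; with these cleanly separable synthetic patterns, the decoder recovers the category of every rivalry epoch.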
Song, Inkyung; Keil, Andreas
2015-01-01
Neutral cues, after being reliably paired with noxious events, prompt defensive engagement and amplified sensory responses. To examine the neurophysiology underlying these adaptive changes, we quantified the contrast-response function of visual cortical population activity during differential aversive conditioning. Steady-state visual evoked potentials (ssVEPs) were recorded while participants discriminated the orientation of rapidly flickering grating stimuli. During each trial, luminance contrast of the gratings was slowly increased and then decreased. Right-tilted gratings (CS+) were paired with loud white noise but left-tilted gratings (CS−) were not. The contrast-following waveform envelope of ssVEPs showed selective amplification of the CS+ only during the high-contrast stage of the viewing epoch. Findings support the notion that motivational relevance, learned in a time frame of minutes, affects vision through a response gain mechanism. PMID:24981277
Mullen, Kathy T; Chang, Dorita H F; Hess, Robert F
2015-12-01
There is controversy as to how responses to colour in the human brain are organized within the visual pathways. A key issue is whether there are modular pathways that respond selectively to colour or whether there are common neural substrates for both colour and achromatic (Ach) contrast. We used functional magnetic resonance imaging (fMRI) adaptation to investigate the responses of early and extrastriate visual areas to colour and Ach contrast. High-contrast red-green (RG) and Ach sinewave rings (0.5 cycles/degree, 2 Hz) were used as both adapting stimuli and test stimuli in a block design. We found robust adaptation to RG or Ach contrast in all visual areas. Cross-adaptation between RG and Ach contrast occurred in all areas indicating the presence of integrated, colour and Ach responses. Notably, we revealed contrasting trends for the two test stimuli. For the RG test, unselective processing (robust adaptation to both RG and Ach contrast) was most evident in the early visual areas (V1 and V2), but selective responses, revealed as greater adaptation between the same stimuli than cross-adaptation between different stimuli, emerged in the ventral cortex, in V4 and VO in particular. For the Ach test, unselective responses were again most evident in early visual areas but Ach selectivity emerged in the dorsal cortex (V3a and hMT+). Our findings support a strong presence of integrated mechanisms for colour and Ach contrast across the visual hierarchy, with a progression towards selective processing in extrastriate visual areas. © 2015 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Primary visual response (M100) delays in adolescents with FASD as measured with MEG.
Coffman, Brian A; Kodituwakku, Piyadasa; Kodituwakku, Elizabeth L; Romero, Lucinda; Sharadamma, Nirupama Muniswamy; Stone, David; Stephen, Julia M
2013-11-01
Fetal alcohol spectrum disorders (FASD) are debilitating, with effects of prenatal alcohol exposure persisting into adolescence and adulthood. Complete characterization of FASD is crucial for the development of diagnostic tools and intervention techniques to decrease the high cost to individual families and society of this disorder. In this experiment, we investigated visual system deficits in adolescents (12-21 years) diagnosed with an FASD by measuring the latency of patients' primary visual M100 responses using MEG. We hypothesized that patients with FASD would demonstrate delayed primary visual responses compared to controls. M100 latencies were assessed both for FASD patients and age-matched healthy controls for stimuli presented at the fovea (central stimulus) and at the periphery (peripheral stimuli; left or right of the central stimulus) in a saccade task requiring participants to direct their attention and gaze to these stimuli. Source modeling was performed on visual responses to the central and peripheral stimuli and the latency of the first prominent peak (M100) in the occipital source timecourse was identified. The peak latencies of the M100 responses were delayed in FASD patients for both stimulus types (central and peripheral), but the difference in latency of primary visual responses to central vs. peripheral stimuli was significant only in FASD patients, indicating that, while FASD patients' visual systems are impaired in general, this impairment is more pronounced in the periphery. These results suggest that basic sensory deficits in this population may contribute to sensorimotor integration deficits described previously in this disorder. Copyright © 2012 Wiley Periodicals, Inc.
Potts, Geoffrey F; Wood, Susan M; Kothmann, Delia; Martin, Laura E
2008-10-21
Attention directs limited-capacity information processing resources to a subset of available perceptual representations. The mechanisms by which attention selects task-relevant representations for preferential processing are not fully known. Treisman and Gelade's [Treisman, A., Gelade, G., 1980. A feature integration theory of attention. Cognit. Psychol. 12, 97-136.] influential attention model posits that simple features are processed preattentively, in parallel, but that attention is required to serially conjoin multiple features into an object representation. Event-related potentials have provided evidence for this model showing parallel processing of perceptual features in the posterior Selection Negativity (SN) and serial, hierarchic processing of feature conjunctions in the Frontal Selection Positivity (FSP). Most prior studies have been done on conjunctions within one sensory modality while many real-world objects have multimodal features. It is not known if the same neural systems of posterior parallel processing of simple features and frontal serial processing of feature conjunctions seen within a sensory modality also operate on conjunctions between modalities. The current study used ERPs and simultaneously presented auditory and visual stimuli in three task conditions: Attend Auditory (auditory feature determines the target, visual features are irrelevant), Attend Visual (visual features relevant, auditory irrelevant), and Attend Conjunction (target defined by the co-occurrence of an auditory and a visual feature). In the Attend Conjunction condition when the auditory but not the visual feature was a target there was an SN over auditory cortex, when the visual but not auditory stimulus was a target there was an SN over visual cortex, and when both auditory and visual stimuli were targets (i.e. conjunction target) there were SNs over both auditory and visual cortex, indicating parallel processing of the simple features within each modality.
In contrast, an FSP was present when either the visual only or both auditory and visual features were targets, but not when only the auditory stimulus was a target, indicating that the conjunction target determination was evaluated serially and hierarchically with visual information taking precedence. This indicates that the detection of a target defined by audio-visual conjunction is achieved via the same mechanism as within a single perceptual modality, through separate, parallel processing of the auditory and visual features and serial processing of the feature conjunction elements, rather than by evaluation of a fused multimodal percept.
[Is it possible to train Achatina fulica using visual stimulation?].
Baĭkova, I B; Zhukov, V V
2001-01-01
Conditioned behavior to visual stimuli was established in the mollusk Achatina fulica on the basis of its negative phototaxis. Directional movement of snails toward black cards was paired with negative unconditioned stimulation (electric current). Learning was expressed as a statistically significant decrease in the locomotor activity of the animals and a decrease in the rate of preference for sections with black cards. Learning developed within two daily training sessions of 30 trials each. Traces of learning were observed as defensive behavior for at least a month after the reinforcement was eliminated.
[Ventriloquism and audio-visual integration of voice and face].
Yokosawa, Kazuhiko; Kanaya, Shoko
2012-07-01
Presenting synchronous auditory and visual stimuli in separate locations creates the illusion that the sound originates from the direction of the visual stimulus. Participants' auditory localization bias, called the ventriloquism effect, has revealed factors affecting the perceptual integration of audio-visual stimuli. However, many studies on audio-visual processes have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. These results cannot necessarily explain our perceptual behavior in natural scenes, where various signals exist within a single sensory modality. In the present study, we report the contributions of a cognitive factor, that is, the audio-visual congruency of speech, although this factor has often been underestimated in previous ventriloquism research. Thus, we investigated the contribution of speech congruency to the ventriloquism effect using a spoken utterance and two videos of a talking face. The salience of facial movements was also manipulated. As a result, when bilateral visual stimuli were presented in synchrony with a single voice, cross-modal speech congruency was found to have a significant impact on the ventriloquism effect. This result also indicated that more salient visual utterances attracted participants' auditory localization. The congruent pairing of audio-visual utterances elicited greater localization bias than did incongruent pairing, whereas previous studies have reported little dependency on the reality of stimuli in ventriloquism. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference to auditory localization. This suggests that a greater flexibility in responding to multi-sensory environments exists than has been previously considered.
Heim, Stefan; von Tongeln, Franziska; Hillen, Rebekka; Horbach, Josefine; Radach, Ralph; Günther, Thomas
2018-06-19
The Landolt paradigm is a visual scanning task intended to evoke reading-like eye-movements in the absence of orthographic or lexical information, thus allowing the dissociation of (sub-) lexical vs. visual processing. To that end, all letters in real word sentences are exchanged for closed Landolt rings, with 0, 1, or 2 open Landolt rings as targets in each Landolt sentence. A preliminary fMRI block-design study (Hillen et al. in Front Hum Neurosci 7:1-14, 2013) demonstrated that the Landolt paradigm has a special neural signature, recruiting the right IPS and SPL as part of the endogenous attention network. However, in that analysis, the brain responses to target detection could not be separated from those involved in processing Landolt stimuli without targets. The present study reports two fMRI experiments testing whether the targets or the Landolt stimuli per se led to the right IPS/SPL activation. Experiment 1 was an event-related re-analysis of the Hillen et al. (Front Hum Neurosci 7:1-14, 2013) data. Experiment 2 was a replication study with a new sample and identical procedures. In both experiments, the right IPS/SPL were recruited in the Landolt condition as compared to orthographic stimuli even in the absence of any target in the stimulus, indicating that the properties of the Landolt task itself trigger this right parietal activation. These findings are discussed against the background of behavioural and neuroimaging studies of healthy reading as well as developmental and acquired dyslexia. Consequently, this neuroimaging evidence might encourage the use of the Landolt paradigm also in the context of examining reading disorders, as it taps into the orientation of visual attention during reading-like scanning of stimuli without interfering sub-lexical information.