Sample records for visual stimuli presented

  1. Effects of auditory stimuli in the horizontal plane on audiovisual integration: an event-related potential study.

    PubMed

    Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong

    2013-01-01

    This article aims to investigate whether auditory stimuli in the horizontal plane, particularly those originating from behind the participant, affect audiovisual integration, using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants; auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants; and audiovisual stimuli that included both the visual stimulus and an auditory stimulus originating from one of the four locations were presented simultaneously. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in front of or behind the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal and right occipital areas at approximately 160-200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirmed that audiovisual integration was elicited even when auditory stimuli were presented behind the participant, but that no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than from either side.

  2. Effects of Auditory Stimuli in the Horizontal Plane on Audiovisual Integration: An Event-Related Potential Study

    PubMed Central

    Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong

    2013-01-01

    This article aims to investigate whether auditory stimuli in the horizontal plane, particularly those originating from behind the participant, affect audiovisual integration, using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants; auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants; and audiovisual stimuli that included both the visual stimulus and an auditory stimulus originating from one of the four locations were presented simultaneously. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in front of or behind the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal and right occipital areas at approximately 160–200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360–400 milliseconds. Our results confirmed that audiovisual integration was elicited even when auditory stimuli were presented behind the participant, but that no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than from either side. PMID:23799097
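
    The abstracts above do not spell out the integration contrast; the conventional analysis in this ERP literature compares the audiovisual response against the sum of the unisensory responses (AV vs. A + V). Below is a minimal Python sketch of that additive-model contrast, with placeholder random arrays and an assumed 500 Hz sampling rate; all names and numbers are illustrative, not taken from the paper.

      import numpy as np

      # Placeholder averaged ERPs, shape (n_channels, n_timepoints).
      fs = 500.0
      times = np.arange(-0.1, 0.5, 1.0 / fs)      # seconds relative to stimulus onset
      erp_av = np.random.randn(64, times.size)    # audiovisual condition
      erp_a = np.random.randn(64, times.size)     # auditory-only condition
      erp_v = np.random.randn(64, times.size)     # visual-only condition

      # Additive-model contrast: integration shows up where AV != A + V.
      diff_wave = erp_av - (erp_a + erp_v)

      # Mean amplitude in the 160-200 ms window reported for front-location sounds.
      win = (times >= 0.160) & (times <= 0.200)
      mean_amp = diff_wave[:, win].mean(axis=1)   # one value per channel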

  3. Auditory emotional cues enhance visual perception.

    PubMed

    Zeelenberg, René; Bocanegra, Bruno R

    2010-04-01

    Recent studies show that emotional stimuli impair performance on subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by emotional cues as compared to neutral cues. When the cue was presented visually, we replicated the emotion-induced impairment found in other studies. Our results suggest that emotional stimuli have a twofold effect on perception. They impair perception by reflexively attracting attention at the expense of competing stimuli. However, emotional stimuli also induce a nonspecific perceptual enhancement that carries over onto other stimuli when competition is reduced, for example, by presenting stimuli in different modalities. Copyright 2009 Elsevier B.V. All rights reserved.

  4. A sLORETA study for gaze-independent BCI speller.

    PubMed

    An, Xingwei; Wei, Jinwen; Liu, Shuang; Ming, Dong

    2017-07-01

    The EEG-based BCI (brain-computer interface) speller, especially the gaze-independent BCI speller, has become a hot topic in recent years. It provides a direct, non-muscular spelling device for people with severe motor impairments and limited gaze movement. In the rapidly presented paradigms used by such BCI spellers, the brain must carry out both stimulus-driven and stimulus-related attention. Few researchers have studied the mechanism of the brain's response to such rapidly presented BCI stimuli. In this study, we compared the distribution of brain activation in visual, auditory, and audio-visual combined stimulus paradigms using sLORETA (standardized low-resolution brain electromagnetic tomography). Between-group comparisons showed the importance of both visual and auditory stimuli in the audio-visual combined paradigm: both contribute to the activation of brain regions, with visual stimuli being the predominant ones. Brain regions related to visual stimuli were mainly located in the parietal and occipital lobes, whereas responses in the frontal-temporal lobes might be caused by auditory stimuli. These regions played an important role in audio-visual bimodal paradigms. These new findings are important for future studies of ERP spellers as well as of the mechanisms underlying rapidly presented stimuli.

  5. Temporal Influence on Awareness

    DTIC Science & Technology

    1995-12-01

    (Excerpt from the report's list of figures) Test Setup Timing: Measured vs. Expected Modal Delays (in ms); Experiment I: visual and auditory stimuli presented simultaneously, visual-auditory delay = 0 ms, visual-visual delay = 0 ms; Experiment II: visual and auditory stimuli presented in order, visual-auditory delay = 0 ms, visual-visual delay = variable; Experiment II: visual and ...

  6. Crossmodal attention switching: auditory dominance in temporal discrimination tasks.

    PubMed

    Lukas, Sarah; Philipp, Andrea M; Koch, Iring

    2014-11-01

    Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are processed with different effectiveness depending on the task dimension: processing of visual stimuli is favored in the dimension of space, whereas processing of auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for the visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Auditory presentation and synchronization in Adobe Flash and HTML5/JavaScript Web experiments.

    PubMed

    Reimers, Stian; Stewart, Neil

    2016-09-01

    Substantial recent research has examined the accuracy of presentation durations and response time measurements for visually presented stimuli in Web-based experiments, with a general conclusion that accuracy is acceptable for most kinds of experiments. However, many areas of behavioral research use auditory stimuli instead of, or in addition to, visual stimuli. Much less is known about auditory accuracy using standard Web-based testing procedures. We used a millisecond-accurate Black Box Toolkit to measure the actual durations of auditory stimuli and the synchronization of auditory and visual presentation onsets. We examined the distribution of timings for 100 presentations of auditory and visual stimuli across two computers with different specs, three commonly used browsers, and code written in either Adobe Flash or JavaScript. We also examined different coding options for attempting to synchronize the auditory and visual onsets. Overall, we found that auditory durations were very consistent, but that the lags between visual and auditory onsets varied substantially across browsers and computer systems.
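
    The study itself measured Flash and JavaScript timing in browsers; purely as an illustration of the synchronization problem, here is a minimal sketch in Python using PsychoPy's callOnFlip pattern, which schedules the sound to start on the same screen refresh that draws the visual stimulus. Window size, tone frequency, and durations are arbitrary choices for the sketch, not values from the paper.

      from psychopy import visual, sound, core

      win = visual.Window(size=(800, 600), fullscr=False)
      flash = visual.Rect(win, width=0.5, height=0.5, fillColor='white')
      beep = sound.Sound(value=440, secs=0.1)   # 440 Hz, 100 ms tone

      flash.draw()
      win.callOnFlip(beep.play)   # start audio on the upcoming flip
      win.flip()                  # visual onset; the scheduled callback fires here
      core.wait(0.5)
      win.close()

    Even with such scheduling, the actual audio lag depends on the sound backend and hardware buffer, which is exactly the kind of variability the Black Box Toolkit measurements quantify.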

  8. Does bimodal stimulus presentation increase ERP components usable in BCIs?

    NASA Astrophysics Data System (ADS)

    Thurlings, Marieke E.; Brouwer, Anne-Marie; Van Erp, Jan B. F.; Blankertz, Benjamin; Werkhoven, Peter J.

    2012-08-01

    Event-related potential (ERP)-based brain-computer interfaces (BCIs) employ differences in brain responses to attended and ignored stimuli. Typically, visual stimuli are used. Tactile stimuli have recently been suggested as a gaze-independent alternative. Bimodal stimuli could evoke additional brain activity due to multisensory integration which may be of use in BCIs. We investigated the effect of visual-tactile stimulus presentation on the chain of ERP components, BCI performance (classification accuracies and bitrates) and participants’ task performance (counting of targets). Ten participants were instructed to navigate a visual display by attending (spatially) to targets in sequences of either visual, tactile or visual-tactile stimuli. We observe that attending to visual-tactile (compared to either visual or tactile) stimuli results in an enhanced early ERP component (N1). This bimodal N1 may enhance BCI performance, as suggested by a nonsignificant positive trend in offline classification accuracies. A late ERP component (P300) is reduced when attending to visual-tactile compared to visual stimuli, which is consistent with the nonsignificant negative trend of participants’ task performance. We discuss these findings in the light of affected spatial attention at high-level compared to low-level stimulus processing. Furthermore, we evaluate bimodal BCIs from a practical perspective and for future applications.

  9. Decreased visual detection during subliminal stimulation.

    PubMed

    Bareither, Isabelle; Villringer, Arno; Busch, Niko A

    2014-10-17

    What is the perceptual fate of invisible stimuli: are they processed at all, and does their processing have consequences for the perception of other stimuli? As has been shown previously in the somatosensory system, even stimuli that are too weak to be consciously detected can influence our perception: subliminal stimulation impairs perception of near-threshold stimuli and causes a functional deactivation in the somatosensory cortex. In a recent study, we showed that subliminal visual stimuli lead to similar responses, indicated by an increase in alpha-band power as measured with electroencephalography (EEG). In the current study, we investigated whether a behavioral inhibitory mechanism also exists within the visual system. We tested the detection of peripheral visual target stimuli under three different conditions: target stimuli were presented alone or embedded in a concurrent train of subliminal stimuli, either at the same location as the target or in the opposite hemifield. Subliminal stimuli were invisible due to their low contrast, not due to a masking procedure. We demonstrate that target detection was impaired by the subliminal stimuli, but only when they were presented at the same location as the target. This finding indicates that subliminal, low-intensity stimuli induce a similar inhibitory effect in the visual system as has been observed in the somatosensory system. In line with previous reports, we propose that the function underlying this effect is the inhibition of spurious noise by the visual system. © 2014 ARVO.

  10. Examining the cognitive demands of analogy instructions compared to explicit instructions.

    PubMed

    Tse, Choi Yeung Andy; Wong, Andus; Whitehill, Tara; Ma, Estella; Masters, Rich

    2016-10-01

    In many learning domains, instructions are presented explicitly despite high cognitive demands associated with their processing. This study examined cognitive demands imposed on working memory by different types of instruction to speak with maximum pitch variation: visual analogy, verbal analogy and explicit verbal instruction. Forty participants were asked to memorise a set of 16 visual and verbal stimuli while reading aloud a Cantonese paragraph with maximum pitch variation. Instructions about how to achieve maximum pitch variation were presented via visual analogy, verbal analogy, explicit rules or no instruction. Pitch variation was assessed off-line, using standard deviation of fundamental frequency. Immediately after reading, participants recalled as many stimuli as possible. Analogy instructions resulted in significantly increased pitch variation compared to explicit instructions or no instructions. Explicit instructions resulted in poorest recall of stimuli. Visual analogy instructions resulted in significantly poorer recall of visual stimuli than verbal stimuli. The findings suggest that non-propositional instructions presented via analogy may be less cognitively demanding than instructions that are presented explicitly. Processing analogy instructions that are presented as a visual representation is likely to load primarily visuospatial components of working memory rather than phonological components. The findings are discussed with reference to speech therapy and human cognition.
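
    Pitch variation in this study was operationalized as the standard deviation of fundamental frequency (F0). A minimal Python sketch of that outcome measure follows, assuming a pitch tracker that marks unvoiced frames as NaN; the F0 values are made up.

      import numpy as np

      f0 = np.array([110.0, 115.2, np.nan, 132.8, 140.1, np.nan, 121.4, 109.9])  # Hz
      voiced = f0[~np.isnan(f0)]
      pitch_variation = voiced.std(ddof=1)   # SD of F0, the paper's outcome measure
      print(f"SD of F0: {pitch_variation:.1f} Hz")

    Whether the study computed the SD in hertz or on a log/semitone scale is not stated in the abstract; the sketch assumes hertz.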

  11. Relativistic compression and expansion of experiential time in the left and right space.

    PubMed

    Vicario, Carmelo Mario; Pecoraro, Patrizia; Turriziani, Patrizia; Koch, Giacomo; Caltagirone, Carlo; Oliveri, Massimiliano

    2008-03-05

    Time, space and numbers are closely linked in the physical world. However, the relativistic-like effects of spatial and magnitude factors on time perception remain poorly investigated. Here we investigated whether duration judgments of digit visual stimuli are biased depending on the side of space where the stimuli are presented and on the magnitude of the stimulus itself. Different groups of healthy subjects performed duration judgment tasks on various types of visual stimuli. In the first two experiments, the visual stimuli consisted of digit pairs (1 and 9) presented in the centre of the screen or in the right and left space. In a third experiment, the visual stimuli consisted of black circles. The duration of the reference stimulus was fixed at 300 ms. Subjects had to indicate the relative duration of the test stimulus compared with the reference one. The main results showed that, regardless of digit magnitude, the duration of stimuli presented in the left hemispace is underestimated and that of stimuli presented in the right hemispace is overestimated. On the other hand, in the midline position, duration judgments are affected by the numerical magnitude of the presented stimulus, with time underestimation for stimuli of low magnitude and time overestimation for stimuli of high magnitude. These results argue for the presence of strict interactions between space, time and magnitude representation in the human brain.

  12. Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training.

    PubMed

    Eberhardt, Silvio P; Auer, Edward T; Bernstein, Lynne E

    2014-01-01

    In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee's primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee's lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT).

  13. Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training

    PubMed Central

    Eberhardt, Silvio P.; Auer Jr., Edward T.; Bernstein, Lynne E.

    2014-01-01

    In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee’s primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee’s lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT). PMID:25400566

  14. Auditory Emotional Cues Enhance Visual Perception

    ERIC Educational Resources Information Center

    Zeelenberg, Rene; Bocanegra, Bruno R.

    2010-01-01

    Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…

  15. Binocular coordination in response to stereoscopic stimuli

    NASA Astrophysics Data System (ADS)

    Liversedge, Simon P.; Holliman, Nicolas S.; Blythe, Hazel I.

    2009-02-01

    Humans actively explore their visual environment by moving their eyes. Precise coordination of the eyes during visual scanning underlies the experience of a unified perceptual representation and is important for the perception of depth. We report data from three psychological experiments investigating human binocular coordination during visual processing of stereoscopic stimuli. In the first experiment, participants were required to read sentences that contained a stereoscopically presented target word. Half of the word was presented exclusively to one eye and half exclusively to the other eye. Eye movements were recorded and showed that saccadic targeting was uninfluenced by the stereoscopic presentation, strongly suggesting that complementary retinal stimuli are perceived as a single, unified input prior to saccade initiation. In a second eye movement experiment we presented words stereoscopically to measure Panum's Fusional Area for linguistic stimuli. In the final experiment we compared binocular coordination during saccades between simple dot stimuli under 2D, stereoscopic 3D and real 3D viewing conditions. Results showed that depth-appropriate vergence movements were made during saccades and fixations to real 3D stimuli, but only during fixations on stereoscopic 3D stimuli. 2D stimuli did not induce depth vergence movements. Together, these experiments indicate that stereoscopic visual stimuli are fused when they fall within Panum's Fusional Area, and that saccade metrics are computed on the basis of a unified percept. Also, there is sensitivity to non-foveal retinal disparity in real 3D stimuli, but not in stereoscopic 3D stimuli, and the system responsible for binocular coordination responds to this during saccades as well as fixations.
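
    Vergence in such eye-movement data is conventionally computed as the difference between the two eyes' horizontal gaze directions. A minimal Python sketch under that assumption, with made-up azimuth samples in degrees:

      import numpy as np

      left_eye_az = np.array([2.1, 2.3, 2.8, 3.5, 3.9])       # left-eye azimuth (deg)
      right_eye_az = np.array([-1.9, -2.0, -2.2, -2.6, -2.9]) # right-eye azimuth (deg)

      # An increasing vergence angle indicates convergence toward a nearer target.
      vergence = left_eye_az - right_eye_az
      print(np.diff(vergence))   # sample-to-sample depth vergence changes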

  16. Visual Presentation Effects on Identification of Multiple Environmental Sounds

    PubMed Central

    Masakura, Yuko; Ichikawa, Makoto; Shimono, Koichi; Nakatsuka, Reio

    2016-01-01

    This study examined how the contents and timing of a visual stimulus affect the identification of mixed sounds recorded in a daily life environment. For experiments, we presented four environment sounds as auditory stimuli for 5 s along with a picture or a written word as a visual stimulus that might or might not denote the source of one of the four sounds. Three conditions of temporal relations between the visual stimuli and sounds were used. The visual stimulus was presented either: (a) for 5 s simultaneously with the sound; (b) for 5 s, 1 s before the sound (SOA between the audio and visual stimuli was 6 s); or (c) for 33 ms, 1 s before the sound (SOA was 1033 ms). Participants reported all identifiable sounds for those audio–visual stimuli. To characterize the effects of visual stimuli on sound identification, the following were used: the identification rates of sounds for which the visual stimulus denoted its sound source, the rates of other sounds for which the visual stimulus did not denote the sound source, and the frequency of false hearing of a sound that was not presented for each sound set. Results of the four experiments demonstrated that a picture or a written word promoted identification of the sound when it was related to the sound, particularly when the visual stimulus was presented for 5 s simultaneously with the sounds. However, a visual stimulus preceding the sounds had a benefit only for the picture, not for the written word. Furthermore, presentation with a picture denoting a sound simultaneously with the sound reduced the frequency of false hearing. These results suggest three ways that presenting a visual stimulus affects identification of the auditory stimulus. First, activation of the visual representation extracted directly from the picture promotes identification of the denoted sound and suppresses the processing of sounds for which the visual stimulus did not denote the sound source. Second, effects based on processing of the conceptual information promote identification of the denoted sound and suppress the processing of sounds for which the visual stimulus did not denote the sound source. Third, processing of the concurrent visual representation suppresses false hearing. PMID:26973478

  17. Sex Differences in Response to Visual Sexual Stimuli: A Review

    PubMed Central

    Rupp, Heather A.; Wallen, Kim

    2009-01-01

    This article reviews what is currently known about how men and women respond to the presentation of visual sexual stimuli. While the assumption that men respond more to visual sexual stimuli is generally empirically supported, previous reports of sex differences are confounded by the variable content of the stimuli presented and measurement techniques. We propose that the cognitive processing stage of responding to sexual stimuli is the first stage in which sex differences occur. The divergence between men and women is proposed to occur at this time, reflected in differences in neural activation, and contribute to previously reported sex differences in downstream peripheral physiological responses and subjective reports of sexual arousal. Additionally, this review discusses factors that may contribute to the variability in sex differences observed in response to visual sexual stimuli. Factors include participant variables, such as hormonal state and socialized sexual attitudes, as well as variables specific to the content presented in the stimuli. Based on the literature reviewed, we conclude that content characteristics may differentially produce higher levels of sexual arousal in men and women. Specifically, men appear more influenced by the sex of the actors depicted in the stimuli while women’s response may differ with the context presented. Sexual motivation, perceived gender role expectations, and sexual attitudes are possible influences. These differences are of practical importance to future research on sexual arousal that aims to use experimental stimuli comparably appealing to men and women and also for general understanding of cognitive sex differences. PMID:17668311

  18. Physiological and behavioral reactions elicited by simulated and real-life visual and acoustic helicopter stimuli in dairy goats

    PubMed Central

    2011-01-01

    Background Anecdotal reports and a few scientific publications suggest that flyovers of helicopters at low altitude may elicit fear- or anxiety-related behavioral reactions in grazing feral and farm animals. We investigated the behavioral and physiological stress reactions of five individually housed dairy goats to different acoustic and visual stimuli from helicopters and to combinations of these stimuli under controlled environmental (indoor) conditions. The visual stimuli were helicopter animations projected on a large screen in front of the enclosures of the goats. Acoustic and visual stimuli of a tractor were also presented. On the final day of the study the goats were exposed to two flyovers (altitude 50 m and 75 m) of a Chinook helicopter while grazing in a pasture. Salivary cortisol, behavior, and heart rate of the goats were registered before, during and after stimulus presentations. Results The goats reacted alertly to the visual and/or acoustic stimuli presented in their room. They raised their heads and turned their ears forward in the direction of the stimuli. There was no statistically reliable rise in the goats' average movement velocity in their enclosure and no increase in the duration of movement during presentation of the stimuli. There was also no increase in heart rate or salivary cortisol concentration during the indoor test sessions. Surprisingly, no physiological or behavioral stress responses were observed during the flyover of a Chinook at 50 m, which produced a peak noise of 110 dB. Conclusions We conclude that the behavior and physiology of goats are unaffected by brief episodes of intense, adverse visual and acoustic stimulation such as the sight and noise of overflying helicopters. The absence of a physiological stress response and of elevated emotional reactivity in goats subjected to helicopter stimuli is discussed in relation to the design and testing schedule of this study. PMID:21496239

  19. The Presentation Location of the Reference Stimuli Affects the Left-Side Bias in the Processing of Faces and Chinese Characters

    PubMed Central

    Li, Chenglin; Cao, Xiaohua

    2017-01-01

    For faces and Chinese characters, a left-side processing bias, in which observers rely more heavily on information conveyed by the left side of stimuli than the right side of stimuli, has been frequently reported in previous studies. However, it remains unclear whether this left-side bias effect is modulated by the reference stimuli's location. The present study adopted the chimeric stimuli task to investigate the influence of the presentation location of the reference stimuli on the left-side bias in face and Chinese character processing. The results demonstrated that when a reference face was presented in the left visual field of its chimeric images, which are centrally presented, the participants showed a preference higher than the no-bias threshold for the left chimeric face; this effect, however, was not observed in the right visual field. This finding indicates that the left-side bias effect in face processing is stronger when the reference face is in the left visual field. In contrast, the left-side bias was observed in Chinese character processing when the reference Chinese character was presented in either the left or right visual field. Together, these findings suggest that although faces and Chinese characters both have a left-side processing bias, the underlying neural mechanisms of this left-side bias might be different. PMID:29018391

  20. The Presentation Location of the Reference Stimuli Affects the Left-Side Bias in the Processing of Faces and Chinese Characters.

    PubMed

    Li, Chenglin; Cao, Xiaohua

    2017-01-01

    For faces and Chinese characters, a left-side processing bias, in which observers rely more heavily on information conveyed by the left side of stimuli than the right side of stimuli, has been frequently reported in previous studies. However, it remains unclear whether this left-side bias effect is modulated by the reference stimuli's location. The present study adopted the chimeric stimuli task to investigate the influence of the presentation location of the reference stimuli on the left-side bias in face and Chinese character processing. The results demonstrated that when a reference face was presented in the left visual field of its chimeric images, which are centrally presented, the participants showed a preference higher than the no-bias threshold for the left chimeric face; this effect, however, was not observed in the right visual field. This finding indicates that the left-side bias effect in face processing is stronger when the reference face is in the left visual field. In contrast, the left-side bias was observed in Chinese character processing when the reference Chinese character was presented in either the left or right visual field. Together, these findings suggest that although faces and Chinese characters both have a left-side processing bias, the underlying neural mechanisms of this left-side bias might be different.

  1. The Bank of Standardized Stimuli (BOSS), a New Set of 480 Normative Photos of Objects to Be Used as Visual Stimuli in Cognitive Research

    PubMed Central

    Brodeur, Mathieu B.; Dionne-Dostie, Emmanuelle; Montreuil, Tina; Lepage, Martin

    2010-01-01

    There are currently stimuli with published norms available to study several psychological aspects of language and visual cognition. Norms represent valuable information that can be used as experimental variables or systematically controlled to limit their potential influence on another experimental manipulation. The present work proposes 480 photo stimuli that have been normalized for name, category, familiarity, visual complexity, object agreement, viewpoint agreement, and manipulability. Stimuli are also available in grayscale, blurred, scrambled, and line-drawn versions. This set of objects, the Bank Of Standardized Stimuli (BOSS), was created specifically to meet the needs of scientists in cognition, vision and psycholinguistics who work with photo stimuli. PMID:20532245

  2. The Bank of Standardized Stimuli (BOSS), a new set of 480 normative photos of objects to be used as visual stimuli in cognitive research.

    PubMed

    Brodeur, Mathieu B; Dionne-Dostie, Emmanuelle; Montreuil, Tina; Lepage, Martin

    2010-05-24

    There are currently stimuli with published norms available to study several psychological aspects of language and visual cognition. Norms represent valuable information that can be used as experimental variables or systematically controlled to limit their potential influence on another experimental manipulation. The present work proposes 480 photo stimuli that have been normalized for name, category, familiarity, visual complexity, object agreement, viewpoint agreement, and manipulability. Stimuli are also available in grayscale, blurred, scrambled, and line-drawn versions. This set of objects, the Bank Of Standardized Stimuli (BOSS), was created specifically to meet the needs of scientists in cognition, vision and psycholinguistics who work with photo stimuli.

  3. Working Memory Enhances Visual Perception: Evidence from Signal Detection Analysis

    ERIC Educational Resources Information Center

    Soto, David; Wriglesworth, Alice; Bahrami-Balani, Alex; Humphreys, Glyn W.

    2010-01-01

    We show that perceptual sensitivity to visual stimuli can be modulated by matches between the contents of working memory (WM) and stimuli in the visual field. Observers were presented with an object cue (to hold in WM or to merely attend) and subsequently had to identify a brief target presented within a colored shape. The cue could be…
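
    The signal detection measures behind such an analysis are standard: sensitivity d' = z(hit rate) - z(false-alarm rate), and criterion c = -(z(hit rate) + z(false-alarm rate))/2. A minimal Python sketch follows, using a log-linear correction for extreme rates; the correction choice and the trial counts are assumptions for illustration, not taken from the paper.

      from scipy.stats import norm

      def sdt_measures(hits, misses, false_alarms, correct_rejections):
          # Log-linear correction keeps perfect rates away from 0 and 1.
          h = (hits + 0.5) / (hits + misses + 1.0)
          fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
          d_prime = norm.ppf(h) - norm.ppf(fa)             # perceptual sensitivity
          criterion = -0.5 * (norm.ppf(h) + norm.ppf(fa))  # response bias
          return d_prime, criterion

      print(sdt_measures(hits=42, misses=8, false_alarms=12, correct_rejections=38))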

  4. Upright face-preferential high-gamma responses in lower-order visual areas: evidence from intracranial recordings in children

    PubMed Central

    Matsuzaki, Naoyuki; Schwarzlose, Rebecca F.; Nishida, Masaaki; Ofen, Noa; Asano, Eishi

    2015-01-01

    Behavioral studies demonstrate that a face presented in the upright orientation attracts attention more rapidly than an inverted face. Saccades toward an upright face take place in 100-140 ms following presentation. The present study using electrocorticography determined whether upright face-preferential neural activation, as reflected by augmentation of high-gamma activity at 80-150 Hz, involved the lower-order visual cortex within the first 100 ms post-stimulus presentation. Sampled lower-order visual areas were verified by the induction of phosphenes upon electrical stimulation. These areas resided in the lateral-occipital, lingual, and cuneus gyri along the calcarine sulcus, roughly corresponding to V1 and V2. Measurement of high-gamma augmentation during central (circular) and peripheral (annular) checkerboard reversal pattern stimulation indicated that central-field stimuli were processed by the more polar surface whereas peripheral-field stimuli by the more anterior medial surface. Upright face stimuli, compared to inverted ones, elicited up to 23% larger augmentation of high-gamma activity in the lower-order visual regions at 40-90 ms. Upright face-preferential high-gamma augmentation was more highly correlated with high-gamma augmentation for central than peripheral stimuli. Our observations are consistent with the hypothesis that lower-order visual regions, especially those for the central field, are involved in visual cues for rapid detection of upright face stimuli. PMID:25579446

  5. Gestalt Perceptual Organization of Visual Stimuli Captures Attention Automatically: Electrophysiological Evidence

    PubMed Central

    Marini, Francesco; Marzi, Carlo A.

    2016-01-01

    The visual system leverages organizational regularities of perceptual elements to create meaningful representations of the world. One clear example of such function, which has been formalized in the Gestalt psychology principles, is the perceptual grouping of simple visual elements (e.g., lines and arcs) into unitary objects (e.g., forms and shapes). The present study sought to characterize automatic attentional capture and related cognitive processing of Gestalt-like visual stimuli at the psychophysiological level by using event-related potentials (ERPs). We measured ERPs during a simple visual reaction time task with bilateral presentations of physically matched elements with or without a Gestalt organization. Results showed that Gestalt (vs. non-Gestalt) stimuli are characterized by a larger N2pc together with enhanced ERP amplitudes of non-lateralized components (N1, N2, P3) starting around 150 ms post-stimulus onset. Thus, we conclude that Gestalt stimuli capture attention automatically and entail characteristic psychophysiological signatures at both early and late processing stages. Highlights: We studied the neural signatures of the automatic processes of visual attention elicited by Gestalt stimuli. We found that a reliable early correlate of attentional capture turned out to be the N2pc component. Perceptual and cognitive processing of Gestalt stimuli is associated with larger N1, N2, and P3 components. PMID:27630555
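
    The N2pc reported here is conventionally quantified as the contralateral-minus-ipsilateral voltage difference at posterior electrodes such as PO7/PO8. A minimal numpy sketch of that computation with placeholder data follows; the electrode pair and the 200-300 ms window are typical choices in this literature, not values taken from the paper.

      import numpy as np

      fs = 500.0
      times = np.arange(-0.1, 0.4, 1.0 / fs)
      po7_left, po8_left = np.random.randn(2, times.size)    # averaged ERPs, left-hemifield targets
      po7_right, po8_right = np.random.randn(2, times.size)  # averaged ERPs, right-hemifield targets

      contra = (po8_left + po7_right) / 2.0   # electrode opposite the target side
      ipsi = (po7_left + po8_right) / 2.0     # electrode on the target side
      n2pc = contra - ipsi

      win = (times >= 0.200) & (times <= 0.300)
      print(n2pc[win].mean())   # mean N2pc amplitude in the window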

  6. Letters persistence after physical offset: visual word form area and left planum temporale. An fMRI study.

    PubMed

    Barban, Francesco; Zannino, Gian Daniele; Macaluso, Emiliano; Caltagirone, Carlo; Carlesimo, Giovanni A

    2013-06-01

    Iconic memory is a high-capacity low-duration visual memory store that allows the persistence of a visual stimulus after its offset. The categorical nature of this store has been extensively debated. This study provides functional magnetic resonance imaging evidence for brain regions underlying the persistence of postcategorical representations of visual stimuli. In a partial report paradigm, subjects matched a cued row of a 3 × 3 array of letters (postcategorical stimuli) or false fonts (precategorical stimuli) with a subsequent triplet of stimuli. The cued row was indicated by two visual flankers presented at the onset (physical stimulus readout) or after the offset of the array (iconic memory readout). The left planum temporale showed a greater modulation of the source of readout (iconic memory vs. physical stimulus) when letters were presented compared to false fonts. This is a multimodal brain region responsible for matching incoming acoustic and visual patterns with acoustic pattern templates. These findings suggest that letters persist after their physical offset in an abstract postcategorical representation. A targeted region of interest analysis revealed a similar pattern of activation in the Visual Word Form Area. These results suggest that multiple higher-order visual areas mediate iconic memory for postcategorical stimuli. Copyright © 2012 Wiley Periodicals, Inc.

  7. Fixating at far distance shortens reaction time to peripheral visual stimuli at specific locations.

    PubMed

    Kokubu, Masahiro; Ando, Soichi; Oda, Shingo

    2018-01-18

    The purpose of the present study was to examine whether the fixation distance in real three-dimensional space affects manual reaction time to peripheral visual stimuli. Light-emitting diodes were used for presenting a fixation point and four peripheral visual stimuli. The visual stimuli were located at a distance of 45 cm and at 25° in the left, right, upper, and lower directions from the sagittal axis including the fixation point. Near (30 cm), Middle (45 cm), Far (90 cm), and Very Far (300 cm) fixation distance conditions were used. When one of the four visual stimuli was randomly illuminated, the participants released a button as quickly as possible. Results showed that overall peripheral reaction time decreased as the fixation distance increased. The significant interaction between fixation distance and stimulus location indicated that the effect of fixation distance on reaction time was observed at the left, right, and upper locations but not at the lower location. These results suggest that fixating at a far distance would contribute to faster reactions and that the effect is specific to locations in the peripheral visual field. The present findings are discussed in terms of viewer-centered representation, the focus of attention in depth, and visual field asymmetry related to neurological and psychological aspects. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Elevated audiovisual temporal interaction in patients with migraine without aura

    PubMed Central

    2014-01-01

    Background Photophobia and phonophobia are the most prominent symptoms in patients with migraine without aura. Hypersensitivity to visual stimuli can lead to greater hypersensitivity to auditory stimuli, which suggests that the interaction between visual and auditory stimuli may play an important role in the pathogenesis of migraine. However, audiovisual temporal interactions in migraine have not been well studied. Therefore, our aim was to examine auditory and visual interactions in migraine. Methods In this study, visual, auditory, and audiovisual stimuli with different temporal intervals between the visual and auditory stimuli were randomly presented to the left or right hemispace. During this time, the participants were asked to respond promptly to target stimuli. We used cumulative distribution functions to analyze the response times as a measure of audiovisual integration. Results Our results showed that audiovisual integration was significantly elevated in the migraineurs compared with the normal controls (p < 0.05); however, audiovisual suppression was weaker in the migraineurs compared with the normal controls (p < 0.05). Conclusions Our findings further objectively support the notion that migraineurs without aura are hypersensitive to external visual and auditory stimuli. Our study offers a new quantitative and objective method to evaluate hypersensitivity to audio-visual stimuli in patients with migraine. PMID:24961903
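
    The CDF-based analysis mentioned here is typically Miller's race-model inequality: integration is inferred where the bimodal RT distribution exceeds the bound given by the sum of the two unimodal distributions. A minimal Python sketch under that assumption follows; the reaction times (in ms) are invented for illustration.

      import numpy as np

      def ecdf(rts, t_grid):
          # Proportion of responses at or before each time point.
          rts = np.sort(np.asarray(rts, dtype=float))
          return np.searchsorted(rts, t_grid, side='right') / rts.size

      rt_v = [312, 298, 345, 330, 360, 305]   # visual-only trials
      rt_a = [289, 301, 276, 315, 297, 320]   # auditory-only trials
      rt_av = [250, 262, 275, 268, 255, 280]  # audiovisual trials

      t = np.arange(200, 500, 10)
      bound = np.minimum(ecdf(rt_v, t) + ecdf(rt_a, t), 1.0)
      violation = ecdf(rt_av, t) - bound
      print(t[violation > 0])   # time points where the race model is violated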

  9. Gender differences in identifying emotions from auditory and visual stimuli.

    PubMed

    Waaramaa, Teija

    2017-12-01

    The present study focused on gender differences in emotion identification from auditory and visual stimuli produced by two male and two female actors. Differences in emotion identification from nonsense samples, language samples and prolonged vowels were investigated. We also studied whether auditory stimuli can convey the emotional content of speech without visual stimuli, and whether visual stimuli can convey the emotional content of speech without auditory stimuli. The aim was to gain better knowledge of vocal attributes and a more holistic understanding of the nonverbal communication of emotion. Females tended to be more accurate in emotion identification than males. Voice quality parameters played a role in emotion identification in both genders. The emotional content of the samples was best conveyed by nonsense sentences, better than by prolonged vowels or by a shared native language of the speakers and participants. Thus, vocal non-verbal communication tends to affect the interpretation of emotion even in the absence of language. The emotional stimuli were better recognized from visual stimuli than from auditory stimuli by both genders. Visual information about speech may not be connected to the language; instead, it may be based on the human ability to understand the kinetic movements in speech production more readily than the characteristics of the acoustic cues.

  10. Heightened attentional capture by visual food stimuli in anorexia nervosa.

    PubMed

    Neimeijer, Renate A M; Roefs, Anne; de Jong, Peter J

    2017-08-01

    The present study was designed to test the hypothesis that anorexia nervosa (AN) patients are relatively insensitive to the attentional capture of visual food stimuli. Attentional avoidance of food might help AN patients to prevent more elaborate processing of food stimuli and the subsequent generation of craving, which might enable AN patients to maintain their strict diet. Participants were 66 restrictive AN spectrum patients and 55 healthy controls. A single-target rapid serial visual presentation task was used with food and disorder-neutral cues as critical distracter stimuli and disorder-neutral pictures as target stimuli. AN spectrum patients showed diminished task performance when visual food cues were presented in close temporal proximity of the to-be-identified target. In contrast to our hypothesis, results indicate that food cues automatically capture AN spectrum patients' attention. One explanation could be that the enhanced attentional capture of food cues in AN is driven by the relatively high threat value of food items in AN. Implications and suggestions for future research are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  11. The eye-tracking of social stimuli in patients with Rett syndrome and autism spectrum disorders: a pilot study.

    PubMed

    Schwartzman, José Salomão; Velloso, Renata de Lima; D'Antino, Maria Eloísa Famá; Santos, Silvana

    2015-05-01

    To compare visual fixation at social stimuli in patients with Rett syndrome (RS) and autism spectrum disorders (ASD). Visual fixation at social stimuli was analyzed in 14 female RS patients (age range 4-30 years), 11 male ASD patients (age range 4-20 years), and 17 children with typical development (TD). Patients were exposed to three different pictures (two of human faces and one with social and non-social stimuli) presented for 8 seconds each on the screen of a computer attached to eye-tracking equipment. The percentage of visual fixation at social stimuli was significantly higher in the RS group than in the ASD and even the TD groups. Visual fixation at social stimuli seems to be one more endophenotype distinguishing RS from ASD.

  12. Does Presentation Format Influence Visual Size Discrimination in Tufted Capuchin Monkeys (Sapajus spp.)?

    PubMed Central

    Truppa, Valentina; Carducci, Paola; Trapanese, Cinzia; Hanus, Daniel

    2015-01-01

    Most experimental paradigms to study visual cognition in humans and non-human species are based on discrimination tasks involving the choice between two or more visual stimuli. To this end, different types of stimuli and procedures for stimuli presentation are used, which highlights the necessity to compare data obtained with different methods. The present study assessed whether, and to what extent, capuchin monkeys’ ability to solve a size discrimination problem is influenced by the type of procedure used to present the problem. Capuchins’ ability to generalise knowledge across different tasks was also evaluated. We trained eight adult tufted capuchin monkeys to select the larger of two stimuli of the same shape and different sizes by using pairs of food items (Experiment 1), computer images (Experiment 1) and objects (Experiment 2). Our results indicated that monkeys achieved the learning criterion faster with food stimuli compared to both images and objects. They also required consistently fewer trials with objects than with images. Moreover, female capuchins had higher levels of acquisition accuracy with food stimuli than with images. Finally, capuchins did not immediately transfer the solution of the problem acquired in one task condition to the other conditions. Overall, these findings suggest that – even in relatively simple visual discrimination problems where a single perceptual dimension (i.e., size) has to be judged – learning speed strongly depends on the mode of presentation. PMID:25927363

  13. Interaction between visual and chemical cues in a Liolaemus lizard: a multimodal approach.

    PubMed

    Vicente, Natalin S; Halloy, Monique

    2017-12-01

    Multimodal communication involves the use of signals and cues across two or more sensory modalities. The genus Liolaemus (Iguania: Liolaemidae) offers a great potential for studies on the ecology and evolution of multimodal communication, including visual and chemical signals. In this study, we analyzed the response of male and female Liolaemus pacha to chemical, visual and combined (multimodal) stimuli. Using cue-isolation tests, we registered the number of tongue flicks and headbob displays from exposure to signals in each modality. Number of tongue flicks was greater when a chemical stimulus was presented alone than in the presence of visual or multimodal stimuli. In contrast, headbob displays were fewer in number with visual and chemical stimuli alone, but significantly higher in number when combined. Female signallers triggered significantly more tongue flicks than male signallers, suggesting that chemical cues are involved in sexual recognition. We did not find an inhibition between chemical and visual cues. On the contrary, we observed a dominance of the chemical modality, because when presented with visual stimuli, lizards also responded with more tongue flicks than headbob displays. The total response produced by multimodal stimuli was similar to that of the chemical stimuli alone, possibly suggesting non-redundancy. We discuss whether the visual component of a multimodal signal could attract attention at a distance, increasing the effectiveness of transmission and reception of the information in chemical cues. Copyright © 2017 Elsevier GmbH. All rights reserved.

  14. Orienting attention in visual space by nociceptive stimuli: investigation with a temporal order judgment task based on the adaptive PSI method.

    PubMed

    Filbrich, Lieve; Alamia, Andrea; Burns, Soline; Legrain, Valéry

    2017-07-01

    Despite their high relevance for defending the integrity of the body, crossmodal links between nociception, the neural system specifically coding potentially painful information, and vision are still poorly studied, especially the effects of nociception on visual perception. This study investigated whether, and in which time window, a nociceptive stimulus can attract attention to its location on the body, independently of voluntary control, and thereby facilitate the processing of visual stimuli occurring in the same side of space as the limb to which the nociceptive stimulus was applied. In a temporal order judgment task based on an adaptive procedure, participants judged which of two visual stimuli, one presented next to either hand in either side of space, had been perceived first. Each pair of visual stimuli was preceded (by 200, 400, or 600 ms) by a nociceptive stimulus applied either unilaterally, on one single hand, or bilaterally, on both hands simultaneously. Results show that, as compared to the bilateral condition, participants' judgments were biased to the advantage of the visual stimuli that occurred in the same side of space as the hand on which a unilateral nociceptive stimulus was applied. This effect was present in a time window ranging from 200 to 600 ms but, importantly, the bias increased with decreasing time interval. These results suggest that nociceptive stimuli can affect the perceptual processing of spatially congruent visual inputs.
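
    The PSI method adaptively places trials to estimate the parameters of a psychometric function; for a TOJ task like this, the underlying model is a sigmoid relating the visual SOA to the probability of judging one side first, whose midpoint (the point of subjective simultaneity, PSS) indexes the attentional bias. A minimal Python sketch of fitting that model to made-up TOJ data follows; the adaptive trial-placement machinery itself is omitted, and all numbers are illustrative.

      import numpy as np
      from scipy.optimize import curve_fit

      # SOA < 0: the visual stimulus on the cued side led (ms).
      soa = np.array([-90.0, -60.0, -30.0, 0.0, 30.0, 60.0, 90.0])
      p_cued_first = np.array([0.95, 0.88, 0.72, 0.55, 0.34, 0.15, 0.08])

      def logistic(x, pss, slope):
          return 1.0 / (1.0 + np.exp((x - pss) / slope))

      (pss, slope), _ = curve_fit(logistic, soa, p_cued_first, p0=(0.0, 20.0))
      # A PSS shifted away from 0 ms indicates prior entry: stimuli near the
      # nociceptively stimulated hand need less of a head start to be seen first.
      print(f"PSS = {pss:.1f} ms, slope = {slope:.1f} ms")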

  15. Explaining the Colavita visual dominance effect.

    PubMed

    Spence, Charles

    2009-01-01

    The last couple of years have seen a resurgence of interest in the Colavita visual dominance effect. In the basic experimental paradigm, a random series of auditory, visual, and audiovisual stimuli are presented to participants who are instructed to make one response whenever they see a visual target and another response whenever they hear an auditory target. Many studies have now shown that participants sometimes fail to respond to auditory targets when they are presented at the same time as visual targets (i.e., on the bimodal trials), despite the fact that they have no problems in responding to the auditory and visual stimuli when they are presented individually. The existence of the Colavita visual dominance effect provides an intriguing contrast with the results of the many other recent studies showing the superiority of multisensory (over unisensory) information processing in humans. Various accounts have been put forward over the years in order to try and explain the effect, including the suggestion that it reflects nothing more than an underlying bias to attend to the visual modality. Here, the empirical literature on the Colavita visual dominance effect is reviewed and some of the key factors modulating the effect highlighted. The available research has now provided evidence against all previous accounts of the Colavita effect. A novel explanation of the Colavita effect is therefore put forward here, one that is based on the latest findings highlighting the asymmetrical effect that auditory and visual stimuli exert on people's responses to stimuli presented in the other modality.

  16. Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli

    PubMed Central

    Störmer, Viola S.; McDonald, John J.; Hillyard, Steven A.

    2009-01-01

    The question of whether attention makes sensory impressions appear more intense has been a matter of debate for over a century. Recent psychophysical studies have reported that attention increases apparent contrast of visual stimuli, but the issue continues to be debated. We obtained converging neurophysiological evidence from human observers as they judged the relative contrast of visual stimuli presented to the left and right visual fields following a lateralized auditory cue. Cross-modal cueing of attention boosted the apparent contrast of the visual target in association with an enlarged neural response in the contralateral visual cortex that began within 100 ms after target onset. The magnitude of the enhanced neural response was positively correlated with perceptual reports of the cued target being higher in contrast. The results suggest that attention increases the perceived contrast of visual stimuli by boosting early sensory processing in the visual cortex. PMID:20007778

  17. Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli.

    PubMed

    Störmer, Viola S; McDonald, John J; Hillyard, Steven A

    2009-12-29

    The question of whether attention makes sensory impressions appear more intense has been a matter of debate for over a century. Recent psychophysical studies have reported that attention increases apparent contrast of visual stimuli, but the issue continues to be debated. We obtained converging neurophysiological evidence from human observers as they judged the relative contrast of visual stimuli presented to the left and right visual fields following a lateralized auditory cue. Cross-modal cueing of attention boosted the apparent contrast of the visual target in association with an enlarged neural response in the contralateral visual cortex that began within 100 ms after target onset. The magnitude of the enhanced neural response was positively correlated with perceptual reports of the cued target being higher in contrast. The results suggest that attention increases the perceived contrast of visual stimuli by boosting early sensory processing in the visual cortex.

  18. Visual feedback in stuttering therapy

    NASA Astrophysics Data System (ADS)

    Smolka, Elzbieta

    1997-02-01

    The aim of this paper is to present the results concerning the influence of visual echo and reverberation on the speech process of stutterers. The influence of visual stimuli has been compared with that of acoustic and visual-acoustic stimuli. Following this, the methods of implementing visual feedback with the aid of electroluminescent diodes driven by speech signals are presented. The concept of a computerized visual echo based on the acoustic recognition of Polish syllabic vowels is also presented. All the research and trials carried out at our center, aside from their cognitive aims, are generally directed at the development of new speech correctors to be utilized in stuttering therapy.

  19. Shades of yellow: interactive effects of visual and odour cues in a pest beetle

    PubMed Central

    Stevenson, Philip C.; Belmain, Steven R.

    2016-01-01

    Background: The visual ecology of pest insects is poorly studied compared to the role of odour cues in determining their behaviour. Furthermore, the combined effects of both odour and vision on insect orientation are frequently ignored, but could impact behavioural responses. Methods: A locomotion compensator was used to evaluate use of different visual stimuli by a major coleopteran pest of stored grains (Sitophilus zeamais), with and without the presence of host odours (known to be attractive to this species), in an open-loop setup. Results: Some visual stimuli—in particular, one shade of yellow, solid black and high-contrast black-against-white stimuli—elicited positive orientation behaviour from the beetles in the absence of odour stimuli. When host odours were also present, at 90° to the source of the visual stimulus, the beetles presented with yellow and vertical black-on-white grating patterns changed their walking course and typically adopted a path intermediate between the two stimuli. The beetles presented with a solid black-on-white target continued to orient more strongly towards the visual than the odour stimulus. Discussion: Visual stimuli can strongly influence orientation behaviour, even in species where use of visual cues is sometimes assumed to be unimportant, while the outcomes from exposure to multimodal stimuli are unpredictable and need to be determined under differing conditions. The importance of the two modalities of stimulus (visual and olfactory) in food location is likely to depend upon relative stimulus intensity and motivational state of the insect. PMID:27478707

  20. Body Context and Posture Affect Mental Imagery of Hands

    PubMed Central

    Ionta, Silvio; Perruchoud, David; Draganski, Bogdan; Blanke, Olaf

    2012-01-01

    Different visual stimuli have been shown to recruit different mental imagery strategies. However, the role of specific visual stimulus properties related to body context and posture in mental imagery is still under debate. Aiming to dissociate the behavioural correlates of mental processing of visual stimuli characterized by different body contexts, in the present study we investigated whether the mental rotation of stimuli showing either hands as attached to a body (hands-on-body) or not (hands-only) would be based on different mechanisms. We further examined the effects of postural changes on the mental rotation of both stimuli. Thirty healthy volunteers verbally judged the laterality of rotated hands-only and hands-on-body stimuli presented from the dorsum or palm view, while positioning their hands on their knees (front postural condition) or behind their back (back postural condition). Mental rotation of hands-only, but not of hands-on-body, was modulated by the stimulus view and orientation. Additionally, only the hands-only stimuli were mentally rotated at different speeds according to the postural conditions. This indicates that different stimulus-related mechanisms are recruited in mental rotation by changing the bodily context in which a particular body part is presented. The present data suggest that, with respect to hands-only, mental rotation of hands-on-body is less dependent on biomechanical constraints and proprioceptive input. We interpret our results as evidence for preferential processing of visual- rather than kinesthetic-based mechanisms during mental transformation of hands-on-body and hands-only stimuli, respectively. PMID:22479618

  1. Effects of Presentation Type and Visual Control in Numerosity Discrimination: Implications for Number Processing?

    PubMed Central

    Smets, Karolien; Moors, Pieter; Reynvoet, Bert

    2016-01-01

    Performance in a non-symbolic comparison task, in which participants are asked to indicate the larger numerosity of two dot arrays, is assumed to be supported by the Approximate Number System (ANS). This system allows participants to judge numerosity independently from other visual cues. Supporting this idea, previous studies indicated that numerosity can be processed when visual cues are controlled for. Consequently, distinct types of visual cue control are assumed to be interchangeable. However, a previous study showed that the type of visual cue control affected performance when the stimuli were presented simultaneously in numerosity comparison. In the current study, we explored whether the influence of the type of visual cue control on performance disappeared when each stimulus was presented sequentially in numerosity comparison. While the influence of the applied type of visual cue control was significantly more evident in the simultaneous condition, sequentially presenting the stimuli did not completely exclude the influence of distinct types of visual cue control. Altogether, these results indicate that the implicit assumption that it is possible to compare performances across studies with differential visual cue control is unwarranted and that the influence of the type of visual cue control partly depends on the presentation format of the stimuli. PMID:26869967

  2. Age-related differences in audiovisual interactions of semantically different stimuli.

    PubMed

    Viggiano, Maria Pia; Giovannelli, Fabio; Giganti, Fiorenza; Rossi, Arianna; Metitieri, Tiziana; Rebai, Mohamed; Guerrini, Renzo; Cincotta, Massimo

    2017-01-01

    Converging results have shown that adults benefit from congruent multisensory stimulation in the identification of complex stimuli, whereas the developmental trajectory of the ability to integrate multisensory inputs in children is less well understood. In this study we explored the effects of audiovisual semantic congruency on identification of visually presented stimuli belonging to different categories, using a cross-modal approach. Four groups of children ranging in age from 6 to 13 years and adults were administered an object identification task of visually presented pictures belonging to living and nonliving entities. Stimuli were presented in visual, congruent audiovisual, incongruent audiovisual, and noise conditions. Results showed that children under 12 years of age did not benefit from multisensory presentation in speeding up identification. In children the incongruent audiovisual condition had an interfering effect, especially for the identification of living things. These data suggest that the facilitating effect of audiovisual interaction on semantic processing undergoes developmental changes and that the consolidation of adult-like processing of multisensory stimuli begins in late childhood.

  3. The role of prestimulus activity in visual extinction☆

    PubMed Central

    Urner, Maren; Sarri, Margarita; Grahn, Jessica; Manly, Tom; Rees, Geraint; Friston, Karl

    2013-01-01

    Patients with visual extinction following right-hemisphere damage sometimes see and sometimes miss stimuli in the left visual field, particularly when stimuli are presented simultaneously to both visual fields. Awareness of left visual field stimuli is associated with increased activity in bilateral parietal and frontal cortex. However, it is unknown why patients see or miss these stimuli. Previous neuroimaging studies in healthy adults show that prestimulus activity biases perceptual decisions, and biases in visual perception can be attributed to fluctuations in prestimulus activity in task relevant brain regions. Here, we used functional MRI to investigate whether prestimulus activity affected perception in the context of visual extinction following stroke. We measured prestimulus activity in stimulus-responsive cortical areas during an extinction paradigm in a patient with unilateral right parietal damage and visual extinction. This allowed us to compare prestimulus activity on physically identical bilateral trials that either did or did not lead to visual extinction. We found significantly increased activity prior to stimulus presentation in two areas that were also activated by visual stimulation: the left calcarine sulcus and right occipital inferior cortex. Using dynamic causal modelling (DCM) we found that both these differences in prestimulus activity and stimulus evoked responses could be explained by enhanced effective connectivity within and between visual areas, prior to stimulus presentation. Thus, we provide evidence for the idea that differences in ongoing neural activity in visually responsive areas prior to stimulus onset affect awareness in visual extinction, and that these differences are mediated by fluctuations in extrinsic and intrinsic connectivity. PMID:23680398

  4. The role of prestimulus activity in visual extinction.

    PubMed

    Urner, Maren; Sarri, Margarita; Grahn, Jessica; Manly, Tom; Rees, Geraint; Friston, Karl

    2013-07-01

    Patients with visual extinction following right-hemisphere damage sometimes see and sometimes miss stimuli in the left visual field, particularly when stimuli are presented simultaneously to both visual fields. Awareness of left visual field stimuli is associated with increased activity in bilateral parietal and frontal cortex. However, it is unknown why patients see or miss these stimuli. Previous neuroimaging studies in healthy adults show that prestimulus activity biases perceptual decisions, and biases in visual perception can be attributed to fluctuations in prestimulus activity in task relevant brain regions. Here, we used functional MRI to investigate whether prestimulus activity affected perception in the context of visual extinction following stroke. We measured prestimulus activity in stimulus-responsive cortical areas during an extinction paradigm in a patient with unilateral right parietal damage and visual extinction. This allowed us to compare prestimulus activity on physically identical bilateral trials that either did or did not lead to visual extinction. We found significantly increased activity prior to stimulus presentation in two areas that were also activated by visual stimulation: the left calcarine sulcus and right occipital inferior cortex. Using dynamic causal modelling (DCM) we found that both these differences in prestimulus activity and stimulus evoked responses could be explained by enhanced effective connectivity within and between visual areas, prior to stimulus presentation. Thus, we provide evidence for the idea that differences in ongoing neural activity in visually responsive areas prior to stimulus onset affect awareness in visual extinction, and that these differences are mediated by fluctuations in extrinsic and intrinsic connectivity.

  5. The dynamic-stimulus advantage of visual symmetry perception.

    PubMed

    Niimi, Ryosuke; Watanabe, Katsumi; Yokosawa, Kazuhiko

    2008-09-01

    It has been speculated that visual symmetry perception from dynamic stimuli involves mechanisms different from those for static stimuli. However, previous studies found no evidence that dynamic stimuli lead to active temporal processing and improve symmetry detection. In this study, four psychophysical experiments investigated temporal processing in symmetry perception using both dynamic and static stimulus presentations of dot patterns. In Experiment 1, rapid successive presentations of symmetric patterns (e.g., 16 patterns per 853 ms) produced more accurate discrimination of orientations of symmetry axes than static stimuli (a single pattern presented for the full 853 ms). In Experiments 2-4, we confirmed that the dynamic-stimulus advantage depended upon presentation of a large number of unique patterns within a brief period (853 ms) in the dynamic conditions. Evidently, human vision takes advantage of temporal processing for symmetry perception from dynamic stimuli.
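
    As an illustration of the stimulus class involved, the sketch below generates a mirror-symmetric dot pattern of the general kind used in such experiments; the dot count, coordinate range, and axis parameterization are assumptions for the example, not the authors' stimulus specification.

        import numpy as np

        def symmetric_dot_pattern(n_pairs, axis_angle_deg, rng=None):
            """Return (2 * n_pairs, 2) dot coordinates that are mirror-symmetric
            about an axis through the origin at the given angle."""
            rng = rng or np.random.default_rng()
            half = rng.uniform(-1.0, 1.0, size=(n_pairs, 2))
            theta = np.deg2rad(axis_angle_deg)
            d = np.array([np.cos(theta), np.sin(theta)])
            # Householder-style reflection about the axis direction d.
            reflect = 2.0 * np.outer(d, d) - np.eye(2)
            return np.vstack([half, half @ reflect])

        # e.g. one frame of a rapid sequence: 24 dot pairs, 45-degree axis.
        dots = symmetric_dot_pattern(24, axis_angle_deg=45)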

  6. Testing memory for unseen visual stimuli in patients with extinction and spatial neglect.

    PubMed

    Vuilleumier, Patrik; Schwartz, Sophie; Clarke, Karen; Husain, Masud; Driver, Jon

    2002-08-15

    Visual extinction after right parietal damage involves a loss of awareness for stimuli in the contralesional field when presented concurrently with ipsilesional stimuli, although contralesional stimuli are still perceived if presented alone. However, extinguished stimuli can still receive some residual on-line processing, without awareness. Here we examined whether such residual processing of extinguished stimuli can produce implicit and/or explicit memory traces lasting many minutes. We tested four patients with right parietal damage and left extinction on two sessions, each including distinct study and subsequent test phases. At study, pictures of objects were shown briefly in the right, left, or both fields. Patients were asked to name them without memory instructions (Session 1) or to make an indoor/outdoor categorization and memorize them (Session 2). They extinguished most left stimuli on bilateral presentation. During the test (up to 48 min later), fragmented pictures of the previously exposed objects (or novel objects) were presented alone in either field. Patients had to identify each object and then judge whether it had previously been exposed. Identification of fragmented pictures was better for previously exposed objects that had been consciously seen and critically also for objects that had been extinguished (as compared with novel objects), with no influence of the depth of processing during study. By contrast, explicit recollection occurred only for stimuli that were consciously seen at study and increased with depth of processing. These results suggest implicit but not explicit memory for extinguished visual stimuli in parietal patients.

  7. Neural oscillatory deficits in schizophrenia predict behavioral and neurocognitive impairments

    PubMed Central

    Martínez, Antígona; Gaspar, Pablo A.; Hillyard, Steven A.; Bickel, Stephan; Lakatos, Peter; Dias, Elisa C.; Javitt, Daniel C.

    2015-01-01

    Paying attention to visual stimuli is typically accompanied by event-related desynchronizations (ERD) of ongoing alpha (7–14 Hz) activity in visual cortex. The present study used time-frequency based analyses to investigate the role of impaired alpha ERD in visual processing deficits in schizophrenia (Sz). Subjects viewed sinusoidal gratings of high (HSF) and low (LSF) spatial frequency (SF) designed to test functioning of the parvo- vs. magnocellular pathways, respectively. Patients with Sz and healthy controls paid attention selectively to either the LSF or HSF gratings, which were presented in random order. Event-related brain potentials (ERPs) were recorded to all stimuli. As in our previous study, it was found that Sz patients were selectively impaired at detecting LSF target stimuli and that ERP amplitudes to LSF stimuli were diminished, both for the early sensory-evoked components and for the attend minus unattend difference component (the Selection Negativity), which is generally regarded as a specific index of feature-selective attention. In the time-frequency domain, the differential ERP deficits to LSF stimuli were echoed in a virtually absent theta-band phase-locked response to both unattended and attended LSF stimuli (along with relatively intact theta-band activity for HSF stimuli). In contrast to the theta-band evoked responses, which were tightly stimulus-locked, stimulus-induced desynchronizations of ongoing alpha activity were not tightly stimulus-locked and were apparent only in induced power analyses. Sz patients were significantly impaired in the attention-related modulation of ongoing alpha activity for both HSF and LSF stimuli. These deficits correlated with patients’ behavioral deficits in visual information processing as well as with visually based neurocognitive deficits. These findings suggest an additional, pathway-independent, mechanism by which deficits in early visual processing contribute to overall cognitive impairment in Sz. PMID:26190988
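
    A hedged sketch of the kind of induced-power computation described here (one common recipe, not the authors' exact pipeline): subtracting the trial-averaged evoked response isolates activity that is not phase-locked to the stimulus, and percent change from a pre-stimulus baseline yields the ERD (negative values indicate desynchronization).

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def induced_alpha_erd(epochs, fs, baseline=(0, 50)):
            """epochs: (n_trials, n_samples) single-channel EEG; fs in Hz;
            baseline: sample indices of the pre-stimulus window."""
            # Remove the phase-locked (evoked) part to keep induced activity.
            induced = epochs - epochs.mean(axis=0, keepdims=True)
            b, a = butter(4, [7, 14], btype='bandpass', fs=fs)
            alpha = filtfilt(b, a, induced, axis=1)
            power = (np.abs(hilbert(alpha, axis=1)) ** 2).mean(axis=0)
            base = power[baseline[0]:baseline[1]].mean()
            return 100.0 * (power - base) / base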

  8. Synchronization with competing visual and auditory rhythms: bouncing ball meets metronome.

    PubMed

    Hove, Michael J; Iversen, John R; Zhang, Allen; Repp, Bruno H

    2013-07-01

    Synchronization of finger taps with periodically flashing visual stimuli is known to be much more variable than synchronization with an auditory metronome. When one of these rhythms is the synchronization target and the other serves as a distracter at various temporal offsets, strong auditory dominance is observed. However, it has recently been shown that visuomotor synchronization improves substantially with moving stimuli such as a continuously bouncing ball. The present study pitted a bouncing ball against an auditory metronome in a target-distracter synchronization paradigm, with the participants being auditory experts (musicians) and visual experts (video gamers and ball players). Synchronization was still less variable with auditory than with visual target stimuli in both groups. For musicians, auditory stimuli tended to be more distracting than visual stimuli, whereas the opposite was the case for the visual experts. Overall, there was no main effect of distracter modality. Thus, a distracting spatiotemporal visual rhythm can be as effective as a distracting auditory rhythm in its capacity to perturb synchronous movement, but its effectiveness also depends on modality-specific expertise.

  9. Physical Features of Visual Images Affect Macaque Monkey’s Preference for These Images

    PubMed Central

    Funahashi, Shintaro

    2016-01-01

    Animals exhibit different degrees of preference toward various visual stimuli. In addition, it has been shown that strongly preferred stimuli can often act as a reward. The aim of the present study was to determine what features determine the strength of the preference for visual stimuli, in order to examine the neural mechanisms of preference judgment. We used 50 color photographs obtained from the Flickr Material Database (FMD) as original stimuli. Four macaque monkeys performed a simple choice task, in which two stimuli selected randomly from among the 50 stimuli were simultaneously presented on a monitor and the monkeys were required to choose either stimulus by eye movements. We considered that a monkey preferred the chosen stimulus if it continued to look at that stimulus for an additional 6 s, and calculated a choice ratio for each stimulus. Each monkey exhibited a different choice ratio for each of the original 50 stimuli. They tended to select clear, colorful and in-focus stimuli. Complexity and clarity were stronger determinants of preference than colorfulness. Images that included greater amounts of spatial frequency components were selected more frequently. These results indicate that particular physical features of the stimulus can affect the strength of a monkey’s preference and that the complexity, clarity and colorfulness of the stimulus are important determinants of this preference. Neurophysiological studies would be needed to examine whether these features of visual stimuli produce more activation in neurons that participate in this preference judgment. PMID:27853424

  10. Affective and physiological correlates of the perception of unimodal and bimodal emotional stimuli.

    PubMed

    Rosa, Pedro J; Oliveira, Jorge; Alghazzawi, Daniyal; Fardoun, Habib; Gamito, Pedro

    2017-08-01

    Despite the multisensory nature of perception, previous research on emotions has focused on unimodal emotional cues with visual stimuli. To the best of our knowledge, there is no evidence on the extent to which incongruent emotional cues from visual and auditory sensory channels affect pupil size. We therefore aimed to investigate the effects of perceiving audiovisual emotional information on physiological and affective responses, and to determine the impact of mismatched cues in emotional perception on these physiological indexes. Pupil size, electrodermal activity and affective subjective responses were recorded while 30 participants were exposed to visual and auditory stimuli with varied emotional content in three different experimental conditions: pictures and sounds presented alone (unimodal), emotionally matched audio-visual stimuli (bimodal congruent) and emotionally mismatched audio-visual stimuli (bimodal incongruent). The data revealed no effect of emotional incongruence on physiological and affective responses. On the other hand, pupil size covaried with the skin conductance response (SCR), but the subjective experience was partially dissociated from autonomic responses. Emotional stimuli are thus able to trigger physiological responses regardless of valence, sensory modality or level of emotional congruence.

  11. [Ventriloquism and audio-visual integration of voice and face].

    PubMed

    Yokosawa, Kazuhiko; Kanaya, Shoko

    2012-07-01

    Presenting synchronous auditory and visual stimuli in separate locations creates the illusion that the sound originates from the direction of the visual stimulus. Participants' auditory localization bias, called the ventriloquism effect, has revealed factors affecting the perceptual integration of audio-visual stimuli. However, many studies on audio-visual processes have focused on performance in simplified experimental situations, with a single stimulus in each sensory modality. These results cannot necessarily explain our perceptual behavior in natural scenes, where various signals exist within a single sensory modality. In the present study we report the contributions of a cognitive factor, that is, the audio-visual congruency of speech, although this factor has often been underestimated in previous ventriloquism research. Thus, we investigated the contribution of speech congruency on the ventriloquism effect using a spoken utterance and two videos of a talking face. The salience of facial movements was also manipulated. As a result, when bilateral visual stimuli are presented in synchrony with a single voice, cross-modal speech congruency was found to have a significant impact on the ventriloquism effect. This result also indicated that more salient visual utterances attracted participants' auditory localization. The congruent pairing of audio-visual utterances elicited greater localization bias than did incongruent pairing, whereas previous studies have reported little dependency on the reality of stimuli in ventriloquism. Moreover, audio-visual illusory congruency, owing to the McGurk effect, caused substantial visual interference to auditory localization. This suggests that a greater flexibility in responding to multi-sensory environments exists than has been previously considered.

  12. Attentional load modulates responses of human primary visual cortex to invisible stimuli.

    PubMed

    Bahrami, Bahador; Lavie, Nilli; Rees, Geraint

    2007-03-20

    Visual neuroscience has long sought to determine the extent to which stimulus-evoked activity in visual cortex depends on attention and awareness. Some influential theories of consciousness maintain that the allocation of attention is restricted to conscious representations [1, 2]. However, in the load theory of attention [3], competition between task-relevant and task-irrelevant stimuli for limited-capacity attention does not depend on conscious perception of the irrelevant stimuli. The critical test is whether the level of attentional load in a relevant task would determine unconscious neural processing of invisible stimuli. Human participants were scanned with high-field fMRI while they performed a foveal task of low or high attentional load. Irrelevant, invisible monocular stimuli were simultaneously presented peripherally and were continuously suppressed by a flashing mask in the other eye [4]. Attentional load in the foveal task strongly modulated retinotopic activity evoked in primary visual cortex (V1) by the invisible stimuli. Contrary to traditional views [1, 2, 5, 6], we found that availability of attentional capacity determines neural representations related to unconscious processing of continuously suppressed stimuli in human primary visual cortex. Spillover of attention to cortical representations of invisible stimuli (under low load) cannot be a sufficient condition for their awareness.

  13. Front-Presented Looming Sound Selectively Alters the Perceived Size of a Visual Looming Object.

    PubMed

    Yamasaki, Daiki; Miyoshi, Kiyofumi; Altmann, Christian F; Ashida, Hiroshi

    2018-07-01

    In spite of accumulating evidence for a spatial rule governing cross-modal interaction according to the spatial consistency of stimuli, it is still unclear whether 3D spatial consistency (i.e., front/rear of the body) of stimuli also regulates audiovisual interaction. We investigated how sounds with increasing/decreasing intensity (looming/receding sounds) presented from the front and rear space of the body affect the size perception of a dynamic visual object. Participants performed a size-matching task (Experiments 1 and 2) and a size-adjustment task (Experiment 3) on visual stimuli with increasing/decreasing diameter, while being exposed to a front- or rear-presented sound with increasing/decreasing intensity. Throughout these experiments, we demonstrated that only the front-presented looming sound caused overestimation of the size of the spatially consistent looming visual stimulus, but not of the spatially inconsistent or receding visual stimulus. The receding sound had no significant effect on vision. Our results revealed that a looming sound alters dynamic visual size perception depending on the consistency in the approaching quality and the front-rear spatial location of the audiovisual stimuli, suggesting that the human brain processes audiovisual inputs differently based on their 3D spatial consistency. This selective interaction between looming signals should contribute to faster detection of approaching threats. Our findings extend the spatial rule governing audiovisual interaction into 3D space.

  14. Effects of dividing attention during encoding on perceptual priming of unfamiliar visual objects.

    PubMed

    Soldan, Anja; Mangels, Jennifer A; Cooper, Lynn A

    2008-11-01

    According to the distractor-selection hypothesis (Mulligan, 2003), dividing attention during encoding reduces perceptual priming when responses to non-critical (i.e., distractor) stimuli are selected frequently and simultaneously with critical stimulus encoding. Because direct support for this hypothesis comes exclusively from studies using familiar word stimuli, the present study tested whether the predictions of the distractor-selection hypothesis extend to perceptual priming of unfamiliar visual objects using the possible/impossible object decision test. Consistent with the distractor-selection hypothesis, Experiments 1 and 2 found no reduction in priming when the non-critical stimuli were presented infrequently and non-synchronously with the critical target stimuli, even though explicit recognition memory was reduced. In Experiment 3, non-critical stimuli were presented frequently and simultaneously during encoding of critical stimuli; however, no decrement in priming was detected, even when encoding time was reduced. These results suggest that priming in the possible/impossible object decision test is relatively immune to reductions in central attention and that not all aspects of the distractor-selection hypothesis generalise to priming of unfamiliar visual objects. Implications for theoretical models of object decision priming are discussed.

  15. Effects of dividing attention during encoding on perceptual priming of unfamiliar visual objects

    PubMed Central

    Soldan, Anja; Mangels, Jennifer A.; Cooper, Lynn A.

    2008-01-01

    According to the distractor-selection hypothesis (Mulligan, 2003), dividing attention during encoding reduces perceptual priming when responses to non-critical (i.e., distractor) stimuli are selected frequently and simultaneously with critical stimulus encoding. Because direct support for this hypothesis comes exclusively from studies using familiar word stimuli, the present study tested whether the predictions of the distractor-selection hypothesis extend to perceptual priming of unfamiliar visual objects using the possible/impossible object-decision test. Consistent with the distractor-selection hypothesis, Experiments 1 and 2 found no reduction in priming when the non-critical stimuli were presented infrequently and non-synchronously with the critical target stimuli, even though explicit recognition memory was reduced. In Experiment 3, non-critical stimuli were presented frequently and simultaneously during encoding of critical stimuli; however, no decrement in priming was detected, even when encoding time was reduced. These results suggest that priming in the possible/impossible object-decision test is relatively immune to reductions in central attention and that not all aspects of the distractor-selection hypothesis generalize to priming of unfamiliar visual objects. Implications for theoretical models of object-decision priming are discussed. PMID:18821167

  16. Duration estimates within a modality are integrated sub-optimally

    PubMed Central

    Cai, Ming Bo; Eagleman, David M.

    2015-01-01

    Perceived duration can be influenced by various properties of sensory stimuli. For example, visual stimuli of higher temporal frequency are perceived to last longer than those of lower temporal frequency. How does the brain form a representation of duration when each of two simultaneously presented stimuli influences perceived duration in a different way? To answer this question, we investigated the perceived duration of a pair of dynamic visual stimuli of different temporal frequencies in comparison to that of a single visual stimulus of either low or high temporal frequency. We found that the duration representation of simultaneously occurring visual stimuli is best described by weighting the estimates of duration based on each individual stimulus. However, the weighting performance deviates from the prediction of statistically optimal integration. In addition, we provided a Bayesian account to explain a difference in the apparent sensitivity of the psychometric curves introduced by the order in which the two stimuli are displayed in a two-alternative forced-choice task. PMID:26321965
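
    The statistically optimal benchmark referred to here is standard inverse-variance (reliability) weighting of independent Gaussian estimates; a worked sketch with invented numbers:

        import numpy as np

        def optimal_combination(est, sigma):
            """Maximum-likelihood fusion of independent Gaussian duration
            estimates: each weight is the estimate's inverse variance."""
            est, sigma = np.asarray(est, float), np.asarray(sigma, float)
            w = (1.0 / sigma**2) / np.sum(1.0 / sigma**2)
            combined_sigma = np.sqrt(1.0 / np.sum(1.0 / sigma**2))
            return np.sum(w * est), combined_sigma

        # e.g. a high-frequency stimulus judged as 1.2 s (sd 0.20) paired with
        # a low-frequency one judged as 1.0 s (sd 0.10):
        print(optimal_combination([1.2, 1.0], [0.20, 0.10]))  # -> (1.04, ~0.089)
        # Observed weights that deviate from this prediction are what the
        # authors describe as sub-optimal integration.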

  17. Accessory stimulus modulates executive function during stepping task

    PubMed Central

    Watanabe, Tatsunori; Koyama, Soichiro; Tanabe, Shigeo

    2015-01-01

    When multiple sensory modalities are simultaneously presented, reaction time can be reduced while interference increases. The purpose of this research was to examine the effects of task-irrelevant acoustic accessory stimuli presented simultaneously with visual imperative stimuli on executive function during stepping. Executive functions were assessed by analyzing temporal events and errors in the initial weight transfer of the postural responses prior to a step (anticipatory postural adjustment errors). Eleven healthy young adults stepped forward in response to a visual stimulus. We applied a choice reaction time task and the Simon task, which consisted of congruent and incongruent conditions. Accessory stimuli were randomly presented with the visual stimuli. Compared with trials without accessory stimuli, anticipatory postural adjustment error rates were higher in trials with accessory stimuli in the incongruent condition, and reaction times were shorter in trials with accessory stimuli in all task conditions. Analyses after dividing trials according to whether an anticipatory postural adjustment error occurred or not revealed that the reaction times of trials with anticipatory postural adjustment errors were reduced more than those of trials without such errors in the incongruent condition. These results suggest that accessory stimuli modulate the initial motor programming of stepping by lowering the decision threshold and, exclusively under spatial incompatibility, facilitate automatic response activation. The present findings advance knowledge of intersensory judgment processes during stepping and may aid in the development of intervention and evaluation tools for individuals at risk of falls. PMID:25925321

  18. Audio-visual synchrony and feature-selective attention co-amplify early visual processing.

    PubMed

    Keitel, Christian; Müller, Matthias M

    2016-05-01

    Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, the spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when the respective stimuli were attended versus unattended. We found that both attending to the colour of a stimulus and its synchrony with the tone enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
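
    A minimal sketch of quantifying such frequency-tagged steady-state responses in the spectral domain, assuming a trial-averaged single-channel time course; note that separating rates as close as 3.14 and 3.63 Hz requires a window several seconds long, since FFT resolution is the reciprocal of the window duration.

        import numpy as np

        def ssr_amplitude(eeg, fs, freqs=(3.14, 3.63)):
            """eeg: 1-D trial-averaged time course; fs in Hz.
            Returns the spectral amplitude at each tagging frequency."""
            n = len(eeg)
            amplitude = 2.0 * np.abs(np.fft.rfft(eeg)) / n
            fft_freqs = np.fft.rfftfreq(n, d=1.0 / fs)
            # Read out the bin nearest to each stimulation rate.
            return [amplitude[np.argmin(np.abs(fft_freqs - f))] for f in freqs]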

  19. Auditory enhancement of visual perception at threshold depends on visual abilities.

    PubMed

    Caclin, Anne; Bouchet, Patrick; Djoulah, Farida; Pirat, Elodie; Pernier, Jacques; Giard, Marie-Hélène

    2011-06-17

    Whether or not multisensory interactions can improve detection thresholds, and thus widen the range of perceptible events, is a long-standing debate. Here we revisit this question by testing the influence of auditory stimuli on the visual detection threshold in subjects exhibiting a wide range of visual-only performance. Above the perceptual threshold, crossmodal interactions have indeed been reported to depend on the subject's performance when the modalities are presented in isolation. We thus tested normal-seeing subjects and short-sighted subjects wearing their usual glasses. We used a paradigm limiting potential shortcomings of previous studies: we chose a criterion-free threshold measurement procedure and precluded exogenous cueing effects by systematically presenting a visual cue whenever a visual target (a faint Gabor patch) might occur. Using this carefully controlled procedure, we found that concurrent sounds only improved visual detection thresholds in the sub-group of subjects exhibiting the poorest performance in the visual-only conditions. In these subjects, for oblique orientations of the visual stimuli (but not for vertical or horizontal targets), the auditory improvement was still present when visual detection was already aided by flanking visual stimuli generating a collinear facilitation effect. These findings highlight that crossmodal interactions are most efficient at improving perceptual performance when an isolated modality is deficient.

  20. Theta Oscillations in Visual Cortex Emerge with Experience to Convey Expected Reward Time and Experienced Reward Rate

    PubMed Central

    Zold, Camila L.

    2015-01-01

    The primary visual cortex (V1) is widely regarded as faithfully conveying the physical properties of visual stimuli. Thus, experience-induced changes in V1 are often interpreted as improving visual perception (i.e., perceptual learning). Here we describe how, with experience, cue-evoked oscillations emerge in V1 to convey expected reward time as well as to relate experienced reward rate. We show, in chronic multisite local field potential recordings from rat V1, that repeated presentation of visual cues induces the emergence of visually evoked oscillatory activity. Early in training, the visually evoked oscillations relate to the physical parameters of the stimuli. However, with training, the oscillations evolve to relate the time in which those stimuli foretell expected reward. Moreover, the oscillation prevalence reflects the reward rate recently experienced by the animal. Thus, training induces experience-dependent changes in V1 activity that relate to what those stimuli have come to signify behaviorally: when to expect future reward and at what rate. PMID:26134643

  1. Repetition priming of face recognition in a serial choice reaction-time task.

    PubMed

    Roberts, T; Bruce, V

    1989-05-01

    Marshall & Walker (1987) found that pictorial stimuli yield visual priming that is disrupted by an unpredictable visual event in the response-stimulus interval. They argue that visual stimuli are represented in memory in the form of distinct visual and object codes. Bruce & Young (1986) propose similar pictorial, structural and semantic codes which mediate the recognition of faces, yet repetition priming results obtained with faces as stimuli (Bruce & Valentine, 1985), and with objects (Warren & Morton, 1982), are quite different from those of Marshall & Walker (1987), in the sense that recognition is facilitated by pictures presented 20 minutes earlier. The experiment reported here used different views of familiar and unfamiliar faces as stimuli in a serial choice reaction-time task and found that, with identical pictures, repetition priming survives an intervening item requiring a response, with both familiar and unfamiliar faces. Furthermore, with familiar faces such priming was present even when the view of the prime was different from that of the target. The theoretical implications of these results are discussed.

  2. Brain activation by visual erotic stimuli in healthy middle aged males.

    PubMed

    Kim, S W; Sohn, D W; Cho, Y-H; Yang, W S; Lee, K-U; Juh, R; Ahn, K-J; Chung, Y-A; Han, S-I; Lee, K H; Lee, C U; Chae, J-H

    2006-01-01

    The objective of the present study was to identify brain centers whose activity changes are related to erotic visual stimuli in healthy, heterosexual, middle aged males. Ten heterosexual, right-handed males with normal sexual function were entered into the present study (mean age 52 years, range 46-55). All potential subjects were screened in a 1 h interview and were encouraged to fill out questionnaires including the Brief Male Sexual Function Inventory. All subjects with a history of sexual arousal disorder or erectile dysfunction were excluded. We performed functional brain magnetic resonance imaging (fMRI) in the male volunteers while a film alternating between erotic and nonerotic segments was played for 14 min and 9 s. The major areas of activation associated with sexual arousal to visual stimuli were the occipitotemporal area, anterior cingulate gyrus, insula, orbitofrontal cortex, and caudate nucleus. However, the hypothalamus and thalamus were not activated. We suggest that the nonactivation of the hypothalamus and thalamus in middle aged males may be responsible for the lesser physiological arousal in response to erotic visual stimuli.

  3. Effects of audio-visual presentation of target words in word translation training

    NASA Astrophysics Data System (ADS)

    Akahane-Yamada, Reiko; Komaki, Ryo; Kubo, Rieko

    2004-05-01

    Komaki and Akahane-Yamada (Proc. ICA2004) used a 2AFC translation task in vocabulary training, in which the target word is presented visually in the orthographic form of one language, and the appropriate meaning in another language has to be chosen between two choices. The present paper examined the effect of audio-visual presentation of the target word when native speakers of Japanese learn to translate English words into Japanese. Pairs of English words contrasted in several phonemic distinctions (e.g., /r/-/l/, /b/-/v/, etc.) were used as word materials, and presented in three conditions: visual-only (V), audio-only (A), and audio-visual (AV) presentations. Identification accuracy of those words produced by two talkers was also assessed. During the pretest, the accuracy for A stimuli was lowest, implying that insufficient translation ability and listening ability interact with each other when an aurally presented word has to be translated. However, there was no difference in accuracy between V and AV stimuli, suggesting that participants translate the words depending on visual information only. The effect of translation training using AV stimuli did not transfer to identification ability, showing that additional audio information during translation does not help improve speech perception. Further examination is necessary to determine an effective L2 training method. [Work supported by TAO, Japan.]

  4. Visual sensitivity for luminance and chromatic stimuli during the execution of smooth pursuit and saccadic eye movements.

    PubMed

    Braun, Doris I; Schütz, Alexander C; Gegenfurtner, Karl R

    2017-07-01

    Visual sensitivity is dynamically modulated by eye movements. During saccadic eye movements, sensitivity is reduced selectively for low-spatial frequency luminance stimuli and largely unaffected for high-spatial frequency luminance and chromatic stimuli (Nature 371 (1994), 511-513). During smooth pursuit eye movements, sensitivity for low-spatial frequency luminance stimuli is moderately reduced while sensitivity for chromatic and high-spatial frequency luminance stimuli is even increased (Nature Neuroscience, 11 (2008), 1211-1216). Since these effects are at least partly of different polarity, we investigated the combined effects of saccades and smooth pursuit on visual sensitivity. For the time course of chromatic sensitivity, we found that detection rates increased slightly around pursuit onset. During saccades to static and moving targets, detection rates dropped briefly before the saccade and reached a minimum at saccade onset. This reduction of chromatic sensitivity was present whenever a saccade was executed and it was not modified by subsequent pursuit. We also measured contrast sensitivity for flashed high- and low-spatial frequency luminance and chromatic stimuli during saccades and pursuit. During saccades, the reduction of contrast sensitivity was strongest for low-spatial frequency luminance stimuli (about 90%). However, a significant reduction was also present for chromatic stimuli (about 58%). Chromatic sensitivity was increased during smooth pursuit (about 12%). These results suggest that the modulation of visual sensitivity during saccades and smooth pursuit is more complex than previously assumed.

  5. Submillisecond unmasked subliminal visual stimuli evoke electrical brain responses.

    PubMed

    Sperdin, Holger F; Spierer, Lucas; Becker, Robert; Michel, Christoph M; Landis, Theodor

    2015-04-01

    Subliminal perception is strongly associated with the processing of meaningful or emotional information and has mostly been studied using visual masking. In this study, we used high density 256-channel EEG coupled with a liquid crystal display (LCD) tachistoscope to characterize the spatio-temporal dynamics of the brain response to visual checkerboard stimuli (Experiment 1) or blank stimuli (Experiment 2) presented without a mask for 1 ms (visible), 500 µs (partially visible), and 250 µs (subliminal), by applying time-wise, assumption-free nonparametric randomization statistics on the strength and on the topography of the high-density scalp-recorded electric field. Stimulus visibility was assessed in a third separate behavioral experiment. Results revealed that unmasked checkerboards presented subliminally for 250 µs evoked weak but detectable visual evoked potential (VEP) responses. When the checkerboards were replaced by blank stimuli, there was no longer evidence of an evoked response. Furthermore, the checkerboard VEPs were modulated topographically between 243 and 296 ms post-stimulus onset as a function of stimulus duration, indicative of the engagement of distinct configurations of active brain networks. A distributed electrical source analysis localized this modulation within the right superior parietal lobule near the precuneus. These results show the presence of a brain response to submillisecond unmasked subliminal visual stimuli independently of their emotional saliency or meaningfulness and open an avenue for new investigations of subliminal stimulation without using visual masking.

  6. Temporal attention is involved in the enhancement of attentional capture with task difficulty: an event-related brain potential study.

    PubMed

    Sugimoto, Fumie; Kimura, Motohiro; Takeda, Yuji; Katayama, Jun'ichi

    2017-08-16

    In a three-stimulus oddball task, the amplitude of P3a elicited by deviant stimuli increases with an increase in the difficulty of discriminating between standard and target stimuli (i.e. task-difficulty effect on P3a), indicating that attentional capture by deviant stimuli is enhanced with an increase in task difficulty. This enhancement of attentional capture may be explained in terms of the modulation of modality-nonspecific temporal attention; that is, the participant's attention directed to the predicted timing of stimulus presentation is stronger when the task difficulty increases, which results in enhanced attentional capture. The present study examined this possibility with a modified three-stimulus oddball task consisting of a visual standard, a visual target, and four types of deviant stimuli defined by a combination of two modalities (visual and auditory) and two presentation timings (predicted and unpredicted). We expected that if the modulation of temporal attention is involved in enhanced attentional capture, then the task-difficulty effect on P3a should be reduced for unpredicted compared with predicted deviant stimuli irrespective of their modality; this is because the influence of temporal attention should be markedly weaker for unpredicted compared with predicted deviant stimuli. The results showed that the task-difficulty effect on P3a was significantly reduced for unpredicted compared with predicted deviant stimuli in both the visual and the auditory modalities. This result suggests that the modulation of modality-nonspecific temporal attention induced by the increase in task difficulty is at least partly involved in the enhancement of attentional capture by deviant stimuli.

  7. The effect of two different visual presentation modalities on the narratives of mainstream grade 3 children.

    PubMed

    Klop, D; Engelbrecht, L

    2013-12-01

    This study investigated whether a dynamic visual presentation method (a soundless animated video presentation) would elicit better narratives than a static visual presentation method (a wordless picture book). Twenty mainstream grade 3 children were randomly assigned to two groups and assessed with one of the visual presentation methods. Narrative performance was measured in terms of micro- and macrostructure variables. Microstructure variables included productivity (total number of words, total number of T-units), syntactic complexity (mean length of T-unit) and lexical diversity measures (number of different words). Macrostructure variables included episodic structure in terms of goal-attempt-outcome (GAO) sequences. Both visual presentation modalities elicited narratives of similar quantity and quality in terms of the micro- and macrostructure variables that were investigated. Animation of picture stimuli did not elicit better narratives than static picture stimuli.

  8. Testing a Poisson Counter Model for Visual Identification of Briefly Presented, Mutually Confusable Single Stimuli in Pure Accuracy Tasks

    ERIC Educational Resources Information Center

    Kyllingsbaek, Soren; Markussen, Bo; Bundesen, Claus

    2012-01-01

    The authors propose and test a simple model of the time course of visual identification of briefly presented, mutually confusable single stimuli in pure accuracy tasks. The model implies that during stimulus analysis, tentative categorizations that stimulus i belongs to category j are made at a constant Poisson rate, v(i, j). The analysis is…
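
    The core mechanism lends itself to a short simulation; the sketch below assumes, beyond what the truncated abstract states, that the response is the category with the most tentative categorizations at the end of stimulus analysis, with ties broken at random.

        import numpy as np

        def poisson_counter_response(v, i, duration, n_trials=10000, rng=None):
            """v: (n_stimuli, n_categories) rate matrix in categorizations/s;
            i: index of the presented stimulus; duration: exposure in seconds.
            Returns the simulated probability of each category response."""
            rng = rng or np.random.default_rng()
            rates = np.asarray(v, float)[i]
            counts = rng.poisson(rates * duration, size=(n_trials, len(rates)))
            # Sub-integer noise breaks ties randomly without reordering counts.
            choice = np.argmax(counts + rng.uniform(0, 0.5, counts.shape), axis=1)
            return np.bincount(choice, minlength=len(rates)) / n_trials

        # Two mutually confusable stimuli, 50 ms exposure:
        print(poisson_counter_response([[30.0, 10.0], [12.0, 28.0]], i=0,
                                       duration=0.05))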

  9. Differences in apparent straightness of dot and line stimuli.

    NASA Technical Reports Server (NTRS)

    Parlee, M. B.

    1972-01-01

    An investigation has been made of anisotropic responses to contoured and noncontoured stimuli to obtain an insight into the way these stimuli are processed. For this purpose, eight subjects judged the alignment of minimally contoured (3 dot) and contoured (line) stimuli. Stimuli, presented to each eye separately, vertically subtended either 8 or 32 deg visual angle and were located 10 deg left, center, or 10 deg right in the visual field. Location-dependent deviations from physical straightness were larger for dot stimuli than for lines. The results were the same for the two eyes. In a second experiment, subjects judged the alignment of stimuli composed of different densities of dots. Apparent straightness for these stimuli was the same as for lines. The results are discussed in terms of alternative mechanisms for analysis of contoured and minimally contoured stimuli.

  10. Detection of differential viewing patterns to erotic and non-erotic stimuli using eye-tracking methodology.

    PubMed

    Lykins, Amy D; Meana, Marta; Kambe, Gretchen

    2006-10-01

    As a first step in the investigation of the role of visual attention in the processing of erotic stimuli, eye-tracking methodology was employed to measure eye movements during erotic scene presentation. Because eye-tracking is a novel methodology in sexuality research, we attempted to determine whether the eye-tracker could detect differences (should they exist) in visual attention to erotic and non-erotic scenes. A total of 20 men and 20 women were presented with a series of erotic and non-erotic images while their eye movements were tracked during image presentation. Comparisons between erotic and non-erotic image groups showed significant differences on two of three dependent measures of visual attention (number of fixations and total time) in both men and women. As hypothesized, there was a significant Stimulus x Scene Region interaction, indicating that participants visually attended to the body more in the erotic stimuli than in the non-erotic stimuli, as evidenced by a greater number of fixations and longer total time devoted to that region. These findings provide support for the application of eye-tracking methodology as a measure of visual attentional capture in sexuality research. Future applications of this methodology to expand our knowledge of the role of cognition in sexuality are suggested.
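
    A sketch of how the two fixation-based measures reported here (number of fixations and total time) can be aggregated per scene region; the fixation-record format and the region boxes are assumptions for the example.

        def region_measures(fixations, regions):
            """fixations: iterable of (x, y, duration_ms) tuples; regions: dict
            mapping a region name to a bounding box (x0, y0, x1, y1), listed
            most specific first. Returns {region: (n_fixations, total_ms)}."""
            stats = {name: [0, 0.0] for name in regions}
            for x, y, dur in fixations:
                for name, (x0, y0, x1, y1) in regions.items():
                    if x0 <= x <= x1 and y0 <= y <= y1:
                        stats[name][0] += 1
                        stats[name][1] += dur
                        break  # assign each fixation to one region only
            return {name: tuple(v) for name, v in stats.items()}

        regions = {'body': (200, 100, 600, 500), 'scene': (0, 0, 1024, 768)}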

  11. Microcontroller based fibre-optic visual presentation system for multisensory neuroimaging.

    PubMed

    Kurniawan, Veldri; Klemen, Jane; Chambers, Christopher D

    2011-10-30

    Presenting visual stimuli in physical 3D space during fMRI experiments carries significant technical challenges. Certain types of multisensory visuotactile experiments and visuomotor tasks require presentation of visual stimuli in peripersonal space, which cannot be accommodated by ordinary projection screens or binocular goggles. However, light points produced by a group of LEDs can be transmitted through fibre-optic cables and positioned anywhere inside the MRI scanner. Here we describe the design and implementation of a microcontroller-based programmable digital device for controlling fibre-optically transmitted LED lights from a PC. The main feature of this device is the ability to independently control the colour, brightness, and timing of each LED. Moreover, the device was designed in a modular and extensible way, which enables easy adaptation for various experimental paradigms. The device was tested and validated in three fMRI experiments involving basic visual perception, a simple colour discrimination task, and a blocked multisensory visuo-tactile task. The results revealed significant lateralized activation in occipital cortex of all participants, a reliable response in ventral occipital areas to colour stimuli elicited by the device, and strong activations in multisensory brain regions in the multisensory task. Overall, these findings confirm the suitability of this device for presenting complex fibre-optic visual and cross-modal stimuli inside the scanner.
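
    The abstract does not specify the device's command protocol, so the following is a purely hypothetical sketch of PC-side control over a serial link (using pyserial); the frame format, port name, and baud rate are all invented for illustration.

        import serial  # pyserial

        # Hypothetical frame: [sync byte, LED index, R, G, B, brightness].
        def set_led(port, led_index, rgb, brightness):
            port.write(bytes([0xFF, led_index, *rgb, brightness]))

        # '/dev/ttyUSB0' and 115200 baud are assumptions for the example.
        with serial.Serial('/dev/ttyUSB0', baudrate=115200, timeout=1) as port:
            set_led(port, led_index=3, rgb=(255, 0, 0), brightness=128)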

  12. Inverse Target- and Cue-Priming Effects of Masked Stimuli

    ERIC Educational Resources Information Center

    Mattler, Uwe

    2007-01-01

    The processing of a visual target that follows a briefly presented prime stimulus can be facilitated if prime and target stimuli are similar. In contrast to these positive priming effects, inverse priming effects (or negative compatibility effects) have been found when a mask follows prime stimuli before the target stimulus is presented: Responses…

  13. Postural time-to-contact as a precursor of visually induced motion sickness.

    PubMed

    Li, Ruixuan; Walter, Hannah; Curry, Christopher; Rath, Ruth; Peterson, Nicolette; Stoffregen, Thomas A

    2018-06-01

    The postural instability theory of motion sickness predicts that subjective symptoms of motion sickness will be preceded by unstable control of posture. In previous studies, this prediction has been confirmed with measures of the spatial magnitude and the temporal dynamics of postural activity. In the present study, we examine whether precursors of visually induced motion sickness might exist in postural time-to-contact, a measure of postural activity that is related to the risk of falling. Standing participants were exposed to oscillating visual motion stimuli in a standard laboratory protocol. Both before and during exposure to visual motion stimuli, we monitored the kinematics of the body's center of pressure. We predicted that postural activity would differ between participants who reported motion sickness and those who did not, and that these differences would exist before participants experienced subjective symptoms of motion sickness. During exposure to visual motion stimuli, the multifractality of sway differed between the Well and Sick groups. Postural time-to-contact differed between the Well and Sick groups during exposure to visual motion stimuli, but also before exposure to any motion stimuli. The results provide a qualitatively new type of support for the postural instability theory of motion sickness.
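
    Postural time-to-contact is typically obtained by extrapolating the centre-of-pressure trajectory to a stability boundary; the sketch below assumes a constant-velocity extrapolation and a circular boundary (published analyses often also use acceleration and a boundary fitted to the individual's base of support).

        import numpy as np

        def time_to_contact(pos, vel, radius):
            """Smallest positive time at which the centre of pressure, moving
            at constant velocity, reaches a circular boundary of given radius.
            pos, vel: 2-D (x, y) arrays in the same units as radius."""
            a = vel @ vel
            b = 2.0 * (pos @ vel)
            c = pos @ pos - radius**2
            disc = b**2 - 4.0 * a * c
            if a == 0.0 or disc < 0.0:
                return np.inf  # stationary, or never reaches the boundary
            roots = (-b + np.sqrt(disc) * np.array([1.0, -1.0])) / (2.0 * a)
            positive = roots[roots > 0]
            return positive.min() if positive.size else np.inf

        print(time_to_contact(np.array([1.0, 0.0]), np.array([0.5, 0.0]), 4.0))
        # -> 6.0 s; shorter values indicate less stable postural control.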

  14. Endogenous Sequential Cortical Activity Evoked by Visual Stimuli

    PubMed Central

    Miller, Jae-eun Kang; Hamm, Jordan P.; Jackson, Jesse; Yuste, Rafael

    2015-01-01

    Although the functional properties of individual neurons in primary visual cortex have been studied intensely, little is known about how neuronal groups could encode changing visual stimuli using temporal activity patterns. To explore this, we used in vivo two-photon calcium imaging to record the activity of neuronal populations in primary visual cortex of awake mice in the presence and absence of visual stimulation. Multidimensional analysis of the network activity allowed us to identify neuronal ensembles defined as groups of cells firing in synchrony. These synchronous groups of neurons were themselves activated in sequential temporal patterns, which repeated at much higher proportions than chance and were triggered by specific visual stimuli such as natural visual scenes. Interestingly, sequential patterns were also present in recordings of spontaneous activity without any sensory stimulation and were accompanied by precise firing sequences at the single-cell level. Moreover, intrinsic dynamics could be used to predict the occurrence of future neuronal ensembles. Our data demonstrate that visual stimuli recruit similar sequential patterns to the ones observed spontaneously, consistent with the hypothesis that already existing Hebbian cell assemblies firing in predefined temporal sequences could be the microcircuit substrate that encodes visual percepts changing in time. PMID:26063915

  15. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events.

    PubMed

    Stekelenburg, Jeroen J; Vroomen, Jean

    2012-01-01

    In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV - V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that this N1 suppression was greater for the spatially congruent stimuli. A very early audiovisual interaction was also found at 40-60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.
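
    The sub-additivity criterion used here (AV - V < A) reduces to a simple per-subject comparison. The sketch below is a hedged illustration, assuming grand-average ERP arrays at one auditory electrode and an 80-120 ms N1 window; all input names and the window are hypothetical, and the one-sided test requires scipy >= 1.6.

    ```python
    import numpy as np
    from scipy import stats

    def n1_subadditivity(erp_av, erp_v, erp_a, times, window=(0.08, 0.12)):
        """Test the sub-additive pattern AV - V < A in a latency window.

        erp_av, erp_v, erp_a : (n_subjects, n_times) mean ERPs at one
        auditory electrode; times : (n_times,) latencies in seconds.
        """
        sel = (times >= window[0]) & (times <= window[1])
        residual = (erp_av - erp_v)[:, sel].mean(axis=1)  # auditory part of AV
        auditory = erp_a[:, sel].mean(axis=1)
        # N1 is a negativity, so a suppressed (smaller-magnitude) N1 means
        # the residual is less negative than A, i.e. residual > auditory.
        t, p = stats.ttest_rel(residual, auditory, alternative='greater')
        return residual.mean(), auditory.mean(), t, p
    ```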

  16. Cross-Modal Matching of Audio-Visual German and French Fluent Speech in Infancy

    PubMed Central

    Kubicek, Claudia; Hillairet de Boisferon, Anne; Dupierrix, Eve; Pascalis, Olivier; Lœvenbruck, Hélène; Gervain, Judit; Schwarzer, Gudrun

    2014-01-01

    The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6- and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants’ audio-visual matching ability of native (German) and non-native (French) fluent speech was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants were capable of matching native as well as non-native audio and visual speech stimuli, whereas 6-month-olds perceived the audio-visual correspondence of native language stimuli only. This suggests that intersensory matching narrows for fluent speech between 4.5 and 6 months of age. In Experiment 2, auditory and visual speech information was presented simultaneously, therefore, providing temporal synchrony cues. Here, 6-month-olds were found to match native as well as non-native speech indicating facilitation of temporal synchrony cues on the intersensory perception of non-native fluent speech. Intriguingly, despite the fact that audio and visual stimuli cohered temporally, 12-month-olds matched the non-native language only. Results were discussed with regard to multisensory perceptual narrowing during the first year of life. PMID:24586651

  17. Using task effort and pupil size to track covert shifts of visual attention independently of a pupillary light reflex.

    PubMed

    Brocher, Andreas; Harbecke, Raphael; Graf, Tim; Memmert, Daniel; Hüttermann, Stefanie

    2018-03-07

    We tested the link between pupil size and the task effort involved in covert shifts of visual attention. The goal of this study was to establish pupil size as a marker of attentional shifting in the absence of luminance manipulations. In three experiments, participants evaluated two stimuli that were presented peripherally, appearing equidistant from and on opposite sides of eye fixation. The angle between eye fixation and the peripherally presented target stimuli varied from 12.5° to 42.5°. The evaluation of more distant stimuli led to poorer performance than did the evaluation of more proximal stimuli throughout our study, confirming that the former required more effort than the latter. In addition, in Experiment 1 we found that pupil size increased with increasing angle and that this effect could not be reduced to the operation of low-level visual processes in the task. In Experiment 2 the pupil dilated more strongly overall when participants evaluated the target stimuli, which required shifts of attention, than when they merely reported on the target's presence versus absence. Both conditions yielded larger pupils for more distant than for more proximal stimuli, however. In Experiment 3, we manipulated task difficulty more directly, by changing the contrast at which the target stimuli were presented. We replicated the results from Experiment 1 only with the high-contrast stimuli. With stimuli of low contrast, ceiling effects in pupil size were observed. Our data show that the link between task effort and pupil size can be used to track the degree to which an observer covertly shifts attention to or detects stimuli in peripheral vision.

  18. Semantic congruency and the (reversed) Colavita effect in children and adults.

    PubMed

    Wille, Claudia; Ebersbach, Mirjam

    2016-01-01

    When presented with auditory, visual, or bimodal audiovisual stimuli in a discrimination task, adults tend to ignore the auditory component in bimodal stimuli and respond to the visual component only (i.e., Colavita visual dominance effect). The same is true for older children, whereas young children are dominated by the auditory component of bimodal audiovisual stimuli. This suggests a change of sensory dominance during childhood. The aim of the current study was to investigate, in three experimental conditions, whether children and adults show sensory dominance when presented with complex semantic stimuli and whether this dominance can be modulated by stimulus characteristics such as semantic (in)congruency, frequency of bimodal trials, and color information. Semantic (in)congruency did not affect the magnitude of the auditory dominance effect in 6-year-olds or the visual dominance effect in adults, but it was a modulating factor of the visual dominance in 9-year-olds (Conditions 1 and 2). Furthermore, the absence of color information (Condition 3) did not affect auditory dominance in 6-year-olds and hardly affected visual dominance in adults, whereas the visual dominance in 9-year-olds disappeared. Our results suggest that (a) sensory dominance in children and adults is not restricted to simple lights and sounds, as used in previous research, but can be extended to semantically meaningful stimuli and that (b) sensory dominance is more robust in 6-year-olds and adults than in 9-year-olds, implying a transitional stage around this age.

  19. Spatial Scaling of the Profile of Selective Attention in the Visual Field.

    PubMed

    Gannon, Matthew A; Knapp, Ashley A; Adams, Thomas G; Long, Stephanie M; Parks, Nathan A

    2016-01-01

    Neural mechanisms of selective attention must be capable of adapting to variation in the absolute size of an attended stimulus in the ever-changing visual environment. To date, little is known regarding how attentional selection interacts with fluctuations in the spatial expanse of an attended object. Here, we use event-related potentials (ERPs) to investigate the scaling of attentional enhancement and suppression across the visual field. We measured ERPs while participants performed a task at fixation that varied in its attentional demands (attentional load) and visual angle (1.0° or 2.5°). Observers were presented with a stream of task-relevant stimuli while foveal, parafoveal, and peripheral visual locations were probed by irrelevant distractor stimuli. We found two important effects in the N1 component of visual ERPs. First, N1 modulations to task-relevant stimuli indexed attentional selection of stimuli during the load task and further correlated with task performance. Second, with increased task size, attentional modulation of the N1 to distractor stimuli showed a differential pattern that was consistent with a scaling of attentional selection. Together, these results demonstrate that the size of an attended stimulus scales the profile of attentional selection across the visual field and provide insights into the attentional mechanisms associated with such spatial scaling.

  20. Determining the Capacity of Time-Based Selection

    ERIC Educational Resources Information Center

    Watson, Derrick G.; Kunar, Melina A.

    2012-01-01

    In visual search, a set of distractor items can be suppressed from future selection if they are presented (previewed) before a second set of search items arrive. This "visual marking" mechanism provides a top-down way of prioritizing the selection of new stimuli, at the expense of old stimuli already in the field (Watson & Humphreys,…

  1. Affective Overload: The Effect of Emotive Visual Stimuli on Target Vocabulary Retrieval.

    PubMed

    Çetin, Yakup; Griffiths, Carol; Özel, Zeynep Ebrar Yetkiner; Kinay, Hüseyin

    2016-04-01

    There has been considerable interest in cognitive load in recent years, but the effect of affective load and its relationship to mental functioning has not received as much attention. In order to investigate the effects of affective stimuli on cognitive function as manifest in the ability to remember foreign language vocabulary, two groups of student volunteers (N = 64) aged 17 to 25 years were shown a PowerPoint presentation of 21 target language words with a picture, audio, and written form for every word. The vocabulary was presented in comfortable rooms with padded chairs and the participants were provided with snacks so that they would be comfortable and relaxed. After the PowerPoint presentation, they were exposed to two forms of visual stimuli for 27 min. The different formats contained either visually affective content (sexually suggestive, violent or frightening material) or neutral content (a nature documentary). The group which was exposed to the emotive visual stimuli remembered significantly fewer words than the group which watched the emotively neutral nature documentary. Implications of this finding are discussed and suggestions made for ongoing research.

  2. Behavioral assessment of emotional and motivational appraisal during visual processing of emotional scenes depending on spatial frequencies.

    PubMed

    Fradcourt, B; Peyrin, C; Baciu, M; Campagne, A

    2013-10-01

    Previous studies performed on visual processing of emotional stimuli have revealed a preference for a specific range of visual spatial frequencies (high spatial frequency, HSF; low spatial frequency, LSF) according to task demands. The majority of studies used face stimuli and focused on the appraisal of the emotional state of others. The present behavioral study investigates the relative role of spatial frequencies on processing emotional natural scenes during two explicit cognitive appraisal tasks: one emotional, based on the self-emotional experience, and one motivational, based on the tendency to action. Our results suggest that HSF information was the most relevant to rapidly identify the self-emotional experience (unpleasant, pleasant, and neutral) while LSF was required to rapidly identify the tendency to action (avoidance, approach, and no action). The tendency to action based on LSF analysis showed a priority for unpleasant stimuli whereas the identification of emotional experience based on HSF analysis showed a priority for pleasant stimuli. The present study confirms the value of considering both emotional and motivational characteristics of visual stimuli.
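
    For context, HSF and LSF versions of a scene are typically produced by spatial filtering. The sketch below is a minimal Python illustration using a Gaussian filter; the pixel-domain cutoff is a hypothetical value (published work states cutoffs in cycles per degree of visual angle), and the function name is ours.

    ```python
    import numpy as np
    from scipy import ndimage

    def spatial_frequency_versions(img, sigma_px=8):
        """Split a grayscale scene (2-D float array) into low- and
        high-spatial-frequency versions with a Gaussian filter.
        `sigma_px` is a hypothetical pixel-domain cutoff.
        """
        lsf = ndimage.gaussian_filter(img, sigma_px)  # coarse scene layout
        hsf = img - lsf                               # fine edges and detail
        hsf = hsf + img.mean()                        # re-centre for display
        return lsf, hsf
    ```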

  3. Unisensory processing and multisensory integration in schizophrenia: A high-density electrical mapping study

    PubMed Central

    Stone, David B.; Urrea, Laura J.; Aine, Cheryl J.; Bustillo, Juan R.; Clark, Vincent P.; Stephen, Julia M.

    2011-01-01

    In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder. PMID:21807011

  4. Perceptual Discrimination of Basic Object Features Is Not Facilitated When Priming Stimuli Are Prevented From Reaching Awareness by Means of Visual Masking

    PubMed Central

    Peel, Hayden J.; Sperandio, Irene; Laycock, Robin; Chouinard, Philippe A.

    2018-01-01

    Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether or not the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined if visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make. Namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features. PMID:29725292

  5. Lateral eye-movement responses to visual stimuli.

    PubMed

    Wilbur, M P; Roberts-Wilbur, J

    1985-08-01

    The association of left lateral eye-movement with emotionality or arousal of affect and of right lateral eye-movement with cognitive/interpretive operations and functions was investigated. Participants were junior and senior students enrolled in an undergraduate course in developmental psychology. There were 37 women and 13 men, ranging from 19 to 45 yr. of age. Using videotaped lateral eye-movements of 50 participants' responses to 15 visually presented stimuli (precategorized as neutral, emotional, or intellectual), content and statistical analyses supported the association between left lateral eye-movement and emotional arousal and between right lateral eye-movement and cognitive functions. Precategorized visual stimuli included items such as a ball (neutral), gun (emotional), and calculator (intellectual). The findings are congruent with the existing lateral eye-movement literature and extend it by using visual stimuli that do not require the explicit responses or implicit processing involved in verbal questioning.

  6. Perceptual Discrimination of Basic Object Features Is Not Facilitated When Priming Stimuli Are Prevented From Reaching Awareness by Means of Visual Masking.

    PubMed

    Peel, Hayden J; Sperandio, Irene; Laycock, Robin; Chouinard, Philippe A

    2018-01-01

    Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether or not the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined if visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make. Namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features.

  7. Neural Responses in Parietal and Occipital Areas in Response to Visual Events Are Modulated by Prior Multisensory Stimuli

    PubMed Central

    Innes-Brown, Hamish; Barutchu, Ayla; Crewther, David P.

    2013-01-01

    The effect of multi-modal vs uni-modal prior stimuli on the subsequent processing of a simple flash stimulus was studied in the context of the audio-visual ‘flash-beep’ illusion, in which the number of flashes a person sees is influenced by accompanying beep stimuli. EEG recordings were made while combinations of simple visual and audio-visual stimuli were presented. The experiments found that the electric field strength related to a flash stimulus was stronger when it was preceded by a multi-modal flash/beep stimulus, compared to when it was preceded by another uni-modal flash stimulus. This difference was found to be significant in two distinct timeframes – an early timeframe, from 130–160 ms, and a late timeframe, from 300–320 ms. Source localisation analysis found that the increased activity in the early interval was localised to an area centred on the inferior and superior parietal lobes, whereas the later increase was associated with stronger activity in an area centred on primary and secondary visual cortex, in the occipital lobe. The results suggest that processing of a visual stimulus can be affected by the presence of an immediately prior multisensory event. Relatively long-lasting interactions generated by the initial auditory and visual stimuli altered the processing of a subsequent visual stimulus. PMID:24391939

  8. Measuring Software Timing Errors in the Presentation of Visual Stimuli in Cognitive Neuroscience Experiments

    PubMed Central

    Garaizar, Pablo; Vadillo, Miguel A.; López-de-Ipiña, Diego; Matute, Helena

    2014-01-01

    Because of the features provided by an abundance of specialized experimental software packages, personal computers have become prominent and powerful tools in cognitive research. Most of these programs have mechanisms to control the precision and accuracy with which visual stimuli are presented as well as the response times. However, external factors, often related to the technology used to display the visual information, can have a noticeable impact on the actual performance and may be easily overlooked by researchers. The aim of this study is to measure the precision and accuracy of the timing mechanisms of some of the most popular software packages used in a typical laboratory scenario, in order to assess whether the presentation times configured by researchers deviate from the measured times by more than the hardware limitations alone would lead one to expect. Despite the apparent precision and accuracy of the results, important issues related to timing setups in the presentation of visual stimuli were found, and they should be taken into account by researchers in their experiments. PMID:24409318
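
    The kind of summary such a validation produces is easy to sketch. The Python example below assumes externally measured durations (e.g., from a photodiode log) and a 60 Hz display; the function name and all data values are hypothetical.

    ```python
    import numpy as np

    def timing_report(configured_ms, measured_ms, frame_ms=1000 / 60):
        """Summarise accuracy (bias) and precision (scatter) of measured
        stimulus durations against the configured duration."""
        err = np.asarray(measured_ms, dtype=float) - configured_ms
        return {
            "mean_error_ms": err.mean(),             # accuracy
            "sd_ms": err.std(ddof=1),                # precision
            "worst_case_ms": np.abs(err).max(),
            "pct_off_by_a_frame": 100 * np.mean(np.abs(err) >= frame_ms),
        }

    # Hypothetical photodiode measurements of a configured 200 ms stimulus;
    # two of the five presentations slipped by a whole frame.
    print(timing_report(200, [200.1, 216.8, 199.9, 200.2, 183.5]))
    ```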

  9. Visual search and contextual cueing: differential effects in 10-year-old children and adults.

    PubMed

    Couperus, Jane W; Hunt, Ruskin H; Nelson, Charles A; Thomas, Kathleen M

    2011-02-01

    The development of contextual cueing specifically in relation to attention was examined in two experiments. Adult and 10-year-old participants completed a context cueing visual search task (Jiang & Chun, The Quarterly Journal of Experimental Psychology, 54A(4), 1105-1124, 2001) containing stimuli presented in an attended (e.g., red) and unattended (e.g., green) color. When the spatial configuration of stimuli in the attended and unattended color was invariant and consistently paired with the target location, adult reaction times improved, demonstrating learning. Learning also occurred if only the attended stimuli's configuration remained fixed. In contrast, while 10-year-olds, like adults, showed incrementally slower reaction times as the number of attended stimuli increased, they did not show learning in the standard paradigm. However, they did show learning when the ratio of attended to unattended stimuli was high, irrespective of the total number of attended stimuli. Findings suggest that children show efficient attentional guidance by color in visual search but differ from adults in contextual cueing.

  10. Electrophysiological evidence for the left-lateralized effect of language on preattentive categorical perception of color

    PubMed Central

    Mo, Lei; Xu, Guiping; Kay, Paul; Tan, Li-Hai

    2011-01-01

    Previous studies have shown that the effect of language on categorical perception of color is stronger when stimuli are presented in the right visual field than in the left. To examine whether this lateralized effect occurs preattentively at an early stage of processing, we monitored the visual mismatch negativity, which is a component of the event-related potential of the brain to an unfamiliar stimulus among a temporally presented series of stimuli. In the oddball paradigm we used, the deviant stimuli were unrelated to the explicit task. A significant interaction between color-pair type (within-category vs. between-category) and visual field (left vs. right) was found. The amplitude of the visual mismatch negativity component evoked by the within-category deviant was significantly smaller than that evoked by the between-category deviant when displayed in the right visual field, but no such difference was observed for the left visual field. This result constitutes electroencephalographic evidence that the lateralized Whorf effect per se occurs out of awareness and at an early stage of processing. PMID:21844340

  11. Influence of cognitive style and interstimulus interval on the hemispheric processing of tactile stimuli.

    PubMed

    Minagawa, N; Kashu, K

    1989-06-01

    16 adult subjects performed a tactile recognition task. According to our 1984 study, half of the subjects were classified as having a left hemispheric preference for the processing of visual stimuli, while the other half were classified as having a right hemispheric preference for the processing of visual stimuli. The present task was conducted according to the S1-S2 matching paradigm. The standard stimulus was a readily recognizable object and was presented tactually to either the left or right hand of each subject. The comparison stimulus was an object-picture and was presented visually by slide in a tachistoscope. The interstimulus interval was .05 sec. or 2.5 sec. Analysis indicated that the left-preference group showed right-hand superiority, and the right-preference group showed left-hand superiority. The notion of individual hemisphericity was supported in tactile processing.

  12. Electrophysiological evidence for altered visual, but not auditory, selective attention in adolescent cochlear implant users.

    PubMed

    Harris, Jill; Kamke, Marc R

    2014-11-01

    Selective attention fundamentally alters sensory perception, but little is known about the functioning of attention in individuals who use a cochlear implant. This study aimed to investigate visual and auditory attention in adolescent cochlear implant users. Event related potentials were used to investigate the influence of attention on visual and auditory evoked potentials in six cochlear implant users and age-matched normally-hearing children. Participants were presented with streams of alternating visual and auditory stimuli in an oddball paradigm: each modality contained frequently presented 'standard' and infrequent 'deviant' stimuli. Across different blocks attention was directed to either the visual or auditory modality. For the visual stimuli attention boosted the early N1 potential, but this effect was larger for cochlear implant users. Attention was also associated with a later P3 component for the visual deviant stimulus, but there was no difference between groups in the later attention effects. For the auditory stimuli, attention was associated with a decrease in N1 latency as well as a robust P3 for the deviant tone. Importantly, there was no difference between groups in these auditory attention effects. The results suggest that basic mechanisms of auditory attention are largely normal in children who are proficient cochlear implant users, but that visual attention may be altered. Ultimately, a better understanding of how selective attention influences sensory perception in cochlear implant users will be important for optimising habilitation strategies.

  13. Visual artificial grammar learning: comparative research on humans, kea (Nestor notabilis) and pigeons (Columba livia)

    PubMed Central

    Stobbe, Nina; Westphal-Fitch, Gesche; Aust, Ulrike; Fitch, W. Tecumseh

    2012-01-01

    Artificial grammar learning (AGL) provides a useful tool for exploring rule learning strategies linked to general purpose pattern perception. To be able to directly compare performance of humans with other species with different memory capacities, we developed an AGL task in the visual domain. Presenting entire visual patterns simultaneously instead of sequentially minimizes the amount of required working memory. This approach allowed us to evaluate performance levels of two bird species, kea (Nestor notabilis) and pigeons (Columba livia), in direct comparison to human participants. After being trained to discriminate between two types of visual patterns generated by rules at different levels of computational complexity and presented on a computer screen, birds and humans received further training with a series of novel stimuli that followed the same rules, but differed in various visual features from the training stimuli. Most avian and all human subjects continued to perform well above chance during this initial generalization phase, suggesting that they were able to generalize learned rules to novel stimuli. However, detailed testing with stimuli that violated the intended rules regarding the exact number of stimulus elements indicates that neither bird species was able to successfully acquire the intended pattern rule. Our data suggest that, in contrast to humans, these birds were unable to master a simple rule above the finite-state level, even with simultaneous item presentation and despite intensive training. PMID:22688635
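
    The abstract contrasts rules "at different levels of computational complexity" without spelling out the grammars; a classic instantiation of that contrast (assumed here purely for illustration) is a finite-state pattern such as (AB)^n against a supra-regular pattern such as A^nB^n, whose recognition requires counting elements, which is exactly what the element-number violations probe.

    ```python
    def finite_state(n):
        """(AB)^n: producible and recognisable by a finite-state machine."""
        return "AB" * n

    def supra_regular(n):
        """A^n B^n: requires matching counts, beyond finite-state power."""
        return "A" * n + "B" * n

    def accepts_anbn(s):
        """Check the A^n B^n rule; note that it must count elements,
        which is where number-violating probe stimuli bite."""
        half = len(s) // 2
        return len(s) % 2 == 0 and s == "A" * half + "B" * half

    assert accepts_anbn("AABB") and not accepts_anbn("AAB")
    ```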

  14. McGurk stimuli for the investigation of multisensory integration in cochlear implant users: The Oldenburg Audio Visual Speech Stimuli (OLAVS).

    PubMed

    Stropahl, Maren; Schellhardt, Sebastian; Debener, Stefan

    2017-06-01

    The concurrent presentation of different auditory and visual syllables may result in the perception of a third syllable, reflecting an illusory fusion of visual and auditory information. This well-known McGurk effect is frequently used for the study of audio-visual integration. Recently, it was shown that the McGurk effect is strongly stimulus-dependent, which complicates comparisons across perceivers and inferences across studies. To overcome this limitation, we developed the freely available Oldenburg audio-visual speech stimuli (OLAVS), consisting of 8 different talkers and 12 different syllable combinations. The quality of the OLAVS set was evaluated with 24 normal-hearing subjects. All 96 stimuli were characterized based on their stimulus disparity, which was obtained from a probabilistic model (cf. Magnotti & Beauchamp, 2015). Moreover, the McGurk effect was studied in eight adult cochlear implant (CI) users. By applying the individual, stimulus-independent parameters of the probabilistic model, the predicted effect of stronger audio-visual integration in CI users could be confirmed, demonstrating the validity of the new stimulus material.
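
    As a rough illustration of the cited probabilistic model, a simplified reading of a noisy-encoding-of-disparity account is sketched below: a perceiver reports the fusion whenever the noisily encoded audio-visual disparity of a stimulus falls below their individual threshold. The function and parameter names are ours and all values are hypothetical, not the published implementation.

    ```python
    from scipy.stats import norm

    def p_mcgurk_fusion(disparity, threshold, sensory_noise):
        """Probability of the McGurk fusion percept: the chance that the
        encoded disparity (true disparity plus Gaussian sensory noise)
        lands below the perceiver's fusion threshold."""
        return norm.cdf((threshold - disparity) / sensory_noise)

    # A low-disparity stimulus seen by a tolerant perceiver fuses often:
    print(p_mcgurk_fusion(disparity=0.3, threshold=0.8, sensory_noise=0.25))
    ```

    Separating stimulus disparity from perceiver parameters in this way is what allows integration strength to be compared across groups, as the abstract does for CI users.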

  15. Comparable mechanisms of working memory interference by auditory and visual motion in youth and aging

    PubMed Central

    Mishra, Jyoti; Zanto, Theodore; Nilakantan, Aneesha; Gazzaley, Adam

    2013-01-01

    Intrasensory interference during visual working memory (WM) maintenance by object stimuli (such as faces and scenes) has been shown to negatively impact WM performance, with greater detrimental impacts of interference observed in aging. Here we assessed age-related impacts of intrasensory WM interference from lower-level stimulus features such as visual and auditory motion stimuli. We consistently found that interference in the form of ignored distractions and secondary task interruptions presented during a WM maintenance period degraded memory accuracy in both the visual and auditory domains. However, in contrast to prior studies assessing WM for visual object stimuli, feature-based interference effects were not observed to be significantly greater in older adults. Analyses of neural oscillations in the alpha frequency band further revealed preserved mechanisms of interference processing in terms of post-stimulus alpha suppression, which was observed maximally for secondary task interruptions in visual and auditory modalities in both younger and older adults. These results suggest that age-related sensitivity of WM to interference may be limited to complex object stimuli, at least at low WM loads. PMID:23791629

  16. Visual adaptation and novelty responses in the superior colliculus

    PubMed Central

    Boehnke, Susan E.; Berg, David J.; Marino, Robert M.; Baldi, Pierre F.; Itti, Laurent; Munoz, Douglas P.

    2011-01-01

    The brain's ability to ignore repeating, often redundant, information while enhancing novel information processing is paramount to survival. When stimuli are repeatedly presented, the response of visually-sensitive neurons decreases in magnitude, i.e. neurons adapt or habituate, although the mechanism is not yet known. We monitored activity of visual neurons in the superior colliculus (SC) of rhesus monkeys who actively fixated while repeated visual events were presented. We dissociated adaptation from habituation as mechanisms of the response decrement by using a Bayesian model of adaptation, and by employing a paradigm in which rare trials included an oddball stimulus that was either brighter or dimmer. If the mechanism is adaptation, response recovery should be seen only for the brighter stimulus; if habituation, response recovery (‘dishabituation’) should be seen for both the brighter and dimmer stimulus. We observed a reduction in the magnitude of the initial transient response and an increase in response onset latency with stimulus repetition for all visually responsive neurons in the SC. Response decrement was successfully captured by the adaptation model which also predicted the effects of presentation rate and rare luminance changes. However, in a subset of neurons with sustained activity to visual stimuli, a novelty signal akin to dishabituation was observed late in the visual response profile to both brighter and dimmer stimuli and was not captured by the model. This suggests that SC neurons integrate both rapidly discounted information about repeating stimuli and novelty information about oddball events, to support efficient selection in a cluttered dynamic world. PMID:21864319
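
    The logic of the oddball test can be caricatured in a few lines. The toy model below (ours, deliberately minimal, and not the paper's Bayesian model) lets the response track stimulus drive through a gain that declines with repetition: recovery then appears only for a brighter oddball, whereas habituation would predict recovery for brighter and dimmer oddballs alike. All parameter values are hypothetical.

    ```python
    def adaptation_responses(luminances, gain=1.0, floor=0.8, decay=0.6):
        """Toy adaptation model: each presentation pulls the gain toward
        an adapted floor; the response is gain * luminance."""
        out = []
        for lum in luminances:
            out.append(gain * lum)
            gain = decay * gain + (1 - decay) * floor
        return out

    # Recovery only for the brighter oddball, the adaptation signature:
    print(adaptation_responses([1, 1, 1, 1, 1.5]))  # last response jumps
    print(adaptation_responses([1, 1, 1, 1, 0.5]))  # last response stays low
    ```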

  17. Implications of Subliminal Classical Conditioning for Defeating the Use of Countermeasures in the Detection of Deception: Subliminal Evaluation

    DTIC Science & Technology

    1993-08-01

    presented emotional stimuli than for subliminally presented neutral stimuli. Emotional stimuli consisted of sexually charged photographs, and the neutral...behavior. In addition to research using visual stimuli, some 13 studies have been conducted using subliminal (masked by 40 dB white noise) auditory ...deactivating suggestions masked by a 40-dB white noise signal. For the deactivating subliminal auditory messages, suggestions of heaviness and warmth

  18. Neural responses to salient visual stimuli.

    PubMed Central

    Morris, J S; Friston, K J; Dolan, R J

    1997-01-01

    The neural mechanisms involved in the selective processing of salient or behaviourally important stimuli are uncertain. We used an aversive conditioning paradigm in human volunteer subjects to manipulate the salience of visual stimuli (emotionally expressive faces) presented during positron emission tomography (PET) neuroimaging. Increases in salience, and conflicts between the innate and acquired value of the stimuli, produced augmented activation of the pulvinar nucleus of the right thalamus. Furthermore, this pulvinar activity correlated positively with responses in structures hypothesized to mediate value in the brain: the right amygdala and basal forebrain (including the cholinergic nucleus basalis of Meynert). The results provide evidence that the pulvinar nucleus of the thalamus plays a crucial modulatory role in selective visual processing, and that changes in perceptual salience are mediated by value-dependent plasticity in pulvinar responses. PMID:9178546

  19. Audiovisual Temporal Processing and Synchrony Perception in the Rat.

    PubMed

    Schormans, Ashley L; Scott, Kaela E; Vo, Albert M Q; Tyker, Anna; Typlt, Marei; Stolzberg, Daniel; Allman, Brian L

    2016-01-01

    Extensive research on humans has improved our understanding of how the brain integrates information from our different senses, and has begun to uncover the brain regions and large-scale neural activity that contributes to an observer's ability to perceive the relative timing of auditory and visual stimuli. In the present study, we developed the first behavioral tasks to assess the perception of audiovisual temporal synchrony in rats. Modeled after the parameters used in human studies, separate groups of rats were trained to perform: (1) a simultaneity judgment task in which they reported whether audiovisual stimuli at various stimulus onset asynchronies (SOAs) were presented simultaneously or not; and (2) a temporal order judgment task in which they reported whether they perceived the auditory or visual stimulus to have been presented first. Furthermore, using in vivo electrophysiological recordings in the lateral extrastriate visual (V2L) cortex of anesthetized rats, we performed the first investigation of how neurons in the rat multisensory cortex integrate audiovisual stimuli presented at different SOAs. As predicted, rats (n = 7) trained to perform the simultaneity judgment task could accurately (~80%) identify synchronous vs. asynchronous (200 ms SOA) trials. Moreover, the rats judged trials at 10 ms SOA to be synchronous, whereas the majority (~70%) of trials at 100 ms SOA were perceived to be asynchronous. During the temporal order judgment task, rats (n = 7) perceived the synchronous audiovisual stimuli to be "visual first" for ~52% of the trials, and calculation of the smallest timing interval between the auditory and visual stimuli that could be detected in each rat (i.e., the just noticeable difference (JND)) ranged from 77 ms to 122 ms. Neurons in the rat V2L cortex were sensitive to the timing of audiovisual stimuli, such that spiking activity was greatest during trials when the visual stimulus preceded the auditory by 20-40 ms. Ultimately, given that our behavioral and electrophysiological results were consistent with studies conducted on human participants and previous recordings made in multisensory brain regions of different species, we suggest that the rat represents an effective model for studying audiovisual temporal synchrony at both the neuronal and perceptual level.
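
    Deriving a JND from temporal-order judgments of this kind is a standard psychometric fit. The Python sketch below fits a cumulative Gaussian to the proportion of "visual first" reports across SOAs (positive SOA = visual led) and takes the JND as half the 25%-75% interval, one common convention; the data values are hypothetical, not the study's.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    def fit_toj(soa_ms, p_visual_first):
        """Fit a cumulative Gaussian psychometric function and return
        the PSS (50% point) and the JND (half the 25%-75% spread)."""
        psychometric = lambda x, pss, sigma: norm.cdf(x, loc=pss, scale=sigma)
        (pss, sigma), _ = curve_fit(psychometric, soa_ms, p_visual_first,
                                    p0=(0.0, 50.0))
        return pss, sigma * norm.ppf(0.75)

    # Hypothetical proportions of "visual first" reports per SOA (ms):
    soas = np.array([-200, -100, -50, 0, 50, 100, 200])
    props = np.array([0.05, 0.20, 0.35, 0.52, 0.70, 0.85, 0.97])
    print(fit_toj(soas, props))
    ```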

  20. Audiovisual Temporal Processing and Synchrony Perception in the Rat

    PubMed Central

    Schormans, Ashley L.; Scott, Kaela E.; Vo, Albert M. Q.; Tyker, Anna; Typlt, Marei; Stolzberg, Daniel; Allman, Brian L.

    2017-01-01

    Extensive research on humans has improved our understanding of how the brain integrates information from our different senses, and has begun to uncover the brain regions and large-scale neural activity that contributes to an observer’s ability to perceive the relative timing of auditory and visual stimuli. In the present study, we developed the first behavioral tasks to assess the perception of audiovisual temporal synchrony in rats. Modeled after the parameters used in human studies, separate groups of rats were trained to perform: (1) a simultaneity judgment task in which they reported whether audiovisual stimuli at various stimulus onset asynchronies (SOAs) were presented simultaneously or not; and (2) a temporal order judgment task in which they reported whether they perceived the auditory or visual stimulus to have been presented first. Furthermore, using in vivo electrophysiological recordings in the lateral extrastriate visual (V2L) cortex of anesthetized rats, we performed the first investigation of how neurons in the rat multisensory cortex integrate audiovisual stimuli presented at different SOAs. As predicted, rats (n = 7) trained to perform the simultaneity judgment task could accurately (~80%) identify synchronous vs. asynchronous (200 ms SOA) trials. Moreover, the rats judged trials at 10 ms SOA to be synchronous, whereas the majority (~70%) of trials at 100 ms SOA were perceived to be asynchronous. During the temporal order judgment task, rats (n = 7) perceived the synchronous audiovisual stimuli to be “visual first” for ~52% of the trials, and calculation of the smallest timing interval between the auditory and visual stimuli that could be detected in each rat (i.e., the just noticeable difference (JND)) ranged from 77 ms to 122 ms. Neurons in the rat V2L cortex were sensitive to the timing of audiovisual stimuli, such that spiking activity was greatest during trials when the visual stimulus preceded the auditory by 20–40 ms. Ultimately, given that our behavioral and electrophysiological results were consistent with studies conducted on human participants and previous recordings made in multisensory brain regions of different species, we suggest that the rat represents an effective model for studying audiovisual temporal synchrony at both the neuronal and perceptual level. PMID:28119580

  1. Visual cortex in dementia with Lewy bodies: magnetic resonance imaging study

    PubMed Central

    Taylor, John-Paul; Firbank, Michael J.; He, Jiabao; Barnett, Nicola; Pearce, Sarah; Livingstone, Anthea; Vuong, Quoc; McKeith, Ian G.; O’Brien, John T.

    2012-01-01

    Background: Visual hallucinations and visuoperceptual deficits are common in dementia with Lewy bodies, suggesting that cortical visual function may be abnormal. Aims: To investigate: (1) cortical visual function using functional magnetic resonance imaging (fMRI); and (2) the nature and severity of perfusion deficits in visual areas using arterial spin labelling (ASL)-MRI. Method: In total, 17 participants with dementia with Lewy bodies (DLB group) and 19 similarly aged controls were presented with simple visual stimuli (checkerboard, moving dots, and objects) during fMRI and subsequently underwent ASL-MRI (DLB group n = 15, control group n = 19). Results: Functional activations were evident in visual areas in both the DLB and control groups in response to checkerboard and object stimuli but reduced visual area V5/MT (middle temporal) activation occurred in the DLB group in response to motion stimuli. Posterior cortical perfusion deficits occurred in the DLB group, particularly in higher visual areas. Conclusions: Higher visual areas, particularly occipito-parietal, appear abnormal in dementia with Lewy bodies, while there is a preservation of function in lower visual areas (V1 and V2/3). PMID:22500014

  2. Selective visual attention and motivation: the consequences of value learning in an attentional blink task.

    PubMed

    Raymond, Jane E; O'Brien, Jennifer L

    2009-08-01

    Learning to associate the probability and value of behavioral outcomes with specific stimuli (value learning) is essential for rational decision making. However, in demanding cognitive conditions, access to learned values might be constrained by limited attentional capacity. We measured recognition of briefly presented faces seen previously in a value-learning task involving monetary wins and losses; the recognition task was performed both with and without constraints on available attention. Regardless of available attention, recognition was substantially enhanced for motivationally salient stimuli (i.e., stimuli highly predictive of outcomes), compared with equally familiar stimuli that had weak or no motivational salience, and this effect was found regardless of valence (win or loss). However, when attention was constrained (because stimuli were presented during an attentional blink, AB), valence determined recognition; win-associated faces showed no AB, but all other faces showed large ABs. Motivational salience acts independently of attention to modulate simple perceptual decisions, but when attention is limited, visual processing is biased in favor of reward-associated stimuli.

  3. The course of visual searching to a target in a fixed location: electrophysiological evidence from an emotional flanker task.

    PubMed

    Dong, Guangheng; Yang, Lizhu; Shen, Yue

    2009-08-21

    The present study investigated the course of visual searching to a target in a fixed location, using an emotional flanker task. Event-related potentials (ERPs) were recorded while participants performed the task. Emotional facial expressions were used as emotion-eliciting triggers. The course of visual searching was analyzed through the emotional effects arising from these emotion-eliciting stimuli. The flanker stimuli showed effects at about 150-250 ms following stimulus onset, while the target stimuli showed effects at about 300-400 ms. The visual search sequence in an emotional flanker task moved from a whole overview to a specific target, even if the target always appeared at a known location. The processing sequence was "parallel" in this task. The results supported the feature integration theory of visual search.

  4. Behavioral evidence for inter-hemispheric cooperation during a lexical decision task: a divided visual field experiment.

    PubMed

    Perrone-Bertolotti, Marcela; Lemonnier, Sophie; Baciu, Monica

    2013-01-01

    HIGHLIGHTS: The redundant bilateral visual presentation of verbal stimuli decreases asymmetry and increases the cooperation between the two hemispheres. The increased cooperation between the hemispheres is related to semantic information during lexical processing. The inter-hemispheric interaction is represented by both inhibition and cooperation. This study explores inter-hemispheric interaction (IHI) during a lexical decision task by using a behavioral approach, the bilateral presentation of stimuli within a divided visual field experiment. Previous studies have shown that compared to unilateral presentation, the bilateral redundant (BR) presentation decreases the inter-hemispheric asymmetry and facilitates the cooperation between hemispheres. However, it is still poorly understood which type of information facilitates this cooperation. In the present study, verbal stimuli were presented unilaterally (left or right visual hemi-field successively) and bilaterally (left and right visual hemi-field simultaneously). Moreover, during the bilateral presentation of stimuli, we manipulated the relationship between target and distractors in order to specify the type of information which modulates the IHI. Thus, three types of information were manipulated: perceptual, semantic, and decisional, respectively named pre-lexical, lexical and post-lexical processing. Our results revealed left hemisphere (LH) lateralization during the lexical decision task. In terms of inter-hemisphere interaction, the perceptual and decision-making information increased the inter-hemispheric asymmetry, suggesting the inhibition of one hemisphere upon the other. In contrast, semantic information decreased the inter-hemispheric asymmetry, suggesting cooperation between the hemispheres. We discussed our results according to current models of IHI and concluded that cerebral hemispheres interact and communicate according to various excitatory and inhibitory mechanisms, all of which depend on specific processes and various levels of word processing.

  5. Behavioral evidence for inter-hemispheric cooperation during a lexical decision task: a divided visual field experiment

    PubMed Central

    Perrone-Bertolotti, Marcela; Lemonnier, Sophie; Baciu, Monica

    2013-01-01

    HIGHLIGHTS: The redundant bilateral visual presentation of verbal stimuli decreases asymmetry and increases the cooperation between the two hemispheres. The increased cooperation between the hemispheres is related to semantic information during lexical processing. The inter-hemispheric interaction is represented by both inhibition and cooperation. This study explores inter-hemispheric interaction (IHI) during a lexical decision task by using a behavioral approach, the bilateral presentation of stimuli within a divided visual field experiment. Previous studies have shown that compared to unilateral presentation, the bilateral redundant (BR) presentation decreases the inter-hemispheric asymmetry and facilitates the cooperation between hemispheres. However, it is still poorly understood which type of information facilitates this cooperation. In the present study, verbal stimuli were presented unilaterally (left or right visual hemi-field successively) and bilaterally (left and right visual hemi-field simultaneously). Moreover, during the bilateral presentation of stimuli, we manipulated the relationship between target and distractors in order to specify the type of information which modulates the IHI. Thus, three types of information were manipulated: perceptual, semantic, and decisional, respectively named pre-lexical, lexical and post-lexical processing. Our results revealed left hemisphere (LH) lateralization during the lexical decision task. In terms of inter-hemisphere interaction, the perceptual and decision-making information increased the inter-hemispheric asymmetry, suggesting the inhibition of one hemisphere upon the other. In contrast, semantic information decreased the inter-hemispheric asymmetry, suggesting cooperation between the hemispheres. We discussed our results according to current models of IHI and concluded that cerebral hemispheres interact and communicate according to various excitatory and inhibitory mechanisms, all of which depend on specific processes and various levels of word processing. PMID:23818879

  6. BOLD responses in reward regions to hypothetical and imaginary monetary rewards

    PubMed Central

    Miyapuram, Krishna P.; Tobler, Philippe N.; Gregorios-Pippas, Lucy; Schultz, Wolfram

    2015-01-01

    Monetary rewards are uniquely human. Because money is easy to quantify and present visually, it is the reward of choice for most fMRI studies, even though it cannot be handed over to participants inside the scanner. A typical fMRI study requires hundreds of trials and thus small amounts of monetary rewards per trial (e.g. 5p) if all trials are to be treated equally. However, small payoffs can have detrimental effects on performance due to their limited buying power. Hypothetical monetary rewards can overcome the limitations of smaller monetary rewards but it is less well known whether predictors of hypothetical rewards activate reward regions. In two experiments, visual stimuli were associated with hypothetical monetary rewards. In Experiment 1, we used stimuli predicting either visually presented or imagined hypothetical monetary rewards, together with non-rewarding control pictures. Activations to reward predictive stimuli occurred in reward regions, namely the medial orbitofrontal cortex and midbrain. In Experiment 2, we parametrically varied the amount of visually presented hypothetical monetary reward keeping constant the amount of actually received reward. Graded activation in midbrain was observed to stimuli predicting increasing hypothetical rewards. The results demonstrate the efficacy of using hypothetical monetary rewards in fMRI studies. PMID:21985912

  7. BOLD responses in reward regions to hypothetical and imaginary monetary rewards.

    PubMed

    Miyapuram, Krishna P; Tobler, Philippe N; Gregorios-Pippas, Lucy; Schultz, Wolfram

    2012-01-16

    Monetary rewards are uniquely human. Because money is easy to quantify and present visually, it is the reward of choice for most fMRI studies, even though it cannot be handed over to participants inside the scanner. A typical fMRI study requires hundreds of trials and thus small amounts of monetary rewards per trial (e.g. 5p) if all trials are to be treated equally. However, small payoffs can have detrimental effects on performance due to their limited buying power. Hypothetical monetary rewards can overcome the limitations of smaller monetary rewards but it is less well known whether predictors of hypothetical rewards activate reward regions. In two experiments, visual stimuli were associated with hypothetical monetary rewards. In Experiment 1, we used stimuli predicting either visually presented or imagined hypothetical monetary rewards, together with non-rewarding control pictures. Activations to reward predictive stimuli occurred in reward regions, namely the medial orbitofrontal cortex and midbrain. In Experiment 2, we parametrically varied the amount of visually presented hypothetical monetary reward keeping constant the amount of actually received reward. Graded activation in midbrain was observed to stimuli predicting increasing hypothetical rewards. The results demonstrate the efficacy of using hypothetical monetary rewards in fMRI studies.

  8. Visual and vestibular components of motion sickness.

    PubMed

    Eyeson-Annan, M; Peterken, C; Brown, B; Atchison, D

    1996-10-01

    The relative importance of visual and vestibular information in the etiology of motion sickness (MS) is not well understood, but these factors can be manipulated by inducing Coriolis and pseudo-Coriolis effects in experimental subjects. We hypothesized that visual and vestibular information are equivalent in producing MS. The experiments reported here aim, in part, to examine the relative influence of Coriolis and pseudo-Coriolis effects in inducing MS. We induced MS symptoms by combinations of whole body rotation and tilt, and environment rotation and tilt, in 22 volunteer subjects. Subjects participated in all of the experiments with at least 2 d between each experiment to dissipate after-effects. We recorded MS signs and symptoms when only visual stimulation was applied, when only vestibular stimulation was applied, and when both visual and vestibular stimulation were applied under specific conditions of whole body and environmental tilt. Visual stimuli produced more symptoms of MS than vestibular stimuli when only visual or vestibular stimuli were used (ANOVA: F = 7.94, df = 1, 21, p = 0.01), but there was no significant difference in MS production when combined visual and vestibular stimulation were used to produce the Coriolis effect or pseudo-Coriolis effect (ANOVA: F = 0.40, df = 1, 21, p = 0.53). This was further confirmed by examination of the order in which the symptoms occurred and the lack of a correlation between previous experience and visually induced MS. Visual information is more important than vestibular input in causing MS when these stimuli are presented in isolation. In conditions where both visual and vestibular information are present, cross-coupling appears to occur between the pseudo-Coriolis effect and the Coriolis effect, as these two conditions are not significantly different in producing MS symptoms.

  9. The effects of lesions of the superior colliculus on locomotor orientation and the orienting reflex in the rat.

    PubMed

    Goodale, M A; Murison, R C

    1975-05-02

    The effects of bilateral removal of the superior colliculus or visual cortex on visually guided locomotor movements in rats performing a brightness discrimination task were investigated directly with the use of cine film. Rats with collicular lesions showed patterns of locomotion comparable to or more efficient than those of normal animals when approaching one of 5 small doors located at one end of a large open area. In contrast, animals with large but incomplete lesions of visual cortex were distinctly impaired in their visual control of approach responses to the same stimuli. On the other hand, rats with collicular damage showed no orienting reflex or evidence of distraction in the same task when novel visual or auditory stimuli were presented. However, both normal and visual-decorticate rats showed various components of the orienting reflex and disturbance in task performance when the same novel stimuli were presented. These results suggest that although the superior colliculus does not appear to be essential to the visual control of locomotor orientation, this midbrain structure might participate in the mediation of shifts in visual fixation and attention. Visual cortex, while contributing to visuospatial guidance of locomotor movements, might not play a significant role in the control and integration of the orienting reflex.

  10. Visual-auditory integration during speech imitation in autism.

    PubMed

    Williams, Justin H G; Massaro, Dominic W; Peel, Natalie J; Bosseler, Alexis; Suddendorf, Thomas

    2004-01-01

    Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional 'mirror neuron' systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a 'virtual' head (Baldi), delivered speech stimuli for identification in auditory, visual or bimodal conditions. Children with ASD were poorer than controls at recognizing stimuli in the unimodal conditions, but once performance on this measure was controlled for, no group difference was found in the bimodal condition. A group of participants with ASD were also trained to develop their speech-reading ability. Training improved visual accuracy and this also improved the children's ability to utilize visual information in their processing of speech. Overall results were compared to predictions from mathematical models based on integration and non-integration, and were most consistent with the integration model. We conclude that, whilst they are less accurate in recognizing stimuli in the unimodal condition, children with ASD show normal integration of visual and auditory speech stimuli. Given that training in recognition of visual speech was effective, children with ASD may benefit from multi-modal approaches in imitative therapy and language training.
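
    The "mathematical models based on integration and non-integration" the abstract mentions are not named there, but the canonical integration model in this line of work (Massaro is an author) is the fuzzy logical model of perception (FLMP), which combines unimodal support multiplicatively. The sketch below shows the core FLMP prediction as an illustration; the non-integration alternative in the comment is one common form, not necessarily the exact model fitted in the study.

    ```python
    def flmp(a, v):
        """FLMP prediction: probability of a response category given
        auditory support `a` and visual support `v`, each in (0, 1).
        Consistent cues reinforce each other multiplicatively."""
        return (a * v) / (a * v + (1 - a) * (1 - v))

    # A non-integration alternative uses a single modality per trial,
    # e.g. p = w * a + (1 - w) * v for some attention weight w.
    print(flmp(0.9, 0.6))  # ~0.93: bimodal support exceeds either cue alone
    ```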

  11. The importance of laughing in your face: influences of visual laughter on auditory laughter perception.

    PubMed

    Jordan, Timothy R; Abedipour, Lily

    2010-01-01

    Hearing the sound of laughter is important for social communication, but processes contributing to the audibility of laughter remain to be determined. Production of laughter resembles production of speech in that both involve visible facial movements accompanying socially significant auditory signals. However, while it is known that speech is more audible when the facial movements producing the speech sound can be seen, similar visual enhancement of the audibility of laughter remains unknown. To address this issue, spontaneously occurring laughter was edited to produce stimuli comprising visual laughter, auditory laughter, visual and auditory laughter combined, and no laughter at all (either visual or auditory), all presented in four levels of background noise. Visual laughter and no-laughter stimuli produced very few reports of auditory laughter. However, visual laughter consistently made auditory laughter more audible, compared to the same auditory signal presented without visual laughter, resembling findings reported previously for speech.

  12. Implications of differences of echoic and iconic memory for the design of multimodal displays

    NASA Astrophysics Data System (ADS)

    Glaser, Daniel Shields

    It has been well documented that dual-task performance is more accurate when each task is based on a different sensory modality. It is also well documented that sensory memories differ in duration across the senses, particularly visual (iconic) and auditory (echoic) sensory memory. In this dissertation I address whether differences in sensory memory duration (e.g., iconic vs. echoic) have implications for the design of a multimodal display. Since echoic memory persists for seconds, in contrast to iconic memory, which persists only for milliseconds, one of my hypotheses was that in a visual-auditory dual-task condition, performance would be better if the visual task were completed before the auditory task than vice versa. In Experiment 1 I investigated whether the ability to recall multi-modal stimuli is affected by recall order, with each mode being responded to separately. In Experiment 2 I investigated the effects of stimulus order and recall order on the ability to recall information from a multi-modal presentation. In Experiment 3 I investigated the effect of presentation order using a more realistic task. In Experiment 4 I investigated whether manipulating the presentation order of stimuli of different modalities improves humans' ability to combine the information from the two modalities in order to make decisions based on pre-learned rules. As hypothesized, accuracy was greater when visual stimuli were responded to first and auditory stimuli second. Also as hypothesized, performance was improved by not presenting both sequences at the same time, limiting the perceptual load. Contrary to my expectations, overall performance was better when the visual sequence was presented before the audio sequence. Though presenting a visual sequence prior to an auditory sequence lengthens the visual retention interval, it also provides time for visual information to be recoded into a more robust form without disruption. Experiment 4 demonstrated that decision making requiring the integration of visual and auditory information is enhanced by reducing workload and promoting a strategic use of echoic memory. A framework for predicting the results of Experiments 1-4 is proposed and evaluated.

  13. Use of Sine Shaped High-Frequency Rhythmic Visual Stimuli Patterns for SSVEP Response Analysis and Fatigue Rate Evaluation in Normal Subjects

    PubMed Central

    Keihani, Ahmadreza; Shirzhiyan, Zahra; Farahi, Morteza; Shamsi, Elham; Mahnam, Amin; Makkiabadi, Bahador; Haidari, Mohsen R.; Jafari, Amir H.

    2018-01-01

    Background: Recent EEG-SSVEP-based BCI studies have used high-frequency square-wave visual stimuli to reduce subjective fatigue. However, the effect of total harmonic distortion (THD) has not been considered. Compared with CRT and LCD monitors, an LED can display high-frequency waveforms with a better refresh rate. In this study, we presented simple and rhythmic high-frequency sine-wave patterns with a low THD rate on an LED to analyze SSVEP responses and evaluate subjective fatigue in normal subjects. Materials and Methods: We designed our visual stimuli as 3-sequence patterns of high-frequency sine waves (25, 30, and 35 Hz). Nine stimulus patterns were chosen: 3 simple (repetition of one of the above frequencies, e.g., P25-25-25) and 6 rhythmic (all three frequencies in 6 different orders, e.g., P25-30-35). A hardware setup with a low THD rate (<0.1%) was designed to present these patterns on the LED. Twenty-two normal subjects (aged 23–30 (25 ± 2.1) years) were enrolled. A visual analog scale (VAS) was used for subjective fatigue evaluation after the presentation of each stimulus pattern. PSD, CCA, and LASSO methods were employed to analyze the SSVEP responses. The data, including SSVEP features and fatigue rates for the different visual stimulus patterns, were statistically evaluated. Results: All 9 visual stimulus patterns elicited SSVEP responses. Overall, the obtained accuracy rates were 88.35% for PSD and >90% for CCA and LASSO (for TWs > 1 s). The high-frequency rhythmic pattern group with a low THD rate showed a higher accuracy rate (99.24%) than the simple pattern group (98.48%). Repeated-measures ANOVA showed a significant difference between rhythmic pattern features (P < 0.0005). Overall, there was no significant difference between the VAS of the rhythmic group [3.85 ± 2.13] and the simple pattern group [3.96 ± 2.21] (P = 0.63). The rhythmic group had lower within-group VAS variation (min = P25-30-35 [2.90 ± 2.45], max = P35-25-30 [4.81 ± 2.65]) as well as the lowest individual-pattern VAS (P25-30-35). Discussion and Conclusion: Overall, the rhythmic and simple pattern groups had similarly high accuracy rates. Rhythmic stimulus patterns showed a nonsignificantly lower fatigue rate than simple patterns. We conclude that both rhythmic and simple high-frequency sine-wave visual stimuli warrant further research in human SSVEP-BCI studies. PMID:29892219
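
    For context, canonical correlation analysis (CCA), one of the three analysis methods named above, is commonly used for SSVEP frequency recognition by correlating the EEG with sine/cosine templates at each candidate stimulus frequency. The sketch below is a minimal illustration under assumed settings (sampling rate, harmonic count, scikit-learn's CCA), not the authors' implementation:

    ```python
    # Minimal CCA-based SSVEP frequency-recognition sketch.
    # Assumptions (not from the paper): 250 Hz sampling, 2 harmonics,
    # EEG passed as an (n_samples, n_channels) array.
    import numpy as np
    from sklearn.cross_decomposition import CCA

    FS = 250                # assumed EEG sampling rate (Hz)
    FREQS = [25, 30, 35]    # stimulus frequencies used in the study

    def reference_signals(freq, n_samples, n_harmonics=2):
        """Sine/cosine templates at the stimulus frequency and its harmonics."""
        t = np.arange(n_samples) / FS
        refs = []
        for h in range(1, n_harmonics + 1):
            refs += [np.sin(2 * np.pi * h * freq * t),
                     np.cos(2 * np.pi * h * freq * t)]
        return np.column_stack(refs)

    def cca_score(eeg, freq):
        """Largest canonical correlation between the EEG and the templates."""
        u, v = CCA(n_components=1).fit_transform(eeg, reference_signals(freq, len(eeg)))
        return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])

    def classify(eeg):
        """Label an EEG window with the best-correlating stimulus frequency."""
        return max(FREQS, key=lambda f: cca_score(eeg, f))
    ```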

  14. Use of Sine Shaped High-Frequency Rhythmic Visual Stimuli Patterns for SSVEP Response Analysis and Fatigue Rate Evaluation in Normal Subjects.

    PubMed

    Keihani, Ahmadreza; Shirzhiyan, Zahra; Farahi, Morteza; Shamsi, Elham; Mahnam, Amin; Makkiabadi, Bahador; Haidari, Mohsen R; Jafari, Amir H

    2018-01-01

    Background: Recent EEG-SSVEP-based BCI studies have used high-frequency square-wave visual stimuli to reduce subjective fatigue. However, the effect of total harmonic distortion (THD) has not been considered. Compared with CRT and LCD monitors, an LED can display high-frequency waveforms with a better refresh rate. In this study, we presented simple and rhythmic high-frequency sine-wave patterns with a low THD rate on an LED to analyze SSVEP responses and evaluate subjective fatigue in normal subjects. Materials and Methods: We designed our visual stimuli as 3-sequence patterns of high-frequency sine waves (25, 30, and 35 Hz). Nine stimulus patterns were chosen: 3 simple (repetition of one of the above frequencies, e.g., P25-25-25) and 6 rhythmic (all three frequencies in 6 different orders, e.g., P25-30-35). A hardware setup with a low THD rate (<0.1%) was designed to present these patterns on the LED. Twenty-two normal subjects (aged 23-30 (25 ± 2.1) years) were enrolled. A visual analog scale (VAS) was used for subjective fatigue evaluation after the presentation of each stimulus pattern. PSD, CCA, and LASSO methods were employed to analyze the SSVEP responses. The data, including SSVEP features and fatigue rates for the different visual stimulus patterns, were statistically evaluated. Results: All 9 visual stimulus patterns elicited SSVEP responses. Overall, the obtained accuracy rates were 88.35% for PSD and >90% for CCA and LASSO (for TWs > 1 s). The high-frequency rhythmic pattern group with a low THD rate showed a higher accuracy rate (99.24%) than the simple pattern group (98.48%). Repeated-measures ANOVA showed a significant difference between rhythmic pattern features (P < 0.0005). Overall, there was no significant difference between the VAS of the rhythmic group [3.85 ± 2.13] and the simple pattern group [3.96 ± 2.21] (P = 0.63). The rhythmic group had lower within-group VAS variation (min = P25-30-35 [2.90 ± 2.45], max = P35-25-30 [4.81 ± 2.65]) as well as the lowest individual-pattern VAS (P25-30-35). Discussion and Conclusion: Overall, the rhythmic and simple pattern groups had similarly high accuracy rates. Rhythmic stimulus patterns showed a nonsignificantly lower fatigue rate than simple patterns. We conclude that both rhythmic and simple high-frequency sine-wave visual stimuli warrant further research in human SSVEP-BCI studies.

  15. The effect of spatial attention on invisible stimuli.

    PubMed

    Shin, Kilho; Stolte, Moritz; Chong, Sang Chul

    2009-10-01

    The influence of selective attention on visual processing is widespread. Recent studies have demonstrated that spatial attention can affect processing of invisible stimuli. However, it has been suggested that this effect is limited to low-level features, such as line orientations. The present experiments investigated whether spatial attention can influence both low-level (contrast threshold) and high-level (gender discrimination) adaptation, using the same method of attentional modulation for both types of stimuli. We found that spatial attention was able to increase the amount of adaptation to low- as well as to high-level invisible stimuli. These results suggest that attention can influence perceptual processes independent of visual awareness.

  16. Unisensory processing and multisensory integration in schizophrenia: a high-density electrical mapping study.

    PubMed

    Stone, David B; Urrea, Laura J; Aine, Cheryl J; Bustillo, Juan R; Clark, Vincent P; Stephen, Julia M

    2011-10-01

    In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept, a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to those from healthy volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components, in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of the auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute-magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Predicting Visual Consciousness Electrophysiologically from Intermittent Binocular Rivalry

    PubMed Central

    O’Shea, Robert P.; Kornmeier, Jürgen; Roeber, Urte

    2013-01-01

    Purpose: We sought brain activity that predicts visual consciousness. Methods: We used electroencephalography (EEG) to measure brain activity to a 1000-ms display of sine-wave gratings, oriented vertically in one eye and horizontally in the other. This display yields binocular rivalry: irregular alternations in visual consciousness between the images viewed by the eyes. We replaced both gratings with 200 ms of darkness, the gap, before showing a second display of the same rival gratings for another 1000 ms. We followed this with a 1000-ms mask and then a 2000-ms inter-trial interval (ITI). Eleven participants pressed keys after the second display in numerous trials to say whether the orientation of the visible grating changed from before to after the gap or not. Each participant also responded to numerous non-rivalry trials in which the gratings had identical orientations for the two eyes and for which the orientation of both either changed physically after the gap or did not. Results: We found that greater activity from lateral occipital-parietal-temporal areas about 180 ms after initial onset of the rival stimuli predicted a change in visual consciousness more than 1000 ms later, on re-presentation of the rival stimuli. We also found that less activity from parietal, central, and frontal electrodes about 400 ms after initial onset of the rival stimuli predicted a change in visual consciousness about 800 ms later, on re-presentation of the rival stimuli. There was no such predictive activity when the change in visual consciousness occurred because the stimuli changed physically. Conclusion: We found early EEG activity that predicted later visual consciousness. Predictive activity 180 ms after onset of the first display may reflect adaptation of the neurons mediating visual consciousness in our displays. Predictive activity 400 ms after onset of the first display may reflect a less-reliable brain state mediating visual consciousness. PMID:24124536

  18. Subliminal perception of complex visual stimuli.

    PubMed

    Ionescu, Mihai Radu

    2016-01-01

    Rationale: Unconscious perception in various sensory modalities is an active subject of research, though its function and effect on behavior are uncertain. Objective: The present study assessed whether unconscious visual perception can occur with more complex visual stimuli than previously utilized. Methods and Results: Videos containing slideshows of indifferent complex images with interspersed frames of interest of various durations were presented to 24 healthy volunteers. Perception of the stimulus was evaluated with a forced-choice questionnaire, while awareness was quantified by self-assessment on a modified awareness scale annexed to each question, with 4 categories of awareness. At a stimulus duration of 16.66 ms, conscious awareness was not possible and answers regarding the stimulus were random. At 50 ms, nonrandom answers were coupled with no self-reported awareness, suggesting unconscious perception of the stimulus. At longer stimulus durations, significantly above-chance answers were coupled with a degree of conscious awareness. Discussion: At 50 ms, unconscious perception is possible even with complex visual stimuli. Further studies are recommended focusing on stimulus durations between 16.66 and 50 ms.
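
    The 16.66-ms floor corresponds to a single frame on a 60-Hz display, which would quantize the available stimulus durations to multiples of the refresh period (the refresh rate is an assumption about the apparatus, not stated in the record):

    ```latex
    % One frame at an assumed 60 Hz refresh; presentable durations are
    % integer multiples of this period.
    t_{\min} = \frac{1\,\mathrm{s}}{60} \approx 16.67\ \mathrm{ms},
    \qquad t_n = n \, t_{\min}, \quad n = 1, 2, 3, \dots
    ```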

  19. Effects of shape, size, and chromaticity of stimuli on estimated size in normally sighted, severely myopic, and visually impaired students.

    PubMed

    Huang, Kuo-Chen; Wang, Hsiu-Feng; Chen, Chun-Ching

    2010-06-01

    Effects of the shape, size, and chromaticity of stimuli on participants' errors when estimating the size of simultaneously presented standard and comparison stimuli were examined. Forty-eight Taiwanese college students, ages 20 to 24 years (M = 22.3, SD = 1.3), participated. Analysis showed that the error in estimated size was significantly greater for the low-vision group than for the normal-vision and severe-myopia groups. Errors were significantly greater with green and blue stimuli than with red stimuli. Circular stimuli produced smaller mean errors than did square stimuli. The actual size of the standard stimulus significantly affected the error in estimated size: errors were significantly higher for smaller standard sizes than for larger ones. Implications of the results for graphics-based interface design, particularly when taking into account visually impaired users, are discussed.

  20. Visual attention distracter insertion for improved EEG rapid serial visual presentation (RSVP) target stimuli detection

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Huber, David J.; Martin, Kevin

    2017-05-01

    This paper describes a technique in which we improve upon the prior performance of the Rapid Serial Visual Presentation (RSVP) EEG paradigm for image classification through the insertion of visual attention distracters and overall sequence reordering, based upon the expected ratio of rare to common "events" in the environment and operational context. Inserting distracter images maintains the ratio of common events to rare events at an ideal level, maximizing rare-event detection via the P300 EEG response to the RSVP stimuli. The method has two steps: first, we compute the optimal number of distracters needed for an RSVP sequence based on the desired sequence length and expected number of targets and insert the distracters into the RSVP sequence; then we reorder the RSVP sequence to maximize P300 detection. We show that by reducing the ratio of target events to nontarget events using this method, we can allow RSVP sequences with more targets without sacrificing area under the ROC curve (AUC).
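
    As a rough illustration of those two steps, the sketch below pads a sequence with distracters to hold the rare-event rate at a chosen level and then spaces the targets evenly among the fillers; the ideal rate and the even-spacing rule are illustrative assumptions, not parameters from the paper:

    ```python
    # Hypothetical sketch of distracter insertion and sequence reordering
    # for RSVP; IDEAL_TARGET_RATE and the spacing rule are assumptions.
    import random

    IDEAL_TARGET_RATE = 0.1  # assumed ideal probability of a rare event

    def n_distracters_needed(n_targets, n_common):
        """Distracters to add so targets occur at IDEAL_TARGET_RATE."""
        desired_total = round(n_targets / IDEAL_TARGET_RATE)
        return max(0, desired_total - n_targets - n_common)

    def build_sequence(targets, commons, distracters):
        """Reorder so rare targets are spread evenly among the fillers."""
        fillers = commons + distracters
        random.shuffle(fillers)
        if not targets:
            return fillers
        gap = len(fillers) // len(targets)  # fillers between successive targets
        seq = []
        for i, target in enumerate(targets):
            seq += fillers[i * gap:(i + 1) * gap] + [target]
        return seq + fillers[len(targets) * gap:]
    ```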

  1. Attentional Capture by Emotional Stimuli Is Modulated by Semantic Processing

    ERIC Educational Resources Information Center

    Huang, Yang-Ming; Baddeley, Alan; Young, Andrew W.

    2008-01-01

    The attentional blink paradigm was used to examine whether emotional stimuli always capture attention. The processing requirement for emotional stimuli in a rapid serial visual presentation stream was manipulated to investigate the circumstances under which emotional distractors capture attention, as reflected in an enhanced attentional blink…

  2. School-aged children can benefit from audiovisual semantic congruency during memory encoding.

    PubMed

    Heikkilä, Jenni; Tiippana, Kaisa

    2016-05-01

    Although we live in a multisensory world, children's memory has usually been studied with a focus on only one sensory modality at a time. In this study, we investigated how audiovisual encoding affects recognition memory. Children (n = 114) from three age groups (8, 10 and 12 years) memorized auditory or visual stimuli presented with a semantically congruent, incongruent or non-semantic stimulus in the other modality during encoding. Subsequent recognition memory performance was better for auditory or visual stimuli initially presented together with a semantically congruent stimulus in the other modality than for stimuli accompanied by a non-semantic stimulus in the other modality. This congruency effect was observed for pictures presented with sounds, for sounds presented with pictures, for spoken words presented with pictures and for written words presented with spoken words. The present results show that semantically congruent multisensory experiences during encoding can improve memory performance in school-aged children.

  3. Crossmodal Statistical Binding of Temporal Information and Stimuli Properties Recalibrates Perception of Visual Apparent Motion

    PubMed Central

    Zhang, Yi; Chen, Lihan

    2016-01-01

    Recent studies of brain plasticity that pertain to time perception have shown that fast training of temporal discrimination in one modality, for example, the auditory modality, can improve performance of temporal discrimination in another modality, such as the visual modality. We here examined whether the perception of visual Ternus motion could be recalibrated through fast crossmodal statistical binding of temporal information and stimulus properties. We conducted two experiments, each composed of three sessions: pre-test, learning, and post-test. In both the pre-test and the post-test, participants classified the Ternus display as either “element motion” or “group motion.” For the training session in Experiment 1, we constructed two types of temporal structures, in which two consecutively presented sound beeps were dominantly (80%) either flanked by a leading and a lagging visual Ternus frame (VAAV) or had two visual Ternus frames inserted between them (AVVA). Participants were required to report which interval (auditory vs. visual) was longer. In Experiment 2, we presented only a single auditory–visual pair, but with temporal configurations similar to those of Experiment 1, and asked participants to perform an audio–visual temporal order judgment. The results of these two experiments support that statistical binding of temporal information and stimulus properties can quickly and selectively recalibrate the sensitivity of perceiving visual motion, according to the protocols of the specific bindings. PMID:27065910

  4. Visual stimulus presentation using fiber optics in the MRI scanner.

    PubMed

    Huang, Ruey-Song; Sereno, Martin I

    2008-03-30

    Imaging the neural basis of visuomotor actions using fMRI is a topic of increasing interest in the field of cognitive neuroscience. One challenge is to present realistic three-dimensional (3-D) stimuli in the subject's peripersonal space inside the MRI scanner. The stimulus generating apparatus must be compatible with strong magnetic fields and must not interfere with image acquisition. Virtual 3-D stimuli can be generated with a stereo image pair projected onto screens or via binocular goggles. Here, we describe designs and implementations for automatically presenting physical 3-D stimuli (point-light targets) in peripersonal and near-face space using fiber optics in the MRI scanner. The feasibility of fiber-optic based displays was demonstrated in two experiments. The first presented a point-light array along a slanted surface near the body, and the second presented multiple point-light targets around the face. Stimuli were presented using phase-encoded paradigms in both experiments. The results suggest that fiber-optic based displays can be a complementary approach for visual stimulus presentation in the MRI scanner.

  5. Left hemispheric advantage for numerical abilities in the bottlenose dolphin.

    PubMed

    Kilian, Annette; von Fersen, Lorenzo; Güntürkün, Onur

    2005-02-28

    In a two-choice discrimination paradigm, a bottlenose dolphin discriminated relational dimensions between visual numerosity stimuli under monocular viewing conditions. After prior binocular acquisition of the task, two monocular test series with different number stimuli were conducted. In accordance with recent studies on visual lateralization in the bottlenose dolphin, our results revealed an overall advantage of the right visual field. Due to the complete decussation of the optic nerve fibers, this suggests a specialization of the left hemisphere for analysing relational features between stimuli as required in tests for numerical abilities. These processes are typically right hemisphere-based in other mammals (including humans) and birds. The present data provide further evidence for a general right visual field advantage in bottlenose dolphins for visual information processing. It is thus assumed that dolphins possess a unique functional architecture of their cerebral asymmetries. (c) 2004 Elsevier B.V. All rights reserved.

  6. Illusory visual motion stimulus elicits postural sway in migraine patients

    PubMed Central

    Imaizumi, Shu; Honma, Motoyasu; Hibino, Haruo; Koyama, Shinichi

    2015-01-01

    Although the perception of visual motion modulates postural control, it is unknown whether illusory visual motion elicits postural sway. The present study examined the effect of illusory motion on postural sway in patients with migraine, who tend to be sensitive to it. We measured postural sway for both migraine patients and controls while they viewed static visual stimuli with and without illusory motion. The participants’ postural sway was measured when they closed their eyes either immediately after (Experiment 1), or 30 s after (Experiment 2), viewing the stimuli. The patients swayed more than the controls when they closed their eyes immediately after viewing the illusory motion (Experiment 1), and they swayed less than the controls when they closed their eyes 30 s after viewing it (Experiment 2). These results suggest that static visual stimuli with illusory motion can induce postural sway that may last for at least 30 s in patients with migraine. PMID:25972832

  7. An ecological alternative to Snodgrass & Vanderwart: 360 high quality colour images with norms for seven psycholinguistic variables.

    PubMed

    Moreno-Martínez, Francisco Javier; Montoro, Pedro R

    2012-01-01

    This work presents a new set of 360 high-quality colour images belonging to 23 semantic subcategories. Two hundred and thirty-six Spanish speakers named the items and also provided data on seven relevant psycholinguistic variables: age of acquisition, familiarity, manipulability, name agreement, typicality and visual complexity. Furthermore, we also present lexical frequency data derived from Internet search hits. Apart from the high number of variables evaluated, each known to affect stimulus processing, this new set presents important advantages over other similar image corpora: (a) this corpus presents a broad number of subcategories and images, which will, for example, permit researchers to select stimuli of appropriate difficulty as required (e.g., to deal with problems derived from ceiling effects); (b) the use of coloured stimuli provides a more realistic, ecologically valid representation of real-life objects. In sum, this set of stimuli provides a useful tool for research on visual object- and word-processing, both in neurological patients and in healthy controls.

  8. [Intermodal timing cues for audio-visual speech recognition].

    PubMed

    Hashimoto, Masahiro; Kumashiro, Masaharu

    2004-06-01

    The purpose of this study was to investigate the limitations of the lip-reading advantage for Japanese young adults by desynchronizing visual and auditory information in speech. In the experiment, audio-visual speech stimuli were presented under six test conditions: audio-alone, and audio-visual with either 0, 60, 120, 240 or 480 ms of audio delay. The stimuli were video recordings of the face of a female Japanese speaker producing long and short Japanese sentences. The intelligibility of the audio-visual stimuli was measured as a function of audio delay in sixteen untrained young subjects. Speech intelligibility under audio-delay conditions of less than 120 ms was significantly better than under the audio-alone condition. Notably, the 120-ms delay corresponded to the mean mora duration measured for the audio stimuli. The results implied that audio delays of up to 120 ms do not disrupt the lip-reading advantage, because visual and auditory information in speech seem to be integrated on a syllabic time scale. Potential applications of this research include noisy workplaces in which a worker must extract relevant speech from competing noise.

  9. Perceptual category learning of photographic and painterly stimuli in rhesus macaques (Macaca mulatta) and humans

    PubMed Central

    Jensen, Greg; Terrace, Herbert

    2017-01-01

    Humans are highly adept at categorizing visual stimuli, but studies of human categorization are typically validated by verbal reports. This makes it difficult to perform comparative studies of categorization using non-human animals. Interpretation of comparative studies is further complicated by the possibility that animal performance may merely reflect reinforcement learning, whereby discrete features act as discriminative cues for categorization. To assess and compare how humans and monkeys classified visual stimuli, we trained 7 rhesus macaques and 41 human volunteers to respond, in a specific order, to four simultaneously presented stimuli at a time, each belonging to a different perceptual category. These exemplars were drawn at random from large banks of images, such that the stimuli presented changed on every trial. Subjects nevertheless identified and ordered these changing stimuli correctly. Three monkeys learned to order naturalistic photographs; four others, close-up sections of paintings with distinctive styles. Humans learned to order both types of stimuli. All subjects classified stimuli at levels substantially greater than that predicted by chance or by feature-driven learning alone, even when stimuli changed on every trial. However, humans more closely resembled monkeys when classifying the more abstract painting stimuli than the photographic stimuli. This points to a common classification strategy in both species, one that humans can rely on in the absence of linguistic labels for categories. PMID:28961270

  10. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation

    PubMed Central

    Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R.; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg

    2016-01-01

    Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words “GREEN” or “RED” were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying “GREEN” or “RED” had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system. PMID:26958463

  11. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation.

    PubMed

    Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg

    2016-01-01

    Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words "GREEN" or "RED" were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying "GREEN" or "RED" had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system.

  12. Automatic facial mimicry in response to dynamic emotional stimuli in five-month-old infants.

    PubMed

    Isomura, Tomoko; Nakano, Tamami

    2016-12-14

    Human adults automatically mimic others' emotional expressions, which is believed to contribute to sharing emotions with others. Although this behaviour appears fundamental to social reciprocity, little is known about its developmental process. Therefore, we examined whether infants show automatic facial mimicry in response to others' emotional expressions. Facial electromyographic activity over the corrugator supercilii (brow) and zygomaticus major (cheek) of four- to five-month-old infants was measured while they viewed dynamic clips presenting audiovisual, visual and auditory emotions. The audiovisual bimodal emotion stimuli were a display of a laughing/crying facial expression with an emotionally congruent vocalization, whereas the visual/auditory unimodal emotion stimuli displayed those emotional faces/vocalizations paired with a neutral vocalization/face, respectively. Increased activation of the corrugator supercilii muscle in response to audiovisual cries and the zygomaticus major in response to audiovisual laughter were observed between 500 and 1000 ms after stimulus onset, which clearly suggests rapid facial mimicry. By contrast, both visual and auditory unimodal emotion stimuli did not activate the infants' corresponding muscles. These results revealed that automatic facial mimicry is present as early as five months of age, when multimodal emotional information is present. © 2016 The Author(s).

  13. The ventriloquist in periphery: impact of eccentricity-related reliability on audio-visual localization.

    PubMed

    Charbonneau, Geneviève; Véronneau, Marie; Boudrias-Fournier, Colin; Lepore, Franco; Collignon, Olivier

    2013-10-28

    The relative reliability of separate sensory estimates influences the way they are merged into a unified percept. We investigated how eccentricity-related changes in the reliability of auditory and visual stimuli influence their integration across the entire frontal space. First, and surprisingly, we found that despite a strong decrease in auditory and visual unisensory localization abilities in the periphery, the redundancy gain resulting from the congruent presentation of audio-visual targets was not affected by stimulus eccentricity. This result contrasts with the common prediction that a reduction in sensory reliability necessarily induces an enhanced integrative gain. Second, we demonstrate that the visual capture of sounds observed with spatially incongruent audio-visual targets (the ventriloquist effect) steadily decreases with eccentricity, paralleling a lowering of the relative reliability of unimodal visual over unimodal auditory stimuli in the periphery. Moreover, at all eccentricities, the ventriloquist effect positively correlated with a weighted combination of the spatial resolution obtained in the unisensory conditions. These findings support and extend the view that the localization of audio-visual stimuli relies on an optimal combination of auditory and visual information according to their respective spatial reliability. Altogether, these results evidence that the external spatial coordinates of multisensory events relative to an observer's body (e.g., eye or head position) influence how this information is merged, and therefore determine the perceptual outcome.
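
    The "optimal combination" referred to here is standardly formalized as maximum-likelihood cue integration (the generic model, not an equation taken from this record), in which each unisensory estimate is weighted by its reliability, i.e., its inverse variance:

    ```latex
    % Maximum-likelihood audio-visual location estimate: \hat{S}_V and
    % \hat{S}_A are the unisensory estimates, \sigma_V^2 and \sigma_A^2
    % their variances; reliability = 1/\sigma^2.
    \hat{S}_{AV} = w_V \hat{S}_V + w_A \hat{S}_A, \qquad
    w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_A^2}, \qquad
    w_A = 1 - w_V
    ```

    On this account, the drop in relative visual reliability at larger eccentricities lowers w_V, consistent with the weakening ventriloquist effect reported above.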

  14. Behold the voice of wrath: cross-modal modulation of visual attention by anger prosody.

    PubMed

    Brosch, Tobias; Grandjean, Didier; Sander, David; Scherer, Klaus R

    2008-03-01

    Emotionally relevant stimuli are prioritized in human information processing. It has repeatedly been shown that selective spatial attention is modulated by the emotional content of a stimulus. Until now, studies investigating this phenomenon have only examined within-modality effects, most frequently using pictures of emotional stimuli to modulate visual attention. In this study, we used simultaneously presented utterances with emotional and neutral prosody as cues for a visually presented target in a cross-modal dot probe task. Response times towards targets were faster when they appeared at the location of the source of the emotional prosody. Our results show for the first time a cross-modal attentional modulation of visual attention by auditory affective prosody.

  15. Language experience shapes early electrophysiological responses to visual stimuli: the effects of writing system, stimulus length, and presentation duration.

    PubMed

    Xue, Gui; Jiang, Ting; Chen, Chuansheng; Dong, Qi

    2008-02-15

    How language experience affects visual word recognition has been a topic of intense interest. Using event-related potentials (ERPs), the present study compared the early electrophysiological responses (i.e., N1) to familiar and unfamiliar writings under different conditions. Thirteen native Chinese speakers (with English as their second language) were recruited to passively view four types of scripts: Chinese (familiar logographic writings), English (familiar alphabetic writings), Korean Hangul (unfamiliar logographic writings), and Tibetan (unfamiliar alphabetic writings). Stimuli also differed in lexicality (words vs. non-words, for familiar writings only), length (characters/letters vs. words), and presentation duration (100 ms vs. 750 ms). We found no significant differences between words and non-words, and the effect of language experience (familiar vs. unfamiliar) was significantly modulated by stimulus length and writing system, and, to a lesser degree, by presentation duration. That is, the language experience effect (i.e., a stronger N1 response to familiar writings than to unfamiliar writings) was significant only for alphabetic letters, but not for alphabetic and logographic words. The difference between Chinese characters and unfamiliar logographic characters was significant under the short presentation duration, but not under the long presentation duration. Long stimuli elicited a stronger N1 response than did short stimuli, but this effect was significantly attenuated for familiar writings. These results suggest that the N1 response might not reliably differentiate familiar and unfamiliar writings. More importantly, our results suggest that N1 is modulated by visual, linguistic, and task factors, which has important implications for the visual expertise hypothesis.

  16. Peripheral vision and perceptual asymmetries in young and older martial arts athletes and nonathletes.

    PubMed

    Muiños, Mónica; Ballesteros, Soledad

    2014-11-01

    The present study investigated peripheral vision (PV) and perceptual asymmetries in young and older martial arts athletes (judo and karate athletes) and compared their performance with that of young and older nonathletes. Stimuli were dots presented at three different eccentricities along the horizontal, oblique, and vertical diameters, with three interstimulus intervals. Experiment 1 showed that although the two athlete groups were faster in almost all conditions, karate athletes performed significantly better than nonathlete participants when stimuli were presented in the peripheral visual field. Experiment 2 showed that older participants who had practiced a martial art at a competitive level when they were young were significantly faster than sedentary older adults of the same age. The practiced sport (judo or karate) did not affect performance differentially, suggesting that the practice of martial arts as such is the crucial factor, rather than the type of martial art. Importantly, older athletes lose their PV advantage as compared with young athletes. Finally, we found that physical activity (young and older athletes) and age (young and older adults) did not alter the visual asymmetries that vary as a function of spatial location; all participants were faster for stimuli presented along the horizontal than along the vertical meridian, and for stimuli presented at the lower rather than the upper locations within the vertical meridian. These results indicate that the practice of these martial arts is an effective way of counteracting the decline in the speed of processing visual stimuli, whatever their location or presentation speed.

  17. Bimodal emotion congruency is critical to preverbal infants' abstract rule learning.

    PubMed

    Tsui, Angeline Sin Mei; Ma, Yuen Ki; Ho, Anna; Chow, Hiu Mei; Tseng, Chia-huei

    2016-05-01

    Extracting general rules from specific examples is important, as we must face the same challenge displayed in various formats. Previous studies have found that bimodal presentation of grammar-like rules (e.g. ABA) enhanced 5-month-olds' capacity to acquire a rule that infants failed to learn when the rule was presented with visual presentation of the shapes alone (circle-triangle-circle) or auditory presentation of the syllables (la-ba-la) alone. However, the mechanisms and constraints for this bimodal learning facilitation are still unknown. In this study, we used audio-visual relation congruency between bimodal stimulation to disentangle possible facilitation sources. We exposed 8- to 10-month-old infants to an AAB sequence consisting of visual faces with affective expressions and/or auditory voices conveying emotions. Our results showed that infants were able to distinguish the learned AAB rule from other novel rules under bimodal stimulation when the affects in audio and visual stimuli were congruently paired (Experiments 1A and 2A). Infants failed to acquire the same rule when audio-visual stimuli were incongruently matched (Experiment 2B) and when only the visual (Experiment 1B) or the audio (Experiment 1C) stimuli were presented. Our results highlight that bimodal facilitation in infant rule learning is not only dependent on better statistical probability and redundant sensory information, but also the relational congruency of audio-visual information. A video abstract of this article can be viewed at https://m.youtube.com/watch?v=KYTyjH1k9RQ. © 2015 John Wiley & Sons Ltd.

  18. Using Prosopagnosia to Test and Modify Visual Recognition Theory.

    PubMed

    O'Brien, Alexander M

    2018-02-01

    Biederman's contemporary theory of basic visual object recognition (Recognition-by-Components) is based on structural descriptions of objects and presumes 36 visual primitives (geons) that people can discriminate, but there has been no empirical test of the actual use of these 36 geons to visually distinguish objects. In this study, we tested for the actual use of these geons in basic visual discrimination by comparing the object discrimination performance patterns (when distinguishing varied stimuli) of an acquired prosopagnosia patient (LB) and healthy control participants. LB's prosopagnosia left her heavily reliant on structural descriptions, or categorical object differences, in visual discrimination tasks, whereas the control participants could additionally use face recognition or coordinate systems (Coordinate Relations Hypothesis). Thus, when LB performed comparably to control participants with a given stimulus, her restricted reliance on basic or categorical discriminations meant that the stimuli must be distinguishable on the basis of a geon feature. By varying the stimuli across eight separate experiments and presenting all 36 geons, we discerned that LB coded only 12 (vs. 36) distinct visual primitives (geons), apparently reflective of human visual systems generally.

  19. Using auditory-visual speech to probe the basis of noise-impaired consonant-vowel perception in dyslexia and auditory neuropathy

    NASA Astrophysics Data System (ADS)

    Ramirez, Joshua; Mann, Virginia

    2005-08-01

    Both dyslexics and auditory neuropathy (AN) subjects show inferior consonant-vowel (CV) perception in noise, relative to controls. To better understand these impairments, natural acoustic speech stimuli that were masked in speech-shaped noise at various intensities were presented to dyslexic, AN, and control subjects either in isolation or accompanied by visual articulatory cues. AN subjects were expected to benefit from the pairing of visual articulatory cues and auditory CV stimuli, provided that their speech perception impairment reflects a relatively peripheral auditory disorder. Assuming that dyslexia reflects a general impairment of speech processing rather than a disorder of audition, dyslexics were not expected to similarly benefit from an introduction of visual articulatory cues. The results revealed an increased effect of noise masking on the perception of isolated acoustic stimuli by both dyslexic and AN subjects. More importantly, dyslexics showed less effective use of visual articulatory cues in identifying masked speech stimuli and lower visual baseline performance relative to AN subjects and controls. Last, a significant positive correlation was found between reading ability and the ameliorating effect of visual articulatory cues on speech perception in noise. These results suggest that some reading impairments may stem from a central deficit of speech processing.

  20. Cortical oscillations related to processing congruent and incongruent grapheme-phoneme pairs.

    PubMed

    Herdman, Anthony T; Fujioka, Takako; Chau, Wilkin; Ross, Bernhard; Pantev, Christo; Picton, Terence W

    2006-05-15

    In this study, we investigated changes in cortical oscillations following congruent and incongruent grapheme-phoneme stimuli. Hiragana graphemes and phonemes were simultaneously presented as congruent or incongruent audiovisual stimuli to native Japanese-speaking participants. The discriminative reaction time was 57 ms shorter for congruent than incongruent stimuli. Analysis of MEG responses using synthetic aperture magnetometry (SAM) revealed that congruent stimuli evoked larger 2-10 Hz activity in the left auditory cortex within the first 250 ms after stimulus onset, and smaller 2-16 Hz activity in bilateral visual cortices between 250 and 500 ms. These results indicate that congruent visual input can modify cortical activity in the left auditory cortex.

  1. Effects of spatial frequency and location of fearful faces on human amygdala activity.

    PubMed

    Morawetz, Carmen; Baudewig, Juergen; Treue, Stefan; Dechent, Peter

    2011-01-31

    Facial emotion perception plays a fundamental role in interpersonal social interactions. Images of faces contain visual information at various spatial frequencies. The amygdala has previously been reported to be preferentially responsive to low-spatial frequency (LSF) rather than to high-spatial frequency (HSF) filtered images of faces presented at the center of the visual field. Furthermore, it has been proposed that the amygdala might be especially sensitive to affective stimuli in the periphery. In the present study we investigated the impact of spatial frequency and stimulus eccentricity on face processing in the human amygdala and fusiform gyrus using functional magnetic resonance imaging (fMRI). The spatial frequencies of pictures of fearful faces were filtered to produce images that retained only LSF or HSF information. Facial images were presented either in the left or right visual field at two different eccentricities. In contrast to previous findings, we found that the amygdala responds to LSF and HSF stimuli in a similar manner regardless of the location of the affective stimuli in the visual field. Furthermore, the fusiform gyrus did not show differential responses to spatial frequency filtered images of faces. Our findings argue against the view that LSF information plays a crucial role in the processing of facial expressions in the amygdala and of a higher sensitivity to affective stimuli in the periphery. Copyright © 2010 Elsevier B.V. All rights reserved.
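
    The LSF/HSF manipulation can be illustrated with a generic Gaussian spatial-frequency split; the sketch below is an assumed implementation (the cutoff value and the filtering method are not taken from the paper):

    ```python
    # Split a grayscale face image into low- and high-spatial-frequency
    # versions; sigma (the effective cutoff) is an illustrative assumption.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def split_spatial_frequencies(img, sigma=8.0):
        """Return (LSF, HSF) components of a 2-D grayscale image array."""
        img = img.astype(float)
        lsf = gaussian_filter(img, sigma=sigma)  # Gaussian blur = low-pass
        hsf = img - lsf                          # residual = high-pass
        return lsf, hsf
    ```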

  2. Behavioural benefits of multisensory processing in ferrets.

    PubMed

    Hammond-Kenny, Amy; Bajo, Victoria M; King, Andrew J; Nodal, Fernando R

    2017-01-01

    Enhanced detection and discrimination, along with faster reaction times, are the most typical behavioural manifestations of the brain's capacity to integrate multisensory signals arising from the same object. In this study, we examined whether multisensory behavioural gains are observable across different components of the localization response that are potentially under the command of distinct brain regions. We measured the ability of ferrets to localize unisensory (auditory or visual) and spatiotemporally coincident auditory-visual stimuli of different durations that were presented from one of seven locations spanning the frontal hemifield. During the localization task, we recorded the head movements made following stimulus presentation, as a metric for assessing the initial orienting response of the ferrets, as well as the subsequent choice of which target location to approach to receive a reward. Head-orienting responses to auditory-visual stimuli were more accurate and faster than those made to visual but not auditory targets, suggesting that these movements were guided principally by sound alone. In contrast, approach-to-target localization responses were more accurate and faster to spatially congruent auditory-visual stimuli throughout the frontal hemifield than to either visual or auditory stimuli alone. Race model inequality analysis of head-orienting reaction times and approach-to-target response times indicates that different processes, probability summation and neural integration, respectively, are likely to be responsible for the effects of multisensory stimulation on these two measures of localization behaviour. © 2016 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
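
    For reference, the race model inequality invoked here is Miller's bound (the standard formulation, not quoted from the record): if redundant-target responses arise from probability summation between independent unisensory processes, the audio-visual reaction-time distribution can never exceed the sum of the unisensory distributions,

    ```latex
    % Miller's race model inequality, required to hold at every time t
    % if probability summation alone explains the redundancy gain.
    P(\mathrm{RT}_{AV} \le t) \;\le\; P(\mathrm{RT}_{A} \le t) + P(\mathrm{RT}_{V} \le t)
    ```

    Violations of the bound indicate neural integration, whereas data that respect it, as reported here for head-orienting responses, are consistent with probability summation.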

  3. Neuroimaging investigations of dorsal stream processing and effects of stimulus synchrony in schizophrenia.

    PubMed

    Sanfratello, Lori; Aine, Cheryl; Stephen, Julia

    2018-05-25

    Impairments in auditory and visual processing are common in schizophrenia (SP). In the unisensory realm, visual deficits are primarily noted for the dorsal visual stream. In addition, insensitivity to timing offsets between stimuli is widely reported for SP. The aim of the present study was to test, at the physiological level, differences in dorsal/ventral stream visual processing and timing sensitivity between SP and healthy controls (HC), using MEG and a simple auditory/visual task utilizing a variety of multisensory conditions. The paradigm included all combinations of synchronous/asynchronous and central/peripheral stimuli, yielding 4 task conditions. Both HC and SP groups showed activation in parietal areas (dorsal visual stream) during all multisensory conditions, with parietal areas showing decreased activation for SP relative to HC, and a significantly delayed peak of activation for SP in the intraparietal sulcus (IPS). We also observed a differential effect of stimulus synchrony on the HC and SP parietal responses. Furthermore, a negative correlation was found between SP positive symptoms and activity in IPS. Taken together, our results provide evidence of impairment of the dorsal visual stream in SP during a multisensory task, along with an altered response to timing offsets between presented multisensory stimuli. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Aurally aided visual search performance in a dynamic environment

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Watamaniuk, Scott N. J.; Gilkey, Robert H.

    2008-04-01

    Previous research has repeatedly shown that people can find a visual target significantly faster if spatial (3D) auditory displays direct attention to the corresponding spatial location. However, previous research has only examined searches for static (non-moving) targets in static visual environments. Since motion has been shown to affect visual acuity, auditory acuity, and visual search performance, it is important to characterize aurally-aided search performance in environments that contain dynamic (moving) stimuli. In the present study, visual search performance in both static and dynamic environments is investigated with and without 3D auditory cues. Eight participants searched for a single visual target hidden among 15 distracting stimuli. In the baseline audio condition, no auditory cues were provided. In the 3D audio condition, a virtual 3D sound cue originated from the same spatial location as the target. In the static search condition, the target and distractors did not move. In the dynamic search condition, all stimuli moved on various trajectories at 10 deg/s. The results showed a clear benefit of 3D audio that was present in both static and dynamic environments, suggesting that spatial auditory displays continue to be an attractive option for a variety of aircraft, motor vehicle, and command & control applications.

  5. Dissociating Verbal and Nonverbal Audiovisual Object Processing

    ERIC Educational Resources Information Center

    Hocking, Julia; Price, Cathy J.

    2009-01-01

    This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same…

  6. A geometric method for computing ocular kinematics and classifying gaze events using monocular remote eye tracking in a robotic environment.

    PubMed

    Singh, Tarkeshwar; Perry, Christopher M; Herter, Troy M

    2016-01-26

    Robotic and virtual-reality systems offer tremendous potential for improving assessment and rehabilitation of neurological disorders affecting the upper extremity. A key feature of these systems is that visual stimuli are often presented within the same workspace as the hands (i.e., peripersonal space). Integrating video-based remote eye tracking with robotic and virtual-reality systems can provide an additional tool for investigating how cognitive processes influence visuomotor learning and rehabilitation of the upper extremity. However, remote eye tracking systems typically compute ocular kinematics by assuming eye movements are made in a plane with constant depth (e.g. frontal plane). When visual stimuli are presented at variable depths (e.g. transverse plane), eye movements have a vergence component that may influence reliable detection of gaze events (fixations, smooth pursuits and saccades). To our knowledge, there are no available methods to classify gaze events in the transverse plane for monocular remote eye tracking systems. Here we present a geometrical method to compute ocular kinematics from a monocular remote eye tracking system when visual stimuli are presented in the transverse plane. We then use the obtained kinematics to compute velocity-based thresholds that allow us to accurately identify onsets and offsets of fixations, saccades and smooth pursuits. Finally, we validate our algorithm by comparing the gaze events computed by the algorithm with those obtained from the eye-tracking software and manual digitization. Within the transverse plane, our algorithm reliably differentiates saccades from fixations (static visual stimuli) and smooth pursuits from saccades and fixations when visual stimuli are dynamic. The proposed methods provide advancements for examining eye movements in robotic and virtual-reality systems. Our methods can also be used with other video-based or tablet-based systems in which eye movements are performed in a peripersonal plane with variable depth.
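
    A minimal sketch of the velocity-threshold stage described above, with the geometric vergence correction omitted; the threshold values and the three-way labelling rule are illustrative assumptions rather than the authors' parameters:

    ```python
    # Label gaze samples as fixations, smooth pursuits, or saccades from
    # angular speed; thresholds are assumed, literature-style values.
    import numpy as np

    SACCADE_VEL = 30.0   # deg/s, assumed saccade onset threshold
    FIXATION_VEL = 5.0   # deg/s, assumed fixation ceiling

    def label_gaze_events(gaze_deg, fs):
        """gaze_deg: (n_samples, 2) gaze angles in degrees; fs in Hz."""
        # Central differences give deg/sample; multiply by fs for deg/s.
        speed = np.linalg.norm(np.gradient(gaze_deg, axis=0), axis=1) * fs
        return np.where(speed >= SACCADE_VEL, "saccade",
                        np.where(speed <= FIXATION_VEL, "fixation", "pursuit"))
    ```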

  7. Honeybees in a virtual reality environment learn unique combinations of colour and shape.

    PubMed

    Rusch, Claire; Roth, Eatai; Vinauger, Clément; Riffell, Jeffrey A

    2017-10-01

    Honeybees are well-known models for the study of visual learning and memory. Whereas most of our knowledge of learned responses comes from experiments using free-flying bees, a tethered preparation would allow fine-scale control of the visual stimuli as well as accurate characterization of the learned responses. Unfortunately, conditioning procedures using visual stimuli in tethered bees have been limited in their efficacy. In this study, using a novel virtual reality environment and a differential training protocol in tethered walking bees, we show that the majority of honeybees learn visual stimuli, and need only six paired training trials to learn the stimulus. We found that bees readily learn visual stimuli that differ in both shape and colour. However, bees learn certain components over others (colour versus shape), and visual stimuli are learned in a non-additive manner with the interaction of specific colour and shape combinations being crucial for learned responses. To better understand which components of the visual stimuli the bees learned, the shape-colour association of the stimuli was reversed either during or after training. Results showed that maintaining the visual stimuli in training and testing phases was necessary to elicit visual learning, suggesting that bees learn multiple components of the visual stimuli. Together, our results demonstrate a protocol for visual learning in restrained bees that provides a powerful tool for understanding how components of a visual stimulus elicit learned responses as well as elucidating how visual information is processed in the honeybee brain. © 2017. Published by The Company of Biologists Ltd.

  8. Visual examination apparatus

    NASA Technical Reports Server (NTRS)

    Haines, R. F.; Fitzgerald, J. W.; Rositano, S. A. (Inventor)

    1976-01-01

    An automated visual examination apparatus for measuring visual sensitivity and mapping blind spot location is described. It includes a projection system for displaying to a patient a series of visual stimuli, a response switch enabling the patient to indicate his reaction to the stimuli, and a recording system responsive to both the visual stimuli per se and the patient's response. The recording system thereby provides a correlated permanent record of both stimuli and response from which a substantive and readily apparent visual evaluation can be made.

  9. Toward a reliable gaze-independent hybrid BCI combining visual and natural auditory stimuli.

    PubMed

    Barbosa, Sara; Pires, Gabriel; Nunes, Urbano

    2016-03-01

    Brain-computer interfaces (BCIs) are one of the last communication options for patients in the locked-in state (LIS). For complete LIS patients, interfaces must be gaze-independent because of their eye impairment. However, unimodal gaze-independent approaches typically perform substantially worse than gaze-dependent approaches. Combining multimodal stimuli has been suggested as a viable way to increase users' performance. A hybrid visual and auditory (HVA) P300-based BCI combining simultaneous visual and auditory stimulation is proposed. Auditory stimuli are based on natural, meaningful spoken words, increasing stimulus discrimination and decreasing the user's mental effort in associating stimuli with symbols. The visual part of the interface is covertly controlled, ensuring gaze-independence. Four conditions were experimentally tested by 10 healthy participants: visual overt (VO), visual covert (VC), auditory (AU) and covert HVA. Average online accuracy for the hybrid approach was 85.3%, more than 32% above the VC and AU approaches. Questionnaire results indicate that the HVA approach was the least demanding gaze-independent interface. Interestingly, the P300 grand average for the HVA approach coincides almost perfectly with the sum of the P300s evoked separately by the VC and AU tasks. The proposed HVA-BCI is the first solution to simultaneously embed natural spoken words and visual words to provide a communication lexicon. Online accuracy and task demand of the approach compare favorably with the state of the art. The proposed approach shows that the simultaneous combination of visual covert control and auditory modalities can effectively improve the performance of gaze-independent BCIs. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli

    PubMed Central

    Kamke, Marc R.; Harris, Jill

    2014-01-01

    The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality. PMID:24920945

  11. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli.

    PubMed

    Kamke, Marc R; Harris, Jill

    2014-01-01

    The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality.

  12. Fusion Prevents the Redundant Signals Effect: Evidence from Stereoscopically Presented Stimuli

    ERIC Educational Resources Information Center

    Schroter, Hannes; Fiedler, Anja; Miller, Jeff; Ulrich, Rolf

    2011-01-01

    In a simple reaction time (RT) experiment, visual stimuli were stereoscopically presented either to one eye (single stimulation) or to both eyes (redundant stimulation), with brightness matched for single and redundant stimulations. Redundant stimulation resulted in two separate percepts when noncorresponding retinal areas were stimulated, whereas…

  13. Distributed Fading Memory for Stimulus Properties in the Primary Visual Cortex

    PubMed Central

    Singer, Wolf; Maass, Wolfgang

    2009-01-01

    It is currently not known how distributed neuronal responses in early visual areas carry stimulus-related information. We made multielectrode recordings from cat primary visual cortex and applied methods from machine learning to analyze the temporal evolution of stimulus-related information in the spiking activity of large ensembles of around 100 neurons. We used sequences of up to three different visual stimuli (letters of the alphabet) presented for 100 ms and with intervals of 100 ms or larger. Most of the information about visual stimuli extractable by sophisticated methods of machine learning, i.e., support vector machines with nonlinear kernel functions, was also extractable by simple linear classification such as can be achieved by individual neurons. New stimuli did not erase information about previous stimuli. The responses to the most recent stimulus contained about equal amounts of information about both this and the preceding stimulus. This information was encoded both in the discharge rates (response amplitudes) of the ensemble of neurons and, when using short time constants for integration (e.g., 20 ms), in the precise timing of individual spikes (≤∼20 ms), and it persisted for several hundred milliseconds beyond stimulus offset. The results indicate that the network from which we recorded is endowed with fading memory and is capable of performing online computations utilizing information about temporally sequential stimuli. This result challenges models assuming frame-by-frame analyses of sequential inputs. PMID:20027205
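
    As a rough illustration of the comparison reported above (a simple linear readout versus a nonlinear-kernel SVM on ensemble spike counts), the sketch below decodes stimulus identity from synthetic population responses; the data generation and all parameter values are our assumptions, not the recorded dataset:

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC, LinearSVC

    rng = np.random.default_rng(0)
    n_trials, n_neurons = 200, 100
    labels = rng.integers(0, 3, n_trials)            # three "letter" stimuli
    tuning = rng.normal(0.0, 1.0, (3, n_neurons))    # synthetic class means
    X = tuning[labels] + rng.normal(0.0, 1.0, (n_trials, n_neurons))  # noisy counts

    linear_acc = cross_val_score(LinearSVC(dual=False), X, labels, cv=5).mean()
    kernel_acc = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean()
    # Similar scores for both decoders would echo the abstract's finding
    print(f"linear readout: {linear_acc:.2f}   rbf-kernel SVM: {kernel_acc:.2f}")
    ```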

  14. Attentional load and sensory competition in human vision: modulation of fMRI responses by load at fixation during task-irrelevant stimulation in the peripheral visual field.

    PubMed

    Schwartz, Sophie; Vuilleumier, Patrik; Hutton, Chloe; Maravita, Angelo; Dolan, Raymond J; Driver, Jon

    2005-06-01

    Perceptual suppression of distractors may depend on both endogenous and exogenous factors, such as attentional load of the current task and sensory competition among simultaneous stimuli, respectively. We used functional magnetic resonance imaging (fMRI) to compare these two types of attentional effects and examine how they may interact in the human brain. We varied the attentional load of a visual monitoring task performed on a rapid stream at central fixation without altering the central stimuli themselves, while measuring the impact on fMRI responses to task-irrelevant peripheral checkerboards presented either unilaterally or bilaterally. Activations in visual cortex for irrelevant peripheral stimulation decreased with increasing attentional load at fixation. This relative decrease was present even in V1, but became larger for successive visual areas through to V4. Decreases in activation for contralateral peripheral checkerboards due to higher central load were more pronounced within retinotopic cortex corresponding to 'inner' peripheral locations relatively near the central targets than for more eccentric 'outer' locations, demonstrating a predominant suppression of nearby surround rather than strict 'tunnel vision' during higher task load at central fixation. Contralateral activations for peripheral stimulation in one hemifield were reduced by competition with concurrent stimulation in the other hemifield only in inferior parietal cortex, not in retinotopic areas of occipital visual cortex. In addition, central attentional load interacted with competition due to bilateral versus unilateral peripheral stimuli specifically in posterior parietal and fusiform regions. These results reveal that task-dependent attentional load, and interhemifield stimulus-competition, can produce distinct influences on the neural responses to peripheral visual stimuli within the human visual system. These distinct mechanisms in selective visual processing may be integrated within posterior parietal areas, rather than earlier occipital cortex.

  15. Visual examination apparatus

    NASA Technical Reports Server (NTRS)

    Haines, R. F.; Fitzgerald, J. W.; Rositano, S. A. (Inventor)

    1973-01-01

    An automated visual examination apparatus for measuring visual sensitivity and mapping blind spot location is described. The apparatus includes a projection system for displaying to a patient a series of visual stimuli, a response switch enabling him to indicate his reaction to the stimuli, and a recording system responsive to both the visual stimuli per se and the patient's response. The recording system provides a correlated permanent record of both stimuli and response from which a substantive and readily apparent visual evaluation can be made.

  16. Examining the short term effects of emotion under an Adaptation Level Theory model of tinnitus perception.

    PubMed

    Durai, Mithila; O'Keeffe, Mary G; Searchfield, Grant D

    2017-03-01

    Existing evidence suggests a strong relationship between tinnitus and emotion. The objective of this study was to examine the effects of short-term emotional changes along valence and arousal dimensions on tinnitus outcomes. Emotional stimuli were presented in two different modalities: auditory and visual. The authors hypothesized that (1) negative valence (unpleasant) stimuli and/or high arousal stimuli will lead to greater tinnitus loudness and annoyance than positive valence and/or low arousal stimuli, and (2) auditory emotional stimuli, which are in the same modality as the tinnitus, will exhibit a greater effect on tinnitus outcome measures than visual stimuli. Auditory and visual emotive stimuli were administered to 22 participants (12 females and 10 males) with chronic tinnitus, recruited via email invitations sent out to the University of Auckland Tinnitus Research Volunteer Database. Emotional stimuli were taken from the International Affective Digital Sounds, Version 2 (IADS-2) and the International Affective Picture System (IAPS) (Bradley and Lang, 2007a, 2007b). The Emotion Regulation Questionnaire (Gross and John, 2003) was administered alongside subjective ratings of tinnitus loudness and annoyance, and psychoacoustic sensation-level matches to external sounds. Males had significantly different emotional regulation scores than females. Negative valence emotional auditory stimuli led to higher tinnitus loudness ratings in males and females and higher annoyance ratings in males only; loudness matches of tinnitus remained unchanged. The visual stimuli did not have an effect on tinnitus ratings. The results are discussed relative to the Adaptation Level Theory Model of Tinnitus. The results indicate that the negative valence dimension of emotion is associated with increased tinnitus magnitude judgements and that gender effects may also be present, but only when the emotional stimulus is in the auditory modality. Sounds with emotional associations may be used in sound therapy for tinnitus relief; it is of interest to determine whether the emotional component of sound treatments can play a role in reversing the negative responses discussed in this paper. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Development of visual cortical function in infant macaques: A BOLD fMRI study

    PubMed Central

    Meeson, Alan; Munk, Matthias H. J.; Kourtzi, Zoe; Movshon, J. Anthony; Logothetis, Nikos K.; Kiorpes, Lynne

    2017-01-01

    Functional brain development is not well understood. In the visual system, neurophysiological studies in nonhuman primates show quite mature neuronal properties near birth, although visual function is itself quite immature and continues to develop over many months or years after birth. Our goal was to assess the relative development of two main visual processing streams, dorsal and ventral, using BOLD fMRI in an attempt to understand the global mechanisms that support the maturation of visual behavior. Seven infant macaque monkeys (Macaca mulatta) were repeatedly scanned, while anesthetized, over an age range of 102 to 1431 days. Large rotating checkerboard stimuli induced BOLD activation in visual cortices at early ages. Additionally, we used static and dynamic Glass pattern stimuli to probe BOLD responses in primary visual cortex and two extrastriate areas: V4 and MT-V5. The resulting activations were analyzed with standard GLM and multivoxel pattern analysis (MVPA) approaches. We analyzed three contrasts: Glass pattern present/absent, static/dynamic Glass pattern presentation, and structured/random Glass pattern form. For both GLM and MVPA approaches, robust coherent BOLD activation appeared relatively late in comparison to the maturation of known neuronal properties and the development of behavioral sensitivity to Glass patterns. Robust differential activity to Glass pattern present/absent and dynamic/static stimulus presentation appeared first in V1, followed by V4 and MT-V5 at older ages; there was no reliable distinction between the two extrastriate areas. A similar pattern of results was obtained with the two analysis methods, although MVPA showed reliable differential responses emerging at later ages than GLM. Although BOLD responses to large visual stimuli are detectable, our results with more refined stimuli indicate that global BOLD activity changes as behavioral performance matures. This reflects a hierarchical development of the visual pathways. Since fMRI BOLD reflects neural activity on a population level, our results indicate that, although individual neurons might be adult-like, a longer maturation process takes place at the population level. PMID:29145469

  18. Development of a Bayesian Estimator for Audio-Visual Integration: A Neurocomputational Study

    PubMed Central

    Ursino, Mauro; Crisafulli, Andrea; di Pellegrino, Giuseppe; Magosso, Elisa; Cuppini, Cristiano

    2017-01-01

    The brain integrates information from different sensory modalities to generate a coherent and accurate percept of external events. Several experimental studies suggest that this integration follows the principle of Bayesian estimation. However, the neural mechanisms responsible for this behavior, and its development in a multisensory environment, are still insufficiently understood. We recently presented a neural network model of audio-visual integration (Neural Computation, 2017) to investigate how a Bayesian estimator can spontaneously develop from the statistics of external stimuli. The model assumes the presence of two topologically organized unimodal areas (auditory and visual). Neurons in each area receive an input from the external environment, computed as the inner product of the sensory-specific stimulus and the receptive-field synapses, and a cross-modal input from neurons of the other modality. Based on sensory experience, synapses were trained via Hebbian potentiation and a decay term. The aim of this work is to improve the previous model by including a more realistic distribution of visual stimuli: visual stimuli have a higher spatial accuracy at the central azimuthal coordinate and a lower accuracy at the periphery. Moreover, their prior probability is higher at the center and decreases toward the periphery. Simulations show that, after training, the receptive fields of visual and auditory neurons shrink to reproduce the accuracy of the input (both at the center and at the periphery in the visual case), thus realizing the likelihood estimate of unimodal spatial position. Moreover, the preferred positions of visual neurons contract toward the center, thus encoding the prior probability of the visual input. Finally, a prior probability of the co-occurrence of audio-visual stimuli is encoded in the cross-modal synapses. The model is able to simulate the main properties of a Bayesian estimator and to reproduce behavioral data in all conditions examined. In particular, in unisensory conditions the visual estimates exhibit a bias toward the fovea, which increases with the level of noise. In cross-modal conditions, the SD of the estimates decreases when using congruent audio-visual stimuli, and a ventriloquism effect becomes evident in the case of spatially disparate stimuli. Moreover, the ventriloquism decreases with eccentricity. PMID:29046631
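
    A compact sketch of the learning scheme summarized above: feedforward drive as an inner product with receptive-field synapses, trained by Hebbian potentiation with a decay term, under a center-biased, center-sharpened stimulus distribution. Every numeric value, and the crude competition step, are illustrative assumptions rather than the published model:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_neurons = n_inputs = 180                 # one unit per degree of azimuth
    W = rng.uniform(0.0, 0.1, (n_neurons, n_inputs))   # receptive-field synapses

    eta, decay = 0.05, 0.01
    for _ in range(5000):
        # Center-biased prior: stimuli cluster around azimuth 90 deg
        pos = int(np.clip(rng.normal(90, 30), 0, n_inputs - 1))
        width = 3.0 + abs(pos - 90) / 10.0     # coarser input at the periphery
        s = np.exp(-0.5 * ((np.arange(n_inputs) - pos) / width) ** 2)
        u = W @ s                              # inner-product feedforward input
        r = np.maximum(u - u.mean(), 0.0)      # crude competition / rectification
        W += eta * np.outer(r, s) - decay * r[:, None] * W   # Hebbian + decay
        W = np.clip(W, 0.0, None)
    ```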

  19. Distributed and Dynamic Neural Encoding of Multiple Motion Directions of Transparently Moving Stimuli in Cortical Area MT

    PubMed Central

    Xiao, Jianbo

    2015-01-01

    Segmenting visual scenes into distinct objects and surfaces is a fundamental visual function. To better understand the underlying neural mechanism, we investigated how neurons in the middle temporal cortex (MT) of macaque monkeys represent overlapping random-dot stimuli moving transparently in slightly different directions. It has been shown that the neuronal response elicited by two stimuli approximately follows the average of the responses elicited by the constituent stimulus components presented alone. In this scheme of response pooling, the ability to segment two simultaneously presented motion directions is limited by the width of the tuning curve to motion in a single direction. We found that, although the population-averaged neuronal tuning showed response averaging, subgroups of neurons showed distinct patterns of response tuning and were capable of representing component directions that were separated by a small angle—less than the tuning width to unidirectional stimuli. One group of neurons preferentially represented the component direction at a specific side of the bidirectional stimuli, weighting one stimulus component more strongly than the other. Another group of neurons pooled the component responses nonlinearly and showed two separate peaks in their tuning curves even when the average of the component responses was unimodal. We also show for the first time that the direction tuning of MT neurons evolved from initially representing the vector-averaged direction of slightly different stimuli to gradually representing the component directions. Our results reveal important neural processes underlying image segmentation and suggest that information about slightly different stimulus components is computed dynamically and distributed across neurons. SIGNIFICANCE STATEMENT Natural scenes often contain multiple entities. The ability to segment visual scenes into distinct objects and surfaces is fundamental to sensory processing and is crucial for generating the perception of our environment. Because cortical neurons are broadly tuned to a given visual feature, segmenting two stimuli that differ only slightly is a challenge for the visual system. In this study, we discovered that many neurons in the visual cortex are capable of representing individual components of slightly different stimuli by selectively and nonlinearly pooling the responses elicited by the stimulus components. We also show for the first time that the neural representation of individual stimulus components developed over a period of ∼70–100 ms, revealing a dynamic process of image segmentation. PMID:26658869
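
    The response-pooling baseline the abstract refers to can be stated compactly (notation ours): for a neuron i with single-direction tuning r_i(θ), the response to a bidirectional stimulus approximately averages the component responses,

    ```latex
    % Averaging baseline for transparent motion (notation ours):
    % r_i(\theta) = response of neuron i to a single direction \theta
    r_i(\theta_1, \theta_2) \approx \tfrac{1}{2}\left[ r_i(\theta_1) + r_i(\theta_2) \right]
    ```

    Under pure averaging, two components closer together than the tuning width produce a unimodal, unresolvable profile, which is the limit that the selectively and nonlinearly pooling subgroups of neurons described above escape.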

  20. TypingSuite: Integrated Software for Presenting Stimuli, and Collecting and Analyzing Typing Data

    ERIC Educational Resources Information Center

    Mazerolle, Erin L.; Marchand, Yannick

    2015-01-01

    Research into typing patterns has broad applications in both psycholinguistics and biometrics (i.e., improving security of computer access via each user's unique typing patterns). We present a new software package, TypingSuite, which can be used for presenting visual and auditory stimuli, collecting typing data, and summarizing and analyzing the…

  1. Auditory and visual spatial impression: Recent studies of three auditoria

    NASA Astrophysics Data System (ADS)

    Nguyen, Andy; Cabrera, Densil

    2004-10-01

    Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression, thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.
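
    The auralization step described above (convolving an anechoic fragment with measured binaural impulse responses) might look like the following sketch; the function and variable names are ours, and scipy is assumed to be available:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def auralize(anechoic, brir_left, brir_right):
        """Simulate listening at a measured seat by convolving a mono anechoic
        recording with the left/right binaural room impulse responses."""
        left = fftconvolve(anechoic, brir_left)
        right = fftconvolve(anechoic, brir_right)
        stereo = np.stack([left, right], axis=1)     # (samples, 2) stereo signal
        return stereo / np.max(np.abs(stereo))       # normalize to avoid clipping
    ```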

  2. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals: Effects of Adding Visual Cues to Auditory Speech Stimuli.

    PubMed

    Moradi, Shahram; Lidestam, Björn; Rönnberg, Jerker

    2016-06-17

    The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. © The Author(s) 2016.
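
    A common operationalization of the isolation point in gating studies, consistent with the definition above (the shortest gate at which identification becomes and stays correct); this sketch is our illustration, not the authors' scoring code:

    ```python
    import numpy as np

    def isolation_point(correct_by_gate, gate_ms):
        """Return the first gate duration from which identification remains
        correct for all longer gates, or None if it never stabilizes."""
        correct = np.asarray(correct_by_gate, dtype=bool)
        for i in range(correct.size):
            if correct[i:].all():
                return gate_ms[i]
        return None

    # Example: responses across six gates of increasing duration
    print(isolation_point([False, False, True, False, True, True],
                          gate_ms=[40, 80, 120, 160, 200, 240]))   # -> 200
    ```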

  3. Babies know bad dancing when they see it: Older but not younger infants discriminate between synchronous and asynchronous audiovisual musical displays.

    PubMed

    Hannon, Erin E; Schachner, Adena; Nave-Blodgett, Jessica E

    2017-07-01

    Movement to music is a universal human behavior, yet little is known about how observers perceive audiovisual synchrony in complex musical displays such as a person dancing to music, particularly during infancy and childhood. In the current study, we investigated how perception of musical audiovisual synchrony develops over the first year of life. We habituated infants to a video of a person dancing to music and subsequently presented videos in which the visual track was matched (synchronous) or mismatched (asynchronous) with the audio track. In a visual-only control condition, we presented the same visual stimuli with no sound. In Experiment 1, we found that older infants (8-12 months) exhibited a novelty preference for the mismatched movie when both auditory information and visual information were available and showed no preference when only visual information was available. By contrast, younger infants (5-8 months) in Experiment 2 did not discriminate matching stimuli from mismatching stimuli. This suggests that the ability to perceive musical audiovisual synchrony may develop during the second half of the first year of infancy. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Is Trypophobia a Phobia?

    PubMed

    Can, Wang; Zhuoran, Zhao; Zheng, Jin

    2017-04-01

    In the past 10 years, thousands of people have claimed to be affected by trypophobia, which is the fear of objects with small holes. Recent research suggests that people do not fear the holes; rather, images of clustered holes, which share basic visual characteristics with venomous organisms, lead to nonconscious fear. In the present study, both self-reported measures and the Preschool Single Category Implicit Association Test were adapted for use with preschoolers to investigate whether discomfort related to trypophobic stimuli was grounded in their visual features or based on a nonconsciously associated fear of venomous animals. The results indicated that trypophobic stimuli were associated with discomfort in children. This discomfort seemed to be related to the typical visual characteristics and pattern properties of trypophobic stimuli rather than to nonconscious associations with venomous animals. The association between trypophobic stimuli and venomous animals vanished when the typical visual characteristics of trypophobic features were removed from colored photos of venomous animals. Thus, the discomfort felt toward trypophobic images might be an instinctive response to their visual characteristics rather than the result of a learned but nonconscious association with venomous animals. Therefore, it is questionable whether it is justified to legitimize trypophobia.

  5. Shape and color conjunction stimuli are represented as bound objects in visual working memory.

    PubMed

    Luria, Roy; Vogel, Edward K

    2011-05-01

    The integrated object view of visual working memory (WM) argues that objects (rather than features) are the building blocks of visual WM, so that adding an extra feature to an object does not result in any extra cost to WM capacity. Alternative views have shown that complex objects consume additional WM storage capacity, suggesting that they may not be represented as bound objects. Additionally, it has been argued that two features from the same dimension (i.e., color-color) do not form an integrated object in visual WM. This led some to argue for a "weak" object view of visual WM. We used the contralateral delay activity (CDA) as an electrophysiological marker of WM capacity to test these alternative hypotheses against the integrated object account. In two experiments we presented complex stimuli and color-color conjunction stimuli, and compared performance in displays that had one object but varying degrees of feature complexity. The results supported the integrated object account by showing that the CDA amplitude corresponded to the number of objects regardless of the number of features within each object, even for complex objects or color-color conjunction stimuli. Copyright © 2010 Elsevier Ltd. All rights reserved.

  6. Cell-assembly coding in several memory processes.

    PubMed

    Sakurai, Y

    1998-01-01

    The present paper discusses why the cell assembly, i.e., an ensemble population of neurons with flexible functional connections, is a tenable view of the basic code for information processes in the brain. The main properties indicating the reality of cell-assembly coding are the overlap of neurons among different assemblies and the connection dynamics within and among the assemblies. The former can be detected as multiple functions of individual neurons in processing different kinds of information: individual neurons appear to be involved in multiple information processes. The latter can be detected as changes of functional synaptic connections in processing different kinds of information: correlations of activity among some of the recorded neurons appear to change across information processes. Recent experiments have compared several different memory processes (tasks) and detected these two main properties, indicating cell-assembly coding of memory in the working brain. The first experiment compared different types of processing of identical stimuli, i.e., working memory and reference memory of auditory stimuli. The second experiment compared identical processes of different types of stimuli, i.e., discriminations of simple auditory, simple visual, and configural auditory-visual stimuli. The third experiment compared identical processes of different types of stimuli with or without temporal processing of stimuli, i.e., discriminations of elemental auditory, configural auditory-visual, and sequential auditory-visual stimuli. Some possible features of cell-assembly coding, especially "dual coding" by individual neurons and cell assemblies, are discussed for future experimental approaches. Copyright 1998 Academic Press.

  7. Multisensory brand search: How the meaning of sounds guides consumers' visual attention.

    PubMed

    Knoeferle, Klemens M; Knoeferle, Pia; Velasco, Carlos; Spence, Charles

    2016-06-01

    Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  8. Influence of auditory and audiovisual stimuli on the right-left prevalence effect.

    PubMed

    Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim

    2014-01-01

    When auditory stimuli are used in two-dimensional spatial compatibility tasks, where the stimulus and response configurations vary along the horizontal and vertical dimensions simultaneously, a right-left prevalence effect occurs in which horizontal compatibility dominates over vertical compatibility. The right-left prevalence effects obtained with auditory stimuli are typically larger than those obtained with visual stimuli, even though less attention should be demanded from the horizontal dimension in auditory processing. In the present study, we examined whether auditory or visual dominance occurs when the two-dimensional stimuli are audiovisual, as well as whether there is cross-modal facilitation of response selection for the horizontal and vertical dimensions. We also examined whether there is an additional benefit of adding a pitch dimension to the auditory stimulus to facilitate vertical coding through use of the spatial-musical association of response codes (SMARC) effect, where pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than visual stimuli. Neutral, non-pitch coded, audiovisual stimuli did not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension was not in terms of influencing response selection on a trial-to-trial basis, but in terms of altering the salience of the task environment. Taken together, these findings indicate that in the absence of salient vertical cues, auditory and audiovisual stimuli tend to be coded along the horizontal dimension and vision tends to dominate audition in this two-dimensional spatial stimulus-response task.

  9. Multisensory integration across the senses in young and old adults

    PubMed Central

    Mahoney, Jeannette R.; Li, Po Ching Clara; Oh-Park, Mooyeon; Verghese, Joe; Holtzer, Roee

    2011-01-01

    Stimuli are processed concurrently and across multiple sensory inputs. Here we directly compared the effect of multisensory integration (MSI) on reaction time across three paired sensory inputs in eighteen young (M=19.17 yrs) and eighteen old (M=76.44 yrs) individuals. Participants were determined to be non-demented and without any medical or psychiatric conditions that would affect their performance. Participants responded to randomly presented unisensory (auditory, visual, somatosensory) stimuli and three paired sensory inputs consisting of auditory-somatosensory (AS), auditory-visual (AV), and visual-somatosensory (VS) stimuli. Results revealed that reaction times (RTs) to all multisensory pairings were significantly faster than those elicited in the constituent unisensory conditions across age groups; findings that could not be accounted for by simple probability summation. Both young and old participants responded the fastest to multisensory pairings containing somatosensory input. Compared to younger adults, older adults demonstrated a significantly greater RT benefit when processing concurrent VS information. In terms of co-activation, older adults demonstrated a significant increase in the magnitude of visual-somatosensory co-activation (i.e., multisensory integration), while younger adults demonstrated a significant increase in the magnitude of auditory-visual and auditory-somatosensory co-activation. This study provides the first evidence in support of the facilitative effect of pairing somatosensory with visual stimuli in older adults. PMID:22024545
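
    The probability-summation check mentioned above is commonly carried out with Miller's (1982) race-model inequality, P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t); the sketch below applies it to synthetic reaction times and is our illustration, not the study's analysis code:

    ```python
    import numpy as np

    def race_model_violation(rt_a, rt_v, rt_av, qs=np.linspace(0.05, 0.95, 19)):
        """Positive values indicate multisensory RTs faster than the
        probability-summation bound P(A<=t) + P(V<=t) (Miller, 1982)."""
        t = np.quantile(np.concatenate([rt_a, rt_v, rt_av]), qs)
        cdf = lambda rt: np.array([np.mean(rt <= x) for x in t])
        bound = np.minimum(cdf(rt_a) + cdf(rt_v), 1.0)
        return cdf(rt_av) - bound

    rng = np.random.default_rng(2)
    rt_a = rng.normal(450, 60, 500)    # synthetic unisensory auditory RTs (ms)
    rt_v = rng.normal(430, 60, 500)    # synthetic unisensory visual RTs (ms)
    rt_av = rng.normal(370, 50, 500)   # synthetic multisensory RTs (ms)
    print(np.round(race_model_violation(rt_a, rt_v, rt_av), 2))
    ```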

  10. Pitting temporal against spatial integration in schizophrenic patients.

    PubMed

    Herzog, Michael H; Brand, Andreas

    2009-06-30

    Schizophrenic patients show strong impairments in visual backward masking, possibly caused by deficits at early stages of visual processing. The underlying aberrant mechanisms are not clearly understood. Spatial as well as temporal processing deficits have been proposed. Here, by combining a spatial with a temporal integration paradigm, we provide further evidence that temporal but not spatial processing is impaired in schizophrenic patients. Eleven schizophrenic patients and ten healthy controls were presented with sequences composed of Vernier stimuli. Patients needed significantly longer presentation times for sequentially presented Vernier stimuli to reach a performance level comparable to that of healthy controls (temporal integration deficit). When we added spatial contextual elements to some of the Vernier stimuli, performance changed in a complex but comparable manner in patients and controls (intact spatial integration). Hence, temporal but not spatial processing seems to be deficient in schizophrenia.

  11. Effects of set-size and lateral masking in visual search.

    PubMed

    Põder, Endel

    2004-01-01

    In the present research, the roles of lateral masking and central processing limitations in visual search were studied. Two search conditions were used: (1) target differed from distractors by presence/absence of a simple feature; (2) target differed by relative position of the same components only. The number of displayed stimuli (set-size) and the distance between neighbouring stimuli were varied as independently as possible in order to measure the effect of both. The effect of distance between stimuli (lateral masking) was found to be similar in both conditions. The effect of set-size was much larger for relative position stimuli. The results support the view that perception of relative position of stimulus components is limited mainly by the capacity of central processing.

  12. Beyond arousal and valence: the importance of the biological versus social relevance of emotional stimuli.

    PubMed

    Sakaki, Michiko; Niki, Kazuhisa; Mather, Mara

    2012-03-01

    The present study addressed the hypothesis that emotional stimuli relevant to survival or reproduction (biologically emotional stimuli) automatically affect cognitive processing (e.g., attention, memory), while those relevant to social life (socially emotional stimuli) require elaborative processing to modulate attention and memory. Results of our behavioral studies showed that (1) biologically emotional images hold attention more strongly than do socially emotional images, (2) memory for biologically emotional images was enhanced even with limited cognitive resources, but (3) memory for socially emotional images was enhanced only when people had sufficient cognitive resources at encoding. Neither images' subjective arousal nor their valence modulated these patterns. A subsequent functional magnetic resonance imaging study revealed that biologically emotional images induced stronger activity in the visual cortex and greater functional connectivity between the amygdala and visual cortex than did socially emotional images. These results suggest that the interconnection between the amygdala and visual cortex supports enhanced attention allocation to biological stimuli. In contrast, socially emotional images evoked greater activity in the medial prefrontal cortex (MPFC) and yielded stronger functional connectivity between the amygdala and MPFC than did biological images. Thus, it appears that emotional processing of social stimuli involves elaborative processing requiring frontal lobe activity.

  13. Beyond arousal and valence: The importance of the biological versus social relevance of emotional stimuli

    PubMed Central

    Sakaki, Michiko; Niki, Kazuhisa; Mather, Mara

    2012-01-01

    The present study addressed the hypothesis that emotional stimuli relevant to survival or reproduction (biologically emotional stimuli) automatically affect cognitive processing (e.g., attention; memory), while those relevant to social life (socially emotional stimuli) require elaborative processing to modulate attention and memory. Results of our behavioral studies showed that: a) biologically emotional images hold attention more strongly than socially emotional images, b) memory for biologically emotional images was enhanced even with limited cognitive resources, but c) memory for socially emotional images was enhanced only when people had sufficient cognitive resources at encoding. Neither images’ subjective arousal nor their valence modulated these patterns. A subsequent functional magnetic resonance imaging study revealed that biologically emotional images induced stronger activity in visual cortex and greater functional connectivity between amygdala and visual cortex than did socially emotional images. These results suggest that the interconnection between the amygdala and visual cortex supports enhanced attention allocation to biological stimuli. In contrast, socially emotional images evoked greater activity in medial prefrontal cortex (MPFC) and yielded stronger functional connectivity between amygdala and MPFC than biological images. Thus, it appears that emotional processing of social stimuli involves elaborative processing requiring frontal lobe activity. PMID:21964552

  14. Primary visual response (M100) delays in adolescents with FASD as measured with MEG.

    PubMed

    Coffman, Brian A; Kodituwakku, Piyadasa; Kodituwakku, Elizabeth L; Romero, Lucinda; Sharadamma, Nirupama Muniswamy; Stone, David; Stephen, Julia M

    2013-11-01

    Fetal alcohol spectrum disorders (FASD) are debilitating, with effects of prenatal alcohol exposure persisting into adolescence and adulthood. Complete characterization of FASD is crucial for the development of diagnostic tools and intervention techniques to decrease the high cost to individual families and society of this disorder. In this experiment, we investigated visual system deficits in adolescents (12-21 years) diagnosed with an FASD by measuring the latency of patients' primary visual M100 responses using MEG. We hypothesized that patients with FASD would demonstrate delayed primary visual responses compared to controls. M100 latencies were assessed for both FASD patients and age-matched healthy controls for stimuli presented at the fovea (central stimulus) and at the periphery (peripheral stimuli; left or right of the central stimulus) in a saccade task requiring participants to direct their attention and gaze to these stimuli. Source modeling was performed on visual responses to the central and peripheral stimuli, and the latency of the first prominent peak (M100) in the occipital source timecourse was identified. The peak latencies of the M100 responses were delayed in FASD patients for both stimulus types (central and peripheral), but the difference in latency of primary visual responses to central vs. peripheral stimuli was significant only in FASD patients, indicating that, while FASD patients' visual systems are impaired in general, this impairment is more pronounced in the periphery. These results suggest that basic sensory deficits in this population may contribute to sensorimotor integration deficits described previously in this disorder. Copyright © 2012 Wiley Periodicals, Inc.

  15. Visual Phonetic Processing Localized Using Speech and Non-Speech Face Gestures in Video and Point-Light Displays

    PubMed Central

    Bernstein, Lynne E.; Jiang, Jintao; Pantazis, Dimitrios; Lu, Zhong-Lin; Joshi, Anand

    2011-01-01

    The talking face affords multiple types of information. To isolate cortical sites responsible for integrating linguistically relevant visual speech cues, speech and non-speech face gestures were presented in natural video and point-light displays during fMRI scanning at 3.0T. Participants with normal hearing viewed the stimuli and also viewed localizers for the fusiform face area (FFA), the lateral occipital complex (LOC), and the visual motion (V5/MT) regions of interest (ROIs). The FFA, the LOC, and V5/MT were significantly less activated for speech relative to non-speech and control stimuli. Distinct activation of the posterior superior temporal sulcus and the adjacent middle temporal gyrus to speech, independent of media, was obtained in group analyses. Individual analyses showed that speech and non-speech stimuli were associated with adjacent but different activations, with the speech activations more anterior. We suggest that the speech activation area is the temporal visual speech area (TVSA), and that it can be localized with the combination of stimuli used in this study. PMID:20853377

  16. Visual Categorization of Natural Movies by Rats

    PubMed Central

    Vinken, Kasper; Vermaercke, Ben

    2014-01-01

    Visual categorization of complex, natural stimuli has been studied for some time in human and nonhuman primates. Recent interest in the rodent as a model for visual perception, including higher-level functional specialization, leads to the question of how rodents would perform on a categorization task using natural stimuli. To answer this question, rats were trained in a two-alternative forced choice task to discriminate movies containing rats from movies containing other objects and from scrambled movies (ordinate-level categorization). Subsequently, transfer to novel, previously unseen stimuli was tested, followed by a series of control probes. The results show that the animals are capable of acquiring a decision rule by abstracting common features from natural movies to generalize categorization to new stimuli. Control probes demonstrate that they did not use single low-level features, such as motion energy or (local) luminance. Significant generalization was even present with stationary snapshots from untrained movies. The variability within and between training and test stimuli, the complexity of natural movies, and the control experiments and analyses all suggest that a more high-level rule based on more complex stimulus features than local luminance-based cues was used to classify the novel stimuli. In conclusion, natural stimuli can be used to probe ordinate-level categorization in rats. PMID:25100598

  17. Selective attention modulates visual and haptic repetition priming: effects in aging and Alzheimer's disease.

    PubMed

    Ballesteros, Soledad; Reales, José M; Mayas, Julia; Heller, Morton A

    2008-08-01

    In two experiments, we examined the effect of selective attention at encoding on repetition priming in normal aging and Alzheimer's disease (AD) patients for objects presented visually (experiment 1) or haptically (experiment 2). We used a repetition priming paradigm combined with a selective attention procedure at encoding. Reliable priming was found for both young adults and healthy older participants for visually presented pictures (experiment 1) as well as for haptically presented objects (experiment 2). However, this was only found for attended and not for unattended stimuli. The results suggest that independently of the perceptual modality, repetition priming requires attention at encoding and that perceptual facilitation is maintained in normal aging. However, AD patients did not show priming for attended stimuli, or for unattended visual or haptic objects. These findings suggest an early deficit of selective attention in AD. Results are discussed from a cognitive neuroscience approach.

  18. Realigning thunder and lightning: temporal adaptation to spatiotemporally distant events.

    PubMed

    Navarra, Jordi; Fernández-Prieto, Irune; Garcia-Morera, Joel

    2013-01-01

    The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises in these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment (SJ) task was administered right afterwards. Temporal realignment between vision and audition was observed, in both Experiments 1 and 2, when comparing the participants' SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events).

  19. An Ecological Alternative to Snodgrass & Vanderwart: 360 High Quality Colour Images with Norms for Seven Psycholinguistic Variables

    PubMed Central

    Moreno-Martínez, Francisco Javier; Montoro, Pedro R.

    2012-01-01

    This work presents a new set of 360 high quality colour images belonging to 23 semantic subcategories. Two hundred and thirty-six Spanish speakers named the items and also provided data for seven relevant psycholinguistic variables: age of acquisition, familiarity, manipulability, name agreement, typicality and visual complexity. Furthermore, we also present lexical frequency data derived from Internet search hits. Apart from the high number of variables evaluated, each of which is known to affect stimulus processing, this new set presents important advantages over other similar image corpora: (a) this corpus covers a broad number of subcategories and images, which will permit researchers to select stimuli of appropriate difficulty as required (e.g., to deal with problems derived from ceiling effects); (b) the use of coloured stimuli provides a more realistic, ecologically-valid representation of real-life objects. In sum, this set of stimuli provides a useful tool for research on visual object- and word-processing, both in neurological patients and in healthy controls. PMID:22662166

  20. Color categories affect pre-attentive color perception.

    PubMed

    Clifford, Alexandra; Holmes, Amanda; Davies, Ian R L; Franklin, Anna

    2010-10-01

    Categorical perception (CP) of color is the faster and/or more accurate discrimination of colors from different categories than equivalently spaced colors from the same category. Here, we investigate whether color CP at early stages of chromatic processing is independent of top-down modulation from attention. A visual oddball task was employed where frequent and infrequent colored stimuli were either same- or different-category, with chromatic differences equated across conditions. Stimuli were presented peripheral to a central distractor task to elicit an event-related potential (ERP) known as the visual mismatch negativity (vMMN). The vMMN is an index of automatic and pre-attentive visual change detection arising from generating loci in visual cortices. The results revealed a greater vMMN for different-category than same-category change detection when stimuli appeared in the lower visual field, and an absence of attention-related ERP components. The findings provide the first clear evidence for an automatic and pre-attentive categorical code for color. Copyright © 2010 Elsevier B.V. All rights reserved.

  1. The effect of viewing speech on auditory speech processing is different in the left and right hemispheres.

    PubMed

    Davis, Chris; Kislyuk, Daniel; Kim, Jeesun; Sams, Mikko

    2008-11-25

    We used whole-head magnetoencephalography (MEG) to record changes in neuromagnetic N100m responses generated in the left and right auditory cortex as a function of the match between visual and auditory speech signals. Stimuli were auditory-only (AO) and auditory-visual (AV) presentations of /pi/, /ti/ and /vi/. Three types of intensity-matched auditory stimuli were used: intact speech (Normal), frequency band filtered speech (Band) and speech-shaped white noise (Noise). The behavioural task was to detect the /vi/ syllables, which comprised 12% of stimuli. N100m responses were measured to averaged /pi/ and /ti/ stimuli. Behavioural data showed that identification of the stimuli was faster and more accurate for Normal than for Band stimuli, and for Band than for Noise stimuli. Reaction times were faster for AV than AO stimuli. MEG data showed that in the left hemisphere, N100m to both AO and AV stimuli was largest for the Normal, smaller for Band and smallest for Noise stimuli. In the right hemisphere, Normal and Band AO stimuli elicited N100m responses of quite similar amplitudes, but N100m amplitude to Noise was about half of that. There was a reduction in N100m for the AV compared to the AO conditions. The size of this reduction for each stimulus type was the same in the left hemisphere but graded in the right (being largest for the Normal, smaller for the Band and smallest for the Noise stimuli). The N100m decrease for the Normal stimuli was significantly larger in the right than in the left hemisphere. We suggest that the effect of processing visual speech seen in the right hemisphere likely reflects suppression of the auditory response based on AV cues for place of articulation.

  2. A Bilateral Advantage for Storage in Visual Working Memory

    ERIC Educational Resources Information Center

    Umemoto, Akina; Drew, Trafton; Ester, Edward F.; Awh, Edward

    2010-01-01

    Various studies have demonstrated enhanced visual processing when information is presented across both visual hemifields rather than in a single hemifield (the "bilateral advantage"). For example, Alvarez and Cavanagh (2005) reported that observers were able to track twice as many moving visual stimuli when the tracked items were presented…

  3. The Multisensory Attentional Consequences of Tool Use: A Functional Magnetic Resonance Imaging Study

    PubMed Central

    Holmes, Nicholas P.; Spence, Charles; Hansen, Peter C.; Mackay, Clare E.; Calvert, Gemma A.

    2008-01-01

    Background: Tool use in humans requires that multisensory information is integrated across different locations, from objects seen to be distant from the hand, but felt indirectly at the hand via the tool. We tested the hypothesis that using a simple tool to perceive vibrotactile stimuli results in the enhanced processing of visual stimuli presented at the distal, functional part of the tool. Such a finding would be consistent with a shift of spatial attention to the location where the tool is used. Methodology/Principal Findings: We tested this hypothesis by scanning healthy human participants' brains using functional magnetic resonance imaging, while they used a simple tool to discriminate between target vibrations, accompanied by congruent or incongruent visual distractors, on the same or opposite side to the tool. The attentional hypothesis was supported: BOLD response in occipital cortex, particularly in the right hemisphere lingual gyrus, varied significantly as a function of tool position, increasing contralaterally, and decreasing ipsilaterally to the tool. Furthermore, these modulations occurred despite the fact that participants were repeatedly instructed to ignore the visual stimuli, to respond only to the vibrotactile stimuli, and to maintain visual fixation centrally. In addition, the magnitude of multisensory (visual-vibrotactile) interactions in participants' behavioural responses significantly predicted the BOLD response in occipital cortical areas that were also modulated as a function of both visual stimulus position and tool position. Conclusions/Significance: These results show that using a simple tool to locate and to perceive vibrotactile stimuli is accompanied by a shift of spatial attention to the location where the functional part of the tool is used, resulting in enhanced processing of visual stimuli at that location, and decreased processing at other locations. This was most clearly observed in the right hemisphere lingual gyrus. Such modulations of visual processing may reflect the functional importance of visuospatial information during human tool use. PMID:18958150

  4. Boosting pitch encoding with audiovisual interactions in congenital amusia.

    PubMed

    Albouy, Philippe; Lévêque, Yohana; Hyde, Krista L; Bouchet, Patrick; Tillmann, Barbara; Caclin, Anne

    2015-01-01

    The combination of information across senses can enhance perception, as revealed for example by decreased reaction times or improved stimulus detection. Interestingly, these facilitatory effects have been shown to be maximal when responses to unisensory modalities are weak. The present study investigated whether audiovisual facilitation can be observed in congenital amusia, a music-specific disorder primarily ascribed to impairments of pitch processing. Amusic individuals and their matched controls performed two tasks. In Task 1, they were required to detect auditory, visual, or audiovisual stimuli as rapidly as possible. In Task 2, they were required to detect as accurately and as rapidly as possible a pitch change within an otherwise monotonic 5-tone sequence that was presented either only auditorily (A condition), or simultaneously with a temporally congruent, but otherwise uninformative visual stimulus (AV condition). Results of Task 1 showed that amusics exhibit typical auditory and visual detection, and typical audiovisual integration capacities: both amusics and controls exhibited shorter response times for audiovisual stimuli than for either auditory stimuli or visual stimuli. Results of Task 2 revealed that both groups benefited from simultaneous uninformative visual stimuli to detect pitch changes: accuracy was higher and response times shorter in the AV condition than in the A condition. The audiovisual improvements of response times were observed for different pitch interval sizes depending on the group. These results suggest that both typical listeners and amusic individuals can benefit from multisensory integration to improve their pitch processing abilities and that this benefit varies as a function of task difficulty. These findings constitute the first step towards the perspective to exploit multisensory paradigms to reduce pitch-related deficits in congenital amusia, notably by suggesting that audiovisual paradigms are effective in an appropriate range of unimodal performance. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. An ERP study on whether semantic integration exists in processing ecologically unrelated audio-visual information.

    PubMed

    Liu, Baolin; Meng, Xianyao; Wang, Zhongning; Wu, Guangning

    2011-11-14

    In the present study, we used event-related potentials (ERPs) to examine whether semantic integration occurs for ecologically unrelated audio-visual information. Videos with synchronous audio-visual information were used as stimuli, where the auditory stimuli were sine wave sounds with different sound levels, and the visual stimuli were simple geometric figures with different areas. In the experiment, participants were shown an initial display containing a single shape (drawn from a set of 6 shapes) with a fixed size (14 cm²) simultaneously with a 3500 Hz tone of a fixed intensity (80 dB). Following a short delay, another shape/tone pair was presented and the relationship between the size of the shape and the intensity of the tone varied across trials: in the V+A- condition, a large shape was paired with a soft tone; in the V+A+ condition, a large shape was paired with a loud tone, and so forth. The ERP results revealed that an N400 effect was elicited under the VA- conditions (V+A- and V-A+) as compared to the VA+ conditions (V+A+ and V-A-). This shows that semantic integration can occur when simultaneous, ecologically unrelated auditory and visual stimuli enter the human brain. We consider that this semantic integration is based on the semantic constraint of audio-visual information, which might come from long-term learned associations stored in the human brain and short-term experience of incoming information. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
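
    As a rough illustration of how an N400-style congruity effect like the one above is quantified, the sketch below simulates congruent and incongruent epochs and takes the mean-amplitude difference in a 300-500 ms window. All amplitudes, noise levels, and the sampling rate are invented assumptions; the authors' actual window and pipeline may differ.

```python
# Illustrative sketch (not the authors' pipeline): estimating an N400-like
# congruity effect as the mean-amplitude difference between incongruent (VA-)
# and congruent (VA+) trials in a 300-500 ms post-stimulus window.
import numpy as np

fs = 500                                   # assumed sampling rate (Hz)
t = np.arange(-0.2, 0.8, 1 / fs)           # epoch time axis (s)
rng = np.random.default_rng(0)

def simulate(n_trials, n400_amp):
    """Simulate trials with a Gaussian N400-like negativity plus noise."""
    wave = n400_amp * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
    return wave + rng.normal(0, 2.0, (n_trials, t.size))

congruent = simulate(100, n400_amp=-1.0)     # VA+ trials (V+A+ and V-A-)
incongruent = simulate(100, n400_amp=-4.0)   # VA- trials (V+A- and V-A+)

window = (t >= 0.3) & (t <= 0.5)
effect = incongruent[:, window].mean() - congruent[:, window].mean()
print(f"N400 congruity effect: {effect:.2f} microvolts")
```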

  6. Assessment of Grating Acuity in Infants and Toddlers Using an Electronic Acuity Card: The Dobson Card.

    PubMed

    Mohan, Kathleen M; Miller, Joseph M; Harvey, Erin M; Gerhart, Kimberly D; Apple, Howard P; Apple, Deborah; Smith, Jordana M; Davis, Amy L; Leonard-Green, Tina; Campus, Irene; Dennis, Leslie K

    2016-01-01

    To determine if testing binocular visual acuity in infants and toddlers using the Acuity Card Procedure (ACP) with electronic grating stimuli yields clinically useful data. Participants were infants and toddlers ages 5 to 36.7 months referred by pediatricians due to failed automated vision screening. The ACP was used to test binocular grating acuity. Stimuli were presented on the Dobson Card. The Dobson Card consists of a handheld matte-black plexiglass frame with two flush-mounted tablet computers and is similar in size and form to commercially available printed grating acuity testing stimuli (Teller Acuity Cards II [TACII]; Stereo Optical, Inc., Chicago, IL). On each trial, one tablet displayed a square-wave grating and the other displayed a luminance-matched uniform gray patch. Stimuli were roughly equivalent to the stimuli available in the printed TACII stimuli. After acuity testing, each child received a cycloplegic eye examination. Based on cycloplegic retinoscopy, patients were categorized as having high or low refractive error per American Association for Pediatric Ophthalmology and Strabismus vision screening referral criteria. Mean acuities for high and low refractive error groups were compared using analysis of covariance, controlling for age. Mean visual acuity was significantly poorer in children with high refractive error than in those with low refractive error (P = .015). Electronic stimuli presented using the ACP can yield clinically useful measurements of grating acuity in infants and toddlers. Further research is needed to determine the optimal conditions and procedures for obtaining accurate and clinically useful automated measurements of visual acuity in infants and toddlers. Copyright 2016, SLACK Incorporated.
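
    For readers unfamiliar with the analysis named above, the sketch below shows what an ANCOVA comparing group acuity while controlling for age can look like in practice. All data, column names, and effect sizes are simulated assumptions, not values from the study.

```python
# Minimal ANCOVA sketch: acuity compared between refractive-error groups,
# adjusting for age, on simulated data. Group labels, the logMAR scale,
# and all numbers are assumptions for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 60
age = rng.uniform(5, 37, n)                    # age in months
group = rng.choice(["high_RE", "low_RE"], n)   # refractive-error group
# simulated logMAR acuity: worse (higher) for the high refractive-error group
acuity = 1.2 - 0.01 * age + (group == "high_RE") * 0.15 + rng.normal(0, 0.1, n)

df = pd.DataFrame({"acuity": acuity, "group": group, "age": age})
model = smf.ols("acuity ~ C(group) + age", data=df).fit()
print(model.summary().tables[1])   # group effect adjusted for age
```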

  7. The Role of Temporal Disparity on Audiovisual Integration in Low-Vision Individuals.

    PubMed

    Targher, Stefano; Micciolo, Rocco; Occelli, Valeria; Zampini, Massimiliano

    2017-12-01

    Recent findings have shown that sounds improve visual detection in low vision individuals when pairs of audiovisual stimuli are presented simultaneously and from the same spatial position. The present study investigated the temporal aspects of this audiovisual enhancement effect. Low vision participants were asked to detect the presence of a visual stimulus (yes/no task) presented either alone or together with an auditory stimulus at different stimulus onset asynchronies (SOAs). In the first experiment, the sound was presented either simultaneously with or before the visual stimulus (i.e., SOAs 0, 100, 250, 400 ms). The results show that the presence of a task-irrelevant auditory stimulus produced a significant visual detection enhancement in all the conditions. In the second experiment, the sound was either synchronized with, or randomly preceded/lagged behind, the visual stimulus (i.e., SOAs 0, ±250, ±400 ms). The visual detection enhancement was reduced in magnitude and limited to the synchronous condition and to the condition in which the sound was presented 250 ms before the visual stimulus. Taken together, the evidence of the present study suggests that audiovisual interaction in low vision individuals is strongly modulated by top-down mechanisms.

  8. Seeing music: The perception of melodic 'ups and downs' modulates the spatial processing of visual stimuli.

    PubMed

    Romero-Rivas, Carlos; Vera-Constán, Fátima; Rodríguez-Cuadrado, Sara; Puigcerver, Laura; Fernández-Prieto, Irune; Navarra, Jordi

    2018-05-10

    Musical melodies have "peaks" and "valleys". Although the vertical component of pitch and music is well-known, the mechanisms underlying its mental representation still remain elusive. We show evidence regarding the importance of previous experience with melodies for crossmodal interactions to emerge. The impact of these crossmodal interactions on other perceptual and attentional processes was also studied. Melodies including two tones with different frequency (e.g., E4 and D3) were repeatedly presented during the study. These melodies could either generate strong predictions (e.g., E4-D3-E4-D3-E4-[D3]) or not (e.g., E4-D3-E4-E4-D3-[?]). After the presentation of each melody, the participants had to judge the colour of a visual stimulus that appeared in a position that was, according to the traditional vertical connotations of pitch, either congruent (e.g., high-low-high-low-[up]), incongruent (high-low-high-low-[down]) or unpredicted with respect to the melody. Behavioural and electroencephalographic responses to the visual stimuli were obtained. Congruent visual stimuli elicited faster responses at the end of the experiment than at the beginning. Additionally, incongruent visual stimuli that broke the spatial prediction generated by the melody elicited larger P3b amplitudes (reflecting 'surprise' responses). Our results suggest that the passive (but repeated) exposure to melodies elicits spatial predictions that modulate the processing of other sensory events. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Contingent capture of involuntary visual spatial attention does not differ between normally hearing children and proficient cochlear implant users.

    PubMed

    Kamke, Marc R; Van Luyn, Jeanette; Constantinescu, Gabriella; Harris, Jill

    2014-01-01

    Evidence suggests that deafness-induced changes in visual perception, cognition and attention may compensate for a hearing loss. Such alterations, however, may also negatively influence adaptation to a cochlear implant. This study investigated whether involuntary attentional capture by salient visual stimuli is altered in children who use a cochlear implant. Thirteen experienced implant users (aged 8-16 years) and age-matched normally hearing children were presented with a rapid sequence of simultaneous visual and auditory events. Participants were tasked with detecting numbers presented in a specified color and identifying a change in the tonal frequency whilst ignoring irrelevant visual distractors. Compared to visual distractors that did not possess the target-defining characteristic, target-colored distractors were associated with a decrement in visual performance (response time and accuracy), demonstrating a contingent capture of involuntary attention. Visual distractors did not, however, impair auditory task performance. Importantly, detection performance for the visual and auditory targets did not differ between the groups. These results suggest that proficient cochlear implant users demonstrate normal capture of visuospatial attention by stimuli that match top-down control settings.

  10. Bayesian-based integration of multisensory naturalistic perithreshold stimuli.

    PubMed

    Regenbogen, Christina; Johansson, Emilia; Andersson, Patrik; Olsson, Mats J; Lundström, Johan N

    2016-07-29

    Most studies exploring multisensory integration have used clearly perceivable stimuli. According to the principle of inverse effectiveness, the added neural and behavioral benefit of integrating clear stimuli is reduced in comparison to stimuli with degraded and less salient unisensory information. Traditionally, speed and accuracy measures have been analyzed separately with few studies merging these to gain an understanding of speed-accuracy trade-offs in multisensory integration. In two separate experiments, we assessed multisensory integration of naturalistic audio-visual objects consisting of individually-tailored perithreshold dynamic visual and auditory stimuli, presented within a multiple-choice task, using a Bayesian Hierarchical Drift Diffusion Model that combines response time and accuracy. For both experiments, unisensory stimuli were degraded to reach a 75% identification accuracy level for all individuals and stimuli to promote multisensory binding. In Experiment 1, we subsequently presented uni- and their respective bimodal stimuli followed by a 5-alternative-forced-choice task. In Experiment 2, we controlled for low-level integration and attentional differences. Both experiments demonstrated significant superadditive multisensory integration of bimodal perithreshold dynamic information. We present evidence that the use of degraded sensory stimuli may provide a link between previous findings of inverse effectiveness on a single neuron level and overt behavior. We further suggest that a combined measure of accuracy and reaction time may be a more valid and holistic approach of studying multisensory integration and propose the application of drift diffusion models for studying behavioral correlates as well as brain-behavior relationships of multisensory integration. Copyright © 2015 Elsevier Ltd. All rights reserved.
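
    The drift diffusion framework used above treats accuracy and response time as joint consequences of a single evidence-accumulation process. The simulation below illustrates only that idea; the parameters are invented, and the authors' actual model is Bayesian and hierarchical rather than this simple simulation.

```python
# Conceptual sketch of the drift-diffusion idea: one drift rate jointly
# produces both accuracy and mean RT. Higher drift (e.g., for bimodal
# stimuli) yields faster AND more accurate responses. Parameters are
# illustrative, not estimates from the study.
import numpy as np

def simulate_ddm(drift, boundary=1.0, noise=1.0, dt=0.001, n_trials=500):
    """Simulate accuracy and mean RT for an unbiased two-boundary DDM."""
    rng = np.random.default_rng(2)
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
        correct.append(x >= boundary)
    return np.mean(correct), np.mean(rts)

for label, v in [("unisensory", 0.8), ("bimodal", 1.6)]:
    acc, rt = simulate_ddm(v)
    print(f"{label}: accuracy {acc:.2f}, mean RT {rt:.2f} s")
```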

  11. Enduring critical period plasticity visualized by transcranial flavoprotein imaging in mouse primary visual cortex.

    PubMed

    Tohmi, Manavu; Kitaura, Hiroki; Komagata, Seiji; Kudoh, Masaharu; Shibuki, Katsuei

    2006-11-08

    Experience-dependent plasticity in the visual cortex was investigated using transcranial flavoprotein fluorescence imaging in mice anesthetized with urethane. On- and off-responses in the primary visual cortex were elicited by visual stimuli. Fluorescence responses and field potentials elicited by grating patterns decreased similarly as contrasts of visual stimuli were reduced. Fluorescence responses also decreased as spatial frequency of grating stimuli increased. Compared with intrinsic signal imaging in the same mice, fluorescence imaging showed faster responses with approximately 10 times larger signal changes. Retinotopic maps in the primary visual cortex and area LM were constructed using fluorescence imaging. After monocular deprivation (MD) of 4 d starting from postnatal day 28 (P28), deprived eye responses were suppressed compared with nondeprived eye responses in the binocular zone but not in the monocular zone. Imaging faithfully recapitulated a critical period for plasticity with maximal effects of MD observed around P28 and not in adulthood even under urethane anesthesia. Visual responses were compared before and after MD in the same mice, in which the skull was covered with clear acrylic dental resin. Deprived eye responses decreased after MD, whereas nondeprived eye responses increased. Effects of MD during a critical period were tested 2 weeks after reopening of the deprived eye. Significant ocular dominance plasticity was observed in responses elicited by moving grating patterns, but no long-lasting effect was found in visual responses elicited by light-emitting diode light stimuli. The present results indicate that transcranial flavoprotein fluorescence imaging is a powerful tool for investigating experience-dependent plasticity in the mouse visual cortex.

  12. Aging effects on functional auditory and visual processing using fMRI with variable sensory loading.

    PubMed

    Cliff, Michael; Joyce, Dan W; Lamar, Melissa; Dannhauser, Thomas; Tracy, Derek K; Shergill, Sukhwinder S

    2013-05-01

    Traditionally, studies investigating the functional implications of age-related structural brain alterations have focused on higher cognitive processes; by increasing stimulus load, these studies assess behavioral and neurophysiological performance. In order to understand age-related changes in these higher cognitive processes, it is crucial to examine changes in visual and auditory processes that are the gateways to higher cognitive functions. This study provides evidence for age-related functional decline in visual and auditory processing, and regional alterations in functional brain processing, using non-invasive neuroimaging. Using functional magnetic resonance imaging (fMRI), younger (n=11; mean age=31) and older (n=10; mean age=68) adults were imaged while observing flashing checkerboard images (passive visual stimuli) and hearing word lists (passive auditory stimuli) across varying stimuli presentation rates. Younger adults showed greater overall levels of temporal and occipital cortical activation than older adults for both auditory and visual stimuli. The relative change in activity as a function of stimulus presentation rate showed differences between young and older participants. In visual cortex, the older group showed a decrease in fMRI blood oxygen level dependent (BOLD) signal magnitude as stimulus frequency increased, whereas the younger group showed a linear increase. In auditory cortex, the younger group showed a relative increase as a function of word presentation rate, while older participants showed a relatively stable magnitude of fMRI BOLD response across all rates. When analyzing participants across all ages, only the auditory cortical activation showed a continuous, monotonically decreasing BOLD signal magnitude as a function of age. Our preliminary findings show an age-related decline in demand-related, passive early sensory processing. As stimulus demand increases, visual and auditory cortex do not show increases in activity in older compared to younger people. This may negatively impact on the fidelity of information available to higher cognitive processing. Such evidence may inform future studies focused on cognitive decline in aging. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Threat as a feature in visual semantic object memory.

    PubMed

    Calley, Clifford S; Motes, Michael A; Chiang, H-Sheng; Buhl, Virginia; Spence, Jeffrey S; Abdi, Hervé; Anand, Raksha; Maguire, Mandy; Estevez, Leonardo; Briggs, Richard; Freeman, Thomas; Kraut, Michael A; Hart, John

    2013-08-01

    Threatening stimuli have been found to modulate visual processes related to perception and attention. The present functional magnetic resonance imaging (fMRI) study investigated whether threat modulates visual object recognition of man-made and naturally occurring categories of stimuli. Compared with nonthreatening pictures, threatening pictures of real items elicited larger fMRI BOLD signal changes in medial visual cortices extending inferiorly into the temporo-occipital (TO) "what" pathways. This region showed greater signal changes for threatening items than for nonthreatening items from both the naturally occurring and man-made supraordinate stimulus categories, demonstrating a featural component to these visual processing areas. Two additional loci of signal changes within more lateral inferior TO areas (bilateral BA18 and 19 as well as the right ventral temporal lobe) were detected for a category-feature interaction, with stronger responses to man-made (category) threatening (feature) stimuli than to natural threats. The findings are discussed in terms of the visual recognition system efficiently and rapidly processing groups of items whose detection confers an advantage for survival. Copyright © 2012 Wiley Periodicals, Inc.

  14. Multisensory Motion Perception in 3–4 Month-Old Infants

    PubMed Central

    Nava, Elena; Grassi, Massimo; Brenna, Viola; Croci, Emanuela; Turati, Chiara

    2017-01-01

    Human infants begin very early in life to take advantage of multisensory information by extracting the invariant amodal information that is conveyed redundantly by multiple senses. Here we addressed the question as to whether infants can bind multisensory moving stimuli, and whether this occurs even if the motion produced by the stimuli is only illusory. Three- to 4-month-old infants were presented with two bimodal pairings: visuo-tactile and audio-visual. Visuo-tactile pairings consisted of apparently vertically moving bars (the Barber Pole illusion) moving in either the same or opposite direction as a concurrent tactile stimulus, consisting of strokes given on the infant's back. Audio-visual pairings consisted of the Barber Pole illusion in its visual and auditory versions, the latter giving the impression of a continuously rising or falling pitch. We found that infants were able to discriminate congruently (same direction) vs. incongruently (opposite direction) moving pairs irrespective of modality (Experiment 1). Importantly, we also found that congruently moving visuo-tactile and audio-visual stimuli were preferred over incongruently moving bimodal stimuli (Experiment 2). Our findings suggest that very young infants are able to extract motion as an amodal component and use it to match stimuli that only apparently move in the same direction. PMID:29187829

  15. Neurochemical responses to chromatic and achromatic stimuli in the human visual cortex.

    PubMed

    Bednařík, Petr; Tkáč, Ivan; Giove, Federico; Eberly, Lynn E; Deelchand, Dinesh K; Barreto, Felipe R; Mangia, Silvia

    2018-02-01

    In the present study, we aimed at determining the metabolic responses of the human visual cortex during the presentation of chromatic and achromatic stimuli, known to preferentially activate two separate clusters of neuronal populations (called "blobs" and "interblobs") with distinct sensitivity to color or luminance features. Since blobs and interblobs have different cytochrome-oxidase (COX) content and micro-vascularization levels (i.e., different capacities for glucose oxidation), different functional metabolic responses during chromatic vs. achromatic stimuli may be expected. The stimuli were optimized to evoke a similar load of neuronal activation as measured by the blood oxygenation level dependent (BOLD) contrast. Metabolic responses were assessed using functional ¹H MRS at 7 T in 12 subjects. During both chromatic and achromatic stimuli, we observed the typical increases in glutamate and lactate concentration, and decreases in aspartate and glucose concentration, that are indicative of increased glucose oxidation. However, within the detection sensitivity limits, we did not observe any difference between the metabolic responses elicited by chromatic and achromatic stimuli. We conclude that the higher energy demands of activated blobs and interblobs are supported by similar increases in oxidative metabolism despite the different capacities of these neuronal populations.

  16. Timing the impact of literacy on visual processing

    PubMed Central

    Pegado, Felipe; Comerlato, Enio; Ventura, Fabricio; Jobert, Antoinette; Nakamura, Kimihiro; Buiatti, Marco; Ventura, Paulo; Dehaene-Lambertz, Ghislaine; Kolinsky, Régine; Morais, José; Braga, Lucia W.; Cohen, Laurent; Dehaene, Stanislas

    2014-01-01

    Learning to read requires the acquisition of an efficient visual procedure for quickly recognizing fine print. Thus, reading practice could induce a perceptual learning effect in early vision. Using functional magnetic resonance imaging (fMRI) in literate and illiterate adults, we previously demonstrated an impact of reading acquisition on both high- and low-level occipitotemporal visual areas, but could not resolve the time course of these effects. To clarify whether literacy affects early vs. late stages of visual processing, we measured event-related potentials to various categories of visual stimuli in healthy adults with variable levels of literacy, including completely illiterate subjects, early-schooled literate subjects, and subjects who learned to read in adulthood (ex-illiterates). The stimuli included written letter strings forming pseudowords, on which literacy is expected to have a major impact, as well as faces, houses, tools, checkerboards, and false fonts. To evaluate the precision with which these stimuli were encoded, we studied repetition effects by presenting the stimuli in pairs composed of repeated, mirrored, or unrelated pictures from the same category. The results indicate that reading ability is correlated with a broad enhancement of early visual processing, including increased repetition suppression, suggesting better exemplar discrimination, and increased mirror discrimination, as early as ∼100–150 ms in the left occipitotemporal region. These effects were found with letter strings and false fonts, but also were partially generalized to other visual categories. Thus, learning to read affects the magnitude, precision, and invariance of early visual processing. PMID:25422460

  17. Timing the impact of literacy on visual processing.

    PubMed

    Pegado, Felipe; Comerlato, Enio; Ventura, Fabricio; Jobert, Antoinette; Nakamura, Kimihiro; Buiatti, Marco; Ventura, Paulo; Dehaene-Lambertz, Ghislaine; Kolinsky, Régine; Morais, José; Braga, Lucia W; Cohen, Laurent; Dehaene, Stanislas

    2014-12-09

    Learning to read requires the acquisition of an efficient visual procedure for quickly recognizing fine print. Thus, reading practice could induce a perceptual learning effect in early vision. Using functional magnetic resonance imaging (fMRI) in literate and illiterate adults, we previously demonstrated an impact of reading acquisition on both high- and low-level occipitotemporal visual areas, but could not resolve the time course of these effects. To clarify whether literacy affects early vs. late stages of visual processing, we measured event-related potentials to various categories of visual stimuli in healthy adults with variable levels of literacy, including completely illiterate subjects, early-schooled literate subjects, and subjects who learned to read in adulthood (ex-illiterates). The stimuli included written letter strings forming pseudowords, on which literacy is expected to have a major impact, as well as faces, houses, tools, checkerboards, and false fonts. To evaluate the precision with which these stimuli were encoded, we studied repetition effects by presenting the stimuli in pairs composed of repeated, mirrored, or unrelated pictures from the same category. The results indicate that reading ability is correlated with a broad enhancement of early visual processing, including increased repetition suppression, suggesting better exemplar discrimination, and increased mirror discrimination, as early as ∼ 100-150 ms in the left occipitotemporal region. These effects were found with letter strings and false fonts, but also were partially generalized to other visual categories. Thus, learning to read affects the magnitude, precision, and invariance of early visual processing.

  18. Auditory and visual capture during focused visual attention.

    PubMed

    Koelewijn, Thomas; Bronkhorst, Adelbert; Theeuwes, Jan

    2009-10-01

    It is well known that auditory and visual onsets presented at a particular location can capture a person's visual attention. However, the question of whether such attentional capture disappears when attention is focused endogenously beforehand has not yet been answered. Moreover, previous studies have not differentiated between capture by onsets presented at a nontarget (invalid) location and possible performance benefits occurring when the target location is (validly) cued. In this study, the authors modulated the degree of attentional focus by presenting endogenous cues with varying reliability and by displaying placeholders indicating the precise areas where the target stimuli could occur. By using not only valid and invalid exogenous cues but also neutral cues that provide temporal but no spatial information, they found performance benefits as well as costs when attention is not strongly focused. The benefits disappear when the attentional focus is increased. These results indicate that there is bottom-up capture of visual attention by irrelevant auditory and visual stimuli that cannot be suppressed by top-down attentional control. PsycINFO Database Record (c) 2009 APA, all rights reserved.

  19. Spatial decoupling of targets and flashing stimuli for visual brain-computer interfaces

    NASA Astrophysics Data System (ADS)

    Waytowich, Nicholas R.; Krusienski, Dean J.

    2015-06-01

    Objective. Recently, paradigms using code-modulated visual evoked potentials (c-VEPs) have proven to achieve among the highest information transfer rates for noninvasive brain-computer interfaces (BCIs). One issue with current c-VEP paradigms, and visual-evoked paradigms in general, is that they require direct foveal fixation of the flashing stimuli. These interfaces are often visually unpleasant and can be irritating and fatiguing to the user, thus adversely impacting practical performance. In this study, a novel c-VEP BCI paradigm is presented that attempts to perform spatial decoupling of the targets and flashing stimuli using two distinct concepts: spatial separation and boundary positioning. Approach. For the paradigm, the flashing stimuli form a ring that encompasses the intended non-flashing targets, which are spatially separated from the stimuli. The user fixates on the desired target, which is classified using the changes to the EEG induced by the flashing stimuli located in the non-foveal visual field. Additionally, a subset of targets is also positioned at or near the stimulus boundaries, which decouples targets from direct association with a single stimulus. This allows a greater number of target locations for a fixed number of flashing stimuli. Main results. Results from 11 subjects showed practical classification accuracies for the non-foveal condition, with comparable performance to the direct-foveal condition for longer observation lengths. Online results from 5 subjects confirmed the offline results with an average accuracy across subjects of 95.6% for a 4-target condition. The offline analysis also indicated that targets positioned at or near the boundaries of two stimuli could be classified with the same accuracy as traditional superimposed (non-boundary) targets. Significance. The implications of this research are that c-VEPs can be detected and accurately classified to achieve comparable BCI performance without requiring potentially irritating direct foveation of flashing stimuli. Furthermore, this study shows that it is possible to increase the number of targets beyond the number of stimuli without degrading performance. Given the superior information transfer rate of c-VEP paradigms, these results can lead to the development of more practical and ergonomic BCIs.
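
    The information transfer rates mentioned above are conventionally computed with Wolpaw's formula from the number of targets and the classification accuracy. The sketch below plugs in the 4-target, 95.6% figures reported in the abstract; the 2-second selection window used for the bits-per-minute conversion is an assumption for illustration.

```python
# Back-of-envelope check of BCI performance figures like those reported
# above, using the standard Wolpaw information-transfer-rate formula.
from math import log2

def bits_per_selection(n_targets: int, accuracy: float) -> float:
    """Wolpaw ITR in bits per selection."""
    if accuracy >= 1.0:
        return log2(n_targets)
    return (log2(n_targets)
            + accuracy * log2(accuracy)
            + (1 - accuracy) * log2((1 - accuracy) / (n_targets - 1)))

bits = bits_per_selection(4, 0.956)   # 4 targets at 95.6% accuracy
print(f"{bits:.2f} bits/selection")
# with an assumed (hypothetical) 2-second selection window:
print(f"{bits * 60 / 2:.1f} bits/min")
```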

  20. Different Visual Preference Patterns in Response to Simple and Complex Dynamic Social Stimuli in Preschool-Aged Children with Autism Spectrum Disorders

    PubMed Central

    Shi, Lijuan; Zhou, Yuanyue; Ou, Jianjun; Gong, Jingbo; Wang, Suhong; Cui, Xilong; Lyu, Hailong; Zhao, Jingping; Luo, Xuerong

    2015-01-01

    Eye-tracking studies in young children with autism spectrum disorder (ASD) have shown a visual attention preference for geometric patterns when viewing paired dynamic social images (DSIs) and dynamic geometric images (DGIs). In the present study, eye-tracking of two different paired presentations of DSIs and DGIs was monitored in a group of 13 children aged 4 to 6 years with ASD and 20 chronologically age-matched typically developing children (TDC). The results indicated that compared with the control group, children with ASD attended significantly less to DSIs showing two or more children playing than to similar DSIs showing a single child. Visual attention preference in 4- to 6-year-old children with ASDs, therefore, appears to be modulated by the type of visual stimuli. PMID:25781170

  1. The processing of auditory and visual recognition of self-stimuli.

    PubMed

    Hughes, Susan M; Nicholson, Shevon E

    2010-12-01

    This study examined self-recognition processing in both the auditory and visual modalities by determining how comparable hearing a recording of one's own voice is to seeing a photograph of one's own face. We also investigated whether the simultaneous presentation of auditory and visual self-stimuli would either facilitate or inhibit self-identification. Ninety-one participants completed reaction-time tasks of self-recognition when presented with their own faces, own voices, and combinations of the two. Reaction time and errors made when responding with both the right and left hand were recorded to determine if there were lateralization effects on these tasks. Our findings showed that visual self-recognition for facial photographs appears to be superior to auditory self-recognition for voice recordings. Furthermore, a combined presentation of one's own face and voice appeared to inhibit rather than facilitate self-recognition, and there was a left-hand advantage for reaction time on the combined-presentation tasks. Copyright © 2010 Elsevier Inc. All rights reserved.

  2. Exploring the temporal dynamics of sustained and transient spatial attention using steady-state visual evoked potentials.

    PubMed

    Zhang, Dan; Hong, Bo; Gao, Shangkai; Röder, Brigitte

    2017-05-01

    While the behavioral dynamics as well as the functional networks of sustained and transient attention have been studied extensively, their underlying neural mechanisms have most often been investigated in separate experiments. In the present study, participants performed an audio-visual spatial attention task. They were asked to attend to either the left or the right hemifield and to respond to transient deviant stimuli that were either auditory or visual. Steady-state visual evoked potentials (SSVEPs) elicited by two task-irrelevant pattern-reversing checkerboards, flickering at 10 and 15 Hz in the left and the right hemifields, respectively, were used to continuously monitor the locus of spatial attention. The amplitude and phase of the SSVEPs were extracted for single trials and were analyzed separately. Sustained attention to one hemifield (spatial attention) as well as to the auditory modality (intermodal attention) increased the inter-trial phase locking of the SSVEP responses, whereas briefly presented visual and auditory stimuli decreased the single-trial SSVEP amplitude between 200 and 500 ms post-stimulus. This transient change of the single-trial amplitude was restricted to the SSVEPs elicited by the reversing checkerboard in the spatially attended hemifield and thus might reflect a transient re-orienting of attention towards the brief stimuli. Thus, the present results demonstrate independent, but interacting, neural mechanisms of sustained and transient attentional orienting.
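
    A plausible minimal version of the amplitude and phase-locking analysis described above, run on simulated data: take the FFT bin at each tagging frequency, then compute the mean single-trial amplitude and the inter-trial phase coherence (ITPC) across trials. The recording parameters are assumptions, not values from the study.

```python
# Assumed-pipeline sketch: single-trial SSVEP amplitude and inter-trial
# phase locking at the two tagging frequencies (10 and 15 Hz) via an FFT.
import numpy as np

fs, dur, n_trials = 250, 2.0, 100          # assumed recording parameters
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(3)
# simulate trials containing a 10 Hz SSVEP with roughly consistent phase
trials = (np.sin(2 * np.pi * 10 * t + rng.normal(0, 0.4, (n_trials, 1)))
          + rng.normal(0, 1.0, (n_trials, t.size)))

spectrum = np.fft.rfft(trials, axis=1)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for f in (10.0, 15.0):
    bin_ = np.argmin(np.abs(freqs - f))
    amplitude = np.abs(spectrum[:, bin_]).mean()   # mean single-trial amplitude
    itpc = np.abs(np.exp(1j * np.angle(spectrum[:, bin_])).mean())  # phase locking
    print(f"{f:.0f} Hz: amplitude {amplitude:.1f}, ITPC {itpc:.2f}")
```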

  3. Effects of auditory information on self-motion perception during simultaneous presentation of visual shearing motion

    PubMed Central

    Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu

    2015-01-01

    Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828

  4. The Efficacy of Virtual Reality in Treating Post-traumatic Stress Disorder in U.S. Warfighters Returning from Iraq and Afghanistan Combat Theaters

    DTIC Science & Technology

    2011-11-08

    kinesthetic VR stimuli with patient arousal responses. Treatment consisted of 10 sessions (2x/week) for 5 weeks, and a control group received structured...that provided the treatment therapist control over the visual, auditory, and kinesthetic elements experienced by the participant. The experimental...graded presentation of visual, auditory, and kinesthetic stimuli to stimulate memory recall of traumatic combat events in a safe

  5. Audiovisual semantic interactions between linguistic and nonlinguistic stimuli: The time-courses and categorical specificity.

    PubMed

    Chen, Yi-Chuan; Spence, Charles

    2018-04-30

    We examined the time-courses and categorical specificity of the crossmodal semantic congruency effects elicited by naturalistic sounds and spoken words on the processing of visual pictures (Experiment 1) and printed words (Experiment 2). Auditory cues were presented at 7 different stimulus onset asynchronies (SOAs) with respect to the visual targets, and participants made speeded categorization judgments (living vs. nonliving). Three common effects were observed across 2 experiments: Both naturalistic sounds and spoken words induced a slowly emerging congruency effect when leading by 250 ms or more in the congruent compared with the incongruent condition, and a rapidly emerging inhibitory effect when leading by 250 ms or less in the incongruent condition as opposed to the noise condition. Only spoken words that did not match the visual targets elicited an additional inhibitory effect when leading by 100 ms or when presented simultaneously. Compared with nonlinguistic stimuli, the crossmodal congruency effects associated with linguistic stimuli occurred over a wider range of SOAs and occurred at a more specific level of the category hierarchy (i.e., the basic level) than was required by the task. A comprehensive framework is proposed to provide a dynamic view regarding how meaning is extracted during the processing of visual or auditory linguistic and nonlinguistic stimuli, therefore contributing to our understanding of multisensory semantic processing in humans. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  6. Masking reduces orientation selectivity in rat visual cortex

    PubMed Central

    Alwis, Dasuni S.; Richards, Katrina L.

    2016-01-01

    In visual masking the perception of a target stimulus is impaired by a preceding (forward) or succeeding (backward) mask stimulus. The illusion is of interest because it allows uncoupling of the physical stimulus, its neuronal representation, and its perception. To understand the neuronal correlates of masking, we examined how masks affected the neuronal responses to oriented target stimuli in the primary visual cortex (V1) of anesthetized rats (n = 37). Target stimuli were circular gratings with 12 orientations; mask stimuli were plaids created as a binarized sum of all possible target orientations. Spatially, masks were presented either overlapping or surrounding the target. Temporally, targets and masks were presented for 33 ms, but the stimulus onset asynchrony (SOA) of their relative appearance was varied. For the first time, we examine how spatially overlapping and center-surround masking affect orientation discriminability (rather than visibility) in V1. Regardless of the spatial or temporal arrangement of stimuli, the greatest reductions in firing rate and orientation selectivity occurred for the shortest SOAs. Interestingly, analyses conducted separately for transient and sustained target response components showed that changes in orientation selectivity do not always coincide with changes in firing rate. Given the near-instantaneous reductions observed in orientation selectivity even when target and mask do not spatially overlap, we suggest that monotonic visual masking is explained by a combination of neural integration and lateral inhibition. PMID:27535373
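
    One standard way to quantify the orientation selectivity at issue above is the length of the mean resultant vector in doubled-angle space (equivalently, one minus the circular variance). The sketch below applies it to a hypothetical cell tested with 12 orientations, as in the study; the tuning curve and response values are invented.

```python
# Common orientation-selectivity index (OSI): the magnitude of the
# response-weighted mean vector over doubled orientation angles.
# 1 = perfectly selective, 0 = unselective. Responses here are invented.
import numpy as np

theta = np.deg2rad(np.arange(0, 180, 15))   # 12 orientations, as in the study
# hypothetical tuned cell peaking at 45 degrees
responses = 5 + 20 * np.exp(np.cos(2 * (theta - np.deg2rad(45))) / 0.5)

def osi(responses, theta):
    """Orientation selectivity as mean-resultant-vector length."""
    return np.abs(np.sum(responses * np.exp(2j * theta)) / np.sum(responses))

print(f"tuned cell OSI:   {osi(responses, theta):.2f}")
print(f"untuned cell OSI: {osi(np.ones_like(theta), theta):.2f}")
```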

  7. Discriminative stimuli that control instrumental tobacco-seeking by human smokers also command selective attention.

    PubMed

    Hogarth, Lee; Dickinson, Anthony; Duka, Theodora

    2003-08-01

    Incentive salience theory states that acquired bias in selective attention for stimuli associated with tobacco-smoke reinforcement controls the selective performance of tobacco-seeking and tobacco-taking behaviour. To test this theory, we assessed whether a stimulus that had acquired control of a tobacco-seeking response in a discrimination procedure would command the focus of visual attention in a subsequent test phase. Smokers received discrimination training in which an instrumental key-press response was followed by tobacco-smoke reinforcement when one visual discriminative stimulus (S+) was present, but not when another stimulus (S-) was present. The skin conductance response to the S+ and S- assessed whether Pavlovian conditioning to the S+ had taken place. In a subsequent test phase, the S+ and S- were presented in the dot-probe task and the allocation of the focus of visual attention to these stimuli was measured. Participants learned to perform the instrumental tobacco-seeking response selectively in the presence of the S+ relative to the S-, and showed a greater skin conductance response to the S+ than to the S-. In the subsequent test phase, participants allocated the focus of visual attention to the S+ in preference to the S-. Correlation analysis revealed that the visual attentional bias for the S+ was positively associated with the number of times the S+ had been paired with tobacco-smoke in training, with the skin conductance response to the S+, and with subjective craving to smoke. Furthermore, increased exposure to tobacco-smoke in the natural environment was associated with reduced discrimination learning. These data demonstrate that discriminative stimuli signaling that tobacco-smoke reinforcement is available acquire the capacity to command selective attention and to elicit instrumental tobacco-seeking behaviour.

  8. The nature-disorder paradox: A perceptual study on how nature is disorderly yet aesthetically preferred.

    PubMed

    Kotabe, Hiroki P; Kardan, Omid; Berman, Marc G

    2017-08-01

    Natural environments have powerful aesthetic appeal linked to their capacity for psychological restoration. In contrast, disorderly environments are aesthetically aversive, and have various detrimental psychological effects. But in our research, we have repeatedly found that natural environments are perceptually disorderly. What could explain this paradox? We present 3 competing hypotheses: the aesthetic preference for naturalness is more powerful than the aesthetic aversion to disorder (the nature-trumps-disorder hypothesis); disorder is trivial to aesthetic preference in natural contexts (the harmless-disorder hypothesis); and disorder is aesthetically preferred in natural contexts (the beneficial-disorder hypothesis). Utilizing novel methods of perceptual study and diverse stimuli, we rule in the nature-trumps-disorder hypothesis and rule out the harmless-disorder and beneficial-disorder hypotheses. In examining perceptual mechanisms, we find evidence that high-level scene semantics are both necessary and sufficient for the nature-trumps-disorder effect. Necessity is evidenced by the effect disappearing in experiments utilizing only low-level visual stimuli (i.e., where scene semantics have been removed) and experiments utilizing a rapid-scene-presentation procedure that obscures scene semantics. Sufficiency is evidenced by the effect reappearing in experiments utilizing noun stimuli which remove low-level visual features. Furthermore, we present evidence that the interaction of scene semantics with low-level visual features amplifies the nature-trumps-disorder effect: the effect is weaker both when statistically adjusting for quantified low-level visual features and when using noun stimuli which remove low-level visual features. These results have implications for psychological theories bearing on the joint influence of low- and high-level perceptual inputs on affect and cognition, as well as for aesthetic design. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  9. Unseen stimuli modulate conscious visual experience: evidence from inter-hemispheric summation.

    PubMed

    de Gelder, B; Pourtois, G; van Raamsdonk, M; Vroomen, J; Weiskrantz, L

    2001-02-12

    Emotional facial expression can be discriminated despite extensive lesions of striate cortex. Here we report differential performance in recognizing facial stimuli in the intact visual field depending on the simultaneous presentation of congruent or incongruent stimuli in the blind field. Three experiments were based on inter-hemispheric summation. Redundant stimulation in the blind field led to shorter latencies for stimulus detection in the intact field. Recognition of the expression of a half-face in the intact field was faster when the other half of the face, presented to the blind field, had a congruent expression. Finally, responses to the expression of whole faces presented to the intact field were delayed when incongruent facial expressions were presented in the blind field. These results indicate that the neuro-anatomical pathways (extra-striate cortical and sub-cortical) sustaining inter-hemispheric summation can operate in the absence of striate cortex.

  10. A Neural Theory of Visual Attention: Bridging Cognition and Neurophysiology

    ERIC Educational Resources Information Center

    Bundesen, Claus; Habekost, Thomas; Kyllingsbaek, Soren

    2005-01-01

    A neural theory of visual attention (NTVA) is presented. NTVA is a neural interpretation of C. Bundesen's (1990) theory of visual attention (TVA). In NTVA, visual processing capacity is distributed across stimuli by dynamic remapping of receptive fields of cortical cells such that more processing resources (cells) are devoted to behaviorally…

  11. Learned value and object perception: Accelerated perception or biased decisions?

    PubMed

    Rajsic, Jason; Perera, Harendri; Pratt, Jay

    2017-02-01

    Learned value is known to bias visual search toward valued stimuli. However, some uncertainty exists regarding the stage of visual processing that is modulated by learned value. Here, we directly tested the effect of learned value on preattentive processing using temporal order judgments. Across four experiments, we imbued some stimuli with high value and some with low value, using a nonmonetary reward task. In Experiment 1, we replicated the value-driven distraction effect, validating our nonmonetary reward task. Experiment 2 showed that high-value stimuli, but not low-value stimuli, exhibit a prior-entry effect. Experiment 3, which reversed the temporal order judgment task (i.e., reporting which stimulus came second), showed no prior-entry effect, indicating that although a response bias may be present for high-value stimuli, they are still reported as appearing earlier. However, Experiment 4, using a simultaneity judgment task, showed no shift in temporal perception. Overall, our results support the conclusion that learned value biases perceptual decisions about valued stimuli without speeding preattentive stimulus processing.
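
    The prior-entry effect reported above is typically quantified by fitting a psychometric function to the temporal order judgments and reading off the shift in the point of subjective simultaneity (PSS). The sketch below does this on simulated data; the SOA values, lapse rates, and the -20 ms shift are assumptions for illustration.

```python
# Sketch of quantifying prior entry in a TOJ task: fit a cumulative
# Gaussian to "valued stimulus seen first" judgments across SOAs.
# Positive SOA = valued item physically first; a PSS below 0 means the
# valued item is judged first even when it lags, i.e., prior entry.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

soas = np.array([-100, -50, -25, 0, 25, 50, 100])   # ms
# simulated proportion of "valued first" responses with a -20 ms PSS shift
p_first = norm.cdf(soas, loc=-20, scale=40) * 0.96 + 0.02

def psychometric(soa, pss, slope):
    return norm.cdf(soa, loc=pss, scale=slope)

(pss, slope), _ = curve_fit(psychometric, soas, p_first, p0=(0.0, 50.0))
print(f"PSS = {pss:.1f} ms (prior entry if reliably shifted below 0)")
```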

  12. Backward masked fearful faces enhance contralateral occipital cortical activity for visual targets within the spotlight of attention

    PubMed Central

    Reinke, Karen S.; LaMontagne, Pamela J.; Habib, Reza

    2011-01-01

    Spatial attention has been argued to be adaptive by enhancing the processing of visual stimuli within the ‘spotlight of attention’. We previously reported that crude threat cues (backward masked fearful faces) facilitate spatial attention through a network of brain regions consisting of the amygdala, anterior cingulate and contralateral visual cortex. However, results from previous functional magnetic resonance imaging (fMRI) dot-probe studies have been inconclusive regarding a fearful face-elicited contralateral modulation of visual targets. Here, we tested the hypothesis that the capture of spatial attention by crude threat cues would facilitate processing of subsequently presented visual stimuli within the masked fearful face-elicited ‘spotlight of attention’ in the contralateral visual cortex. Participants performed a backward masked fearful face dot-probe task while brain activity was measured with fMRI. Masked fearful face left visual field trials enhanced activity for spatially congruent targets in the right superior occipital gyrus, fusiform gyrus and lateral occipital complex, while masked fearful face right visual field trials enhanced activity in the left middle occipital gyrus. These data indicate that crude threat elicited spatial attention enhances the processing of subsequent visual stimuli in contralateral occipital cortex, which may occur by lowering neural activation thresholds in this retinotopic location. PMID:20702500

  13. Working memory deficits in boys with attention deficit/hyperactivity disorder (ADHD): An examination of orthographic coding and episodic buffer processes.

    PubMed

    Alderson, R Matt; Kasper, Lisa J; Patros, Connor H G; Hudec, Kristen L; Tarle, Stephanie J; Lea, Sarah E

    2015-01-01

    The episodic buffer component of working memory was examined in children with attention deficit/hyperactivity disorder (ADHD) and typically developing peers (TD). Thirty-two children (ADHD = 16, TD = 16) completed three versions of a phonological working memory task that varied with regard to stimulus presentation modality (auditory, visual, or dual auditory and visual), as well as a visuospatial task. Children with ADHD experienced the largest magnitude working memory deficits when phonological stimuli were presented via a unimodal, auditory format. Their performance improved during visual and dual modality conditions but remained significantly below the performance of children in the TD group. In contrast, the TD group did not exhibit performance differences between the auditory- and visual-phonological conditions but recalled significantly more stimuli during the dual-phonological condition. Furthermore, relative to TD children, children with ADHD recalled disproportionately fewer phonological stimuli as set sizes increased, regardless of presentation modality. Finally, an examination of working memory components indicated that the largest magnitude between-group difference was associated with the central executive. Collectively, these findings suggest that ADHD-related working memory deficits reflect a combination of impaired central executive and phonological storage/rehearsal processes, as well as an impaired ability to benefit from bound multimodal information processed by the episodic buffer.

  14. Working memory enhances visual perception: evidence from signal detection analysis.

    PubMed

    Soto, David; Wriglesworth, Alice; Bahrami-Balani, Alex; Humphreys, Glyn W

    2010-03-01

    We show that perceptual sensitivity to visual stimuli can be modulated by matches between the contents of working memory (WM) and stimuli in the visual field. Observers were presented with an object cue (to hold in WM or to merely attend) and subsequently had to identify a brief target presented within a colored shape. The cue could be re-presented in the display, where it surrounded either the target (on valid trials) or a distractor (on invalid trials). Perceptual identification of the target, as indexed by A', was enhanced on valid relative to invalid trials but only when the cue was kept in WM. There was minimal effect of the cue when it was merely attended and not kept in WM. Verbal cues were as effective as visual cues at modulating perceptual identification, and the effects were independent of the effects of target saliency. Matches to the contents of WM influenced perceptual sensitivity even under conditions that minimized competition for selecting the target. WM cues were also effective when targets were less likely to fall in a repeated WM stimulus than in other stimuli in the search display. There were no effects of WM on decisional criteria, in contrast to sensitivity. The findings suggest that reentrant feedback from WM can affect early stages of perceptual processing.
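
    The A' statistic used above is a standard nonparametric sensitivity index computed from hit and false-alarm rates. A minimal implementation of the conventional formula follows; the example rates are invented, not values from the study.

```python
# Nonparametric sensitivity index A' from hit and false-alarm rates
# (Stanislaw & Todorov-style formula): 0.5 = chance, 1.0 = perfect.
def a_prime(hit: float, fa: float) -> float:
    if hit >= fa:
        return 0.5 + ((hit - fa) * (1 + hit - fa)) / (4 * hit * (1 - fa))
    return 0.5 - ((fa - hit) * (1 + fa - hit)) / (4 * fa * (1 - hit))

# invented example rates for valid vs. invalid cue trials
print(f"valid trials:   A' = {a_prime(0.80, 0.20):.3f}")
print(f"invalid trials: A' = {a_prime(0.70, 0.25):.3f}")
```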

  15. Effects of ambient stimuli on measures of behavioral state and microswitch use in adults with profound multiple impairments.

    PubMed

    Murphy, Kathleen M; Saunders, Muriel D; Saunders, Richard R; Olswang, Lesley B

    2004-01-01

    The effects of different types and amounts of environmental stimuli (visual and auditory) on microswitch use and behavioral states of three individuals with profound multiple impairments were examined. Each individual's switch use and behavioral states were measured under three setting conditions: natural stimuli (typical visual and auditory stimuli in a recreational situation), reduced visual stimuli, and reduced visual and auditory stimuli. Results demonstrated differential switch use in all participants across the varying environmental setting conditions. No consistent effects of environmental condition were observed on behavioral state. Predominant behavioral state scores and switch use did not systematically covary for any participant. Results suggest the importance of considering environmental stimuli in relation to switch use when working with individuals with profound multiple impairments.

  16. Neural Representations of Natural and Scrambled Movies Progressively Change from Rat Striate to Temporal Cortex

    PubMed Central

    Vinken, Kasper; Van den Bergh, Gert; Vermaercke, Ben; Op de Beeck, Hans P.

    2016-01-01

    In recent years, the rodent has come forward as a candidate model for investigating higher level visual abilities such as object vision. This view has been backed up substantially by evidence from behavioral studies that show rats can be trained to express visual object recognition and categorization capabilities. However, almost no studies have investigated the functional properties of rodent extrastriate visual cortex using stimuli that target object vision, leaving a gap compared with the primate literature. Therefore, we recorded single-neuron responses along a proposed ventral pathway in rat visual cortex to investigate hallmarks of primate neural object representations such as preference for intact versus scrambled stimuli and category-selectivity. We presented natural movies containing a rat or no rat as well as their phase-scrambled versions. Population analyses showed increased dissociation in representations of natural versus scrambled stimuli along the targeted stream, but without a clear preference for natural stimuli. Along the measured cortical hierarchy the neural response seemed to be driven increasingly by features that are not V1-like and destroyed by phase-scrambling. However, there was no evidence for category selectivity for the rat versus nonrat distinction. Together, these findings provide insights about differences and commonalities between rodent and primate visual cortex. PMID:27146315

  17. Realigning Thunder and Lightning: Temporal Adaptation to Spatiotemporally Distant Events

    PubMed Central

    Navarra, Jordi; Fernández-Prieto, Irune; Garcia-Morera, Joel

    2013-01-01

    The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises in these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment (SJ) task was administered right afterwards. Temporal realignment between vision and audition was observed, in both Experiments 1 and 2, when comparing the participants' SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events). PMID:24391928

  18. Visual-Spatial Orienting in Autism.

    ERIC Educational Resources Information Center

    Wainwright, J. Ann; Bryson, Susan E.

    1996-01-01

    Visual-spatial orienting in 10 high-functioning adults with autism was examined. Compared to controls, subjects responded faster to central than to lateral stimuli, and showed a left visual field advantage for stimulus detection only when laterally presented. Abnormalities in attention shifting and coordination of attentional and motor systems are…

  19. Visually evoked changes in the rat retinal blood flow measured with Doppler optical coherence tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Tan, Bingyao; Mason, Erik; MacLellan, Ben; Bizheva, Kostadinka

    2017-02-01

Visually evoked changes of retinal blood flow can serve as an important research tool for investigating eye diseases such as glaucoma and diabetic retinopathy. In this study we used a combined, research-grade, high-resolution Doppler OCT+ERG system to study changes in retinal blood flow (RBF) and retinal neuronal activity in response to visual stimuli of different intensities, durations, and types (flicker vs. single flash). Specifically, we used white-light single-flash stimuli of 10 ms and 200 ms, and flicker stimuli of 1 s and 2 s with a 20% duty cycle. The study was conducted in vivo in pigmented rats. Both single-flash (SF) and flicker stimuli caused an increase in RBF. The 10 ms SF stimulus did not generate any consistent measurable response, while the 200 ms SF of the same intensity generated a 4% change in RBF, peaking 1.5 s after stimulus onset. Single-flash stimuli produced a 2× smaller change in RBF and a 30% earlier RBF peak response compared with flicker stimuli of the same intensity and duration. Doubling the intensity of the SF or flicker stimuli increased the RBF peak magnitude by 1.5×. Shortening the flicker stimulus duration by 2× increased the RBF recovery rate by 2×, but had no effect on the rate of RBF change from baseline to peak.

  20. Visual categorization of natural movies by rats.

    PubMed

    Vinken, Kasper; Vermaercke, Ben; Op de Beeck, Hans P

    2014-08-06

Visual categorization of complex, natural stimuli has been studied for some time in human and nonhuman primates. Recent interest in the rodent as a model for visual perception, including higher-level functional specialization, leads to the question of how rodents would perform on a categorization task using natural stimuli. To answer this question, rats were trained in a two-alternative forced choice task to discriminate movies containing rats from movies containing other objects and from scrambled movies (ordinate-level categorization). Subsequently, transfer to novel, previously unseen stimuli was tested, followed by a series of control probes. The results show that the animals are capable of acquiring a decision rule by abstracting common features from natural movies to generalize categorization to new stimuli. Control probes demonstrate that they did not use single low-level features, such as motion energy or (local) luminance. Significant generalization was even present with stationary snapshots from untrained movies. The variability within and between training and test stimuli, the complexity of natural movies, and the control experiments and analyses all suggest that a more high-level rule based on more complex stimulus features than local luminance-based cues was used to classify the novel stimuli. In conclusion, natural stimuli can be used to probe ordinate-level categorization in rats. Copyright © 2014 the authors.

  1. Lower pitch is larger, yet falling pitches shrink.

    PubMed

    Eitan, Zohar; Schupak, Asi; Gotler, Alex; Marks, Lawrence E

    2014-01-01

    Experiments using diverse paradigms, including speeded discrimination, indicate that pitch and visually-perceived size interact perceptually, and that higher pitch is congruent with smaller size. While nearly all of these studies used static stimuli, here we examine the interaction of dynamic pitch and dynamic size, using Garner's speeded discrimination paradigm. Experiment 1 examined the interaction of continuous rise/fall in pitch and increase/decrease in object size. Experiment 2 examined the interaction of static pitch and size (steady high/low pitches and large/small visual objects), using an identical procedure. Results indicate that static and dynamic auditory and visual stimuli interact in opposite ways. While for static stimuli (Experiment 2), higher pitch is congruent with smaller size (as suggested by earlier work), for dynamic stimuli (Experiment 1), ascending pitch is congruent with growing size, and descending pitch with shrinking size. In addition, while static stimuli (Experiment 2) exhibit both congruence and Garner effects, dynamic stimuli (Experiment 1) present congruence effects without Garner interference, a pattern that is not consistent with prevalent interpretations of Garner's paradigm. Our interpretation of these results focuses on effects of within-trial changes on processing in dynamic tasks and on the association of changes in apparent size with implied changes in distance. Results suggest that static and dynamic stimuli can differ substantially in their cross-modal mappings, and may rely on different processing mechanisms.
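    As a concrete illustration of the two standard Garner-paradigm indices mentioned above, here is a minimal sketch with invented mean RTs: the congruence effect contrasts incongruent with congruent trials, while Garner interference contrasts filtering blocks (the irrelevant dimension varies) with baseline blocks (the irrelevant dimension is fixed).

    ```python
    # Hypothetical sketch: congruence effect and Garner interference from mean RTs.
    rt = {  # mean reaction times in ms, purely illustrative
        ("baseline", "congruent"): 452.0, ("baseline", "incongruent"): 474.0,
        ("filtering", "congruent"): 461.0, ("filtering", "incongruent"): 489.0,
    }

    congruent = (rt[("baseline", "congruent")] + rt[("filtering", "congruent")]) / 2
    incongruent = (rt[("baseline", "incongruent")] + rt[("filtering", "incongruent")]) / 2
    baseline = (rt[("baseline", "congruent")] + rt[("baseline", "incongruent")]) / 2
    filtering = (rt[("filtering", "congruent")] + rt[("filtering", "incongruent")]) / 2

    print(f"congruence effect:   {incongruent - congruent:.1f} ms")   # incongruent cost
    print(f"Garner interference: {filtering - baseline:.1f} ms")      # filtering cost
    ```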

  2. Bullets versus burgers: is it threat or relevance that captures attention?

    PubMed

    de Oca, Beatrice M; Black, Alison A

    2013-01-01

    Previous studies have found that potentially dangerous stimuli are better at capturing attention than neutral stimuli, a finding sometimes called the threat superiority effect. However, non-threatening stimuli also capture attention in many studies of visual attention. In Experiment 1, the relevance superiority effect was tested with a visual search task comparing detection times for threatening stimuli (guns), pleasant but motivationally relevant stimuli (food), and neutral stimuli (flowers and chairs). Gun targets were detected more rapidly than both types of neutral targets, whereas food targets were detected more quickly than the neutral chair targets only. Guns were detected more rapidly than food. In Experiment 2, threatening targets (guns and snakes), pleasant but motivationally relevant targets (money and food), and neutral targets (trees and couches) were all presented with the same neutral distractors (cactus and pots) in order to control for the valence of the distractor stimulus across the three categories of target stimuli. Threatening and pleasant target categories facilitated attention relative to neutral targets. The results support the view that both threatening and pleasant pictures can be detected more rapidly than neutral targets.

  3. Electrophysiological evidence for biased competition in V1 for fear expressions.

    PubMed

    West, Greg L; Anderson, Adam A K; Ferber, Susanne; Pratt, Jay

    2011-11-01

    When multiple stimuli are concurrently displayed in the visual field, they must compete for neural representation at the processing expense of their contemporaries. This biased competition is thought to begin as early as primary visual cortex, and can be driven by salient low-level stimulus features. Stimuli important for an organism's survival, such as facial expressions signaling environmental threat, might be similarly prioritized at this early stage of visual processing. In the present study, we used ERP recordings from striate cortex to examine whether fear expressions can bias the competition for neural representation at the earliest stage of retinotopic visuo-cortical processing when in direct competition with concurrently presented visual information of neutral valence. We found that within 50 msec after stimulus onset, information processing in primary visual cortex is biased in favor of perceptual representations of fear at the expense of competing visual information (Experiment 1). Additional experiments confirmed that the facial display's emotional content rather than low-level features is responsible for this prioritization in V1 (Experiment 2), and that this competition is reliant on a face's upright canonical orientation (Experiment 3). These results suggest that complex stimuli important for an organism's survival can indeed be prioritized at the earliest stage of cortical processing at the expense of competing information, with competition possibly beginning before encoding in V1.

  4. Infants' Visual Localization of Visual and Auditory Targets.

    ERIC Educational Resources Information Center

    Bechtold, A. Gordon; And Others

    This study is an investigation of 2-month-old infants' abilities to visually localize visual and auditory peripheral stimuli. Each subject (N=40) was presented with 50 trials; 25 of these visual and 25 auditory. The infant was placed in a semi-upright infant seat positioned 122 cm from the center speaker of an arc formed by five loudspeakers. At…

  5. Specific excitatory connectivity for feature integration in mouse primary visual cortex

    PubMed Central

    Molina-Luna, Patricia; Roth, Morgane M.

    2017-01-01

    Local excitatory connections in mouse primary visual cortex (V1) are stronger and more prevalent between neurons that share similar functional response features. However, the details of how functional rules for local connectivity shape neuronal responses in V1 remain unknown. We hypothesised that complex responses to visual stimuli may arise as a consequence of rules for selective excitatory connectivity within the local network in the superficial layers of mouse V1. In mouse V1 many neurons respond to overlapping grating stimuli (plaid stimuli) with highly selective and facilitatory responses, which are not simply predicted by responses to single gratings presented alone. This complexity is surprising, since excitatory neurons in V1 are considered to be mainly tuned to single preferred orientations. Here we examined the consequences for visual processing of two alternative connectivity schemes: in the first case, local connections are aligned with visual properties inherited from feedforward input (a ‘like-to-like’ scheme specifically connecting neurons that share similar preferred orientations); in the second case, local connections group neurons into excitatory subnetworks that combine and amplify multiple feedforward visual properties (a ‘feature binding’ scheme). By comparing predictions from large scale computational models with in vivo recordings of visual representations in mouse V1, we found that responses to plaid stimuli were best explained by assuming feature binding connectivity. Unlike under the like-to-like scheme, selective amplification within feature-binding excitatory subnetworks replicated experimentally observed facilitatory responses to plaid stimuli; explained selective plaid responses not predicted by grating selectivity; and was consistent with broad anatomical selectivity observed in mouse V1. Our results show that visual feature binding can occur through local recurrent mechanisms without requiring feedforward convergence, and that such a mechanism is consistent with visual responses and cortical anatomy in mouse V1. PMID:29240769

  6. Near-instant automatic access to visually presented words in the human neocortex: neuromagnetic evidence.

    PubMed

    Shtyrov, Yury; MacGregor, Lucy J

    2016-05-24

    Rapid and efficient processing of external information by the brain is vital to survival in a highly dynamic environment. The key channel humans use to exchange information is language, but the neural underpinnings of its processing are still not fully understood. We investigated the spatio-temporal dynamics of neural access to word representations in the brain by scrutinising the brain's activity elicited in response to psycholinguistically, visually and phonologically matched groups of familiar words and meaningless pseudowords. Stimuli were briefly presented on the visual-field periphery to experimental participants whose attention was occupied with a non-linguistic visual feature-detection task. The neural activation elicited by these unattended orthographic stimuli was recorded using multi-channel whole-head magnetoencephalography, and the timecourse of lexically-specific neuromagnetic responses was assessed in sensor space as well as at the level of cortical sources, estimated using individual MR-based distributed source reconstruction. Our results demonstrate a neocortical signature of automatic near-instant access to word representations in the brain: activity in the perisylvian language network characterised by specific activation enhancement for familiar words, starting as early as ~70 ms after the onset of unattended word stimuli and underpinned by temporal and inferior-frontal cortices.

  7. Fear Processing in Dental Phobia during Crossmodal Symptom Provocation: An fMRI Study

    PubMed Central

    Maslowski, Nina Isabel; Wittchen, Hans-Ulrich; Lueken, Ulrike

    2014-01-01

While previous studies successfully identified the core neural substrates of the animal subtype of specific phobia, only limited and inconsistent research is available for dental phobia. These findings might partly relate to the fact that, typically, visual stimuli were employed. The current study aimed to investigate the influence of stimulus modality on neural fear processing in dental phobia. Thirteen dental phobics (DP) and thirteen healthy controls (HC) attended a block-design functional magnetic resonance imaging (fMRI) symptom provocation paradigm encompassing both visual and auditory stimuli. Drill sounds and matched neutral sinus tones served as auditory stimuli, and dentist scenes and matched neutral videos as visual stimuli. Group comparisons showed increased activation in the insula, anterior cingulate cortex, orbitofrontal cortex, and thalamus in DP compared to HC during auditory but not visual stimulation. In contrast, no differential autonomic reactions were observed in DP. Present results are largely comparable to brain areas identified in animal phobia, but also point towards a potential downregulation of autonomic outflow by neural fear circuits in this disorder. Findings enlarge our knowledge about the neural correlates of dental phobia and may help to understand the neural underpinnings of the clinical and physiological characteristics of the disorder. PMID:24738049

  8. Prevailing theories of consciousness are challenged by novel cross-modal associations acquired between subliminal stimuli.

    PubMed

    Scott, Ryan B; Samaha, Jason; Chrisley, Ron; Dienes, Zoltan

    2018-06-01

    While theories of consciousness differ substantially, the 'conscious access hypothesis', which aligns consciousness with the global accessibility of information across cortical regions, is present in many of the prevailing frameworks. This account holds that consciousness is necessary to integrate information arising from independent functions such as the specialist processing required by different senses. We directly tested this account by evaluating the potential for associative learning between novel pairs of subliminal stimuli presented in different sensory modalities. First, pairs of subliminal stimuli were presented and then their association assessed by examining the ability of the first stimulus to prime classification of the second. In Experiments 1-4 the stimuli were word-pairs consisting of a male name preceding either a creative or uncreative profession. Participants were subliminally exposed to two name-profession pairs where one name was paired with a creative profession and the other an uncreative profession. A supraliminal task followed requiring the timed classification of one of those two professions. The target profession was preceded by either the name with which it had been subliminally paired (concordant) or the alternate name (discordant). Experiment 1 presented stimuli auditorily, Experiment 2 visually, and Experiment 3 presented names auditorily and professions visually. All three experiments revealed the same inverse priming effect with concordant test pairs associated with significantly slower classification judgements. Experiment 4 sought to establish if learning would be more efficient with supraliminal stimuli and found evidence that a different strategy is adopted when stimuli are consciously perceived. Finally, Experiment 5 replicated the unconscious cross-modal association achieved in Experiment 3 utilising non-linguistic stimuli. The results demonstrate the acquisition of novel cross-modal associations between stimuli which are not consciously perceived and thus challenge the global access hypothesis and those theories embracing it. Copyright © 2018. Published by Elsevier B.V.

  9. Subliminal and supraliminal processing of reward-related stimuli in anorexia nervosa.

    PubMed

    Boehm, I; King, J A; Bernardoni, F; Geisler, D; Seidel, M; Ritschel, F; Goschke, T; Haynes, J-D; Roessner, V; Ehrlich, S

    2018-04-01

    Previous studies have highlighted the role of the brain reward and cognitive control systems in the etiology of anorexia nervosa (AN). In an attempt to disentangle the relative contribution of these systems to the disorder, we used functional magnetic resonance imaging (fMRI) to investigate hemodynamic responses to reward-related stimuli presented both subliminally and supraliminally in acutely underweight AN patients and age-matched healthy controls (HC). fMRI data were collected from a total of 35 AN patients and 35 HC, while they passively viewed subliminally and supraliminally presented streams of food, positive social, and neutral stimuli. Activation patterns of the group × stimulation condition × stimulus type interaction were interrogated to investigate potential group differences in processing different stimulus types under the two stimulation conditions. Moreover, changes in functional connectivity were investigated using generalized psychophysiological interaction analysis. AN patients showed a generally increased response to supraliminally presented stimuli in the inferior frontal junction (IFJ), but no alterations within the reward system. Increased activation during supraliminal stimulation with food stimuli was observed in the AN group in visual regions including superior occipital gyrus and the fusiform gyrus/parahippocampal gyrus. No group difference was found with respect to the subliminal stimulation condition and functional connectivity. Increased IFJ activation in AN during supraliminal stimulation may indicate hyperactive cognitive control, which resonates with clinical presentation of excessive self-control in AN patients. Increased activation to food stimuli in visual regions may be interpreted in light of an attentional food bias in AN.

  10. High-performance execution of psychophysical tasks with complex visual stimuli in MATLAB

    PubMed Central

    Asaad, Wael F.; Santhanam, Navaneethan; McClellan, Steven

    2013-01-01

    Behavioral, psychological, and physiological experiments often require the ability to present sensory stimuli, monitor and record subjects' responses, interface with a wide range of devices, and precisely control the timing of events within a behavioral task. Here, we describe our recent progress developing an accessible and full-featured software system for controlling such studies using the MATLAB environment. Compared with earlier reports on this software, key new features have been implemented to allow the presentation of more complex visual stimuli, increase temporal precision, and enhance user interaction. These features greatly improve the performance of the system and broaden its applicability to a wider range of possible experiments. This report describes these new features and improvements, current limitations, and quantifies the performance of the system in a real-world experimental setting. PMID:23034363
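    The report quantifies the timing performance of a MATLAB-based system; purely as a language-neutral illustration (the Python sketch below is not the described software), this is the kind of measurement involved: timestamp every intended frame deadline and summarize the inter-frame jitter.

    ```python
    # Hypothetical sketch: quantifying presentation-timing jitter for a 60 Hz loop.
    import time
    import statistics

    target_interval = 1 / 60          # intended frame period in seconds
    timestamps = []
    next_deadline = time.perf_counter()
    for _ in range(300):              # ~5 s of simulated frame deadlines
        next_deadline += target_interval
        while time.perf_counter() < next_deadline:
            pass                      # busy-wait keeps timing tight
        timestamps.append(time.perf_counter())

    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    print(f"mean interval: {statistics.mean(intervals) * 1000:.3f} ms, "
          f"jitter (SD): {statistics.stdev(intervals) * 1000:.3f} ms")
    ```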

  11. Evaluation of Postural Control in Patients with Glaucoma Using a Virtual Reality Environment.

    PubMed

    Diniz-Filho, Alberto; Boer, Erwin R; Gracitelli, Carolina P B; Abe, Ricardo Y; van Driel, Nienke; Yang, Zhiyong; Medeiros, Felipe A

    2015-06-01

    To evaluate postural control using a dynamic virtual reality environment and the relationship between postural metrics and history of falls in patients with glaucoma. Cross-sectional study. The study involved 42 patients with glaucoma with repeatable visual field defects on standard automated perimetry (SAP) and 38 control healthy subjects. Patients underwent evaluation of postural stability by a force platform during presentation of static and dynamic visual stimuli on stereoscopic head-mounted goggles. The dynamic visual stimuli presented rotational and translational ecologically valid peripheral background perturbations. Postural stability was also tested in a completely dark field to assess somatosensory and vestibular contributions to postural control. History of falls was evaluated by a standard questionnaire. Torque moments around the center of foot pressure on the force platform were measured, and the standard deviations of the torque moments (STD) were calculated as a measurement of postural stability and reported in Newton meters (Nm). The association with history of falls was investigated using Poisson regression models. Age, gender, body mass index, severity of visual field defect, best-corrected visual acuity, and STD on dark field condition were included as confounding factors. Patients with glaucoma had larger overall STD than controls during both translational (5.12 ± 2.39 Nm vs. 3.85 ± 1.82 Nm, respectively; P = 0.005) and rotational stimuli (5.60 ± 3.82 Nm vs. 3.93 ± 2.07 Nm, respectively; P = 0.022). Postural metrics obtained during dynamic visual stimuli performed better in explaining history of falls compared with those obtained in static and dark field condition. In the multivariable model, STD values in the mediolateral direction during translational stimulus were significantly associated with a history of falls in patients with glaucoma (incidence rate ratio, 1.85; 95% confidence interval, 1.30-2.63; P = 0.001). The study presented and validated a novel paradigm for evaluation of balance control in patients with glaucoma on the basis of the assessment of postural reactivity to dynamic visual stimuli using a virtual reality environment. The newly developed metrics were associated with a history of falls and may help to provide a better understanding of balance control in patients with glaucoma. Copyright © 2015 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
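    A minimal sketch of the reported analysis style, in Python with simulated data (the variable names and coefficients are assumptions, not the study's values): fall counts are modeled with Poisson regression on the mediolateral sway standard deviation, and the incidence rate ratio (IRR) is the exponentiated coefficient.

    ```python
    # Hypothetical sketch: Poisson regression of fall counts on sway SD (Nm).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 80
    sway_std = rng.gamma(shape=4.0, scale=1.2, size=n)       # simulated sway SD, Nm
    falls = rng.poisson(lam=np.exp(-1.5 + 0.3 * sway_std))   # simulated fall counts

    X = sm.add_constant(sway_std)
    model = sm.GLM(falls, X, family=sm.families.Poisson()).fit()
    irr = np.exp(model.params[1])                            # IRR per 1 Nm increase
    print(f"IRR per 1 Nm increase in sway SD: {irr:.2f}")
    ```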

  12. Evaluation of Postural Control in Glaucoma Patients Using a Virtual Reality Environment

    PubMed Central

    Diniz-Filho, Alberto; Boer, Erwin R.; Gracitelli, Carolina P. B.; Abe, Ricardo Y.; van Driel, Nienke; Yang, Zhiyong; Medeiros, Felipe A.

    2015-01-01

    Purpose To evaluate postural control using a dynamic virtual reality environment and the relationship between postural metrics and history of falls in glaucoma patients. Design Cross-sectional study. Participants The study involved 42 glaucoma patients with repeatable visual field defects on standard automated perimetry (SAP) and 38 control healthy subjects. Methods Patients underwent evaluation of postural stability by a force platform during presentation of static and dynamic visual stimuli on stereoscopic head-mounted goggles. The dynamic visual stimuli presented rotational and translational ecologically valid peripheral background perturbations. Postural stability was also tested in a completely dark field to assess somatosensory and vestibular contributions to postural control. History of falls was evaluated by a standard questionnaire. Main Outcome Measures Torque moments around the center of foot pressure on the force platform were measured and the standard deviations (STD) of these torque moments were calculated as a measurement of postural stability and reported in Newton meter (Nm). The association with history of falls was investigated using Poisson regression models. Age, gender, body mass index, severity of visual field defect, best-corrected visual acuity, and STD on dark field condition were included as confounding factors. Results Glaucoma patients had larger overall STD than controls during both translational (5.12 ± 2.39 Nm vs. 3.85 ± 1.82 Nm, respectively; P = 0.005) as well as rotational stimuli (5.60 ± 3.82 Nm vs. 3.93 ± 2.07 Nm, respectively; P = 0.022). Postural metrics obtained during dynamic visual stimuli performed better in explaining history of falls compared to those obtained in static and dark field condition. In the multivariable model, STD values in the mediolateral direction during translational stimulus were significantly associated with history of falls in glaucoma patients (incidence-rate ratio = 1.85; 95% CI: 1.30 – 2.63; P = 0.001). Conclusions The study presented and validated a novel paradigm for evaluation of balance control in glaucoma patients based on the assessment of postural reactivity to dynamic visual stimuli using a virtual reality environment. The newly developed metrics were associated with history of falls and may help to provide a better understanding of balance control in glaucoma patients. PMID:25892017

  13. Visual-somatosensory integration in aging: Does stimulus location really matter?

    PubMed Central

    MAHONEY, JEANNETTE R.; WANG, CUILING; DUMAS, KRISTINA; HOLTZER, ROEE

    2014-01-01

Individuals are constantly bombarded by sensory stimuli across multiple modalities that must be integrated efficiently. Multisensory integration (MSI) is said to be governed by stimulus properties including space, time, and magnitude. While there is a paucity of research detailing MSI in aging, we have demonstrated that older adults reveal the greatest reaction time (RT) benefit when presented with simultaneous visual-somatosensory (VS) stimuli. To our knowledge, the differential RT benefit of visual and somatosensory stimuli presented within and across spatial hemifields has not been investigated in aging. Eighteen older adults (Mean = 74 years; 11 female), who were determined to be non-demented and without medical or psychiatric conditions that may affect their performance, participated in this study. Participants received eight randomly presented stimulus conditions (four unisensory and four multisensory) and were instructed to make speeded foot-pedal responses as soon as they detected any stimulation, regardless of stimulus type and location of unisensory inputs. Results from a linear mixed effect model, adjusted for speed of processing and other covariates, revealed that RTs to all multisensory pairings were significantly faster than those elicited to averaged constituent unisensory conditions (p < 0.01). Similarly, race model violation did not differ based on unisensory spatial location (p = 0.41). In summary, older adults demonstrate significant VS multisensory RT effects to stimuli both within and across spatial hemifields. PMID:24698637
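    The "race model violation" above refers to Miller's race-model inequality; the sketch below (simulated RTs, not the study's data) shows the standard test: if the multisensory RT distribution exceeds the sum of the unisensory RT distributions at any latency, a parallel race of independent channels cannot explain the multisensory speed-up.

    ```python
    # Hypothetical sketch: Miller's race-model inequality on simulated RTs.
    import numpy as np

    def ecdf(sample, t):
        """Empirical cumulative distribution of RTs evaluated at times t."""
        return np.searchsorted(np.sort(sample), t, side="right") / len(sample)

    rng = np.random.default_rng(1)
    rt_v  = rng.normal(420, 60, 200)   # visual-only RTs (ms), simulated
    rt_s  = rng.normal(430, 60, 200)   # somatosensory-only RTs, simulated
    rt_vs = rng.normal(360, 55, 200)   # multisensory RTs, simulated faster

    t = np.linspace(250, 600, 100)
    bound = np.minimum(ecdf(rt_v, t) + ecdf(rt_s, t), 1.0)   # race-model bound
    violation = ecdf(rt_vs, t) - bound
    print(f"max race-model violation: {violation.max():.3f} (positive = violation)")
    ```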

  14. The threshold for conscious report: Signal loss and response bias in visual and frontal cortex.

    PubMed

    van Vugt, Bram; Dagnino, Bruno; Vartak, Devavrat; Safaai, Houman; Panzeri, Stefano; Dehaene, Stanislas; Roelfsema, Pieter R

    2018-05-04

    Why are some visual stimuli consciously detected, whereas others remain subliminal? We investigated the fate of weak visual stimuli in the visual and frontal cortex of awake monkeys trained to report stimulus presence. Reported stimuli were associated with strong sustained activity in the frontal cortex, and frontal activity was weaker and quickly decayed for unreported stimuli. Information about weak stimuli could be lost at successive stages en route from the visual to the frontal cortex, and these propagation failures were confirmed through microstimulation of area V1. Fluctuations in response bias and sensitivity during perception of identical stimuli were traced back to prestimulus brain-state markers. A model in which stimuli become consciously reportable when they elicit a nonlinear ignition process in higher cortical areas explained our results. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
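    Sensitivity and response bias in such detection tasks are conventionally summarized with signal detection theory; a minimal sketch with hypothetical trial counts:

    ```python
    # Hypothetical sketch: d' (sensitivity) and criterion c (bias) from hit and
    # false-alarm rates, using the standard z-transform definitions.
    from scipy.stats import norm

    hits, misses = 78, 22     # invented counts for stimulus-present trials
    fas, crs = 12, 88         # invented counts for stimulus-absent trials

    h = hits / (hits + misses)
    f = fas / (fas + crs)
    d_prime = norm.ppf(h) - norm.ppf(f)
    criterion = -0.5 * (norm.ppf(h) + norm.ppf(f))
    print(f"d' = {d_prime:.2f}, criterion c = {criterion:.2f}")
    ```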

  15. Effects of auditory and visual modalities in recall of words.

    PubMed

    Gadzella, B M; Whitehead, D A

    1975-02-01

Ten experimental conditions were used to study the effects of auditory and visual (printed words, uncolored and colored pictures) modalities and their various combinations with college students. A recall paradigm was employed in which subjects responded in a written test. Analysis of the data showed that the auditory modality was superior to the visual (picture) modalities but not significantly different from the visual (printed words) modality. Within the visual modalities, printed words were superior to colored pictures. Generally, recall for conditions with multiple modes of stimulus representation was significantly higher than for conditions with single modes. Multiple modalities, consisting of two or three modes, did not differ significantly from each other. It was concluded that any two modalities of the stimuli presented simultaneously were just as effective as three in recall of stimulus words.

  16. The Effects of Visual Stimuli on the Spoken Narrative Performance of School-Age African American Children

    ERIC Educational Resources Information Center

    Mills, Monique T.

    2015-01-01

    Purpose: This study investigated the fictional narrative performance of school-age African American children across 3 elicitation contexts that differed in the type of visual stimulus presented. Method: A total of 54 children in Grades 2 through 5 produced narratives across 3 different visual conditions: no visual, picture sequence, and single…

  17. Automatic Guidance of Visual Attention from Verbal Working Memory

    ERIC Educational Resources Information Center

    Soto, David; Humphreys, Glyn W.

    2007-01-01

    Previous studies have shown that visual attention can be captured by stimuli matching the contents of working memory (WM). Here, the authors assessed the nature of the representation that mediates the guidance of visual attention from WM. Observers were presented with either verbal or visual primes (to hold in memory, Experiment 1; to verbalize,…

  18. Sustained Splits of Attention within versus across Visual Hemifields Produce Distinct Spatial Gain Profiles.

    PubMed

    Walter, Sabrina; Keitel, Christian; Müller, Matthias M

    2016-01-01

    Visual attention can be focused concurrently on two stimuli at noncontiguous locations while intermediate stimuli remain ignored. Nevertheless, behavioral performance in multifocal attention tasks falters when attended stimuli fall within one visual hemifield as opposed to when they are distributed across left and right hemifields. This "different-hemifield advantage" has been ascribed to largely independent processing capacities of each cerebral hemisphere in early visual cortices. Here, we investigated how this advantage influences the sustained division of spatial attention. We presented six isoeccentric light-emitting diodes (LEDs) in the lower visual field, each flickering at a different frequency. Participants attended to two LEDs that were spatially separated by an intermediate LED and responded to synchronous events at to-be-attended LEDs. Task-relevant pairs of LEDs were either located in the same hemifield ("within-hemifield" conditions) or separated by the vertical meridian ("across-hemifield" conditions). Flicker-driven brain oscillations, steady-state visual evoked potentials (SSVEPs), indexed the allocation of attention to individual LEDs. Both behavioral performance and SSVEPs indicated enhanced processing of attended LED pairs during "across-hemifield" relative to "within-hemifield" conditions. Moreover, SSVEPs demonstrated effective filtering of intermediate stimuli in "across-hemifield" condition only. Thus, despite identical physical distances between LEDs of attended pairs, the spatial profiles of gain effects differed profoundly between "across-hemifield" and "within-hemifield" conditions. These findings corroborate that early cortical visual processing stages rely on hemisphere-specific processing capacities and highlight their limiting role in the concurrent allocation of visual attention to multiple locations.
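    Frequency tagging of this kind can be illustrated with a short simulation (the flicker frequencies and amplitudes below are invented): each LED's SSVEP appears as a spectral peak at its own flicker frequency, and the peak amplitude indexes the attention allocated to that LED.

    ```python
    # Hypothetical sketch: reading out SSVEP amplitudes at the tagged frequencies.
    import numpy as np

    fs, dur = 500.0, 4.0                          # sampling rate (Hz), epoch length (s)
    t = np.arange(0, dur, 1 / fs)
    tags = [8.0, 10.0, 12.0, 15.0, 20.0, 24.0]    # one flicker frequency per LED
    amps = [1.0, 0.2, 0.2, 1.0, 0.2, 0.2]         # attended LEDs get larger amplitude
    eeg = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(amps, tags))
    eeg += np.random.default_rng(2).normal(0, 1.0, t.size)   # measurement noise

    spectrum = np.abs(np.fft.rfft(eeg)) / t.size
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    for f in tags:
        amp = spectrum[np.argmin(np.abs(freqs - f))]
        print(f"{f:5.2f} Hz -> SSVEP amplitude {amp:.3f}")
    ```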

  19. Massively parallel neural circuits for stereoscopic color vision: encoding, decoding and identification.

    PubMed

    Lazar, Aurel A; Slutskiy, Yevgeniy B; Zhou, Yiyin

    2015-03-01

    Past work demonstrated how monochromatic visual stimuli could be faithfully encoded and decoded under Nyquist-type rate conditions. Color visual stimuli were then traditionally encoded and decoded in multiple separate monochromatic channels. The brain, however, appears to mix information about color channels at the earliest stages of the visual system, including the retina itself. If information about color is mixed and encoded by a common pool of neurons, how can colors be demixed and perceived? We present Color Video Time Encoding Machines (Color Video TEMs) for encoding color visual stimuli that take into account a variety of color representations within a single neural circuit. We then derive a Color Video Time Decoding Machine (Color Video TDM) algorithm for color demixing and reconstruction of color visual scenes from spikes produced by a population of visual neurons. In addition, we formulate Color Video Channel Identification Machines (Color Video CIMs) for functionally identifying color visual processing performed by a spiking neural circuit. Furthermore, we derive a duality between TDMs and CIMs that unifies the two and leads to a general theory of neural information representation for stereoscopic color vision. We provide examples demonstrating that a massively parallel color visual neural circuit can be first identified with arbitrary precision and its spike trains can be subsequently used to reconstruct the encoded stimuli. We argue that evaluation of the functional identification methodology can be effectively and intuitively performed in the stimulus space. In this space, a signal reconstructed from spike trains generated by the identified neural circuit can be compared to the original stimulus. Copyright © 2014 Elsevier Ltd. All rights reserved.
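    As a simplified illustration of the time-encoding idea underlying TEMs (an integrate-and-fire sketch under assumed parameters, not the paper's full color-video circuit): the neuron integrates the signal plus a bias and emits a spike time whenever a threshold is reached, so the spike train carries the stimulus.

    ```python
    # Hypothetical sketch: integrate-and-fire time encoding of a toy signal.
    import numpy as np

    def iaf_tem(x, dt, bias=1.5, kappa=1.0, delta=0.02):
        """Encode signal x (sampled at step dt) into spike times t_k."""
        spikes, v = [], 0.0
        for k, xk in enumerate(x):
            v += dt * (xk + bias) / kappa    # integrate input plus bias
            if v >= delta:                   # threshold crossing -> spike
                spikes.append(k * dt)
                v -= delta                   # reset by subtracting the threshold
        return np.array(spikes)

    dt = 1e-4
    t = np.arange(0, 0.2, dt)
    x = 0.5 * np.sin(2 * np.pi * 30 * t)     # toy band-limited stimulus
    print(f"{len(iaf_tem(x, dt))} spikes encode {t[-1]:.2f} s of signal")
    ```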

  20. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    PubMed

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common for all three tasks within each modality or interaction of processing task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of processing irrelevant speech presumably distracting the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.

  1. A bilateral advantage in controlling access to visual short-term memory.

    PubMed

    Holt, Jessica L; Delvenne, Jean-François

    2014-01-01

Recent research on visual short-term memory (VSTM) has revealed the existence of a bilateral field advantage (BFA; i.e., better memory when the items are distributed across the two visual fields than when they are presented in the same hemifield) for spatial location and bar orientation, but not for color (Delvenne, 2005; Umemoto, Drew, Ester, & Awh, 2010). Here, we investigated whether a BFA in VSTM is constrained by attentional selective processes. It has indeed been previously suggested that the BFA may be a general feature of selective attention (Alvarez & Cavanagh, 2005; Delvenne, 2005). Therefore, the present study examined whether VSTM for color benefits from bilateral presentation if attentional selective processes are particularly engaged. Participants completed a color change detection task whereby target stimuli were presented either across both hemifields or within one single hemifield. In order to engage attentional selective processes, some trials contained irrelevant stimuli that needed to be ignored. Targets were selected based on spatial locations (Experiment 1) or on a salient feature (Experiment 2). In both cases, the results revealed a BFA only when irrelevant stimuli were presented among the targets. Overall, the findings strongly suggest that attentional selective processes at encoding can constrain whether a BFA is observed in VSTM.

  2. Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses

    PubMed Central

    Molloy, Katharine; Griffiths, Timothy D.; Lavie, Nilli

    2015-01-01

    Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying “inattentional deafness”—the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼100 ms from tone onset) auditory evoked activity in superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 “awareness” response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness. SIGNIFICANCE STATEMENT The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity. Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in sensory processing of auditory stimuli, resulting in inattentional deafness. The dynamic “push–pull” pattern of load effects on visual and auditory processing furthers our understanding of both the neural mechanisms of attention and of cross-modal effects across visual and auditory processing. These results also offer an explanation for many previous failures to find cross-modal effects in experiments where the visual load effects may not have coincided directly with auditory sensory processing. PMID:26658858

  3. A crossmodal crossover: opposite effects of visual and auditory perceptual load on steady-state evoked potentials to irrelevant visual stimuli.

    PubMed

    Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B

    2012-07-16

    Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another is less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another. Copyright © 2012 Elsevier Inc. All rights reserved.

  4. Linking state regulation, brain laterality, and self-reported attention-deficit/hyperactivity disorder (ADHD) symptoms in adults.

    PubMed

    Mohamed, Saleh M H; Börger, Norbert A; Geuze, Reint H; van der Meere, Jaap J

    2016-10-01

Many clinical studies have shown that the performance of subjects with attention-deficit/hyperactivity disorder (ADHD) is impaired when stimuli are presented at a slow rate compared to a medium or fast rate. According to the cognitive-energetic model, this finding may reflect difficulty in allocating sufficient effort to regulate the motor activation state. Other studies have shown that the left hemisphere is particularly responsible for keeping humans motivated and allocating sufficient effort to complete their tasks. This leads to the prediction that poor effort allocation might be associated with impaired left-hemisphere functioning in ADHD. So far, this prediction has not been directly tested, which is the aim of the present study. Seventy-seven adults with various scores on the Conners' Adult ADHD Rating Scale performed a lateralized lexical decision task in three conditions, with stimuli presented at a fast, a medium, or a slow rate. Left-hemisphere functioning was measured in terms of visual field advantage (better performance for the right than for the left visual field). All subjects showed an increased right visual field advantage for word processing at the slow presentation rate compared to the fast and medium rates. Higher ADHD scores were related to a reduced right visual field advantage at the slow rate only. The present findings suggest that ADHD symptomatology is associated with less involvement of the left hemisphere when extra effort allocation is needed to optimize a low motor activation state.
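    The visual field advantage used here is a simple difference score; a toy computation with hypothetical accuracies:

    ```python
    # Hypothetical sketch: right-visual-field advantage per presentation rate.
    accuracy = {  # proportion correct, purely illustrative
        "fast":   {"RVF": 0.82, "LVF": 0.78},
        "medium": {"RVF": 0.84, "LVF": 0.79},
        "slow":   {"RVF": 0.88, "LVF": 0.76},
    }
    for rate, acc in accuracy.items():
        print(f"{rate:6s}: RVF advantage = {acc['RVF'] - acc['LVF']:+.2f}")
    ```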

  5. Brain correlates of automatic visual change detection.

    PubMed

    Cléry, H; Andersson, F; Fonlupt, P; Gomot, M

    2013-07-15

    A number of studies support the presence of visual automatic detection of change, but little is known about the brain generators involved in such processing and about the modulation of brain activity according to the salience of the stimulus. The study presented here was designed to locate the brain activity elicited by unattended visual deviant and novel stimuli using fMRI. Seventeen adult participants were presented with a passive visual oddball sequence while performing a concurrent visual task. Variations in BOLD signal were observed in the modality-specific sensory cortex, but also in non-specific areas involved in preattentional processing of changing events. A degree-of-deviance effect was observed, since novel stimuli elicited more activity in the sensory occipital regions and at the medial frontal site than small changes. These findings could be compared to those obtained in the auditory modality and might suggest a "general" change detection process operating in several sensory modalities. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. Processing of form stimuli presented unilaterally in humans, chimpanzees (Pan troglodytes), and monkeys (Macaca mulatta)

    NASA Technical Reports Server (NTRS)

    Hopkins, William D.; Washburn, David A.; Rumbaugh, Duane M.

    1990-01-01

    Visual forms were unilaterally presented using a video-task paradigm to ten humans, chimpanzees, and two rhesus monkeys to determine whether hemispheric advantages existed in the processing of these stimuli. Both accuracy and reaction time served as dependent measures. For the chimpanzees, a significant right hemisphere advantage was found within the first three test sessions. The humans and monkeys failed to show a hemispheric advantage as determined by accuracy scores. Analysis of reaction time data revealed a significant left hemisphere advantage for the monkeys. A visual half-field x block interaction was found for the chimpanzees, with a significant left visual field advantage in block two, whereas a right visual field advantage was found in block four. In the human subjects, a left visual field advantage was found in block three when they used their right hands to respond. The results are discussed in relation to recent reports of hemispheric advantages for nonhuman primates.

  7. Object-based attentional selection modulates anticipatory alpha oscillations

    PubMed Central

    Knakker, Balázs; Weiss, Béla; Vidnyánszky, Zoltán

    2015-01-01

Visual cortical alpha oscillations are involved in attentional gating of incoming visual information. It has been shown that spatial and feature-based attentional selection result in increased alpha oscillations over the cortical regions representing sensory input originating from the unattended visual field and task-irrelevant visual features, respectively. However, whether attentional gating in the case of object-based selection is also associated with alpha oscillations has not been investigated before. Here we measured anticipatory electroencephalography (EEG) alpha oscillations while participants were cued to attend to foveal face or word stimuli, the processing of which is known to have right and left hemispheric lateralization, respectively. The results revealed that in the case of simultaneously displayed, overlapping face and word stimuli, attending to the words led to increased power of parieto-occipital alpha oscillations over the right hemisphere as compared to when faces were attended. This object category-specific modulation of the hemispheric lateralization of anticipatory alpha oscillations was maintained during sustained attentional selection of sequentially presented face and word stimuli. These results imply that in the case of object-based attentional selection—similarly to spatial and feature-based attention—gating of visual information processing might involve visual cortical alpha oscillations. PMID:25628554

  8. 3D graphics, virtual reality, and motion-onset visual evoked potentials in neurogaming.

    PubMed

    Beveridge, R; Wilson, S; Coyle, D

    2016-01-01

A brain-computer interface (BCI) offers movement-free control of a computer application and is achieved by reading and translating the cortical activity of the brain into semantic control signals. Motion-onset visual evoked potentials (mVEP) are neural potentials employed in BCIs and occur when motion-related stimuli are attended visually. mVEP dynamics are correlated with the position and timing of the moving stimuli. To investigate the feasibility of utilizing the mVEP paradigm with video games of various graphical complexities, including those of commercial quality, we conducted three studies over four separate sessions comparing the performance of classifying five mVEP responses under variations in graphical complexity and style, in-game distractions, and display parameters surrounding the mVEP stimuli. To investigate the feasibility of utilizing contemporary presentation modalities in neurogaming, one of the studies compared mVEP classification performance when stimuli were presented using the Oculus Rift virtual reality headset. Results from 31 independent subjects were analyzed offline. The results show classification performances of up to 90%, with variations in graphical complexity having limited effect on mVEP performance, thus demonstrating the feasibility of using the mVEP paradigm within BCI-based neurogaming. © 2016 Elsevier B.V. All rights reserved.
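    mVEP classification in such BCIs is typically a supervised learning problem over post-motion-onset EEG epochs; the sketch below (simulated features; LDA is our choice of a common classifier, not necessarily the one used in the study) illustrates the offline evaluation style.

    ```python
    # Hypothetical sketch: cross-validated classification of simulated mVEP epochs.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    n_epochs, n_features = 200, 64              # epochs x (channels * time samples)
    X = rng.normal(0, 1, (n_epochs, n_features))
    y = rng.integers(0, 2, n_epochs)            # 1 = attended motion stimulus
    X[y == 1, :8] += 0.8                        # inject a simulated mVEP component

    acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
    print(f"cross-validated accuracy: {acc:.2f}")
    ```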

  9. Multimodal cues drive host-plant assessment in Asian citrus psyllid (Diaphorina citri)

    USDA-ARS?s Scientific Manuscript database

    Asian citrus psyllid (Diaphorina citri) transmits the causal agent of Huanglongbing, a devastating disease of citrus trees. In this study, we measured behavioral responses of D. citri to combinations of visual, olfactory, and gustatory stimuli in test arenas. Stimuli were presented to the psyllids ...

  10. Brain-computer interface on the basis of EEG system Encephalan

    NASA Astrophysics Data System (ADS)

    Maksimenko, Vladimir; Badarin, Artem; Nedaivozov, Vladimir; Kirsanov, Daniil; Hramov, Alexander

    2018-04-01

We propose a brain-computer interface (BCI) for estimating the brain's response to presented visual tasks. The proposed BCI is based on the Encephalan-EEGR-19/26 EEG recorder (Medicom MTD, Russia) supplemented by special in-house acquisition software. The BCI was tested in experimental sessions in which subjects perceived bistable visual stimuli and classified them according to their interpretation. We subjected participants to different external conditions and observed a significant decrease in the response associated with perceiving the bistable visual stimuli when a distraction was present. Based on these results, we propose that the BCI can be used to estimate human alertness while solving tasks that require substantial visual attention.

  11. Brief Communication: visual-field superiority as a function of stimulus type and content: further evidence.

    PubMed

    Basu, Anamitra; Mandal, Manas K

    2004-07-01

    The present study examined visual-field advantage as a function of presentation mode (unilateral, bilateral), stimulus structure (facial, lexical), and stimulus content (emotional, neutral). The experiment was conducted in a split visual-field paradigm using a JAVA-based computer program with recognition accuracy as the dependent measure. Unilaterally, rather than bilaterally, presented stimuli were significantly better recognized. Words were significantly better recognized than faces in the right visual-field; the difference was nonsignificant in the left visual-field. Emotional content elicited left visual-field and neutral content elicited right visual-field advantages. Copyright Taylor and Francis Inc.

  12. Visual laterality in belugas (Delphinapterus leucas) and Pacific white-sided dolphins (Lagenorhynchus obliquidens) when viewing familiar and unfamiliar humans.

    PubMed

    Yeater, Deirdre B; Hill, Heather M; Baus, Natalie; Farnell, Heather; Kuczaj, Stan A

    2014-11-01

    Lateralization of cognitive processes and motor functions has been demonstrated in a number of species, including humans, elephants, and cetaceans. For example, bottlenose dolphins (Tursiops truncatus) have exhibited preferential eye use during a variety of cognitive tasks. The present study investigated the possibility of visual lateralization in 12 belugas (Delphinapterus leucas) and six Pacific white-sided dolphins (Lagenorhynchus obliquidens) located at two separate marine mammal facilities. During free swim periods, the belugas and Pacific white-sided dolphins were presented a familiar human, an unfamiliar human, or no human during 10-15 min sessions. Session videos were coded for gaze duration, eye presentation at approach, and eye preference while viewing each stimulus. Although we did not find any clear group level lateralization, we found individual left eye lateralized preferences related to social stimuli for most belugas and some Pacific white-sided dolphins. Differences in gaze durations were also observed. The majority of individual belugas had longer gaze durations for unfamiliar rather than familiar stimuli. These results suggest that lateralization occurs during visual processing of human stimuli in belugas and Pacific white-sided dolphins and that these species can distinguish between familiar and unfamiliar humans.

  13. Visual and auditory perception in preschool children at risk for dyslexia.

    PubMed

    Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina

    2014-11-01

Recently, there has been renewed interest in the perceptual problems of dyslexics. A polemic research issue in this area has been the nature of the perception deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out in adult and child literates; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared preschool children with and without risk for dyslexia in auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that the visual as well as the auditory perception of children at risk for dyslexia is impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than that of children without risk for dyslexia in the temporal tasks. There were no differences between groups in auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptual processing affected both linguistic and nonlinguistic stimuli. Our conclusion is that children at risk for dyslexia show auditory and visual perceptual deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems, and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptual deficits are not the consequence of failing to learn to read; thus, these findings support the theory of a temporal processing deficit. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Enhanced Visual Cortical Activation for Emotional Stimuli is Preserved in Patients with Unilateral Amygdala Resection

    PubMed Central

    Edmiston, E. Kale; McHugo, Maureen; Dukic, Mildred S.; Smith, Stephen D.; Abou-Khalil, Bassel; Eggers, Erica

    2013-01-01

    Emotionally arousing pictures induce increased activation of visual pathways relative to emotionally neutral images. A predominant model for the preferential processing and attention to emotional stimuli posits that the amygdala modulates sensory pathways through its projections to visual cortices. However, recent behavioral studies have found intact perceptual facilitation of emotional stimuli in individuals with amygdala damage. To determine the importance of the amygdala to modulations in visual processing, we used functional magnetic resonance imaging to examine visual cortical blood oxygenation level-dependent (BOLD) signal in response to emotionally salient and neutral images in a sample of human patients with unilateral medial temporal lobe resection that included the amygdala. Adults with right (n = 13) or left (n = 5) medial temporal lobe resections were compared with demographically matched healthy control participants (n = 16). In the control participants, both aversive and erotic images produced robust BOLD signal increases in bilateral primary and secondary visual cortices relative to neutral images. Similarly, all patients with amygdala resections showed enhanced visual cortical activations to erotic images both ipsilateral and contralateral to the lesion site. All but one of the amygdala resection patients showed similar enhancements to aversive stimuli and there were no significant group differences in visual cortex BOLD responses in patients compared with controls for either aversive or erotic images. Our results indicate that neither the right nor left amygdala is necessary for the heightened visual cortex BOLD responses observed during emotional stimulus presentation. These data challenge an amygdalo-centric model of emotional modulation and suggest that non-amygdalar processes contribute to the emotional modulation of sensory pathways. PMID:23825407

  15. Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs

    PubMed Central

    ten Oever, Sanne; Sack, Alexander T.; Wheat, Katherine L.; Bien, Nina; van Atteveldt, Nienke

    2013-01-01

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We found that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, which seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception. PMID:23805110

  16. Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs.

    PubMed

    Ten Oever, Sanne; Sack, Alexander T; Wheat, Katherine L; Bien, Nina; van Atteveldt, Nienke

    2013-01-01

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We found that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, which seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception.

  17. Stimulus Load and Oscillatory Activity in Higher Cortex

    PubMed Central

    Kornblith, Simon; Buschman, Timothy J.; Miller, Earl K.

    2016-01-01

    Exploring and exploiting a rich visual environment requires perceiving, attending, and remembering multiple objects simultaneously. Recent studies have suggested that this mental “juggling” of multiple objects may depend on oscillatory neural dynamics. We recorded local field potentials from the lateral intraparietal area, frontal eye fields, and lateral prefrontal cortex while monkeys maintained variable numbers of visual stimuli in working memory. Behavior suggested independent processing of stimuli in each hemifield. During stimulus presentation, higher-frequency power (50–100 Hz) increased with the number of stimuli (load) in the contralateral hemifield, whereas lower-frequency power (8–50 Hz) decreased with the total number of stimuli in both hemifields. During the memory delay, lower-frequency power increased with contralateral load. Load effects on higher frequencies during stimulus encoding and lower frequencies during the memory delay were stronger when neural activity also signaled the location of the stimuli. Like power, higher-frequency synchrony increased with load, but beta synchrony (16–30 Hz) showed the opposite effect, increasing when power decreased (stimulus presentation) and decreasing when power increased (memory delay). Our results suggest roles for lower-frequency oscillations in top-down processing and higher-frequency oscillations in bottom-up processing. PMID:26286916
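
    The load analyses above rest on estimating spectral power in fixed frequency bands and relating it to the number of remembered items. The sketch below illustrates that kind of computation on synthetic data; the sampling rate, trial structure, and variable names are assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch of a band-power-versus-load analysis (synthetic data).
# In the actual study, LFPs came from LIP, FEF, and lateral prefrontal cortex.
import numpy as np
from scipy.signal import welch

fs = 1000  # sampling rate (Hz), an assumption
rng = np.random.default_rng(0)

def band_power(trial, fs, lo, hi):
    """Mean spectral power of one LFP trial within [lo, hi] Hz."""
    freqs, psd = welch(trial, fs=fs, nperseg=512)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

# Simulate 100 trials, each tagged with a contralateral load of 1-3 stimuli.
loads = rng.integers(1, 4, size=100)
trials = rng.standard_normal((100, 2000))  # 2 s of fake LFP per trial

high = np.array([band_power(t, fs, 50, 100) for t in trials])
low = np.array([band_power(t, fs, 8, 50) for t in trials])

# The reported pattern: higher-frequency power increases with contralateral
# load during encoding, while lower-frequency power decreases with total load.
print("corr(load, 50-100 Hz power):", np.corrcoef(loads, high)[0, 1])
print("corr(load, 8-50 Hz power): ", np.corrcoef(loads, low)[0, 1])
```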

  18. Caffeine Improves Left Hemisphere Processing of Positive Words

    PubMed Central

    Kuchinke, Lars; Lux, Vanessa

    2012-01-01

    A positivity advantage is known in emotional word recognition in that positive words are consistently processed faster and with fewer errors compared to emotionally neutral words. A similar advantage is not evident for negative words. Results of divided visual field studies, where stimuli are presented in either the left or right visual field and are initially processed by the contra-lateral brain hemisphere, point to a specificity of the language-dominant left hemisphere. The present study examined this effect by showing that the intake of caffeine further enhanced the recognition performance of positive, but not negative or neutral stimuli compared to a placebo control group. Because this effect was only present in the right visual field/left hemisphere condition, and based on the close link between caffeine intake and dopaminergic transmission, this result points to a dopaminergic explanation of the positivity advantage in emotional word recognition. PMID:23144893

  19. Information processing of visually presented picture and word stimuli by young hearing-impaired and normal-hearing children.

    PubMed

    Kelly, R R; Tomlinson-Keasey, C

    1976-12-01

    Eleven hearing-impaired children and 11 normal-hearing children (mean age = 4 years 11 months) were visually presented with familiar items in either picture or word form. Subjects were asked to recognize the stimuli they had seen from cue cards consisting of pictures or words. They were then asked to recall the sequence of stimuli by arranging the cue cards selected. The hearing-impaired group and normal-hearing subjects performed differently with the picture/picture (P/P) and word/word (W/W) modes in the recognition phase. The hearing impaired performed equally well with both modes (P/P and W/W), while the normal hearing did significantly better on the P/P mode. Furthermore, the normal-hearing group showed no difference in processing like modes (P/P and W/W) when compared to unlike modes (W/P and P/W). In contrast, the hearing-impaired subjects did better on like modes. The results were interpreted, in part, as supporting the position that young normal-hearing children dual-code their visual information better than hearing-impaired children.

  20. Object activation in semantic memory from visual multimodal feature input.

    PubMed

    Kraut, Michael A; Kremen, Sarah; Moo, Lauren R; Segal, Jessica B; Calhoun, Vincent; Hart, John

    2002-01-01

    The human brain's representation of objects has been proposed to exist as a network of coactivated neural regions present in multiple cognitive systems. However, it is not known if there is a region specific to the process of activating an integrated object representation in semantic memory from multimodal feature stimuli (e.g., picture-word). A previous study using word-word feature pairs as stimulus input showed that the left thalamus is integrally involved in object activation (Kraut, Kremen, Segal, et al., this issue). In the present study, participants were presented picture-word pairs that are features of objects, with the task being to decide if together they "activated" an object not explicitly presented (e.g., picture of a candle and the word "icing" activate the internal representation of a "cake"). For picture-word pairs that combine to elicit an object, signal change was detected in the ventral temporo-occipital regions, pre-SMA, left primary somatomotor cortex, both caudate nuclei, and the dorsal thalami bilaterally. These findings suggest that the left thalamus is engaged for either picture or word stimuli, but the right thalamus appears to be involved when picture stimuli are also presented with words in semantic object activation tasks. The somatomotor signal changes are likely secondary to activation of the semantic object representations from multimodal visual stimuli.

  1. Oscillatory encoding of visual stimulus familiarity.

    PubMed

    Kissinger, Samuel T; Pak, Alexandr; Tang, Yu; Masmanidis, Sotiris C; Chubykin, Alexander A

    2018-06-18

    Familiarity of the environment changes the way we perceive and encode incoming information. However, the neural substrates underlying this phenomenon are poorly understood. Here we describe a new form of experience-dependent low frequency oscillations in the primary visual cortex (V1) of awake adult male mice. The oscillations emerged in visually evoked potentials (VEPs) and single-unit activity following repeated visual stimulation. The oscillations were sensitive to the spatial frequency content of a visual stimulus and required the muscarinic acetylcholine receptors (mAChRs) for their induction and expression. Finally, ongoing visually evoked theta (4-6 Hz) oscillations boost the VEP amplitude of incoming visual stimuli if the stimuli are presented at the high excitability phase of the oscillations. Our results demonstrate that an oscillatory code can be used to encode familiarity and serve as a gate for oncoming sensory inputs. Significance Statement: Previous experience can influence the processing of incoming sensory information by the brain and alter perception. However, the mechanistic understanding of how this process takes place is lacking. We have discovered that persistent low frequency oscillations in the primary visual cortex encode information about familiarity and the spatial frequency of the stimulus. These familiarity evoked oscillations influence neuronal responses to the oncoming stimuli in a way that depends on the oscillation phase. Our work demonstrates a new mechanism of visual stimulus feature detection and learning. Copyright © 2018 the authors.
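
    A phase-dependent boost of the kind reported here is typically assessed by estimating the oscillation's phase at each stimulus onset and binning response amplitudes by phase. Below is a minimal sketch on synthetic data, assuming a 1 kHz sampling rate and a Hilbert-transform phase estimate; the authors' actual pipeline may differ.

```python
# Illustrative sketch (not the authors' code): does response amplitude depend
# on the theta (4-6 Hz) phase at stimulus onset? All data here are synthetic.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000  # assumed sampling rate (Hz)
rng = np.random.default_rng(1)

# Band-pass the ongoing signal in the theta band, then take the analytic phase.
b, a = butter(2, [4 / (fs / 2), 6 / (fs / 2)], btype="band")
ongoing = rng.standard_normal(60 * fs)  # 60 s of fake ongoing V1 activity
theta_phase = np.angle(hilbert(filtfilt(b, a, ongoing)))

# Suppose stimuli were delivered at these sample indices, with one evoked
# response amplitude measured per stimulus (fabricated here).
onsets = rng.integers(fs, ongoing.size - fs, size=200)
amplitudes = rng.standard_normal(200) + 5.0

# Bin amplitudes by onset phase: a phase-dependent boost would appear as a
# larger mean amplitude in the high-excitability phase bins.
bins = np.digitize(theta_phase[onsets], np.linspace(-np.pi, np.pi, 9))
for k in range(1, 9):
    sel = bins == k
    if sel.any():
        print(f"phase bin {k}: mean amplitude {amplitudes[sel].mean():.2f}")
```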

  2. The role of the amygdala and the basal ganglia in visual processing of central vs. peripheral emotional content.

    PubMed

    Almeida, Inês; van Asselen, Marieke; Castelo-Branco, Miguel

    2013-09-01

    In human cognition, most relevant stimuli, such as faces, are processed in central vision. However, it is widely believed that recognition of relevant stimuli (e.g. threatening animal faces) at peripheral locations is also important due to their survival value. Moreover, task instructions have been shown to modulate brain regions involved in threat recognition (e.g. the amygdala). In this respect it is also controversial whether tasks requiring explicit focus on stimulus threat content vs. implicit processing differently engage primitive subcortical structures involved in emotional appraisal. Here we have addressed the role of central vs. peripheral processing in the human amygdala using animal threatening vs. non-threatening face stimuli. First, a simple animal face recognition task with threatening and non-threatening animal faces, as well as non-face control stimuli, was employed in naïve subjects (implicit task). A subsequent task was then performed with the same stimulus categories (but different stimuli) in which subjects were told to explicitly detect threat signals. We found lateralized amygdala responses both to the spatial location of stimuli and to the threatening content of faces depending on the task performed: the right amygdala showed increased responses to central compared to left presented stimuli specifically during the threat detection task, while the left amygdala was better able to discriminate threatening faces from non-facial displays during the animal face recognition task. Additionally, the right amygdala responded to faces during the threat detection task but only when centrally presented. Moreover, we have found no evidence for superior responses of the amygdala to peripheral stimuli. Importantly, we have found that striatal regions activate differentially depending on peripheral vs. central processing of threatening faces. Accordingly, peripheral processing of these stimuli activated more strongly the putaminal region, while central processing engaged mainly the caudate nucleus. We conclude that the human amygdala has a central bias for face stimuli, and that visual processing recruits different striatal regions, putaminal or caudate based, depending on the task and on whether peripheral or central visual processing is involved. © 2013 Elsevier Ltd. All rights reserved.

  3. Attentional gain and processing capacity limits predict the propensity to neglect unexpected visual stimuli.

    PubMed

    Papera, Massimiliano; Richards, Anne

    2016-05-01

    Exogenous allocation of attentional resources allows the visual system to encode and maintain representations of stimuli in visual working memory (VWM). However, limits in the processing capacity to allocate resources can prevent unexpected visual stimuli from gaining access to VWM and thereby to consciousness. Using a novel approach to create unbiased stimuli of increasing saliency, we investigated visual processing during a visual search task in individuals who show a high or low propensity to neglect unexpected stimuli. When the propensity to inattention is high, ERP recordings show diminished amplification, together with a decrease in theta band power, during the N1 latency, followed by poor target enhancement during the N2 latency. Furthermore, a later modulation in the P3 latency was also found in individuals showing a propensity to visual neglect, suggesting that more effort is required for conscious maintenance of visual information in VWM. Effects during early stages of processing (N80 and P1) were also observed, suggesting that sensitivity to contrasts and medium-to-high spatial frequencies may be modulated by low-level saliency (although no statistical group differences were found). In accordance with the Global Workspace Model, our data indicate that a lack of resources in low-level processors and visual attention may be responsible for the failure to "ignite" a state of high-level activity spread across several brain areas that is necessary for stimuli to access awareness. These findings may aid in the development of diagnostic tests and interventions to detect and reduce the propensity to neglect unexpected visual stimuli. © 2016 Society for Psychophysiological Research.

  4. The Attention Cascade Model and Attentional Blink

    ERIC Educational Resources Information Center

    Shih, Shui-I

    2008-01-01

    An attention cascade model is proposed to account for attentional blinks in rapid serial visual presentation (RSVP) of stimuli. Data were collected using single characters in a single RSVP stream at 10 Hz [Shih, S., & Reeves, A. (2007). "Attentional capture in rapid serial visual presentation." "Spatial Vision", 20(4), 301-315], and single words,…

  5. Brain response to visual sexual stimuli in homosexual pedophiles

    PubMed Central

    Schiffer, Boris; Krueger, Tillmann; Paul, Thomas; de Greiff, Armin; Forsting, Michael; Leygraf, Norbert; Schedlowski, Manfred; Gizewski, Elke

    2008-01-01

    Objective: The neurobiological mechanisms of deviant sexual preferences such as pedophilia are largely unknown. The objective of this study was to analyze whether brain activation patterns of homosexual pedophiles differed from those of a nonpedophile homosexual control group during visual sexual stimulation. Method: A consecutive sample of 11 pedophile forensic inpatients exclusively attracted to boys and 12 age-matched homosexual control participants from a comparable socioeconomic stratum underwent functional magnetic resonance imaging during a visual sexual stimulation procedure that used sexually stimulating and emotionally neutral photographs. Sexual arousal was assessed according to a subjective rating scale. Results: In contrast to sexually neutral pictures, in both groups sexually arousing pictures having both homosexual and pedophile content activated brain areas known to be involved in processing visual stimuli containing emotional content, including the occipitotemporal and prefrontal cortices. However, during presentation of the respective sexual stimuli, the thalamus, globus pallidus and striatum, which correspond to the key areas of the brain involved in sexual arousal and behaviour, showed significant activation in pedophiles, but not in control subjects. Conclusions: Central processing of visual sexual stimuli in homosexual pedophiles seems to be comparable to that in nonpedophile control subjects. However, compared with homosexual control subjects, activation patterns in pedophiles refer more strongly to subcortical regions, which have previously been discussed in the context of processing reward signals and also play an important role in addictive and stimulus-controlled behaviour. Thus future studies should further elucidate the specificity of these brain regions for the processing of sexual stimuli in pedophilia and should address the generally weaker activation pattern in homosexual men. PMID:18197269

  6. Brain response to visual sexual stimuli in homosexual pedophiles.

    PubMed

    Schiffer, Boris; Krueger, Tillmann; Paul, Thomas; de Greiff, Armin; Forsting, Michael; Leygraf, Norbert; Schedlowski, Manfred; Gizewski, Elke

    2008-01-01

    The neurobiological mechanisms of deviant sexual preferences such as pedophilia are largely unknown. The objective of this study was to analyze whether brain activation patterns of homosexual pedophiles differed from those of a nonpedophile homosexual control group during visual sexual stimulation. A consecutive sample of 11 pedophile forensic inpatients exclusively attracted to boys and 12 age-matched homosexual control participants from a comparable socioeconomic stratum underwent functional magnetic resonance imaging during a visual sexual stimulation procedure that used sexually stimulating and emotionally neutral photographs. Sexual arousal was assessed according to a subjective rating scale. In contrast to sexually neutral pictures, in both groups sexually arousing pictures having both homosexual and pedophile content activated brain areas known to be involved in processing visual stimuli containing emotional content, including the occipitotemporal and prefrontal cortices. However, during presentation of the respective sexual stimuli, the thalamus, globus pallidus and striatum, which correspond to the key areas of the brain involved in sexual arousal and behaviour, showed significant activation in pedophiles, but not in control subjects. Central processing of visual sexual stimuli in homosexual pedophiles seems to be comparable to that in nonpedophile control subjects. However, compared with homosexual control subjects, activation patterns in pedophiles refer more strongly to subcortical regions, which have previously been discussed in the context of processing reward signals and also play an important role in addictive and stimulus-controlled behaviour. Thus future studies should further elucidate the specificity of these brain regions for the processing of sexual stimuli in pedophilia and should address the generally weaker activation pattern in homosexual men.

  7. Abnormalities in the Visual Processing of Viewing Complex Visual Stimuli Amongst Individuals With Body Image Concern.

    PubMed

    Duncum, A J F; Atkins, K J; Beilharz, F L; Mundy, M E

    2016-01-01

    Individuals with body dysmorphic disorder (BDD) and clinically concerning body-image concern (BIC) appear to possess abnormalities in the way they perceive visual information in the form of a bias towards local visual processing. As inversion interrupts normal global processing, forcing individuals to process locally, an upright-inverted stimulus discrimination task was used to investigate this phenomenon. We examined whether individuals with nonclinical, yet high levels of BIC would show signs of this bias, in the form of reduced inversion effects (i.e., increased local processing). Furthermore, we assessed whether this bias appeared for general visual stimuli or specifically for appearance-related stimuli, such as faces and bodies. Participants with high-BIC (n = 25) and low-BIC (n = 30) performed a stimulus discrimination task with upright and inverted faces, scenes, objects, and bodies. Unexpectedly, the high-BIC group showed an increased inversion effect compared to the low-BIC group, indicating perceptual abnormalities may not be present as local processing biases, as originally thought. There was no significant difference in performance across stimulus types, signifying that any visual processing abnormalities may be general rather than appearance-based. This has important implications for whether visual processing abnormalities are predisposing factors for BDD or develop throughout the disorder.

  8. How visual timing and form information affect speech and non-speech processing.

    PubMed

    Kim, Jeesun; Davis, Chris

    2014-10-01

    Auditory speech processing is facilitated when the talker's face/head movements are seen. This effect is typically explained in terms of visual speech providing form and/or timing information. We determined the effect of both types of information on a speech/non-speech task (non-speech stimuli were spectrally rotated speech). All stimuli were presented paired with the talker's static or moving face. Two types of moving face stimuli were used: full-face versions (both spoken form and timing information available) and modified face versions (only timing information provided by peri-oral motion available). The results showed that the peri-oral timing information facilitated response time for speech and non-speech stimuli compared to a static face. An additional facilitatory effect was found for full-face versions compared to the timing condition; this effect only occurred for speech stimuli. We propose the timing effect was due to cross-modal phase resetting; the form effect to cross-modal priming. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Design Criteria for Visual Cues Used in Disruptive Learning Interventions within Sustainability Education

    ERIC Educational Resources Information Center

    Tillmanns, Tanja; Holland, Charlotte; Filho, Alfredo Salomão

    2017-01-01

    This paper presents the design criteria for Visual Cues--visual stimuli that are used in combination with other pedagogical processes and tools in Disruptive Learning interventions in sustainability education--to disrupt learners' existing frames of mind and help re-orient learners' mind-sets towards sustainability. The theory of Disruptive…

  10. Prestimulus oscillatory activity in the alpha band predicts visual discrimination ability.

    PubMed

    van Dijk, Hanneke; Schoffelen, Jan-Mathijs; Oostenveld, Robert; Jensen, Ole

    2008-02-20

    Although the resting and baseline states of the human electroencephalogram and magnetoencephalogram (MEG) are dominated by oscillations in the alpha band (approximately 10 Hz), the functional role of these oscillations remains unclear. In this study we used MEG to investigate how spontaneous oscillations preceding visual stimuli modulate visual perception in humans. Subjects had to report if there was a subtle difference in gray levels between two superimposed discs. We then compared the prestimulus brain activity for correctly (hits) versus incorrectly (misses) identified stimuli. We found that visual discrimination ability decreased with an increase in prestimulus alpha power. Given that reaction times did not vary systematically with prestimulus alpha power, changes in vigilance are not likely to explain the change in discrimination ability. Source reconstruction using spatial filters allowed us to identify the brain areas accounting for this effect. The dominant sources modulating visual perception were localized around the parieto-occipital sulcus. We suggest that the parieto-occipital alpha power reflects functional inhibition imposed by higher level areas, which serves to modulate the gain of the visual stream.
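
    The core contrast, higher prestimulus alpha power before misses than before hits, can be illustrated in a few lines of analysis code. This is a toy version on synthetic epochs with an assumed 1 kHz sampling rate; it is not the authors' MEG pipeline, which additionally used source reconstruction with spatial filters.

```python
# Minimal hits-versus-misses contrast on prestimulus alpha power (fake data).
import numpy as np
from scipy.signal import welch
from scipy.stats import ttest_ind

fs = 1000  # assumed sampling rate (Hz)
rng = np.random.default_rng(2)
prestim = rng.standard_normal((300, fs))  # 1 s prestimulus epoch per trial
hit = rng.random(300) < 0.7               # fabricated hit/miss labels

def alpha_power(epoch):
    """Mean 8-12 Hz power of one epoch."""
    f, psd = welch(epoch, fs=fs, nperseg=256)
    return psd[(f >= 8) & (f <= 12)].mean()

power = np.array([alpha_power(e) for e in prestim])
t, p = ttest_ind(power[~hit], power[hit])
# The reported finding corresponds to higher prestimulus alpha before misses.
print(f"misses vs hits alpha power: t = {t:.2f}, p = {p:.3f}")
```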

  11. Oculomotor guidance and capture by irrelevant faces.

    PubMed

    Devue, Christel; Belopolsky, Artem V; Theeuwes, Jan

    2012-01-01

    Even though it is generally agreed that face stimuli constitute a special class of stimuli, which are treated preferentially by our visual system, it remains unclear whether faces can capture attention in a stimulus-driven manner. Moreover, there is a long-standing debate regarding the mechanism underlying the preferential bias of selecting faces. Some claim that faces constitute a set of special low-level features to which our visual system is tuned; others claim that the visual system is capable of extracting the meaning of faces very rapidly, driving attentional selection. Those debates continue because many studies contain methodological peculiarities and manipulations that prevent a definitive conclusion. Here, we present a new visual search task in which observers had to make a saccade to a uniquely colored circle while completely irrelevant objects were also present in the visual field. The results indicate that faces capture and guide the eyes more than other animated objects and that our visual system is not only tuned to the low-level features that make up a face but also to its meaning.

  12. Stimulus modality and working memory performance in Greek children with reading disabilities: additional evidence for the pictorial superiority hypothesis.

    PubMed

    Constantinidou, Fofi; Evripidou, Christiana

    2012-01-01

    This study investigated the effects of stimulus presentation modality on working memory performance in children with reading disabilities (RD) and in typically developing children (TDC), all native speakers of Greek. It was hypothesized that the visual presentation of common objects would result in improved learning and recall performance as compared to the auditory presentation of stimuli. Twenty children, ages 10-12, diagnosed with RD were matched to 20 TDC age peers. The experimental tasks implemented a multitrial verbal learning paradigm incorporating three modalities: auditory, visual, and auditory plus visual. Significant group differences were noted on language, verbal and nonverbal memory, and measures of executive abilities. A mixed-model MANOVA indicated that children with RD had a slower learning curve and recalled fewer words than TDC across experimental modalities. Both groups of participants benefited from the visual presentation of objects; however, children with RD showed the greatest gains during this condition. In conclusion, working memory for common verbal items is impaired in children with RD; however, performance can be facilitated, and learning efficiency maximized, when information is presented visually. The results provide further evidence for the pictorial superiority hypothesis and the theory that pictorial presentation of verbal stimuli is adequate for dual coding.

  13. Steady-state signatures of visual perceptual load, multimodal distractor filtering, and neural competition.

    PubMed

    Parks, Nathan A; Hilimire, Matthew R; Corballis, Paul M

    2011-05-01

    The perceptual load theory of attention posits that attentional selection occurs early in processing when a task is perceptually demanding but occurs late in processing otherwise. We used a frequency-tagged steady-state evoked potential paradigm to investigate the modality specificity of perceptual load-induced distractor filtering and the nature of neural-competitive interactions between task and distractor stimuli. EEG data were recorded while participants monitored a stream of stimuli occurring in rapid serial visual presentation (RSVP) for the appearance of previously assigned targets. Perceptual load was manipulated by assigning targets that were identifiable by color alone (low load) or by the conjunction of color and orientation (high load). The RSVP task was performed alone and in the presence of task-irrelevant visual and auditory distractors. The RSVP stimuli, visual distractors, and auditory distractors were "tagged" by modulating each at a unique frequency (2.5, 8.5, and 40.0 Hz, respectively), which allowed each to be analyzed separately in the frequency domain. We report three important findings regarding the neural mechanisms of perceptual load. First, we replicated previous findings of within-modality distractor filtering and demonstrated a reduction in visual distractor signals with high perceptual load. Second, auditory steady-state distractor signals were unaffected by manipulations of visual perceptual load, consistent with the idea that perceptual load-induced distractor filtering is modality specific. Third, analysis of task-related signals revealed that visual distractors competed with task stimuli for representation and that increased perceptual load appeared to resolve this competition in favor of the task stimulus.
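
    Frequency tagging works because a stimulus modulated at a fixed rate drives a neural response at exactly that rate, which can then be read out from the EEG amplitude spectrum. Below is a minimal sketch with synthetic data and the tag frequencies named above (2.5, 8.5, and 40.0 Hz); the epoch length and sampling rate are assumptions chosen so the tags fall on exact FFT bins.

```python
# Toy frequency-tagged analysis: recover each stream's response at its tag.
import numpy as np

fs, dur = 500, 20.0                  # assumed sampling rate (Hz) and epoch (s)
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(3)

# Fake EEG: responses at the three tag frequencies plus broadband noise.
eeg = (0.8 * np.sin(2 * np.pi * 2.5 * t)     # RSVP task stream (2.5 Hz)
       + 0.5 * np.sin(2 * np.pi * 8.5 * t)   # visual distractor (8.5 Hz)
       + 0.3 * np.sin(2 * np.pi * 40.0 * t)  # auditory distractor (40 Hz)
       + rng.standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Each stream is analyzed separately at its own frequency bin.
for name, tag in [("task 2.5 Hz", 2.5), ("visual 8.5 Hz", 8.5),
                  ("auditory 40 Hz", 40.0)]:
    idx = np.argmin(np.abs(freqs - tag))
    print(f"{name}: amplitude {spectrum[idx]:.3f}")
```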

  14. Dissociating emotion-induced blindness and hypervision.

    PubMed

    Bocanegra, Bruno R; Zeelenberg, René

    2009-12-01

    Previous findings suggest that emotional stimuli sometimes improve (emotion-induced hypervision) and sometimes impair (emotion-induced blindness) the visual perception of subsequent neutral stimuli. We hypothesized that these differential carryover effects might be due to 2 distinct emotional influences in visual processing. On the one hand, emotional stimuli trigger a general enhancement in the efficiency of visual processing that can carry over onto other stimuli. On the other hand, emotional stimuli benefit from a stimulus-specific enhancement in later attentional processing at the expense of competing visual stimuli. We investigated whether detrimental (blindness) and beneficial (hypervision) carryover effects of emotion in perception can be dissociated within a single experimental paradigm. In 2 experiments, we manipulated the temporal competition for attention between an emotional cue word and a subsequent neutral target word by varying cue-target interstimulus interval (ISI) and cue visibility. Interestingly, emotional cues impaired target identification at short ISIs but improved target identification when competition was diminished by either increasing ISI or reducing cue visibility, suggesting that emotional significance of stimuli can improve and impair visual performance through distinct perceptual mechanisms.

  15. A low-cost and versatile system for projecting wide-field visual stimuli within fMRI scanners

    PubMed Central

    Greco, V.; Frijia, F.; Mikellidou, K.; Montanaro, D.; Farini, A.; D’Uva, M.; Poggi, P.; Pucci, M.; Sordini, A.; Morrone, M. C.; Burr, D. C.

    2016-01-01

    We have constructed and tested a custom-made magnetic-imaging-compatible visual projection system designed to project on a very wide visual field (~80°). A standard projector was modified with a coupling lens, projecting images into the termination of an image fiber. The other termination of the fiber was placed in the 3-T scanner room with a projection lens, which projected the images relayed by the fiber onto a screen over the head coil, viewed by a participant wearing magnifying goggles. To validate the system, wide-field stimuli were presented in order to identify retinotopic visual areas. The results showed that this low-cost and versatile optical system may be a valuable tool to map visual areas in the brain that process peripheral receptive fields. PMID:26092392
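
    As a rough plausibility check (not a detail from the paper), the visual angle subtended by a screen follows from simple geometry, and the short effective viewing distances afforded by magnifying goggles are what make an ~80° field feasible.

```python
# Visual angle of an object of a given size at a given viewing distance.
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Full visual angle subtended by an object of size_cm at distance_cm."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

# Hypothetical numbers: a 25 cm wide screen viewed from an effective 15 cm
# subtends roughly 80 degrees.
print(f"{visual_angle_deg(25, 15):.1f} deg")
```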

  16. Audiovisual integration of emotional signals in voice and face: an event-related fMRI study.

    PubMed

    Kreifelts, Benjamin; Ethofer, Thomas; Grodd, Wolfgang; Erb, Michael; Wildgruber, Dirk

    2007-10-01

    In a natural environment, non-verbal emotional communication is multimodal (i.e. speech melody, facial expression) and multifaceted concerning the variety of expressed emotions. Understanding these communicative signals and integrating them into a common percept is paramount to successful social behaviour. While many previous studies have focused on the neurobiology of emotional communication in the auditory or visual modality alone, far less is known about multimodal integration of auditory and visual non-verbal emotional information. The present study investigated this process using event-related fMRI. Behavioural data revealed that audiovisual presentation of non-verbal emotional information resulted in a significant increase in correctly classified stimuli when compared with visual and auditory stimulation. This behavioural gain was paralleled by enhanced activation in bilateral posterior superior temporal gyrus (pSTG) and right thalamus, when contrasting audiovisual to auditory and visual conditions. Further, a characteristic of these brain regions, substantiating their role in the emotional integration process, is a linear relationship between the gain in classification accuracy and the strength of the BOLD response during the bimodal condition. Additionally, enhanced effective connectivity between audiovisual integration areas and associative auditory and visual cortices was observed during audiovisual stimulation, offering further insight into the neural process accomplishing multimodal integration. Finally, we were able to document an enhanced sensitivity of the putative integration sites to stimuli with emotional non-verbal content as compared to neutral stimuli.

  17. The perception of isoluminant coloured stimuli of amblyopic eye and defocused eye

    NASA Astrophysics Data System (ADS)

    Krumina, Gunta; Ozolinsh, Maris; Ikaunieks, Gatis

    2008-09-01

    In routine eye examinations, visual acuity is usually determined using standard charts with black letters on a white background; however, contrast and colour are also important characteristics of visual perception. The purpose of this research was to study the perception of isoluminant coloured stimuli in cases of true and simulated amblyopia. We estimated the difference in visual acuity between isoluminant coloured stimuli and high-contrast black-white stimuli for true and simulated amblyopia. Tests were generated on a computer screen. Visual acuity was measured with two kinds of charts: standard achromatic stimuli (black symbols on a white background) and isoluminant coloured stimuli (white symbols on a yellow background, or grey symbols on a blue, green, or red background). The isoluminant tests therefore had colour contrast only, with no luminance contrast. Visual acuity with the standard method and with the colour tests was first measured in subjects with good visual acuity, using the best vision correction where necessary. The same measurements were then performed in subjects with a defocused eye and in subjects with true amblyopia; defocus was produced with optical lenses placed in front of the normal eye. The results obtained with the isoluminant colour charts revealed a worsening of visual acuity compared with the acuity estimated with the standard high-contrast method.
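
    Nominal isoluminance can be approximated by matching the relative luminance of the symbol and background colours, although in practice display calibration and per-observer adjustment (e.g., flicker photometry) are required. A hypothetical sketch using Rec. 709 luminance weights on linear RGB:

```python
# Nominal (display-independent) luminance matching, for illustration only.
def rel_luminance(r: float, g: float, b: float) -> float:
    """Relative luminance of a linear-RGB triplet (Rec. 709 weights)."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

bg = (1.0, 1.0, 0.0)          # yellow background (linear RGB), an example
y = rel_luminance(*bg)
gray = (y, y, y)              # gray symbol with the same nominal luminance
print(f"background Y = {y:.3f}, matched gray = {gray}")
```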

  18. Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap.

    PubMed

    Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin'ya

    2013-01-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio visual speech; see Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; see Experiment 2) we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair alone can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap.

  19. Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap

    PubMed Central

    Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin'ya

    2013-01-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio visual speech; see Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; see Experiment 2) we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair alone can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap. PMID:23658549

  20. Neurons in the pigeon caudolateral nidopallium differentiate Pavlovian conditioned stimuli but not their associated reward value in a sign-tracking paradigm

    PubMed Central

    Kasties, Nils; Starosta, Sarah; Güntürkün, Onur; Stüttgen, Maik C.

    2016-01-01

    Animals exploit visual information to identify objects, form stimulus-reward associations, and prepare appropriate behavioral responses. The nidopallium caudolaterale (NCL), an associative region of the avian endbrain, contains neurons exhibiting prominent response modulation during presentation of reward-predicting visual stimuli, but it is unclear whether neural activity represents valuation signals, stimulus properties, or sensorimotor contingencies. To test the hypothesis that NCL neurons represent stimulus value, we subjected pigeons to a Pavlovian sign-tracking paradigm in which visual cues predicted rewards differing in magnitude (large vs. small) and delay to presentation (short vs. long). Subjects’ strength of conditioned responding to visual cues reliably differentiated between predicted reward types and thus indexed valuation. The majority of NCL neurons discriminated between visual cues, with discriminability peaking shortly after stimulus onset and being maintained at lower levels throughout the stimulus presentation period. However, while some cells’ firing rates correlated with reward value, such neurons were not more frequent than expected by chance. Instead, neurons formed discernible clusters which differed in their preferred visual cue. We propose that this activity pattern constitutes a prerequisite for using visual information in more complex situations e.g. requiring value-based choices. PMID:27762287

  1. Fear conditioning to subliminal fear relevant and non fear relevant stimuli.

    PubMed

    Lipp, Ottmar V; Kempnich, Clare; Jee, Sang Hoon; Arnold, Derek H

    2014-01-01

    A growing body of evidence suggests that conscious visual awareness is not a prerequisite for human fear learning. For instance, humans can learn to be fearful of subliminal fear relevant images--images depicting stimuli thought to have been fear relevant in our evolutionary context, such as snakes, spiders, and angry human faces. Such stimuli could have a privileged status in relation to manipulations used to suppress usually salient images from awareness, possibly due to the existence of a designated sub-cortical 'fear module'. Here we assess this proposition, and find it wanting. We use binocular masking to suppress awareness of images of snakes and wallabies (particularly cute, non-threatening marsupials). We find that subliminal presentations of both classes of image can induce differential fear conditioning. These data show that learning, as indexed by fear conditioning, is neither contingent on conscious visual awareness nor on subliminal conditional stimuli being fear relevant.

  2. Methods for Dichoptic Stimulus Presentation in Functional Magnetic Resonance Imaging - A Review

    PubMed Central

    Choubey, Bhaskar; Jurcoane, Alina; Muckli, Lars; Sireteanu, Ruxandra

    2009-01-01

    Dichoptic stimuli (different stimuli displayed to each eye) are increasingly being used in functional brain imaging experiments using visual stimulation. These studies include investigations into binocular rivalry, interocular information transfer, and three-dimensional depth perception, as well as impairments of the visual system such as amblyopia and stereodeficiency. In this paper, we review various approaches to displaying dichoptic stimuli in functional magnetic resonance imaging experiments. These include traditional approaches using filters (red-green, red-blue, polarizing) with optical assemblies, as well as newer approaches using bi-screen goggles. PMID:19526076

  3. Accuracy and Precision of Visual Stimulus Timing in PsychoPy: No Timing Errors in Standard Usage

    PubMed Central

    Garaizar, Pablo; Vadillo, Miguel A.

    2014-01-01

    In a recent report published in PLoS ONE, we found that the performance of PsychoPy degraded with very short timing intervals, suggesting that it might not be perfectly suitable for experiments requiring the presentation of very brief stimuli. The present study aims to provide an updated performance assessment for the most recent version of PsychoPy (v1.80) under different hardware/software conditions. Overall, the results show that PsychoPy can achieve high levels of precision and accuracy in the presentation of brief visual stimuli. Although occasional timing errors were found in very demanding benchmarking tests, there is no reason to think that they can pose any problem for standard experiments developed by researchers. PMID:25365382
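
    For context, the standard usage the benchmark refers to is frame-locked presentation: drawing a stimulus for a fixed number of screen refreshes rather than waiting a wall-clock duration. Below is a minimal sketch of that pattern; the window size, stimulus, and durations are arbitrary choices for illustration, not the benchmark's parameters.

```python
# Minimal PsychoPy sketch of frame-locked brief stimulus presentation.
from psychopy import visual, core

win = visual.Window(size=(800, 600), fullscr=False, units="pix")
grating = visual.GratingStim(win, tex="sin", mask="gauss", size=256)

# Drawing for a fixed number of frames (rather than calling core.wait) locks
# the stimulus duration to the monitor refresh: 3 frames at 60 Hz = 50 ms.
for _frame in range(3):
    grating.draw()
    win.flip()
win.flip()      # clear the screen after the last stimulus frame
core.wait(1.0)  # intertrial interval
win.close()
```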

  4. Visual speech discrimination and identification of natural and synthetic consonant stimuli

    PubMed Central

    Files, Benjamin T.; Tjan, Bosco S.; Jiang, Jintao; Bernstein, Lynne E.

    2015-01-01

    From phonetic features to connected discourse, every level of psycholinguistic structure including prosody can be perceived through viewing the talking face. Yet a longstanding notion in the literature is that visual speech perceptual categories comprise groups of phonemes (referred to as visemes), such as /p, b, m/ and /f, v/, whose internal structure is not informative to the visual speech perceiver. This conclusion has not to our knowledge been evaluated using a psychophysical discrimination paradigm. We hypothesized that perceivers can discriminate the phonemes within typical viseme groups, and that discrimination measured with d-prime (d’) and response latency is related to visual stimulus dissimilarities between consonant segments. In Experiment 1, participants performed speeded discrimination for pairs of consonant-vowel spoken nonsense syllables that were predicted to be same, near, or far in their perceptual distances, and that were presented as natural or synthesized video. Near pairs were within-viseme consonants. Natural within-viseme stimulus pairs were discriminated significantly above chance (except for /k/-/h/). Sensitivity (d’) increased and response times decreased with distance. Discrimination and identification were superior with natural stimuli, which comprised more phonetic information. We suggest that the notion of the viseme as a unitary perceptual category is incorrect. Experiment 2 probed the perceptual basis for visual speech discrimination by inverting the stimuli. Overall reductions in d’ with inverted stimuli but a persistent pattern of larger d’ for far than for near stimulus pairs are interpreted as evidence that visual speech is represented by both its motion and configural attributes. The methods and results of this investigation open up avenues for understanding the neural and perceptual bases for visual and audiovisual speech perception and for development of practical applications such as visual lipreading/speechreading speech synthesis. PMID:26217249
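
    The sensitivity measure here is the standard signal-detection d', the difference between the z-transformed hit and false-alarm rates. A small illustration with made-up counts follows, using a log-linear correction so extreme rates stay finite; the authors' exact correction may differ.

```python
# Standard d' computation for a same-different discrimination task.
from scipy.stats import norm

def dprime(hits: int, misses: int, fas: int, crs: int) -> float:
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear correction
    so that perfect rates of 0 or 1 remain finite."""
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (fas + 0.5) / (fas + crs + 1)
    return norm.ppf(hr) - norm.ppf(far)

# Hypothetical counts for a within-viseme ("near") consonant pair:
print(f"d' = {dprime(hits=70, misses=30, fas=25, crs=75):.2f}")
```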

  5. Demonstrating the Potential for Dynamic Auditory Stimulation to Contribute to Motion Sickness

    PubMed Central

    Keshavarz, Behrang; Hettinger, Lawrence J.; Kennedy, Robert S.; Campos, Jennifer L.

    2014-01-01

    Auditory cues can create the illusion of self-motion (vection) in the absence of visual or physical stimulation. The present study aimed to determine whether auditory cues alone can also elicit motion sickness and how auditory cues contribute to motion sickness when added to visual motion stimuli. Twenty participants were seated in front of a curved projection display and were exposed to a virtual scene that constantly rotated around the participant's vertical axis. The virtual scene contained either visual-only, auditory-only, or a combination of corresponding visual and auditory cues. All participants performed all three conditions in a counterbalanced order. Participants tilted their heads alternately towards the right or left shoulder in all conditions during stimulus exposure in order to create pseudo-Coriolis effects and to maximize the likelihood for motion sickness. Measurements of motion sickness (onset, severity), vection (latency, strength, duration), and postural steadiness (center of pressure) were recorded. Results showed that adding auditory cues to the visual stimuli did not, on average, affect motion sickness and postural steadiness, but it did reduce vection onset times and increased vection strength compared to pure visual or pure auditory stimulation. Eighteen of the 20 participants reported at least slight motion sickness in the two conditions including visual stimuli. More interestingly, six participants also reported slight motion sickness during pure auditory stimulation and two of the six participants stopped the pure auditory test session due to motion sickness. The present study is the first to demonstrate that motion sickness may be caused by pure auditory stimulation, which we refer to as “auditorily induced motion sickness”. PMID:24983752

  6. Visuotactile interaction even in far sagittal space in older adults with decreased gait and balance functions.

    PubMed

    Teramoto, Wataru; Honda, Keito; Furuta, Kento; Sekiyama, Kaoru

    2017-08-01

    Spatial proximity of signals from different sensory modalities is known to be a crucial factor in facilitating efficient multisensory processing in young adults. However, recent studies have demonstrated that older adults exhibit strong visuotactile interactions even when the visual stimuli were presented in a spatially disparate position from a tactile stimulus. This suggests that visuotactile peripersonal space differs between older and younger adults. In the present study, we investigated to what extent peripersonal space expands in the sagittal direction and whether this expansion was linked to the decline in sensorimotor functions in older adults. Vibrotactile stimuli were delivered either to the left or right index finger, while visual stimuli were presented at a distance of 5 cm (near), 37.5 cm (middle), or 70 cm (far) from each finger. The participants had to respond rapidly to a randomized sequence of unimodal (visual or tactile) and simultaneous visuotactile targets (i.e., a redundant target paradigm). Sensorimotor functions were independently assessed by the Timed Up and Go (TUG) and postural stability tests. Results showed that reaction times to the visuotactile bimodal stimuli were significantly faster than those to the unimodal stimuli, irrespective of age group [younger adults: 22.0 ± 0.6 years, older adults: 75.0 ± 3.3 years (mean ± SD)] and target distance. Of importance, a race model analysis revealed that the co-activation model (i.e., visuotactile multisensory integrative process) is supported in the far condition especially for older adults with relatively poor performance on the TUG or postural stability tests. These results suggest that aging can change visuotactile peripersonal space and that it may be closely linked to declines in sensorimotor functions related to gait and balance in older adults.
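
    The race model analysis referred to is Miller's race model inequality: under a race (separate-activation) account, the bimodal RT distribution can never exceed the sum of the two unimodal distributions at any time point, so positive deviations indicate co-activation, i.e., genuine multisensory integration. A toy version on fabricated reaction times:

```python
# Race-model-inequality test for a redundant-target paradigm (synthetic RTs).
import numpy as np

rng = np.random.default_rng(4)
rt_t = rng.normal(420, 60, 200)    # unimodal tactile RTs (ms), fabricated
rt_v = rng.normal(440, 60, 200)    # unimodal visual RTs (ms), fabricated
rt_tv = rng.normal(360, 50, 200)   # bimodal RTs (ms), fabricated

def ecdf(sample, t):
    """Empirical CDF of `sample` evaluated at each time in `t`."""
    return np.mean(sample[:, None] <= t, axis=0)

t = np.arange(250, 600, 10)
bound = np.minimum(ecdf(rt_t, t) + ecdf(rt_v, t), 1.0)
violation = ecdf(rt_tv, t) - bound
# Positive values anywhere on the grid violate the race model, supporting
# the co-activation account described in the abstract.
print("max violation:", violation.max())
```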

  7. When Do Words Hurt? A Multiprocess View of the Effects of Verbalization on Visual Memory

    ERIC Educational Resources Information Center

    Brown, Charity; Brandimonte, Maria A.; Wickham, Lee H. V.; Bosco, Andrea; Schooler, Jonathan W.

    2014-01-01

    Verbal overshadowing reflects the impairment in memory performance following verbalization of nonverbal stimuli. However, it is not clear whether the same mechanisms are responsible for verbal overshadowing effects observed with different stimuli and task demands. In the present article, we propose a multiprocess view that reconciles the main…

  8. Brain Mechanisms Involved in Early Visual Perception.

    ERIC Educational Resources Information Center

    Karmel, Bernard Z.

    This document presents an analysis of the early attending responses and orienting reactions of infants which can be observed at birth and shortly thereafter. Focus is on one specific orienting reaction, the early direction and maintenance of one's eyes and head toward certain stimuli instead of others. The physical properties of stimuli that…

  9. Do you remember where sounds, pictures and words came from? The role of the stimulus format in object location memory.

    PubMed

    Delogu, Franco; Lilla, Christopher C

    2017-11-01

    Contrasting results in visual and auditory spatial memory stimulate the debate over the role of sensory modality and attention in identity-to-location binding. We investigated the role of sensory modality in the incidental/deliberate encoding of the location of a sequence of items. In 4 separate blocks, 88 participants memorised sequences of environmental sounds, spoken words, pictures and written words, respectively. After memorisation, participants were asked to recognise old from new items in a new sequence of stimuli. They were also asked to indicate from which side of the screen (visual stimuli) or headphone channel (sounds) the old stimuli had been presented during encoding. In the first block, participants were not aware of the spatial requirement, while in blocks 2, 3 and 4 they knew that their memory for item location was going to be tested. Results show significantly lower accuracy of object location memory for the auditory stimuli (environmental sounds and spoken words) than for images (pictures and written words). Awareness of the spatial requirement did not influence localisation accuracy. We conclude that: (a) object location memory is more effective for visual objects; (b) object location is implicitly associated with item identity during encoding; and (c) visual supremacy in spatial memory does not depend on the automaticity of object location binding.

  10. Attenuated audiovisual integration in middle-aged adults in a discrimination task.

    PubMed

    Yang, Weiping; Ren, Yanna

    2018-02-01

    Numerous studies have focused on differences in audiovisual integration between younger and older adults. However, how audiovisual integration changes across the lifespan is still unclear. In the present study, to clarify the characteristics of audiovisual integration in middle-aged adults, younger and middle-aged adults performed an auditory/visual discrimination experiment. Randomized streams of unimodal auditory (A), unimodal visual (V) or audiovisual stimuli were presented in the left or right hemispace relative to the central fixation point, and subjects were instructed to respond to the target stimuli rapidly and accurately. Our results demonstrated that the responses of middle-aged adults to all unimodal and bimodal stimuli were significantly slower than those of younger adults (p < 0.05). Audiovisual integration was markedly delayed (onset time 360 ms) and weaker (peak 3.97%) in middle-aged adults than in younger adults (onset time 260 ms, peak 11.86%). The results suggested that audiovisual integration is attenuated in middle-aged adults and further confirmed an age-related decline in information processing.

  11. Functional neuroimaging studies of sexual arousal and orgasm in healthy men and women: a review and meta-analysis.

    PubMed

    Stoléru, Serge; Fonteille, Véronique; Cornélis, Christel; Joyal, Christian; Moulier, Virginie

    2012-07-01

    In the last fifteen years, functional neuroimaging techniques have been used to investigate the neuroanatomical correlates of sexual arousal in healthy human subjects. In most studies, subjects have been requested to watch visual sexual stimuli and control stimuli. Our review and meta-analysis found that in heterosexual men, sites of cortical activation consistently reported across studies are the lateral occipitotemporal, inferotemporal, parietal, orbitofrontal, medial prefrontal, insular, anterior cingulate, and frontal premotor cortices as well as, for subcortical regions, the amygdalas, claustrum, hypothalamus, caudate nucleus, thalami, cerebellum, and substantia nigra. Heterosexual and gay men show a similar pattern of activation. Visual sexual stimuli activate the amygdalas and thalami more in men than in women. Ejaculation is associated with decreased activation throughout the prefrontal cortex. We present a neurophenomenological model to understand how these multiple regional brain responses could account for the varied facets of the subjective experience of sexual arousal. Further research should shift from passive to active paradigms, focus on functional connectivity and use subliminal presentation of stimuli. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Working memory-driven attention improves spatial resolution: Support for perceptual enhancement.

    PubMed

    Pan, Yi; Luo, Qianying; Cheng, Min

    2016-08-01

    Previous research has indicated that attention can be biased toward stimuli matching the contents of working memory, thereby facilitating visual processing at the location of the memory-matching stimuli. However, whether this working memory-driven attentional modulation acts on early perceptual processes remains unclear. Our present results showed that working memory-driven attention improved identification of a brief Landolt target presented alone in the visual field. Because the suprathreshold target appeared without any external noise added (i.e., no distractors or masks), the results suggest that working memory-driven attention enhances the target signal at early perceptual stages of visual processing. Furthermore, given that performance in the Landolt target identification task indexes spatial resolution, this attentional facilitation indicates that working memory-driven attention can boost early perceptual processing via enhancement of spatial resolution at the attended location.

  13. Measuring social attention and motivation in autism spectrum disorder using eye-tracking: Stimulus type matters.

    PubMed

    Chevallier, Coralie; Parish-Morris, Julia; McVey, Alana; Rump, Keiran M; Sasson, Noah J; Herrington, John D; Schultz, Robert T

    2015-10-01

    Autism Spectrum Disorder (ASD) is characterized by social impairments that have been related to deficits in social attention, including diminished gaze to faces. Eye-tracking studies are commonly used to examine social attention and social motivation in ASD, but they vary in sensitivity. In this study, we hypothesized that the ecological nature of the social stimuli would affect participants' social attention, with gaze behavior during more naturalistic scenes being most predictive of ASD vs. typical development. Eighty-one children with and without ASD participated in three eye-tracking tasks that differed in the ecological relevance of the social stimuli. In the "Static Visual Exploration" task, static images of objects and people were presented; in the "Dynamic Visual Exploration" task, video clips of individual faces and objects were presented side-by-side; in the "Interactive Visual Exploration" task, video clips of children playing with objects in a naturalistic context were presented. Our analyses uncovered a three-way interaction between Task, Social vs. Object Stimuli, and Diagnosis. This interaction was driven by group differences on only one task: the Interactive task. Bayesian analyses confirmed that the other two tasks were insensitive to group membership. In addition, receiver operating characteristic analyses demonstrated that, unlike the other two tasks, the Interactive task had significant classification power. The ecological relevance of social stimuli is an important factor to consider for eye-tracking studies aiming to measure social attention and motivation in ASD. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.
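
    A brief, hypothetical illustration of the receiver operating characteristic analysis mentioned above: a gaze measure (here, an invented proportion of fixation time on social stimuli) is scored for how well it separates diagnostic groups, with an area under the curve (AUC) near 0.5 indicating no classification power. All numbers below are synthetic:

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(1)
    # 0 = typical development, 1 = ASD; lower social attention assumed for ASD
    labels = np.r_[np.zeros(40), np.ones(41)].astype(int)
    social_gaze = np.r_[rng.normal(0.65, 0.10, 40), rng.normal(0.50, 0.12, 41)]

    # higher scores should indicate the positive class, so negate the measure
    auc = roc_auc_score(labels, -social_gaze)
    fpr, tpr, thresholds = roc_curve(labels, -social_gaze)
    print(f"AUC = {auc:.2f}")  # ~0.5 would mean no classification power
    ```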

  14. Orienting attention to visual or verbal/auditory imagery differentially impairs the processing of visual stimuli.

    PubMed

    Villena-González, Mario; López, Vladimir; Rodríguez, Eugenio

    2016-05-15

    When attention is oriented toward inner thoughts, as spontaneously occurs during mind wandering, the processing of external information is attenuated. However, the potential effects of thought content on this sensory attenuation are still unknown. The present study aims to assess whether the representational format of thoughts, such as visual imagery or inner speech, might differentially affect the sensory processing of external stimuli. We recorded the brain activity of 20 participants (12 women) while they were exposed to a probe visual stimulus in three different conditions: executing a task on the visual probe (externally oriented attention), and two conditions involving inward-turned attention, i.e., generating inner speech and performing visual imagery. Event-related potential results showed that the P1 amplitude, related to the sensory response, was significantly attenuated during both tasks involving inward attention compared with the external task. When the two representational formats were compared, the visual imagery condition showed stronger attenuation of sensory processing than the inner speech condition. Alpha power in visual areas was measured as an index of cortical inhibition. Larger alpha amplitude was found when participants engaged in internal thought than during the external task, with visual imagery showing even more alpha power than the inner speech condition. Our results show, for the first time to our knowledge, that visual attentional processing of external stimuli during self-generated thoughts is differentially affected by the representational format of the ongoing train of thoughts. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Subcortical BOLD responses during visual sexual stimulation vary as a function of implicit porn associations in women

    PubMed Central

    de Jong, Peter J.; Georgiadis, Janniko R.

    2014-01-01

    Lifetime experiences shape people’s attitudes toward sexual stimuli. Visual sexual stimulation (VSS), for instance, may be perceived as pleasurable by some, but as disgusting or ambiguous by others. VSS depicting explicit penile–vaginal penetration (PEN) is relevant in this respect, because the act of penetration is a core sexual activity. In this study, 20 women without sexual complaints participated. We used functional magnetic resonance imaging and a single-target implicit association task to investigate how brain responses to PEN were modulated by the initial associations in memory (PEN-‘hot’ vs PEN-disgust) with such hardcore pornographic stimuli. Many brain areas responded to PEN in the same way they responded to disgust stimuli, and PEN-induced brain activity was prone to modulation by subjective disgust ratings toward PEN stimuli. The relative implicit PEN-disgust (relative to PEN-‘hot’) associations exclusively modulated PEN-induced brain responses: comparatively negative (PEN-disgust) implicit associations with pornography predicted the strongest PEN-related responses in the basal forebrain (including nucleus accumbens and bed nucleus of stria terminalis), midbrain and amygdala. Since these areas are often implicated in visual sexual processing, the present findings should be taken as a warning: apparently their involvement may also indicate a negative or ambivalent attitude toward sexual stimuli. PMID:23051899

  16. Subcortical BOLD responses during visual sexual stimulation vary as a function of implicit porn associations in women.

    PubMed

    Borg, Charmaine; de Jong, Peter J; Georgiadis, Janniko R

    2014-02-01

    Lifetime experiences shape people's attitudes toward sexual stimuli. Visual sexual stimulation (VSS), for instance, may be perceived as pleasurable by some, but as disgusting or ambiguous by others. VSS depicting explicit penile-vaginal penetration (PEN) is relevant in this respect, because the act of penetration is a core sexual activity. In this study, 20 women without sexual complaints participated. We used functional magnetic resonance imaging and a single-target implicit association task to investigate how brain responses to PEN were modulated by the initial associations in memory (PEN-'hot' vs PEN-disgust) with such hardcore pornographic stimuli. Many brain areas responded to PEN in the same way they responded to disgust stimuli, and PEN-induced brain activity was prone to modulation by subjective disgust ratings toward PEN stimuli. The relative implicit PEN-disgust (relative to PEN-'hot') associations exclusively modulated PEN-induced brain responses: comparatively negative (PEN-disgust) implicit associations with pornography predicted the strongest PEN-related responses in the basal forebrain (including nucleus accumbens and bed nucleus of stria terminalis), midbrain and amygdala. Since these areas are often implicated in visual sexual processing, the present findings should be taken as a warning: apparently their involvement may also indicate a negative or ambivalent attitude toward sexual stimuli.

  17. The Effects of Social Anxiety and State Anxiety on Visual Attention: Testing the Vigilance-Avoidance Hypothesis.

    PubMed

    Singh, J Suzanne; Capozzoli, Michelle C; Dodd, Michael D; Hope, Debra A

    2015-01-01

    A growing theoretical and research literature suggests that trait and state social anxiety can predict attentional patterns in the presence of emotional stimuli. The current study adds to this literature by examining the effects of state anxiety on visual attention and testing the vigilance-avoidance hypothesis, using a method of continuous visual attentional assessment. Participants were 91 undergraduate college students with high or low trait fear of negative evaluation (FNE), a core aspect of social anxiety, who were randomly assigned to either a high or low state anxiety condition. Participants engaged in a free-viewing task in which pairs of emotional facial stimuli were presented and eye movements were continuously monitored. Overall, participants with high FNE avoided angry stimuli and participants with high state anxiety attended to positive stimuli. Participants with high state anxiety and high FNE were avoidant of angry faces, whereas participants with low state anxiety and low FNE exhibited a bias toward angry faces. The study provided partial support for the vigilance-avoidance hypothesis. The findings add to the mixed results in the literature that suggest that both positive and negative emotional stimuli may be important in understanding the complex attention patterns associated with social anxiety. Clinical implications and suggestions for future research are discussed.

  18. Attentional bias for nondrug reward is magnified in addiction.

    PubMed

    Anderson, Brian A; Faulkner, Monica L; Rilee, Jessica J; Yantis, Steven; Marvel, Cherie L

    2013-12-01

    Attentional biases for drug-related stimuli play a prominent role in addiction, predicting treatment outcomes. Attentional biases also develop for stimuli that have been paired with nondrug rewards in adults without a history of addiction, the magnitude of which is predicted by visual working-memory capacity and impulsiveness. We tested the hypothesis that addiction is associated with an increased attentional bias for nondrug (monetary) reward relative to that of healthy controls, and that this bias is related to working-memory impairments and increased impulsiveness. Seventeen patients receiving methadone-maintenance treatment for opioid dependence and 17 healthy controls participated. Impulsiveness was measured using the Barratt Impulsiveness Scale (BIS-11; Patton, Stanford, & Barratt, 1995), visual working-memory capacity was measured as the ability to recognize briefly presented color stimuli, and attentional bias was measured as the magnitude of response time slowing caused by irrelevant but previously reward-associated distractors in a visual-search task. The results showed that attention was biased toward the distractors across all participants, replicating previous findings. It is important to note that this bias was significantly greater in the patients than in the controls and was negatively correlated with visual working-memory capacity. Patients were also significantly more impulsive than controls as a group. Our findings demonstrate that patients in treatment for addiction experience greater difficulty ignoring stimuli associated with nondrug reward. This nonspecific reward-related bias could mediate the distracting quality of drug-related stimuli previously observed in addiction.

  19. Attentional Bias for Non-drug Reward is Magnified in Addiction

    PubMed Central

    Anderson, Brian A.; Faulkner, Monica L.; Rilee, Jessica J.; Yantis, Steven; Marvel, Cherie L.

    2014-01-01

    Attentional biases for drug-related stimuli play a prominent role in addiction, predicting treatment outcome. Attentional biases also develop for stimuli that have been paired with non-drug reward in adults without a history of addiction, the magnitude of which is predicted by visual working memory capacity and impulsiveness. We tested the hypothesis that addiction is associated with an increased attentional bias for non-drug (monetary) reward relative to that of healthy controls, and that this bias is related to working memory impairments and increased impulsiveness. Seventeen patients receiving methadone maintenance treatment for opioid dependence and seventeen healthy controls participated. Impulsiveness was measured using the Barratt Impulsiveness Scale (BIS-11), visual working memory capacity was measured as the ability to recognize briefly presented color stimuli, and attentional bias was measured as the magnitude of response time slowing caused by irrelevant but previously reward-associated distractors in a visual search task. The results showed that attention was biased toward the distractors across all participants, replicating previous findings. Importantly, this bias was significantly greater in the patients than in the controls and was negatively correlated with visual working memory capacity. Patients were also significantly more impulsive than controls as a group. Our findings demonstrate that patients in treatment for addiction experience greater difficulty ignoring stimuli associated with non-drug reward. This non-specific reward-related bias could mediate the distracting quality of drug-related stimuli previously observed in addiction. PMID:24128148
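
    Both records above define attentional bias as response-time slowing from reward-associated distractors and relate it to working-memory capacity across participants. A sketch of that correlation analysis with synthetic numbers, not the study's data:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n = 34  # 17 patients + 17 controls, as in the study
    wm_capacity = rng.normal(2.5, 0.7, n)  # hypothetical working-memory K estimates

    # per-subject bias = mean RT(reward distractor present) - mean RT(absent), in ms;
    # simulated here so that bias shrinks as working-memory capacity grows
    bias_ms = 60 - 15 * (wm_capacity - 2.5) + rng.normal(0, 10, n)

    r, p = stats.pearsonr(wm_capacity, bias_ms)
    print(f"r = {r:.2f}, p = {p:.3f}")  # expected negative, as reported above
    ```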

  20. Learning temporal context shapes prestimulus alpha oscillations and improves visual discrimination performance.

    PubMed

    Toosi, Tahereh; K Tousi, Ehsan; Esteky, Hossein

    2017-08-01

    Time is an inseparable component of every physical event that we perceive, yet it is not clear how the brain processes time or how the neuronal representation of time affects our perception of events. Here we asked subjects to perform a visual discrimination task while we changed the temporal context in which the stimuli were presented. We collected electroencephalography (EEG) signals in two temporal contexts. In predictable blocks stimuli were presented after a constant delay relative to a visual cue, and in unpredictable blocks stimuli were presented after variable delays relative to the visual cue. Four subsecond delays of 83, 150, 400, and 800 ms were used in the predictable and unpredictable blocks. We observed that predictability modulated the power of prestimulus alpha oscillations in the parieto-occipital sites: alpha power increased in the 300-ms window before stimulus onset in the predictable blocks compared with the unpredictable blocks. This modulation only occurred in the longest delay period, 800 ms, in which predictability also improved the behavioral performance of the subjects. Moreover, learning the temporal context shaped the prestimulus alpha power: modulation of prestimulus alpha power grew during the predictable block and correlated with performance enhancement. These results suggest that the brain is able to learn the subsecond temporal context of stimuli and use this to enhance sensory processing. Furthermore, the neural correlate of this temporal prediction is reflected in the alpha oscillations. NEW & NOTEWORTHY It is not well understood how the uncertainty in the timing of an external event affects its processing, particularly at subsecond scales. Here we demonstrate how a predictable timing scheme improves visual processing. We found that learning the predictable scheme gradually shaped the prestimulus alpha power. These findings indicate that the human brain is able to extract implicit subsecond patterns in the temporal context of events. Copyright © 2017 the American Physiological Society.
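
    A minimal sketch, assuming a 500-Hz sampling rate and synthetic single-channel EEG, of how prestimulus alpha power (8-12 Hz) in the 300-ms window before stimulus onset can be computed with a bandpass filter plus Hilbert envelope; the authors' exact pipeline is not specified here:

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 500                                   # assumed sampling rate (Hz)
    t = np.arange(-0.8, 0.2, 1 / fs)           # epoch with stimulus onset at 0 s
    rng = np.random.default_rng(3)
    eeg = rng.normal(0, 1, t.size) + 2 * np.sin(2 * np.pi * 10 * t)  # 10-Hz alpha

    b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
    alpha = filtfilt(b, a, eeg)                # alpha-band signal
    power = np.abs(hilbert(alpha)) ** 2        # instantaneous power envelope

    window = (t >= -0.3) & (t < 0.0)           # 300 ms before stimulus onset
    print(f"prestimulus alpha power: {power[window].mean():.2f}")
    ```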

  1. Representation of visual symbols in the visual word processing network.

    PubMed

    Muayqil, Taim; Davies-Thompson, Jodie; Barton, Jason J S

    2015-03-01

    Previous studies have shown that word processing involves a predominantly left-sided occipitotemporal network. Words are a form of symbolic representation, in that they are arbitrary perceptual stimuli that represent other objects, actions or concepts. Lesions of parts of the visual word processing network can cause alexia, which can be associated with difficulty processing other types of symbols such as musical notation or road signs. We investigated whether components of the visual word processing network were also activated by other types of symbols. In 16 music-literate subjects, we defined the visual word network using fMRI and examined responses to four symbolic categories: visual words, musical notation, instructive symbols (e.g. traffic signs), and flags and logos. For each category we compared responses not only to scrambled stimuli, but also to similar stimuli that lacked symbolic meaning. The left visual word form area and a homologous right fusiform region responded similarly to all four categories, but equally to both symbolic and non-symbolic equivalents. Greater response to symbolic than non-symbolic stimuli occurred only in the left inferior frontal and middle temporal gyri, but only for words, and in the case of the left inferior frontal gyrus, also for musical notation. A whole-brain analysis comparing symbolic versus non-symbolic stimuli revealed a distributed network of inferior temporo-occipital and parietal regions that differed for different symbols. The fusiform gyri are involved in processing the form of many symbolic stimuli, but not specifically for stimuli with symbolic content. Selectivity for stimuli with symbolic content only emerges in the visual word network at the level of the middle temporal and inferior frontal gyri, but is specific for words and musical notation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Earlier Visual N1 Latencies in Expert Video-Game Players: A Temporal Basis of Enhanced Visuospatial Performance?

    PubMed Central

    Latham, Andrew J.; Patston, Lucy L. M.; Westermann, Christine; Kirk, Ian J.; Tippett, Lynette J.

    2013-01-01

    Increasing behavioural evidence suggests that expert video game players (VGPs) show enhanced visual attention and visuospatial abilities, but what underlies these enhancements remains unclear. We administered the Poffenberger paradigm with concurrent electroencephalogram (EEG) recording to assess occipital N1 latencies and interhemispheric transfer time (IHTT) in expert VGPs. Participants comprised 15 right-handed male expert VGPs and 16 non-VGP controls matched for age, handedness, IQ and years of education. Expert VGPs began playing before age 10, had a minimum of 8 years' experience, and maintained playtime of at least 20 hours per week over the last 6 months. Non-VGPs had little-to-no game play experience (maximum 1.5 years). Participants responded to checkerboard stimuli presented to the left and right visual fields while 128-channel EEG was recorded. Expert VGPs responded significantly more quickly than non-VGPs. Expert VGPs also had significantly earlier occipital N1s in direct visual pathways (the hemisphere contralateral to the visual field in which the stimulus was presented). IHTT was calculated by comparing the latencies of occipital N1 components between hemispheres. No significant between-group differences in electrophysiological estimates of IHTT were found. Shorter N1 latencies may enable expert VGPs to discriminate attended visual stimuli significantly earlier than non-VGPs and contribute to faster responding in visual tasks. As successful video-game play requires precise, time-pressured, bimanual motor movements in response to complex visual stimuli, which in this sample began during early childhood, these differences may reflect the experience and training involved during the development of video-game expertise, but training studies are needed to test this prediction. PMID:24058667

  3. Earlier visual N1 latencies in expert video-game players: a temporal basis of enhanced visuospatial performance?

    PubMed

    Latham, Andrew J; Patston, Lucy L M; Westermann, Christine; Kirk, Ian J; Tippett, Lynette J

    2013-01-01

    Increasing behavioural evidence suggests that expert video game players (VGPs) show enhanced visual attention and visuospatial abilities, but what underlies these enhancements remains unclear. We administered the Poffenberger paradigm with concurrent electroencephalogram (EEG) recording to assess occipital N1 latencies and interhemispheric transfer time (IHTT) in expert VGPs. Participants comprised 15 right-handed male expert VGPs and 16 non-VGP controls matched for age, handedness, IQ and years of education. Expert VGPs began playing before age 10, had a minimum of 8 years' experience, and maintained playtime of at least 20 hours per week over the last 6 months. Non-VGPs had little-to-no game play experience (maximum 1.5 years). Participants responded to checkerboard stimuli presented to the left and right visual fields while 128-channel EEG was recorded. Expert VGPs responded significantly more quickly than non-VGPs. Expert VGPs also had significantly earlier occipital N1s in direct visual pathways (the hemisphere contralateral to the visual field in which the stimulus was presented). IHTT was calculated by comparing the latencies of occipital N1 components between hemispheres. No significant between-group differences in electrophysiological estimates of IHTT were found. Shorter N1 latencies may enable expert VGPs to discriminate attended visual stimuli significantly earlier than non-VGPs and contribute to faster responding in visual tasks. As successful video-game play requires precise, time-pressured, bimanual motor movements in response to complex visual stimuli, which in this sample began during early childhood, these differences may reflect the experience and training involved during the development of video-game expertise, but training studies are needed to test this prediction.
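
    The IHTT computation described in both records above reduces to finding the occipital N1 latency in each hemisphere and taking the difference between the indirect (ipsilateral) and direct (contralateral) pathways. A sketch with synthetic ERPs; the sampling rate and the N1 search window are assumptions:

    ```python
    import numpy as np

    fs = 1000                                  # assumed sampling rate (Hz)
    t = np.arange(0, 0.400, 1 / fs)            # 0-400 ms post-stimulus
    rng = np.random.default_rng(4)

    def synthetic_erp(n1_latency_s):
        # negative N1-like deflection centred at the given latency, plus noise
        wave = -np.exp(-((t - n1_latency_s) ** 2) / (2 * 0.015 ** 2))
        return wave + rng.normal(0, 0.05, t.size)

    contra = synthetic_erp(0.150)              # direct pathway, earlier N1
    ipsi = synthetic_erp(0.165)                # indirect pathway, later N1

    win = (t >= 0.100) & (t <= 0.250)          # assumed search window for N1
    lat = lambda erp: t[win][np.argmin(erp[win])]  # latency of the minimum
    ihtt_ms = (lat(ipsi) - lat(contra)) * 1000
    print(f"estimated IHTT: {ihtt_ms:.0f} ms")
    ```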

  4. Suppressed visual looming stimuli are not integrated with auditory looming signals: Evidence from continuous flash suppression.

    PubMed

    Moors, Pieter; Huygelier, Hanne; Wagemans, Johan; de-Wit, Lee; van Ee, Raymond

    2015-01-01

    Previous studies using binocular rivalry have shown that signals in a modality other than the visual can bias dominance durations depending on their congruency with the rivaling stimuli. More recently, studies using continuous flash suppression (CFS) have reported that multisensory integration influences how long visual stimuli remain suppressed. In this study, using CFS, we examined whether the contrast thresholds for detecting visual looming stimuli are influenced by a congruent auditory stimulus. In Experiment 1, we show that a looming visual stimulus can result in lower detection thresholds compared to a static concentric grating, but that auditory tone pips congruent with the looming stimulus did not lower suppression thresholds any further. In Experiments 2, 3, and 4, we again observed no advantage for congruent multisensory stimuli. These results add to our understanding of the conditions under which multisensory integration is possible, and suggest that certain forms of multisensory integration are not evident when the visual stimulus is suppressed from awareness using CFS.

  5. Is Visually Guided Reaching in Early Infancy a Myth?

    ERIC Educational Resources Information Center

    Clifton, Rachel K.; And Others

    1993-01-01

    Seven infants were tested between the ages of 6 and 25 weeks to see how they would grasp objects presented in full light and glowing or sounding objects presented in total darkness. In all three conditions, the infants first grasped the objects at nearly the same time, suggesting that internal stimuli, not visual guidance, directed their actions.…

  6. Conscious control over the content of unconscious cognition.

    PubMed

    Kunde, Wilfried; Kiesel, Andrea; Hoffmann, Joachim

    2003-06-01

    Visual stimuli (primes) presented too briefly to be consciously identified can nevertheless affect responses to subsequent stimuli, an instance of unconscious cognition. There is a lively debate as to whether such priming effects originate from unconscious semantic processing of the primes or from reactivation of learned motor responses that conscious stimuli afford during preceding practice. In four experiments we demonstrate that unconscious stimuli owe their impact neither to automatic semantic categorization nor to memory traces of preceding stimulus-response episodes, but to their match with pre-specified cognitive action-trigger conditions. The intentional creation of such triggers allows actors to control the way unconscious stimuli bias their behaviour.

  7. Non-target adjacent stimuli classification improves performance of classical ERP-based brain computer interface

    NASA Astrophysics Data System (ADS)

    Ceballos, G. A.; Hernández, L. F.

    2015-04-01

    Objective. The classical ERP-based speller, or P300 Speller, is one of the most commonly used paradigms in the field of Brain Computer Interfaces (BCI). Several alterations to the visual stimulus presentation system have been developed to avoid unfavorable effects elicited by adjacent stimuli. However, little, if any, attention has been paid to the useful information about the spatial location of target symbols that is contained in responses to adjacent stimuli. This paper aims to demonstrate that combining the classification of non-target adjacent stimuli with standard classification (target versus non-target) significantly improves classical ERP-based speller efficiency. Approach. Four SWLDA classifiers were trained and combined with the standard classifier: the lower row, upper row, right column and left column classifiers. This new feature extraction procedure and the classification method were carried out on three open databases: the UAM P300 database (Universidad Autonoma Metropolitana, Mexico), BCI competition II (dataset IIb) and BCI competition III (dataset II). Main results. The inclusion of the classification of non-target adjacent stimuli improves target classification in the classical row/column paradigm. A gain of 9.6% in mean single-trial classification and an overall improvement of 25% in simulated spelling speed were achieved. Significance. We have provided further evidence that the ERPs produced by adjacent stimuli present discriminable features, which could provide additional information about the spatial location of intended symbols. This work motivates searching peripheral stimulation responses for information that can improve the performance of emerging visual ERP-based spellers.
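
    The fusion idea described above can be sketched as summing the scores of adjacent-stimulus classifiers with the standard target classifier before accumulating evidence over flashes. The snippet below is illustrative only: plain LDA from scikit-learn stands in for the paper's SWLDA, only one of the four adjacency models is shown, and the features and labels are synthetic:

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

    rng = np.random.default_rng(5)
    n_epochs, n_features = 600, 20
    X = rng.normal(0, 1, (n_epochs, n_features))  # hypothetical ERP features
    y_target = rng.integers(0, 2, n_epochs)       # 1 = flash contained the target
    y_upper = rng.integers(0, 2, n_epochs)        # 1 = flash was the row above it
    X[y_target == 1, 0] += 1.0                    # inject separable structure
    X[y_upper == 1, 1] += 0.5

    standard = LDA().fit(X, y_target)             # target vs non-target model
    upper_row = LDA().fit(X, y_upper)             # one of four adjacency models

    # fusion step: add adjacent-stimulus evidence to the standard score
    fused_score = standard.decision_function(X) + upper_row.decision_function(X)
    ```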

  8. Differential coactivation in a redundant signals task with weak and strong go/no-go stimuli.

    PubMed

    Minakata, Katsumi; Gondan, Matthias

    2018-05-01

    When participants respond to stimuli from two sources, response times (RTs) are often faster when both stimuli are presented together than when they are presented separately (the redundant signals effect [RSE]). Race models and coactivation models can explain the RSE. In race models, separate channels process the two stimulus components, and the faster processing time determines the overall RT. In audiovisual experiments, the RSE is often higher than predicted by race models, and coactivation models have been proposed that assume integrated processing of the two stimuli. Where does coactivation occur? We implemented a go/no-go task with randomly intermixed weak and strong auditory, visual, and audiovisual stimuli. In one experimental session, participants had to respond to strong stimuli and withhold their response to weak stimuli. In the other session, these roles were reversed. Interestingly, coactivation was only observed in the experimental session in which participants had to respond to strong stimuli. If weak stimuli served as targets, results were widely consistent with the race model prediction. This pattern of results contradicts the inverse effectiveness law. We present two models that explain the result in terms of absolute and relative thresholds.

  9. Right Visual Field Advantage for Perceived Contrast: Correlation with an Auditory Bias and Handedness

    ERIC Educational Resources Information Center

    Railo, H.; Tallus, J.; Hamalainen, H.

    2011-01-01

    Studies have suggested that supramodal attentional resources are biased rightward due to asymmetric spatial fields of the two hemispheres. This bias has been observed especially in right-handed subjects. We presented left and right-handed subjects with brief uniform grey visual stimuli in either the left or right visual hemifield. Consistent with…

  10. Compatibility of motion facilitates visuomotor synchronization.

    PubMed

    Hove, Michael J; Spivey, Michael J; Krumhansl, Carol L

    2010-12-01

    Prior research indicates that synchronized tapping performance is very poor with flashing visual stimuli compared with auditory stimuli. Three finger-tapping experiments compared flashing visual metronomes with visual metronomes containing a spatial component, either compatible, incompatible, or orthogonal to the tapping action. In Experiment 1, synchronization success rates increased dramatically for spatiotemporal sequences of both geometric and biological forms over flashing sequences. In Experiment 2, synchronization performance was best when target sequences and movements were directionally compatible (i.e., simultaneously down), followed by orthogonal stimuli, and was poorest for incompatible moving stimuli and flashing stimuli. In Experiment 3, synchronization performance was best with auditory sequences, followed by compatible moving stimuli, and was worst for flashing and fading stimuli. Results indicate that visuomotor synchronization improves dramatically with compatible spatial information. However, an auditory advantage in sensorimotor synchronization persists.

  11. Scopolamine effects on visual discrimination: modifications related to stimulus control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, H.L.

    1975-01-01

    Stumptail monkeys (Macaca arctoides) performed a discrete-trial, three-choice visual discrimination. The discrimination behavior was controlled by the shape of the visual stimuli. The strength of the stimuli in controlling behavior was systematically related to a physical property of the stimuli, luminance. Low luminance provided weak control, resulting in a low accuracy of discrimination, a low response probability and maximal sensitivity to scopolamine (7.5-60 μg/kg). In contrast, high luminance provided strong control of behavior and attenuated the effects of scopolamine. Methylscopolamine had no effect in doses of 30 to 90 μg/kg. Scopolamine effects resembled the effects of reducing stimulus control in undrugged monkeys. Since behavior under weak control seems to be especially sensitive to drugs, manipulations of stimulus control may be particularly useful whenever determination of the minimally effective dose is important, as in behavioral toxicology. The present results are interpreted as specific visual effects of the drug, since nonsensory factors such as baseline response rate, reinforcement schedule, training history, motor performance and motivation were controlled. Implications for state-dependent effects of drugs are discussed.

  12. Multisensory integration and the concert experience: An overview of how visual stimuli can affect what we hear

    NASA Astrophysics Data System (ADS)

    Hyde, Jerald R.

    2004-05-01

    It is clear to those who "listen" to concert halls and evaluate their degree of acoustical success that it is quite difficult to separate the acoustical response at a given seat from the multi-modal perception of the whole event. Objective concert hall data have been collected for the purpose of finding a link with their related subjective evaluation and ultimately with the architectural correlates which produce the sound field. This exercise, while important, tends to miss the point that a concert or opera event utilizes all the senses, of which the sound field and visual stimuli are both major contributors to the experience. Objective acoustical factors point to visual input as being significant in the perception of "acoustical intimacy" and in the perception of loudness versus distance in large halls. This paper will review the evidence of visual input as a factor in what we "hear" and introduce concepts of perceptual constancy, distance perception, static and dynamic visual stimuli, and the general process of the psychology of the integrated experience. A survey of acousticians on their opinions about the auditory-visual aspects of the concert hall experience will be presented. [Work supported in part from the Veneklasen Research Foundation and Veneklasen Associates.]

  13. A Gaze Independent Brain-Computer Interface Based on Visual Stimulation through Closed Eyelids

    NASA Astrophysics Data System (ADS)

    Hwang, Han-Jeong; Ferreria, Valeria Y.; Ulrich, Daniel; Kilic, Tayfun; Chatziliadis, Xenofon; Blankertz, Benjamin; Treder, Matthias

    2015-10-01

    A classical brain-computer interface (BCI) based on visual event-related potentials (ERPs) is of limited application value for paralyzed patients with severe oculomotor impairments. In this study, we introduce a novel gaze-independent BCI paradigm that can potentially be used for such end-users because visual stimuli are administered on closed eyelids. The paradigm involved verbally presented questions with 3 possible answers. Online BCI experiments were conducted with twelve healthy subjects, who selected one option by attending to one of three different visual stimuli. It was confirmed that typical cognitive ERPs can be reliably modulated by attention to a target stimulus in an eyes-closed and gaze-independent condition, and further classified with high accuracy during online operation (74.58% ± 17.85 s.d.; chance level 33.33%), demonstrating the effectiveness of the proposed novel visual ERP paradigm. Also, stimulus-specific eye movements observed during stimulation were verified as reflex responses to light stimuli, and they did not contribute to classification. To the best of our knowledge, this study is the first to show the possibility of using a gaze-independent visual ERP paradigm in an eyes-closed condition, thereby providing another communication option for severely locked-in patients suffering from complex ocular dysfunctions.

  14. Phonological-orthographic consistency for Japanese words and its impact on visual and auditory word recognition.

    PubMed

    Hino, Yasushi; Kusunose, Yuu; Miyamura, Shinobu; Lupker, Stephen J

    2017-01-01

    In most models of word processing, the degrees of consistency in the mappings between orthographic, phonological, and semantic representations are hypothesized to affect reading time. Following Hino, Miyamura, and Lupker's (2011) examination of the orthographic-phonological (O-P) and orthographic-semantic (O-S) consistency for 1,114 Japanese words (339 katakana and 775 kanji words), in the present research, we initially attempted to measure the phonological-orthographic (P-O) consistency for those same words. In contrast to the O-P and O-S consistencies, which were equivalent for kanji and katakana words, the P-O relationships were much more inconsistent for the kanji words than for the katakana words. The impact of kanji words' P-O consistency was then examined in both visual and auditory word recognition tasks. Although there was no effect of P-O consistency in the standard visual lexical-decision task, significant effects were detected in a lexical-decision task with auditory stimuli, in a perceptual identification task using masked visual stimuli, and in a lexical-decision task with degraded visual stimuli. The implications of these results are discussed in terms of the impact of P-O consistency in auditory and visual word recognition. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  15. Rett syndrome: basic features of visual processing-a pilot study of eye-tracking.

    PubMed

    Djukic, Aleksandra; Valicenti McDermott, Maria; Mavrommatis, Kathleen; Martins, Cristina L

    2012-07-01

    The consistently observed "strong eye gaze" has not been validated as a means of communication in girls with Rett syndrome, who are ubiquitously affected by apraxia and unable to reply either verbally or manually to questions during formal psychological assessment. We examined nonverbal cognitive abilities and basic features of visual processing (visual discrimination, attention/memory) by analyzing patterns of visual fixation in 44 girls with Rett syndrome, compared with typical control subjects. To determine features of visual fixation patterns, multiple pictures (with the location of the salient stimulus and the presence/absence of novel stimuli as variables) were presented on the screen of a TS120 eye-tracker. Of the 44, 35 (80%) calibrated and exhibited meaningful patterns of visual fixation. They looked longer at salient stimuli (cartoon, 2.8 ± 2 seconds S.D., vs shape, 0.9 ± 1.2 seconds S.D.; P = 0.02), regardless of their position on the screen. They recognized novel stimuli, decreasing the fixation time on the central image when another image appeared on the periphery of the slide (2.7 ± 1 seconds S.D. vs 1.8 ± 1 seconds S.D., P = 0.002). Eye-tracking provides a feasible method for cognitive assessment and new insights into the "hidden" abilities of individuals with Rett syndrome. Copyright © 2012 Elsevier Inc. All rights reserved.

  16. Modality-specific effects on crosstalk in task switching: evidence from modality compatibility using bimodal stimulation.

    PubMed

    Stephan, Denise Nadine; Koch, Iring

    2016-11-01

    The present study was aimed at examining modality-specific influences in task switching. To this end, participants switched either between modality compatible tasks (auditory-vocal and visual-manual) or incompatible spatial discrimination tasks (auditory-manual and visual-vocal). In addition, auditory and visual stimuli were presented simultaneously (i.e., bimodally) in each trial, so that selective attention was required to process the task-relevant stimulus. The inclusion of bimodal stimuli enabled us to assess congruence effects as a converging measure of increased between-task interference. The tasks followed a pre-instructed sequence of double alternations (AABB), so that no explicit task cues were required. The results show that switching between two modality incompatible tasks increases both switch costs and congruence effects compared to switching between two modality compatible tasks. The finding of increased congruence effects in modality incompatible tasks supports our explanation in terms of ideomotor "backward" linkages between anticipated response effects and the stimuli that called for this response in the first place. According to this generalized ideomotor idea, the modality match between response effects and stimuli would prime selection of a response in the compatible modality. This priming would cause increased difficulties to ignore the competing stimulus and hence increases the congruence effect. Moreover, performance would be hindered when switching between modality incompatible tasks and facilitated when switching between modality compatible tasks.

  17. Visual processing affects the neural basis of auditory discrimination.

    PubMed

    Kislyuk, Daniel S; Möttönen, Riikka; Sams, Mikko

    2008-12-01

    The interaction between auditory and visual speech streams is a seamless and surprisingly effective process. An intriguing example is the "McGurk effect": The acoustic syllable /ba/ presented simultaneously with a mouth articulating /ga/ is typically heard as /da/ [McGurk, H., & MacDonald, J. Hearing lips and seeing voices. Nature, 264, 746-748, 1976]. Previous studies have demonstrated the interaction of auditory and visual streams at the auditory cortex level, but the importance of these interactions for the qualitative perception change remained unclear because the change could result from interactions at higher processing levels as well. In our electroencephalogram experiment, we combined the McGurk effect with mismatch negativity (MMN), a response that is elicited in the auditory cortex at a latency of 100-250 msec by any above-threshold change in a sequence of repetitive sounds. An "odd-ball" sequence of acoustic stimuli consisting of frequent /va/ syllables (standards) and infrequent /ba/ syllables (deviants) was presented to 11 participants. Deviant stimuli in the unisensory acoustic stimulus sequence elicited a typical MMN, reflecting discrimination of acoustic features in the auditory cortex. When the acoustic stimuli were dubbed onto a video of a mouth constantly articulating /va/, the deviant acoustic /ba/ was heard as /va/ due to the McGurk effect and was indistinguishable from the standards. Importantly, such deviants did not elicit MMN, indicating that the auditory cortex failed to discriminate between the acoustic stimuli. Our findings show that visual stream can qualitatively change the auditory percept at the auditory cortex level, profoundly influencing the auditory cortex mechanisms underlying early sound discrimination.

  18. Selective attention reduces physiological noise in the external ear canals of humans. II: Visual attention

    PubMed Central

    Walsh, Kyle P.; Pasanen, Edward G.; McFadden, Dennis

    2014-01-01

    Human subjects performed in several behavioral conditions requiring, or not requiring, selective attention to visual stimuli. Specifically, the attentional task was to recognize strings of digits that had been presented visually. A nonlinear version of the stimulus-frequency otoacoustic emission (SFOAE), called the nSFOAE, was collected during the visual presentation of the digits. The segment of the physiological response discussed here occurred during brief silent periods immediately following the SFOAE-evoking stimuli. For all subjects tested, the physiological-noise magnitudes were substantially weaker (less noisy) during the tasks requiring the most visual attention. Effect sizes for the differences were >2.0. Our interpretation is that cortico-olivo influences adjusted the magnitude of efferent activation during the SFOAE-evoking stimulation depending upon the attention task in effect, and then that magnitude of efferent activation persisted throughout the silent period where it also modulated the physiological noise present. Because the results were highly similar to those obtained when the behavioral conditions involved auditory attention, similar mechanisms appear to operate both across modalities and within modalities. Supplementary measurements revealed that the efferent activation was spectrally global, as it was for auditory attention. PMID:24732070

  19. The Influence of Stimulus Material on Attention and Performance in the Visual Expectation Paradigm: A Longitudinal Study with 3- And 6-Month-Old Infants

    ERIC Educational Resources Information Center

    Teubert, Manuel; Lohaus, Arnold; Fassbender, Ina; Vierhaus, Marc; Spangler, Sibylle; Borchert, Sonja; Freitag, Claudia; Goertz, Claudia; Graf, Frauke; Gudi, Helene; Kolling, Thorsten; Lamm, Bettina; Keller, Heidi; Knopf, Monika; Schwarzer, Gudrun

    2012-01-01

    This longitudinal study examined the influence of stimulus material on attention and expectation learning in the visual expectation paradigm. Female faces were used as attention-attracting stimuli, and non-meaningful visual stimuli of comparable complexity (Greebles) were used as low attention-attracting stimuli. Expectation learning performance…

  20. Effects of perceptual load and socially meaningful stimuli on crossmodal selective attention in Autism Spectrum Disorder and neurotypical samples.

    PubMed

    Tyndall, Ian; Ragless, Liam; O'Hora, Denis

    2018-04-01

    The present study examined whether increasing visual perceptual load differentially affected awareness of Socially Meaningful and Non-socially Meaningful auditory stimuli in neurotypical (NT, n = 59) adults and Autism Spectrum Disorder (ASD, n = 57) adults. On a target trial, an unexpected critical auditory stimulus (CAS), either a Non-socially Meaningful ('beep' sound) or Socially Meaningful ('hi') stimulus, was played concurrently with the presentation of the visual task. Under conditions of low visual perceptual load, both NT and ASD samples reliably noticed the CAS at similar rates (77-81%), whether the CAS was Socially Meaningful or Non-socially Meaningful. However, under high visual perceptual load, NT and ASD participants reliably noticed the meaningful CAS (NT = 71%, ASD = 67%), but NT participants were unlikely to notice the Non-meaningful CAS (20%), whereas ASD participants reliably noticed it (80%), suggesting an inability in ASD to engage selective attention to ignore non-salient, irrelevant distractor stimuli. Copyright © 2018 Elsevier Inc. All rights reserved.

  1. Sensitivity and integration in a visual pathway for circadian entrainment in the hamster (Mesocricetus auratus).

    PubMed Central

    Nelson, D E; Takahashi, J S

    1991-01-01

    1. Light-induced phase shifts of the circadian rhythm of wheel-running activity were used to measure the photic sensitivity of a circadian pacemaker and the visual pathway that conveys light information to it in the golden hamster (Mesocricetus auratus). The sensitivity to stimulus irradiance and duration was assessed by measuring the magnitude of phase-shift responses to photic stimuli of different irradiance and duration. The visual sensitivity was also measured at three different phases of the circadian rhythm. 2. The stimulus-response curves measured at different circadian phases suggest that the maximum phase-shift is the only aspect of visual responsivity to change as a function of the circadian day. The half-saturation constants (sigma) for the stimulus-response curves are not significantly different over the three circadian phases tested. The photic sensitivity to irradiance (1/sigma) appears to remain constant over the circadian day. 3. The hamster circadian pacemaker and the photoreceptive system that subserves it are more sensitive to the irradiance of longer-duration stimuli than to irradiance of briefer stimuli. The system is maximally sensitive to the irradiance of stimuli of 300 s and longer in duration. A quantitative model is presented to explain the changes that occur in the stimulus-response curves as a function of photic stimulus duration. 4. The threshold for photic stimulation of the hamster circadian pacemaker is also quite high. The threshold irradiance (the minimum irradiance necessary to induce statistically significant responses) is approximately 10^11 photons cm^-2 s^-1 for optimal stimulus durations. This threshold is equivalent to a luminance at the cornea of 0.1 cd m^-2. 5. We also measured the sensitivity of this visual pathway to the total number of photons in a stimulus. This system is maximally sensitive to photons in stimuli between 30 and 3600 s in duration. The maximum quantum efficiency of photic integration occurs in 300 s stimuli. 6. These results suggest that the visual pathways that convey light information to the mammalian circadian pacemaker possess several unique characteristics. These pathways are relatively insensitive to light irradiance and also integrate light inputs over relatively long durations. This visual system, therefore, possesses an optimal sensitivity of 'tuning' to total photons delivered in stimuli of several minutes in duration. Together these characteristics may make this visual system unresponsive to environmental 'noise' that would interfere with the entrainment of circadian rhythms to light-dark cycles. PMID:1895235
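
    The half-saturation constants (sigma) above are estimated from stimulus-response curves. The paper's quantitative model is not reproduced here; a common saturating form for such curves, R(I) = Rmax * I / (I + sigma), can be fitted as follows with synthetic phase-shift data:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def saturating(I, Rmax, sigma):
        # hyperbolic saturation: half of Rmax is reached at I = sigma
        return Rmax * I / (I + sigma)

    irradiance = np.logspace(10, 14, 9)        # photons cm^-2 s^-1 (synthetic)
    rng = np.random.default_rng(6)
    phase_shift = saturating(irradiance, 90.0, 1e12) + rng.normal(0, 4, 9)

    (popt, _) = curve_fit(saturating, irradiance, phase_shift, p0=[80.0, 1e12])
    print(f"Rmax = {popt[0]:.1f}, sigma = {popt[1]:.2e} photons cm^-2 s^-1")
    ```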

  2. Toward a hybrid brain-computer interface based on repetitive visual stimuli with missing events.

    PubMed

    Wu, Yingying; Li, Man; Wang, Jing

    2016-07-26

    Steady-state visually evoked potentials (SSVEPs) can be elicited by repetitive stimuli and extracted in the frequency domain with satisfactory performance. However, the temporal information of such stimuli is often ignored. In this study, we utilized repetitive visual stimuli with missing events to present a novel hybrid BCI paradigm based on SSVEP and the omitted stimulus potential (OSP). Four discs flickering from black to white with missing flickers served as visual stimulators to simultaneously elicit subjects' SSVEPs and OSPs. Key parameters in the new paradigm, including flicker frequency, optimal electrodes, missing flicker duration and intervals of missing events, were qualitatively discussed with offline data. Two omitted flicker patterns, missing black disc and missing white disc, were proposed and compared. Averaging times were optimized with the Information Transfer Rate (ITR) in online experiments, where SSVEPs and OSPs were identified using Canonical Correlation Analysis in the frequency domain and Support Vector Machine (SVM)-Bayes fusion in the time domain, respectively. The online accuracy and ITR (mean ± standard deviation) over nine healthy subjects were 79.29 ± 18.14% and 19.45 ± 11.99 bits/min with the missing-black-disc pattern, and 86.82 ± 12.91% and 24.06 ± 10.95 bits/min with the missing-white-disc pattern, respectively. The proposed BCI paradigm demonstrated, for the first time, that SSVEPs and OSPs can be simultaneously elicited by a single visual stimulus pattern and recognized in real time with satisfactory performance. Besides frequency features such as the SSVEP elicited by repetitive stimuli, we found a new time-domain feature (OSP) with which to design a novel hybrid BCI paradigm by adding missing events to repetitive stimuli.
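
    Canonical Correlation Analysis for SSVEP detection, as named above, is typically implemented by correlating the multichannel EEG with sinusoidal reference signals at each candidate frequency and picking the frequency with the highest canonical correlation. A sketch with synthetic data; the SVM-Bayes fusion stage for OSPs is not reproduced:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA

    fs, dur = 250, 2.0                             # assumed rate (Hz), window (s)
    t = np.arange(0, dur, 1 / fs)
    rng = np.random.default_rng(7)
    # synthetic 8-channel EEG containing a 10-Hz SSVEP on every channel
    eeg = rng.normal(0, 1, (t.size, 8)) + np.outer(np.sin(2*np.pi*10*t), np.ones(8))

    def cca_corr(eeg, f):
        # reference set: sine/cosine at f and its second harmonic
        ref = np.column_stack([np.sin(2*np.pi*f*t), np.cos(2*np.pi*f*t),
                               np.sin(4*np.pi*f*t), np.cos(4*np.pi*f*t)])
        u, v = CCA(n_components=1).fit_transform(eeg, ref)
        return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

    candidates = [8, 10, 12, 15]                   # hypothetical flicker rates
    detected = max(candidates, key=lambda f: cca_corr(eeg, f))
    print(f"detected frequency: {detected} Hz")
    ```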

  3. Visual stimuli induced by self-motion and object-motion modify odour-guided flight of male moths (Manduca sexta L.).

    PubMed

    Verspui, Remko; Gray, John R

    2009-10-01

    Animals rely on multimodal sensory integration for proper orientation within their environment. For example, odour-guided behaviours often require appropriate integration of concurrent visual cues. To gain a further understanding of mechanisms underlying sensory integration in odour-guided behaviour, our study examined the effects of visual stimuli induced by self-motion and object-motion on odour-guided flight in male M. sexta. By placing stationary objects (pillars) on either side of a female pheromone plume, moths produced self-induced visual motion during odour-guided flight. These flights showed a reduction in both ground and flight speeds and inter-turn interval when compared with flight tracks without stationary objects. Presentation of an approaching 20 cm disc, to simulate object-motion, resulted in interrupted odour-guided flight and changes in flight direction away from the pheromone source. Modifications of odour-guided flight behaviour in the presence of stationary objects suggest that visual information, in conjunction with olfactory cues, can be used to control the rate of counter-turning. We suggest that the behavioural responses to visual stimuli induced by object-motion indicate the presence of a neural circuit that relays visual information to initiate escape responses. These behavioural responses also suggest the presence of a sensory conflict requiring a trade-off between olfactory and visually driven behaviours. The mechanisms underlying olfactory and visual integration are discussed in the context of these behavioural responses.

  4. The retention and disruption of color information in human short-term visual memory.

    PubMed

    Nemes, Vanda A; Parry, Neil R A; Whitaker, David; McKeefry, Declan J

    2012-01-27

    Previous studies have demonstrated that the retention of information in short-term visual perceptual memory can be disrupted by the presentation of masking stimuli during interstimulus intervals (ISIs) in delayed discrimination tasks (S. Magnussen & W. W. Greenlee, 1999). We have exploited this effect in order to determine to what extent short-term perceptual memory is selective for stimulus color. We employed a delayed hue discrimination paradigm to measure the fidelity with which color information was retained in short-term memory. The task required 5 color-normal observers to discriminate between spatially non-overlapping colored reference and test stimuli that were temporally separated by an ISI of 5 s. The points of subjective equality (PSEs) on the resultant psychometric matching functions provided an index of performance. Measurements were made in the presence and absence of mask stimuli presented during the ISI, which varied in hue around the equiluminant plane in DKL color space. For all reference stimuli, we found a consistent mask-induced, hue-dependent shift in PSE compared to the "no mask" conditions. These shifts were found to be tuned in color space, only occurring for a range of mask hues that fell within bandwidths of 29-37 deg. Outside this range, masking stimuli had little or no effect on measured PSEs. The results demonstrate that memory masking for color exhibits selectivity similar to that which has already been demonstrated for other visual attributes. The relatively narrow tuning of these interference effects suggests that short-term perceptual memory for color is based on higher order, non-linear color coding. © ARVO
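
    The points of subjective equality used above are extracted by fitting a psychometric function to matching responses and taking its 50% point. A sketch with simulated data, using a logistic function in place of whatever form the authors fitted:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, pse, slope):
        # proportion of "test appears redder" responses as a function of offset
        return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

    hue_offset = np.linspace(-6, 6, 9)             # test hue re: reference (deg)
    rng = np.random.default_rng(8)
    p_true = logistic(hue_offset, 1.5, 1.2)        # true PSE shifted by a "mask"
    p_resp = rng.binomial(40, p_true) / 40         # simulated response proportions

    (pse, slope), _ = curve_fit(logistic, hue_offset, p_resp, p0=[0.0, 1.0])
    print(f"PSE = {pse:.2f} deg")                  # the shift indexes memory masking
    ```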

  5. Brain activation in response to randomized visual stimulation as obtained from conjunction and differential analysis: an fMRI study

    NASA Astrophysics Data System (ADS)

    Nasaruddin, N. H.; Yusoff, A. N.; Kaur, S.

    2014-11-01

    The objective of this multiple-subject functional magnetic resonance imaging (fMRI) study was to identify the common brain areas activated when viewing black-and-white checkerboard stimuli of various shapes, patterns and sizes, and to investigate the specific brain areas involved in processing static and moving visual stimuli. Sixteen participants viewed moving (expanding ring, rotating wedge, flipping hourglass/bowtie and arc quadrant) and static (full checkerboard) stimuli during an fMRI scan. All stimuli had a black-and-white checkerboard pattern. Statistical parametric mapping (SPM) was used to generate brain activation maps. Differential analyses were implemented to separately search for areas involved in processing static and moving stimuli. In general, the stimuli of various shapes, patterns and sizes activated multiple brain areas, mostly in the left hemisphere. Activation in the right middle temporal gyrus (MTG) was significantly higher when processing moving visual stimuli than the static stimulus. In contrast, activation in the left calcarine sulcus and left lingual gyrus was significantly higher for the static stimulus than for moving stimuli. Visual stimulation of various shapes, patterns and sizes thus indicated left-lateralized activation. The involvement of the right MTG in processing moving visual information was evident from the differential analysis, while the left calcarine sulcus and left lingual gyrus were the areas involved in processing the static visual stimulus.
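
    At the group level, a differential analysis of the kind described above amounts to contrasting per-condition response estimates across subjects, e.g., a paired t-test on regional beta values for moving versus static stimuli. A sketch with hypothetical values; SPM's full voxelwise pipeline is not reproduced:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)
    n_subjects = 16
    beta_moving = rng.normal(1.2, 0.4, n_subjects)  # right MTG, moving stimuli
    beta_static = beta_moving - rng.normal(0.5, 0.3, n_subjects)  # static lower

    # paired (within-subject) contrast: moving > static in one region
    t, p = stats.ttest_rel(beta_moving, beta_static)
    print(f"moving > static: t({n_subjects - 1}) = {t:.2f}, p = {p:.4f}")
    ```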

  6. Full-wave and half-wave rectification in second-order motion perception

    NASA Technical Reports Server (NTRS)

    Solomon, J. A.; Sperling, G.

    1994-01-01

    Microbalanced stimuli are dynamic displays which do not stimulate motion mechanisms that apply standard (Fourier-energy or autocorrelational) motion analysis directly to the visual signal. In order to extract motion information from microbalanced stimuli, Chubb and Sperling [(1988) Journal of the Optical Society of America, 5, 1986-2006] proposed that the human visual system performs a rectifying transformation on the visual signal prior to standard motion analysis. The current research employs two novel types of microbalanced stimuli: half-wave stimuli preserve motion information following half-wave rectification (with a threshold) but lose motion information following full-wave rectification; full-wave stimuli preserve motion information following full-wave rectification but lose motion information following half-wave rectification. Additionally, Fourier stimuli, ordinary square-wave gratings, were used to stimulate standard motion mechanisms. Psychometric functions (direction discrimination vs stimulus contrast) were obtained for each type of stimulus when presented alone, and when masked by each of the other stimuli (presented as moving masks and also as nonmoving, counterphase-flickering masks). RESULTS: given sufficient contrast, all three types of stimulus convey motion. However, only one-third of the population can perceive the motion of the half-wave stimulus. Observers are able to process the motion information contained in the Fourier stimulus slightly more efficiently than the information in the full-wave stimulus but are much less efficient in processing half-wave motion information. Moving masks are more effective than counterphase masks at hampering direction discrimination, indicating that some of the masking effect is interference between motion mechanisms, and some occurs at earlier stages. When either full-wave and Fourier or half-wave and Fourier gratings are presented simultaneously, there is a wide range of relative contrasts within which the motion directions of both gratings are easily determinable. Conversely, when half-wave and full-wave gratings are combined, the direction of only one of these gratings can be determined with high accuracy. CONCLUSIONS: the results indicate that three motion computations are carried out, any two in parallel: one standard ("first order") and two non-Fourier ("second-order") computations that employ full-wave and half-wave rectification.
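
    The two rectifying transformations at issue can be written directly. A toy sketch of full-wave and half-wave rectification applied before a standard motion-energy analysis; the threshold value is an arbitrary illustration:

    ```python
    import numpy as np

    def full_wave(signal):
        # full-wave rectification: preserves full-wave motion cues
        return np.abs(signal)

    def half_wave(signal, threshold=0.2):
        # half-wave rectification with a threshold: passes only strong positives
        return np.maximum(signal - threshold, 0.0)

    x = np.sin(np.linspace(0, 4 * np.pi, 200))  # toy luminance modulation
    fw, hw = full_wave(x), half_wave(x)
    # fw and hw would then feed a standard (Fourier-energy) motion detector
    ```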

  7. Interest Inventory Items as Reinforcing Stimuli: A Test of the A-R-D Theory.

    ERIC Educational Resources Information Center

    Staats, Arthur W.; And Others

    An experiment was conducted to test the hypothesis that interest inventory items would function as reinforcing stimuli in a visual discrimination task. When previously rated liked and disliked items from the Strong Vocational Interest Blank were differentially presented following one of two responses, subjects learned to respond to the stimulus…

  8. Perceptual uncertainty facilitates creative discovery

    NASA Astrophysics Data System (ADS)

    Tseng, Winger Sei-Wo

    2018-06-01

    In this study, unstructured and ambiguous figures used as visual stimuli were classified as having high, moderate, or low ambiguity and presented to participants. The experiment was designed to explore how the perceptual ambiguity inherent in the presented visual cues affects novice and expert designers' visual discovery during design development. A total of 42 participants took part: half were recruited from non-design departments as novices, and the remainder were practicing designers from design companies, regarded as experts. The participants were tasked with discovering a sub-shape in the presented sketch and using this shape as a cue to design a concept. To this end, two types of sub-shapes were defined: known feature sub-shapes and innovative feature sub-shapes (IFSs). The experimental results strongly indicate that as the ambiguity of the visual stimuli increases, expert designers produce more ideas and IFSs, whereas novice designers produce fewer. The capability of expert designers to exploit visual ambiguity is interesting, and its absence in novice designers suggests that this capability is likely a skill gained, at least in part, through professional practice. Our results can be applied in design learning and education to generalize the principles and strategies of visual discovery used by expert designers during concept sketching, in order to train novice designers in addressing design problems.

  9. The mere exposure effect for visual image.

    PubMed

    Inoue, Kazuya; Yagi, Yoshihiko; Sato, Nobuya

    2018-02-01

    The mere exposure effect refers to a phenomenon in which repeated stimuli are evaluated more positively than novel stimuli. We investigated whether this effect occurs for internally generated visual representations (i.e., visual images). In an exposure phase, a 5 × 5 dot array was presented, and pairs of dots corresponding to the neighboring vertices of an invisible polygon were sequentially flashed (in red), tracing out the polygon. In Experiments 1, 2, and 4, participants visualized and memorized the shapes of the invisible polygons based on the different sequences of flashed dots, whereas in Experiment 3, participants only memorized the positions of these dots. In a subsequent rating phase, participants visualized the shape of the invisible polygon from numerical labels placed at its vertices, and then rated their preference for the invisible polygons (Experiments 1, 2, and 3). In contrast, in Experiment 4, participants rated their preference for visible polygons. Results showed that the mere exposure effect appeared only when participants visualized the shape of the invisible polygons in both the exposure and rating phases (Experiments 1 and 2), suggesting that the mere exposure effect occurred for internalized visual images. This implies that the sensory inputs from repeated stimuli play a minor role in the mere exposure effect. The absence of the mere exposure effect in Experiment 4 suggests that the consistency of processing between the exposure and rating phases plays an important role in the mere exposure effect.

  10. Attentional reorienting triggers spatial asymmetries in a search task with cross-modal spatial cueing

    PubMed Central

    Paladini, Rebecca E.; Diana, Lorenzo; Zito, Giuseppe A.; Nyffeler, Thomas; Wyss, Patric; Mosimann, Urs P.; Müri, René M.; Nef, Tobias

    2018-01-01

    Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention. Specifically, facilitation has been observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and the right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained no auditory cues (i.e., unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants’ accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an incongruent and a spatially non-informative auditory cue resulted in lateral asymmetries. Specifically, search times increased for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition. However, participants’ performance in the congruent condition was modulated by their tone localisation accuracy. The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues, and occur under specific attentional conditions, i.e., when visual attention has to be reoriented towards the left hemifield. PMID:29293637

  11. Atypical audio-visual speech perception and McGurk effects in children with specific language impairment

    PubMed Central

    Leybaert, Jacqueline; Macchi, Lucie; Huyse, Aurélie; Champoux, François; Bayard, Clémence; Colin, Cécile; Berthommier, Frédéric

    2014-01-01

    Audiovisual speech perception of children with specific language impairment (SLI) and children with typical language development (TLD) was compared in two experiments using /aCa/ syllables presented in the context of a masking release paradigm. Children had to repeat syllables presented in auditory alone, visual alone (speechreading), audiovisual congruent and incongruent (McGurk) conditions. Stimuli were masked by either stationary (ST) or amplitude modulated (AM) noise. Although children with SLI were less accurate in auditory and audiovisual speech perception, they showed an auditory masking release effect similar to that of children with TLD. Children with SLI also made fewer correct responses in speechreading than children with TLD, indicating impairment in phonemic processing of visual speech information. In response to McGurk stimuli, children with TLD showed more fusions in AM noise than in ST noise, a consequence of the auditory masking release effect and of the influence of visual information. Children with SLI did not show this effect systematically, suggesting they were less influenced by visual speech. However, when the visual cues were easily identified, the profile of responses to McGurk stimuli was similar in both groups, suggesting that children with SLI do not suffer from an impairment of audiovisual integration. An analysis of percent of information transmitted revealed a deficit in the children with SLI, particularly for the place of articulation feature. Taken together, the data support the hypothesis of an intact peripheral processing of auditory speech information, coupled with a supramodal deficit of phonemic categorization in children with SLI. Clinical implications are discussed. PMID:24904454

  12. Atypical audio-visual speech perception and McGurk effects in children with specific language impairment.

    PubMed

    Leybaert, Jacqueline; Macchi, Lucie; Huyse, Aurélie; Champoux, François; Bayard, Clémence; Colin, Cécile; Berthommier, Frédéric

    2014-01-01

    Audiovisual speech perception of children with specific language impairment (SLI) and children with typical language development (TLD) was compared in two experiments using /aCa/ syllables presented in the context of a masking release paradigm. Children had to repeat syllables presented in auditory alone, visual alone (speechreading), audiovisual congruent and incongruent (McGurk) conditions. Stimuli were masked by either stationary (ST) or amplitude modulated (AM) noise. Although children with SLI were less accurate in auditory and audiovisual speech perception, they showed an auditory masking release effect similar to that of children with TLD. Children with SLI also made fewer correct responses in speechreading than children with TLD, indicating impairment in phonemic processing of visual speech information. In response to McGurk stimuli, children with TLD showed more fusions in AM noise than in ST noise, a consequence of the auditory masking release effect and of the influence of visual information. Children with SLI did not show this effect systematically, suggesting they were less influenced by visual speech. However, when the visual cues were easily identified, the profile of responses to McGurk stimuli was similar in both groups, suggesting that children with SLI do not suffer from an impairment of audiovisual integration. An analysis of percent of information transmitted revealed a deficit in the children with SLI, particularly for the place of articulation feature. Taken together, the data support the hypothesis of an intact peripheral processing of auditory speech information, coupled with a supramodal deficit of phonemic categorization in children with SLI. Clinical implications are discussed.

  13. Perceived duration decreases with increasing eccentricity.

    PubMed

    Kliegl, Katrin M; Huckauf, Anke

    2014-07-01

    Previous studies examining the influence of stimulus location on temporal perception have yielded inconsistent and contradictory results. The aim of the present study was therefore to examine the effect of stimulus eccentricity systematically. In a series of five experiments, subjects compared the duration of foveal disks to that of disks presented at different retinal eccentricities on the horizontal meridian. The results show that the perceived duration of a visual stimulus declines with increasing eccentricity. The effect was replicated with various stimulus orders (Experiments 1-3), as well as with cortically magnified stimuli (Experiments 4-5), ruling out that the effect was merely caused by differences in cortical representation size. The apparent decrease in the duration of stimuli with increasing eccentricity is discussed with respect to current models of time perception, the possible influence of visual attention, and the underlying physiological characteristics of the visual system. Copyright © 2014 Elsevier B.V. All rights reserved.
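
    Cortical magnification ("M-scaling") of peripheral stimuli is commonly approximated with a single-parameter function of eccentricity, M(E) = M0 / (1 + E/E2), and stimuli are enlarged in inverse proportion so that they occupy roughly equal cortical extents. The sketch below illustrates that idea; the functional form and parameter values are textbook-style assumptions, not the values used in this study.

```python
# Illustrative M-scaling of stimulus size with eccentricity.
def magnification(ecc_deg, m0=7.99, e2=3.67):
    # Approximate mm of cortex per degree of visual angle (illustrative).
    return m0 / (1.0 + ecc_deg / e2)

def scaled_size(base_size_deg, ecc_deg):
    # Enlarge a foveal stimulus so its cortical footprint stays constant.
    return base_size_deg * magnification(0.0) / magnification(ecc_deg)

for ecc in (0, 2, 4, 8):
    print(ecc, round(scaled_size(1.0, ecc), 2))
```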

  14. Unfolding Visual Lexical Decision in Time

    PubMed Central

    Barca, Laura; Pezzulo, Giovanni

    2012-01-01

    Visual lexical decision is a classical paradigm in psycholinguistics, and numerous studies have assessed the so-called "lexicality effect" (i.e., better performance with lexical than non-lexical stimuli). Far less is known about the dynamics of choice, because many studies measured overall reaction times, which are not informative about underlying processes. To unfold visual lexical decision in (over) time, we measured participants' hand movements toward one of two item alternatives by recording the streaming x,y coordinates of the computer mouse. Participants categorized four kinds of stimuli as "lexical" or "non-lexical": high and low frequency words, pseudowords, and letter strings. Spatial attraction toward the opposite category was present for low frequency words and pseudowords. Increasing the ambiguity of the stimuli led to greater movement complexity and trajectory attraction to competitors, whereas no such effect was present for high frequency words and letter strings. Results fit well with dynamic models of perceptual decision-making, which describe the process as a competition between alternatives guided by the continuous accumulation of evidence. More broadly, our results point to a key role of statistical decision theory in studying linguistic processing in terms of dynamic and non-modular mechanisms. PMID:22563419
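
    Trajectory attraction of the kind reported here is conventionally summarized by the maximum deviation (MD) and the area under the curve (AUC) of the mouse path relative to the straight line from start to end point. A minimal numpy sketch, assuming each trial is an (n, 2) array of streamed x,y samples:

```python
import numpy as np

def attraction_measures(xy):
    # xy: (n, 2) array of mouse coordinates for one trial.
    start, end = xy[0], xy[-1]
    line = end - start
    line_len = np.linalg.norm(line)
    rel = xy - start
    # Signed perpendicular distance of every sample from the direct path.
    dev = (line[0] * rel[:, 1] - line[1] * rel[:, 0]) / line_len
    md = dev.max()                                # maximum deviation
    # Trapezoidal integration of deviation along the path (AUC).
    t = np.linspace(0.0, line_len, len(xy))
    auc = (((dev[1:] + dev[:-1]) / 2.0) * np.diff(t)).sum()
    return md, auc

# Toy trajectory bowing 0.2 units toward the competitor side.
xy = np.column_stack([np.linspace(0, 1, 50),
                      0.2 * np.sin(np.linspace(0, np.pi, 50))])
print(attraction_measures(xy))
```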

  15. To what extent do Gestalt grouping principles influence tactile perception?

    PubMed

    Gallace, Alberto; Spence, Charles

    2011-07-01

    Since their formulation by the Gestalt movement more than a century ago, the principles of perceptual grouping have primarily been investigated in the visual modality and, to a lesser extent, in the auditory modality. The present review addresses the question of whether the same grouping principles also affect the perception of tactile stimuli. Although, to date, only a few studies have explicitly investigated the existence of Gestalt grouping principles in the tactile modality, we argue that many more studies have indirectly provided evidence relevant to this topic. Reviewing this body of research, we argue that similar principles to those reported previously in visual and auditory studies also govern the perceptual grouping of tactile stimuli. In particular, we highlight evidence showing that the principles of proximity, similarity, common fate, good continuation, and closure affect tactile perception in both unimodal and crossmodal settings. We also highlight that the grouping of tactile stimuli is often affected by visual and auditory information that happens to be presented simultaneously. Finally, we discuss the theoretical and applied benefits that might pertain to the further study of Gestalt principles operating in both unisensory and multisensory tactile perception.

  16. Auditory Memory Distortion for Spoken Prose

    PubMed Central

    Hutchison, Joanna L.; Hubbard, Timothy L.; Ferrandino, Blaise; Brigante, Ryan; Wright, Jamie M.; Rypma, Bart

    2013-01-01

    Observers often remember a scene as containing information that was not presented but that would have likely been located just beyond the observed boundaries of the scene. This effect is called boundary extension (BE; e.g., Intraub & Richardson, 1989). Previous studies have observed BE in memory for visual and haptic stimuli, and the present experiments examined whether BE occurred in memory for auditory stimuli (prose, music). Experiments 1 and 2 varied the amount of auditory content to be remembered. BE was not observed, but when auditory targets contained more content, boundary restriction (BR) occurred. Experiment 3 presented auditory stimuli with less content and BR also occurred. In Experiment 4, white noise was added to stimuli with less content to equalize the durations of auditory stimuli, and BR still occurred. Experiments 5 and 6 presented trained stories and popular music, and BR still occurred. This latter finding ruled out the hypothesis that the lack of BE in Experiments 1–4 reflected a lack of familiarity with the stimuli. Overall, memory for auditory content exhibited BR rather than BE, and this pattern was stronger if auditory stimuli contained more content. Implications for the understanding of general perceptual processing and directions for future research are discussed. PMID:22612172

  17. Context processing in adolescents with autism spectrum disorder: How complex could it be?

    PubMed

    Ben-Yosef, Dekel; Anaki, David; Golan, Ofer

    2017-03-01

    The ability of individuals with Autism Spectrum Disorder (ASD) to process context has long been debated: According to the Weak Central Coherence theory, ASD is characterized by poor global processing and, consequently, poor context processing. In contrast, the Social Cognition theory argues that individuals with ASD will present difficulties only in social context processing. The complexity theory of autism suggests that context processing in ASD will depend on task complexity. The current study examined this controversy through two priming tasks, one presenting human stimuli (facial expressions) and the other presenting non-human stimuli (animal faces). Both tasks presented visual targets, preceded by congruent, incongruent, or neutral auditory primes. Local and global processing were examined by presenting the visual targets in three spatial frequency conditions: high frequency, low frequency, and broadband. Tasks were administered to 16 adolescents with high-functioning ASD and 16 matched typically developing adolescents. Reaction time and accuracy were measured for each task in each condition. Results indicated that individuals with ASD processed context for both human and non-human stimuli, except in one condition, in which human stimuli had to be processed globally (i.e., target presented in low frequency). The task demands presented in this condition, and the resulting performance deficit shown in the ASD group, could be understood in terms of cognitive overload. These findings provide support for the complexity theory of autism and extend it. Our results also demonstrate how associative priming could support intact context processing of human and non-human stimuli in individuals with ASD. Autism Res 2017, 10: 520-530. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.
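
    Spatial frequency conditions like these are typically produced by filtering the stimulus images in the Fourier domain. A minimal numpy sketch with a Gaussian low-/high-pass filter; the cutoff frequency is an illustrative assumption, since the abstract does not give the exact filter parameters.

```python
import numpy as np

def sf_filter(img, cutoff=8.0, keep='low'):
    # Gaussian low- or high-pass filter; cutoff in cycles per image.
    h, w = img.shape
    fy = np.fft.fftfreq(h) * h
    fx = np.fft.fftfreq(w) * w
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    lowpass = np.exp(-(radius / cutoff) ** 2)
    gain = lowpass if keep == 'low' else 1.0 - lowpass
    return np.real(np.fft.ifft2(np.fft.fft2(img) * gain))

img = np.random.rand(128, 128)        # stand-in for a face image
low = sf_filter(img, keep='low')      # low-spatial-frequency version
high = sf_filter(img, keep='high')    # high-spatial-frequency version
```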

  18. [Event-related synchronization/desynchronization during processing of target, nontarget and unknown visually presented words].

    PubMed

    Rebreikina, A B; Larionova, E B; Varlamov, A A

    2015-01-01

    The aim of this investigation was to study the neurophysiological mechanisms of processing relevant and unknown words. Event-related synchronization/desynchronization during categorization of three types of stimuli (known targets, known nontargets, and unknown words) was examined. The main difference between known targets and unknown stimuli was revealed in the theta1 and theta2 bands at the early stage after stimulus onset (150-300 ms) and in the delta band (400-700 ms). In the late time window, at about 800-1500 ms, theta1 ERS in response to the target stimuli was smaller than to other stimuli, but theta2 and alpha ERD in response to the target stimuli was larger than to known nontarget words.
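
    ERS/ERD is conventionally quantified as the percentage change of band power in a post-stimulus interval relative to a pre-stimulus reference interval. A minimal numpy sketch, assuming the trials have already been band-pass filtered into the band of interest (e.g., theta1); the interval indices are illustrative.

```python
import numpy as np

def erd_ers(trials, ref_idx, test_idx):
    # trials: (n_trials, n_samples) band-pass filtered EEG.
    power = trials ** 2            # instantaneous band power
    avg = power.mean(axis=0)       # average over trials
    R = avg[ref_idx].mean()        # reference-interval power
    A = avg[test_idx].mean()       # test-interval power
    return (A - R) / R * 100.0     # positive = ERS, negative = ERD

trials = np.random.randn(40, 1000)  # simulated filtered trials
print(erd_ers(trials, ref_idx=slice(0, 200), test_idx=slice(350, 500)))
```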

  19. Interhemispheric Resource Sharing: Decreasing Benefits with Increasing Processing Efficiency

    ERIC Educational Resources Information Center

    Maertens, M.; Pollmann, S.

    2005-01-01

    Visual matches are sometimes faster when stimuli are presented across visual hemifields, compared to within-field matching. Using a cued geometric figure matching task, we investigated the influence of computational complexity vs. processing efficiency on this bilateral distribution advantage (BDA). Computational complexity was manipulated by…

  20. Task modulation of the effects of brightness on reaction time and response force.

    PubMed

    Jaśkowski, Piotr; Włodarczyk, Dariusz

    2006-08-01

    Van der Molen and Keuss [van der Molen, M.W., Keuss, P.J.G., 1979. The relationship between reaction time and intensity in discrete auditory tasks. Quarterly Journal of Experimental Psychology 31, 95-102; van der Molen, M.W., Keuss, P.J.G., 1981. Response selection and the processing of auditory intensity. Quarterly Journal of Experimental Psychology 33, 177-184] showed that paradoxically long reaction times (RT) occur with extremely loud auditory stimuli when the task is difficult (e.g. requires a response choice). It was argued that this paradoxical behavior of RT is due to active suppression of response prompting to prevent false responses. In the present experiments, we demonstrated that such an effect can also occur for visual stimuli provided that they are large enough. Additionally, we showed that the response force exerted by participants on the response keys grew monotonically with intensity for large stimuli but was independent of intensity for small visual stimuli. Bearing in mind that only large stimuli are believed to be arousing, this pattern of results supports the arousal interpretation of the negative effect of loud stimuli on RT given by van der Molen and Keuss.

  1. Relational Associative Learning Induces Cross-Modal Plasticity in Early Visual Cortex

    PubMed Central

    Headley, Drew B.; Weinberger, Norman M.

    2015-01-01

    Neurobiological theories of memory posit that the neocortex is a storage site of declarative memories, a hallmark of which is the association of two arbitrary neutral stimuli. Early sensory cortices, once assumed uninvolved in memory storage, recently have been implicated in associations between neutral stimuli and reward or punishment. We asked whether links between neutral stimuli also could be formed in early visual or auditory cortices. Rats were presented with a tone paired with a light using a sensory preconditioning paradigm that enabled later evaluation of successful association. Subjects that acquired this association developed enhanced sound evoked potentials in their primary and secondary visual cortices. Laminar recordings localized this potential to cortical Layers 5 and 6. A similar pattern of activation was elicited by microstimulation of primary auditory cortex in the same subjects, consistent with a cortico-cortical substrate of association. Thus, early sensory cortex has the capability to form neutral stimulus associations. This plasticity may constitute a declarative memory trace between sensory cortices. PMID:24275832

  2. Neural correlates of tactile perception during pre-, peri-, and post-movement.

    PubMed

    Juravle, Georgiana; Heed, Tobias; Spence, Charles; Röder, Brigitte

    2016-05-01

    Tactile information is differentially processed over the various phases of goal-directed movements. Here, event-related potentials (ERPs) were used to investigate the neural correlates of tactile and visual information processing during movement. Participants performed goal-directed reaches for an object placed centrally on the table in front of them. Tactile and visual stimulation (100 ms) was presented in separate trials during the different phases of the movement (i.e. preparation, execution, and post-movement). These stimuli were independently delivered to either the moving or resting hand. In a control condition, the participants only performed the movement, while omission (i.e. movement-only) ERPs were recorded. Participants were instructed to ignore the presence or absence of any sensory events and to concentrate solely on the execution of the movement. Enhanced ERPs were observed 80-200 ms after tactile stimulation, as well as 100-250 ms after visual stimulation: These modulations were greatest during the execution of the goal-directed movement, and they were effector based (i.e. significantly more negative for stimuli presented to the moving hand). Furthermore, ERPs revealed enhanced sensory processing during goal-directed movements for visual stimuli as well. Such enhanced processing of both tactile and visual information during the execution phase suggests that incoming sensory information is continuously monitored for a potential adjustment of the current motor plan. Furthermore, the results reported here also highlight a tight coupling between spatial attention and the execution of motor actions.

  3. Electrophysiological indices of surround suppression in humans

    PubMed Central

    Vanegas, M. Isabel; Blangero, Annabelle

    2014-01-01

    Surround suppression is a well-known example of contextual interaction in visual cortical neurophysiology, whereby the neural response to a stimulus presented within a neuron's classical receptive field is suppressed by surrounding stimuli. Human psychophysical reports present an obvious analog to the effects seen at the single-neuron level: stimuli are perceived as lower-contrast when embedded in a surround. Here we report on a visual paradigm that provides relatively direct, straightforward indices of surround suppression in human electrophysiology, enabling us to reproduce several well-known neurophysiological and psychophysical effects, and to conduct new analyses of temporal trends and retinal location effects. Steady-state visual evoked potentials (SSVEP) elicited by flickering “foreground” stimuli were measured in the context of various static surround patterns. Early visual cortex geometry and retinotopic organization were exploited to enhance SSVEP amplitude. The foreground response was strongly suppressed as a monotonic function of surround contrast. Furthermore, suppression was stronger for surrounds of matching orientation than orthogonally-oriented ones, and stronger at peripheral than foveal locations. These patterns were reproduced in psychophysical reports of perceived contrast, and peripheral electrophysiological suppression effects correlated with psychophysical effects across subjects. Temporal analysis of SSVEP amplitude revealed short-term contrast adaptation effects that caused the foreground signal to either fall or grow over time, depending on the relative contrast of the surround, consistent with stronger adaptation of the suppressive drive. This electrophysiology paradigm has clinical potential in indexing not just visual deficits but possibly gain control deficits expressed more widely in the disordered brain. PMID:25411464
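
    The monotonic suppression of the foreground response by surround contrast is commonly described by a divisive gain-control (normalization) model in which the surround contributes to the divisive pool. The sketch below shows one standard form of such a model, not the paper's own analysis; all parameter values are illustrative.

```python
# Divisive normalization with a surround term in the denominator:
# the foreground response falls monotonically as surround contrast grows.
def response(c_center, c_surround, r_max=1.0, c50=0.2, n=2.0, w=0.5):
    num = c_center ** n
    den = c_center ** n + c50 ** n + w * c_surround ** n
    return r_max * num / den

for cs in (0.0, 0.25, 0.5, 1.0):
    print(cs, round(response(0.5, cs), 3))
```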

  4. Impaired downregulation of visual cortex during auditory processing is associated with autism symptomatology in children and adolescents with autism spectrum disorder.

    PubMed

    Jao Keehn, R Joanne; Sanchez, Sandra S; Stewart, Claire R; Zhao, Weiqi; Grenesko-Stevens, Emily L; Keehn, Brandon; Müller, Ralph-Axel

    2017-01-01

    Autism spectrum disorders (ASD) are pervasive developmental disorders characterized by impairments in language development and social interaction, along with restricted and stereotyped behaviors. These behaviors often include atypical responses to sensory stimuli; some children with ASD are easily overwhelmed by sensory stimuli, while others may seem unaware of their environment. Vision and audition are two sensory modalities important for social interactions and language, and are differentially affected in ASD. In the present study, 16 children and adolescents with ASD and 16 typically developing (TD) participants matched for age, gender, nonverbal IQ, and handedness were tested using a mixed event-related/blocked functional magnetic resonance imaging paradigm to examine basic perceptual processes that may form the foundation for later-developing cognitive abilities. Auditory (high or low pitch) and visual conditions (dot located high or low in the display) were presented, and participants indicated whether the stimuli were "high" or "low." Results for the auditory condition showed downregulated activity of the visual cortex in the TD group, but upregulation in the ASD group. This atypical activity in visual cortex was associated with autism symptomatology. These findings suggest atypical crossmodal (auditory-visual) modulation linked to sociocommunicative deficits in ASD, in agreement with the general hypothesis of low-level sensorimotor impairments affecting core symptomatology. Autism Res 2017, 10: 130-143. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.

  5. Visually cued motor synchronization: modulation of fMRI activation patterns by baseline condition.

    PubMed

    Cerasa, Antonio; Hagberg, Gisela E; Bianciardi, Marta; Sabatini, Umberto

    2005-01-03

    A well-known issue in functional neuroimaging studies of motor synchronization is the design of suitable control tasks able to discriminate between the brain structures involved in primary time-keeper functions and those related to other processes such as attentional effort. The aim of this work was to investigate how the predictability of stimulus onsets in the baseline condition modulates the activity in brain structures related to time-keeper functions during the performance of a visually cued motor synchronization task (VM). The rationale behind this choice derives from the notion that varying stimulus predictability can alter the subject's attention and, consequently, the neural activity. For this purpose, baseline levels of BOLD activity were obtained from 12 subjects during a conventional-baseline condition (maintained fixation of the visual rhythmic stimuli presented in the VM task) and a random-baseline condition (maintained fixation of visual stimuli occurring randomly). fMRI analysis demonstrated that while brain areas with a documented role in basic time processing were detected independently of the baseline condition (right cerebellum, bilateral putamen, left thalamus, left superior temporal gyrus, left sensorimotor cortex, left dorsal premotor cortex and supplementary motor area), the ventral premotor cortex, caudate nucleus, insula and inferior frontal gyrus exhibited baseline-dependent activation. We conclude that maintained fixation of unpredictable visual stimuli can be employed to reduce or eliminate neural activity related to attentional components present in the synchronization task.

  6. Perception of Emotion: Differences in Mode of Presentation, Sex of Perceiver, and Race of Expressor.

    ERIC Educational Resources Information Center

    Kozel, Nicholas J.; Gitter, A. George

    A 2 x 2 x 4 factorial design was utilized to investigate the effects of sex of perceiver, race of expressor (Negro and White), and mode of presentation of stimuli (audio and visual, visual only, audio only, and still pictures) on perception of emotion (POE). Perception of seven emotions (anger, happiness, surprise, fear, disgust, pain, and…

  7. Lack of Multisensory Integration in Hemianopia: No Influence of Visual Stimuli on Aurally Guided Saccades to the Blind Hemifield

    PubMed Central

    Ten Brink, Antonia F.; Nijboer, Tanja C. W.; Bergsma, Douwe P.; Barton, Jason J. S.; Van der Stigchel, Stefan

    2015-01-01

    In patients with visual hemifield defects residual visual functions may be present, a phenomenon called blindsight. The superior colliculus (SC) is part of the spared pathway that is considered to be responsible for this phenomenon. Given that the SC processes input from different modalities and is involved in the programming of saccadic eye movements, the aim of the present study was to examine whether multimodal integration can modulate oculomotor competition in the damaged hemifield. We conducted two experiments with eight patients who had visual field defects due to lesions that affected the retinogeniculate pathway but spared the retinotectal direct SC pathway. They had to make saccades to an auditory target that was presented alone or in combination with a visual stimulus. The visual stimulus could either be spatially coincident with the auditory target (possibly enhancing the auditory target signal), or spatially disparate to the auditory target (possibly competing with the auditory target signal). For each patient we compared the saccade endpoint deviation in these two bimodal conditions with the endpoint deviation in the unimodal condition (auditory target alone). In all seven hemianopic patients, saccade accuracy was affected only by visual stimuli in the intact, but not in the blind visual field. In one patient with a more limited quadrantanopia, a facilitation effect of the spatially coincident visual stimulus was observed. We conclude that our results show that multisensory integration is infrequent in the blind field of patients with hemianopia. PMID:25835952

  8. The role of automatic orienting of attention towards ipsilesional stimuli in non-visual (tactile and auditory) neglect: a critical review.

    PubMed

    Gainotti, Guido

    2010-02-01

    The aim of the present survey was to review scientific articles dealing with the non-visual (auditory and tactile) forms of neglect to determine: (a) whether behavioural patterns similar to those observed in the visual modality can also be observed in the non-visual modalities; (b) whether a different severity of neglect can be found in the visual and in the auditory and tactile modalities; (c) the reasons for the possible differences between the visual and non-visual modalities. Data pointing to a contralesional orienting of attention in the auditory and the tactile modalities in visual neglect patients were separately reviewed. Results showed: (a) that in patients with right brain damage manifestations of neglect for the contralesional side of space can be found not only in the visual but also in the auditory and tactile modalities; (b) that the severity of neglect is greater in the visual than in the non-visual modalities. This asymmetry in the severity of neglect across modalities seems due to the greater role that the automatic capture of attention by irrelevant ipsilesional stimuli seems to play in the visual modality. Copyright 2009 Elsevier Srl. All rights reserved.

  9. The Anatomy of Non-conscious Recognition Memory.

    PubMed

    Rosenthal, Clive R; Soto, David

    2016-11-01

    Cortical regions as early as primary visual cortex have been implicated in recognition memory. Here, we outline the challenges that this presents for neurobiological accounts of recognition memory. We conclude that understanding the role of early visual cortex (EVC) in this process will require the use of protocols that mask stimuli from visual awareness. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Learning of grammar-like visual sequences by adults with and without language-learning disabilities.

    PubMed

    Aguilar, Jessica M; Plante, Elena

    2014-08-01

    Two studies examined learning of grammar-like visual sequences to determine whether a general deficit in statistical learning characterizes individuals with language-learning disability (LLD). Furthermore, we tested the hypothesis that difficulty in sustaining attention during the learning task might account for differences in statistical learning. In Study 1, adults with normal language (NL) or LLD were familiarized with the visual artificial grammar and then tested using items that conformed to or deviated from the grammar. In Study 2, a second sample of adults with NL and LLD were presented auditory word pairs with weak semantic associations (e.g., groom + clean) along with the visual learning task. Participants were instructed to attend to the visual sequences and to ignore the auditory stimuli. Incidental encoding of these words would indicate reduced attention to the primary task. In Studies 1 and 2, both groups demonstrated learning and generalization of the artificial grammar. In Study 2, neither the NL nor the LLD group appeared to encode the words presented during the learning phase. The results argue against a general deficit in statistical learning for individuals with LLD and demonstrate that both NL and LLD learners can ignore extraneous auditory stimuli during visual learning.

  11. Integration of visual and motion cues for simulator requirements and ride quality investigation. [computerized simulation of aircraft landing, visual perception of aircraft pilots]

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1975-01-01

    Preliminary tests and evaluations are presented of pilot performance during landing (flight paths) using computer-generated images (video tapes). Psychophysiological factors affecting pilot visual perception were measured. A turning flight maneuver (pitch and roll) was specifically studied using a training device, and the scaling laws involved were determined. Also presented are medical studies (abstracts) on human responses to gravity variations without visual cues, the effects of acceleration stimuli on the semicircular canals, neurons affecting eye movements, and vestibular tests.

  12. Motor cortical encoding of serial order in a context-recall task.

    PubMed

    Carpenter, A F; Georgopoulos, A P; Pellizzer, G

    1999-03-12

    The neural encoding of serial order was studied in the motor cortex of monkeys performing a context-recall memory scanning task. Up to five visual stimuli were presented successively on a circle (list presentation phase), and then one of them (test stimulus) changed color; the monkeys had to make a single motor response toward the stimulus that immediately followed the test stimulus in the list. Correct performance in this task depends on memorization of the serial order of the stimuli during their presentation. It was found that changes in neural activity during the list presentation phase reflected the serial order of the stimuli; the effect on cell activity of the serial order of stimuli during their presentation was at least as strong as the effect of motor direction on cell activity during the execution of the motor response. This establishes the serial order of stimuli in a motor task as an important determinant of motor cortical activity during stimulus presentation and in the absence of changes in peripheral motor events, in contrast to the commonly held view of the motor cortex as just an "upper motor neuron."

  13. Who is afraid of the invisible snake? Subjective visual awareness modulates posterior brain activity for evolutionarily threatening stimuli.

    PubMed

    Grassini, Simone; Holm, Suvi K; Railo, Henry; Koivisto, Mika

    2016-12-01

    Snakes were probably one of the earliest predators of primates, and snake images produce specific behavioral and electrophysiological reactions in humans. Pictures of snakes evoke enhanced activity over the occipital cortex, indexed by the "early posterior negativity" (EPN), as compared with pictures of other dangerous or non-dangerous animals. The present study investigated the possibility that the response to snake images is independent from visual awareness. The observers watched images of threatening and non-threatening animals presented in random order during rapid serial visual presentation. Four different masking conditions were used to manipulate awareness of the images. Electrophysiological results showed that the EPN was larger for snake images than for the other images employed in the unmasked condition. However, the difference disappeared when awareness of the stimuli decreased. Behavioral results on the effects of awareness did not show any advantage for snake images. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. The relationship between level of autistic traits and local bias in the context of the McGurk effect

    PubMed Central

    Ujiie, Yuta; Asai, Tomohisa; Wakabayashi, Akio

    2015-01-01

    The McGurk effect is a well-known illustration that demonstrates the influence of visual information on hearing in the context of speech perception. Some studies have reported that individuals with autism spectrum disorder (ASD) display abnormal processing of audio-visual speech integration, while other studies showed contradictory results. Based on the dimensional model of ASD, we administered two analog studies to examine the link between level of autistic traits, as assessed by the Autism Spectrum Quotient (AQ), and the McGurk effect among a sample of university students. In the first experiment, we found that autistic traits correlated negatively with fused (McGurk) responses. Then, we manipulated presentation types of visual stimuli to examine whether the local bias toward visual speech cues modulated individual differences in the McGurk effect. The presentation included four types of visual images, comprising no image, mouth only, mouth and eyes, and full face. The results revealed that global facial information facilitates the influence of visual speech cues on McGurk stimuli. Moreover, individual differences between groups with low and high levels of autistic traits appeared when the full-face visual speech cue with an incongruent voice condition was presented. These results suggest that individual differences in the McGurk effect might be due to a weak ability to process global facial information in individuals with high levels of autistic traits. PMID:26175705

  15. "Multisensory brand search: How the meaning of sounds guides consumers' visual attention": Correction to Knoeferle et al. (2016).

    PubMed

    2017-03-01

    Reports an error in "Multisensory brand search: How the meaning of sounds guides consumers' visual attention" by Klemens M. Knoeferle, Pia Knoeferle, Carlos Velasco and Charles Spence ( Journal of Experimental Psychology: Applied , 2016[Jun], Vol 22[2], 196-210). In the article, under Experiment 2, Design and Stimuli, the set number of target products and visual distractors reported in the second paragraph should be 20 and 13, respectively: "On each trial, the 16 products shown in the display were randomly selected from a set of 20 products belonging to different categories. Out of the set of 20 products, seven were potential targets, whereas the other 13 were used as visual distractors only throughout the experiment (since they were not linked to specific usage or consumption sounds)." Consequently, Appendix A in the supplemental materials has been updated. (The following abstract of the original article appeared in record 2016-28876-002.) Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  16. Visual memories for perceived length are well preserved in older adults.

    PubMed

    Norman, J Farley; Holmin, Jessica S; Bartholomew, Ashley N

    2011-09-15

    Three experiments compared younger (mean age was 23.7 years) and older (mean age was 72.1 years) observers' ability to visually discriminate line length using both explicit and implicit standard stimuli. In Experiment 1, the method of constant stimuli (with an explicit standard) was used to determine difference thresholds, whereas the method of single stimuli (where knowledge of the standard length was only implicit and learned from previous test stimuli) was used in Experiments 2 and 3. The study evaluated whether increases in age affect older observers' ability to learn, retain, and utilize effective implicit visual standards. Overall, the observers' length difference thresholds were 5.85% of the standard when the method of constant stimuli was used and improved to 4.39% of the standard for the method of single stimuli (a decrease of 25%). Both age groups performed similarly in all conditions. The results demonstrate that older observers retain the ability to create, remember, and utilize effective implicit standards from a series of visual stimuli. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Emerging category representation in the visual forebrain hierarchy of pigeons (Columba livia).

    PubMed

    Azizi, Amir Hossein; Pusch, Roland; Koenen, Charlotte; Klatt, Sebastian; Bröcker, Franziska; Thiele, Samuel; Kellermann, Janosch; Güntürkün, Onur; Cheng, Sen

    2018-06-06

    Recognizing and categorizing visual stimuli are cognitive functions vital for survival, and an important feature of visual systems in primates as well as in birds. Visual stimuli are processed along the ventral visual pathway. At every stage in the hierarchy, neurons respond selectively to more complex features, transforming the population representation of the stimuli. It is therefore easier to read out category information in higher visual areas. While explicit category representations have been observed in the primate brain, less is known about equivalent processes in the avian brain. Even though their brain anatomies are radically different, it has been hypothesized that visual object representations are comparable across mammals and birds. In the present study, we investigated category representations in the pigeon visual forebrain using recordings from single cells responding to photographs of real-world objects. Using a linear classifier, we found that the population activity in the visual associative area mesopallium ventrolaterale (MVL) distinguishes between animate and inanimate objects, although this distinction is not required by the task. By contrast, a population of cells in the entopallium, a region that is lower in the hierarchy of visual areas and that is related to the primate extrastriate cortex, lacked this information. A model that pools responses of simple cells, which function as edge detectors, can account for the animate vs. inanimate categorization in the MVL, but performance in the model is based on different features than in the MVL. Therefore, processing in MVL cells is very likely more abstract than simple computations on the output of edge detectors. Copyright © 2018. Published by Elsevier B.V.
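
    The population read-out described above can be illustrated with a linear classifier trained on a trials-by-neurons matrix of firing rates. A minimal scikit-learn sketch with simulated placeholder data; the dimensions and the injected category signal are assumptions for demonstration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_neurons = 200, 60
rates = rng.poisson(5.0, size=(n_trials, n_neurons)).astype(float)
labels = rng.integers(0, 2, size=n_trials)   # 0 = inanimate, 1 = animate
rates[labels == 1, :10] += 2.0               # inject a category signal

# Cross-validated linear decoding of animate vs. inanimate.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, rates, labels, cv=5)
print("decoding accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```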

  18. Synergistic interaction between baclofen administration into the median raphe nucleus and inconsequential visual stimuli on investigatory behavior of rats

    PubMed Central

    Vollrath-Smith, Fiori R.; Shin, Rick

    2011-01-01

    Rationale: Noncontingent administration of amphetamine into the ventral striatum or of systemic nicotine increases responses rewarded by inconsequential visual stimuli. When these drugs are contingently administered, rats learn to self-administer them. We recently found that rats self-administer the GABAB receptor agonist baclofen into the median (MR) or dorsal (DR) raphe nuclei. Objectives: We examined whether noncontingent administration of baclofen into the MR or DR increases rats’ investigatory behavior rewarded by a flash of light. Results: Contingent presentations of a flash of light slightly increased lever presses. Whereas noncontingent administration of baclofen into the MR or DR did not reliably increase lever presses in the absence of the visual stimulus reward, the same manipulation markedly increased lever presses rewarded by the visual stimulus. Heightened locomotor activity induced by intraperitoneal injections of amphetamine (3 mg/kg) did not coincide with increased lever pressing for the visual stimulus. These results indicate that the observed enhancement of visual stimulus seeking is distinct from an enhancement of general locomotor activity. Visual stimulus seeking decreased when baclofen was co-administered with the GABAB receptor antagonist SCH 50911, confirming the involvement of local GABAB receptors. Visual stimulus seeking also abated when baclofen administration was preceded by intraperitoneal injections of the dopamine antagonist SCH 23390 (0.025 mg/kg), suggesting that enhanced visual stimulus seeking depends on intact dopamine signals. Conclusions: Baclofen administration into the MR or DR increased investigatory behavior induced by visual stimuli. Stimulation of GABAB receptors in the MR and DR appears to disinhibit the motivational process involving stimulus–approach responses. PMID:21904820

  19. Frequency-band signatures of visual responses to naturalistic input in ferret primary visual cortex during free viewing.

    PubMed

    Sellers, Kristin K; Bennett, Davis V; Fröhlich, Flavio

    2015-02-19

    Neuronal firing responses in visual cortex reflect the statistics of visual input and emerge from the interaction with endogenous network dynamics. Artificial visual stimuli presented to animals in which the network dynamics were constrained by anesthetic agents or trained behavioral tasks have provided fundamental understanding of how individual neurons in primary visual cortex respond to input. In contrast, very little is known about the mesoscale network dynamics and their relationship to microscopic spiking activity in the awake animal during free viewing of naturalistic visual input. To address this gap in knowledge, we recorded local field potential (LFP) and multiunit activity (MUA) simultaneously in all layers of primary visual cortex (V1) of awake, freely viewing ferrets presented with naturalistic visual input (nature movie clips). We found that naturalistic visual stimuli modulated the entire oscillation spectrum; low frequency oscillations were mostly suppressed whereas higher frequency oscillations were enhanced. On average across all cortical layers, stimulus-induced changes in delta and alpha power negatively correlated with the MUA responses, whereas sensory-evoked increases in gamma power positively correlated with MUA responses. The time course of the band-limited power in these frequency bands provided evidence for a model in which naturalistic visual input switched V1 between two distinct, endogenously present activity states defined by the power of low (delta, alpha) and high (gamma) frequency oscillatory activity. Therefore, the two mesoscale activity states delineated in this study may define the degree of engagement of the circuit with the processing of sensory input. Copyright © 2014 Elsevier B.V. All rights reserved.
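
    Band-limited power time courses of the kind analyzed here are commonly extracted by band-pass filtering the LFP and taking the squared Hilbert envelope. A minimal scipy sketch; the band edges, filter order, and sampling rate are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power(lfp, fs, low, high, order=4):
    # Zero-phase band-pass filter, then instantaneous power.
    b, a = butter(order, [low, high], btype='bandpass', fs=fs)
    filtered = filtfilt(b, a, lfp)
    envelope = np.abs(hilbert(filtered))   # instantaneous amplitude
    return envelope ** 2                   # band-limited power

fs = 1000.0
lfp = np.random.randn(10 * int(fs))        # simulated 10 s LFP trace
gamma_power = band_power(lfp, fs, 30.0, 80.0)
delta_power = band_power(lfp, fs, 1.0, 4.0)
```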

  20. A pool of pairs of related objects (POPORO) for investigating visual semantic integration: behavioral and electrophysiological validation.

    PubMed

    Kovalenko, Lyudmyla Y; Chaumon, Maximilien; Busch, Niko A

    2012-07-01

    Semantic processing of verbal and visual stimuli has been investigated in semantic violation or semantic priming paradigms in which a stimulus is either related or unrelated to a previously established semantic context. A hallmark of semantic priming is the N400 event-related potential (ERP)--a deflection of the ERP that is more negative for semantically unrelated target stimuli. The majority of studies investigating the N400 and semantic integration have used verbal material (words or sentences), and standardized stimulus sets with norms for semantic relatedness have been published for verbal but not for visual material. However, semantic processing of visual objects (as opposed to words) is an important issue in research on visual cognition. In this study, we present a set of 800 pairs of semantically related and unrelated visual objects. The images were rated for semantic relatedness by a sample of 132 participants. Furthermore, we analyzed low-level image properties and matched the two semantic categories according to these features. An ERP study confirmed the suitability of this image set for evoking a robust N400 effect of semantic integration. Additionally, using a general linear modeling approach of single-trial data, we also demonstrate that low-level visual image properties and semantic relatedness are in fact only minimally overlapping. The image set is available for download from the authors' website. We expect that the image set will facilitate studies investigating mechanisms of semantic and contextual processing of visual stimuli.
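
    The single-trial general linear modeling approach mentioned above amounts to regressing trial-wise ERP amplitude on semantic relatedness while entering low-level image properties as covariates, so that the two contributions can be separated. A minimal numpy sketch with simulated placeholder data; the regressor names and effect sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 800
relatedness = rng.integers(0, 2, n_trials).astype(float)  # 1 = related
luminance = rng.normal(size=n_trials)    # low-level covariates
contrast = rng.normal(size=n_trials)

# Simulated N400-window amplitude: more negative for unrelated pairs.
amplitude = -2.0 * (1 - relatedness) + 0.1 * luminance \
            + rng.normal(size=n_trials)

X = np.column_stack([relatedness, luminance, contrast,
                     np.ones(n_trials)])
beta, _, _, _ = np.linalg.lstsq(X, amplitude, rcond=None)
print("semantic relatedness beta: %.2f" % beta[0])
```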

  1. Inverse target- and cue-priming effects of masked stimuli.

    PubMed

    Mattler, Uwe

    2007-02-01

    The processing of a visual target that follows a briefly presented prime stimulus can be facilitated if prime and target stimuli are similar. In contrast to these positive priming effects, inverse priming effects (or negative compatibility effects) have been found when a mask follows prime stimuli before the target stimulus is presented: Responses are facilitated after dissimilar primes. Previous studies on inverse priming effects examined target-priming effects, which arise when the prime and the target stimuli share features that are critical for the response decision. In contrast, 3 experiments of the present study demonstrate inverse priming effects in a nonmotor cue-priming paradigm. Inverse cue-priming effects exhibited time courses comparable to inverse target-priming effects. Results suggest that inverse priming effects do not arise from specific processes of the response system but follow from operations that are more general.

  2. Manipulating Bodily Presence Affects Cross-Modal Spatial Attention: A Virtual-Reality-Based ERP Study.

    PubMed

    Harjunen, Ville J; Ahmed, Imtiaj; Jacucci, Giulio; Ravaja, Niklas; Spapé, Michiel M

    2017-01-01

    Earlier studies have revealed cross-modal visuo-tactile interactions in endogenous spatial attention. The current research used event-related potentials (ERPs) and virtual reality (VR) to identify how the visual cues of the perceiver's body affect visuo-tactile interaction in endogenous spatial attention and at what point in time the effect takes place. A bimodal oddball task with lateralized tactile and visual stimuli was presented in two VR conditions, one with and one without visible hands, and one VR-free control with hands in view. Participants were required to silently count one type of stimulus and ignore all other stimuli presented in the irrelevant modality or location. The presence of hands was found to modulate early and late components of somatosensory and visual evoked potentials. For sensory-perceptual stages, the presence of virtual or real hands was found to amplify attention-related negativity on the somatosensory N140 and cross-modal interaction in the somatosensory and visual P200. For postperceptual stages, an amplified N200 component was obtained in somatosensory and visual evoked potentials, indicating increased response inhibition in response to non-target stimuli. The somatosensory, but not the visual, N200 effect was enhanced when the virtual hands were present. The findings suggest that bodily presence affects sustained cross-modal spatial attention between vision and touch and that this effect is specifically present in ERPs related to early and late sensory processing, as well as response inhibition, but does not affect later attention- and memory-related P3 activity. Finally, the experiments provide comparable scenarios for the estimation of the signal-to-noise ratio to quantify effects related to the use of a head-mounted display (HMD). However, despite valid a priori reasons for fearing signal interference due to an HMD, we observed no significant drop in the robustness of our ERP measurements.

  3. Manipulating Bodily Presence Affects Cross-Modal Spatial Attention: A Virtual-Reality-Based ERP Study

    PubMed Central

    Harjunen, Ville J.; Ahmed, Imtiaj; Jacucci, Giulio; Ravaja, Niklas; Spapé, Michiel M.

    2017-01-01

    Earlier studies have revealed cross-modal visuo-tactile interactions in endogenous spatial attention. The current research used event-related potentials (ERPs) and virtual reality (VR) to identify how the visual cues of the perceiver’s body affect visuo-tactile interaction in endogenous spatial attention and at what point in time the effect takes place. A bimodal oddball task with lateralized tactile and visual stimuli was presented in two VR conditions, one with and one without visible hands, and one VR-free control with hands in view. Participants were required to silently count one type of stimulus and ignore all other stimuli presented in the irrelevant modality or location. The presence of hands was found to modulate early and late components of somatosensory and visual evoked potentials. For sensory-perceptual stages, the presence of virtual or real hands was found to amplify attention-related negativity on the somatosensory N140 and cross-modal interaction in the somatosensory and visual P200. For postperceptual stages, an amplified N200 component was obtained in somatosensory and visual evoked potentials, indicating increased response inhibition in response to non-target stimuli. The somatosensory, but not the visual, N200 effect was enhanced when the virtual hands were present. The findings suggest that bodily presence affects sustained cross-modal spatial attention between vision and touch and that this effect is specifically present in ERPs related to early and late sensory processing, as well as response inhibition, but does not affect later attention- and memory-related P3 activity. Finally, the experiments provide comparable scenarios for the estimation of the signal-to-noise ratio to quantify effects related to the use of a head-mounted display (HMD). However, despite valid a priori reasons for fearing signal interference due to an HMD, we observed no significant drop in the robustness of our ERP measurements. PMID:28275346

  4. Cognitive-behavioral and electrophysiological evidence of the affective consequences of ignoring stimulus representations in working memory.

    PubMed

    De Vito, David; Ferrey, Anne E; Fenske, Mark J; Al-Aidroos, Naseem

    2018-06-01

    Ignoring visual stimuli in the external environment leads to decreased liking of those items, a phenomenon attributed to the affective consequences of attentional inhibition. Here we investigated the generality of this "distractor devaluation" phenomenon by asking whether ignoring stimuli represented internally within visual working memory has the same affective consequences. In two experiments we presented participants with two or three visual stimuli and then, after the stimuli were no longer visible, provided an attentional cue indicating which item in memory was the target they would have to later recall, and which were task-irrelevant distractors. Participants subsequently judged how much they liked these stimuli. Previously ignored distractors were consistently rated less favorably than targets, replicating prior findings of distractor devaluation. To gain converging evidence, in Experiment 2 we also examined the electrophysiological processes associated with devaluation by measuring individual differences in attention (N2pc) and working memory (CDA) event-related potentials following the attention cue. Larger amplitude of an N2pc-like component was associated with greater devaluation, suggesting that individuals displaying more effective selection of memory targets (an act aided by distractor inhibition) displayed greater levels of distractor devaluation. Individuals showing a larger post-cue CDA amplitude (but not pre-cue CDA amplitude) also showed greater distractor devaluation, supporting prior evidence that visual working-memory resources have a functional role in effecting devaluation. Together, these findings demonstrate that ignoring working-memory representations has affective consequences, and add to the growing evidence that the contribution of selective-attention mechanisms to a wide range of human thoughts and behaviors leads to devaluation.

  5. A comparative study of visual reaction time in table tennis players and healthy controls.

    PubMed

    Bhabhor, Mahesh K; Vidja, Kalpesh; Bhanderi, Priti; Dodhia, Shital; Kathrotia, Rajesh; Joshi, Varsha

    2013-01-01

    Visual reaction time is the time required to respond to visual stimuli. The present study was conducted to measure visual reaction time in 209 subjects: 50 table tennis (TT) players and 159 healthy controls. Visual reaction time was measured in both groups with the DirectRT computerized software. Simple visual reaction time was measured: during testing, the visual stimulus was presented eighteen times and the average reaction time was taken as the final reaction time. The study shows that table tennis players had faster reaction times than healthy controls. On multivariate analysis, TT players had 74.121 ms (95% CI 49.4 to 98.8 ms) faster reaction times than non-TT players of the same age and BMI. Playing TT also had a more profound influence on visual reaction time than BMI. Our study concluded that persons involved in sports have faster reaction times than controls. These results support the view that playing table tennis is beneficial to eye-hand reaction time and improves concentration and alertness.
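
    The adjusted group difference above is the kind of estimate produced by a regression with age and BMI as covariates. The abstract does not specify the exact model, so the following is only a sketch of such an analysis on synthetic data (all variable names hypothetical):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n_tt, n_ctrl = 50, 159                                # group sizes from the study
        df = pd.DataFrame({
            "tt_player": np.r_[np.ones(n_tt), np.zeros(n_ctrl)],
            "age": rng.normal(22, 3, n_tt + n_ctrl),
            "bmi": rng.normal(23, 2.5, n_tt + n_ctrl),
        })
        # synthetic RTs: TT players ~74 ms faster, plus mild age and BMI effects
        df["rt_ms"] = (330 - 74 * df.tt_player + 1.5 * (df.age - 22)
                       + 2.0 * (df.bmi - 23) + rng.normal(0, 40, len(df)))

        fit = smf.ols("rt_ms ~ tt_player + age + bmi", data=df).fit()
        print(fit.params["tt_player"])                        # adjusted group difference
        print(fit.conf_int().loc["tt_player"])                # its 95% confidence interval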

  6. Visual Task Demands and the Auditory Mismatch Negativity: An Empirical Study and a Meta-Analysis

    PubMed Central

    Wiens, Stefan; Szychowska, Malina; Nilsson, Mats E.

    2016-01-01

    Because the auditory system is particularly useful in monitoring the environment, previous research has examined whether task-irrelevant auditory distractors are processed even if subjects focus their attention on visual stimuli. This research suggests that attentionally demanding visual tasks decrease the auditory mismatch negativity (MMN) to simultaneously presented auditory distractors. Because a recent behavioral study found that high visual perceptual load decreased detection sensitivity for simultaneous tones, we used a similar task (n = 28) to determine whether high visual perceptual load would reduce the auditory MMN. Results suggested that perceptual load did not decrease the MMN. At face value, these nonsignificant findings may suggest that effects of perceptual load on the MMN are smaller than those of other demanding visual tasks. If so, effect sizes should differ systematically between the present and previous studies. We conducted a selective meta-analysis of published studies in which the MMN was derived from the EEG, the visual task demands were continuous and varied between high and low within the same task, and the task-irrelevant tones were presented in a typical oddball paradigm simultaneously with the visual stimuli. Because the meta-analysis suggested that the present (null) findings did not differ systematically from previous findings, the available evidence was combined. Results of this meta-analysis confirmed that demanding visual tasks reduce the MMN to auditory distractors. However, because the meta-analysis was based on small studies and because of the risk of publication biases, future studies should be preregistered with large samples (n > 150) to provide confirmatory evidence for the results of the present meta-analysis. These future studies should also use control conditions that reduce confounding effects of neural adaptation, and use load manipulations that are defined independently from their effects on the MMN. PMID:26741815
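
    Pooling the present null result with earlier studies, as the meta-analysis does, amounts to combining per-study effect sizes weighted by their precision. Below is a minimal random-effects (DerSimonian-Laird) sketch; the numbers are made up and the authors' exact pooling model may differ:

        import numpy as np

        def random_effects_meta(effects, variances):
            # DerSimonian-Laird pooling of per-study effect sizes
            effects, variances = np.asarray(effects), np.asarray(variances)
            w = 1.0 / variances                               # fixed-effect weights
            theta_fe = np.sum(w * effects) / np.sum(w)
            q = np.sum(w * (effects - theta_fe) ** 2)         # heterogeneity statistic Q
            c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
            tau2 = max(0.0, (q - (len(effects) - 1)) / c)     # between-study variance
            w_re = 1.0 / (variances + tau2)                   # random-effects weights
            theta = np.sum(w_re * effects) / np.sum(w_re)
            se = np.sqrt(1.0 / np.sum(w_re))
            return theta, theta - 1.96 * se, theta + 1.96 * se

        # illustrative effect sizes and variances only, not the reviewed studies
        print(random_effects_meta([0.5, 0.2, 0.8, 0.4], [0.04, 0.09, 0.06, 0.05]))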

  7. The company they keep: Background similarity influences transfer of aftereffects from second- to first-order stimuli

    PubMed Central

    Qian, Ning; Dayan, Peter

    2013-01-01

    A wealth of studies has found that adapting to second-order visual stimuli has little effect on the perception of first-order stimuli. This is physiologically and psychologically troubling, since many cells show similar tuning to both classes of stimuli, and since adapting to first-order stimuli leads to aftereffects that do generalize to second-order stimuli. Focusing on high-level visual stimuli, we recently proposed the novel explanation that the lack of transfer arises partially from the characteristically different backgrounds of the two stimulus classes. Here, we consider the effect of stimulus backgrounds in the far more prevalent, lower-level, case of the orientation tilt aftereffect. Using a variety of first- and second-order oriented stimuli, we show that we could increase or decrease both within- and cross-class adaptation aftereffects by increasing or decreasing the similarity of the otherwise apparently uninteresting or irrelevant backgrounds of adapting and test patterns. Our results suggest that similarity between background statistics of the adapting and test stimuli contributes to low-level visual adaptation, and that these backgrounds are thus not discarded by visual processing but provide contextual modulation of adaptation. Null cross-adaptation aftereffects must also be interpreted cautiously. These findings reduce the apparent inconsistency between psychophysical and neurophysiological data about first- and second-order stimuli. PMID:23732217

  8. Touch to see: neuropsychological evidence of a sensory mirror system for touch.

    PubMed

    Bolognini, Nadia; Olgiati, Elena; Xaiz, Annalisa; Posteraro, Lucio; Ferraro, Francesco; Maravita, Angelo

    2012-09-01

    The observation of touch can be grounded in the activation of brain areas underpinning direct tactile experience, namely the somatosensory cortices. What is the behavioral impact of such mirror sensory activity on visual perception? To address this issue, we investigated the causal interplay between observed and felt touch in right brain-damaged patients, as a function of their damaged visual and/or tactile modalities. Patients and healthy controls underwent a detection task comprising visual stimuli that either depicted touch or lacked any tactile component. Touch and No-touch stimuli were presented in egocentric or allocentric perspectives. Seeing touch, regardless of the viewing perspective, affects visual perception differently depending on which sensory modality is damaged: in patients with a selective visual deficit, but without any tactile defect, the sight of touch improves the visual impairment; this effect is associated with a lesion to the supramarginal gyrus. In patients with a tactile deficit, but intact visual perception, the sight of touch disrupts visual processing, inducing a visual extinction-like phenomenon. This disruptive effect is associated with damage to the postcentral gyrus. Hence, damage to the somatosensory system can lead to dysfunctional visual processing, and intact somatosensory processing can aid visual perception.

  9. Can Blindsight Be Superior to "Sighted-Sight"?

    ERIC Educational Resources Information Center

    Trevethan, Ceri T.; Sahraie, Arash; Weiskrantz, Larry

    2007-01-01

    DB, the first blindsight case to be tested extensively (Weiskrantz, 1986) has demonstrated the ability to detect and discriminate a range of visual stimuli presented within his perimetrically blind visual field defect. In a temporal two alternative forced choice (2AFC) detection experiment we have investigated the limits of DB's detection ability…

  10. Visual Object Pattern Separation Varies in Older Adults

    ERIC Educational Resources Information Center

    Holden, Heather M.; Toner, Chelsea; Pirogovsky, Eva; Kirwan, C. Brock; Gilbert, Paul E.

    2013-01-01

    Young and nondemented older adults completed a visual object continuous recognition memory task in which some stimuli (lures) were similar but not identical to previously presented objects. The lures were hypothesized to result in increased interference and increased pattern separation demand. To examine variability in object pattern separation…

  11. Enhanced Access to Early Visual Processing of Perceptual Simultaneity in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Falter, Christine M.; Braeutigam, Sven; Nathan, Roger; Carrington, Sarah; Bailey, Anthony J.

    2013-01-01

    We compared judgements of the simultaneity or asynchrony of visual stimuli in individuals with autism spectrum disorders (ASD) and typically-developing controls using Magnetoencephalography (MEG). Two vertical bars were presented simultaneously or non-simultaneously with two different stimulus onset delays. Participants with ASD distinguished…

  12. Visual-Auditory Integration during Speech Imitation in Autism

    ERIC Educational Resources Information Center

    Williams, Justin H. G.; Massaro, Dominic W.; Peel, Natalie J.; Bosseler, Alexis; Suddendorf, Thomas

    2004-01-01

    Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional "mirror neuron" systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a "virtual" head (Baldi), delivered speech stimuli for…

  13. Gender differences in emotion recognition: Impact of sensory modality and emotional category.

    PubMed

    Lambrecht, Lena; Kreifelts, Benjamin; Wildgruber, Dirk

    2014-04-01

    Results from studies on gender differences in emotion recognition vary, depending on the types of emotion and the sensory modalities used for stimulus presentation. This makes comparability between different studies problematic. This study investigated emotion recognition of healthy participants (N = 84; 40 males; ages 20 to 70 years), using dynamic stimuli, displayed by two genders in three different sensory modalities (auditory, visual, audio-visual) and five emotional categories. The participants were asked to categorise the stimuli on the basis of their nonverbal emotional content (happy, alluring, neutral, angry, and disgusted). Hit rates and category selection biases were analysed. Women were found to be more accurate in recognition of emotional prosody. This effect was partially mediated by hearing loss for the frequency of 8,000 Hz. Moreover, there was a gender-specific selection bias for alluring stimuli: Men, as compared to women, chose "alluring" more often when a stimulus was presented by a woman as compared to a man.

  14. Phonological processing of ignored distractor pictures, an fMRI investigation.

    PubMed

    Bles, Mart; Jansma, Bernadette M

    2008-02-11

    Neuroimaging studies of attention often focus on interactions between stimulus representations and top-down selection mechanisms in visual cortex. Less is known about the neural representation of distractor stimuli beyond visual areas, and the interactions between stimuli in linguistic processing areas. In the present study, participants viewed simultaneously presented line drawings at peripheral locations while in the MRI scanner. The names of the objects depicted in these pictures were either phonologically related (i.e. shared the same consonant-vowel onset construction) or unrelated. Attention was directed either at the linguistic properties of one of these pictures, or at the fixation point (i.e. away from the pictures). Phonological representations of unattended pictures could be detected in the posterior superior temporal gyrus, the inferior frontal gyrus, and the insula. Under some circumstances, then, the names of ignored distractor pictures are retrieved by linguistic areas. This implies that selective attention to a specific location does not completely filter out the representations of distractor stimuli at early perceptual stages.

  15. Don't words come easy? A psychophysical exploration of word superiority

    PubMed Central

    Starrfelt, Randi; Petersen, Anders; Vangkilde, Signe

    2013-01-01

    Words are made of letters, and yet sometimes it is easier to identify a word than a single letter. This word superiority effect (WSE) has been observed when written stimuli are presented very briefly or degraded by visual noise. We compare performance with letters and words in three experiments, to explore the extent and limits of the WSE. Using a carefully controlled list of three-letter words, we show that a WSE can be revealed in vocal reaction times even to undegraded stimuli. With a novel combination of psychophysics and mathematical modeling, we further show that the typical WSE is specifically reflected in perceptual processing speed: single words are simply processed faster than single letters. Intriguingly, when multiple stimuli are presented simultaneously, letters are perceived more easily than words, and this is reflected both in perceptual processing speed and visual short term memory (VSTM) capacity. So, even if single words come easy, there is a limit to the WSE. PMID:24027510

  16. The sense of agency is action-effect causality perception based on cross-modal grouping.

    PubMed

    Kawabe, Takahiro; Roseboom, Warrick; Nishida, Shin'ya

    2013-07-22

    Sense of agency, the experience of controlling external events through one's actions, stems from contiguity between action- and effect-related signals. Here we show that human observers link their action- and effect-related signals using a computational principle common to cross-modal sensory grouping. We first report that the detection of a delay between tactile and visual stimuli is enhanced when both stimuli are synchronized with separate auditory stimuli (experiment 1). This occurs because the synchronized auditory stimuli hinder the potential grouping between the tactile and visual stimuli. We subsequently demonstrate an analogous effect when the two events are an observer's key press (an action) and a sensory event. This change is associated with a modulation in sense of agency; namely, sense of agency, as evaluated by apparent compressions of action-effect intervals (intentional binding) or subjective causality ratings, is impaired when both the participant's action and its putative visual effect are synchronized with auditory tones (experiments 2 and 3). Moreover, a similar role of action-effect grouping in determining sense of agency is demonstrated when the additional signal is presented in the same modality as the effect event (experiment 4). These results are consistent with the view that sense of agency is the result of general processes of causal perception and that cross-modal grouping plays a central role in these processes.

  17. The sense of agency is action–effect causality perception based on cross-modal grouping

    PubMed Central

    Kawabe, Takahiro; Roseboom, Warrick; Nishida, Shin'ya

    2013-01-01

    Sense of agency, the experience of controlling external events through one's actions, stems from contiguity between action- and effect-related signals. Here we show that human observers link their action- and effect-related signals using a computational principle common to cross-modal sensory grouping. We first report that the detection of a delay between tactile and visual stimuli is enhanced when both stimuli are synchronized with separate auditory stimuli (experiment 1). This occurs because the synchronized auditory stimuli hinder the potential grouping between the tactile and visual stimuli. We subsequently demonstrate an analogous effect when the two events are an observer's key press (an action) and a sensory event. This change is associated with a modulation in sense of agency; namely, sense of agency, as evaluated by apparent compressions of action–effect intervals (intentional binding) or subjective causality ratings, is impaired when both the participant's action and its putative visual effect are synchronized with auditory tones (experiments 2 and 3). Moreover, a similar role of action–effect grouping in determining sense of agency is demonstrated when the additional signal is presented in the same modality as the effect event (experiment 4). These results are consistent with the view that sense of agency is the result of general processes of causal perception and that cross-modal grouping plays a central role in these processes. PMID:23740784

  18. Bank of Standardized Stimuli (BOSS) phase II: 930 new normative photos.

    PubMed

    Brodeur, Mathieu B; Guérard, Katherine; Bouras, Maria

    2014-01-01

    Researchers have only recently started to take advantage of developments in technology and communication for sharing data and documents. However, the exchange of experimental material has not yet benefited from this progress. In order to facilitate access to experimental material, the Bank of Standardized Stimuli (BOSS) project was created as a free standardized set of visual stimuli accessible to all researchers through a normative database. The BOSS is currently the largest existing photo bank providing norms for more than 15 dimensions (e.g. familiarity, visual complexity, manipulability, etc.), making the BOSS an extremely useful research tool and a means to homogenize scientific data worldwide. The first phase of the BOSS was completed in 2010 and contained 538 normative photos. The second phase of the BOSS project, presented in this article, builds on the previous phase by adding 930 new normative photo stimuli. New categories of concepts were introduced, including animals, building infrastructures, body parts, and vehicles, and the number of photos in other categories was increased. All new photos of the BOSS were normalized relative to their name, familiarity, visual complexity, object agreement, viewpoint agreement, and manipulability. These norms are a valuable asset for characterizing the stimuli as a function of the requirements of a given study and for controlling for potential confounding effects.

  19. Integrative cortical dysfunction and pervasive motion perception deficit in fragile X syndrome.

    PubMed

    Kogan, C S; Bertone, A; Cornish, K; Boutet, I; Der Kaloustian, V M; Andermann, E; Faubert, J; Chaudhuri, A

    2004-11-09

    Fragile X syndrome (FXS) is associated with neurologic deficits recently attributed to the magnocellular pathway of the lateral geniculate nucleus. To test the hypotheses that FXS individuals (1) have a pervasive visual motion perception impairment affecting neocortical circuits in the parietal lobe and (2) have deficits in integrative neocortical mechanisms necessary for perception of complex stimuli, psychophysical tests of visual motion and form perception defined by either first-order (luminance) or second-order (texture) attributes were used to probe early and later occipito-temporal and occipito-parietal functioning. When compared to developmental- and age-matched controls, FXS individuals displayed severe impairments in first- and second-order motion perception. This deficit was accompanied by near-normal perception of first-order form stimuli but not second-order form stimuli. Impaired visual motion processing for first- and second-order stimuli suggests that both early- and later-level neurologic functions of the parietal lobe are affected in FXS. Furthermore, this deficit likely stems from abnormal input from the magnocellular compartment of the lateral geniculate nucleus. Impaired visual form and motion processing for complex visual stimuli, with normal processing for simple (i.e., first-order) form stimuli, suggests that FXS individuals have normal early form processing accompanied by a generalized impairment in the neurologic mechanisms necessary for integrating all early visual input.

  20. Effect of attentional load on audiovisual speech perception: evidence from ERPs.

    PubMed

    Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E; Soto-Faraco, Salvador; Tiippana, Kaisa

    2014-01-01

    Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  1. Selective attention to visual compound stimuli in squirrel monkeys (Saimiri sciureus).

    PubMed

    Ploog, Bertram O

    2011-05-01

    Five squirrel monkeys served under a simultaneous discrimination paradigm with visual compound stimuli that allowed measurement of excitatory and inhibitory control exerted by individual stimulus components (form and luminance/"color"), which could not be presented in isolation (i.e., form could not be presented without color). After performance exceeded a criterion of 75% correct during training, unreinforced test trials with stimuli comprising recombined training stimulus components were interspersed while the overall reinforcement rate remained constant for training and testing. The training-testing series was then repeated with reversed reinforcement contingencies. The findings were that color acquired greater excitatory control than form under the original condition, that no such difference was found for the reversal condition or for inhibitory control under either condition, and that overall inhibitory control was less pronounced than excitatory control. The remarkably accurate performance throughout suggested that a forced 4-s delay between the stimulus presentation and the opportunity to respond was effective in reducing "impulsive" responding, which has implications for suppressing impulsive responding in children with autism and with attention deficit disorder. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. Extracting alpha band modulation during visual spatial attention without flickering stimuli using common spatial pattern.

    PubMed

    Fujisawa, Junya; Touyama, Hideaki; Hirose, Michitaka

    2008-01-01

    In this paper, we focus on alpha band modulation during visual spatial attention in the absence of flickering visual stimuli. Visual spatial attention has been expected to provide a new channel for non-invasive independent brain-computer interfaces (BCIs), but little work has been done on this interfacing method. The flickering stimuli used in previous work reduce independence and are difficult to use in practice. We therefore investigated whether visual spatial attention could be detected without such stimuli. Furthermore, common spatial patterns (CSP) were applied for the first time to brain states during visual spatial attention. The performance evaluation was based on three brain states: attention to the left, right, and center. Thirty-channel scalp electroencephalographic (EEG) signals over the occipital cortex were recorded from five subjects. Without CSP, the analyses yielded an average classification accuracy of 66.44% (range 55.42 to 72.27%) in discriminating the left- and right-attention classes. With CSP, the average classification accuracy was 75.39% (range 63.75 to 86.13%). This suggests that CSP is useful in the context of visual spatial attention, and that alpha band modulation during visual spatial attention without flickering stimuli could provide a new channel for independent BCI, alongside motor imagery.
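
    CSP finds spatial filters whose projected variance is maximal for one class and minimal for the other, which is why it sharpens the contrast between left- and right-attention EEG. A minimal two-class sketch follows (NumPy/SciPy; the helper names and the trace normalization are illustrative choices, not necessarily the paper's implementation):

        import numpy as np
        from scipy.linalg import eigh

        def csp_filters(trials_a, trials_b, n_pairs=3):
            # trials_*: (n_trials, n_channels, n_samples) band-passed epochs
            def mean_cov(trials):
                return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)

            ca, cb = mean_cov(trials_a), mean_cov(trials_b)
            # generalized eigenproblem: extreme eigenvalues give the most
            # discriminative variance ratios between the two classes
            vals, vecs = eigh(ca, ca + cb)
            order = np.argsort(vals)
            picks = np.r_[order[:n_pairs], order[-n_pairs:]]
            return vecs[:, picks]                             # (n_channels, 2*n_pairs)

        def log_var_features(trials, filters):
            # log-variance of the spatially filtered epochs, the usual CSP feature
            proj = np.einsum("ck,ncs->nks", filters, trials)
            var = proj.var(axis=2)
            return np.log(var / var.sum(axis=1, keepdims=True))

    The resulting log-variance features would then feed a simple classifier (e.g., LDA), which is one plausible route to accuracy gains like the 66% to 75% improvement reported above.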

  3. Parameters of semantic multisensory integration depend on timing and modality order among people on the autism spectrum: evidence from event-related potentials.

    PubMed

    Russo, N; Mottron, L; Burack, J A; Jemel, B

    2012-07-01

    Individuals with autism spectrum disorders (ASD) report difficulty integrating simultaneously presented visual and auditory stimuli (Iarocci & McDonald, 2006), albeit showing enhanced perceptual processing of unisensory stimuli, as well as an enhanced role of perception in higher-order cognitive tasks (Enhanced Perceptual Functioning (EPF) model; Mottron, Dawson, Soulières, Hubert, & Burack, 2006). Individuals with an ASD also integrate auditory-visual inputs over longer periods of time than matched typically developing (TD) peers (Kwakye, Foss-Feig, Cascio, Stone & Wallace, 2011). To tease apart the dichotomy of both extended multisensory processing and enhanced perceptual processing, we used behavioral and electrophysiological measurements of audio-visual integration among persons with ASD. 13 TD and 14 autistics matched on IQ completed a forced choice multisensory semantic congruence task requiring speeded responses regarding the congruence or incongruence of animal sounds and pictures. Stimuli were presented simultaneously or sequentially at various stimulus onset asynchronies in both auditory first and visual first presentations. No group differences were noted in reaction time (RT) or accuracy. The latency at which congruent and incongruent waveforms diverged was the component of interest. In simultaneous presentations, congruent and incongruent waveforms diverged earlier (circa 150 ms) among persons with ASD than among TD individuals (around 350 ms). In sequential presentations, asymmetries in the timing of neuronal processing were noted in ASD which depended on stimulus order, but these were consistent with the nature of specific perceptual strengths in this group. These findings extend the Enhanced Perceptual Functioning Model to the multisensory domain, and provide a more nuanced context for interpreting ERP findings of impaired semantic processing in ASD. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. Electrophysiological evidence of altered visual processing in adults who experienced visual deprivation during infancy.

    PubMed

    Segalowitz, Sidney J; Sternin, Avital; Lewis, Terri L; Dywan, Jane; Maurer, Daphne

    2017-04-01

    We examined the role of early visual input in visual system development by testing adults who had been born with dense bilateral cataracts that blocked all patterned visual input during infancy until the cataractous lenses were removed surgically and the eyes fitted with compensatory contact lenses. Patients viewed checkerboards and textures to explore early processing regions (V1, V2), Glass patterns to examine global form processing (V4), and moving stimuli to explore global motion processing (V5). Patients' ERPs differed from those of controls in that (1) the V1 component was much smaller for all but the simplest stimuli and (2) extrastriate components did not differentiate amongst texture stimuli, Glass patterns, or motion stimuli. The results indicate that early visual deprivation contributes to permanent abnormalities at early and mid levels of visual processing, consistent with enduring behavioral deficits in the ability to process complex textures, global form, and global motion. © 2017 Wiley Periodicals, Inc.

  5. Effects of Response Task and Accessory Stimuli on Redundancy Gain: Tests of the Hemispheric Coactivation Model

    ERIC Educational Resources Information Center

    Miller, Jeff; Van Nes, Fenna

    2007-01-01

    Two experiments tested predictions of the hemispheric coactivation model for redundancy gain (J. O. Miller, 2004). Simple reaction time was measured in divided attention tasks with visual stimuli presented to the left or right of fixation or redundantly to both sides. Experiment 1 tested the prediction that redundancy gain--the decrease in…

  6. Parameters of Semantic Multisensory Integration Depend on Timing and Modality Order among People on the Autism Spectrum: Evidence from Event-Related Potentials

    ERIC Educational Resources Information Center

    Russo, N.; Mottron, L.; Burack, J. A.; Jemel, B.

    2012-01-01

    Individuals with autism spectrum disorders (ASD) report difficulty integrating simultaneously presented visual and auditory stimuli (Iarocci & McDonald, 2006), albeit showing enhanced perceptual processing of unisensory stimuli, as well as an enhanced role of perception in higher-order cognitive tasks (Enhanced Perceptual Functioning (EPF) model;…

  7. Does a Sensory Processing Deficit Explain Counting Accuracy on Rapid Visual Sequencing Tasks in Adults with and without Dyslexia?

    ERIC Educational Resources Information Center

    Conlon, Elizabeth G.; Wright, Craig M.; Norris, Karla; Chekaluk, Eugene

    2011-01-01

    The experiments conducted aimed to investigate whether reduced accuracy when counting stimuli presented in rapid temporal sequence in adults with dyslexia could be explained by a sensory processing deficit, a general slowing in processing speed or difficulties shifting attention between stimuli. To achieve these aims, the influence of the…

  8. Mood Modulates Auditory Laterality of Hemodynamic Mismatch Responses during Dichotic Listening

    PubMed Central

    Schock, Lisa; Dyck, Miriam; Demenescu, Liliana R.; Edgar, J. Christopher; Hertrich, Ingo; Sturm, Walter; Mathiak, Klaus

    2012-01-01

    Hemodynamic mismatch responses can be elicited by deviant stimuli in a sequence of standard stimuli even during cognitively demanding tasks. Emotional context is known to modulate lateralized processing. Right-hemispheric negative emotion processing may bias attention to the right and enhance processing of right-ear stimuli. The present study examined the influence of induced mood on lateralized pre-attentive auditory processing of dichotic stimuli using functional magnetic resonance imaging (fMRI). Faces expressing emotions (sad/happy/neutral) were presented in a blocked design while a dichotic oddball sequence with consonant-vowel (CV) syllables in an event-related design was simultaneously administered. Twenty healthy participants were instructed to feel the emotion perceived on the images and to ignore the syllables. Deviant sounds reliably activated bilateral auditory cortices and confirmed attention effects by modulation of visual activity. Sad mood induction activated visual, limbic and right prefrontal areas. A lateralization effect of the emotion-attention interaction was reflected in a stronger response to right-ear deviants in the right auditory cortex during sad mood. This imbalance of resources may be a neurophysiological correlate of laterality in sad mood and depression. Conceivably, the compensatory right-hemispheric enhancement of resources elicits increased ipsilateral processing. PMID:22384105

  9. Temporal allocation of attention toward threat in individuals with posttraumatic stress symptoms.

    PubMed

    Amir, Nader; Taylor, Charles T; Bomyea, Jessica A; Badour, Christal L

    2009-12-01

    Research suggests that individuals with posttraumatic stress disorder (PTSD) selectively attend to threat-relevant information. However, little is known about how initial detection of threat influences the processing of subsequently encountered stimuli. To address this issue, we used a rapid serial visual presentation paradigm (RSVP; Raymond, J. E., Shapiro, K. L., & Arnell, K. M. (1992). Temporary suppression of visual processing in an RSVP task: An attentional blink? Journal of Experimental Psychology: Human Perception and Performance, 18, 849-860) to examine temporal allocation of attention to threat-related and neutral stimuli in individuals with PTSD symptoms (PTS), traumatized individuals without PTSD symptoms (TC), and non-anxious controls (NAC). Participants were asked to identify one or two targets in an RSVP stream. Typically, processing of the first target decreases the accuracy of identifying the second target as a function of the temporal lag between targets. Results revealed that the PTS group was significantly more accurate in detecting a neutral target when it was presented 300 or 500 ms after threat-related stimuli than when the target followed neutral stimuli. These results suggest that individuals with PTSD may process trauma-relevant information more rapidly and efficiently than benign information.
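
    The dependent measure in such a design is second-target accuracy conditioned on the preceding stimulus type and the target lag. A toy pandas sketch with synthetic trials (column names hypothetical) shows how the reported pattern, higher PTS accuracy at the 300 and 500 ms lags after threat cues, would surface:

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(3)
        n = 3000
        trials = pd.DataFrame({
            "group": rng.choice(["PTS", "TC", "NAC"], n),
            "cue": rng.choice(["threat", "neutral"], n),
            "lag_ms": rng.choice([100, 300, 500, 700], n),
        })
        # build in the reported effect: PTS detect neutral targets better when
        # they follow threat stimuli at the 300/500 ms lags
        p_hit = 0.6 + 0.15 * ((trials.group == "PTS") & (trials.cue == "threat")
                              & trials.lag_ms.isin([300, 500]))
        trials["correct"] = rng.random(n) < p_hit

        # mean detection accuracy by group, preceding stimulus type, and lag
        print(trials.groupby(["group", "cue", "lag_ms"])["correct"]
                    .mean().unstack("lag_ms").round(2))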

  10. Global shape information increases but color information decreases the composite face effect.

    PubMed

    Retter, Talia L; Rossion, Bruno

    2015-01-01

    The separation of visual shape and surface information may be useful for understanding holistic face perception--that is, the perception of a face as a single unit (Jiang, Blanz, & Rossion, 2011, Visual Cognition, 19, 1003-1034). A widely used measure of holistic face perception is the composite face effect (CFE), in which identical top face halves appear different when aligned with bottom face halves from different identities. In the present study the influences of global face shape (i.e., the contour of the face) and color information on the CFE are investigated, with the hypothesis that global face shape supports but color impairs holistic face perception as measured in this paradigm. In experiment 1 the CFE is significantly larger when face stimuli possess natural global shape information than when they are cropped to a generic (i.e., oval) global shape; this effect is not found when the stimuli are presented inverted. In experiment 2 the CFE is significantly smaller when face stimuli are presented with color information than when they are presented in grayscale. These findings indicate that grayscale stimuli maintaining natural global face shape information provide the most apt measure of holistic face perception in the behavioral composite face paradigm. More generally, they show that reducing different types of information diagnostic for individual face perception can have opposite effects on the CFE, illustrating the functional dissociation between shape and surface information in face perception.

  11. [Sound improves distinction of low intensities of light in the visual cortex of a rabbit].

    PubMed

    Polianskiĭ, V B; Alymkulov, D E; Evtikhin, D V; Chernyshev, B V

    2011-01-01

    Electrodes were implanted into the cranium above the primary visual cortex of four rabbits (Oryctolagus cuniculus). At the first stage, visual evoked potentials (VEPs) were recorded in response to the substitution of threshold visual stimuli (0.28 and 0.31 cd/m2). Then a sound (2000 Hz, 84 dB, duration 40 ms) was added simultaneously to every visual stimulus. Sounds alone (without visual stimuli) did not produce a VEP response. It was found that the amplitude of VEP component N1 (85-110 ms) in response to complex (visual plus sound) stimuli increased 1.6-fold as compared to visual stimulation alone. At the second stage, paired substitutions of 8 different visual stimuli (range 0.38-20.2 cd/m2) by each other were performed. Sensory spaces of intensity were reconstructed on the basis of factor analysis. Sensory spaces for the complexes were reconstructed in a similar way for simultaneous visual and sound stimulation. Comparison of the vectors representing the stimuli in the spaces showed that the addition of a sound led to a 1.4-fold expansion of the space occupied by the smaller intensities (0.28; 1.02; 3.05; 6.35 cd/m2). The addition of the sound also led to an arrangement of the intensities in ascending order. At the same time, the sound narrowed the space of the larger intensities (8.48; 13.7; 16.8; 20.2 cd/m2) 1.33-fold. It is suggested that the addition of a sound improves the distinction of smaller intensities and impairs the distinction of larger intensities. The sensory spaces revealed by complex stimuli were two-dimensional. This may be a consequence of the integration of sound and light into a unified complex under simultaneous stimulation.

  12. [Neurophysiological correlates of learning disabilities in Japan].

    PubMed

    Miyao, M

    1999-05-01

    In the present study, we developed a new event-related potential (ERP) stimulator system applicable to simultaneous audio-visual stimuli, and tested it clinically on healthy adults and patients with learning disabilities (LD), using Japanese language task stimuli: hiragana letters, kanji letters, and kanji letters with spoken words. (1) The origins of the P300 component were identified in these tasks. The sources in the former two tasks were located in different areas. With the simultaneous task stimuli, a combination of the two P300 sources was observed, with dominance in the left posterior inferior temporal area. (2) Among patients with learning disabilities, those with reading and writing disability showed low amplitudes in the left hemisphere in response to visual language task stimuli with kanji and hiragana letters, in contrast to healthy children and LD patients with arithmetic disability. (3) To evaluate the effect of methylphenidate (10 mg) on ADD, paired-associate ERPs were recorded. Methylphenidate increased the amplitude of P300.

  13. Premotor activations in response to visually presented single letters depend on the hand used to write: a study on left-handers.

    PubMed

    Longcamp, Marieke; Anton, Jean-Luc; Roth, Muriel; Velay, Jean-Luc

    2005-01-01

    In a previous fMRI study on right-handers (Rhrs), we reported that part of the left ventral premotor cortex (BA6) was activated when alphabetical characters were passively observed and that the same region was also involved in handwriting [Longcamp, M., Anton, J. L., Roth, M., & Velay, J. L. (2003). Visual presentation of single letters activates a premotor area involved in writing. NeuroImage, 19, 1492-1500]. We therefore suggested that letter-viewing may induce automatic involvement of handwriting movements. In the present study, in order to confirm this hypothesis, we carried out a similar fMRI experiment on a group of left-handed subjects (Lhrs). We reasoned that if the above assumption was correct, visual perception of letters by Lhrs might automatically activate cortical motor areas coding for left-handed writing movements, i.e., areas located in the right hemisphere. The visual stimuli used here were either single letters, single pseudoletters, or a control stimulus. The subjects were asked to watch these stimuli attentively, and no response was required. The results showed that a ventral premotor cortical area (BA6) in the right hemisphere was specifically activated when Lhrs looked at letters and not at pseudoletters. This right area was symmetrically located with respect to the left one activated under the same circumstances in Rhrs. This finding supports the hypothesis that visual perception of written language evokes covert motor processes. In addition, a bilateral area, also located in the premotor cortex (BA6), but more ventrally and medially, was found to be activated in response to both letters and pseudoletters. This premotor region, which was not activated correspondingly in Rhrs, might be involved in the processing of graphic stimuli, whatever their degree of familiarity.

  14. Understanding the allocation of attention when faced with varying perceptual load in partial report: a computational approach.

    PubMed

    Kyllingsbæk, Søren; Sy, Jocelyn L; Giesbrecht, Barry

    2011-05-01

    The allocation of visual processing capacity is a key topic in studies and theories of visual attention. The load theory of Lavie (1995) proposes that allocation happens in two steps: processing resources are first allocated to task-relevant stimuli, and remaining capacity then 'spills over' to task-irrelevant distractors. In contrast, the Theory of Visual Attention (TVA) proposed by Bundesen (1990) assumes that allocation happens in a single step in which processing capacity is allocated to all stimuli, both task-relevant and task-irrelevant, in proportion to their relative attentional weight. Here we present data from two partial report experiments where we varied the number and discriminability of the task-irrelevant stimuli (Experiment 1) and perceptual load (Experiment 2). The TVA fitted the data of the two experiments well, thus favoring the simpler explanation with a single step of capacity allocation. We also show that the effects of varying perceptual load can only be explained by a combined effect of allocation of processing capacity and limits in visual working memory. Finally, we link the results to processing capacity understood at the neural level, based on the neural theory of visual attention by Bundesen et al. (2005). Copyright © 2010 Elsevier Ltd. All rights reserved.
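
    TVA's single-step allocation can be written as v(x, i) = eta(x, i) * beta_i * w_x / sum_z(w_z), with the rates sharing a fixed total capacity C. A minimal sketch under one common parameterization (all values illustrative):

        import numpy as np

        def tva_rates(eta, beta, weights, C):
            # eta: (n_objects, n_categories) sensory evidence
            # beta: (n_categories,) perceptual decision bias
            # weights: (n_objects,) attentional weights
            w = weights / weights.sum()          # one-step allocation across ALL items
            v = eta * beta[None, :] * w[:, None]
            return C * v / v.sum()               # rates exhaust the total capacity C

        # two task-relevant and two task-irrelevant items: distractors receive low
        # but nonzero weight, so they get capacity in the same single step,
        # unlike load theory's two-step "spill over"
        eta = np.ones((4, 1))
        beta = np.array([1.0])
        print(tva_rates(eta, beta, np.array([1.0, 1.0, 0.2, 0.2]), C=50.0).ravel())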

  15. EyeMusic: Introducing a "visual" colorful experience for the blind using auditory sensory substitution.

    PubMed

    Abboud, Sami; Hanassy, Shlomi; Levy-Tzedek, Shelly; Maidenbaum, Shachar; Amedi, Amir

    2014-01-01

    Sensory-substitution devices (SSDs) provide auditory or tactile representations of visual information. These devices often generate unpleasant sensations and mostly lack color information. We present here a novel SSD aimed at addressing these issues. We developed the EyeMusic, a novel visual-to-auditory SSD for the blind that provides both shape and color information. Our design uses musical notes on a pentatonic scale, generated by natural instruments, to convey the visual information in a pleasant manner. A short behavioral protocol was used to train blind users to extract shape and color information and to test their acquired abilities. Finally, we conducted a survey and a comparison task to assess the pleasantness of the generated auditory stimuli. We show that basic shape and color information can be decoded from the generated auditory stimuli. High performance levels were achieved by all participants following as little as 2-3 hours of training. Furthermore, we show that users indeed found the stimuli pleasant and potentially tolerable for prolonged use. The novel EyeMusic algorithm provides an intuitive and relatively pleasant way for the blind to extract shape and color information. We suggest that this might help facilitate visual rehabilitation because of the added functionality and enhanced pleasantness.
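
    The core mapping, image columns scanned left to right over time, vertical position mapped to pitch on a pentatonic scale, and color mapped to instrument timbre, can be sketched as a simple event generator. This is a toy illustration in the spirit of the description above; the note values and the color-to-instrument table are invented, not the published EyeMusic mapping:

        import numpy as np

        PENTATONIC_MIDI = [57, 60, 62, 64, 67, 69, 72, 74, 76, 79]  # one pentatonic ladder
        INSTRUMENT = {"white": "choir", "blue": "trumpet",
                      "red": "organ", "yellow": "violin"}            # illustrative only

        def sonify(image, colors, col_dur=0.05):
            # image: (n_rows, n_cols) brightness in [0, 1]; colors: matching labels
            # yields (onset_s, midi_note, loudness, instrument) events
            n_rows, n_cols = image.shape
            for col in range(n_cols):                 # x axis becomes time
                for row in range(n_rows):             # y axis becomes pitch (top = high)
                    if image[row, col] > 0.1:
                        step = (n_rows - 1 - row) * len(PENTATONIC_MIDI) // n_rows
                        yield (col * col_dur, PENTATONIC_MIDI[step], image[row, col],
                               INSTRUMENT.get(colors[row, col], "piano"))

        # tiny demo: a bright white diagonal sweeping down in pitch over time
        img = np.eye(8)
        labels = np.full((8, 8), "white", dtype=object)
        for event in sonify(img, labels):
            print(event)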

  16. Matching voice and face identity from static images.

    PubMed

    Mavica, Lauren W; Barenholtz, Elan

    2013-04-01

    Previous research has suggested that people are unable to correctly choose which unfamiliar voice and static image of a face belong to the same person. Here, we present evidence that people can perform this task with greater than chance accuracy. In Experiment 1, participants saw photographs of two same-gender models while simultaneously listening to a voice recording of one of the models pictured in the photographs, and chose which of the two faces they thought belonged to the same model as the recorded voice. We included three conditions: (a) the visual stimuli were frontal headshots (including the neck and shoulders) and the auditory stimuli were recordings of spoken sentences; (b) the visual stimuli contained only cropped faces and the auditory stimuli were full sentences; (c) we used the same pictures as Condition 1 but the auditory stimuli were recordings of a single word. In Experiment 2, participants performed the same task as in Condition 1 of Experiment 1 but with the stimuli presented in sequence. Participants also rated the models' faces and voices along multiple "physical" dimensions (e.g., weight) or "personality" dimensions (e.g., extroversion); the degree of agreement between the ratings for each model's face and voice was compared to performance for that model in the matching task. In all three conditions, we found that participants chose, at better than chance levels, which faces and voices belonged to the same person. Performance in the matching task was not correlated with the degree of agreement on any of the rated dimensions.

  17. Emotion Modulation of Visual Attention: Categorical and Temporal Characteristics

    PubMed Central

    Ciesielski, Bethany G.; Armstrong, Thomas; Zald, David H.; Olatunji, Bunmi O.

    2010-01-01

    Background: Experimental research has shown that emotional stimuli can either enhance or impair attentional performance. However, the relative effects of specific emotional stimuli and the specific time course of these differential effects are unclear. Methodology/Principal Findings: In the present study, participants (n = 50) searched for a single target within a rapid serial visual presentation of images. Irrelevant fear, disgust, erotic or neutral images preceded the target by two, four, six, or eight items. At lag 2, erotic images induced the greatest deficits in subsequent target processing compared to other images, consistent with a large emotional attentional blink. Fear and disgust images also produced larger attentional blinks at lag 2 than neutral images. Erotic, fear, and disgust images continued to induce greater deficits than neutral images at lags 4 and 6. However, target processing deficits induced by erotic, fear, and disgust images at intermediate lags (lags 4 and 6) did not consistently differ from each other. In contrast to performance at lags 2, 4, and 6, enhancement in target processing for emotional stimuli was observed in comparison to neutral stimuli at lag 8. Conclusions/Significance: These findings suggest that task-irrelevant emotional information, particularly erotica, impairs intentional allocation of attention at early temporal stages, but at later temporal stages emotional stimuli can have an enhancing effect on directed attention. These data suggest that the effects of emotional stimuli on attention can be both positive and negative depending upon temporal factors. PMID:21079773

  18. Sleep-waking cycle in the cerveau isolé cat.

    PubMed

    Slósarska, M; Zernicki, B

    1973-06-01

    The experiments were performed on ten chronic low cerveau isolé cats: in eight cats the brain stem transection was prepontine and in two cats, intercollicular. The preparations survived from 24 to 3 days. During 24-36 hr sessions the ECoG activity was continuously recorded, and the ocular and ECoG components of the orienting reflexes to visual and olfactory stimuli were studied. Three periods can be recognized in the recovery process of the low cerveau isolé cat, here called the acute, early chronic and late chronic stages. The acute stage lasts 1 day and the early chronic stage seems to last at least 3 weeks. During the acute stage the ability to desynchronize the EEG, either spontaneously or in response to sensory stimulation, is dramatically impaired and the pupils are fissurated; thus the cat is comatose. During the early chronic stage, although the ECoG synchronization-desynchronization cycle and the associated fissurated miosis-miosis cycle already exist, the episodes of ECoG desynchronization occupy only a small percentage of time and usually develop slowly. Visual and olfactory stimuli are often ineffective; thus the cat is semicomatose. In the late chronic stage the sleep-waking cycle is present. The animal can be easily awakened by visual and olfactory stimuli. The intensity of the ECoG arousal to visual stimuli and the distribution of time between alert wakefulness, drowsiness, light synchronized sleep and deep synchronized sleep are similar to those in the chronic pretrigeminal cat. The recovery of the cerveau isolé cat seems to reach a steady level when the sleep-waking cycle becomes similar to that present in the chronic pretrigeminal cat. During the whole survival period the vertical following reflex is abortive.

  19. The flanker compatibility effect as a function of visual angle, attentional focus, visual transients, and perceptual load: a search for boundary conditions.

    PubMed

    Miller, J

    1991-03-01

    When subjects must respond to a relevant center letter and ignore irrelevant flanking letters, the identities of the flankers produce a response compatibility effect, indicating that they are processed semantically at least to some extent. Because this effect decreases as the separation between target and flankers increases, the effect appears to result from imperfect early selection (attenuation). In the present experiments, several features of the focused attention paradigm were examined, in order to determine whether they might produce the flanker compatibility effect by interfering with the operation of an early selective mechanism. Specifically, the effect might be produced because the paradigm requires subjects to (1) attend exclusively to stimuli within a very small visual angle, (2) maintain a long-term attentional focus on a constant display location, (3) focus attention on an empty display location, (4) exclude onset-transient flankers from semantic processing, or (5) ignore some of the few stimuli in an impoverished visual field. The results indicate that none of these task features is required for semantic processing of unattended stimuli to occur. In fact, visual angle is the only one of the task features that clearly has a strong influence on the size of the flanker compatibility effect. The invariance of the flanker compatibility effect across these conditions suggests that the mechanism for early selection rarely, if ever, completely excludes unattended stimuli from semantic analysis. In addition, it shows that selective mechanisms are relatively insensitive to several factors that might be expected to influence them, thereby supporting the view that spatial separation has a special status for visual selective attention.

  20. Multisensory Integration in Non-Human Primates during a Sensory-Motor Task

    PubMed Central

    Lanz, Florian; Moret, Véronique; Rouiller, Eric Michel; Loquet, Gérard

    2013-01-01

    Daily, our central nervous system receives inputs via several sensory modalities, processes them and integrates the information in order to produce a suitable behavior. Remarkably, such multisensory integration brings all information together into a unified percept. One approach to investigating this property is to show that perception is better and faster when multimodal stimuli are used as compared to unimodal stimuli. This forms the first part of the present study, conducted in a non-human primate model (n = 2) engaged in a detection sensory-motor task where visual and auditory stimuli were displayed individually or simultaneously. The measured parameters were the reaction time (RT) between stimulus and onset of arm movement, the percentages of successes and errors, and the evolution of these parameters with training. As expected, RTs were shorter when the subjects were exposed to combined stimuli. The gains for both subjects were around 20 and 40 ms, as compared with the auditory and visual stimulus alone, respectively. Moreover, the number of correct responses increased in response to bimodal stimuli. We interpreted this multisensory advantage through a redundant signal effect, which decreases perceptual ambiguity, increases the speed of stimulus detection, and improves performance accuracy. The second part of the study presents single-unit recordings derived from the premotor cortex (PM) of the same subjects during the sensory-motor task. Response patterns to sensory/multisensory stimulation are documented and the proportions of specific response types are reported. Characterization of bimodal neurons indicates a mechanism of audio-visual integration, possibly through a decrease of inhibition. Nevertheless, the neural processing leading to a faster motor response from PM as a polysensory association cortical area remains unclear. PMID:24319421
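
    A standard way to decide whether such bimodal RT gains reflect genuine coactivation rather than mere statistical facilitation is Miller's race-model inequality: a race between independent unimodal detectors cannot produce P(RT_AV <= t) > P(RT_A <= t) + P(RT_V <= t). The sketch below tests that bound on synthetic RTs; the abstract does not state that the authors ran this particular test:

        import numpy as np

        def race_model_violation(rt_av, rt_a, rt_v, ts):
            # positive values indicate violations of the race-model bound,
            # i.e., evidence for coactivation at time t
            cdf = lambda rts, t: np.mean(np.asarray(rts) <= t)
            return [(t, cdf(rt_av, t) - min(1.0, cdf(rt_a, t) + cdf(rt_v, t)))
                    for t in ts]

        rng = np.random.default_rng(2)
        rt_a = rng.normal(260, 40, 300)               # auditory-only RTs (ms)
        rt_v = rng.normal(280, 40, 300)               # visual-only RTs (ms)
        rt_av = rng.normal(225, 35, 300)              # bimodal RTs, ~20-40 ms faster
        for t, v in race_model_violation(rt_av, rt_a, rt_v, range(180, 261, 20)):
            print(t, round(v, 3))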

  1. Emotion and attention interaction studied through event-related potentials.

    PubMed

    Carretié, L; Martín-Loeches, M; Hinojosa, J A; Mercado, F

    2001-11-15

    Several studies on hemodynamic brain activity indicate that emotional visual stimuli elicit greater activation than neutral stimuli in attention-related areas such as the anterior cingulate cortex (ACC) and the visual association cortex (VAC). In order to explore the temporo-spatial characteristics of the interaction between attention and emotion, two processes characterized by short and rapid phases, event-related potentials (ERPs) were measured in 29 subjects using a 60-electrode array and the LORETA source localization software. A cue/target paradigm was employed in order to investigate both expectancy-related and input processing-related attention. Four categories of stimuli were presented to subjects: positive arousing, negative arousing, relaxing, and neutral. Three attention-related components were analyzed: N280pre (from pretarget ERPs), and P200post and P340post (both from posttarget ERPs). N280pre had a prefrontal focus (ACC and/or medial prefrontal cortex) and presented significantly lower amplitudes in response to cues announcing negative targets. This result suggests a greater capacity of nonaversive stimuli to generate expectancy-related attention. P200post and P340post were both elicited in the VAC and showed their highest amplitudes in response to negative- and positive-arousing stimuli, respectively. The origin of P200post appears to be located dorsally with respect to the clear ventral-stream origin of P340post. The temporal and spatial characteristics of P200post and P340post together suggest that input processing-related attention to emotional visual stimulation involves an initial, rapid, and brief "early" attentional response oriented to rapid motor action, which is more prominent for negative stimulation. This is followed by a slower but longer "late" attentional response oriented to deeper processing, elicited to a greater extent by appetitive stimulation.

  2. A working memory bias for alcohol-related stimuli depends on drinking score.

    PubMed

    Kessler, Klaus; Pajak, Katarzyna Malgorzata; Harkin, Ben; Jones, Barry

    2013-03-01

    We tested 44 participants with respect to their working memory (WM) performance on alcohol-related versus neutral visual stimuli. An alcohol attentional bias (AAB) had previously been reported using these stimuli, whereby the attention of frequent drinkers is automatically drawn toward alcohol-related items (e.g., a beer bottle). The present study set out to provide evidence for an alcohol memory bias (AMB) that would persist over longer time-scales than the AAB. The WM task required memorizing 4 stimuli in their correct locations, and a visual interference task was administered during a 4-sec delay interval. A subsequent probe required participants to indicate whether a stimulus was shown in the correct or incorrect location. For each participant we calculated a drinking score based on 3 items derived from the Alcohol Use Questionnaire, and we observed that higher scorers remembered alcohol-related images better than lower scorers, particularly when these were presented in their correct locations upon recall. This provides the first evidence for an AMB. Importantly, the effect persisted over a 4-sec delay period that included a visual interference task, which erased iconic memories and diverted attention away from the encoded items; thus, the AMB cannot be reduced to the previously reported AAB. Our finding calls for further investigation of alcohol-related cognitive biases in WM, and we propose a preliminary model that may guide future research. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  3. Color and luminance increment thresholds in poor readers.

    PubMed

    Dain, Stephen J; Floyd, Richard A; Elliot, Robert T

    2008-01-01

    Hypotheses of a visual basis for reading disabilities in some children have centered on deficits in the visual processes that display more transient responses to stimuli, although hyperactivity in the visual processes displaying sustained responses has also been proposed as a mechanism. In addition, there is clear evidence that colored lenses, overlays, and/or backgrounds can influence reading performance and/or may assist in providing comfortable vision for reading and, as a consequence, the ability to maintain reading for longer. It is therefore surprising that the color vision of poor readers has been relatively little studied. We assessed luminance increment thresholds and equiluminant red-green and blue-yellow increment thresholds using a computer-based test in central vision and at 10 degrees nasally, employing the paradigm pioneered by King-Smith. We examined 35 poor readers (based on the Neale Analysis of Reading) and compared their performance with that of 35 normal readers matched for age and IQ. Poor readers produced luminance contrast thresholds similar to those of normal readers for both foveal and peripheral presentation. Similarly, chromatic contrast discrimination for the red/green stimuli was the same in normal and poor readers. However, poor readers had significantly lower thresholds (higher sensitivity) for the blue/yellow stimuli, for both foveal and peripheral presentation, compared with normal readers. This hypersensitivity in blue-yellow discrimination may point to why colored lenses and overlays are often found to be effective in assisting many poor readers.
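
    The abstract does not state the adaptive procedure used; King-Smith's methods are Bayesian adaptive estimators. As a simpler stand-in that conveys how an increment threshold is measured adaptively, here is a hedged sketch of a classic 1-up/2-down staircase run against a simulated observer (all parameters hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_trial(contrast, threshold=0.1, slope=8.0):
    """Hypothetical observer: detection probability rises with contrast."""
    p = 1 - 0.5 * np.exp(-(contrast / threshold) ** slope)
    return rng.random() < p

# 1-up/2-down staircase: converges near the 70.7%-correct contrast.
contrast, step, correct_run, track = 0.5, 0.05, 0, []
for _ in range(120):
    if simulate_trial(contrast):
        correct_run += 1
        if correct_run == 2:          # two correct in a row -> make it harder
            contrast = max(contrast - step, 0.01)
            correct_run = 0
    else:                             # one error -> make it easier
        contrast += step
        correct_run = 0
    track.append(contrast)

print("estimated threshold:", np.mean(track[-40:]))
```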

  4. Dissociating verbal and nonverbal audiovisual object processing.

    PubMed

    Hocking, Julia; Price, Cathy J

    2009-02-01

    This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.

  5. Real-world spatial regularities affect visual working memory for objects.

    PubMed

    Kaiser, Daniel; Stein, Timo; Peelen, Marius V

    2015-12-01

    Traditional memory research has focused on measuring and modeling the capacity of visual working memory for simple stimuli such as geometric shapes or colored disks. Although these studies have provided important insights, it is unclear how their findings apply to memory for more naturalistic stimuli. An important aspect of real-world scenes is that they contain a high degree of regularity: For instance, lamps appear above tables, not below them. In the present study, we tested whether such real-world spatial regularities affect working memory capacity for individual objects. Using a delayed change-detection task with concurrent verbal suppression, we found enhanced visual working memory performance for objects positioned according to real-world regularities, as compared to irregularly positioned objects. This effect was specific to upright stimuli, indicating that it did not reflect low-level grouping, because low-level grouping would be expected to equally affect memory for upright and inverted displays. These results suggest that objects can be held in visual working memory more efficiently when they are positioned according to frequently experienced real-world regularities. We interpret this effect as the grouping of single objects into larger representational units.

  6. Hunger and satiety in anorexia nervosa: fMRI during cognitive processing of food pictures.

    PubMed

    Santel, Stephanie; Baving, Lioba; Krauel, Kerstin; Münte, Thomas F; Rotte, Michael

    2006-10-09

    Neuroimaging studies of visually presented food stimuli in patients with anorexia nervosa (AN) have demonstrated decreased activations in inferior parietal and visual occipital areas, and increased frontal activations, relative to healthy persons, but so far no inferences could be drawn with respect to the influence of hunger or satiety. Thirteen patients with AN and 10 healthy control subjects (aged 13-21) rated visual food and non-food stimuli for pleasantness during functional magnetic resonance imaging (fMRI) in a hungry and a satiated state. AN patients rated food as less pleasant than controls. When satiated, AN patients showed decreased activation in left inferior parietal cortex relative to controls. When hungry, AN patients displayed weaker activation of the right visual occipital cortex than healthy controls. Food stimuli during satiety compared with hunger were associated with stronger right occipital activation in patients and with stronger activation in left lateral orbitofrontal cortex, the middle portion of the right anterior cingulate, and left middle temporal gyrus in controls. The observed group differences in the fMRI activation to food pictures point to decreased food-related somatosensory processing in AN during satiety and to attentional mechanisms during hunger that might facilitate restricted eating in AN.

  7. Effects of feature-selective and spatial attention at different stages of visual processing.

    PubMed

    Andersen, Søren K; Fuchs, Sandra; Müller, Matthias M

    2011-01-01

    We investigated mechanisms of concurrent attentional selection of location and color using electrophysiological measures in human subjects. Two completely overlapping random dot kinematograms (RDKs) of two different colors were presented on either side of a central fixation cross. On each trial, participants attended one of these four RDKs, defined by its specific combination of color and location, in order to detect coherent motion targets. Sustained attentional selection while monitoring for targets was measured by means of steady-state visual evoked potentials (SSVEPs) elicited by the frequency-tagged RDKs. Attentional selection of transient targets and distractors was assessed by behavioral responses and by recording event-related potentials to these stimuli. Spatial attention and attention to color had independent and largely additive effects on the amplitudes of SSVEPs elicited in early visual areas. In contrast, behavioral false alarms and feature-selective modulation of P3 amplitudes to targets and distractors were limited to the attended location. These results suggest that feature-selective attention produces an early, global facilitation of stimuli having the attended feature throughout the visual field, whereas the discrimination of target events takes place at a later stage of processing that is only applied to stimuli at the attended position.
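
    Frequency tagging of this kind separates concurrent stimuli in the EEG spectrum: each RDK flickers at its own rate, and the SSVEP it drives appears at that frequency. A rough illustration on synthetic data (the tag frequencies and sampling rate below are assumptions, not the study's values):

```python
import numpy as np

fs = 500.0            # sampling rate (Hz), assumed
f1, f2 = 10.0, 12.0   # hypothetical tagging frequencies of the two RDKs
t = np.arange(0, 20, 1 / fs)

# Synthetic "EEG": two frequency-tagged responses plus broadband noise.
eeg = (0.8 * np.sin(2 * np.pi * f1 * t)
       + 0.5 * np.sin(2 * np.pi * f2 * t)
       + np.random.randn(t.size))

# Amplitude spectrum; each stimulus is read off at its own tag frequency.
spectrum = np.abs(np.fft.rfft(eeg)) / t.size * 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for f in (f1, f2):
    idx = np.argmin(np.abs(freqs - f))
    print(f"SSVEP amplitude at {f} Hz: {spectrum[idx]:.3f}")
```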

  8. Fast periodic presentation of natural images reveals a robust face-selective electrophysiological response in the human brain.

    PubMed

    Rossion, Bruno; Torfs, Katrien; Jacques, Corentin; Liu-Shuang, Joan

    2015-01-16

    We designed a fast periodic visual stimulation approach to identify an objective signature of face categorization incorporating both visual discrimination (from nonface objects) and generalization (across widely variable face exemplars). Scalp electroencephalographic (EEG) data were recorded in 12 human observers viewing natural images of objects at a rapid frequency of 5.88 images/s for 60 s. Natural images of faces were interleaved every five stimuli, i.e., at 1.18 Hz (5.88/5). Face categorization was indexed by a high signal-to-noise ratio response, specifically at an oddball face stimulation frequency of 1.18 Hz and its harmonics. This face-selective periodic EEG response was highly significant for every participant, even for a single 60-s sequence, and was generally localized over the right occipitotemporal cortex. The periodicity constraint and the large selection of stimuli ensured that this selective response to natural face images was free of low-level visual confounds, as confirmed by the absence of any oddball response for phase-scrambled stimuli. Without any subtraction procedure, time-domain analysis revealed a sequence of differential face-selective EEG components between 120 and 400 ms after oddball face image onset, progressing from medial occipital (P1-faces) to occipitotemporal (N1-faces) and anterior temporal (P2-faces) regions. Overall, this fast periodic visual stimulation approach provides a direct signature of natural face categorization and opens an avenue for efficiently measuring categorization responses of complex visual stimuli in the human brain. © 2015 ARVO.
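
    The oddball response in fast periodic visual stimulation is typically quantified as the signal-to-noise ratio of the amplitude spectrum at 1.18 Hz and its harmonics, relative to neighboring frequency bins. A minimal sketch of that computation on synthetic data (the bin-selection parameters are assumptions, not taken from the paper):

```python
import numpy as np

def snr_at(freqs, amp, f_target, n_neighbors=10, skip=1):
    """SNR: amplitude at the target bin divided by the mean amplitude of
    surrounding bins (excluding `skip` bins immediately adjacent)."""
    i = np.argmin(np.abs(freqs - f_target))
    lo = amp[i - skip - n_neighbors : i - skip]
    hi = amp[i + skip + 1 : i + skip + 1 + n_neighbors]
    return amp[i] / np.concatenate([lo, hi]).mean()

fs, dur = 250.0, 60.0          # sampling rate and sequence length, assumed
base, oddball = 5.88, 1.18     # stimulation frequencies from the study
t = np.arange(0, dur, 1 / fs)
eeg = (np.sin(2 * np.pi * base * t) + 0.4 * np.sin(2 * np.pi * oddball * t)
       + np.random.randn(t.size))          # synthetic signal

amp = np.abs(np.fft.rfft(eeg)) / t.size * 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for h in (1, 2, 3):                        # oddball frequency and harmonics
    print(f"{h * oddball:.2f} Hz: SNR = {snr_at(freqs, amp, h * oddball):.1f}")
```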

  9. The time course of emotional picture processing: an event-related potential study using a rapid serial visual presentation paradigm

    PubMed Central

    Zhu, Chuanlin; He, Weiqi; Qi, Zhengyang; Wang, Lili; Song, Dongqing; Zhan, Lei; Yi, Shengnan; Luo, Yuejia; Luo, Wenbo

    2015-01-01

    The present study recorded event-related potentials using a rapid serial visual presentation paradigm to explore the time course of processing emotionally charged pictures. Participants completed a dual-target task as quickly and accurately as possible, in which they were asked to judge the gender of the person depicted (task 1) and the valence (positive, neutral, or negative) of the given picture (task 2). The results showed that the amplitudes of the P2 component were larger for emotional pictures than for neutral pictures, a finding that reflects brain processes distinguishing emotional from non-emotional stimuli. Furthermore, positive, neutral, and negative pictures elicited late positive potentials with different amplitudes, implying that the differences between emotions are recognized. Additionally, the time course of emotional picture processing was consistent with the latter two stages of a three-stage model derived from studies on emotional facial expression processing and emotional adjective processing. The results of the present study indicate that, in the three-stage model of emotion processing, the middle and late stages are more universal and stable, and thus occur at similar time points when different stimuli (faces, words, or scenes) are used. PMID:26217276

  10. BOLDSync: a MATLAB-based toolbox for synchronized stimulus presentation in functional MRI.

    PubMed

    Joshi, Jitesh; Saharan, Sumiti; Mandal, Pravat K

    2014-02-15

    Precise and synchronized presentation of paradigm stimuli in functional magnetic resonance imaging (fMRI) is central to obtaining accurate information about brain regions involved in a specific task. In this manuscript, we present a new MATLAB-based toolbox, BOLDSync, for synchronized stimulus presentation in fMRI. BOLDSync provides a user-friendly platform for the design and presentation of visual, audio, and multimodal audio-visual (AV) stimuli in functional imaging experiments. We present simulation experiments that demonstrate the millisecond synchronization accuracy of BOLDSync, and also illustrate its functionality through application to an AV fMRI study. BOLDSync gains an advantage over other available proprietary and open-source toolboxes by offering a user-friendly and accessible interface that affords both precision in stimulus presentation and versatility across various types of stimulus designs and system setups. BOLDSync is a reliable, efficient, and versatile solution for synchronized stimulus presentation in fMRI studies. Copyright © 2013 Elsevier B.V. All rights reserved.
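
    BOLDSync itself is MATLAB-based; as a language-neutral illustration of the underlying requirement, namely locking stimulus onsets to scanner volume triggers and logging achieved timing, here is a hedged Python sketch in which `poll_trigger` and the presentation step are placeholders, not BOLDSync APIs:

```python
import time

def wait_for_trigger(poll_trigger, timeout=10.0):
    """Block until the scanner trigger fires; return the trigger timestamp.
    `poll_trigger` is a placeholder callable returning True on a TTL pulse."""
    start = time.perf_counter()
    while time.perf_counter() - start < timeout:
        if poll_trigger():
            return time.perf_counter()
    raise TimeoutError("no scanner trigger received")

def run_block(stimuli, poll_trigger, onset_offsets):
    """Present each stimulus at a fixed offset after the volume trigger,
    logging intended vs. achieved onset for later synchronization checks."""
    log = []
    t0 = wait_for_trigger(poll_trigger)
    for stim, offset in zip(stimuli, onset_offsets):
        target = t0 + offset
        while time.perf_counter() < target:   # busy-wait for ms-level precision
            pass
        # a real experiment would draw/play `stim` here
        log.append((stim, offset, time.perf_counter() - t0))
    return log

# Demo with a simulated trigger that fires immediately.
print(run_block(["stimA", "stimB"], poll_trigger=lambda: True,
                onset_offsets=[0.10, 0.40]))
```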

  11. Emotional facilitation of sensory processing in the visual cortex.

    PubMed

    Schupp, Harald T; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O

    2003-01-01

    A key function of emotion is the preparation for action. However, organization of successful behavioral strategies depends on efficient stimulus encoding. The present study tested the hypothesis that perceptual encoding in the visual cortex is modulated by the emotional significance of visual stimuli. Event-related brain potentials were measured while subjects viewed pleasant, neutral, and unpleasant pictures. Early selective encoding of pleasant and unpleasant images was associated with a posterior negativity, indicating primary sources of activation in the visual cortex. The study also replicated previous findings in that affective cues also elicited enlarged late positive potentials, indexing increased stimulus relevance at higher-order stages of stimulus processing. These results support the hypothesis that sensory encoding of affective stimuli is facilitated implicitly by natural selective attention. Thus, the affect system not only modulates motor output (i.e., favoring approach or avoidance dispositions), but already operates at an early level of sensory encoding.

  12. Behind the scenes: how visual memory load biases selective attention during processing of visual streams.

    PubMed

    Klaver, Peter; Talsma, Durk

    2013-11-01

    We recorded ERPs to investigate whether visual memory load can bias visual selective attention. Participants memorized one or four letters and then responded to memory-matching letters presented in a relevant color while ignoring distractor letters or letters in an irrelevant color. Stimuli in the relevant color elicited larger frontal selection positivities (FSP) and occipital selection negativities (OSN) than irrelevant-color stimuli. Only distractors elicited a larger FSP in the high than in the low memory load task. Memory load prolonged the OSN for all letters. Response mapping complexity was also manipulated but did not affect the FSP and OSN. Together, the FSP data suggest that high memory load increased distractibility. The OSN data suggest that memory load sustained attention to letters in a relevant color until working memory processing was completed, independently of whether the letters were in working memory or not. Copyright © 2013 Society for Psychophysiological Research.

  13. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals

    PubMed Central

    Lidestam, Björn; Rönnberg, Jerker

    2016-01-01

    The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. PMID:27317667
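
    An isolation point can be computed from per-gate responses as the shortest gate from which identification is, and remains, correct. This is one common operationalization, assumed here rather than taken from the paper:

```python
def isolation_point(gate_ms, correct):
    """Isolation point: the shortest gate duration from which the response
    is correct and stays correct for all longer gates."""
    ip = None
    for dur, ok in sorted(zip(gate_ms, correct)):
        if ok and ip is None:
            ip = dur
        elif not ok:
            ip = None          # a later error resets the candidate IP
    return ip

# Hypothetical gated responses to one word (gate length in ms, correct?)
gates = [100, 150, 200, 250, 300, 350]
resp  = [False, False, True, False, True, True]
print(isolation_point(gates, resp))   # -> 300
```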

  14. Sensitivity of the Autonomic Nervous System to Visual and Auditory Affect Across Social and Non-Social Domains in Williams Syndrome

    PubMed Central

    Järvinen, Anna; Dering, Benjamin; Neumann, Dirk; Ng, Rowena; Crivelli, Davide; Grichanik, Mark; Korenberg, Julie R.; Bellugi, Ursula

    2012-01-01

    Although individuals with Williams syndrome (WS) typically demonstrate an increased appetitive social drive, their social profile is characterized by dissociations, including socially fearless behavior coupled with anxiousness, and distinct patterns of “peaks and valleys” of ability. The aim of this study was to compare the processing of social and non-social visually and aurally presented affective stimuli, at the levels of behavior and autonomic nervous system (ANS) responsivity, in individuals with WS contrasted with a typically developing (TD) group, with the view of elucidating the highly sociable and emotionally sensitive predisposition noted in WS. Behavioral findings supported previous studies of enhanced competence in processing social over non-social stimuli by individuals with WS; however, the patterns of ANS functioning underlying the behavioral performance revealed a surprising profile previously undocumented in WS. Specifically, increased heart rate (HR) reactivity, and a failure for electrodermal activity to habituate were found in individuals with WS contrasted with the TD group, predominantly in response to visual social affective stimuli. Within the auditory domain, greater arousal linked to variation in heart beat period was observed in relation to music stimuli in individuals with WS. Taken together, the findings suggest that the pattern of ANS response in WS is more complex than previously noted, with increased arousal to face and music stimuli potentially underpinning the heightened behavioral emotionality to such stimuli. The lack of habituation may underlie the increased affiliation and attraction to faces characterizing individuals with WS. Future research directions are suggested. PMID:23049519

  15. Spontaneous generalization of abstract multimodal patterns in young domestic chicks.

    PubMed

    Versace, Elisabetta; Spierings, Michelle J; Caffini, Matteo; Ten Cate, Carel; Vallortigara, Giorgio

    2017-05-01

    From the early stages of life, learning the regularities associated with specific objects is crucial for making sense of experiences. Through filial imprinting, young precocial birds quickly learn the features of their social partners by mere exposure. It is not clear, though, to what extent chicks can extract abstract patterns from the visual and acoustic stimuli present in the imprinting object, and how they combine them. To investigate this issue, we exposed chicks (Gallus gallus) to three days of visual and acoustic imprinting, using either patterns with two identical items or patterns with two different items, presented visually, acoustically, or in both modalities. Next, chicks were given a choice between the familiar and the unfamiliar pattern, presented in either the multimodal, visual, or acoustic modality. The responses to the novel stimuli were affected by the imprinting experience, and the effect was stronger for chicks imprinted with multimodal patterns than for the other groups. Interestingly, males and females adopted different strategies, with males more attracted to unfamiliar patterns and females more attracted to familiar patterns. Our data show that chicks can generalize abstract patterns after mere exposure through filial imprinting and that multimodal stimulation is more effective than unimodal stimulation for pattern learning.

  16. Visual contribution to the multistable perception of speech.

    PubMed

    Sato, Marc; Basirat, Anahita; Schwartz, Jean-Luc

    2007-11-01

    The multistable perception of speech, or verbal transformation effect, refers to perceptual changes experienced while listening to a speech form that is repeated rapidly and continuously. In order to test whether visual information from the speaker's articulatory gestures may modify the emergence and stability of verbal auditory percepts, subjects were instructed to report any perceptual changes during unimodal, audiovisual, and incongruent audiovisual presentations of distinct repeated syllables. In a first experiment, the perceptual stability of reported auditory percepts was significantly modulated by the modality of presentation. In a second experiment, when audiovisual stimuli consisting of a stable audio track dubbed with a video track that alternated between congruent and incongruent stimuli were presented, a strong correlation between the timing of perceptual transitions and the timing of video switches was found. Finally, a third experiment showed that the vocal tract opening onset event provided by the visual input could play the role of a bootstrap mechanism in the search for transformations. Altogether, these results demonstrate the capacity of visual information to control the multistable perception of speech in its phonetic content and temporal course. The verbal transformation effect thus provides a useful experimental paradigm to explore audiovisual interactions in speech perception.

  17. The effect of non-visual working memory load on top-down modulation of visual processing

    PubMed Central

    Rissman, Jesse; Gazzaley, Adam; D'Esposito, Mark

    2009-01-01

    While a core function of the working memory (WM) system is the active maintenance of behaviorally relevant sensory representations, it is also critical that distracting stimuli are appropriately ignored. We used functional magnetic resonance imaging to examine the role of domain-general WM resources in the top-down attentional modulation of task-relevant and irrelevant visual representations. In our dual-task paradigm, each trial began with the auditory presentation of six random (high load) or sequentially-ordered (low load) digits. Next, two relevant visual stimuli (e.g., faces), presented amongst two temporally interspersed visual distractors (e.g., scenes), were to be encoded and maintained across a 7-sec delay interval, after which memory for the relevant images and digits was probed. When taxed by high load digit maintenance, participants exhibited impaired performance on the visual WM task and a selective failure to attenuate the neural processing of task-irrelevant scene stimuli. The over-processing of distractor scenes under high load was indexed by elevated encoding activity in a scene-selective region-of-interest relative to low load and passive viewing control conditions, as well as by improved long-term recognition memory for these items. In contrast, the load manipulation did not affect participants' ability to upregulate activity in this region when scenes were task-relevant. These results highlight the critical role of domain-general WM resources in the goal-directed regulation of distractor processing. Moreover, the consequences of increased WM load in young adults closely resemble the effects of cognitive aging on distractor filtering [Gazzaley et al., (2005) Nature Neuroscience 8, 1298-1300], suggesting the possibility of a common underlying mechanism. PMID:19397858

  18. Visual field asymmetries in visual evoked responses

    PubMed Central

    Hagler, Donald J.

    2014-01-01

    Behavioral responses to visual stimuli exhibit visual field asymmetries, but cortical folding and the close proximity of visual cortical areas make electrophysiological comparisons between different stimulus locations problematic. Retinotopy-constrained source estimation (RCSE) uses distributed dipole models simultaneously constrained by multiple stimulus locations to provide separation between individual visual areas that is not possible with conventional source estimation methods. Magnetoencephalography and RCSE were used to estimate time courses of activity in V1, V2, V3, and V3A. Responses to left and right hemifield stimuli were not significantly different. Peak latencies for peripheral stimuli were significantly shorter than those for perifoveal stimuli in V1, V2, and V3A, likely related to the greater proportion of magnocellular input to V1 in the periphery. Consistent with previous results, sensor magnitudes for lower field stimuli were about twice as large as for upper field, which is only partially explained by the proximity to sensors for lower field cortical sources in V1, V2, and V3. V3A exhibited both latency and amplitude differences for upper and lower field responses. There were no differences for V3, consistent with previous suggestions that dorsal and ventral V3 are two halves of a single visual area, rather than distinct areas V3 and VP. PMID:25527151
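
    The core idea of retinotopy-constrained source estimation, a single time course per visual area jointly constrained by the forward gains of many stimulus locations, can be caricatured as a stacked least-squares problem. A toy sketch with made-up dimensions (not the authors' implementation):

```python
import numpy as np

# Toy retinotopy-constrained estimation: one time course per visual area,
# jointly fit across stimulus locations. All dimensions are illustrative.
n_sensors, n_areas, n_locs, n_time = 64, 4, 8, 200
rng = np.random.default_rng(1)

# Forward gain of each area for each stimulus location (from retinotopy).
G = rng.normal(size=(n_locs, n_sensors, n_areas))
S_true = rng.normal(size=(n_areas, n_time))       # shared area time courses
data = np.stack([G[l] @ S_true for l in range(n_locs)])
data += 0.5 * rng.normal(size=data.shape)         # sensor noise

# Stack locations so that all of them constrain the same source time courses.
G_stack = G.reshape(n_locs * n_sensors, n_areas)
d_stack = data.reshape(n_locs * n_sensors, n_time)
S_hat, *_ = np.linalg.lstsq(G_stack, d_stack, rcond=None)
print("recovery corr:", np.corrcoef(S_hat.ravel(), S_true.ravel())[0, 1])
```

    Fitting all locations at once is what disambiguates nearby cortical sources: a solution must explain every stimulus location with the same per-area waveforms, which a single-location fit cannot enforce.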

  19. Involuntary Capture and Voluntary Reorienting of Attention Decline in Middle-Aged and Old Participants

    PubMed Central

    Correa-Jaraba, Kenia S.; Cid-Fernández, Susana; Lindín, Mónica; Díaz, Fernando

    2016-01-01

    The main aim of this study was to examine the effects of aging on event-related brain potentials (ERPs) associated with the automatic detection of unattended infrequent deviant and novel auditory stimuli (Mismatch Negativity, MMN) and with the orienting to these stimuli (P3a component), as well as the effects on ERPs associated with reorienting to relevant visual stimuli (Reorienting Negativity, RON). Participants were divided into three age groups: (1) Young: 21–29 years old; (2) Middle-aged: 51–64 years old; and (3) Old: 65–84 years old. They performed an auditory-visual distraction-attention task in which they were asked to attend to visual stimuli (Go, NoGo) and to ignore auditory stimuli (S: standard, D: deviant, N: novel). Reaction times (RTs) to Go visual stimuli were longer in old and middle-aged than in young participants. In addition, in all three age groups, longer RTs were found when Go visual stimuli were preceded by novel relative to deviant and standard auditory stimuli, indicating a distraction effect provoked by novel stimuli. ERP components were identified in the Novel minus Standard (N-S) and Deviant minus Standard (D-S) difference waveforms. In the N-S condition, MMN latency was significantly longer in middle-aged and old participants than in young participants, indicating a slowing of automatic detection of changes. The following results were observed in both difference waveforms: (1) the P3a component comprised two consecutive phases in all three age groups—an early-P3a (e-P3a) that may reflect the orienting response toward the irrelevant stimulation and a late-P3a (l-P3a) that may be a correlate of subsequent evaluation of the infrequent unexpected novel or deviant stimuli; (2) the e-P3a, l-P3a, and RON latencies were significantly longer in the Middle-aged and Old groups than in the Young group, indicating delay in the orienting response to and the subsequent evaluation of unattended auditory stimuli, and in the reorienting of attention to relevant (Go) visual stimuli, respectively; and (3) a significantly smaller e-P3a amplitude in Middle-aged and Old groups, indicating a deficit in the orienting response to irrelevant novel and deviant auditory stimuli. PMID:27065004

  20. Residual attention guidance in blindsight monkeys watching complex natural scenes.

    PubMed

    Yoshida, Masatoshi; Itti, Laurent; Berg, David J; Ikeda, Takuro; Kato, Rikako; Takaura, Kana; White, Brian J; Munoz, Douglas P; Isa, Tadashi

    2012-08-07

    Patients with damage to primary visual cortex (V1) demonstrate residual performance on laboratory visual tasks despite denial of conscious seeing (blindsight) [1]. After a period of recovery, which suggests a role for plasticity [2], visual sensitivity higher than chance is observed in humans and monkeys for simple luminance-defined stimuli, grating stimuli, moving gratings, and other stimuli [3-7]. Some residual cognitive processes including bottom-up attention and spatial memory have also been demonstrated [8-10]. To date, little is known about blindsight with natural stimuli and spontaneous visual behavior. In particular, is orienting attention toward salient stimuli during free viewing still possible? We used a computational saliency map model to analyze spontaneous eye movements of monkeys with blindsight from unilateral ablation of V1. Despite general deficits in gaze allocation, monkeys were significantly attracted to salient stimuli. The contribution of orientation features to salience was nearly abolished, whereas contributions of motion, intensity, and color features were preserved. Control experiments employing laboratory stimuli confirmed the free-viewing finding that lesioned monkeys retained color sensitivity. Our results show that attention guidance over complex natural scenes is preserved in the absence of V1, thereby directly challenging theories and models that crucially depend on V1 to compute the low-level visual features that guide attention. Copyright © 2012 Elsevier Ltd. All rights reserved.
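
    Saliency-map analyses of this kind score fixated locations against a feature-derived salience map. The sketch below uses a single, highly simplified center-surround intensity feature; the actual model combines multi-scale intensity, color, orientation, and motion channels, and `uniform_filter` here merely stands in for proper multi-scale pyramids:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def saliency(image):
    """Crude center-surround saliency on one intensity channel:
    |fine-scale blur - coarse-scale blur|, normalized to [0, 1]."""
    center = uniform_filter(image, size=5)
    surround = uniform_filter(image, size=21)
    sal = np.abs(center - surround)
    return sal / (sal.max() + 1e-9)

def salience_at_fixations(image, fixations):
    """Mean saliency sampled at fixated pixels vs. the map average,
    a simple index of whether gaze is attracted to salient locations."""
    sal = saliency(image)
    fix_vals = np.array([sal[y, x] for x, y in fixations])
    return fix_vals.mean(), sal.mean()

img = np.random.rand(240, 320)            # stand-in for a natural scene
fixs = [(50, 60), (120, 100), (200, 30)]  # hypothetical (x, y) fixations
print(salience_at_fixations(img, fixs))
```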

  1. Repetition Priming within and between the Two Cerebral Hemispheres

    ERIC Educational Resources Information Center

    Weems, S.A.; Zaidel, E.

    2005-01-01

    Two experiments explored repetition priming benefits in the left and right cerebral hemispheres. In both experiments, a lateralized lexical decision task was employed using repeated target stimuli. In the first experiment, all targets were repeated in the same visual field, and in the second experiment the visual field of presentation was switched…

  2. Visual Processing of Verbal and Nonverbal Stimuli in Adolescents with Reading Disabilities.

    ERIC Educational Resources Information Center

    Boden, Catherine; Brodeur, Darlene A.

    1999-01-01

    A study investigated whether 32 adolescents with reading disabilities (RD) were slower at processing visual information compared to children of comparable age and reading level, or whether their deficit was specific to the written word. Adolescents with RD demonstrated difficulties in processing rapidly presented verbal and nonverbal visual…

  3. Conditioned suppression, punishment, and aversion

    NASA Technical Reports Server (NTRS)

    Orme-Johnson, D. W.; Yarczower, M.

    1974-01-01

    The aversive action of visual stimuli was studied in two groups of pigeons which received response-contingent or noncontingent electric shocks in cages with translucent response keys. Presentation of grain for 3 sec, contingent on key pecking, was the visual stimulus associated with conditioned punishment or suppression. The responses of the pigeons in three different experiments are compared.

  4. Higher order visual input to the mushroom bodies in the bee, Bombus impatiens.

    PubMed

    Paulk, Angelique C; Gronenberg, Wulfila

    2008-11-01

    To produce appropriate behaviors based on biologically relevant associations, sensory pathways conveying different modalities are integrated by higher-order central brain structures, such as insect mushroom bodies. To address this function of sensory integration, we characterized the structure and response of optic lobe (OL) neurons projecting to the calyces of the mushroom bodies in bees. Bees are well known for their visual learning and memory capabilities and their brains possess major direct visual input from the optic lobes to the mushroom bodies. To functionally characterize these visual inputs to the mushroom bodies, we recorded intracellularly from neurons in bumblebees (Apidae: Bombus impatiens) and a single neuron in a honeybee (Apidae: Apis mellifera) while presenting color and motion stimuli. All of the mushroom body input neurons were color sensitive while a subset was motion sensitive. Additionally, most of the mushroom body input neurons would respond to the first, but not to subsequent, presentations of repeated stimuli. In general, the medulla or lobula neurons projecting to the calyx signaled specific chromatic, temporal, and motion features of the visual world to the mushroom bodies, which included sensory information required for the biologically relevant associations bees form during foraging tasks.

  5. Impact of Audio-Visual Asynchrony on Lip-Reading Effects -Neuromagnetic and Psychophysical Study-

    PubMed Central

    Yahata, Izumi; Kanno, Akitake; Sakamoto, Shuichi; Takanashi, Yoshitaka; Takata, Shiho; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio

    2016-01-01

    The effects of asynchrony between audio and visual (A/V) stimuli on the N100m responses of magnetoencephalography in the left hemisphere were compared with those on the psychophysical responses in 11 participants. The latency and amplitude of N100m were significantly shortened and reduced in the left hemisphere by the presentation of visual speech as long as the temporal asynchrony between A/V stimuli was within 100 ms, but were not significantly affected with audio lags of -500 and +500 ms. However, some small effects were still preserved on average with audio lags of 500 ms, suggesting an asymmetry of the temporal window similar to that observed in psychophysical measurements, which tended to be more robust (wider) for audio lags; i.e., the pattern of visual-speech effects as a function of A/V lag observed in the N100m in the left hemisphere broadly resembled that in psychophysical measurements on average, although the individual responses were somewhat varied. The present results suggest that the basic configuration of the temporal window of visual effects on auditory-speech perception could be observed from the early auditory processing stage. PMID:28030631

  6. Eye vergence responses during a visual memory task.

    PubMed

    Solé Puig, Maria; Romeo, August; Cañete Crespillo, Jose; Supèr, Hans

    2017-02-08

    In a previous report it was shown that covertly attending to visual stimuli produces a small convergence of the eyes, and that visual stimuli can give rise to different modulations of the angle of eye vergence depending on their power to capture attention. Working memory is highly dependent on attention. Therefore, in this study we assessed vergence responses in a memory task. Participants scanned a set of 8 or 12 images for 10 s and were thereafter presented with a series of single images. Half were repeat images - that is, they belonged to the initial set - and the other half were novel images. Participants were asked to indicate whether or not the images were included in the initial image set. We observed that the eyes converge while scanning the set of images and during the presentation of the single images. The convergence was stronger for remembered images than for nonremembered images. Modulation in pupil size did not correspond to behavioural responses. The correspondence between vergence and the coding/retrieval processes of memory strengthens the idea of a role for vergence in the attentional processing of visual information.

  7. Discrimination of holograms and real objects by pigeons (Columba livia) and humans (Homo sapiens).

    PubMed

    Stephan, Claudia; Steurer, Michael M; Aust, Ulrike

    2014-08-01

    The type of stimulus material employed in visual tasks is crucial to all comparative cognition research that involves object recognition. There is considerable controversy about the use of 2-dimensional stimuli and the impact that the lack of the 3rd dimension (i.e., depth) may have on animals' performance in tests for their visual and cognitive abilities. We report evidence of discrimination learning using a completely novel type of stimuli, namely, holograms. Like real objects, holograms provide full 3-dimensional shape information but they also offer many possibilities for systematically modifying the appearance of a stimulus. Hence, they provide a promising means for investigating visual perception and cognition of different species in a comparative way. We trained pigeons and humans to discriminate either between 2 real objects or between holograms of the same 2 objects, and we subsequently tested both species for the transfer of discrimination to the other presentation mode. The lack of any decrements in accuracy suggests that real objects and holograms were perceived as equivalent in both species and shows the general appropriateness of holograms as stimuli in visual tasks. A follow-up experiment involving the presentation of novel views of the training objects and holograms revealed some interspecies differences in rotational invariance, thereby confirming and extending the results of previous studies. Taken together, these results suggest that holograms may not only provide a promising tool for investigating yet unexplored issues, but their use may also lead to novel insights into some crucial aspects of comparative visual perception and categorization.

  8. Grammatical number agreement processing using the visual half-field paradigm: an event-related brain potential study.

    PubMed

    Kemmer, Laura; Coulson, Seana; Kutas, Marta

    2014-02-01

    Despite indications in the split-brain and lesion literatures that the right hemisphere is capable of some syntactic analysis, few studies have investigated right hemisphere contributions to syntactic processing in people with intact brains. Here we used the visual half-field paradigm in healthy adults to examine each hemisphere's processing of correct and incorrect grammatical number agreement marked either lexically, e.g., antecedent/reflexive pronoun ("The grateful niece asked herself/*themselves…") or morphologically, e.g., subject/verb ("Industrial scientists develop/*develops…"). For reflexives, response times and accuracy of grammaticality decisions suggested similar processing regardless of visual field of presentation. In the subject/verb condition, we observed similar response times and accuracies for central and right visual field (RVF) presentations. For left visual field (LVF) presentation, response times were longer and accuracy rates were reduced relative to RVF presentation. An event-related brain potential (ERP) study using the same materials revealed similar ERP responses to the reflexive pronouns in the two visual fields, but very different ERP effects to the subject/verb violations. For lexically marked violations on reflexives, P600 was elicited by stimuli in both the LVF and RVF; for morphologically marked violations on verbs, P600 was elicited only by RVF stimuli. These data suggest that both hemispheres can process lexically marked pronoun agreement violations, and do so in a similar fashion. Morphologically marked subject/verb agreement errors, however, showed a distinct LH advantage. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Grammatical number agreement processing using the visual half-field paradigm: An event-related brain potential study

    PubMed Central

    Kemmer, Laura; Coulson, Seana; Kutas, Marta

    2014-01-01

    Despite indications in the split-brain and lesion literatures that the right hemisphere is capable of some syntactic analysis, few studies have investigated right hemisphere contributions to syntactic processing in people with intact brains. Here we used the visual half-field paradigm in healthy adults to examine each hemisphere’s processing of correct and incorrect grammatical number agreement marked either lexically, e.g., antecedent/reflexive pronoun (“The grateful niece asked herself/*themselves…”) or morphologically, e.g., subject/verb (“Industrial scientists develop/*develops…”). For reflexives, response times and accuracy of grammaticality decisions suggested similar processing regardless of visual field of presentation. In the subject/verb condition, we observed similar response times and accuracies for central and right visual field (RVF) presentations. For left visual field (LVF) presentation, response times were longer and accuracy rates were reduced relative to RVF presentation. An event-related brain potential (ERP) study using the same materials revealed similar ERP responses to the reflexive pronouns in the two visual fields, but very different ERP effects to the subject/verb violations. For lexically marked violations on reflexives, P600 was elicited by stimuli in both the LVF and RVF; for morphologically marked violations on verbs, P600 was elicited only by RVF stimuli. These data suggest that both hemispheres can process lexically marked pronoun agreement violations, and do so in a similar fashion. Morphologically marked subject/verb agreement errors, however, showed a distinct LH advantage. PMID:24326084

  10. Cross-modal prediction changes the timing of conscious access during the motion-induced blindness.

    PubMed

    Chang, Acer Y C; Kanai, Ryota; Seth, Anil K

    2015-01-01

    Despite accumulating evidence that perceptual predictions influence perceptual content, the relations between these predictions and conscious contents remain unclear, especially for cross-modal predictions. We examined whether predictions of visual events by auditory cues can facilitate conscious access to the visual stimuli. We trained participants to learn associations between auditory cues and colour changes. We then asked whether congruency between auditory cues and target colours would speed access to consciousness. We did this by rendering a visual target subjectively invisible using motion-induced blindness and then gradually changing its colour while presenting congruent or incongruent auditory cues. Results showed that the visual target gained access to consciousness faster in congruent than in incongruent trials; control experiments excluded potentially confounding effects of attention and motor response. The expectation effect was gradually established over blocks suggesting a role for extensive training. Overall, our findings show that predictions learned through cross-modal training can facilitate conscious access to visual stimuli. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Value associations of irrelevant stimuli modify rapid visual orienting.

    PubMed

    Rutherford, Helena J V; O'Brien, Jennifer L; Raymond, Jane E

    2010-08-01

    In familiar environments, goal-directed visual behavior is often performed in the presence of objects with strong, but task-irrelevant, reward or punishment associations that are acquired through prior, unrelated experience. In a two-phase experiment, we asked whether such stimuli could affect speeded visual orienting in a classic visual orienting paradigm. First, participants learned to associate faces with monetary gains, losses, or no outcomes. These faces then served as brief, peripheral, uninformative cues in an explicitly unrewarded, unpunished, speeded, target localization task. Cues preceded targets by either 100 or 1,500 msec and appeared at either the same or a different location. Regardless of interval, reward-associated cues slowed responding at cued locations, as compared with equally familiar punishment-associated or no-value cues, and had no effect when targets were presented at uncued locations. This localized effect of reward-associated cues is consistent with adaptive models of inhibition of return and suggests rapid, low-level effects of motivation on visual processing.

  12. Compatibility of Motion Facilitates Visuomotor Synchronization

    ERIC Educational Resources Information Center

    Hove, Michael J.; Spivey, Michael J.; Krumhansl, Carol L.

    2010-01-01

    Prior research indicates that synchronized tapping performance is very poor with flashing visual stimuli compared with auditory stimuli. Three finger-tapping experiments compared flashing visual metronomes with visual metronomes containing a spatial component, either compatible, incompatible, or orthogonal to the tapping action. In Experiment 1,…

  13. An invisible touch: Body-related multisensory conflicts modulate visual consciousness.

    PubMed

    Salomon, Roy; Galli, Giulia; Łukowska, Marta; Faivre, Nathan; Ruiz, Javier Bello; Blanke, Olaf

    2016-07-29

    The majority of scientific studies on consciousness have focused on vision, exploring the cognitive and neural mechanisms of conscious access to visual stimuli. In parallel, studies on bodily consciousness have revealed that bodily (i.e. tactile, proprioceptive, visceral, vestibular) signals are the basis for the sense of self. However, the role of bodily signals in the formation of visual consciousness is not well understood. Here we investigated how body-related visuo-tactile stimulation modulates conscious access to visual stimuli. We used a robotic platform to apply controlled tactile stimulation to the participants' back while they viewed a dot moving either in synchrony or asynchrony with the touch on their back. Critically, the dot was rendered invisible through continuous flash suppression. Manipulating the visual context by presenting the dot moving on either a body form, or a non-bodily object we show that: (i) conflict induced by synchronous visuo-tactile stimulation in a body context is associated with a delayed conscious access compared to asynchronous visuo-tactile stimulation, (ii) this effect occurs only in the context of a visual body form, and (iii) is not due to detection or response biases. The results indicate that body-related visuo-tactile conflicts impact visual consciousness by facilitating access of non-conflicting visual information to awareness, and that these are sensitive to the visual context in which they are presented, highlighting the interplay between bodily signals and visual experience. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Simultaneous odour-face presentation strengthens hedonic evaluations and event-related potential responses influenced by unpleasant odour.

    PubMed

    Cook, Stephanie; Kokmotou, Katerina; Soto, Vicente; Wright, Hazel; Fallon, Nicholas; Thomas, Anna; Giesbrecht, Timo; Field, Matt; Stancak, Andrej

    2018-04-13

    Odours alter evaluations of concurrently presented visual stimuli, such as faces. Stimulus onset asynchrony (SOA) is known to affect evaluative priming in various sensory modalities. However, effects of SOA on odour priming of visual stimuli are not known. The present study aimed to analyse whether subjective and cortical activation changes during odour priming would vary as a function of SOA between odours and faces. Twenty-eight participants rated faces under pleasant, unpleasant, and no-odour conditions using visual analogue scales. In half of the trials, faces appeared one second after odour offset (SOA 1); in the other half, faces appeared during the odour pulse (SOA 2). EEG was recorded continuously using a 128-channel system, and event-related potentials (ERPs) to face stimuli were evaluated using statistical parametric mapping (SPM). Faces presented during unpleasant-odour stimulation were rated significantly less pleasant than the same faces presented one second after offset of the unpleasant odour. Scalp-time clusters in the late-positive-potential (LPP) time-range showed an interaction between odour and SOA effects, whereby activation was stronger for faces presented simultaneously with the unpleasant odour, compared to the same faces presented after odour offset. Our results highlight stronger unpleasant odour priming with simultaneous, compared to delayed, odour-face presentation. Such effects were represented in both behavioural and neural data. A greater cortical and subjective response during simultaneous presentation of faces and unpleasant odour may have an adaptive role, allowing for a prompt and focused behavioural reaction to a concurrent stimulus when an aversive odour signals danger or unwanted social interaction. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. Hemispheric specialization for global and local processing: A direct comparison of linguistic and non-linguistic stimuli.

    PubMed

    Brederoo, Sanne G; Nieuwenstein, Mark R; Lorist, Monicque M; Cornelissen, Frans W

    2017-12-01

    It is often assumed that the human brain processes the global and local properties of visual stimuli in a lateralized fashion, with a left hemisphere (LH) specialization for local detail, and a right hemisphere (RH) specialization for global form. However, the evidence for such global-local lateralization stems predominantly from studies using linguistic stimuli, the processing of which has itself been shown to be LH lateralized. In addition, some studies have reported a reversal of global-local lateralization when using non-linguistic stimuli. Accordingly, it remains unclear whether global-local lateralization may in fact be stimulus-specific. To address this issue, we asked participants to respond to linguistic and non-linguistic stimuli that were presented in the right and left visual fields, allowing for first access by the LH and RH, respectively. The results showed global-RH and local-LH advantages for both stimulus types, but the global lateralization effect was larger for linguistic stimuli. Furthermore, this pattern of results was found to be robust, as it was observed regardless of two other task manipulations. We conclude that the instantiation and direction of global and local lateralization are not stimulus-specific. However, the magnitude of global, but not local, lateralization depends on stimulus type. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Mirror me: Imitative responses in adults with autism.

    PubMed

    Schunke, Odette; Schöttle, Daniel; Vettorazzi, Eik; Brandt, Valerie; Kahl, Ursula; Bäumer, Tobias; Ganos, Christos; David, Nicole; Peiker, Ina; Engel, Andreas K; Brass, Marcel; Münchau, Alexander

    2016-02-01

    Dysfunctions of the human mirror neuron system have been postulated to underlie some deficits in autism spectrum disorders including poor imitative performance and impaired social skills. Using three reaction time experiments addressing mirror neuron system functions under simple and complex conditions, we examined 20 adult autism spectrum disorder participants and 20 healthy controls matched for age, gender and education. Participants performed simple finger-lifting movements in response to (1) biological finger and non-biological dot movement stimuli, (2) acoustic stimuli and (3) combined visual-acoustic stimuli with different contextual (compatible/incompatible) and temporal (simultaneous/asynchronous) relation. Mixed model analyses revealed slower reaction times in autism spectrum disorder. Both groups responded faster to biological compared to non-biological stimuli (Experiment 1) implying intact processing advantage for biological stimuli in autism spectrum disorder. In Experiment 3, both groups had similar 'interference effects' when stimuli were presented simultaneously. However, autism spectrum disorder participants had abnormally slow responses particularly when incompatible stimuli were presented consecutively. Our results suggest imitative control deficits rather than global imitative system impairments. © The Author(s) 2015.

  17. Empathy, Pain and Attention: Cues that Predict Pain Stimulation to the Partner and the Self Capture Visual Attention

    PubMed Central

    Wu, Lingdan; Kirmse, Ursula; Flaisch, Tobias; Boiandina, Ganna; Kenter, Anna; Schupp, Harald T.

    2017-01-01

    Empathy motivates helping and cooperative behaviors and plays an important role in social interactions and personal communication. The present research examined the hypothesis that a state of empathy guides attention towards stimuli significant to others in a similar way as to stimuli relevant to the self. Sixteen couples in romantic partnerships were examined in a pain-related empathy paradigm including an anticipation phase and a stimulation phase. Abstract visual symbols (i.e., arrows and flashes) signaled the delivery of a Pain or Nopain stimulus to the partner or the self while dense sensor event-related potentials (ERPs) were simultaneously recorded from both persons. During the anticipation phase, stimuli predicting Pain compared to Nopain stimuli to the partner elicited a larger early posterior negativity (EPN) and late positive potential (LPP), which were similar in topography and latency to the EPN and LPP modulations elicited by stimuli signaling pain for the self. Noteworthy, using abstract cue symbols to cue Pain and Nopain stimuli suggests that these effects are not driven by perceptual features. The findings demonstrate that symbolic stimuli relevant for the partner capture attention, which implies a state of empathy to the pain of the partner. From a broader perspective, states of empathy appear to regulate attention processing according to the perceived needs and goals of the partner. PMID:28979199

  18. Potentiation of the early visual response to learned danger signals in adults and adolescents

    PubMed Central

    Howsley, Philippa; Jordan, Jeff; Johnston, Pat

    2015-01-01

    The reinforcing effects of aversive outcomes on avoidance behaviour are well established. However, their influence on perceptual processes is less well explored, especially during the transition from adolescence to adulthood. Using electroencephalography, we examined whether learning to actively or passively avoid harm can modulate early visual responses in adolescents and adults. The task included two avoidance conditions, active and passive, where two different warning stimuli predicted the imminent, but avoidable, presentation of an aversive tone. To avoid the aversive outcome, participants had to learn to emit an action (active avoidance) for one of the warning stimuli and omit an action for the other (passive avoidance). Both adults and adolescents performed the task with a high degree of accuracy. For both adolescents and adults, increased N170 event-related potential amplitudes were found for both the active and the passive warning stimuli compared with control conditions. Moreover, the potentiation of the N170 to the warning stimuli was stable and long lasting. Developmental differences were also observed; adolescents showed greater potentiation of the N170 component to danger signals. These findings demonstrate, for the first time, that learned danger signals in an instrumental avoidance task can influence early visual sensory processes in both adults and adolescents. PMID:24652856

  19. Are visual threats prioritized without awareness? A critical review and meta-analysis involving 3 behavioral paradigms and 2696 observers.

    PubMed

    Hedger, Nicholas; Gray, Katie L H; Garner, Matthew; Adams, Wendy J

    2016-09-01

    Given capacity limits, only a subset of stimuli give rise to a conscious percept. Neurocognitive models suggest that humans have evolved mechanisms that operate without awareness and prioritize threatening stimuli over neutral stimuli in subsequent perception. In this meta-analysis, we review evidence for this 'standard hypothesis' emanating from 3 widely used, but rather different, experimental paradigms that have been used to manipulate awareness. We found a small pooled threat-bias effect in the masked visual probe paradigm, a medium effect in the binocular rivalry paradigm and highly inconsistent effects in the breaking continuous flash suppression paradigm. Substantial heterogeneity was explained by the stimulus type: the only threat stimuli that were robustly prioritized across all 3 paradigms were fearful faces. Meta-regression revealed that anxiety may modulate threat-biases, but only under specific presentation conditions. We also found that insufficiently rigorous awareness measures, inadequate control of response biases and low-level confounds may undermine claims of genuine unconscious threat processing. Considering the data together, we suggest that uncritical acceptance of the standard hypothesis is premature: current behavioral evidence for threat-sensitive visual processing that operates without awareness is weak. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
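
    A pooled effect of the kind reported above is conventionally estimated with a random-effects model. The sketch below shows the standard DerSimonian-Laird estimator in Python; it is a generic illustration with invented per-study effects and variances, not the authors' exact pipeline.

      import numpy as np

      def dersimonian_laird(effects, variances):
          # Fixed-effect weights and pooled estimate.
          y = np.asarray(effects, float)
          v = np.asarray(variances, float)
          w = 1.0 / v
          y_fe = np.sum(w * y) / np.sum(w)
          # Cochran's Q and the DerSimonian-Laird between-study variance.
          q = np.sum(w * (y - y_fe) ** 2)
          tau2 = max(0.0, (q - (y.size - 1)) /
                     (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
          # Random-effects weights fold in the heterogeneity estimate tau^2.
          w_re = 1.0 / (v + tau2)
          pooled = np.sum(w_re * y) / np.sum(w_re)
          return pooled, np.sqrt(1.0 / np.sum(w_re))  # effect and its standard error

      # Hypothetical per-study threat-bias effects (Hedges' g) and variances:
      print(dersimonian_laird([0.42, 0.10, 0.55, -0.05, 0.31],
                              [0.04, 0.09, 0.05, 0.12, 0.07]))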

  20. Preserved subliminal processing and impaired conscious access in schizophrenia

    PubMed Central

    Del Cul, Antoine; Dehaene, Stanislas; Leboyer, Marion

    2006-01-01

    Background Studies of visual backward masking have frequently revealed an elevated masking threshold in schizophrenia. This finding has often been interpreted as indicating a low-level visual deficit. However, more recent models suggest that masking may also involve late and higher-level integrative processes, while leaving intact early “bottom-up” visual processing. Objectives We tested the hypothesis that the backward masking deficit in schizophrenia corresponds to a deficit in the late stages of conscious perception, whereas the subliminal processing of masked stimuli is fully preserved. Method 28 patients with schizophrenia and 28 normal controls performed two backward-masking experiments. We used Arabic digits as stimuli and quasi-continuously varied the interval between each digit and a subsequent mask, thus allowing us to progressively “unmask” the stimuli. We finely quantified their degree of visibility using both objective and subjective measures to evaluate the threshold duration for access to consciousness. We also studied the priming effect caused by the variably masked numbers on a comparison task performed on a subsequently presented and highly visible target number. Results The threshold delay between digit and mask necessary for the conscious perception of the masked stimulus was longer in patients than in control subjects. This higher consciousness threshold in patients was confirmed by an objective and a subjective measure, and both measures were highly correlated for patients as well as for controls. However, subliminal priming by masked numbers was intact and of identical magnitude in patients and controls. Conclusions Access to conscious report of masked stimuli is impaired in schizophrenia, while fast bottom-up processing of the same stimuli, as assessed by subliminal priming, is preserved. These findings suggest a high-level origin of the masking deficit in schizophrenia, although they leave open for further research its exact relation to previously identified bottom-up visual processing abnormalities. PMID:17146006

  1. Increasing Working Memory Load Reduces Processing of Cross-Modal Task-Irrelevant Stimuli Even after Controlling for Task Difficulty and Executive Capacity

    PubMed Central

    Simon, Sharon S.; Tusch, Erich S.; Holcomb, Phillip J.; Daffner, Kirk R.

    2016-01-01

    The classic account of the load theory (LT) of attention suggests that increasing cognitive load leads to greater processing of task-irrelevant stimuli due to competition for limited executive resources, which reduces the ability to actively maintain current processing priorities. Studies testing this hypothesis have yielded widely divergent outcomes. The inconsistent results may, in part, be related to variability in executive capacity (EC) and task difficulty across subjects in different studies. Here, we used a cross-modal paradigm to investigate whether augmented working memory (WM) load leads to increased early distracter processing, and controlled for the potential confounders of EC and task difficulty. Twenty-three young subjects were engaged in a primary visual WM task, under high and low load conditions, while instructed to ignore irrelevant auditory stimuli. Demands of the high load condition were individually titrated to make task difficulty comparable across subjects with differing EC. Event-related potentials (ERPs) were used to measure neural activity in response to stimuli presented in both the task relevant modality (visual) and task-irrelevant modality (auditory). Behavioral results indicate that the load manipulation and titration procedure of the primary visual task were successful. ERPs demonstrated that in response to visual target stimuli, there was a load-related increase in the posterior slow wave, an index of sustained attention and effort. Importantly, under high load, there was a decrease of the auditory N1 in response to distracters, a marker of early auditory processing. These results suggest that increased WM load is associated with enhanced attentional engagement and protection from distraction in a cross-modal setting, even after controlling for task difficulty and EC. Our findings challenge the classic LT and offer support for alternative models. PMID:27536226

  2. Increasing Working Memory Load Reduces Processing of Cross-Modal Task-Irrelevant Stimuli Even after Controlling for Task Difficulty and Executive Capacity.

    PubMed

    Simon, Sharon S; Tusch, Erich S; Holcomb, Phillip J; Daffner, Kirk R

    2016-01-01

    The classic account of the load theory (LT) of attention suggests that increasing cognitive load leads to greater processing of task-irrelevant stimuli due to competition for limited executive resources, which reduces the ability to actively maintain current processing priorities. Studies testing this hypothesis have yielded widely divergent outcomes. The inconsistent results may, in part, be related to variability in executive capacity (EC) and task difficulty across subjects in different studies. Here, we used a cross-modal paradigm to investigate whether augmented working memory (WM) load leads to increased early distracter processing, and controlled for the potential confounders of EC and task difficulty. Twenty-three young subjects were engaged in a primary visual WM task, under high and low load conditions, while instructed to ignore irrelevant auditory stimuli. Demands of the high load condition were individually titrated to make task difficulty comparable across subjects with differing EC. Event-related potentials (ERPs) were used to measure neural activity in response to stimuli presented in both the task relevant modality (visual) and task-irrelevant modality (auditory). Behavioral results indicate that the load manipulation and titration procedure of the primary visual task were successful. ERPs demonstrated that in response to visual target stimuli, there was a load-related increase in the posterior slow wave, an index of sustained attention and effort. Importantly, under high load, there was a decrease of the auditory N1 in response to distracters, a marker of early auditory processing. These results suggest that increased WM load is associated with enhanced attentional engagement and protection from distraction in a cross-modal setting, even after controlling for task difficulty and EC. Our findings challenge the classic LT and offer support for alternative models.

  3. Exposure to subliminal arousing stimuli induces robust activation in the amygdala, hippocampus, anterior cingulate, insular cortex and primary visual cortex: a systematic meta-analysis of fMRI studies.

    PubMed

    Brooks, S J; Savov, V; Allzén, E; Benedict, C; Fredriksson, R; Schiöth, H B

    2012-02-01

    Functional Magnetic Resonance Imaging (fMRI) demonstrates that the subliminal presentation of arousing stimuli can activate subcortical brain regions independently of consciousness-generating top-down cortical modulation loops. Delineating these processes may elucidate mechanisms for arousal, aberration in which may underlie some psychiatric conditions. Here we are the first to review and discuss four Activation Likelihood Estimation (ALE) meta-analyses of fMRI studies using subliminal paradigms. We find that as many as 9 of the 12 studies using subliminal presentation of faces contribute to activation of the amygdala, and that a significantly high number of studies report activation in the bilateral anterior cingulate, bilateral insular cortex, hippocampus and primary visual cortex. Subliminal faces are the strongest modality, whereas lexical stimuli are the weakest. Meta-analyses excluding studies that used Regions of Interest (ROI) revealed no biasing effect. Core neuronal arousal in the brain, which may be at first independent of conscious processing, potentially involves a network incorporating primary visual areas, somatosensory, implicit memory and conflict monitoring regions. These data could provide candidate brain regions for the study of psychiatric disorders associated with aberrant automatic emotional processing. Copyright © 2011 Elsevier Inc. All rights reserved.

  4. Selective attention to task-irrelevant emotional distractors is unaffected by the perceptual load associated with a foreground task.

    PubMed

    Hindi Attar, Catherine; Müller, Matthias M

    2012-01-01

    A number of studies have shown that emotionally arousing stimuli are preferentially processed in the human brain. Whether or not this preference persists under increased perceptual load associated with a task at hand remains an open question. Here we manipulated two possible determinants of the attentional selection process, perceptual load associated with a foreground task and the emotional valence of concurrently presented task-irrelevant distractors. As a direct measure of sustained attentional resource allocation in early visual cortex we used steady-state visual evoked potentials (SSVEPs) elicited by distinct flicker frequencies of task and distractor stimuli. Subjects performed either a detection (low load) or a discrimination (high load) task on a centrally presented symbol stream that flickered at 8.6 Hz while task-irrelevant neutral or unpleasant pictures from the International Affective Picture System (IAPS) flickered at a frequency of 12 Hz in the background of the stream. As reflected in target detection rates and SSVEP amplitudes to both task and distractor stimuli, unpleasant relative to neutral background pictures more strongly withdrew processing resources from the foreground task. Importantly, this finding was unaffected by the factor 'load', which turned out to be a weak modulator of attentional processing in human visual cortex.
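
    Frequency tagging of this kind is typically analyzed by reading the EEG amplitude spectrum at each stimulation frequency. A minimal sketch of that step, assuming a single-channel steady-state segment; the signal parameters below are invented for illustration.

      import numpy as np

      def ssvep_amplitude(eeg, fs, freq):
          # Windowed amplitude spectrum of one channel; return the amplitude
          # in the FFT bin nearest the tagged stimulation frequency.
          spec = np.abs(np.fft.rfft(eeg * np.hanning(eeg.size))) / eeg.size
          bins = np.fft.rfftfreq(eeg.size, d=1.0 / fs)
          return spec[np.argmin(np.abs(bins - freq))]

      fs = 500.0                                   # assumed sampling rate (Hz)
      t = np.arange(0, 4.0, 1.0 / fs)
      # Toy signal: 8.6 Hz task tag + 12 Hz distractor tag + noise.
      eeg = (1.2 * np.sin(2 * np.pi * 8.6 * t)
             + 0.8 * np.sin(2 * np.pi * 12.0 * t)
             + np.random.default_rng(0).normal(size=t.size))
      print(ssvep_amplitude(eeg, fs, 8.6), ssvep_amplitude(eeg, fs, 12.0))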

  5. Do working memory-driven attention shifts speed up visual awareness?

    PubMed

    Pan, Yi; Cheng, Qiu-Ping

    2011-11-01

    Previous research has shown that content representations in working memory (WM) can bias attention in favor of matching stimuli in the scene. Using a visual prior-entry procedure, we here investigate whether such WM-driven attention shifts can speed up the conscious awareness of memory-matching relative to memory-mismatching stimuli. Participants were asked to hold a color cue in WM and to subsequently perform a temporal order judgment (TOJ) task by reporting either of two different-colored circles (presented to the left and right of fixation with a variable temporal interval) as having the first onset. One of the two TOJ circles could match the memory cue in color. We found that awareness of the temporal order of the circle onsets was not affected by the contents of WM, even when participants were explicitly informed that one of the TOJ circles would always match the WM contents. The null effect of WM on TOJs was not due to an inability of the memory-matching item to capture attention, since response times to the target in a follow-up experiment were improved when it appeared at the location of the memory-matching item. The present findings suggest that WM-driven attention shifts cannot accelerate phenomenal awareness of matching stimuli in the visual field.
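
    In a TOJ design, prior entry is conventionally quantified as a shift in the point of subjective simultaneity (PSS) of a fitted psychometric function; the null WM effect reported above corresponds to a PSS near zero. A sketch of such a fit, with invented SOAs and response proportions:

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.stats import norm

      def toj_curve(soa, pss, sd):
          # Probability of reporting the memory-matching circle as first onset,
          # modeled as a cumulative Gaussian of the onset asynchrony (ms);
          # 'sd' is the spread of the function (related to the JND).
          return norm.cdf(soa, loc=pss, scale=sd)

      soas = np.array([-90, -60, -30, 0, 30, 60, 90])          # hypothetical SOAs
      p_first = np.array([.05, .15, .35, .55, .75, .90, .97])  # hypothetical data
      (pss, sd), _ = curve_fit(toj_curve, soas, p_first, p0=(0.0, 30.0))
      print(pss, sd)  # a PSS near 0 ms means no prior entry, as the study found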

  6. Sex differences in adults' relative visual interest in female and male faces, toys, and play styles.

    PubMed

    Alexander, Gerianne M; Charles, Nora

    2009-06-01

    An individual's reproductive potential appears to influence response to attractive faces of the opposite sex. Otherwise, relatively little is known about the characteristics of the adult observer that may influence his or her affective evaluation of male and female faces. An untested hypothesis (based on the proposed role of attractive faces in mate selection) is that most women would show greater interest in male faces whereas most men would show greater interest in female faces. Further, evidence from individuals with preferences for same-sex sexual partners suggests that response to attractive male and female faces may be influenced by gender-linked play preferences. To test these hypotheses, visual attention directed to sex-linked stimuli (faces, toys, play styles) was measured in 39 men and 44 women using eye tracking technology. Consistent with our predictions, men directed greater visual attention to all male-typical stimuli and visual attention to male and female faces was associated with visual attention to gender conforming or nonconforming stimuli in a manner consistent with previous research on sexual orientation. In contrast, women showed a visual preference for female-typical toys, but no visual preference for male faces or female-typical play styles. These findings indicate that sex differences in visual processing extend beyond stimuli associated with adult sexual behavior. We speculate that sex differences in visual processing are a component of the expression of gender phenotypes across the lifespan that may reflect sex differences in the motivational properties of gender-linked stimuli.

  7. Arousal (but not valence) amplifies the impact of salience.

    PubMed

    Sutherland, Matthew R; Mather, Mara

    2018-05-01

    Previous findings indicate that negative arousal enhances bottom-up attention biases favouring perceptually salient stimuli over less salient stimuli. The current study tests whether those effects were driven by emotional arousal or by negative valence by comparing how well participants could identify visually presented letters after hearing a negatively arousing, positively arousing, or neutral sound. On each trial, some letters were presented in a high contrast font and some in a low contrast font, creating a set of targets that differed in perceptual salience. Sounds rated as more emotionally arousing led to better identification of highly salient letters but not of less salient ones, whereas sounds' valence ratings did not impact salience biases. Thus, arousal, rather than valence, is a key factor enhancing visual processing of perceptually salient targets.

  8. Extinction and anti-extinction: the "attentional waiting" hypothesis.

    PubMed

    Watling, Rosamond; Danckert, James; Linnell, Karina J; Cocchini, Gianna

    2013-03-01

    Patients with visual extinction have difficulty detecting a single contralesional stimulus when a second stimulus is simultaneously presented on the ipsilesional side. The rarely reported phenomenon of visual anti-extinction describes the opposite behavior, in which patients show greater difficulty in reporting a stimulus presented in isolation than they do in reporting 2 simultaneously presented stimuli. S. J. Goodrich and R. Ward (1997, Anti-extinction following unilateral parietal damage, Cognitive Neuropsychology, Vol. 14, pp. 595-612) suggested that visual anti-extinction is the result of a task-specific mechanism in which processing of the ipsilesional stimulus facilitates responses to the contralesional stimulus; in contrast, G. W. Humphreys, M. J. Riddoch, G. Nys, and D. Heinke (2002, Transient binding by time: Neuropsychological evidence from anti-extinction, Cognitive Neuropsychology, Vol. 19, pp. 361-380) suggested that temporal binding groups contralesional and ipsilesional stimuli together at brief exposure durations. We investigated extinction and anti-extinction phenomena in 3 brain-damaged patients using an extinction paradigm in which the stimulus exposure duration was systematically manipulated. Two patients showed both extinction and anti-extinction depending on the exposure duration of stimuli. Data confirmed the crucial role of duration in modulating the effect of extinction and anti-extinction. However, contrary to Humphreys and colleagues' (2002) single case, our patients showed extinction for short and anti-extinction for long exposure durations, suggesting that different mechanisms might underlie our patients' pattern of data. We discuss a novel "attentional waiting" hypothesis, which proposes that anti-extinction may be observed in patients showing extinction if the exposure duration of stimuli is increased. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  9. Using complex auditory-visual samples to produce emergent relations in children with autism.

    PubMed

    Groskreutz, Nicole C; Karsina, Allen; Miguel, Caio F; Groskreutz, Mark P

    2010-03-01

    Six participants with autism learned conditional relations between complex auditory-visual sample stimuli (dictated words and pictures) and simple visual comparisons (printed words) using matching-to-sample training procedures. Pre- and posttests examined potential stimulus control by each element of the complex sample when presented individually and emergence of additional conditional relations and oral labeling. Tests revealed class-consistent performance for all participants following training.

  10. A noninvasive brain computer interface using visually-induced near-infrared spectroscopy responses.

    PubMed

    Chen, Cheng-Hsuan; Ho, Ming-Shan; Shyu, Kuo-Kai; Hsu, Kou-Cheng; Wang, Kuo-Wei; Lee, Po-Lei

    2014-09-19

    Visually-induced near-infrared spectroscopy (NIRS) responses were utilized to design a brain computer interface (BCI) system. Four circular checkerboards driven by distinct flickering sequences were displayed on an LCD screen as visual stimuli to induce subjects' NIRS responses. Each flickering sequence was a concatenation of alternating flickering segments and resting segments. The flickering segments had a fixed duration of 3 s, whereas each resting segment was chosen randomly within 15-20 s to make the different flickering sequences mutually independent. Six subjects were recruited, and each was requested to gaze at the four visual stimuli one after another in a random order. Since visual responses in the human brain are time-locked to the onsets of visual stimuli, and the flicker sequences of the distinct visual stimuli were designed to be mutually independent, the NIRS responses induced by the user's gazed target can be discerned from those to non-gazed targets by applying a simple averaging process. The accuracies for the six subjects were higher than 90% after 10 or more epochs were averaged. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
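
    The decoding logic rests on the stimulus timelines being mutually independent: averaging the NIRS trace time-locked to one stimulus's onsets reinforces the response to that stimulus while responses to the others average out. A rough sketch of the sequence generation and the averaging step; the function names, the 10 Hz NIRS rate and the 15-s analysis window are assumptions, not details from the paper.

      import numpy as np
      rng = np.random.default_rng(0)

      def flicker_onsets(n_epochs, flick_s=3.0, rest_range=(15.0, 20.0)):
          # One stimulus timeline: fixed 3-s flickering segments separated by
          # rests drawn uniformly from 15-20 s; the random rests decorrelate
          # the four stimulus sequences from one another.
          onsets, t = [], 0.0
          for _ in range(n_epochs):
              t += rng.uniform(*rest_range)
              onsets.append(t)
              t += flick_s
          return np.array(onsets)

      def averaged_response(nirs, fs, onsets, window_s=15.0):
          # Average signal segments time-locked to one stimulus's onsets.
          # Responses to the gazed stimulus accumulate; the others average out.
          n = int(window_s * fs)
          segs = [nirs[int(o * fs):int(o * fs) + n]
                  for o in onsets if int(o * fs) + n <= nirs.size]
          return np.mean(segs, axis=0)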

  11. Dissociated roles of the parietal and frontal cortices in the scope and control of attention during visual working memory.

    PubMed

    Li, Siyao; Cai, Ying; Liu, Jing; Li, Dawei; Feng, Zifang; Chen, Chuansheng; Xue, Gui

    2017-04-01

    Mounting evidence suggests that multiple mechanisms underlie working memory capacity. Using transcranial direct current stimulation (tDCS), the current study aimed to provide causal evidence for the neural dissociation of two mechanisms underlying visual working memory (WM) capacity, namely, the scope and control of attention. A change detection task with distractors was used, where a number of colored bars (i.e., two red bars, four red bars, or two red plus two blue bars) were presented on both sides (Experiment 1) or at the center (Experiment 2) of the screen for 100 ms, and participants were instructed to remember the red bars and to ignore the blue bars (in both experiments), as well as to ignore the stimuli on the uncued side (Experiment 1 only). In both experiments, participants finished three sessions of the task after 15 min of 1.5 mA anodal tDCS administered on the right prefrontal cortex (PFC), the right posterior parietal cortex (PPC), and the primary visual cortex (VC), respectively. The VC stimulation served as an active control condition. We found that compared to stimulation on the VC, stimulation on the right PPC specifically increased the visual WM capacity under the no-distractor condition (i.e., 4 red bars), whereas stimulation on the right PFC specifically increased the visual WM capacity under the distractor condition (i.e., 2 red bars plus 2 blue bars). These results suggest that the PPC and PFC are involved in the scope and control of attention, respectively. We further showed that compared to central presentation of the stimuli (Experiment 2), bilateral presentation of the stimuli (on both sides of the fixation in Experiment 1) led to an additional demand for attention control. Our results emphasize the dissociated roles of the frontal and parietal lobes in visual WM capacity, and provide a deeper understanding of the neural mechanisms of WM. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. The Human Brain Uses Noise

    NASA Astrophysics Data System (ADS)

    Mori, Toshio; Kai, Shoichi

    2003-05-01

    We present the first observation of stochastic resonance (SR) in the human brain's visual processing area. The novel experimental protocol is to stimulate the right eye with a sub-threshold periodic optical signal and the left eye with a noisy one. Because each eye receives only one of the two signals, the stimuli are not mixed at the sensory organs but first converge in the visual cortex. With many noise sources present in the brain, higher brain functions, e.g. perception and cognition, may exploit SR.
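
    Stochastic resonance is easy to demonstrate with a toy threshold detector: a sub-threshold sinusoid produces no output on its own, becomes detectable at a moderate noise level, and is swamped when the noise grows too large. A self-contained simulation with invented parameters, not the study's actual stimuli:

      import numpy as np

      def threshold_detector(signal, noise_sd, threshold=1.0, seed=0):
          # Binary output: 1 whenever signal + noise exceeds the threshold.
          # The periodic component alone (amplitude < threshold) never fires.
          noise = np.random.default_rng(seed).normal(0.0, noise_sd, signal.size)
          return (signal + noise > threshold).astype(float)

      fs, f = 1000.0, 5.0
      t = np.arange(0, 10.0, 1.0 / fs)
      sub = 0.8 * np.sin(2 * np.pi * f * t)      # sub-threshold periodic input
      for sd in (0.05, 0.3, 2.0):                # too little, moderate, too much
          out = threshold_detector(sub, sd)
          spec = np.abs(np.fft.rfft(out - out.mean()))
          k = np.argmin(np.abs(np.fft.rfftfreq(out.size, 1.0 / fs) - f))
          print(sd, spec[k] / np.median(spec))   # crude SNR at the signal frequency

    The printed ratio peaks at the intermediate noise level, which is the signature of SR.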

  13. Effects of spatial cues on color-change detection in humans

    PubMed Central

    Herman, James P.; Bogadhi, Amarender R.; Krauzlis, Richard J.

    2015-01-01

    Studies of covert spatial attention have largely used motion, orientation, and contrast stimuli as these features are fundamental components of vision. The feature dimension of color is also fundamental to visual perception, particularly for catarrhine primates, and yet very little is known about the effects of spatial attention on color perception. Here we present results using novel dynamic color stimuli in both discrimination and color-change detection tasks. We find that our stimuli yield comparable discrimination thresholds to those obtained with static stimuli. Further, we find that an informative spatial cue improves performance and speeds response time in a color-change detection task compared with an uncued condition, similar to what has been demonstrated for motion, orientation, and contrast stimuli. Our results demonstrate the use of dynamic color stimuli for an established psychophysical task and show that color stimuli are well suited to the study of spatial attention. PMID:26047359

  14. Unconscious integration of multisensory bodily inputs in the peripersonal space shapes bodily self-consciousness.

    PubMed

    Salomon, Roy; Noel, Jean-Paul; Łukowska, Marta; Faivre, Nathan; Metzinger, Thomas; Serino, Andrea; Blanke, Olaf

    2017-09-01

    Recent studies have highlighted the role of multisensory integration as a key mechanism of self-consciousness. In particular, integration of bodily signals within the peripersonal space (PPS) underlies the experience of the self in a body we own (self-identification) and that is experienced as occupying a specific location in space (self-location), two main components of bodily self-consciousness (BSC). Experiments investigating the effects of multisensory integration on BSC have typically employed supra-threshold sensory stimuli, neglecting the role of unconscious sensory signals in BSC, as tested in other consciousness research. Here, we used psychophysical techniques to test whether multisensory integration of bodily stimuli underlying BSC also occurs for multisensory inputs presented below the threshold of conscious perception. Our results indicate that visual stimuli rendered invisible through continuous flash suppression boost processing of tactile stimuli on the body (Exp. 1), and enhance the perception of near-threshold tactile stimuli (Exp. 2), only once they entered PPS. We then employed unconscious multisensory stimulation to manipulate BSC. Participants were presented with tactile stimulation on their body and with visual stimuli on a virtual body, seen at a distance, which were either visible or rendered invisible. We found that participants reported higher self-identification with the virtual body in the synchronous visuo-tactile stimulation (as compared to asynchronous stimulation; Exp. 3), and shifted their self-location toward the virtual body (Exp. 4), even if stimuli were fully invisible. Our results indicate that multisensory inputs, even outside of awareness, are integrated and affect the phenomenological content of self-consciousness, grounding BSC firmly in the field of psychophysical consciousness studies. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Cortical Integration of Audio-Visual Information

    PubMed Central

    Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A.

    2013-01-01

    We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multimodal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked. PMID:20709442

  16. Stimulus novelty, task relevance and the visual evoked potential in man

    NASA Technical Reports Server (NTRS)

    Courchesne, E.; Hillyard, S. A.; Galambos, R.

    1975-01-01

    The effect of task relevance on P3 (waveform of human evoked potential) waves and the methodologies used to deal with them are outlined. Visual evoked potentials (VEPs) were recorded from normal adult subjects performing in a visual discrimination task. Subjects counted the number of presentations of the numeral 4, which was interposed rarely and randomly within a sequence of tachistoscopically flashed background stimuli (the numeral 2). Intrusive, task-irrelevant (not counted) stimuli were also interspersed rarely and randomly in the sequence of 2s; these stimuli were of two types: simples, which were easily recognizable, and novels, which were completely unrecognizable. It was found that the simples and the counted 4s evoked posteriorly distributed P3 waves, while the irrelevant novels evoked large, frontally distributed P3 waves. These large, frontal P3 waves to novels were also found to be preceded by large N2 waves. These findings indicate that the P3 wave is not a unitary phenomenon but should be considered in terms of a family of waves, differing in their brain generators and in their psychological correlates.

  17. Blood Oxygen Level-Dependent Activation of the Primary Visual Cortex Predicts Size Adaptation Illusion

    PubMed Central

    Pooresmaeili, Arezoo; Arrighi, Roberto; Biagi, Laura; Morrone, Maria Concetta

    2016-01-01

    In natural scenes, objects rarely occur in isolation but appear within a spatiotemporal context. Here, we show that the perceived size of a stimulus is significantly affected by the context of the scene: brief previous presentation of larger or smaller adapting stimuli at the same region of space changes the perceived size of a test stimulus, with larger adapting stimuli causing the test to appear smaller than veridical and vice versa. In a human fMRI study, we measured the blood oxygen level-dependent activation (BOLD) responses of the primary visual cortex (V1) to the contours of large-diameter stimuli and found that activation closely matched the perceptual rather than the retinal stimulus size: the activated area of V1 increased or decreased, depending on the size of the preceding stimulus. A model based on local inhibitory V1 mechanisms simulated the inward or outward shifts of the stimulus contours and hence the perceptual effects. Our findings suggest that area V1 is actively involved in reshaping our perception to match the short-term statistics of the visual scene. PMID:24089504

  18. Validation of auditory detection response task method for assessing the attentional effects of cognitive load.

    PubMed

    Stojmenova, Kristina; Sodnik, Jaka

    2018-07-04

    There are 3 standardized versions of the Detection Response Task (DRT), 2 using visual stimuli (remote DRT and head-mounted DRT) and one using tactile stimuli. In this article, we present a study that proposes and validates a type of auditory signal to be used as the DRT stimulus, and we evaluate the proposed auditory version of the method by comparing it with the standardized visual and tactile versions. The study used a within-subject design in a driving simulator with 24 participants. Each participant performed 8 two-minute driving sessions involving 3 concurrent tasks: driving, responding to DRT stimuli, and performing a cognitive task (the n-back task). Presence of additional cognitive load and type of DRT stimuli were defined as independent variables. DRT response times and hit rates, n-back task performance, and pupil size were observed as dependent variables. Significant changes in pupil size on trials with a cognitive task, compared to trials without, showed that cognitive load was induced properly. Each DRT version showed a significant increase in response times and a decrease in hit rates on trials with a secondary cognitive task compared to trials without. The auditory and tactile versions produced results similar to each other and significantly larger load-related differences in response times and hit rates than the visual version. N-back performance did not differ significantly between trials with and without DRT stimuli, or among the different DRT stimulus modalities. The results from this study show that the auditory DRT version, using the signal implementation suggested in this article, is sensitive to the effects of cognitive load on a driver's attention and is significantly better than the remote visual and tactile versions for auditory-vocal cognitive (n-back) secondary tasks.
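
    DRT scoring follows a standardized rule; to my understanding (ISO 17488), a response falling roughly 100 ms to 2.5 s after stimulus onset counts as a hit. A sketch of that scoring, with the window treated as an assumption and the trial data invented:

      import numpy as np

      def score_drt(onsets, responses, lo=0.1, hi=2.5):
          # A response 100 ms to 2.5 s after stimulus onset counts as a hit
          # (window assumed per ISO 17488). 'responses' holds the first
          # response time per trial, NaN where no response occurred.
          rts = np.asarray(responses, float) - np.asarray(onsets, float)
          hits = (rts >= lo) & (rts <= hi)       # NaN comparisons yield False
          hit_rate = hits.mean()
          mean_rt = np.nanmean(np.where(hits, rts, np.nan))
          return hit_rate, mean_rt

      # Hypothetical trial data (seconds): one hit, one premature response,
      # one miss, one response outside the window.
      print(score_drt([1.0, 5.0, 9.0, 13.0], [1.45, 5.08, np.nan, 16.2]))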

  19. Effect of a combination of flip and zooming stimuli on the performance of a visual brain-computer interface for spelling.

    PubMed

    Cheng, Jiao; Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Bei; Wang, Xingyu; Cichocki, Andrzej

    2018-02-13

    Brain-computer interface (BCI) systems can allow their users to communicate with the external world by recognizing intention directly from their brain activity without the assistance of the peripheral motor nervous system. The P300-speller is one of the most widely used visual BCI applications. In previous studies, a flip stimulus (rotating the background area of the character) based on apparent motion suffered less from refractory effects; however, its performance was not improved significantly. In addition, a presentation paradigm that used a "zooming" action (changing the size of the symbol) has been shown to evoke relatively higher P300 amplitudes and to yield better BCI performance. To extend this method of stimulus presentation within a BCI and, consequently, to improve BCI performance, we present a new paradigm combining the flip stimulus with a zooming action. This new presentation modality allowed BCI users to focus their attention more easily. We investigated whether such an action could combine the advantages of both types of stimulus presentation to bring a significant improvement in performance compared to the conventional flip stimulus. The experimental results showed that the proposed paradigm could obtain significantly higher classification accuracies and bit rates than the conventional flip paradigm (p < 0.01).
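
    Bit rates reported for P300 spellers are usually Wolpaw's information-transfer rate; this abstract does not state which definition the authors used, so the following simply illustrates the common formula with hypothetical numbers.

      import math

      def wolpaw_bits_per_selection(n, p):
          # Wolpaw information-transfer rate: n = number of selectable symbols,
          # p = classification accuracy.
          if p >= 1.0:
              return math.log2(n)
          if p <= 1.0 / n:
              return 0.0
          return (math.log2(n) + p * math.log2(p)
                  + (1 - p) * math.log2((1 - p) / (n - 1)))

      # E.g., a 36-symbol speller at 90% accuracy and 10 s per selection:
      b = wolpaw_bits_per_selection(36, 0.90)
      print(b, b * 60 / 10.0)   # bits per selection, bits per minute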

  20. A behavioural investigation of human visual short term memory for colour.

    PubMed

    Nemes, V A; Parry, N R A; McKeefry, D J

    2010-09-01

    We examined visual short term memory (VSTM) for colour using a delayed-match-to-sample paradigm. In these experiments we measured the effects of increasing inter-stimulus interval (ISI), varying between 0 and 10 s, on the ability of five colour normal human observers to make colour matches between a reference and subsequently presented test stimuli. The coloured stimuli used were defined by different chromatic axes on the isoluminant plane of DKL colour space. In preliminary experiments we used a hue scaling procedure to identify a total of 12 colour stimuli which served as reference hues in the colour memory experiments: four stimuli were exemplars of red, green, blue and yellow colour appearance categories, four were located between these categories and a further four were located on the cardinal axes that isolated the activity of the cone-opponent mechanisms. Our results demonstrate that there is a reduction in the ability of observers to make accurate colour matches with increasing ISIs and that this reduced performance was similar for all colour stimuli. However, the shifts in hue that were measured between the reference and matched test stimuli were significantly greater for the cardinal stimuli compared to those measured for the stimuli defined by the hue scaling procedure. This deterioration in the retention of hue in VSTM for stimuli that isolate cone-opponent mechanisms may be a reflection of the reorganisation of colour processing that occurs in the cortex where colour appearance mechanisms become more prominent. © 2010 The Authors, Ophthalmic and Physiological Optics © 2010 The College of Optometrists.

  1. Effect of attentional load on audiovisual speech perception: evidence from ERPs

    PubMed Central

    Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E.; Soto-Faraco, Salvador; Tiippana, Kaisa

    2014-01-01

    Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech. PMID:25076922

  2. A method for real-time visual stimulus selection in the study of cortical object perception.

    PubMed

    Leeds, Daniel D; Tarr, Michael J

    2016-06-01

    The properties utilized by visual object perception in the mid- and high-level ventral visual pathway are poorly understood. To better establish and explore possible models of these properties, we adopt a data-driven approach in which we repeatedly interrogate neural units using functional Magnetic Resonance Imaging (fMRI) to establish each unit's image selectivity. This approach to imaging necessitates a search through a broad space of stimulus properties using a limited number of samples. To more quickly identify the complex visual features underlying human cortical object perception, we implemented a new functional magnetic resonance imaging protocol in which visual stimuli are selected in real-time based on BOLD responses to recently shown images. Two variations of this protocol were developed, one relying on natural object stimuli and a second based on synthetic object stimuli, both embedded in feature spaces based on the complex visual properties of the objects. During fMRI scanning, we continuously controlled stimulus selection in the context of a real-time search through these image spaces in order to maximize neural responses across pre-determined 1 cm3 brain regions. Elsewhere we have reported the patterns of cortical selectivity revealed by this approach (Leeds et al., 2014). In contrast, here our objective is to present more detailed methods and explore the technical and biological factors influencing the behavior of our real-time stimulus search. We observe that: 1) Searches converged more reliably when exploring a more precisely parameterized space of synthetic objects; 2) real-time estimation of cortical responses to stimuli is reasonably consistent; 3) search behavior was acceptably robust to delays in stimulus displays and subject motion effects. Overall, our results indicate that real-time fMRI methods may provide a valuable platform for continuing study of localized neural selectivity, both for visual object representation and beyond. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. A method for real-time visual stimulus selection in the study of cortical object perception

    PubMed Central

    Leeds, Daniel D.; Tarr, Michael J.

    2016-01-01

    The properties utilized by visual object perception in the mid- and high-level ventral visual pathway are poorly understood. To better establish and explore possible models of these properties, we adopt a data-driven approach in which we repeatedly interrogate neural units using functional Magnetic Resonance Imaging (fMRI) to establish each unit’s image selectivity. This approach to imaging necessitates a search through a broad space of stimulus properties using a limited number of samples. To more quickly identify the complex visual features underlying human cortical object perception, we implemented a new functional magnetic resonance imaging protocol in which visual stimuli are selected in real-time based on BOLD responses to recently shown images. Two variations of this protocol were developed, one relying on natural object stimuli and a second based on synthetic object stimuli, both embedded in feature spaces based on the complex visual properties of the objects. During fMRI scanning, we continuously controlled stimulus selection in the context of a real-time search through these image spaces in order to maximize neural responses across predetermined 1 cm3 brain regions. Elsewhere we have reported the patterns of cortical selectivity revealed by this approach (Leeds 2014). In contrast, here our objective is to present more detailed methods and explore the technical and biological factors influencing the behavior of our real-time stimulus search. We observe that: 1) Searches converged more reliably when exploring a more precisely parameterized space of synthetic objects; 2) Real-time estimation of cortical responses to stimuli are reasonably consistent; 3) Search behavior was acceptably robust to delays in stimulus displays and subject motion effects. Overall, our results indicate that real-time fMRI methods may provide a valuable platform for continuing study of localized neural selectivity, both for visual object representation and beyond. PMID:26973168
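
    The closed-loop protocol described in the two records above amounts to a gradient-free search over a parameterized stimulus space, scored by each trial's estimated BOLD response. The abstracts do not name the optimizer, so the sketch below uses a deliberately simple hill climber, with a simulated response function standing in for the scanner; all names and parameters are illustrative.

      import numpy as np
      rng = np.random.default_rng(1)

      def measure_bold(stim_params):
          # Stand-in for rendering a stimulus from 'stim_params', presenting it,
          # and estimating the target region's BOLD response (all hypothetical).
          preferred = np.array([0.2, -0.5, 0.9])   # unknown preferred features
          return -np.linalg.norm(stim_params - preferred) + 0.05 * rng.normal()

      def hill_climb(dim=3, iters=40, step=0.3):
          # Gradient-free search: propose a nearby point in stimulus space and
          # keep it if the measured response improves. In real-time use, this
          # loop is interleaved with stimulus display and image acquisition.
          x = rng.uniform(-1.0, 1.0, dim)
          best = measure_bold(x)
          for _ in range(iters):
              cand = x + rng.normal(0.0, step, dim)
              r = measure_bold(cand)
              if r > best:
                  x, best = cand, r
          return x, best

      print(hill_climb())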

  4. Social and nonsocial affective processing in schizophrenia - An ERP study.

    PubMed

    Okruszek, Ł; Wichniak, A; Jarkiewicz, M; Schudy, A; Gola, M; Jednoróg, K; Marchewka, A; Łojek, E

    2016-09-01

    Despite the social cognitive dysfunction that may be observed in patients with schizophrenia, knowledge about social and nonsocial affective processing in schizophrenia is scant. The aim of this study was to examine neurophysiological and behavioural responses to neutral and negative stimuli with (faces, people) and without (animals, objects) social content in schizophrenia. Twenty-six patients with schizophrenia (SCZ) and 21 healthy controls (HC) completed a visual oddball paradigm with either negative or neutral pictures from the Nencki Affective Picture System (NAPS) as targets while EEG was recorded. Half of the stimuli within each category presented social content (faces, people). Negative stimuli with social content produced lower N2 amplitude and higher mean LPP than any other type of stimuli in both groups. Despite differences in behavioural ratings and alterations in ERP processing of affective stimuli (lack of EPN differentiation, decreased P3 to neutral stimuli), SCZ were still able to respond to specific categories of stimuli similarly to HC. The pattern of results suggests that with no additional emotion-related task demands patients with schizophrenia may present similar attentional engagement with negative social stimuli as healthy controls. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. The Time Is Up: Compression of Visual Time Interval Estimations of Bimodal Aperiodic Patterns

    PubMed Central

    Duarte, Fabiola; Lemus, Luis

    2017-01-01

    The ability to estimate time intervals subserves many of our behaviors and perceptual experiences. However, it is not clear how aperiodic (AP) stimuli affect our perception of time intervals across sensory modalities. To address this question, we evaluated the human capacity to discriminate between two acoustic (A), visual (V) or audiovisual (AV) time intervals of trains of scattered pulses. We first measured the periodicity of those stimuli and then sought for correlations with the accuracy and reaction times (RTs) of the subjects. We found that, for all time intervals tested in our experiment, the visual system consistently perceived AP stimuli as being shorter than the periodic (P) ones. In contrast, such a compression phenomenon was not apparent during auditory trials. Our conclusions are: first, the subjects exposed to P stimuli are more likely to measure their durations accurately. Second, perceptual time compression occurs for AP visual stimuli. Lastly, AV discriminations are determined by A dominance rather than by AV enhancement. PMID:28848406

  6. Sequential Ideal-Observer Analysis of Visual Discriminations.

    ERIC Educational Resources Information Center

    Geisler, Wilson S.

    1989-01-01

    A new analysis, based on the concept of the ideal observer in signal detection theory, is described. It allows tracing of the flow of discrimination information through the initial physiological stages of visual processing for arbitrary spatio-chromatic stimuli, and measurement of the information content of those stimuli. (TJH)
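
    In signal-detection terms, the ideal observer for two known stimuli in additive Gaussian noise attains a d' equal to the stimulus separation divided by the noise standard deviation, and the information lost at each processing stage appears as a drop in attainable d'. The sketch below is a textbook version of that computation, not the article's specific derivation; all values are invented.

      import numpy as np

      def ideal_d_prime(stim_a, stim_b, noise_sd):
          # Ideal-observer discriminability for two known stimuli in additive
          # white Gaussian noise: their Euclidean separation over the noise SD.
          a = np.asarray(stim_a, float)
          b = np.asarray(stim_b, float)
          return np.linalg.norm(a - b) / noise_sd

      def efficiency(d_observed, d_ideal):
          # A common definition of a stage's (or observer's) efficiency is the
          # squared ratio of attained d' to ideal d'.
          return (d_observed / d_ideal) ** 2

      print(efficiency(1.2, ideal_d_prime([0.0, 1.0], [1.0, 0.0], 0.5)))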

  7. The Influences of Static and Interactive Dynamic Facial Stimuli on Visual Strategies in Persons with Asperger Syndrome

    ERIC Educational Resources Information Center

    Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn

    2011-01-01

    Several studies, using eye tracking methodology, suggest that different visual strategies in persons with autism spectrum conditions, compared with controls, are applied when viewing facial stimuli. Most eye tracking studies are, however, made in laboratory settings with either static (photos) or non-interactive dynamic stimuli, such as video…

  8. Visual Information Present in Infragranular Layers of Mouse Auditory Cortex.

    PubMed

    Morrill, Ryan J; Hasenstaub, Andrea R

    2018-03-14

    The cerebral cortex is a major hub for the convergence and integration of signals from across the sensory modalities; sensory cortices, including primary regions, are no exception. Here we show that visual stimuli influence neural firing in the auditory cortex of awake male and female mice, using multisite probes to sample single units across multiple cortical layers. We demonstrate that visual stimuli influence firing in both primary and secondary auditory cortex. We then determine the laminar location of recording sites through electrode track tracing with fluorescent dye and optogenetic identification using layer-specific markers. Spiking responses to visual stimulation occur deep in auditory cortex and are particularly prominent in layer 6. Visual modulation of firing rate occurs more frequently at areas with secondary-like auditory responses than those with primary-like responses. Auditory cortical responses to drifting visual gratings are not orientation-tuned, unlike visual cortex responses. The deepest cortical layers thus appear to be an important locus for cross-modal integration in auditory cortex. SIGNIFICANCE STATEMENT The deepest layers of the auditory cortex are often considered its most enigmatic, possessing a wide range of cell morphologies and atypical sensory responses. Here we show that, in mouse auditory cortex, these layers represent a locus of cross-modal convergence, containing many units responsive to visual stimuli. Our results suggest that this visual signal conveys the presence and timing of a stimulus rather than specifics about that stimulus, such as its orientation. These results shed light on both how and what types of cross-modal information is integrated at the earliest stages of sensory cortical processing. Copyright © 2018 the authors 0270-6474/18/382854-09$15.00/0.

  9. Stimulus Dependency of Object-Evoked Responses in Human Visual Cortex: An Inverse Problem for Category Specificity

    PubMed Central

    Graewe, Britta; De Weerd, Peter; Farivar, Reza; Castelo-Branco, Miguel

    2012-01-01

    Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200–250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components. PMID:22363479

  10. Stimulus dependency of object-evoked responses in human visual cortex: an inverse problem for category specificity.

    PubMed

    Graewe, Britta; De Weerd, Peter; Farivar, Reza; Castelo-Branco, Miguel

    2012-01-01

    Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200-250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components.

  11. Fear Conditioning in an Abdominal Pain Model: Neural Responses during Associative Learning and Extinction in Healthy Subjects

    PubMed Central

    Kattoor, Joswin; Gizewski, Elke R.; Kotsis, Vassilios; Benson, Sven; Gramsch, Carolin; Theysohn, Nina; Maderwald, Stefan; Forsting, Michael; Schedlowski, Manfred; Elsenbruch, Sigrid

    2013-01-01

    Fear conditioning is relevant for elucidating the pathophysiology of anxiety, but may also be useful in the context of chronic pain syndromes which often overlap with anxiety. Thus far, no fear conditioning studies have employed aversive visceral stimuli from the lower gastrointestinal tract. Therefore, we implemented a fear conditioning paradigm to analyze the conditioned response to rectal pain stimuli using fMRI during associative learning, extinction and reinstatement. In N = 21 healthy humans, visual conditioned stimuli (CS+) were paired with painful rectal distensions as unconditioned stimuli (US), while different visual stimuli (CS−) were presented without US. During extinction, all CSs were presented without US, whereas during reinstatement, a single, unpaired US was presented. In region-of-interest analyses, conditioned anticipatory neural activation was assessed along with perceived CS-US contingency and CS unpleasantness. Fear conditioning resulted in significant contingency awareness and valence change, i.e., learned unpleasantness of a previously neutral stimulus. This was paralleled by anticipatory activation of the anterior cingulate cortex, the somatosensory cortex and precuneus (all during early acquisition) and the amygdala (late acquisition) in response to the CS+. During extinction, anticipatory activation of the dorsolateral prefrontal cortex to the CS− was observed. In the reinstatement phase, a tendency for parahippocampal activation was found. Fear conditioning with rectal pain stimuli is feasible and leads to learned unpleasantness of previously neutral stimuli. Within the brain, conditioned anticipatory activations are seen in core areas of the central fear network including the amygdala and the anterior cingulate cortex. During extinction, conditioned responses quickly disappear, and learning of new predictive cue properties is paralleled by prefrontal activation. A tendency for parahippocampal activation during reinstatement could indicate a reactivation of the old memory trace. Together, these findings contribute to our understanding of aversive visceral learning and memory processes relevant to the pathophysiology of chronic abdominal pain. PMID:23468832

  12. Stimulus size and eccentricity in visually induced perception of horizontally translational self-motion.

    PubMed

    Nakamura, S; Shimojo, S

    1998-10-01

    The effects of the size and eccentricity of the visual stimulus upon visually induced perception of self-motion (vection) were examined with various sizes of central and peripheral visual stimulation. Analysis indicated the strength of vection increased linearly with the size of the area in which the moving pattern was presented, but there was no difference in vection strength between central and peripheral stimuli when stimulus sizes were the same. Thus, the effect of stimulus size is homogeneous across eccentricities in the visual field.

  13. Evolutionary relevance facilitates visual information processing.

    PubMed

    Jackson, Russell E; Calvillo, Dusti P

    2013-11-03

    Visual search of the environment is a fundamental human behavior that perceptual load affects powerfully. Previously investigated means for overcoming the inhibitions of high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component to evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that pose significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the highest impact of perceptual load. Evolutionary relevance may importantly affect everyday visual information processing.

  14. Increasing Valid Profiles in Phallometric Assessment of Sex Offenders with Child Victims: Combining the Strengths of Audio Stimuli and Synthetic Characters.

    PubMed

    Marschall-Lévesque, Shawn; Rouleau, Joanne-Lucine; Renaud, Patrice

    2018-02-01

    Penile plethysmography (PPG) is a measure of sexual interests that relies heavily on the stimuli it uses to generate valid results. Ethical considerations surrounding the use of real images in PPG have further limited the content admissible for these stimuli. To mitigate this limitation, the current study aimed to combine audio and visual stimuli by incorporating computer-generated characters to create new stimuli capable of accurately classifying sex offenders with child victims, while also increasing the number of valid profiles. Three modalities (audio, visual, and audiovisual) were compared using two groups (15 sex offenders with child victims and 15 non-offenders). Both the new visual and audiovisual stimuli resulted in a 13% increase in the number of valid profiles at 2.5 mm, when compared to the standard audio stimuli. Furthermore, the new audiovisual stimuli generated a 34% increase in penile responses. All three modalities were able to discriminate between the two groups by their responses to the adult and child stimuli. Lastly, sexual interest indices for all three modalities could accurately classify participants in their appropriate groups, as demonstrated by ROC curve analysis (i.e., audio AUC = .81, 95% CI [.60, 1.00]; visual AUC = .84, 95% CI [.66, 1.00], and audiovisual AUC = .83, 95% CI [.63, 1.00]). Results suggest that computer-generated characters allow accurate discrimination of sex offenders with child victims and can be added to already validated stimuli to increase the number of valid profiles. The implications of audiovisual stimuli using computer-generated characters and their possible use in PPG evaluations are also discussed.
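
    AUCs with confidence intervals of the kind reported above can be computed from raw sexual-interest indices via the Mann-Whitney formulation of the AUC plus a bootstrap. The authors' exact CI method is not stated in the abstract, and the scores below are invented, so this is only a generic sketch:

      import numpy as np

      def auc_mann_whitney(pos, neg):
          # AUC as the probability that a randomly drawn offender index exceeds
          # a randomly drawn non-offender index (ties counted as one half).
          pos, neg = np.asarray(pos, float), np.asarray(neg, float)
          greater = (pos[:, None] > neg[None, :]).sum()
          ties = (pos[:, None] == neg[None, :]).sum()
          return (greater + 0.5 * ties) / (pos.size * neg.size)

      def bootstrap_ci(pos, neg, n_boot=2000, alpha=0.05, seed=0):
          # Percentile bootstrap over groups resampled with replacement.
          rng = np.random.default_rng(seed)
          aucs = [auc_mann_whitney(rng.choice(pos, len(pos)),
                                   rng.choice(neg, len(neg)))
                  for _ in range(n_boot)]
          return np.quantile(aucs, [alpha / 2.0, 1.0 - alpha / 2.0])

      offenders = [0.90, 0.70, 0.55, 0.80, 0.65]   # invented interest indices
      controls = [0.40, 0.50, 0.35, 0.60, 0.45]
      print(auc_mann_whitney(offenders, controls), bootstrap_ci(offenders, controls))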

  15. Interactions of Top-Down and Bottom-Up Mechanisms in Human Visual Cortex

    PubMed Central

    McMains, Stephanie; Kastner, Sabine

    2011-01-01

    Multiple stimuli present in the visual field at the same time compete for neural representation by mutually suppressing their evoked activity throughout visual cortex, providing a neural correlate for the limited processing capacity of the visual system. Competitive interactions among stimuli can be counteracted by top-down, goal-directed mechanisms such as attention, and by bottom-up, stimulus-driven mechanisms. Because these two processes cooperate in everyday life to bias processing toward behaviorally relevant or particularly salient stimuli, it has proven difficult to study interactions between top-down and bottom-up mechanisms. Here, we used an experimental paradigm in which we first isolated the effects of a bottom-up influence on neural competition by parametrically varying the degree of perceptual grouping in displays that were not attended. Second, we probed the effects of directed attention on the competitive interactions induced with the parametric design. We found that the amount of attentional modulation varied linearly with the degree of competition left unresolved by bottom-up processes, such that attentional modulation was greatest when neural competition was little influenced by bottom-up mechanisms and smallest when competition was strongly influenced by bottom-up mechanisms. These findings suggest that the strength of attentional modulation in the visual system is constrained by the degree to which competitive interactions have been resolved by bottom-up processes related to the segmentation of scenes into candidate objects. PMID:21228167

  16. First- and second-order contrast sensitivity functions reveal disrupted visual processing following mild traumatic brain injury.

    PubMed

    Spiegel, Daniel P; Reynaud, Alexandre; Ruiz, Tatiana; Laguë-Beauvais, Maude; Hess, Robert; Farivar, Reza

    2016-05-01

    Vision is disrupted by traumatic brain injury (TBI), with vision-related complaints being amongst the most common in this population. Based on the neural responses of early visual cortical areas, injury to the visual cortex would be predicted to affect both first-order and second-order contrast sensitivity functions (CSFs)-the height and/or the cut-off of the CSF are expected to be affected by TBI. Previous studies have reported disruptions only in second-order contrast sensitivity, but using a narrow range of parameters and divergent methodologies-no study has characterized the effect of TBI on the full CSF for both first- and second-order stimuli. Such information is needed to properly understand the effect of TBI on contrast perception, which underlies all visual processing. Using a unified framework based on the quick contrast sensitivity function, we measured full CSFs for static and dynamic first- and second-order stimuli. Our results provide a unique dataset showing alterations in sensitivity for both first- and second-order visual stimuli. In particular, we show that TBI patients have increased sensitivity for first-order motion stimuli and decreased sensitivity to orientation-defined and contrast-defined second-order stimuli. In addition, our data suggest that TBI patients' sensitivity for both first-order stimuli and second-order contrast-defined stimuli is shifted towards higher spatial frequencies. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
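
    The record does not give its fitting details, but CSFs of this kind are commonly summarized with a log-parabola whose parameters (peak gain, peak frequency, bandwidth) are exactly the quantities a shift "towards higher spatial frequencies" changes. A minimal sketch, assuming hypothetical sensitivities and one common log-parabola parameterization:

        import numpy as np
        from scipy.optimize import curve_fit

        def log_parabola_csf(f, log_gain, log_f_peak, bandwidth):
            """log10 sensitivity as a parabola in log10 spatial frequency (a common CSF form)."""
            return log_gain - 4 * np.log10(2) * ((np.log10(f) - log_f_peak) / bandwidth) ** 2

        # Hypothetical sensitivities (1 / contrast threshold) at six spatial frequencies (cpd).
        freqs = np.array([0.5, 1, 2, 4, 8, 16])
        sens = np.array([40, 80, 120, 100, 40, 8])

        popt, _ = curve_fit(log_parabola_csf, freqs, np.log10(sens), p0=[2.0, 0.3, 1.0])
        log_gain, log_f_peak, bw = popt
        print(f"peak sensitivity ~{10**log_gain:.0f} at ~{10**log_f_peak:.1f} cpd; bandwidth {bw:.2f} log units")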

  17. Achilles' ear? Inferior human short-term and recognition memory in the auditory modality.

    PubMed

    Bigelow, James; Poremba, Amy

    2014-01-01

    Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects' retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1-4 s). However, at longer retention intervals (8-32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. In the interest of extending the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli that would likely be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required; however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices.

  18. Adaptation in human visual cortex as a mechanism for rapid discrimination of aversive stimuli.

    PubMed

    Keil, Andreas; Stolarova, Margarita; Moratti, Stephan; Ray, William J

    2007-06-01

    The ability to react rapidly and efficiently to adverse stimuli is crucial for survival. Neuroscience and behavioral studies have converged to show that visual information associated with aversive content is processed quickly and accurately and is associated with rapid amplification of the neural responses. In particular, unpleasant visual information has repeatedly been shown to evoke increased cortical activity during early visual processing between 60 and 120 ms following the onset of a stimulus. However, the nature of these early responses is not well understood. Using neutral versus unpleasant colored pictures, the current report examines the time course of short-term changes in the human visual cortex when a subject is repeatedly exposed to simple grating stimuli in a classical conditioning paradigm. We analyzed changes in amplitude and synchrony of large-scale oscillatory activity across 2 days of testing, which included baseline measurements, 2 conditioning sessions, and a final extinction session. We found a gradual increase in amplitude and synchrony of very early cortical oscillations in the 20-35 Hz range across conditioning sessions, specifically for conditioned stimuli predicting aversive visual events. This increase for conditioned stimuli affected stimulus-locked cortical oscillations at a latency of around 60-90 ms and disappeared during extinction. Our findings suggest that reorganization of neural connectivity on the level of the visual cortex acts to optimize early perception of specific features indicative of emotional relevance.
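
    One standard way to quantify the amplitude and synchrony measures described above is to band-pass the single-trial epochs in the band of interest, take the Hilbert envelope, and compute inter-trial phase coherence. The sketch below assumes hypothetical single-channel epochs and is not the study's actual pipeline.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        fs = 500                                    # sampling rate (Hz), assumed
        t = np.arange(-0.1, 0.4, 1 / fs)            # epoch from -100 to 400 ms
        rng = np.random.default_rng(2)

        # Hypothetical trials: noise plus a weak 28 Hz burst around 60-90 ms post-stimulus.
        def make_trial():
            burst = 0.8 * np.exp(-((t - 0.075) / 0.02) ** 2) * np.cos(2 * np.pi * 28 * t)
            return burst + rng.normal(0, 1, t.size)

        trials = np.array([make_trial() for _ in range(60)])

        b, a = butter(4, [20, 35], btype="bandpass", fs=fs)
        analytic = hilbert(filtfilt(b, a, trials, axis=1), axis=1)

        amplitude = np.abs(analytic).mean(axis=0)                     # mean 20-35 Hz envelope
        itpc = np.abs(np.exp(1j * np.angle(analytic)).mean(axis=0))   # inter-trial phase coherence

        win = (t >= 0.06) & (t <= 0.09)
        print(f"60-90 ms: amplitude {amplitude[win].mean():.2f}, ITPC {itpc[win].mean():.2f}")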

  19. Effects of emotional valence and three-dimensionality of visual stimuli on brain activation: an fMRI study.

    PubMed

    Dores, A R; Almeida, I; Barbosa, F; Castelo-Branco, M; Monteiro, L; Reis, M; de Sousa, L; Caldas, A Castro

    2013-01-01

    Examining changes in brain activation linked with emotion-inducing stimuli is essential to the study of emotions. Given the ecological potential of techniques such as virtual reality (VR), it is important to inspect whether brain activation in response to emotional stimuli can be modulated by the three-dimensional (3D) properties of the images. The current study sought to test whether the activation of brain areas involved in the emotional processing of scenarios of different valences can be modulated by 3D. The focus was therefore on the interaction effect between emotion-inducing stimuli of different emotional valences (pleasant, unpleasant, and neutral) and visualization types (2D, 3D), although main effects were also analyzed. The effects of emotional valence and visualization type and their interaction were analyzed through a 3 × 2 repeated measures ANOVA. Post-hoc t-tests were performed under a ROI-analysis approach. The results show increased brain activation for the 3D affective-inducing stimuli in comparison with the same stimuli in 2D scenarios, mostly in cortical and subcortical regions that are related to emotional processing, in addition to visual processing regions. This study has the potential to clarify brain mechanisms involved in the processing of emotional stimuli (scenarios' valence) and their interaction with three-dimensionality.
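
    As a rough illustration of the design above, the sketch below runs a 3 (valence) × 2 (visualization) repeated-measures ANOVA on hypothetical per-subject ROI activations; the subject count, factor labels, and data are illustrative only, not the study's.

        import numpy as np
        import pandas as pd
        from statsmodels.stats.anova import AnovaRM

        rng = np.random.default_rng(1)
        rows = []
        for subj in range(12):                       # hypothetical sample size
            for valence in ["pleasant", "unpleasant", "neutral"]:
                for view in ["2D", "3D"]:
                    # Build in a 3D boost for affective scenes to mimic an interaction.
                    boost = 0.3 if (view == "3D" and valence != "neutral") else 0.0
                    rows.append({"subject": subj, "valence": valence, "view": view,
                                 "activation": rng.normal(1.0 + boost, 0.4)})
        df = pd.DataFrame(rows)

        res = AnovaRM(df, depvar="activation", subject="subject",
                      within=["valence", "view"]).fit()
        print(res.anova_table)   # F tests for valence, view, and the valence x view interaction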

  20. Effects of Visual and Verbal Stimuli on Children's Learning of Concrete and Abstract Prose.

    ERIC Educational Resources Information Center

    Hannafin, Michael J.; Carey, James O.

    A total of 152 fourth grade students participated in a study examining the effects of visual-only, verbal-only, and combined audiovisual prose presentations and different elaboration strategy conditions on student learning of abstract and concrete prose. The students saw and/or heard a short animated story, during which they were instructed to…

  1. Communication Variables Associated with Hearing-Impaired/Vision-Impaired Persons--A Pilot-Study.

    ERIC Educational Resources Information Center

    Hicks, Wanda M.

    1979-01-01

    A study involving eight youths and adults with retinitis pigmentosa (with only a 20-degree visual field and a hearing loss of at least 20 decibels) determined variance in the ability to perceive and comprehend visual stimuli presented by way of the manual modality when modifications were made in configuration, movement speed, movement size, and…

  2. Comprehension and Production of Non-Literal Comparisons (NLC) via Visual Stimuli in Children

    ERIC Educational Resources Information Center

    Douka, Glykeria; Motsiou, Eleni; Papadopoulou, Maria

    2014-01-01

    The present study focuses on the comprehension and production of non-literal comparisons (NLC) via visual means in three age groups: kindergarten, second grade and fifth grade students. Although non-literality is a cognitive process, the educational system does not take advantage of it in pedagogy, especially before the fourth grade. The research…

  3. Exploring Antecedents of Performance Differences on Visual and Verbal Test Items: Learning Styles versus Aptitude

    ERIC Educational Resources Information Center

    Bacon, Donald R.; Hartley, Steven W.

    2015-01-01

    Many educators and researchers have suggested that some students learn more effectively with visual stimuli (e.g., pictures, graphs), whereas others learn more effectively with verbal information (e.g., text) (Felder & Brent, 2005). In two studies, the present research seeks to improve popular self-reported (indirect) learning style measures…

  4. Conditioning with compound stimuli in Drosophila melanogaster in the flight simulator.

    PubMed

    Brembs, B; Heisenberg, M

    2001-08-01

    Short-term memory in operant visual learning of Drosophila melanogaster in the flight simulator is explored using patterns and colours as a compound stimulus. Presented together during training, the two stimuli accrue the same associative strength whether or not a prior training phase rendered one of the two stimuli a stronger predictor of the reinforcer than the other (no blocking). This result adds Drosophila to the list of invertebrates that do not exhibit the robust vertebrate blocking phenomenon. Other forms of higher-order learning, however, were detected: a solid sensory preconditioning effect and a small second-order conditioning effect imply that associations between the two stimuli can be formed even if the compound is not reinforced.
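
    The blocking effect the flies failed to show follows directly from the Rescorla-Wagner model, in which all stimuli present on a trial share a single prediction error; a minimal sketch with illustrative parameters (not the study's analysis):

        def train(trials, V=None, alpha=0.3, lam=1.0):
            """Rescorla-Wagner updates: stimuli on a trial share one prediction error."""
            V = dict(V or {"A": 0.0, "B": 0.0})
            for stimuli in trials:
                error = lam - sum(V[s] for s in stimuli)
                for s in stimuli:
                    V[s] += alpha * error
            return V

        blocked = train([("A",)] * 20)                 # phase 1: A alone predicts the reinforcer
        blocked = train([("A", "B")] * 20, blocked)    # phase 2: reinforced compound AB
        control = train([("A", "B")] * 20)             # compound training only, no pretraining
        print(f"V(B) with pretraining:    {blocked['B']:.2f}")   # near 0: blocking predicted
        print(f"V(B) without pretraining: {control['B']:.2f}")   # ~0.5: strength shared equally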

  5. Audiovisual integration in hemianopia: A neurocomputational account based on cortico-collicular interaction.

    PubMed

    Magosso, Elisa; Bertini, Caterina; Cuppini, Cristiano; Ursino, Mauro

    2016-10-01

    Hemianopic patients retain some abilities to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, and preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network is able to reproduce the audiovisual phenomena in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contribution of specific neural circuits and areas via sensitivity analyses. The study suggests i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena; and ii) a different role of visual cortices in the two phenomena: auditory enhancement of conscious visual detection is conditional on surviving V1 islands, whereas visual enhancement of auditory localization persists even after complete V1 damage. The present study may help advance understanding of the audiovisual dialogue between cortical and subcortical structures in healthy and unisensory deficit conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.
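
    The core mechanism of such a model can be caricatured in a few lines: a superior colliculus unit that passes summed visual and auditory drive through a sigmoid responds strongly to a weak visual input only when a spatially coincident sound is added. The parameters below are illustrative and are not the paper's.

        import numpy as np

        def sc_response(visual, auditory, w_v=1.0, w_a=1.0, theta=1.2, slope=8.0):
            """Sigmoidal SC activation from weighted visual + auditory input."""
            net = w_v * visual + w_a * auditory
            return 1.0 / (1.0 + np.exp(-slope * (net - theta)))

        weak_visual = 0.6
        print(f"V alone: {sc_response(weak_visual, 0.0):.2f}")
        print(f"A alone: {sc_response(0.0, 0.7):.2f}")
        print(f"V + A:   {sc_response(weak_visual, 0.7):.2f}")   # superadditive enhancement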

  6. Visual processing during recovery from vegetative state to consciousness: comparing behavioral indices to brain responses.

    PubMed

    Wijnen, V J M; Eilander, H J; de Gelder, B; van Boxtel, G J M

    2014-11-01

    Auditory stimulation is often used to evoke responses in unresponsive patients who have suffered severe brain injury. In order to investigate visual responses, we examined visual evoked potentials (VEPs) and behavioral responses to visual stimuli in vegetative patients during recovery to consciousness. Behavioral responses to visual stimuli (visual localization, comprehension of written commands, and object manipulation) and flash VEPs were repeatedly examined in eleven vegetative patients every two weeks for an average period of 2.6 months, and patients' VEPs were compared to a healthy control group. Long-term outcome of the patients was assessed 2-3 years later. Visual response scores increased during recovery to consciousness for all scales: visual localization, comprehension of written commands, and object manipulation. VEP amplitudes were smaller, and latencies were longer, in the patient group relative to the controls. VEP characteristics at the first measurement were related to long-term outcome up to three years after injury. Our findings show the improvement of visual responding with recovery from the vegetative state to consciousness. Elementary visual processing is present, yet according to VEP responses it is poorer in the vegetative and minimally conscious states than in healthy controls, and it remains poorer once patients have recovered to consciousness. Initial VEPs, however, are related to long-term outcome. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  7. Measuring temporal summation in visual detection with a single-photon source.

    PubMed

    Holmes, Rebecca; Victora, Michelle; Wang, Ranxiao Frances; Kwiat, Paul G

    2017-11-01

    Temporal summation is an important feature of the visual system, which combines visual signals that arrive at different times. Previous research estimated complete summation to last for 100 ms for stimuli judged "just detectable." We measured the full range of temporal summation for much weaker stimuli using a new paradigm and a novel light source, developed in the field of quantum optics, for generating small numbers of photons with precise timing characteristics and reduced variance in photon number. Dark-adapted participants judged whether a light was presented to the left or right of their fixation in each trial. In Experiment 1, stimuli contained a stream of photons delivered at a constant rate while the duration was systematically varied. Accuracy should increase with duration as long as the later photons can be integrated with the preceding ones into a single signal. The temporal integration window was estimated as the point at which performance no longer improved, and was found to be 650 ms on average. In Experiment 2, the duration of the visual stimuli was kept short (100 ms or <30 ms) while the number of photons was varied, to explore the efficiency of summation over the integration window compared to Experiment 1. There was some indication that temporal summation remains efficient over the integration window, although there is variation between individuals. The relatively long integration window measured in this study may be relevant to studies of the absolute visual threshold, i.e., tests of single-photon vision, where "single" photons should be separated by more than the integration window to avoid summation. Copyright © 2017 Elsevier Ltd. All rights reserved.
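
    Estimating the integration window as "the point at which performance no longer improves" can be done, for example, with a hinged two-segment fit of accuracy against duration; a sketch with hypothetical data, not the study's fitting procedure:

        import numpy as np
        from scipy.optimize import curve_fit

        def hinge(d, acc0, slope, d_break):
            """Accuracy rises linearly with duration up to d_break, then stays flat."""
            return acc0 + slope * np.minimum(d, d_break)

        durations = np.array([50, 100, 200, 400, 650, 900, 1200])        # ms
        accuracy = np.array([0.55, 0.58, 0.63, 0.70, 0.76, 0.77, 0.76])  # hypothetical

        popt, _ = curve_fit(hinge, durations, accuracy, p0=[0.5, 3e-4, 500])
        print(f"estimated integration window ~{popt[2]:.0f} ms")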

  8. Anticipatory versus reactive spatial attentional bias to threat.

    PubMed

    Gladwin, Thomas E; Möbius, Martin; McLoughlin, Shane; Tyndall, Ian

    2018-05-10

    Dot-probe or visual probe tasks (VPTs) are used extensively to measure attentional biases. A novel variant, termed the cued VPT (cVPT), was developed to focus on the anticipatory component of attentional bias. This study aimed to establish an anticipatory attentional bias to threat using the cVPT and to compare its split-half reliability with that of a typical dot-probe task. A total of 120 students performed the cVPT and dot-probe tasks. Essentially, the cVPT uses cues that predict the location of pictorial threatening stimuli, but on trials on which probe stimuli are presented the pictures do not appear; hence, actual presentation of emotional stimuli did not affect responses. The reliability of the cVPT was higher at most cue-stimulus intervals and was .56 overall. A clear anticipatory attentional bias was found. In conclusion, the cVPT may be of methodological and theoretical interest. Using visually neutral predictive cues may remove sources of noise that negatively impact reliability. Predictive cues are able to bias response selection, suggesting a role of predicted outcomes in automatic processes. © 2018 The British Psychological Society.
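
    Split-half reliability of this kind is typically computed by correlating odd- and even-trial bias scores and applying the Spearman-Brown correction; a minimal sketch with hypothetical trial data (the record does not state its exact splitting scheme):

        import numpy as np

        rng = np.random.default_rng(3)
        n_subj, n_trials = 120, 80
        # Hypothetical per-trial bias scores (e.g., incongruent minus congruent RTs, ms),
        # with a stable per-subject bias plus trial-level noise.
        trial_scores = rng.normal(10, 50, (n_subj, n_trials)) + rng.normal(0, 15, (n_subj, 1))

        odd = trial_scores[:, 0::2].mean(axis=1)
        even = trial_scores[:, 1::2].mean(axis=1)
        r = np.corrcoef(odd, even)[0, 1]
        spearman_brown = 2 * r / (1 + r)    # reliability of the full-length test
        print(f"split-half r = {r:.2f}, Spearman-Brown corrected = {spearman_brown:.2f}")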

  9. Influence of Visual Motion, Suggestion, and Illusory Motion on Self-Motion Perception in the Horizontal Plane.

    PubMed

    Rosenblatt, Steven David; Crane, Benjamin Thomas

    2015-01-01

    A moving visual field can induce the feeling of self-motion, or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear whether such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing set visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory motion controls were altered versions of the experimental image that removed the illusory motion effect. For the moving visual stimulus, controls were carried out in a dark room; for the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8 s viewing interval, with the inertial stimulus occurring over the final 1 s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1 s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess shifts in self-motion perception, the effect of each visual stimulus was measured as a change in the inertial stimulus velocity (cm/s) at which subjects were equally likely to report motion in either direction. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p < 0.001), and for arrows (p = 0.02). For the visual motion stimuli, inertial motion perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion perception, driven by a minority of subjects. There was no significant effect of illusory motion on self-motion perception for either translation or rotation (p > 0.1 for both). Thus, although a true moving visual field can induce self-motion, the results of this study show that illusory motion does not.
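
    The measure described, the velocity at which subjects are equally likely to report either direction, is a point of subjective equality (PSE); one common way to obtain it is to fit a cumulative-Gaussian psychometric function, sketched below with hypothetical data:

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        def psychometric(v, pse, sigma):
            """Probability of reporting rightward motion as a cumulative Gaussian of velocity."""
            return norm.cdf(v, loc=pse, scale=sigma)

        velocities = np.array([-4.0, -2.0, -1.0, 0.0, 1.0, 2.0, 4.0])   # inertial stimulus, cm/s
        p_right = np.array([0.05, 0.15, 0.35, 0.60, 0.80, 0.95, 1.00])  # hypothetical responses

        popt, _ = curve_fit(psychometric, velocities, p_right, p0=[0.0, 2.0])
        print(f"PSE = {popt[0]:.2f} cm/s")  # a visually induced bias appears as a shift of this point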

  10. Is improved contrast sensitivity a natural consequence of visual training?

    PubMed Central

    Levi, Aaron; Shaked, Danielle; Tadin, Duje; Huxlin, Krystel R.

    2015-01-01

    Many studies have shown that training and testing conditions modulate specificity of visual learning to trained stimuli and tasks. In visually impaired populations, generalizability of visual learning to untrained stimuli/tasks is almost always reported, with contrast sensitivity (CS) featuring prominently among these collaterally-improved functions. To understand factors underlying this difference, we measured CS for direction and orientation discrimination in the visual periphery of three groups of visually-intact subjects. Group 1 trained on an orientation discrimination task with static Gabors whose luminance contrast was decreased as performance improved. Group 2 trained on a global direction discrimination task using high-contrast random dot stimuli previously used to recover motion perception in cortically blind patients. Group 3 underwent no training. Both forms of training improved CS with some degree of specificity for basic attributes of the trained stimulus/task. Group 1's largest enhancement was in CS around the trained spatial/temporal frequencies; similarly, Group 2's largest improvements occurred in CS for discriminating moving and flickering stimuli. Group 3 saw no significant CS changes. These results indicate that CS improvements may be a natural consequence of multiple forms of visual training in visually intact humans, albeit with some specificity to the trained visual domain(s). PMID:26305736

  11. Characteristics of implicit chaining in cotton-top tamarins (Saguinus oedipus).

    PubMed

    Locurto, Charles; Gagne, Matthew; Nutile, Lauren

    2010-07-01

    In human cognition there has been considerable interest in observing the conditions under which subjects learn material without explicit instructions to learn. In the present experiments, we adapted this issue to nonhumans by asking what subjects learn in the absence of explicit reinforcement for correct responses. Two experiments examined the acquisition of sequence information by cotton-top tamarins (Saguinus oedipus) when such learning was not demanded by the experimental contingencies. An implicit chaining procedure was used in which visual stimuli were presented serially on a touchscreen. Subjects were required to touch one stimulus to advance to the next stimulus. Stimulus presentations followed a pattern, but learning the pattern was not necessary for reinforcement. In Experiment 1 the chain consisted of five different visual stimuli that were presented in the same order on each trial. Each stimulus could occur at any one of six touchscreen positions. In Experiment 2 the same visual element was presented serially in the same five locations on each trial, thereby allowing a behavioral pattern to be correlated with the visual pattern. In this experiment two new tests, a Wild-Card test and a Running-Start test, were used to assess what was learned in this procedure. Results from both experiments indicated that tamarins acquired more information from an implicit chain than was required by the contingencies of reinforcement. These results contribute to the developing literature on nonhuman analogs of implicit learning.

  12. Global/local processing of hierarchical visual stimuli in a conflict-choice task by capuchin monkeys (Sapajus spp.).

    PubMed

    Truppa, Valentina; Carducci, Paola; De Simone, Diego Antonio; Bisazza, Angelo; De Lillo, Carlo

    2017-03-01

    In the last two decades, comparative research has addressed the issue of how the global and local levels of structure of visual stimuli are processed by different species, using Navon-type hierarchical figures, i.e., smaller local elements that form larger global configurations. Determining whether or not the variety of procedures adopted to test different species with hierarchical figures are equivalent is of crucial importance to ensure comparability of results. Among non-human species, global/local processing has been extensively studied in tufted capuchin monkeys using matching-to-sample tasks with hierarchical patterns. Local dominance has emerged consistently in these New World primates. In the present study, we assessed capuchins' processing of hierarchical stimuli with a method frequently adopted in studies of global/local processing in non-primate species: the conflict-choice task. Unlike the matching-to-sample procedure, this task involved processing local and global information retained in long-term memory. Capuchins were trained to discriminate between consistent hierarchical stimuli (similar global and local shape) and were then tested with inconsistent hierarchical stimuli (different global and local shapes). We found that capuchins preferred the hierarchical stimuli featuring the correct local elements rather than those with the correct global configuration. This finding confirms that capuchins' local dominance, typically observed using matching-to-sample procedures, is also expressed as a local preference in the conflict-choice task. Our study adds to the growing body of comparative studies on visual grouping functions by demonstrating that the methods most frequently used in the literature on global/local processing produce analogous results irrespective of the extent to which memory processes are involved.

  13. Noise-Induced Entrainment and Stochastic Resonance in Human Brain Waves

    NASA Astrophysics Data System (ADS)

    Mori, Toshio; Kai, Shoichi

    2002-05-01

    We present the first observation of stochastic resonance (SR) in the human brain's visual processing area. The novel experimental protocol is to stimulate the right eye with a subthreshold periodic optical signal and the left eye with a noisy one. Because the signal and the noise enter through different eyes, they are mixed not at the sensory organs but within the brain's visual processing area. With many noise sources present in the brain, higher brain functions, e.g., perception and cognition, may exploit SR.
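
    The SR effect itself is easy to reproduce with a toy threshold detector: a subthreshold sinusoid plus noise, passed through a hard threshold, yields output power at the signal frequency that typically peaks at an intermediate noise level. The simulation below is purely illustrative and is not the study's model.

        import numpy as np

        fs, f_sig, threshold = 1000, 5.0, 1.0
        t = np.arange(0, 20, 1 / fs)
        signal = 0.6 * np.sin(2 * np.pi * f_sig * t)   # subthreshold: never crosses alone
        rng = np.random.default_rng(4)

        for noise_sd in [0.1, 0.4, 1.0, 3.0]:
            out = ((signal + rng.normal(0, noise_sd, t.size)) > threshold).astype(float)
            spectrum = np.abs(np.fft.rfft(out - out.mean())) ** 2
            freqs = np.fft.rfftfreq(t.size, 1 / fs)
            power_at_sig = spectrum[np.argmin(np.abs(freqs - f_sig))]
            snr = power_at_sig / (np.median(spectrum) + 1e-12)  # crude SNR at the signal frequency
            print(f"noise sd {noise_sd:3.1f}: SNR at {f_sig:.0f} Hz ~ {snr:10.1f}")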

  14. ERP manifestations of processing printed words at different psycholinguistic levels: time course and scalp distribution.

    PubMed

    Bentin, S; Mouchetant-Rostaing, Y; Giard, M H; Echallier, J F; Pernier, J

    1999-05-01

    The aim of the present study was to examine the time course and scalp distribution of electrophysiological manifestations of the visual word recognition mechanism. Event-related potentials (ERPs) elicited by visually presented lists of words were recorded while subjects were involved in a series of oddball tasks. The distinction between the designated target and nontarget stimuli was manipulated to induce a different level of processing in each session (visual, phonological/phonetic, phonological/lexical, and semantic). The ERPs of main interest in this study were those elicited by nontarget stimuli. In the visual task the targets were twice as big as the nontargets. Words, pseudowords, strings of consonants, strings of alphanumeric symbols, and strings of forms elicited a sharp negative peak at 170 msec (N170), whose distribution was limited to the occipito-temporal sites. At left hemisphere electrode sites the N170 was larger for orthographic than for nonorthographic stimuli, and vice versa for the right hemisphere. The ERPs elicited by all orthographic stimuli formed a clearly distinct cluster that was different from the ERPs elicited by nonorthographic stimuli. In the phonological/phonetic decision task the targets were words and pseudowords rhyming with the French word vitrail, whereas the nontargets were words, pseudowords, and strings of consonants that did not rhyme with vitrail. The most conspicuous potential was a negative peak at 320 msec (N320), which was elicited by pronounceable but not by nonpronounceable stimuli. The N320 was bilaterally distributed over the middle temporal lobe and was significantly larger over the left than over the right hemisphere. In the phonological/lexical processing task we compared the ERPs elicited by strings of consonants (among which words were selected), pseudowords (among which words were selected), and words (among which pseudowords were selected). The most conspicuous potential in these tasks was a negative potential peaking at 350 msec (N350), elicited by phonologically legal but not by phonologically illegal stimuli. The distribution of the N350 was similar to that of the N320, but it was broader, including temporo-parietal areas that were not activated in the "rhyme" task. Finally, in the semantic task the targets were abstract words, and the nontargets were concrete words, pseudowords, and strings of consonants. The negative potential in this task peaked at 450 msec. Unlike the lexical decision, the negative peak in this task significantly distinguished not only between phonologically legal and illegal stimuli but also between meaningful (words) and meaningless (pseudowords) phonologically legal structures. The distribution of the N450 included the areas activated in the lexical decision task but also areas in the fronto-central regions. The present data corroborated the functional neuroanatomy of word recognition systems suggested by other neuroimaging methods and described their time course, supporting a cascade-type process that involves different but interconnected neural modules, each responsible for a different level of processing word-related information.
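
    Components such as the N170, N320, N350, and N450 are conventionally quantified as the most negative peak of the averaged waveform within an a-priori latency window; a minimal sketch with a hypothetical waveform (not the study's pipeline):

        import numpy as np

        fs = 250                                      # sampling rate (Hz), assumed
        t = np.arange(-0.1, 0.6, 1 / fs)
        rng = np.random.default_rng(6)
        # Hypothetical averaged ERP with a negative deflection near 170 ms.
        erp = -2.5 * np.exp(-((t - 0.17) / 0.03) ** 2) + rng.normal(0, 0.2, t.size)

        win = (t >= 0.15) & (t <= 0.20)               # a-priori N170 search window
        i = np.argmin(erp[win])
        print(f"N170 peak: {erp[win][i]:.2f} uV at {t[win][i] * 1000:.0f} ms")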

  15. Virtual reality stimuli for force platform posturography.

    PubMed

    Tossavainen, Timo; Juhola, Martti; Ilmari, Pyykö; Aalto, Heikki; Toppila, Esko

    2002-01-01

    People who rely heavily on vision in the control of posture are known to have an elevated risk of falling. Dependence on visual control is an important parameter in the diagnosis of balance disorders. We have previously shown that virtual reality methods can be used to produce visual stimuli that affect balance, but suitable stimuli need to be found. In this study the effect of six different virtual reality stimuli on the balance of 22 healthy test subjects was evaluated using force platform posturography. According to the tests, two of the stimuli had a significant effect on balance.
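
    Force platform posturography typically reduces the center-of-pressure (COP) trace to summary sway measures; the sketch below computes common examples (path length, mean velocity, a crude sway-area proxy) from a hypothetical COP trace, since the record does not state which parameters were used.

        import numpy as np

        fs = 100                                      # platform sampling rate (Hz), assumed
        rng = np.random.default_rng(5)
        # Hypothetical 30 s COP trace in cm: slow drift in mediolateral (x) and anteroposterior (y).
        cop = np.cumsum(rng.normal(0, 0.02, (30 * fs, 2)), axis=0)

        path_length = np.sum(np.linalg.norm(np.diff(cop, axis=0), axis=1))   # total sway path (cm)
        mean_velocity = path_length / 30.0                                   # cm/s
        sway_area = np.pi * np.prod(2 * cop.std(axis=0))                     # crude elliptical proxy (cm^2)

        print(f"path length {path_length:.1f} cm, mean velocity {mean_velocity:.2f} cm/s, "
              f"sway area ~{sway_area:.1f} cm^2")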

  16. Degradation of labial information modifies audiovisual speech perception in cochlear-implanted children.

    PubMed

    Huyse, Aurélie; Berthommier, Frédéric; Leybaert, Jacqueline

    2013-01-01

    The aim of the present study was to examine audiovisual speech integration in cochlear-implanted children and in normally hearing children exposed to degraded auditory stimuli. Previous studies have shown that speech perception in cochlear-implant users is biased toward the visual modality when audition and vision provide conflicting information. Our main question was whether an experimentally designed degradation of the visual speech cue would increase the importance of audition in the response pattern. The impact of auditory proficiency was also investigated. A group of 31 children with cochlear implants and a group of 31 normally hearing children matched for chronological age were recruited. All children with cochlear implants had profound congenital deafness and had used their implants for at least 2 years. Participants had to perform an /aCa/ consonant-identification task in which stimuli were presented randomly in three conditions: auditory only, visual only, and audiovisual (congruent and incongruent McGurk stimuli). In half of the experiment the visual speech cue was normal; in the other half (visual reduction) a degraded visual signal was presented, aimed at preventing high-quality lipreading. The normally hearing children received a spectrally reduced speech signal (simulating the input delivered by the cochlear implant). First, performance in the visual-only and congruent audiovisual conditions was decreased, showing that the visual reduction technique used here was effective at degrading lipreading. Second, in the incongruent audiovisual trials, visual reduction led to a major increase in the number of auditory-based responses in both groups. Differences between proficient and nonproficient children were found in both groups, with nonproficient children's responses being more visual and less auditory than those of proficient children. Further analysis revealed that differences between visually clear and visually reduced conditions and between groups were due not only to differences in unisensory perception but also to differences in the process of audiovisual integration per se. Visual reduction led to an increase in the weight of audition, even in cochlear-implanted children, whose perception is generally dominated by vision. This result suggests that the natural bias in favor of vision is not immutable. Audiovisual speech integration partly depends on the experimental situation, which modulates the informational content of the sensory channels and the weight that is awarded to each of them. Consequently, participants, whether deaf with cochlear implants or normally hearing, not only base their perception on the most reliable modality but also award it an additional weight.

  17. Neural Basis of Visual Attentional Orienting in Childhood Autism Spectrum Disorders.

    PubMed

    Murphy, Eric R; Norr, Megan; Strang, John F; Kenworthy, Lauren; Gaillard, William D; Vaidya, Chandan J

    2017-01-01

    We examined spontaneous attention orienting to visual salience in stimuli without social significance using a modified Dot-Probe task during functional magnetic resonance imaging in high-functioning preadolescent children with Autism Spectrum Disorder (ASD) and age- and IQ-matched control children. While the magnitude of attentional bias (faster response to probes in the location of solid color patch) to visually salient stimuli was similar in the groups, activation differences in frontal and temporoparietal regions suggested hyper-sensitivity to visual salience or to sameness in ASD children. Further, activation in a subset of those regions was associated with symptoms of restricted and repetitive behavior. Thus, atypicalities in response to visual properties of stimuli may drive attentional orienting problems associated with ASD.

  18. Stress Sensitive Healthy Females Show Less Left Amygdala Activation in Response to Withdrawal-Related Visual Stimuli under Passive Viewing Conditions

    ERIC Educational Resources Information Center

    Baeken, Chris; Van Schuerbeek, Peter; De Raedt, Rudi; Vanderhasselt, Marie-Anne; De Mey, Johan; Bossuyt, Axel; Luypaert, Robert

    2012-01-01

    The amygdalae are key players in the processing of a variety of emotional stimuli. Especially aversive visual stimuli have been reported to attract attention and activate the amygdalae. However, as it has been argued that passively viewing withdrawal-related images could attenuate instead of activate amygdalae neuronal responses, its role under…

  19. The Impact of Semantic Relevance and Heterogeneity of Pictorial Stimuli on Individual Brainstorming: An Extension of the SIAM Model

    ERIC Educational Resources Information Center

    Guo, Jing; McLeod, Poppy Lauretta

    2014-01-01

    Drawing upon the Search for Ideas in Associative Memory (SIAM) model as the theoretical framework, the impact of heterogeneity and topic relevance of visual stimuli on ideation performance was examined. Results from a laboratory experiment showed that visual stimuli increased productivity and diversity of idea generation, that relevance to the…

  20. 76 FR 10564 - Takes of Marine Mammals Incidental to Specified Activities; St. George Reef Light Station...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-25

    .... Acoustic and visual stimuli generated by: (1) Helicopter landings/takeoffs; (2) noise generated during... minimize acoustic and visual disturbances) as described in NMFS' December 22, 2010 (75 FR 80471) notice of... Activity on Marine Mammals Acoustic and visual stimuli generated by: (1) Helicopter landings/ takeoffs; (2...
