Sample records for auditory category task

  1. Incidental Auditory Category Learning

    PubMed Central

    Gabay, Yafit; Dick, Frederic K.; Zevin, Jason D.; Holt, Lori L.

    2015-01-01

    Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in one of four possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from one of four distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning. PMID:26010588

  2. Incidental category learning and cognitive load in a multisensory environment across childhood.

    PubMed

    Broadbent, H J; Osborne, T; Rea, M; Peng, A; Mareschal, D; Kirkham, N Z

    2018-06-01

    Multisensory information has been shown to facilitate learning (Bahrick & Lickliter, 2000; Broadbent, White, Mareschal, & Kirkham, 2017; Jordan & Baker, 2011; Shams & Seitz, 2008). However, although research has examined the modulating effect of unisensory and multisensory distractors on multisensory processing, the extent to which a concurrent unisensory or multisensory cognitive load task would interfere with or support multisensory learning remains unclear. This study examined the role of concurrent task modality on incidental category learning in 6- to 10-year-olds. Participants were engaged in a multisensory learning task while also performing either a unisensory (visual or auditory only) or multisensory (audiovisual) concurrent task (CT). We found that engaging in an auditory CT led to poorer performance on incidental category learning compared with an audiovisual or visual CT, across groups. In 6-year-olds, category test performance was at chance in the auditory-only CT condition, suggesting auditory concurrent tasks may interfere with learning in younger children, but the addition of visual information may serve to focus attention. These findings provide novel insight into the use of multisensory concurrent information on incidental learning. Implications for the deployment of multisensory learning tasks within education across development and developmental changes in modality dominance and ability to switch flexibly across modalities are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  3. Training Humans to Categorize Monkey Calls: Auditory Feature- and Category-Selective Neural Tuning Changes.

    PubMed

    Jiang, Xiong; Chevillet, Mark A; Rauschecker, Josef P; Riesenhuber, Maximilian

    2018-04-18

    Grouping auditory stimuli into common categories is essential for a variety of auditory tasks, including speech recognition. We trained human participants to categorize auditory stimuli from a large novel set of morphed monkey vocalizations. Using fMRI-rapid adaptation (fMRI-RA) and multi-voxel pattern analysis (MVPA) techniques, we gained evidence that categorization training results in two distinct sets of changes: sharpened tuning to monkey call features (without explicit category representation) in left auditory cortex and category selectivity for different types of calls in lateral prefrontal cortex. In addition, the sharpness of neural selectivity in left auditory cortex, as estimated with both fMRI-RA and MVPA, predicted the steepness of the categorical boundary, whereas categorical judgment correlated with release from adaptation in the left inferior frontal gyrus. These results support the theory that auditory category learning follows a two-stage model analogous to the visual domain, suggesting general principles of perceptual category learning in the human brain. Copyright © 2018 Elsevier Inc. All rights reserved.

  4. Auditory-visual object recognition time suggests specific processing for animal sounds.

    PubMed

    Suied, Clara; Viaud-Delmon, Isabelle

    2009-01-01

    Recognizing an object requires binding together several cues, which may be distributed across different sensory modalities, and ignoring competing information originating from other objects. In addition, knowledge of the semantic category of an object is fundamental to determine how we should react to it. Here we investigate the role of semantic categories in the processing of auditory-visual objects. We used an auditory-visual object-recognition task (go/no-go paradigm). We compared recognition times for two categories: a biologically relevant one (animals) and a non-biologically relevant one (means of transport). Participants were asked to react as fast as possible to target objects, presented in the visual and/or the auditory modality, and to withhold their response for distractor objects. A first main finding was that, when participants were presented with unimodal or bimodal congruent stimuli (an image and a sound from the same object), similar reaction times were observed for all object categories. Thus, there was no advantage in the speed of recognition for biologically relevant compared to non-biologically relevant objects. A second finding was that, in the presence of a biologically relevant auditory distractor, the processing of a target object was slowed down, whether or not it was itself biologically relevant. It seems impossible to effectively ignore an animal sound, even when it is irrelevant to the task. These results suggest a specific and mandatory processing of animal sounds, possibly due to phylogenetic memory and consistent with the idea that hearing is particularly efficient as an alerting sense. They also highlight the importance of taking into account the auditory modality when investigating the way object concepts of biologically relevant categories are stored and retrieved.

  5. Multivariate sensitivity to voice during auditory categorization.

    PubMed

    Lee, Yune Sang; Peelle, Jonathan E; Kraemer, David; Lloyd, Samuel; Granger, Richard

    2015-09-01

    Past neuroimaging studies have documented discrete regions of human temporal cortex that are more strongly activated by conspecific voice sounds than by nonvoice sounds. However, the mechanisms underlying this voice sensitivity remain unclear. In the present functional MRI study, we took a novel approach to examining voice sensitivity, in which we applied a signal detection paradigm to the assessment of multivariate pattern classification among several living and nonliving categories of auditory stimuli. Within this framework, voice sensitivity can be interpreted as a distinct neural representation of brain activity that correctly distinguishes human vocalizations from other auditory object categories. Across a series of auditory categorization tests, we found that bilateral superior and middle temporal cortex consistently exhibited robust sensitivity to human vocal sounds. Although the strongest categorization was in distinguishing human voice from other categories, subsets of these regions were also able to distinguish reliably between nonhuman categories, suggesting a general role in auditory object categorization. Our findings complement the current evidence of cortical sensitivity to human vocal sounds by revealing that the greatest sensitivity during categorization tasks is devoted to distinguishing voice from nonvoice categories within human temporal cortex. Copyright © 2015 the American Physiological Society.

  6. The Role of Age and Executive Function in Auditory Category Learning

    PubMed Central

    Reetzke, Rachel; Maddox, W. Todd; Chandrasekaran, Bharath

    2015-01-01

    Auditory categorization is a natural and adaptive process that allows for the organization of high-dimensional, continuous acoustic information into discrete representations. Studies in the visual domain have identified a rule-based learning system that learns and reasons via a hypothesis-testing process that requires working memory and executive attention. The rule-based learning system in vision shows a protracted development, reflecting the influence of maturing prefrontal function on visual categorization. The aim of the current study is two-fold: (a) to examine the developmental trajectory of rule-based auditory category learning from childhood through adolescence, into early adulthood; and (b) to examine the extent to which individual differences in rule-based category learning relate to individual differences in executive function. Sixty participants with normal hearing, 20 children (age range, 7–12), 21 adolescents (age range, 13–19), and 19 young adults (age range, 20–23), learned to categorize novel dynamic ripple sounds using trial-by-trial feedback. The spectrotemporally modulated ripple sounds are considered the auditory equivalent of the well-studied Gabor patches in the visual domain. Results revealed that auditory categorization accuracy improved with age, with young adults outperforming children and adolescents. Computational modeling analyses indicated that the use of the task-optimal strategy (i.e. a conjunctive rule-based learning strategy) improved with age. Notably, individual differences in executive flexibility significantly predicted auditory category learning success. The current findings demonstrate a protracted development of rule-based auditory categorization. The results further suggest that executive flexibility coupled with perceptual processes play important roles in successful rule-based auditory category learning. PMID:26491987

  7. Using Eye Movement Analysis to Study Auditory Effects on Visual Memory Recall

    PubMed Central

    Marandi, Ramtin Zargari; Sabzpoushan, Seyed Hojjat

    2014-01-01

Recent studies in affective computing have focused on sensing human cognitive context using biosignals. In this study, electrooculography (EOG) was used to investigate memory recall accessibility via eye movement patterns. Twelve subjects participated in our experiment, in which pictures from four categories were presented. Each category contained nine pictures, of which three were presented twice and the rest only once. Each picture was presented for five seconds, followed by a three-second interval. The task was then repeated with new pictures accompanied by related sounds. Viewing was free, and participants were not informed of the task's purpose. Using pattern recognition techniques, participants’ EOG signals in response to repeated and non-repeated pictures were classified for the with-sound and without-sound stages. The method was validated with eight additional participants. The recognition rate in the “with sound” stage was significantly reduced compared with the “without sound” stage. The results demonstrate that familiarity with visual-auditory stimuli can be detected from EOG signals and that auditory input potentially affects the visual recall process. PMID:25436085

  8. Auditory working memory predicts individual differences in absolute pitch learning.

    PubMed

    Van Hedger, Stephen C; Heald, Shannon L M; Koch, Rachelle; Nusbaum, Howard C

    2015-07-01

Absolute pitch (AP) is typically defined as the ability to label an isolated tone as a musical note in the absence of a reference tone. At first glance the acquisition of AP note categories seems like a perceptual learning task, since individuals must assign a category label to a stimulus based on a single perceptual dimension (pitch) while ignoring other perceptual dimensions (e.g., loudness, octave, instrument). AP, however, is rarely discussed in terms of domain-general perceptual learning mechanisms. This is because AP is typically assumed to depend on a critical period of development, in which early exposure to pitches and musical labels is thought to be necessary for the development of AP, precluding the possibility of adult acquisition of AP. Despite this view of AP, several previous studies have found evidence that absolute pitch category learning is, to an extent, trainable in a post-critical-period adult population, even if the performance typically achieved by this population falls below that of a "true" AP possessor. The current studies attempt to understand individual differences in learning to categorize notes using absolute pitch cues by testing a specific prediction regarding cognitive capacity related to categorization: to what extent does an individual's general auditory working memory capacity (WMC) predict the success of absolute pitch category acquisition? Since WMC has been shown to predict performance on a wide variety of other perceptual and category learning tasks, we predicted that individuals with higher WMC would be better at learning absolute pitch note categories than individuals with lower WMC. Across two studies, we demonstrate that auditory WMC predicts the efficacy of learning absolute pitch note categories. These results suggest that higher general auditory WMC might underlie the formation of absolute pitch categories in post-critical-period adults. Implications for understanding the mechanisms that underlie the phenomenon of AP are also discussed. Copyright © 2015. Published by Elsevier B.V.

  9. Crossmodal integration enhances neural representation of task-relevant features in audiovisual face perception.

    PubMed

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Liu, Yongjian; Liang, Changhong; Sun, Pei

    2015-02-01

Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). Furthermore, it has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we considered crossmodal integration in audiovisual face perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual/auditory-only stimuli were created from these movie clips by removing the auditory/visual contents. The subjects needed to make a judgment about the gender/emotion category for each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition as functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using the decoding accuracy and the brain pattern-related reproducibility indices, obtained by a multivariate pattern analysis method from the fMRI data. In comparison to the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention might play a modulatory role in audiovisual integration. © The Author 2013. Published by Oxford University Press. All rights reserved.

  10. Auditory object salience: human cortical processing of non-biological action sounds and their acoustic signal attributes

    PubMed Central

    Lewis, James W.; Talkington, William J.; Tallaksen, Katherine C.; Frum, Chris A.

    2012-01-01

Whether viewed or heard, an object in action can be segmented as a distinct salient event based on a number of different sensory cues. In the visual system, several low-level attributes of an image are processed along parallel hierarchies, involving intermediate stages wherein gross-level object form and/or motion features are extracted prior to stages that show greater specificity for different object categories (e.g., people, buildings, or tools). In the auditory system, though relying on a rather different set of low-level signal attributes, meaningful real-world acoustic events and “auditory objects” can also be readily distinguished from background scenes. However, the nature of the acoustic signal attributes or gross-level perceptual features that may be explicitly processed along intermediate cortical processing stages remains poorly understood. Examining mechanical and environmental action sounds, representing two distinct non-biological categories of action sources, we had participants assess the degree to which each sound was perceived as object-like versus scene-like. We re-analyzed data from two of our earlier functional magnetic resonance imaging (fMRI) task paradigms (Engel et al., 2009) and found that scene-like action sounds preferentially led to activation along several midline cortical structures, but with strong dependence on listening task demands. In contrast, bilateral foci along the superior temporal gyri (STG) showed parametrically increasing activation to action sounds rated as more “object-like,” independent of sound category or task demands. Moreover, these STG regions also showed parametric sensitivity to spectral structure variations (SSVs) of the action sounds—a quantitative measure of change in entropy of the acoustic signals over time—and the right STG additionally showed parametric sensitivity to measures of mean entropy and harmonic content of the environmental sounds. Analogous to the visual system, intermediate stages of the auditory system appear to process or extract a number of quantifiable low-order signal attributes that are characteristic of action events perceived as being object-like, representing stages that may begin to dissociate different perceptual dimensions and categories of everyday, real-world action sounds. PMID:22582038

  11. Distributional Learning of Lexical Tones: A Comparison of Attended vs. Unattended Listening.

    PubMed

    Ong, Jia Hoong; Burnham, Denis; Escudero, Paola

    2015-01-01

This study examines whether non-tone language listeners can acquire lexical tone categories distributionally and whether attention in the training phase modulates the effect of distributional learning. Native Australian English listeners were trained on a Thai lexical tone minimal pair and their performance was assessed using a discrimination task before and after training. During training, participants either heard a Unimodal distribution that would induce a single central category, which should hinder their discrimination of that minimal pair, or a Bimodal distribution that would induce two separate categories, which should facilitate their discrimination. The participants either heard the distribution passively (Experiments 1A and 1B) or performed a cover task during training designed to encourage auditory attention to the entire distribution (Experiment 2). In passive listening (Experiments 1A and 1B), results indicated no effect of distributional learning: the Bimodal group did not outperform the Unimodal group in discriminating the Thai tone minimal pairs. Moreover, both Unimodal and Bimodal groups improved above chance on most test aspects from Pretest to Posttest. However, when participants' auditory attention was encouraged using the cover task (Experiment 2), distributional learning was found: the Bimodal group outperformed the Unimodal group on a novel test syllable minimal pair at Posttest relative to Pretest. Furthermore, the Bimodal group showed above-chance improvement from Pretest to Posttest on three test aspects, while the Unimodal group only showed above-chance improvement on one test aspect. These results suggest that non-tone language listeners are able to learn lexical tones distributionally, but only when auditory attention is encouraged in the acquisition phase. This implies that distributional learning of lexical tones is more readily induced when participants attend carefully during training, presumably because they are better able to compute the relevant statistics of the distribution.

  12. Distributional Learning of Lexical Tones: A Comparison of Attended vs. Unattended Listening

    PubMed Central

    Ong, Jia Hoong; Burnham, Denis; Escudero, Paola

    2015-01-01

This study examines whether non-tone language listeners can acquire lexical tone categories distributionally and whether attention in the training phase modulates the effect of distributional learning. Native Australian English listeners were trained on a Thai lexical tone minimal pair and their performance was assessed using a discrimination task before and after training. During training, participants either heard a Unimodal distribution that would induce a single central category, which should hinder their discrimination of that minimal pair, or a Bimodal distribution that would induce two separate categories, which should facilitate their discrimination. The participants either heard the distribution passively (Experiments 1A and 1B) or performed a cover task during training designed to encourage auditory attention to the entire distribution (Experiment 2). In passive listening (Experiments 1A and 1B), results indicated no effect of distributional learning: the Bimodal group did not outperform the Unimodal group in discriminating the Thai tone minimal pairs. Moreover, both Unimodal and Bimodal groups improved above chance on most test aspects from Pretest to Posttest. However, when participants’ auditory attention was encouraged using the cover task (Experiment 2), distributional learning was found: the Bimodal group outperformed the Unimodal group on a novel test syllable minimal pair at Posttest relative to Pretest. Furthermore, the Bimodal group showed above-chance improvement from Pretest to Posttest on three test aspects, while the Unimodal group only showed above-chance improvement on one test aspect. These results suggest that non-tone language listeners are able to learn lexical tones distributionally, but only when auditory attention is encouraged in the acquisition phase. This implies that distributional learning of lexical tones is more readily induced when participants attend carefully during training, presumably because they are better able to compute the relevant statistics of the distribution. PMID:26214002

  13. How may the basal ganglia contribute to auditory categorization and speech perception?

    PubMed Central

    Lim, Sung-Joo; Fiez, Julie A.; Holt, Lori L.

    2014-01-01

    Listeners must accomplish two complementary perceptual feats in extracting a message from speech. They must discriminate linguistically-relevant acoustic variability and generalize across irrelevant variability. Said another way, they must categorize speech. Since the mapping of acoustic variability is language-specific, these categories must be learned from experience. Thus, understanding how, in general, the auditory system acquires and represents categories can inform us about the toolbox of mechanisms available to speech perception. This perspective invites consideration of findings from cognitive neuroscience literatures outside of the speech domain as a means of constraining models of speech perception. Although neurobiological models of speech perception have mainly focused on cerebral cortex, research outside the speech domain is consistent with the possibility of significant subcortical contributions in category learning. Here, we review the functional role of one such structure, the basal ganglia. We examine research from animal electrophysiology, human neuroimaging, and behavior to consider characteristics of basal ganglia processing that may be advantageous for speech category learning. We also present emerging evidence for a direct role for basal ganglia in learning auditory categories in a complex, naturalistic task intended to model the incidental manner in which speech categories are acquired. To conclude, we highlight new research questions that arise in incorporating the broader neuroscience research literature in modeling speech perception, and suggest how understanding contributions of the basal ganglia can inform attempts to optimize training protocols for learning non-native speech categories in adulthood. PMID:25136291

  14. Incidental learning of sound categories is impaired in developmental dyslexia.

    PubMed

    Gabay, Yafit; Holt, Lori L

    2015-12-01

Developmental dyslexia is commonly thought to arise from specific phonological impairments. However, recent evidence is consistent with the possibility that phonological impairments arise as symptoms of an underlying dysfunction of procedural learning. The nature of the link between impaired procedural learning and phonological dysfunction is unresolved. Motivated by the observation that speech processing involves the acquisition of procedural category knowledge, the present study investigates the possibility that procedural learning impairment may affect phonological processing by interfering with the typical course of phonetic category learning. The present study tests this hypothesis while controlling for linguistic experience and possible speech-specific deficits by comparing auditory category learning across artificial, nonlinguistic sounds among dyslexic adults and matched controls in a specialized first-person shooter videogame that has been shown to engage procedural learning. Nonspeech auditory category learning was assessed online via within-game measures and also with a post-training task involving overt categorization of familiar and novel sound exemplars. Each measure reveals that dyslexic participants do not acquire procedural category knowledge as effectively as age- and cognitive-ability matched controls. This difference cannot be explained by differences in perceptual acuity for the sounds. Moreover, poor nonspeech category learning is associated with slower phonological processing. Whereas phonological processing impairments have been emphasized as the cause of dyslexia, the current results suggest that impaired auditory category learning, general in nature and not specific to speech signals, could contribute to phonological deficits in dyslexia with subsequent negative effects on language acquisition and reading. Implications for the neuro-cognitive mechanisms of developmental dyslexia are discussed. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Incidental Learning of Sound Categories is Impaired in Developmental Dyslexia

    PubMed Central

    Gabay, Yafit; Holt, Lori L.

    2015-01-01

    Developmental dyslexia is commonly thought to arise from specific phonological impairments. However, recent evidence is consistent with the possibility that phonological impairments arise as symptoms of an underlying dysfunction of procedural learning. The nature of the link between impaired procedural learning and phonological dysfunction is unresolved. Motivated by the observation that speech processing involves the acquisition of procedural category knowledge, the present study investigates the possibility that procedural learning impairment may affect phonological processing by interfering with the typical course of phonetic category learning. The present study tests this hypothesis while controlling for linguistic experience and possible speech-specific deficits by comparing auditory category learning across artificial, nonlinguistic sounds among dyslexic adults and matched controls in a specialized first-person shooter videogame that has been shown to engage procedural learning. Nonspeech auditory category learning was assessed online via within-game measures and also with a post-training task involving overt categorization of familiar and novel sound exemplars. Each measure reveals that dyslexic participants do not acquire procedural category knowledge as effectively as age- and cognitive-ability matched controls. This difference cannot be explained by differences in perceptual acuity for the sounds. Moreover, poor nonspeech category learning is associated with slower phonological processing. Whereas phonological processing impairments have been emphasized as the cause of dyslexia, the current results suggest that impaired auditory category learning, general in nature and not specific to speech signals, could contribute to phonological deficits in dyslexia with subsequent negative effects on language acquisition and reading. Implications for the neuro-cognitive mechanisms of developmental dyslexia are discussed. PMID:26409017

  16. Learning-dependent plasticity in human auditory cortex during appetitive operant conditioning.

    PubMed

    Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M

    2013-11-01

    Animal experiments provide evidence that learning to associate an auditory stimulus with a reward causes representational changes in auditory cortex. However, most studies did not investigate the temporal formation of learning-dependent plasticity during the task but rather compared auditory cortex receptive fields before and after conditioning. We here present a functional magnetic resonance imaging study on learning-related plasticity in the human auditory cortex during operant appetitive conditioning. Participants had to learn to associate a specific category of frequency-modulated tones with a reward. Only participants who learned this association developed learning-dependent plasticity in left auditory cortex over the course of the experiment. No differential responses to reward predicting and nonreward predicting tones were found in auditory cortex in nonlearners. In addition, learners showed similar learning-induced differential responses to reward-predicting and nonreward-predicting tones in the ventral tegmental area and the nucleus accumbens, two core regions of the dopaminergic neurotransmitter system. This may indicate a dopaminergic influence on the formation of learning-dependent plasticity in auditory cortex, as it has been suggested by previous animal studies. Copyright © 2012 Wiley Periodicals, Inc.

  17. The effect of changing the secondary task in dual-task paradigms for measuring listening effort.

    PubMed

    Picou, Erin M; Ricketts, Todd A

    2014-01-01

    The purpose of this study was to evaluate the effect of changing the secondary task in dual-task paradigms that measure listening effort. Specifically, the effects of increasing the secondary task complexity or the depth of processing on a paradigm's sensitivity to changes in listening effort were quantified in a series of two experiments. Specific factors investigated within each experiment were background noise and visual cues. Participants in Experiment 1 were adults with normal hearing (mean age 23 years) and participants in Experiment 2 were adults with mild sloping to moderately severe sensorineural hearing loss (mean age 60.1 years). In both experiments, participants were tested using three dual-task paradigms. These paradigms had identical primary tasks, which were always monosyllable word recognition. The secondary tasks were all physical reaction time measures. The stimulus for the secondary task varied by paradigm and was a (1) simple visual probe, (2) a complex visual probe, or (3) the category of word presented. In this way, the secondary tasks mainly varied from the simple paradigm by either complexity or depth of speech processing. Using all three paradigms, participants were tested in four conditions, (1) auditory-only stimuli in quiet, (2) auditory-only stimuli in noise, (3) auditory-visual stimuli in quiet, and (4) auditory-visual stimuli in noise. During auditory-visual conditions, the talker's face was visible. Signal-to-noise ratios used during conditions with background noise were set individually so word recognition performance was matched in auditory-only and auditory-visual conditions. In noise, word recognition performance was approximately 80% and 65% for Experiments 1 and 2, respectively. For both experiments, word recognition performance was stable across the three paradigms, confirming that none of the secondary tasks interfered with the primary task. 
In Experiment 1 (listeners with normal hearing), analysis of median reaction times revealed a significant main effect of background noise on listening effort only with the paradigm that required deep processing. Visual cues did not change listening effort as measured with any of the three dual-task paradigms. In Experiment 2 (listeners with hearing loss), analysis of median reaction times revealed expected significant effects of background noise using all three paradigms, but no significant effects of visual cues. None of the dual-task paradigms were sensitive to the effects of visual cues. Furthermore, changing the complexity of the secondary task did not change dual-task paradigm sensitivity to the effects of background noise on listening effort for either group of listeners. However, the paradigm whose secondary task involved deeper processing was more sensitive to the effects of background noise for both groups of listeners. While this paradigm differed from the others in several respects, depth of processing may be partially responsible for the increased sensitivity. Therefore, this paradigm may be a valuable tool for evaluating other factors that affect listening effort.
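The dependent measure in these paradigms, the median secondary-task reaction time per condition, can be sketched as follows. The condition labels and RT values below are illustrative inventions, not data from the study:

```python
# Hypothetical sketch of a dual-task listening-effort analysis.
# Condition names and reaction times (ms) are invented for illustration.
from statistics import median

def median_rts(trials):
    """Group secondary-task reaction times by condition and take medians.

    `trials` is a list of (condition, rt_ms) tuples; the median RT per
    condition serves as the index of listening effort.
    """
    by_condition = {}
    for condition, rt in trials:
        by_condition.setdefault(condition, []).append(rt)
    return {cond: median(rts) for cond, rts in by_condition.items()}

# Illustrative data: longer RTs in noise suggest greater listening effort.
trials = [
    ("AO_quiet", 410), ("AO_quiet", 430), ("AO_quiet", 420),
    ("AO_noise", 520), ("AO_noise", 540), ("AO_noise", 500),
]
print(median_rts(trials))  # {'AO_quiet': 420, 'AO_noise': 520}
```

A slower median under noise than in quiet would be read as an increase in listening effort on that paradigm.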

  18. Negative Priming in Free Recall Reconsidered

    PubMed Central

    2015-01-01

    Negative priming in free recall is the finding of impaired memory performance when previously ignored auditory distracters become targets of encoding and retrieval. This negative priming has been attributed to an aftereffect of deploying inhibitory mechanisms that serve to suppress auditory distraction and minimize interference with learning and retrieval of task-relevant information. In 6 experiments, we tested the inhibitory account of the effect of negative priming in free recall against alternative accounts. We found that ignoring auditory distracters is neither sufficient nor necessary to produce the effect of negative priming in free recall. Instead, the effect is more readily accounted for by a buildup of proactive interference occurring whenever 2 successively presented lists of words are drawn from the same semantic category. PMID:26595066

  19. Effects of musicality and motivational orientation on auditory category learning: a test of a regulatory-fit hypothesis.

    PubMed

    McAuley, J Devin; Henry, Molly J; Wedd, Alan; Pleskac, Timothy J; Cesario, Joseph

    2012-02-01

    Two experiments investigated the effects of musicality and motivational orientation on auditory category learning. In both experiments, participants learned to classify tone stimuli that varied in frequency and duration according to an initially unknown disjunctive rule; feedback involved gaining points for correct responses (a gains reward structure) or losing points for incorrect responses (a losses reward structure). For Experiment 1, participants were told at the start that musicians typically outperform nonmusicians on the task, and then they were asked to identify themselves as either a "musician" or a "nonmusician." For Experiment 2, participants were given either a promotion focus prime (a performance-based opportunity to gain entry into a raffle) or a prevention focus prime (a performance-based criterion that needed to be maintained to avoid losing an entry into a raffle) at the start of the experiment. Consistent with a regulatory-fit hypothesis, self-identified musicians and promotion-primed participants given a gains reward structure made more correct tone classifications and were more likely to discover the optimal disjunctive rule than were musicians and promotion-primed participants experiencing losses. Reward structure (gains vs. losses) had inconsistent effects on the performance of nonmusicians, and a weaker regulatory-fit effect was found for the prevention focus prime. Overall, the findings from this study demonstrate a regulatory-fit effect in the domain of auditory category learning and show that motivational orientation may contribute to musician performance advantages in auditory perception.
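A disjunctive classification rule over frequency and duration, scored under a gains versus a losses reward structure, can be sketched as follows. The cutoffs, point values, and stimuli are hypothetical illustrations, not the study's parameters:

```python
# Hypothetical sketch of the category structure described above: tones vary
# in frequency and duration, and the optimal rule is disjunctive. Cutoffs,
# point values, and stimuli are invented for the sketch.

def disjunctive_rule(freq_hz, dur_ms):
    """Category A iff (high frequency AND short duration) OR
    (low frequency AND long duration); otherwise category B."""
    high, short = freq_hz > 500, dur_ms < 250
    return "A" if (high and short) or (not high and not short) else "B"

def score(responses, stimuli, reward="gains"):
    """Gains: +1 point per correct response. Losses: -1 per incorrect."""
    points = 0
    for resp, (f, d) in zip(responses, stimuli):
        correct = resp == disjunctive_rule(f, d)
        if reward == "gains":
            points += 1 if correct else 0
        else:  # losses
            points -= 0 if correct else 1
    return points

stimuli = [(700, 100), (300, 400), (700, 400), (300, 100)]
print([disjunctive_rule(f, d) for f, d in stimuli])  # ['A', 'A', 'B', 'B']
print(score(["A", "A", "B", "A"], stimuli, reward="gains"))   # 3
print(score(["A", "A", "B", "A"], stimuli, reward="losses"))  # -1
```

The same response accuracy thus yields point totals framed as accumulated gains or as avoided losses, which is what the regulatory-fit manipulation turns on.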

  20. Commonalities and Differences in Word Identification Skills among Learners of English as a Second Language

    ERIC Educational Resources Information Center

    Wang, Min; Koda, Keiko

    2005-01-01

    This study examined word identification skills among Chinese and Korean college students learning to read English as a second language in a naming experiment and an auditory category judgment task. Both groups demonstrated faster and more accurate naming performance on high-frequency words than low-frequency words and on regular words than…

  1. Commonalities and Differences in Word Identification Skills among Learners of English as a Second Language

    ERIC Educational Resources Information Center

    Wang, Min; Koda, Keiko

    2007-01-01

    This study examined word identification skills between two groups of college students with different first language (L1) backgrounds (Chinese and Korean) learning to read English as a second language (ESL). Word identification skills were tested in a naming experiment and an auditory category judgment task. Both groups of ESL learners demonstrated…

  2. Metal Sounds Stiffer than Drums for Ears, but Not Always for Hands: Low-Level Auditory Features Affect Multisensory Stiffness Perception More than High-Level Categorical Information

    PubMed Central

    Liu, Juan; Ando, Hiroshi

    2016-01-01

    Most real-world events stimulate multiple sensory modalities simultaneously. Usually, the stiffness of an object is perceived haptically. However, auditory signals also contain stiffness-related information, and people can form impressions of stiffness from the different impact sounds of metal, wood, or glass. To understand whether there is any interaction between auditory and haptic stiffness perception, and if so, whether the inferred material category is the most relevant auditory information, we conducted experiments using a force-feedback device and the modal synthesis method to present haptic stimuli and impact sound in accordance with participants’ actions, and to modulate low-level acoustic parameters, i.e., frequency and damping, without changing the inferred material categories of sound sources. We found that metal sounds consistently induced an impression of stiffer surfaces than did drum sounds in the audio-only condition, but participants haptically perceived surfaces with modulated metal sounds as significantly softer than the same surfaces with modulated drum sounds, which directly opposes the impression induced by these sounds alone. This result indicates that, although the inferred material category is strongly associated with audio-only stiffness perception, low-level acoustic parameters, especially damping, are more tightly integrated with haptic signals than the material category is. Frequency played an important role in both audio-only and audio-haptic conditions. Our study provides evidence that auditory information influences stiffness perception differently in unisensory and multisensory tasks. Furthermore, the data demonstrated that sounds with higher frequency and/or shorter decay time tended to be judged as stiffer, and contact sounds of stiff objects had no effect on the haptic perception of soft surfaces. 
We argue that the intrinsic physical relationship between object stiffness and acoustic parameters may be applied as prior knowledge to achieve robust estimation of stiffness in multisensory perception. PMID:27902718

  3. Auditory Task Irrelevance: A Basis for Inattentional Deafness

    PubMed Central

    Scheer, Menja; Bülthoff, Heinrich H.; Chuang, Lewis L.

    2018-01-01

    Objective This study investigates the neural basis of inattentional deafness, which could result from task irrelevance in the auditory modality. Background Humans can fail to respond to auditory alarms under high workload situations. This failure, termed inattentional deafness, is often attributed to high workload in the visual modality, which reduces one’s capacity for information processing. Besides this, our capacity for processing auditory information could also be selectively diminished if there is no obvious task relevance in the auditory channel. This could be another contributing factor given the rarity of auditory warnings. Method Forty-eight participants performed a visuomotor tracking task while auditory stimuli were presented: a frequent pure tone, an infrequent pure tone, and infrequent environmental sounds. Participants were required either to respond to the presentation of the infrequent pure tone (auditory task-relevant) or not (auditory task-irrelevant). We recorded and compared the event-related potentials (ERPs) that were generated by environmental sounds, which were always task-irrelevant for both groups. These ERPs served as an index for our participants’ awareness of the task-irrelevant auditory scene. Results Manipulation of auditory task relevance influenced the brain’s response to task-irrelevant environmental sounds. Specifically, the late novelty-P3 to irrelevant environmental sounds, which underlies working memory updating, was found to be selectively enhanced by auditory task relevance independent of visuomotor workload. Conclusion Task irrelevance in the auditory modality selectively reduces our brain’s responses to unexpected and irrelevant sounds regardless of visuomotor workload. Application Presenting relevant auditory information more often could mitigate the risk of inattentional deafness. PMID:29578754

  4. Neuronal activity in primate auditory cortex during the performance of audiovisual tasks.

    PubMed

    Brosch, Michael; Selezneva, Elena; Scheich, Henning

    2015-03-01

    This study aimed at a deeper understanding of which cognitive and motivational aspects of tasks affect auditory cortical activity. To this end we trained two macaque monkeys to perform two different tasks on the same audiovisual stimulus, and to do so with two different sizes of water reward. The monkeys had to touch a bar after a tone had been turned on together with an LED, and to hold the bar until either the tone (auditory task) or the LED (visual task) was turned off. In 399 multiunits recorded from core fields of auditory cortex we confirmed that during task engagement neurons responded to auditory and non-auditory stimuli that were task-relevant, such as light and water. We also confirmed that firing rates slowly increased or decreased for several seconds during various phases of the tasks. Responses to non-auditory stimuli and slow firing changes were observed during both the auditory and the visual task, with some differences between them. There was also a weak task-dependent modulation of the responses to auditory stimuli. In contrast to these cognitive aspects, motivational aspects of the tasks were not reflected in the firing, except during delivery of the water reward. In conclusion, the present study supports our previous proposal that there are two response types in the auditory cortex that represent the timing and type of auditory and non-auditory elements of auditory tasks, as well as the association between elements. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  5. The influence of linguistic experience on pitch perception in speech and nonspeech sounds

    NASA Astrophysics Data System (ADS)

    Bent, Tessa; Bradlow, Ann R.; Wright, Beverly A.

    2003-04-01

    How does native language experience with a tone or nontone language influence pitch perception? To address this question, 12 English and 13 Mandarin listeners participated in an experiment involving three tasks: (1) Mandarin tone identification, a clearly linguistic task where a strong effect of language background was expected; (2) pure-tone and pulse-train frequency discrimination, a clearly nonlinguistic auditory discrimination task where no effect of language background was expected; and (3) pitch glide identification, a nonlinguistic auditory categorization task where some effect of language background was expected. As anticipated, Mandarin listeners identified Mandarin tones significantly more accurately than English listeners (Task 1), and the two groups' pure-tone and pulse-train frequency discrimination thresholds did not differ (Task 2). For pitch glide identification (Task 3), Mandarin listeners made more identification errors: in comparison with English listeners, Mandarin listeners more frequently misidentified falling pitch glides as level, and more often misidentified level pitch "glides" with relatively high frequencies as rising and those with relatively low frequencies as falling. Thus, it appears that the effect of long-term linguistic experience can extend beyond lexical tone category identification in syllables to pitch class identification in certain nonspeech sounds. [Work supported by Sigma Xi and NIH.]

  6. Different neural activities support auditory working memory in musicians and bilinguals.

    PubMed

    Alain, Claude; Khatamian, Yasha; He, Yu; Lee, Yunjo; Moreno, Sylvain; Leung, Ada W S; Bialystok, Ellen

    2018-05-17

    Musical training and bilingualism benefit executive functioning and working memory (WM); however, the brain networks supporting this advantage are not well specified. Here, we used functional magnetic resonance imaging and the n-back task to assess WM for spatial (sound location) and nonspatial (sound category) auditory information in musician monolinguals (musicians), nonmusician bilinguals (bilinguals), and nonmusician monolinguals (controls). Musicians outperformed bilinguals and controls on the nonspatial WM task. Overall, spatial and nonspatial WM were associated with greater activity in dorsal and ventral brain regions, respectively. Increasing WM load yielded similar recruitment of the anterior-posterior attention network in all three groups. In both tasks and at both levels of difficulty, musicians showed lower brain activity than controls in the superior frontal gyrus and dorsolateral prefrontal cortex (DLPFC) bilaterally, a finding that may reflect improved and more efficient use of neural resources. Bilinguals showed enhanced activity in language-related areas (i.e., left DLPFC and left supramarginal gyrus) relative to musicians and controls, which could be associated with the need to suppress interference from competing semantic activations of multiple languages. These findings indicate that the auditory WM advantage in musicians and bilinguals is mediated by different neural networks specific to each life experience. © 2018 New York Academy of Sciences.

  7. Selective impairment of auditory selective attention under concurrent cognitive load.

    PubMed

    Dittrich, Kerstin; Stahl, Christoph

    2012-06-01

    Load theory predicts that concurrent cognitive load impairs selective attention. For visual stimuli, it has been shown that this impairment can be selective: Distraction was specifically increased when the stimulus material used in the cognitive load task matches that of the selective attention task. Here, we report four experiments that demonstrate such selective load effects for auditory selective attention. The effect of two different cognitive load tasks on two different auditory Stroop tasks was examined, and selective load effects were observed: Interference in a nonverbal-auditory Stroop task was increased under concurrent nonverbal-auditory cognitive load (compared with a no-load condition), but not under concurrent verbal-auditory cognitive load. By contrast, interference in a verbal-auditory Stroop task was increased under concurrent verbal-auditory cognitive load but not under nonverbal-auditory cognitive load. This double-dissociation pattern suggests the existence of different and separable verbal and nonverbal processing resources in the auditory domain.

  8. Auditory temporal preparation induced by rhythmic cues during concurrent auditory working memory tasks.

    PubMed

    Cutanda, Diana; Correa, Ángel; Sanabria, Daniel

    2015-06-01

    The present study investigated whether participants can develop temporal preparation driven by auditory isochronous rhythms when concurrently performing an auditory working memory (WM) task. In Experiment 1, participants had to respond to an auditory target presented after a regular or an irregular sequence of auditory stimuli while concurrently performing a Sternberg-type WM task. Results showed that participants responded faster after regular compared with irregular rhythms and that this effect was not affected by WM load; however, the lack of a significant main effect of WM load made it difficult to draw any conclusion regarding the influence of the dual-task manipulation in Experiment 1. In order to enhance dual-task interference, Experiment 2 combined the auditory rhythm procedure with an auditory N-Back task, which required WM updating (monitoring and coding of the information) and was presumably more demanding than the mere rehearsal of the WM task used in Experiment 1. Results now clearly showed dual-task interference effects (slower reaction times [RTs] in the high- vs. the low-load condition). However, such interference did not affect temporal preparation induced by rhythms, with faster RTs after regular than after irregular sequences in the high-load and low-load conditions. These results revealed that secondary tasks demanding memory updating, relative to tasks just demanding rehearsal, produced larger interference effects on overall RTs in the auditory rhythm task. Nevertheless, rhythm regularity exerted a strong temporal preparation effect that survived the interference of the WM task even when both tasks competed for processing resources within the auditory modality. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
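The rhythm manipulation, a regular (isochronous) versus an irregular sequence of auditory stimuli, can be sketched as follows. The interval durations and jitter range are illustrative choices, not the study's values:

```python
# Sketch of a regular vs. irregular rhythm manipulation: a regular sequence
# has constant inter-onset intervals (IOIs); an irregular one jitters them
# while matching total duration. IOI and jitter values are illustrative.
import random

def onsets(iois):
    """Cumulative stimulus onset times from a list of inter-onset intervals."""
    times, t = [], 0.0
    for ioi in iois:
        times.append(t)
        t += ioi
    return times

def regular_sequence(n, ioi_ms=600.0):
    return [ioi_ms] * n

def irregular_sequence(n, ioi_ms=600.0, jitter_ms=200.0, seed=0):
    """Jitter each IOI, then rescale so total duration matches the regular one."""
    rng = random.Random(seed)
    iois = [ioi_ms + rng.uniform(-jitter_ms, jitter_ms) for _ in range(n)]
    scale = (ioi_ms * n) / sum(iois)
    return [i * scale for i in iois]

reg = regular_sequence(5)
irr = irregular_sequence(5)
print(onsets(reg))         # [0.0, 600.0, 1200.0, 1800.0, 2400.0]
print(round(sum(irr), 1))  # 3000.0  (total duration matched)
```

Matching total duration means only the predictability of the target's onset differs between conditions, which is what the temporal preparation effect rests on.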

  9. Selectivity of lexical-semantic disorders in Polish-speaking patients with aphasia: evidence from single-word comprehension.

    PubMed

    Jodzio, Krzysztof; Biechowska, Daria; Leszniewska-Jodzio, Barbara

    2008-09-01

    Several neuropsychological studies have shown that patients with brain damage may demonstrate selective category-specific deficits of auditory comprehension. The present paper reports on an investigation of aphasic patients' preserved ability to perform a semantic task on spoken words despite severe impairment in auditory comprehension, as shown by failure in matching spoken words to pictured objects. Twenty-six aphasic patients (11 women and 15 men) with impaired speech comprehension due to a left-hemisphere ischaemic stroke were examined; all were right-handed and native speakers of Polish. Six narrowly defined semantic categories for which dissociations have been reported are colors, body parts, animals, food, objects (mostly tools), and means of transportation. An analysis using one-way ANOVA with repeated measures in conjunction with the Wilks' lambda test revealed significant discrepancies among these categories in aphasic patients, who had much more difficulty comprehending names of colors than they did comprehending names of other objects (F(5,21) = 13.15; p < .001). Animals were most often the easiest category to understand. The possibility of a simple explanation in terms of word frequency and/or visual complexity was ruled out. Evidence from the present study supports the position that so-called "global" aphasia is an imprecise term and should be redefined. These results are discussed within the connectionist and modular perspectives on category-specific deficits in aphasia.

  10. Opposite brain laterality in analogous auditory and visual tests.

    PubMed

    Oltedal, Leif; Hugdahl, Kenneth

    2017-11-01

    Laterality for language processing can be assessed by auditory and visual tasks. Typically, a right ear/right visual half-field (VHF) advantage is observed, reflecting left-hemispheric lateralization for language. Historically, auditory tasks have shown more consistent and reliable results when compared to VHF tasks. While few studies have compared analogous tasks applied to both sensory modalities for the same participants, one such study by Voyer and Boudreau [(2003). Cross-modal correlation of auditory and visual language laterality tasks: a serendipitous finding. Brain Cogn, 53(2), 393-397] found opposite laterality for visual and auditory language tasks. We adapted an experimental paradigm based on a dichotic listening and VHF approach, and applied the combined language paradigm in two separate experiments, including fMRI in the second experiment to measure brain activation in addition to behavioural data. The first experiment showed a right-ear advantage for the auditory task, but a left half-field advantage for the visual task. The second experiment confirmed these findings, with opposite laterality effects for the visual and auditory tasks. In conclusion, we replicate the finding by Voyer and Boudreau (2003) and support their interpretation that these visual and auditory language tasks measure different cognitive processes.

  11. Fit for the frontline? A focus group exploration of auditory tasks carried out by infantry and combat support personnel.

    PubMed

    Bevis, Zoe L; Semeraro, Hannah D; van Besouw, Rachel M; Rowan, Daniel; Lineton, Ben; Allsopp, Adrian J

    2014-01-01

    In order to preserve their operational effectiveness and ultimately their survival, military personnel must be able to detect important acoustic signals and maintain situational awareness. The possession of sufficient hearing ability to perform job-specific auditory tasks is defined as auditory fitness for duty (AFFD). Pure tone audiometry (PTA) is used to assess AFFD in the UK military; however, it is unclear whether PTA is able to accurately predict performance on job-specific auditory tasks. The aim of the current study was to gather information about auditory tasks carried out by infantry personnel on the frontline and the environment these tasks are performed in. The study consisted of 16 focus group interviews with an average of five participants per group. Eighty British army personnel were recruited from five infantry regiments. The focus group guideline included seven open-ended questions designed to elicit information about the auditory tasks performed on operational duty. Content analysis of the data resulted in two main themes: (1) the auditory tasks personnel are expected to perform and (2) situations where personnel felt their hearing ability was reduced. Auditory tasks were divided into subthemes of sound detection, speech communication and sound localization. Reasons for reduced performance included background noise, hearing protection and attention difficulties. The current study provided an important and novel insight into the complex auditory environment experienced by British infantry personnel and identified 17 auditory tasks carried out by personnel on operational duties. These auditory tasks will be used to inform the development of a functional AFFD test for infantry personnel.

  12. Cross-modal attention influences auditory contrast sensitivity: Decreasing visual load improves auditory thresholds for amplitude- and frequency-modulated sounds.

    PubMed

    Ciaramitaro, Vivian M; Chow, Hiu Mei; Eglington, Luke G

    2017-03-01

    We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation (two consecutive intervals of streams of visual letters) and had to report which interval contained a particular color (low load, demanding fewer attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower (that is, auditory sensitivity was improved) for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.
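An amplitude-modulated tone of the kind used as the target stimulus can be sketched as follows. The carrier frequency, modulation rate, depth, and sampling rate are illustrative choices, not the study's parameters:

```python
# Sketch of an amplitude-modulated (AM) tone: a carrier sinusoid whose
# envelope varies at a slow modulation rate. All parameter values here are
# illustrative, not taken from the study.
import math

def am_tone(carrier_hz=1000.0, mod_hz=8.0, depth=0.5, dur_s=0.5, sr=16000):
    """Return AM tone samples: (1 + depth*sin(2*pi*fm*t)) * sin(2*pi*fc*t)."""
    n = int(dur_s * sr)
    return [
        (1.0 + depth * math.sin(2 * math.pi * mod_hz * t / sr))
        * math.sin(2 * math.pi * carrier_hz * t / sr)
        for t in range(n)
    ]

samples = am_tone()
print(len(samples))                         # 8000
print(max(abs(s) for s in samples) <= 1.5)  # True (envelope bounded by 1+depth)
```

In a two-interval task like the one described, one interval would contain such a modulated tone and the other an unmodulated carrier, with `depth` adjusted to estimate the detection threshold.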

  13. Increased Early Processing of Task-Irrelevant Auditory Stimuli in Older Adults

    PubMed Central

    Tusch, Erich S.; Alperin, Brittany R.; Holcomb, Phillip J.; Daffner, Kirk R.

    2016-01-01

    The inhibitory deficit hypothesis of cognitive aging posits that older adults’ inability to adequately suppress processing of irrelevant information is a major source of cognitive decline. Prior research has demonstrated that in response to task-irrelevant auditory stimuli there is an age-associated increase in the amplitude of the N1 wave, an ERP marker of early perceptual processing. Here, we tested predictions derived from the inhibitory deficit hypothesis that the age-related increase in N1 would be 1) observed under an auditory-ignore, but not auditory-attend condition, 2) attenuated in individuals with high executive capacity (EC), and 3) augmented by increasing cognitive load of the primary visual task. ERPs were measured in 114 well-matched young, middle-aged, young-old, and old-old adults, designated as having high or average EC based on neuropsychological testing. Under the auditory-ignore (visual-attend) task, participants ignored auditory stimuli and responded to rare target letters under low and high load. Under the auditory-attend task, participants ignored visual stimuli and responded to rare target tones. Results confirmed an age-associated increase in N1 amplitude to auditory stimuli under the auditory-ignore but not auditory-attend task. Contrary to predictions, EC did not modulate the N1 response. The load effect was the opposite of expectation: the N1 to task-irrelevant auditory events was smaller under high load. Finally, older adults did not simply fail to suppress the N1 to auditory stimuli in the task-irrelevant modality; they generated a larger response than to identical stimuli in the task-relevant modality. In summary, several of the study’s findings do not fit the inhibitory-deficit hypothesis of cognitive aging, which may need to be refined or supplemented by alternative accounts. PMID:27806081
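The ERP measure used here, averaging epochs across trials and taking the most negative voltage in the N1 latency window, can be sketched as follows. The toy sampling rate, window, and voltages are illustrative, not the study's data:

```python
# Sketch of an ERP amplitude measure: average EEG epochs time-locked to the
# task-irrelevant sounds, then take the most negative value in an N1 window
# (roughly 80-120 ms). Data, sampling rate, and window are illustrative.

def erp_average(epochs):
    """Elementwise mean across trials; each epoch is a list of voltages."""
    n = len(epochs)
    return [sum(vals) / n for vals in zip(*epochs)]

def n1_amplitude(erp, sr_hz, window_s=(0.08, 0.12)):
    """Most negative voltage in the N1 latency window (epoch starts at 0 s)."""
    lo = int(window_s[0] * sr_hz)
    hi = int(window_s[1] * sr_hz)
    return min(erp[lo:hi + 1])

sr = 100  # toy sampling rate: one sample per 10 ms
epochs = [
    [0, 0, 0, 0, 0, 0, 0, 0, -4, -6, -5, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, -2, -4, -3, 0, 0],
]
erp = erp_average(epochs)
print(n1_amplitude(erp, sr))  # -5.0 (mean of -6 and -4 at the 90 ms sample)
```

A larger (more negative) N1 in one condition than another is the kind of amplitude difference the study compares across age groups and attention conditions.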

  14. Representation of Sound Categories in Auditory Cortical Maps

    ERIC Educational Resources Information Center

    Guenther, Frank H.; Nieto-Castanon, Alfonso; Ghosh, Satrajit S.; Tourville, Jason A.

    2004-01-01

    Functional magnetic resonance imaging (fMRI) was used to investigate the representation of sound categories in human auditory cortex. Experiment 1 investigated the representation of prototypical (good) and nonprototypical (bad) examples of a vowel sound. Listening to prototypical examples of a vowel resulted in less auditory cortical activation…

  15. Task-specific reorganization of the auditory cortex in deaf humans

    PubMed Central

    Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin

    2017-01-01

    The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior–lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain. PMID:28069964

  16. Task-specific reorganization of the auditory cortex in deaf humans.

    PubMed

    Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin

    2017-01-24

    The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior-lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain.

  17. Categorical vowel perception enhances the effectiveness and generalization of auditory feedback in human-machine-interfaces.

    PubMed

    Larson, Eric; Terry, Howard P; Canevari, Margaux M; Stepp, Cara E

    2013-01-01

    Human-machine interface (HMI) designs offer the possibility of improving quality of life for patient populations as well as augmenting normal user function. Despite its pragmatic benefits, auditory feedback remains underutilized for HMI control, in part because of observed limitations in its effectiveness. The goal of this study was to determine the extent to which categorical speech perception could be used to improve an auditory HMI. Using surface electromyography, 24 healthy speakers of American English participated in 4 sessions to learn to control an HMI using auditory feedback (provided via vowel synthesis). Participants trained on 3 targets in sessions 1-3 and were tested on 3 novel targets in session 4. An "established categories with text cues" group of eight participants was trained and tested on auditory targets corresponding to standard American English vowels, using auditory and text target cues. An "established categories without text cues" group of eight participants was trained and tested on the same targets using only auditory cuing of target vowel identity. A "new categories" group of eight participants was trained and tested on targets that corresponded to vowel-like sounds not part of American English. Analyses of user performance revealed significant effects of session and group (established categories groups vs. the new categories group), and a trend for an interaction between session and group. Results suggest that auditory feedback can be effectively used for HMI operation when paired with established categorical (native vowel) targets with an unambiguous cue.

  18. Fit for the frontline? Identification of mission-critical auditory tasks (MCATs) carried out by infantry and combat-support personnel.

    PubMed

    Semeraro, Hannah D; Bevis, Zoë L; Rowan, Daniel; van Besouw, Rachel M; Allsopp, Adrian J

    2015-01-01

    The ability to listen to commands in noisy environments and understand acoustic signals, while maintaining situational awareness, is an important skill for military personnel and can be critical for mission success. Seventeen auditory tasks carried out by British infantry and combat-support personnel were identified through a series of focus groups conducted by Bevis et al. For military personnel, these auditory tasks are termed mission-critical auditory tasks (MCATs) if they are carried out in a military-specific environment and have a negative consequence when performed below a specified level. A questionnaire study was conducted to find out which of the auditory tasks identified by Bevis et al. satisfy the characteristics of an MCAT. Seventy-nine British infantry and combat-support personnel from four regiments across the South of England participated. For each auditory task, participants indicated: 1) the consequences of poor performance on the task, 2) who performs the task, and 3) how frequently the task is carried out. The data were analysed to determine which tasks are carried out by which personnel, which have the most negative consequences when performed poorly, and which are performed the most frequently. This resulted in a list of 9 MCATs (7 speech communication tasks, 1 sound localization task, and 1 sound detection task) that should be prioritised for representation in a measure of auditory fitness for duty (AFFD) for these personnel. Incorporating MCATs in AFFD measures will help to ensure that personnel have the necessary auditory skills for safe and effective deployment on operational duties.

  19. Fit for the frontline? Identification of mission-critical auditory tasks (MCATs) carried out by infantry and combat-support personnel

    PubMed Central

    Semeraro, Hannah D.; Bevis, Zoë L.; Rowan, Daniel; van Besouw, Rachel M.; Allsopp, Adrian J.

    2015-01-01

    The ability to listen to commands in noisy environments and understand acoustic signals, while maintaining situational awareness, is an important skill for military personnel and can be critical for mission success. Seventeen auditory tasks carried out by British infantry and combat-support personnel were identified through a series of focus groups conducted by Bevis et al. For military personnel, these auditory tasks are termed mission-critical auditory tasks (MCATs) if they are carried out in a military-specific environment and have a negative consequence when performed below a specified level. A questionnaire study was conducted to find out which of the auditory tasks identified by Bevis et al. satisfy the characteristics of an MCAT. Seventy-nine British infantry and combat-support personnel from four regiments across the South of England participated. For each auditory task, participants indicated: 1) the consequences of poor performance on the task, 2) who performs the task, and 3) how frequently the task is carried out. The data were analysed to determine which tasks are carried out by which personnel, which have the most negative consequences when performed poorly, and which are performed the most frequently. This resulted in a list of 9 MCATs (7 speech communication tasks, 1 sound localization task, and 1 sound detection task) that should be prioritised for representation in a measure of auditory fitness for duty (AFFD) for these personnel. Incorporating MCATs in AFFD measures will help to ensure that personnel have the necessary auditory skills for safe and effective deployment on operational duties. PMID:25774613

  20. Non-visual spatial tasks reveal increased interactions with stance postural control.

    PubMed

    Woollacott, Marjorie; Vander Velde, Timothy

    2008-05-07

    The current investigation aimed to contrast the level and quality of dual-task interactions resulting from the combined performance of a challenging primary postural task and three specific, yet categorically dissociated, secondary central executive tasks. Experiments determined the extent to which modality (visual vs. auditory) and code (non-spatial vs. spatial) specific cognitive resources contributed to postural interference in young adults (n=9) in a dual-task setting. We hypothesized that the different forms of executive n-back task processing employed (visual-object, auditory-object and auditory-spatial) would interact to contrasting degrees with tandem Romberg stance postural control, and that tasks drawing on the spatial domain would prove most vulnerable to dual-task interference. Across all cognitive tasks employed, including auditory-object (aOBJ), auditory-spatial (aSPA), and visual-object (vOBJ) tasks, increasing n-back task complexity produced correlated increases in verbal reaction time measures. Increasing cognitive task complexity also resulted in consistent decreases in judgment accuracy. Postural performance was significantly influenced by the type of cognitive loading delivered. At comparable levels of cognitive task difficulty (n-back demands and accuracy judgments) the performance of challenging auditory-spatial tasks produced significantly greater levels of postural sway than either the auditory-object or visual-object based tasks. These results suggest that it is the employment of limited non-visual spatially based coding resources that may underlie previously observed visual dual-task interference effects with stance postural control in healthy young adults.

  1. Acoustic and higher-level representations of naturalistic auditory scenes in human auditory and frontal cortex.

    PubMed

    Hausfeld, Lars; Riecke, Lars; Formisano, Elia

    2018-06-01

    Often, in everyday life, we encounter auditory scenes comprising multiple simultaneous sounds yet succeed in selectively attending to only one sound, typically the one most relevant to ongoing behavior. Studies using basic sounds and two-talker stimuli have shown that auditory selective attention aids this process by enhancing the neural representations of the attended sound in auditory cortex. It remains unknown, however, whether and how this selective attention mechanism operates on representations of auditory scenes containing natural sounds of different categories. In this high-field fMRI study we presented participants with simultaneous voices and musical instruments while manipulating their focus of attention. We found an attentional enhancement of neural sound representations in temporal cortex - as defined by spatial activation patterns - at locations that depended on the attended category (i.e., voices or instruments). In contrast, we found that in frontal cortex the site of enhancement was independent of the attended category and the same regions could flexibly represent any attended sound regardless of its category. These results are relevant to elucidate the interacting mechanisms of bottom-up and top-down processing when listening to real-life scenes composed of multiple sound categories. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  2. Converging Modalities Ground Abstract Categories: The Case of Politics

    PubMed Central

    Farias, Ana Rita; Garrido, Margarida V.; Semin, Gün R.

    2013-01-01

    Three studies are reported examining the grounding of abstract concepts across two modalities (visual and auditory) and their symbolic representation. A comparison of the outcomes across these studies reveals that the symbolic representation of political concepts and their visual and auditory modalities is convergent. In other words, the spatial relationships between specific instances of the political categories are highly overlapping across the symbolic, visual and auditory modalities. These findings suggest that abstract categories display redundancy across modal and amodal representations, and are multimodal. PMID:23593360

  3. Converging modalities ground abstract categories: the case of politics.

    PubMed

    Farias, Ana Rita; Garrido, Margarida V; Semin, Gün R

    2013-01-01

    Three studies are reported examining the grounding of abstract concepts across two modalities (visual and auditory) and their symbolic representation. A comparison of the outcomes across these studies reveals that the symbolic representation of political concepts and their visual and auditory modalities is convergent. In other words, the spatial relationships between specific instances of the political categories are highly overlapping across the symbolic, visual and auditory modalities. These findings suggest that abstract categories display redundancy across modal and amodal representations, and are multimodal.

  4. Behavioral Measures of Auditory Streaming in Ferrets (Mustela putorius)

    PubMed Central

    Ma, Ling; Yin, Pingbo; Micheyl, Christophe; Oxenham, Andrew J.; Shamma, Shihab A.

    2015-01-01

    An important aspect of the analysis of auditory “scenes” relates to the perceptual organization of sound sequences into auditory “streams.” In this study, we adapted two auditory perception tasks, used in recent human psychophysical studies, to obtain behavioral measures of auditory streaming in ferrets (Mustela putorius). One task involved the detection of shifts in the frequency of tones within an alternating tone sequence. The other task involved the detection of a stream of regularly repeating target tones embedded within a randomly varying multitone background. In both tasks, performance was measured as a function of various stimulus parameters, which previous psychophysical studies in humans have shown to influence auditory streaming. Ferret performance in the two tasks was found to vary as a function of these parameters in a way that is qualitatively consistent with the human data. These results suggest that auditory streaming occurs in ferrets, and that the two tasks described here may provide a valuable tool in future behavioral and neurophysiological studies of the phenomenon. PMID:20695663

  5. Reduced auditory processing capacity during vocalization in children with Selective Mutism.

    PubMed

    Arie, Miri; Henkin, Yael; Lamy, Dominique; Tetin-Schneider, Simona; Apter, Alan; Sadeh, Avi; Bar-Haim, Yair

    2007-02-01

    Because abnormal Auditory Efferent Activity (AEA) is associated with auditory distortions during vocalization, we tested whether auditory processing is impaired during vocalization in children with Selective Mutism (SM). Participants were children with SM and abnormal AEA, children with SM and normal AEA, and normally speaking controls, who had to detect aurally presented target words embedded within word lists under two conditions: silence (single task), and while vocalizing (dual task). To ascertain the specificity of the auditory-vocal deficit, effects of concurrent vocalizing were also examined during a visual task. Children with SM and abnormal AEA showed impaired auditory processing during vocalization relative to children with SM and normal AEA, and relative to control children. This impairment is specific to the auditory modality and does not reflect difficulties with dual tasks per se. The data extend previous findings suggesting that deficient auditory processing is involved in speech selectivity in SM.

  6. The modality effect of ego depletion: Auditory task modality reduces ego depletion.

    PubMed

    Li, Qiong; Wang, Zhenhong

    2016-08-01

    An initial act of self-control that impairs subsequent acts of self-control is called ego depletion. The ego depletion phenomenon has been observed consistently. The modality effect refers to the effect of the presentation modality on the processing of stimuli. The modality effect has also been robustly found in a large body of research. However, no study to date has examined the modality effects of ego depletion. This issue was addressed in the current study. In Experiment 1, after all participants completed a handgrip task, participants in one group completed a visual attention regulation task while those in the other group completed an auditory attention regulation task; all participants then completed the handgrip task again. The ego depletion phenomenon was observed in both the visual and the auditory attention regulation task. Moreover, participants who completed the visual task performed worse on the handgrip task than participants who completed the auditory task, indicating greater ego depletion in the visual task condition. In Experiment 2, participants completed an initial task that either did or did not deplete self-control resources, and then they completed a second visual or auditory attention control task. The results indicated that depleted participants performed better on the auditory attention control task than on the visual attention control task. These findings suggest that altering task modality may reduce ego depletion. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  7. Behavioral semantics of learning and crossmodal processing in auditory cortex: the semantic processor concept.

    PubMed

    Scheich, Henning; Brechmann, André; Brosch, Michael; Budinger, Eike; Ohl, Frank W; Selezneva, Elena; Stark, Holger; Tischmeyer, Wolfgang; Wetzel, Wolfram

    2011-01-01

    Two phenomena of auditory cortex activity have recently attracted attention, namely that the primary field can show different types of learning-related changes of sound representation and that during learning even this early auditory cortex is under strong multimodal influence. Based on neuronal recordings in animal auditory cortex during instrumental tasks, in this review we put forward the hypothesis that these two phenomena serve to derive the task-specific meaning of sounds by associative learning. To understand the implications of this tenet, it is helpful to realize how a behavioral meaning is usually derived for novel environmental sounds. For this purpose, associations with other sensory, e.g. visual, information are mandatory to develop a connection between a sound and its behaviorally relevant cause and/or the context of sound occurrence. This makes it plausible that in instrumental tasks various non-auditory sensory and procedural contingencies of sound generation become co-represented by neuronal firing in auditory cortex. Information related to reward or to avoidance of discomfort during task learning, which is essentially non-auditory, is also co-represented. The reinforcement influence points to the dopaminergic internal reward system, the local role of which for memory consolidation in auditory cortex is well-established. Thus, during a trial of task performance, the neuronal responses to the sounds are embedded in a sequence of representations of such non-auditory information. The embedded auditory responses show task-related modulations falling into types that correspond to three basic logical classifications that may be performed with a perceptual item, ranging from simple detection to discrimination and categorization. This hierarchy of classifications determines the semantic "same-different" relationships among sounds.
Different cognitive classifications appear to be a consequence of the learning task and lead to the recruitment of different excitatory and inhibitory mechanisms and to distinct spatiotemporal metrics of map activation to represent a sound. The described non-auditory firing and modulations of auditory responses suggest that auditory cortex, by collecting all the necessary information, functions as a "semantic processor" that deduces the task-specific meaning of sounds through learning. © 2010. Published by Elsevier B.V.

  8. The effects of divided attention on auditory priming.

    PubMed

    Mulligan, Neil W; Duke, Marquinn; Cooper, Angela W

    2007-09-01

    Traditional theorizing stresses the importance of attentional state during encoding for later memory, based primarily on research with explicit memory. Recent research has begun to investigate the role of attention in implicit memory but has focused almost exclusively on priming in the visual modality. The present experiments examined the effect of divided attention on auditory implicit memory, using auditory perceptual identification, word-stem completion and word-fragment completion. Participants heard study words under full attention conditions or while simultaneously carrying out a distractor task (the divided attention condition). In Experiment 1, a distractor task with low response frequency failed to disrupt later auditory priming (but diminished explicit memory as assessed with auditory recognition). In Experiment 2, a distractor task with greater response frequency disrupted priming on all three of the auditory priming tasks as well as the explicit test. These results imply that although auditory priming is less reliant on attention than explicit memory, it is still greatly affected by at least some divided-attention manipulations. These results are consistent with research using visual priming tasks and have relevance for hypotheses regarding attention and auditory priming.

  9. Auditory processing deficits in bipolar disorder with and without a history of psychotic features.

    PubMed

    Zenisek, RyAnna; Thaler, Nicholas S; Sutton, Griffin P; Ringdahl, Erik N; Snyder, Joel S; Allen, Daniel N

    2015-11-01

    Auditory perception deficits have been identified in schizophrenia (SZ) and linked to dysfunction in the auditory cortex. Given that psychotic symptoms, including auditory hallucinations, are also seen in bipolar disorder (BD), it may be that individuals with BD who also exhibit psychotic symptoms demonstrate a similar impairment in auditory perception. Fifty individuals with SZ, 30 individuals with bipolar I disorder with a history of psychosis (BD+), 28 individuals with bipolar I disorder with no history of psychotic features (BD-), and 29 normal controls (NC) were administered a tone discrimination task and an emotion recognition task. Mixed-model analyses of covariance with planned comparisons indicated that individuals with BD+ performed at a level that was intermediate between those with BD- and those with SZ on the more difficult condition of the tone discrimination task and on the auditory condition of the emotion recognition task. There were no differences between the BD+ and BD- groups on the visual or auditory-visual affect recognition conditions. Regression analyses indicated that performance on the tone discrimination task predicted performance on all conditions of the emotion recognition task. Auditory hallucinations in BD+ were not related to performance on either task. Our findings suggested that, although deficits in frequency discrimination and emotion recognition are more severe in SZ, these impairments extend to BD+. Although our results did not support the idea that auditory hallucinations may be related to these deficits, they indicated that basic auditory deficits may be a marker for psychosis, regardless of SZ or BD diagnosis. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  10. Elevated depressive symptoms enhance reflexive but not reflective auditory category learning.

    PubMed

    Maddox, W Todd; Chandrasekaran, Bharath; Smayda, Kirsten; Yi, Han-Gyol; Koslov, Seth; Beevers, Christopher G

    2014-09-01

    In vision an extensive literature supports the existence of competitive dual-processing systems of category learning that are grounded in neuroscience and are partially dissociable. The reflective system is prefrontally mediated and uses working memory and executive attention to develop and test rules for classifying in an explicit fashion. The reflexive system is striatally mediated and operates by implicitly associating perception with actions that lead to reinforcement. Although categorization is fundamental to auditory processing, little is known about the learning systems that mediate auditory categorization, and even less is known about the effects of individual differences in the relative efficiency of the two learning systems. Previous studies have shown that individuals with elevated depressive symptoms show deficits in reflective processing. We exploit this finding to test critical predictions of the dual-learning systems model in audition. Specifically, we examine the extent to which the two systems are dissociable and competitive. We predicted that elevated depressive symptoms would lead to reflective-optimal learning deficits but reflexive-optimal learning advantages. Because natural speech category learning is reflexive in nature, we also predicted that elevated depressive symptoms would lead to superior speech learning. In support of our predictions, individuals with elevated depressive symptoms showed a deficit in reflective-optimal auditory category learning, but an advantage in reflexive-optimal auditory category learning. In addition, individuals with elevated depressive symptoms showed an advantage in learning a non-native speech category structure. Computational modeling suggested that the elevated depressive symptom advantage was due to faster, more accurate, and more frequent use of reflexive category learning strategies in individuals with elevated depressive symptoms.
The implications of this work for the dual-process approach to auditory learning and depression are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Elevated Depressive Symptoms Enhance Reflexive but not Reflective Auditory Category Learning

    PubMed Central

    Maddox, W. Todd; Chandrasekaran, Bharath; Smayda, Kirsten; Yi, Han-Gyol; Koslov, Seth; Beevers, Christopher G.

    2014-01-01

    In vision an extensive literature supports the existence of competitive dual-processing systems of category learning that are grounded in neuroscience and are partially dissociable. The reflective system is prefrontally mediated and uses working memory and executive attention to develop and test rules for classifying in an explicit fashion. The reflexive system is striatally mediated and operates by implicitly associating perception with actions that lead to reinforcement. Although categorization is fundamental to auditory processing, little is known about the learning systems that mediate auditory categorization, and even less is known about the effects of individual differences in the relative efficiency of the two learning systems. Previous studies have shown that individuals with elevated depressive symptoms show deficits in reflective processing. We exploit this finding to test critical predictions of the dual-learning systems model in audition. Specifically, we examine the extent to which the two systems are dissociable and competitive. We predicted that elevated depressive symptoms would lead to reflective-optimal learning deficits but reflexive-optimal learning advantages. Because natural speech category learning is reflexive in nature, we also predicted that elevated depressive symptoms would lead to superior speech learning. In support of our predictions, individuals with elevated depressive symptoms showed a deficit in reflective-optimal auditory category learning, but an advantage in reflexive-optimal auditory category learning. In addition, individuals with elevated depressive symptoms showed an advantage in learning a non-native speech category structure. Computational modeling suggested that the elevated depressive symptom advantage was due to faster, more accurate, and more frequent use of reflexive category learning strategies in individuals with elevated depressive symptoms.
The implications of this work for the dual-process approach to auditory learning and depression are discussed. PMID:25041936

  12. Auditory Attention and Comprehension During a Simulated Night Shift: Effects of Task Characteristics.

    PubMed

    Pilcher, June J; Jennings, Kristen S; Phillips, Ginger E; McCubbin, James A

    2016-11-01

    The current study investigated performance on a dual auditory task during a simulated night shift. Night shifts and sleep deprivation negatively affect performance on vigilance-based tasks, but less is known about their effects on complex tasks. Because language processing is necessary for successful work performance, it is important to understand how it is affected by night work and sleep deprivation. Sixty-two participants completed a simulated night shift resulting in 28 hr of total sleep deprivation. Performance on a vigilance task and a dual auditory language task was examined across four testing sessions. The results indicate that working at night negatively impacts vigilance, auditory attention, and comprehension. The effects on the auditory task varied based on the content of the auditory material. When the material was interesting and easy, the participants performed better. Night work had a greater negative effect when the auditory material was less interesting and more difficult. These findings support prior research showing that vigilance decreases during the night. The results suggest that auditory comprehension suffers when individuals are required to work at night. Maintaining attention and controlling effort, especially on passages that are less interesting or more difficult, could improve performance during night shifts. The results from the current study apply to many work environments where decision making is necessary in response to complex auditory information. Better predicting the effects of night work on language processing is important for developing improved means of coping with shiftwork. © 2016, Human Factors and Ergonomics Society.

  13. How Auditory Experience Differentially Influences the Function of Left and Right Superior Temporal Cortices.

    PubMed

    Twomey, Tae; Waters, Dafydd; Price, Cathy J; Evans, Samuel; MacSweeney, Mairéad

    2017-09-27

    To investigate how hearing status, sign language experience, and task demands influence functional responses in the human superior temporal cortices (STC) we collected fMRI data from deaf and hearing participants (male and female), who either acquired sign language early or late in life. Our stimuli in all tasks were pictures of objects. We varied the linguistic and visuospatial processing demands in three different tasks that involved decisions about (1) the sublexical (phonological) structure of the British Sign Language (BSL) signs for the objects, (2) the semantic category of the objects, and (3) the physical features of the objects. Neuroimaging data revealed that in participants who were deaf from birth, STC showed increased activation during visual processing tasks. Importantly, this differed across hemispheres. Right STC was consistently activated regardless of the task whereas left STC was sensitive to task demands. Significant activation was detected in the left STC only for the BSL phonological task. This task, we argue, placed greater demands on visuospatial processing than the other two tasks. In hearing signers, enhanced activation was absent in both left and right STC during all three tasks. Lateralization analyses demonstrated that the effect of deafness was more task-dependent in the left than the right STC whereas it was more task-independent in the right than the left STC. These findings indicate how the absence of auditory input from birth leads to dissociable and altered functions of left and right STC in deaf participants. SIGNIFICANCE STATEMENT Those born deaf can offer unique insights into neuroplasticity, in particular in regions of superior temporal cortex (STC) that primarily respond to auditory input in hearing people. Here we demonstrate that in those deaf from birth the left and the right STC have altered and dissociable functions. The right STC was activated regardless of demands on visual processing.
In contrast, the left STC was sensitive to the demands of visuospatial processing. Furthermore, hearing signers, with the same sign language experience as the deaf participants, did not activate the STCs. Our data advance current understanding of neural plasticity by determining the differential effects that hearing status and task demands can have on left and right STC function. Copyright © 2017 Twomey et al.

  14. How auditory discontinuities and linguistic experience affect the perception of speech and non-speech in English- and Spanish-speaking listeners

    NASA Astrophysics Data System (ADS)

    Hay, Jessica F.; Holt, Lori L.; Lotto, Andrew J.; Diehl, Randy L.

    2005-04-01

    The present study was designed to investigate the effects of long-term linguistic experience on the perception of non-speech sounds in English and Spanish speakers. Research using tone-onset-time (TOT) stimuli, a type of non-speech analogue of voice-onset-time (VOT) stimuli, has suggested that there is an underlying auditory basis for the perception of stop consonants based on a threshold for detecting onset asynchronies in the vicinity of +20 ms. For English listeners, stop consonant labeling boundaries are congruent with the positive auditory discontinuity, while Spanish speakers place their VOT labeling boundaries and discrimination peaks in the vicinity of 0 ms VOT. The present study addresses the question of whether long-term linguistic experience with different VOT categories affects the perception of non-speech stimuli that are analogous in their acoustic timing characteristics. A series of synthetic VOT stimuli and TOT stimuli were created for this study. Using language-appropriate labeling and ABX discrimination tasks, labeling boundaries (VOT) and discrimination peaks (VOT and TOT) are assessed for 24 monolingual English speakers and 24 monolingual Spanish speakers. The interplay between language experience and auditory biases is discussed. [Work supported by NIDCD.]

  15. Missing a trick: Auditory load modulates conscious awareness in audition.

    PubMed

    Fairnie, Jake; Moore, Brian C J; Remington, Anna

    2016-07-01

    In the visual domain there is considerable evidence supporting the Load Theory of Attention and Cognitive Control, which holds that conscious perception of background stimuli depends on the level of perceptual load involved in a primary task. However, literature on the applicability of this theory to the auditory domain is limited and, in many cases, inconsistent. Here we present a novel "auditory search task" that allows systematic investigation of the impact of auditory load on auditory conscious perception. An array of simultaneous, spatially separated sounds was presented to participants. On half the trials, a critical stimulus was presented concurrently with the array. Participants were asked to detect which of 2 possible targets was present in the array (primary task), and whether the critical stimulus was present or absent (secondary task). Increasing the auditory load of the primary task (raising the number of sounds in the array) consistently reduced the ability to detect the critical stimulus. This indicates that, at least in certain situations, load theory applies in the auditory domain. The implications of this finding are discussed both with respect to our understanding of typical audition and for populations with altered auditory processing. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  16. Spatiotemporal differentiation in auditory and motor regions during auditory phoneme discrimination.

    PubMed

    Aerts, Annelies; Strobbe, Gregor; van Mierlo, Pieter; Hartsuiker, Robert J; Corthals, Paul; Santens, Patrick; De Letter, Miet

    2017-06-01

    Auditory phoneme discrimination (APD) is supported by both auditory and motor regions through a sensorimotor interface embedded in a fronto-temporo-parietal cortical network. However, the specific spatiotemporal organization of this network during APD with respect to different types of phonemic contrasts is still unclear. Here, we use source reconstruction, applied to event-related potentials in a group of 47 participants, to uncover a potential spatiotemporal differentiation in these brain regions during a passive and active APD task with respect to place of articulation (PoA), voicing and manner of articulation (MoA). Results demonstrate that in an early stage (50-110 ms), auditory, motor and sensorimotor regions elicit more activation during the passive and active APD task with MoA and the active APD task with voicing compared to PoA. In a later stage (130-175 ms), the same auditory and motor regions elicit more activation during the APD task with PoA compared to MoA and voicing, yet only in the active condition, implying important timing differences. Degree of attention influences a frontal network during the APD task with PoA, whereas auditory regions are more affected during the APD task with MoA and voicing. Based on these findings, it can be tentatively suggested that APD is supported by the integration of early activation of auditory-acoustic properties in superior temporal regions, more pronounced for MoA and voicing, and later auditory-to-motor integration in sensorimotor areas, more pronounced for PoA.

  17. Comparison of Congruence Judgment and Auditory Localization Tasks for Assessing the Spatial Limits of Visual Capture

    PubMed Central

    Bosen, Adam K.; Fleming, Justin T.; Brown, Sarah E.; Allen, Paul D.; O'Neill, William E.; Paige, Gary D.

    2016-01-01

    Vision typically has better spatial accuracy and precision than audition, and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small visual capture is likely to occur, and when disparity is large visual capture is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, congruence judgment assesses perceived unity between cues by having subjects report whether or not auditory and visual targets came from the same location. Second, auditory localization assesses the graded influence of vision on auditory spatial perception by having subjects point to the remembered location of an auditory target presented with a visual target. Previous research has shown that when both tasks are performed concurrently they produce similar measures of visual capture, but this may not hold when tasks are performed independently. Here, subjects alternated between tasks independently across three sessions. A Bayesian inference model of visual capture was used to estimate perceptual parameters for each session, which were compared across tasks. Results demonstrated that the range of audio-visual disparities over which visual capture was likely to occur was narrower in auditory localization than in congruence judgment, which the model indicates was caused by subjects adjusting their prior expectation that targets originated from the same location in a task-dependent manner. PMID:27815630
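    The model-based account above can be illustrated with the core computation of a causal-inference observer: the posterior probability that the auditory and visual targets share a source, given their spatial disparity. The Gaussian likelihood forms, parameter names, and values below are illustrative assumptions, not the parameters fitted in the study:

```python
import math

def p_common_cause(disparity, prior_common, sigma_common, sigma_indep):
    """Posterior probability of a shared source given audio-visual
    disparity: a common cause makes small disparities likely (narrow
    Gaussian), independent causes make large disparities likely (broad
    Gaussian). Lowering prior_common narrows the capture range."""
    def gauss(x, s):
        return math.exp(-x * x / (2.0 * s * s)) / (s * math.sqrt(2.0 * math.pi))
    num = prior_common * gauss(disparity, sigma_common)
    den = num + (1.0 - prior_common) * gauss(disparity, sigma_indep)
    return num / den
```

Under this reading, the narrower capture range in the auditory localization task corresponds to subjects adopting a lower prior_common there than in congruence judgment.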

  18. Can spectro-temporal complexity explain the autistic pattern of performance on auditory tasks?

    PubMed

    Samson, Fabienne; Mottron, Laurent; Jemel, Boutheina; Belin, Pascal; Ciocca, Valter

    2006-01-01

    To test the hypothesis that the level of neural complexity explains the relative level of performance and brain activity in autistic individuals, available behavioural, ERP and imaging findings related to the perception of increasingly complex auditory material under various processing tasks in autism were reviewed. Tasks involving simple material (pure tones) and/or low-level operations (detection, labelling, chord disembedding, detection of pitch changes) show a superior level of performance and shorter ERP latencies. In contrast, tasks involving spectrally- and temporally-dynamic material and/or complex operations (evaluation, attention) are poorly performed by autistics, or generate inferior ERP activity or brain activation. The neural complexity required to perform auditory tasks may therefore explain the pattern of performance and activation of autistic individuals during auditory tasks.

  19. Impairment of Auditory-Motor Timing and Compensatory Reorganization after Ventral Premotor Cortex Stimulation

    PubMed Central

    Kornysheva, Katja; Schubotz, Ricarda I.

    2011-01-01

    Integrating auditory and motor information often requires precise timing, as in speech and music. In humans, the position of the ventral premotor cortex (PMv) in the dorsal auditory stream renders this area a node for auditory-motor integration. Yet, it remains unknown whether the PMv is critical for auditory-motor timing and which activity increases help to preserve task performance following its disruption. Sixteen healthy volunteers participated in two sessions with fMRI measured at baseline and following repetitive transcranial magnetic stimulation (rTMS) of either the left PMv or a control region. Subjects synchronized left or right finger tapping to sub-second beat rates of auditory rhythms in the experimental task, and produced self-paced tapping during spectrally matched auditory stimuli in the control task. Left PMv rTMS impaired auditory-motor synchronization accuracy in the first sub-block following stimulation (p<0.01, Bonferroni corrected), but spared motor timing and attention to task. Task-related activity increased in the homologue right PMv, but did not predict the behavioral effect of rTMS. In contrast, anterior midline cerebellum revealed the most pronounced activity increase in less impaired subjects. The present findings suggest a critical role of the left PMv in feed-forward computations enabling accurate auditory-motor timing, which can be compensated by activity modulations in the cerebellum, but not in the homologue region contralateral to stimulation. PMID:21738657

  20. Visual and auditory perception in preschool children at risk for dyslexia.

    PubMed

    Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina

    2014-11-01

    Recently, there has been renewed interest in the perceptive problems of dyslexics. A polemic research issue in this area has been the nature of the perception deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out in adult and child literates; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared preschool children with and without risk for dyslexia in auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that the visual as well as the auditory perception of children at risk for dyslexia is impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than that of children without risk for dyslexia in the temporal tasks. There were no differences between groups in the auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptive processing affected both linguistic and nonlinguistic stimuli. Our conclusion is that children at risk for dyslexia show auditory and visual perceptive deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems, and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptive deficits are not the consequence of failing to learn to read; thus, these findings support the theory of a temporal processing deficit. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Application of auditory signals to the operation of an agricultural vehicle: results of pilot testing.

    PubMed

    Karimi, D; Mondor, T A; Mann, D D

    2008-01-01

    The operation of agricultural vehicles is a multitask activity that requires proper distribution of attentional resources. Human factors theories suggest that proper utilization of the operator's sensory capacities under such conditions can improve the operator's performance and reduce the operator's workload. Using a tractor driving simulator, this study investigated whether auditory cues can be used to improve performance of the operator of an agricultural vehicle. Steering of a vehicle was simulated in visual mode (where driving error was shown to the subject using a lightbar) and in auditory mode (where a pair of speakers were used to convey the driving error direction and/or magnitude). A secondary task was also introduced in order to simulate the monitoring of an attached machine. This task included monitoring of two identical displays, which were placed behind the simulator, and responding to them, when needed, using a joystick. This task was also implemented in auditory mode (in which a beep signaled the subject to push the proper button when a response was needed) and in visual mode (in which there was no beep and visual monitoring of the displays was necessary). Two levels of difficulty of the monitoring task were used. Deviation of the simulated vehicle from a desired straight line was used as the measure of performance in the steering task, and reaction time to the displays was used as the measure of performance in the monitoring task. Results of the experiments showed that steering performance was significantly better when steering was a visual task (driving errors were 40% to 60% of the driving errors in auditory mode), although subjective evaluations showed that auditory steering could be easier, depending on the implementation. Performance in the monitoring task was significantly better for auditory implementation (reaction time was approximately 6 times shorter), and this result was strongly supported by subjective ratings. The majority of the subjects preferred the combination of visual mode for the steering task and auditory mode for the monitoring task.

  2. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    PubMed

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to attentional effects common to all three tasks within each modality, or to interactions between the processing of task-relevant features and of varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might reflect suppression of the processing of irrelevant speech that would presumably distract from the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. Thalamic and parietal brain morphology predicts auditory category learning.

    PubMed

    Scharinger, Mathias; Henry, Molly J; Erb, Julia; Meyer, Lars; Obleser, Jonas

    2014-01-01

    Auditory categorization is a vital skill involving the attribution of meaning to acoustic events, engaging domain-specific (i.e., auditory) as well as domain-general (e.g., executive) brain networks. A listener's ability to categorize novel acoustic stimuli should therefore depend on both, with the domain-general network being particularly relevant for adaptively changing listening strategies and directing attention to relevant acoustic cues. Here we assessed adaptive listening behavior, using complex acoustic stimuli with an initially salient (but later degraded) spectral cue and a secondary, duration cue that remained nondegraded. We employed voxel-based morphometry (VBM) to identify cortical and subcortical brain structures whose individual neuroanatomy predicted task performance and the ability to optimally switch to making use of temporal cues after spectral degradation. Behavioral listening strategies were assessed by logistic regression and revealed mainly strategy switches in the expected direction, with considerable individual differences. Gray-matter probability in the left inferior parietal lobule (BA 40) and left precentral gyrus was predictive of "optimal" strategy switch, while gray-matter probability in thalamic areas, comprising the medial geniculate body, co-varied with overall performance. Taken together, our findings suggest that successful auditory categorization relies on domain-specific neural circuits in the ascending auditory pathway, while adaptive listening behavior depends more on brain structure in parietal cortex, enabling the (re)direction of attention to salient stimulus properties. © 2013 Published by Elsevier Ltd.
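    The logistic-regression assessment of listening strategies mentioned above can be sketched as estimating per-listener cue weights: regress the binary categorization responses on the spectral and duration cues, and read cue reliance from the coefficient magnitudes (an "optimal" strategy switch shows up as the duration weight overtaking the spectral one after degradation). The fitting procedure and variable names below are assumptions for illustration, not the study's code:

```python
import numpy as np

def cue_weights(spectral, duration, responses, n_iter=20000, lr=0.5):
    """Logistic regression of binary category responses on two
    standardized acoustic cues via gradient descent; returns
    [spectral_weight, duration_weight]. A larger magnitude means
    heavier reliance on that cue (illustrative sketch)."""
    X = np.column_stack([spectral, duration]).astype(float)
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    y = np.asarray(responses, dtype=float)
    w, b = np.zeros(2), 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad = X.T @ (p - y) / len(y)       # cross-entropy gradient
        w -= lr * grad
        b -= lr * (p - y).mean()
    return w
```

Comparing the two weights before and after spectral degradation gives a simple per-subject index of whether the listener redirected attention to the still-reliable duration cue.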

  4. Speech Rate Normalization and Phonemic Boundary Perception in Cochlear-Implant Users.

    PubMed

    Jaekel, Brittany N; Newman, Rochelle S; Goupell, Matthew J

    2017-05-24

    Normal-hearing (NH) listeners rate normalize, temporarily remapping phonemic category boundaries to account for a talker's speech rate. It is unknown if adults who use auditory prostheses called cochlear implants (CI) can rate normalize, as CIs transmit degraded speech signals to the auditory nerve. Ineffective adjustment to rate information could explain some of the variability in this population's speech perception outcomes. Phonemes with manipulated voice-onset-time (VOT) durations were embedded in sentences with different speech rates. Twenty-three CI and 29 NH participants performed a phoneme identification task. NH participants heard the same unprocessed stimuli as the CI participants or stimuli degraded by a sine vocoder, simulating aspects of CI processing. CI participants showed larger rate normalization effects (6.6 ms) than the NH participants (3.7 ms) and had shallower (less reliable) category boundary slopes. NH participants showed similarly shallow slopes when presented acoustically degraded vocoded signals, but an equal or smaller rate effect in response to reductions in available spectral and temporal information. CI participants can rate normalize, despite their degraded speech input, and show a larger rate effect compared to NH participants. CI participants may particularly rely on rate normalization to better maintain perceptual constancy of the speech signal.

  5. Visual Task Demands and the Auditory Mismatch Negativity: An Empirical Study and a Meta-Analysis

    PubMed Central

    Wiens, Stefan; Szychowska, Malina; Nilsson, Mats E.

    2016-01-01

    Because the auditory system is particularly useful in monitoring the environment, previous research has examined whether task-irrelevant, auditory distractors are processed even if subjects focus their attention on visual stimuli. This research suggests that attentionally demanding visual tasks decrease the auditory mismatch negativity (MMN) to simultaneously presented auditory distractors. Because a recent behavioral study found that high visual perceptual load decreased detection sensitivity of simultaneous tones, we used a similar task (n = 28) to determine if high visual perceptual load would reduce the auditory MMN. Results suggested that perceptual load did not decrease the MMN. At face value, these nonsignificant findings may suggest that effects of perceptual load on the MMN are smaller than those of other demanding visual tasks. If so, effect sizes should differ systematically between the present and previous studies. We conducted a selective meta-analysis of published studies in which the MMN was derived from the EEG, the visual task demands were continuous and varied between high and low within the same task, and the task-irrelevant tones were presented in a typical oddball paradigm simultaneously with the visual stimuli. Because the meta-analysis suggested that the present (null) findings did not differ systematically from previous findings, the available evidence was combined. Results of this meta-analysis confirmed that demanding visual tasks reduce the MMN to auditory distractors. However, because the meta-analysis was based on small studies and because of the risk for publication biases, future studies should be preregistered with large samples (n > 150) to provide confirmatory evidence for the results of the present meta-analysis. These future studies should also use control conditions that reduce confounding effects of neural adaptation, and use load manipulations that are defined independently from their effects on the MMN. PMID:26741815
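    Combining the present null result with earlier studies follows standard effect-size pooling. A DerSimonian-Laird random-effects sketch (the function and the example numbers in the test are illustrative, not the paper's data):

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling of per-study effect
    sizes (e.g., load-related MMN amplitude reductions). Returns the
    pooled mean and its standard error; tau2 is the estimated
    between-study variance."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)           # clipped at zero
    w_star = [1.0 / (v + tau2) for v in variances]
    mean = sum(wi * ei for wi, ei in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return mean, se
```

When study results are heterogeneous, the between-study variance inflates the pooled standard error, which is why small-study meta-analyses like the one above warrant the cautious interpretation the authors give.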

  6. Effects of Secondary Task Modality and Processing Code on Automation Trust and Utilization During Simulated Airline Luggage Screening

    NASA Technical Reports Server (NTRS)

    Phillips, Rachel; Madhavan, Poornima

    2010-01-01

    The purpose of this research was to examine the impact of environmental distractions on human trust and utilization of automation during the process of visual search. Participants performed a computer-simulated airline luggage screening task with the assistance of a 70% reliable automated decision aid (called DETECTOR) both with and without environmental distractions. The distraction was implemented as a secondary task in either a competing modality (visual) or non-competing modality (auditory). The secondary task processing code either competed with the luggage screening task (spatial code) or with the automation's textual directives (verbal code). We measured participants' system trust, perceived reliability of the system (when a target weapon was present and absent), compliance, reliance, and confidence when agreeing and disagreeing with the system under both distracted and undistracted conditions. Results revealed that system trust was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Perceived reliability of the system (when the target was present) was significantly higher when the secondary task was visual rather than auditory. Compliance with the aid increased in all conditions except for the auditory-verbal condition, where it decreased. Similar to the pattern for trust, reliance on the automation was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Confidence when agreeing with the system decreased with the addition of any kind of distraction; however, confidence when disagreeing increased with the addition of an auditory secondary task but decreased with the addition of a visual task. A model was developed to represent the research findings and demonstrate the relationship between secondary task modality, processing code, and automation use. Results suggest that the nature of environmental distractions influences interaction with automation via significant effects on trust and system utilization. These findings have implications for both automation design and operator training.

  7. Auditory Perceptual Abilities Are Associated with Specific Auditory Experience

    PubMed Central

    Zaltz, Yael; Globerson, Eitan; Amir, Noam

    2017-01-01

    The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience, by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks in which musicians had proven superior. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups emerged in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels), and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels), were tested in a formant discrimination task. This is the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant discrimination, demonstrating highly specific effects for auditory linguistic experience as well. Overall, results suggest that auditory superiority is associated with the specific auditory exposure. PMID:29238318

  8. Executive function deficits in team sport athletes with a history of concussion revealed by a visual-auditory dual task paradigm.

    PubMed

    Tapper, Anthony; Gonzalez, Dave; Roy, Eric; Niechwiej-Szwedo, Ewa

    2017-02-01

    The purpose of this study was to examine executive functions in team sport athletes with and without a history of concussion. Executive functions comprise many cognitive processes including working memory, attention and multi-tasking. Past research has shown that concussions cause difficulties in vestibular-visual and vestibular-auditory dual-tasking; however, visual-auditory tasks have rarely been examined. Twenty-nine intercollegiate varsity ice hockey athletes (age = 19.13, SD = 1.56; 15 females) performed an experimental dual-task paradigm that required simultaneously processing visual and auditory information. A brief interview, event description and self-report questionnaires were used to assign participants to each group (concussion, no-concussion). Eighteen athletes had a history of concussion and 11 had no concussion history. The two tests involved visuospatial working memory (i.e., Corsi block test) and auditory tone discrimination. Participants completed both tasks individually, then simultaneously. Two outcome variables were measured, Corsi block memory span and auditory tone discrimination accuracy. No differences were shown when each task was performed alone; however, athletes with a history of concussion had significantly worse performance on the tone discrimination task in the dual-task condition. In conclusion, long-term deficits in executive functions were associated with a prior history of concussion when cognitive resources were stressed. Evaluations of executive functions and divided attention appear to be helpful in discriminating participants with and without a history of concussion.

  9. Effect of attentional load on audiovisual speech perception: evidence from ERPs.

    PubMed

    Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E; Soto-Faraco, Salvador; Tiippana, Kaisa

    2014-01-01

    Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  10. Exploring the role of task performance and learning style on prefrontal hemodynamics during a working memory task.

    PubMed

    Anderson, Afrouz A; Parsa, Kian; Geiger, Sydney; Zaragoza, Rachel; Kermanian, Riley; Miguel, Helga; Dashtestani, Hadis; Chowdhry, Fatima A; Smith, Elizabeth; Aram, Siamak; Gandjbakhche, Amir H

    2018-01-01

    Existing literature outlines the quality and location of activation in the prefrontal cortex (PFC) during working memory (WM) tasks. However, the effects of individual differences on the underlying neural process of WM tasks are still unclear. In this functional near infrared spectroscopy study, we administered a visual and auditory n-back task to examine activation in the PFC while considering the influences of task performance and preferred learning strategy (VARK score). While controlling for age, results indicated that high performance (HP) subjects (accuracy > 90%) showed task-dependent lower activation compared to normal performance (NP) subjects in the PFC region. Specifically, HP groups showed lower activation in the left dorsolateral PFC (DLPFC) region during performance of the auditory task, whereas during the visual task they showed lower activation in the right DLPFC. After accounting for learning style, we found a correlation between visual and aural VARK score and level of activation in the PFC. Subjects with higher visual VARK scores displayed lower activation during the auditory task in the left DLPFC, while those with higher visual scores exhibited higher activation during the visual task in bilateral DLPFC. During performance of the auditory task, HP subjects had higher visual VARK scores compared to NP subjects, indicating an effect of learning style on task performance and activation. The results of this study show that learning style and task performance can influence PFC activation, with applications toward neurological implications of learning style and populations with deficits in auditory or visual processing.

  12. Auditory Distraction in Semantic Memory: A Process-Based Approach

    ERIC Educational Resources Information Center

    Marsh, John E.; Hughes, Robert W.; Jones, Dylan M.

    2008-01-01

    Five experiments demonstrate auditory-semantic distraction in tests of memory for semantic category-exemplars. The effects of irrelevant sound on category-exemplar recall are shown to be functionally distinct from those found in the context of serial short-term memory by showing sensitivity to: The lexical-semantic, rather than acoustic,…

  13. Common Sense in Choice: The Effect of Sensory Modality on Neural Value Representations.

    PubMed

    Shuster, Anastasia; Levy, Dino J

    2018-01-01

    Although it is well established that the ventromedial prefrontal cortex (vmPFC) represents value using a common currency across categories of rewards, it is unknown whether the vmPFC represents value irrespective of the sensory modality in which alternatives are presented. In the current study, male and female human subjects completed a decision-making task while their neural activity was recorded using functional magnetic resonance imaging. On each trial, subjects chose between a safe alternative and a lottery, which was presented visually or aurally. A univariate conjunction analysis revealed that the anterior portion of the vmPFC tracks subjective value (SV) irrespective of the sensory modality. Using a novel cross-modality multivariate classifier, we were able to decode auditory value based on visual trials and vice versa. In addition, we found that the visual and auditory sensory cortices, which were identified using functional localizers, are also sensitive to the value of stimuli, albeit in a modality-specific manner. Whereas both primary and higher-order auditory cortices represented auditory SV (aSV), only a higher-order visual area represented visual SV (vSV). These findings expand our understanding of the common currency network of the brain and shed new light on the interplay between sensory and value information processing.

  15. The 'F-complex' and MMN tap different aspects of deviance.

    PubMed

    Laufer, Ilan; Pratt, Hillel

    2005-02-01

To compare the 'F(fusion)-complex' with the Mismatch negativity (MMN), both components associated with automatic detection of changes in the acoustic stimulus flow. Ten right-handed adult native Hebrew speakers discriminated vowel-consonant-vowel (V-C-V) sequences /ada/ (deviant) and /aga/ (standard) in an active auditory 'Oddball' task, and the brain potentials associated with performance of the task were recorded from 21 electrodes. Stimuli were generated by fusing the acoustic elements of the V-C-V sequences as follows: the base was always presented in front of the subject, and formant transitions were presented to the front, left or right in a virtual reality room. An illusion of a lateralized echo (duplex sensation) accompanied base fusion with the lateralized formant locations. Source current density estimates were derived for the net response to the fusion of the speech elements (F-complex) and for the MMN, using low-resolution electromagnetic tomography (LORETA). Statistical non-parametric mapping was used to estimate the current density differences between the brain sources of the F-complex and the MMN. Occipito-parietal regions and prefrontal regions were associated with the F-complex in all formant locations, whereas the vicinity of the supratemporal plane was bilaterally associated with the MMN, but only in the case of front-fusion (no duplex effect). MMN is sensitive to the novelty of the auditory object in relation to other stimuli in a sequence, whereas the F-complex is sensitive to the acoustic features of the auditory object and reflects a process of matching them with target categories. The F-complex and MMN reflect different aspects of auditory processing in a stimulus-rich and changing environment: content analysis of the stimulus and novelty detection, respectively.

  16. Effects of speech intelligibility level on concurrent visual task performance.

    PubMed

    Payne, D G; Peters, L J; Birkmire, D P; Bonto, M A; Anastasi, J S; Wenger, M J

    1994-09-01

    Four experiments were performed to determine if changes in the level of speech intelligibility in an auditory task have an impact on performance in concurrent visual tasks. The auditory task used in each experiment was a memory search task in which subjects memorized a set of words and then decided whether auditorily presented probe items were members of the memorized set. The visual tasks used were an unstable tracking task, a spatial decision-making task, a mathematical reasoning task, and a probability monitoring task. Results showed that performance on the unstable tracking and probability monitoring tasks was unaffected by the level of speech intelligibility on the auditory task, whereas accuracy in the spatial decision-making and mathematical processing tasks was significantly worse at low speech intelligibility levels. The findings are interpreted within the framework of multiple resource theory.

  17. Benefit and predictive factors for speech perception outcomes in pediatric bilateral cochlear implant recipients.

    PubMed

    Chang, Young-Soo; Hong, Sung Hwa; Kim, Eun Yeon; Choi, Ji Eun; Chung, Won-Ho; Cho, Yang-Sun; Moon, Il Joon

    2018-05-18

Despite recent advancement in the prediction of cochlear implant outcomes, the benefit of bilateral procedures compared to bimodal stimulation, and how to predict speech perception outcomes of sequential bilateral cochlear implantation from bimodal auditory performance in children, remain unclear. This investigation was performed: (1) to determine the benefit of sequential bilateral cochlear implantation and (2) to identify factors associated with its outcome. Observational and retrospective study. We retrospectively analyzed 29 patients who received a sequential cochlear implant following a bimodal-fitting condition. Audiological evaluations comprised the categories of auditory performance score, speech perception with monosyllabic and disyllabic words, and the Korean version of the Ling test. Evaluations were performed before the sequential cochlear implant, in the bimodal fitting condition (CI1+HA), and one year after the sequential cochlear implant, in the bilateral cochlear implant condition (CI1+CI2). The Good Performance Group (GP) was defined as follows: 90% or higher in the monosyllabic and disyllabic word tests in the auditory-only condition, or 20% or higher improvement of the scores with CI1+CI2. Age at first implantation, inter-implant interval, categories of auditory performance score, and various comorbidities were analyzed by logistic regression analysis. Compared to CI1+HA, CI1+CI2 provided significant benefit in categories of auditory performance, speech perception, and Korean version of Ling results. Preoperative categories of auditory performance score were the only factor associated with GP membership (odds ratio=4.38, 95% confidence interval=1.07-17.93, p=0.04).
Children with limited language development in the bimodal condition should be considered for sequential bilateral cochlear implantation, and the preoperative categories of auditory performance score could be used as a predictor of speech perception after sequential cochlear implantation. Copyright © 2018 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
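The logistic-regression result above is reported as an odds ratio with a 95% confidence interval. A minimal sketch of how such an interval is derived from a 2x2 outcome table using Woolf's log-odds method (the counts below are invented for illustration, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI for a 2x2 table.
    a, b = good / poor outcomes in the exposed group
    c, d = good / poor outcomes in the unexposed group"""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # Woolf's SE of ln(OR)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts, purely illustrative
or_, lower, upper = odds_ratio_ci(10, 4, 5, 10)
print(round(or_, 2), round(lower, 2), round(upper, 2))
```

The wide interval typical of small samples (as in the 1.07-17.93 interval reported above) falls directly out of the 1/a + 1/b + 1/c + 1/d variance term.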

  18. The effect of postsurgical pain on attentional processing in horses.

    PubMed

    Dodds, Louise; Knight, Laura; Allen, Kate; Murrell, Joanna

    2017-07-01

To investigate the effect of postsurgical pain on the performance of horses in a novel object and auditory startle task. Prospective clinical study. Twenty horses undergoing different types of surgery and 16 control horses that did not undergo surgery. The interaction of 36 horses with novel objects and a response to an auditory stimulus were measured at two time points: the day before surgery (T1) and the day after surgery (T2) for surgical horses (G1), and at a similar time interval for control horses (G2). Pain and sedation were measured using simple descriptive scales at the time the tests were carried out. Total time or score attributed to each of the behavioural categories was compared between groups (G1 and G2) for each test and between tests (T1 and T2) for each group. The median (range) time spent interacting with novel objects was reduced in G1 from 58 (6-367) seconds in T1 to 12 (0-495) seconds in T2 (p=0.0005). In G2 the change in interaction time between T1 and T2 was not statistically significant. Median (range) total auditory score was 7 (3-12) and 10 (1-12) in G1 and G2, respectively, at T1, decreasing to 6 (0-10) in G1 after surgery and 9.5 (1-12) in G2 (p=0.0003 and p=0.94, respectively). There was a difference in total auditory score between G1 and G2 at T2 (p=0.0169), with the score being lower in G1 than G2. Postsurgical pain negatively impacts attention towards novel objects and causes a decreased responsiveness to an auditory startle test. In horses, tasks demanding attention may be useful as a biomarker of pain. Copyright © 2017 Association of Veterinary Anaesthetists and American College of Veterinary Anesthesia and Analgesia. All rights reserved.

  19. Examining age-related differences in auditory attention control using a task-switching procedure.

    PubMed

    Lawo, Vera; Koch, Iring

    2014-03-01

    Using a novel task-switching variant of dichotic selective listening, we examined age-related differences in the ability to intentionally switch auditory attention between 2 speakers defined by their sex. In our task, young (M age = 23.2 years) and older adults (M age = 66.6 years) performed a numerical size categorization on spoken number words. The task-relevant speaker was indicated by a cue prior to auditory stimulus onset. The cuing interval was either short or long and varied randomly trial by trial. We found clear performance costs with instructed attention switches. These auditory attention switch costs decreased with prolonged cue-stimulus interval. Older adults were generally much slower (but not more error prone) than young adults, but switching-related effects did not differ across age groups. These data suggest that the ability to intentionally switch auditory attention in a selective listening task is not compromised in healthy aging. We discuss the role of modality-specific factors in age-related differences.
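The switching-related measures described above reduce to simple reaction-time contrasts: the switch cost is the mean RT on attention-switch trials minus the mean RT on repeat trials, computed separately per cue-stimulus interval. A minimal sketch with invented RTs, purely for illustration:

```python
# Each trial: (cue_stimulus_interval, trial_type, reaction_time_ms)
trials = [
    ("short", "repeat", 640), ("short", "switch", 735),
    ("short", "repeat", 660), ("short", "switch", 745),
    ("long",  "repeat", 630), ("long",  "switch", 680),
    ("long",  "repeat", 650), ("long",  "switch", 690),
]

def mean_rt(interval, trial_type):
    """Mean reaction time for one cell of the design."""
    rts = [rt for ci, tt, rt in trials if ci == interval and tt == trial_type]
    return sum(rts) / len(rts)

# Switch cost = mean switch RT minus mean repeat RT, per cuing interval
for interval in ("short", "long"):
    cost = mean_rt(interval, "switch") - mean_rt(interval, "repeat")
    print(interval, cost)
```

A smaller cost at the long interval, as in the toy numbers here, is the pattern the study reports: switch costs decrease when the cue-stimulus interval gives more preparation time.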

  20. The effect of auditory memory load on intensity resolution in individuals with Parkinson's disease

    NASA Astrophysics Data System (ADS)

    Richardson, Kelly C.

Purpose: The purpose of the current study was to investigate the effect of auditory memory load on intensity resolution in individuals with Parkinson's disease (PD) as compared to two groups of listeners without PD. Methods: Nineteen individuals with Parkinson's disease, ten healthy age- and hearing-matched adults, and ten healthy young adults were studied. All listeners participated in two intensity discrimination tasks differing in auditory memory load: a lower-memory-load 4IAX task and a higher-memory-load ABX task. Intensity discrimination performance was assessed using a bias-free measure of signal detectability known as d' (d-prime). Listeners further participated in a continuous loudness scaling task in which they were instructed to rate the loudness level of each signal intensity using a computerized 150-mm visual analogue scale. Results: Group discrimination functions indicated significantly lower intensity discrimination sensitivity (d') across tasks for the individuals with PD, as compared to the older and younger controls. No significant effect of aging on intensity discrimination was observed for either task. All three listener groups demonstrated significantly lower intensity discrimination sensitivity in the higher-memory-load ABX task than in the lower-memory-load 4IAX task. Furthermore, a significant effect of aging was identified for the loudness scaling condition. The younger controls rated most stimuli along the continuum as significantly louder than did the older controls and the individuals with PD. Conclusions: The individuals with PD showed evidence of impaired auditory perception of intensity information, as compared to the older and younger controls. The significant effect of aging on loudness perception may indicate peripheral and/or central auditory involvement.
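The d' (d-prime) index mentioned above is the standard signal-detection measure of sensitivity: the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch using the stock formula (the example rates are invented, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: d' = Z(H) - Z(FA),
    where Z is the inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical hit and false-alarm rates, for illustration only
print(round(d_prime(0.85, 0.20), 2))
```

Because d' subtracts out the false-alarm rate, it separates true discriminability from response bias, which is why it is described above as "bias-free".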

  1. Development of visual category selectivity in ventral visual cortex does not require visual experience

    PubMed Central

    van den Hurk, Job; Van Baelen, Marc; Op de Beeck, Hans P.

    2017-01-01

To what extent does functional brain organization rely on sensory input? Here, we show that for the penultimate visual-processing region, ventral-temporal cortex (VTC), visual experience is not the origin of its fundamental organizational property, category selectivity. In the fMRI study reported here, we presented 14 congenitally blind participants with face-, body-, scene-, and object-related natural sounds and presented 20 healthy controls with both auditory and visual stimuli from these categories. Using macroanatomical alignment, response mapping, and surface-based multivoxel pattern analysis, we demonstrated that VTC in blind individuals shows robust discriminatory responses elicited by the four categories and that these patterns of activity in blind subjects could successfully predict the visual categories in sighted controls. These findings were confirmed in a subset of blind participants born without eyes and thus deprived of all light perception since conception. The sounds also could be decoded in primary visual and primary auditory cortex, but these regions did not sustain generalization across modalities. Surprisingly, although not as strong as visual responses, selectivity for auditory stimulation in visual cortex was stronger in blind individuals than in controls. The opposite was observed in primary auditory cortex. Overall, we demonstrated a striking similarity in the cortical response layout of VTC in blind individuals and sighted controls, demonstrating that the overall category-selective map in extrastriate cortex develops independently from visual experience. PMID:28507127

  2. The influence of an auditory-memory attention-demanding task on postural control in blind persons.

    PubMed

    Melzer, Itshak; Damry, Elad; Landau, Anat; Yagev, Ronit

    2011-05-01

    In order to evaluate the effect of an auditory-memory attention-demanding task on balance control, nine blind adults were compared to nine age-gender-matched sighted controls. This issue is particularly relevant for the blind population in which functional assessment of postural control has to be revealed through "real life" motor and cognitive function. The study aimed to explore whether an auditory-memory attention-demanding cognitive task would influence postural control in blind persons and compare this with blindfolded sighted persons. Subjects were instructed to minimize body sway during narrow base upright standing on a single force platform under two conditions: 1) standing still (single task); 2) as in 1) while performing an auditory-memory attention-demanding cognitive task (dual task). Subjects in both groups were required to stand blindfolded with their eyes closed. Center of Pressure displacement data were collected and analyzed using summary statistics and stabilogram-diffusion analysis. Blind and sighted subjects had similar postural sway in eyes closed condition. However, for dual compared to single task, sighted subjects show significant decrease in postural sway while blind subjects did not. The auditory-memory attention-demanding cognitive task had no interference effect on balance control on blind subjects. It seems that sighted individuals used auditory cues to compensate for momentary loss of vision, whereas blind subjects did not. This may suggest that blind and sighted people use different sensorimotor strategies to achieve stability. Copyright © 2010 Elsevier Ltd. All rights reserved.

  3. Using spoken words to guide open-ended category formation.

    PubMed

    Chauhan, Aneesh; Seabra Lopes, Luís

    2011-11-01

    Naming is a powerful cognitive tool that facilitates categorization by forming an association between words and their referents. There is evidence in child development literature that strong links exist between early word-learning and conceptual development. A growing view is also emerging that language is a cultural product created and acquired through social interactions. Inspired by these studies, this paper presents a novel learning architecture for category formation and vocabulary acquisition in robots through active interaction with humans. This architecture is open-ended and is capable of acquiring new categories and category names incrementally. The process can be compared to language grounding in children at single-word stage. The robot is embodied with visual and auditory sensors for world perception. A human instructor uses speech to teach the robot the names of the objects present in a visually shared environment. The robot uses its perceptual input to ground these spoken words and dynamically form/organize category descriptions in order to achieve better categorization. To evaluate the learning system at word-learning and category formation tasks, two experiments were conducted using a simple language game involving naming and corrective feedback actions from the human user. The obtained results are presented and discussed in detail.

  4. Auditory training improves auditory performance in cochlear implanted children.

    PubMed

    Roman, Stephane; Rochette, Françoise; Triglia, Jean-Michel; Schön, Daniele; Bigand, Emmanuel

    2016-07-01

While the positive benefits of pediatric cochlear implantation on language perception skills are now proven, the heterogeneity of outcomes remains high. Understanding this heterogeneity, and possible strategies to minimize it, is of utmost importance. Our scope here is to test the effects of an auditory training strategy, "Sound in Hands", which uses playful tasks grounded in the theoretical and empirical findings of cognitive science. Indeed, several basic auditory operations, such as auditory scene analysis (ASA), are not trained in the usual therapeutic interventions for deaf children. However, as they constitute a fundamental basis of auditory cognition, their development should yield a general benefit in auditory processing and in turn enhance speech perception. The purpose of the present study was to determine whether cochlear-implanted children could improve auditory performance on trained tasks and whether they could transfer this learning to a phonetic discrimination test. Nineteen prelingually deaf, unilaterally cochlear-implanted children without additional handicap (4- to 10-year-olds) were recruited. The four main auditory cognitive processes (identification, discrimination, ASA, and auditory memory) were stimulated and trained in the Experimental Group (EG) using Sound in Hands. The EG followed 20 weekly training sessions of 30 min; the untrained group served as the control group (CG). Two measures were taken for both groups: before training (T1) and after training (T2). The EG showed a significant improvement in the identification, discrimination, and auditory memory tasks. The improvement in the ASA task did not reach significance. The CG did not show any significant improvement in any of the tasks assessed. Most importantly, improvement was visible in the phonetic discrimination test for the EG only.
Moreover, younger children benefited more from the auditory training program in developing their phonetic abilities than older children did, supporting the idea that rehabilitative care is most efficient when it takes place early in childhood. These results are important for pinpointing the auditory deficits in CI children and for gaining a better understanding of the links between basic auditory skills and speech perception, which will in turn allow more efficient rehabilitation programs. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. On the influence of typicality and age of acquisition on semantic processing: Diverging evidence from behavioural and ERP responses.

    PubMed

    Räling, Romy; Holzgrefe-Lang, Julia; Schröder, Astrid; Wartenburger, Isabell

    2015-08-01

Various behavioural studies show that the semantic typicality (TYP) and age of acquisition (AOA) of a word influence processing time and accuracy during the performance of lexical-semantic tasks. This study examines the influence of TYP and AOA on semantic processing at the behavioural (response times and accuracy data) and electrophysiological levels using an auditory category-member-verification task. The reaction time data reveal independent TYP and AOA effects, while the accuracy data and the event-related potentials predominantly show effects of TYP. The present study thus confirms previous findings and extends evidence found in the visual modality to the auditory modality; a modality-independent influence on semantic word processing is manifested. However, with regard to the influence of AOA, the diverging results raise questions about the origin of AOA effects as well as about the interpretation of offline and online data. Hence, the results are discussed against the background of recent theories on N400 correlates in semantic processing. In addition, an argument is made in favour of a complementary use of research techniques. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Effect of attentional load on audiovisual speech perception: evidence from ERPs

    PubMed Central

    Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E.; Soto-Faraco, Salvador; Tiippana, Kaisa

    2014-01-01

    Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech. PMID:25076922

  7. Supporting interruption management and multimodal interface design: three meta-analyses of task performance as a function of interrupting task modality.

    PubMed

    Lu, Sara A; Wickens, Christopher D; Prinet, Julie C; Hutchins, Shaun D; Sarter, Nadine; Sebok, Angelia

    2013-08-01

The aim of this study was to integrate empirical data showing the effects of interrupting task modality on the performance of an ongoing visual-manual task and the interrupting task itself. The goal is to support interruption management and the design of multimodal interfaces. Multimodal interfaces have been proposed as a promising means to support interruption management. To ensure the effectiveness of this approach, their design needs to be based on an analysis of empirical data concerning the effectiveness of individual and redundant channels of information presentation. Three meta-analyses were conducted to contrast performance on an ongoing visual task and interrupting tasks as a function of interrupting task modality (auditory vs. tactile, auditory vs. visual, and single modality vs. redundant auditory-visual). In total, 68 studies were included and six moderator variables were considered. The main findings from the meta-analyses are that response times are faster for tactile interrupting tasks in case of low-urgency messages. Accuracy is higher with tactile interrupting tasks for low-complexity signals but higher with auditory interrupting tasks for high-complexity signals. Redundant auditory-visual combinations are preferable for communication tasks during high workload and with a small visual angle of separation. The three meta-analyses contribute to the knowledge base in multimodal information processing and design. They highlight the importance of moderator variables in predicting the effects of interruption task modality on ongoing and interrupting task performance. The findings from this research will help inform the design of multimodal interfaces in data-rich, event-driven domains.
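Meta-analyses like those above typically pool study-level effect sizes with inverse-variance weights, so that more precise studies count for more. A minimal fixed-effect sketch (the effect sizes and variances below are invented for illustration):

```python
import math

def fixed_effect_pool(effects, variances):
    """Inverse-variance weighted mean effect size and its standard error.
    effects: per-study standardized effect sizes
    variances: per-study sampling variances of those effects"""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # SE of the pooled estimate
    return pooled, se

# Hypothetical standardized effects from three studies
effects = [0.40, 0.25, 0.55]
variances = [0.04, 0.02, 0.08]
pooled, se = fixed_effect_pool(effects, variances)
print(round(pooled, 3), round(se, 3))
```

Moderator variables of the kind the study emphasizes would then be handled by pooling within moderator levels (or by meta-regression) rather than by a single overall mean.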

  8. Effect of a concurrent auditory task on visual search performance in a driving-related image-flicker task.

    PubMed

    Richard, Christian M; Wright, Richard D; Ee, Cheryl; Prime, Steven L; Shimizu, Yujiro; Vavrik, John

    2002-01-01

    The effect of a concurrent auditory task on visual search was investigated using an image-flicker technique. Participants were undergraduate university students with normal or corrected-to-normal vision who searched for changes in images of driving scenes that involved either driving-related (e.g., traffic light) or driving-unrelated (e.g., mailbox) scene elements. The results indicated that response times were significantly slower if the search was accompanied by a concurrent auditory task. In addition, slower overall responses to scenes involving driving-unrelated changes suggest that the underlying process affected by the concurrent auditory task is strategic in nature. These results were interpreted in terms of their implications for using a cellular telephone while driving. Actual or potential applications of this research include the development of safer in-vehicle communication devices.

  9. Coding of auditory temporal and pitch information by hippocampal individual cells and cell assemblies in the rat.

    PubMed

    Sakurai, Y

    2002-01-01

This study reports how hippocampal individual cells and cell assemblies cooperate in the neural coding of pitch and temporal information in memory processes for auditory stimuli. Each rat performed two tasks, one requiring discrimination of auditory pitch (high or low) and the other requiring discrimination of duration (long or short). Some CA1 and CA3 complex-spike neurons showed task-related differential activity between the high and low tones only in the pitch-discrimination task. However, without exception, neurons that showed task-related differential activity between the long and short tones in the duration-discrimination task were always task-related neurons in the pitch-discrimination task. These results suggest that temporal information (long or short), in contrast to pitch information (high or low), cannot be coded independently by specific neurons. The results also indicate that the two different behavioral tasks cannot be fully differentiated by the task-related single neurons alone and suggest a model of cell-assembly coding of the tasks. Cross-correlation analysis among the activities of simultaneously recorded multiple neurons supported the suggested cell-assembly model. Considering these results, this study concludes that dual coding by hippocampal single neurons and cell assemblies operates in the memory processing of pitch and temporal information of auditory stimuli. The single neurons encode both auditory pitches and their temporal lengths, and the cell assemblies encode the types of tasks (contexts or situations) in which the pitch and temporal information are processed.

  10. Attentional reorienting triggers spatial asymmetries in a search task with cross-modal spatial cueing

    PubMed Central

    Paladini, Rebecca E.; Diana, Lorenzo; Zito, Giuseppe A.; Nyffeler, Thomas; Wyss, Patric; Mosimann, Urs P.; Müri, René M.; Nef, Tobias

    2018-01-01

Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention. Specifically, facilitation has been observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained no auditory cues (i.e., a unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants’ accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an additional incongruent and a spatially non-informative auditory cue resulted in lateral asymmetries. Specifically, search times were increased for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition. However, participants’ performance in the congruent condition was modulated by their tone localisation accuracy. 
The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues, and occur under specific attentional conditions, i.e., when visual attention has to be reoriented towards the left hemifield. PMID:29293637

  11. The relationship of phonological ability, speech perception, and auditory perception in adults with dyslexia

    PubMed Central

    Law, Jeremy M.; Vandermosten, Maaike; Ghesquiere, Pol; Wouters, Jan

    2014-01-01

This study investigated whether auditory, speech perception, and phonological skills are tightly interrelated or contribute independently to reading. We assessed each of these three skills in 36 adults with a past diagnosis of dyslexia and 54 matched normal-reading adults. Phonological skills were tested by the typical threefold tasks, i.e., rapid automatic naming, verbal short-term memory, and phonological awareness. Dynamic auditory processing skills were assessed by means of frequency modulation (FM) and amplitude rise time (RT) tasks; an intensity discrimination (ID) task was included as a non-dynamic control task. Speech perception was assessed by means of sentences- and words-in-noise tasks. Group analyses revealed significant group differences in the auditory tasks (i.e., RT and ID) and in the phonological processing measures, yet no differences were found for speech perception. In addition, performance on RT discrimination correlated with reading, but this relation was mediated by phonological processing and not by speech-in-noise. Finally, inspection of the individual scores revealed that the dyslexic readers showed an increased proportion of deviant subjects on the slow-dynamic auditory and phonological tasks, yet individual dyslexic readers do not display a uniform pattern of deficiencies across the processing skills. Although our results support phonological and slow-rate dynamic auditory deficits that relate to literacy, they suggest that at the individual level, problems in reading and writing cannot be explained by the cascading auditory theory. Instead, dyslexic adults seem to vary considerably in the extent to which each of the auditory and phonological factors is expressed and interacts with environmental and higher-order cognitive influences. PMID:25071512

  12. Basic Auditory Processing Skills and Phonological Awareness in Low-IQ Readers and Typically Developing Controls

    ERIC Educational Resources Information Center

    Kuppen, Sarah; Huss, Martina; Fosker, Tim; Fegan, Natasha; Goswami, Usha

    2011-01-01

We explore the relationships between basic auditory processing, phonological awareness, vocabulary, and word reading in a sample of 95 children: 55 typically developing children and 40 children with low IQ. All children received nonspeech auditory processing tasks, phonological processing and literacy measures, and a receptive vocabulary task.…

  13. The selective processing of emotional visual stimuli while detecting auditory targets: an ERP analysis.

    PubMed

    Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O

    2008-09-16

    Event-related potential studies revealed an early posterior negativity (EPN) for emotional compared to neutral pictures. Exploring the emotion-attention relationship, a previous study observed that a primary visual discrimination task interfered with the emotional modulation of the EPN component. To specify the locus of interference, the present study assessed the fate of selective visual emotion processing while attention is directed towards the auditory modality. While simply viewing a rapid and continuous stream of pleasant, neutral, and unpleasant pictures in one experimental condition, processing demands of a concurrent auditory target discrimination task were systematically varied in three further experimental conditions. Participants successfully performed the auditory task as revealed by behavioral performance and selected event-related potential components. Replicating previous results, emotional pictures were associated with a larger posterior negativity compared to neutral pictures. Of main interest, increasing demands of the auditory task did not modulate the selective processing of emotional visual stimuli. With regard to the locus of interference, selective emotion processing as indexed by the EPN does not seem to reflect shared processing resources of visual and auditory modality.

  14. Adding sound to theory of mind: Comparing children's development of mental-state understanding in the auditory and visual realms.

    PubMed

    Hasni, Anita A; Adamson, Lauren B; Williamson, Rebecca A; Robins, Diana L

    2017-12-01

    Theory of mind (ToM) gradually develops during the preschool years. Measures of ToM usually target visual experience, but auditory experiences also provide valuable social information. Given differences between the visual and auditory modalities (e.g., sights persist, sounds fade) and the important role environmental input plays in social-cognitive development, we asked whether modality might influence the progression of ToM development. The current study expands Wellman and Liu's ToM scale (2004) by testing 66 preschoolers using five standard visual ToM tasks and five newly crafted auditory ToM tasks. Age and gender effects were found, with 4- and 5-year-olds demonstrating greater ToM abilities than 3-year-olds and girls passing more tasks than boys; there was no significant effect of modality. Both visual and auditory tasks formed a scalable set. These results indicate that there is considerable consistency in when children are able to use visual and auditory inputs to reason about various aspects of others' mental states. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. The Effect of Noise on the Relationship Between Auditory Working Memory and Comprehension in School-Age Children.

    PubMed

    Sullivan, Jessica R; Osman, Homira; Schafer, Erin C

    2015-06-01

The objectives of the current study were to examine the effect of noise (-5 dB SNR) on auditory comprehension and to examine its relationship with working memory. It was hypothesized that noise has a negative impact on information processing, auditory working memory, and comprehension. Children with normal hearing between the ages of 8 and 10 years were administered working memory and comprehension tasks in quiet and in noise. The comprehension measure comprised 5 domains: main idea, details, reasoning, vocabulary, and understanding messages. Performance on auditory working memory and comprehension tasks was significantly poorer in noise than in quiet. The reasoning, details, understanding, and vocabulary subtests were particularly affected in noise (p < .05). The relationship between auditory working memory and comprehension was stronger in noise than in quiet, suggesting an increased contribution of working memory. These data suggest that school-age children's auditory working memory and comprehension are negatively affected by noise. Performance on comprehension tasks in noise is strongly related to the demands placed on working memory, supporting the theory that degraded listening conditions draw resources away from the primary task.
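The -5 dB SNR condition above fixes the level of the masker relative to the speech. As an illustrative sketch only (not the study's actual materials), a masker can be scaled to hit any target SNR from the RMS levels of the two signals:

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the speech-to-noise ratio equals `snr_db`, then mix."""
    def rms(x):
        return np.sqrt(np.mean(x ** 2))
    # Required noise RMS for the target SNR: SNR_dB = 20*log10(rms_speech / rms_noise)
    target_noise_rms = rms(speech) / (10 ** (snr_db / 20))
    scaled_noise = noise * (target_noise_rms / rms(noise))
    return speech + scaled_noise

# Example: a 440 Hz tone stands in for speech; the masker ends up 5 dB
# more intense than the signal (i.e., -5 dB SNR, as in the study)
rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
mixed = mix_at_snr(speech, rng.standard_normal(16000), -5.0)
```

The same scaling works for speech-shaped or multi-talker maskers; only the `noise` array changes.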

  16. Concentration: The Neural Underpinnings of How Cognitive Load Shields Against Distraction.

    PubMed

    Sörqvist, Patrik; Dahlström, Örjan; Karlsson, Thomas; Rönnberg, Jerker

    2016-01-01

Whether cognitive load (and other aspects of task difficulty) increases or decreases distractibility is the subject of much debate in contemporary psychology. One camp argues that cognitive load usurps executive resources, which could otherwise be used for attentional control, and that cognitive load therefore increases distraction. The other camp argues that cognitive load demands high levels of concentration (focal-task engagement), which suppresses peripheral processing and therefore decreases distraction. In this article, we employed a functional magnetic resonance imaging (fMRI) protocol to explore whether higher cognitive load in a visually presented task suppresses task-irrelevant auditory processing in cortical and subcortical areas. The results show that selectively attending to an auditory stimulus facilitates its neural processing in the auditory cortex, and that switching the locus of attention to the visual modality decreases the neural response in the auditory cortex. When the cognitive load of the task presented in the visual modality increases, the neural response to the auditory stimulus is further suppressed, along with increased activity in networks related to effortful attention. Taken together, the results suggest that higher cognitive load decreases peripheral processing of task-irrelevant information, which decreases distractibility, as a side effect of the increased activity in a focused-attention network.

  18. Children Use Object-Level Category Knowledge to Detect Changes in Complex Auditory Scenes

    ERIC Educational Resources Information Center

    Vanden Bosch der Nederlanden, Christina M.; Snyder, Joel S.; Hannon, Erin E.

    2016-01-01

    Children interact with and learn about all types of sound sources, including dogs, bells, trains, and human beings. Although it is clear that knowledge of semantic categories for everyday sights and sounds develops during childhood, there are very few studies examining how children use this knowledge to make sense of auditory scenes. We used a…

  19. Psychoacoustical Measures in Individuals with Congenital Visual Impairment.

    PubMed

    Kumar, Kaushlendra; Thomas, Teenu; Bhat, Jayashree S; Ranjan, Rajesh

    2017-12-01

In individuals with congenital visual impairment, one modality (vision) is impaired, and this impairment is compensated for by the other sensory modalities. There is evidence that visually impaired individuals perform better than normally sighted individuals on various auditory tasks, such as localization, auditory memory, verbal memory, and auditory attention, as well as on other behavioural tasks. The current study aimed to compare temporal resolution, frequency resolution, and speech-perception-in-noise ability between individuals with congenital visual impairment and normally sighted individuals. These abilities were measured using MDT, GDT, DDT, SRDT, and SNR50. Twelve participants with congenital visual impairment, aged 18 to 40 years, were recruited, along with an equal number of normally sighted participants. All participants had normal hearing sensitivity and normal middle-ear functioning. Individuals with visual impairment showed superior thresholds on MDT, SRDT, and SNR50 as compared with normally sighted individuals. This may be due to the complexity of the tasks; MDT, SRDT, and SNR50 are more complex tasks than GDT and DDT. Individuals with visual impairment thus showed superior auditory processing and speech perception on the more complex auditory perceptual tasks.

  20. Speech Rate Normalization and Phonemic Boundary Perception in Cochlear-Implant Users

    PubMed Central

    Newman, Rochelle S.; Goupell, Matthew J.

    2017-01-01

Purpose: Normal-hearing (NH) listeners rate normalize, temporarily remapping phonemic category boundaries to account for a talker's speech rate. It is unknown whether adults who use auditory prostheses called cochlear implants (CIs) can rate normalize, as CIs transmit degraded speech signals to the auditory nerve. Ineffective adjustment to rate information could explain some of the variability in this population's speech perception outcomes. Method: Phonemes with manipulated voice-onset-time (VOT) durations were embedded in sentences with different speech rates. Twenty-three CI and 29 NH participants performed a phoneme identification task. NH participants heard the same unprocessed stimuli as the CI participants or stimuli degraded by a sine vocoder, simulating aspects of CI processing. Results: CI participants showed larger rate normalization effects (6.6 ms) than the NH participants (3.7 ms) and had shallower (less reliable) category boundary slopes. NH participants showed similarly shallow slopes when presented with acoustically degraded vocoded signals, but an equal or smaller rate effect in response to reductions in available spectral and temporal information. Conclusion: CI participants can rate normalize, despite their degraded speech input, and show a larger rate effect compared with NH participants. CI participants may particularly rely on rate normalization to better maintain perceptual constancy of the speech signal. PMID:28395319
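Category boundaries and boundary slopes of the kind reported here are typically estimated by fitting a psychometric function to identification responses. As a hedged illustration (the VOT continuum and response proportions below are invented, not the study's data), a logistic fit recovers both parameters:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(vot, boundary, slope):
    # Probability of a voiceless ("pa"-type) response as a function of VOT (ms)
    return 1.0 / (1.0 + np.exp(-slope * (vot - boundary)))

# Hypothetical identification data along a 7-step VOT continuum
vot_ms = np.array([0, 10, 20, 30, 40, 50, 60], dtype=float)
p_voiceless = np.array([0.02, 0.05, 0.20, 0.55, 0.85, 0.96, 0.99])

# boundary = 50% crossover point; slope = steepness (reliability) of the boundary
(boundary, slope), _ = curve_fit(logistic, vot_ms, p_voiceless, p0=[30.0, 0.2])
```

A rate-normalization effect would appear as a shift in `boundary` between slow- and fast-rate sentence contexts, while a "shallower" category boundary corresponds to a smaller fitted `slope`.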

  1. The Chronometry of Mental Ability: An Event-Related Potential Analysis of an Auditory Oddball Discrimination Task

    ERIC Educational Resources Information Center

    Beauchamp, Chris M.; Stelmack, Robert M.

    2006-01-01

    The relation between intelligence and speed of auditory discrimination was investigated during an auditory oddball task with backward masking. In target discrimination conditions that varied in the interval between the target and the masking stimuli and in the tonal frequency of the target and masking stimuli, higher ability participants (HA)…

  2. Task- and Talker-Specific Gains in Auditory Training

    ERIC Educational Resources Information Center

    Barcroft, Joe; Spehar, Brent; Tye-Murray, Nancy; Sommers, Mitchell

    2016-01-01

    Purpose: This investigation focused on generalization of outcomes for auditory training by examining the effects of task and/or talker overlap between training and at test. Method: Adults with hearing loss completed 12 hr of meaning-oriented auditory training and were placed in a group that trained on either multiple talkers or a single talker. A…

  3. Impact of Auditory Selective Attention on Verbal Short-Term Memory and Vocabulary Development

    ERIC Educational Resources Information Center

    Majerus, Steve; Heiligenstein, Lucie; Gautherot, Nathalie; Poncelet, Martine; Van der Linden, Martial

    2009-01-01

    This study investigated the role of auditory selective attention capacities as a possible mediator of the well-established association between verbal short-term memory (STM) and vocabulary development. A total of 47 6- and 7-year-olds were administered verbal immediate serial recall and auditory attention tasks. Both task types probed processing…

  4. Performance of Cerebral Palsied Children under Conditions of Reduced Auditory Input on Selected Intellectual, Cognitive and Perceptual Tasks.

    ERIC Educational Resources Information Center

    Fassler, Joan

    The study investigated the task performance of cerebral palsied children under conditions of reduced auditory input and under normal auditory conditions. A non-cerebral palsied group was studied in a similar manner. Results indicated that cerebral palsied children showed some positive change in performance, under conditions of reduced auditory…

  5. Auditory Confrontation Naming in Alzheimer’s Disease

    PubMed Central

    Brandt, Jason; Bakker, Arnold; Maroof, David Aaron

    2010-01-01

Naming is a fundamental aspect of language and is virtually always assessed with visual confrontation tests. Tests of the ability to name objects by their characteristic sounds would be particularly useful in the assessment of visually impaired patients, and may be particularly sensitive in Alzheimer’s disease (AD). We developed an Auditory Naming Task, requiring the identification of the source of environmental sounds (i.e., animal calls, musical instruments, vehicles) and multiple-choice recognition of those not identified. In two separate studies, mild-to-moderate AD patients performed more poorly than cognitively normal elderly on the Auditory Naming Task. This task was also more difficult than two versions of a comparable Visual Naming Task, and correlated more highly with Mini-Mental State Exam score. Internal consistency reliability was acceptable, although ROC analysis revealed auditory naming to be slightly less successful than visual confrontation naming in discriminating AD patients from normal subjects. Nonetheless, our Auditory Naming Task may prove useful in research and clinical practice, especially with visually-impaired patients. PMID:20981630

  6. Decoding Multiple Sound Categories in the Human Temporal Cortex Using High Resolution fMRI

    PubMed Central

    Zhang, Fengqing; Wang, Ji-Ping; Kim, Jieun; Parrish, Todd; Wong, Patrick C. M.

    2015-01-01

Perception of sound categories is an important aspect of auditory perception. The extent to which the brain’s representation of sound categories is encoded in specialized subregions or distributed across the auditory cortex remains unclear. Recent studies using multivariate pattern analysis (MVPA) of brain activations have provided important insights into how the brain decodes perceptual information. In the large existing literature on brain decoding using MVPA methods, relatively few studies have been conducted on multi-class categorization in the auditory domain. Here, we investigated the representation and processing of auditory categories within the human temporal cortex using high resolution fMRI and MVPA methods. More importantly, we considered decoding multiple sound categories simultaneously through multi-class support vector machine-recursive feature elimination (MSVM-RFE) as our MVPA tool. Results show that for all classifications the model MSVM-RFE was able to learn the functional relation between the multiple sound categories and the corresponding evoked spatial patterns and classify the unlabeled sound-evoked patterns significantly above chance. This indicates the feasibility of decoding multiple sound categories not only within but across subjects. However, the across-subject variation affects classification performance more than the within-subject variation, as the across-subject analysis has significantly lower classification accuracies. Sound category-selective brain maps were identified based on multi-class classification and revealed distributed patterns of brain activity in the superior temporal gyrus and the middle temporal gyrus. This is in accordance with previous studies, indicating that information in the spatially distributed patterns may reflect a more abstract perceptual level of representation of sound categories. Further, we show that the across-subject classification performance can be significantly improved by averaging the fMRI images over items, because the irrelevant variations between different items of the same sound category are reduced and in turn the proportion of signals relevant to sound categorization increases. PMID:25692885
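The MSVM-RFE approach pairs a multi-class linear SVM with recursive feature elimination, which iteratively discards the features (voxels) carrying the smallest weights. A minimal sketch of that idea with scikit-learn, run on synthetic data rather than fMRI patterns (the trial counts, voxel counts, and four-category setup are illustrative assumptions, not the study's dimensions):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Stand-in for sound-evoked activation patterns: 200 "trials" x 100 "voxels",
# 4 sound categories (purely synthetic placeholder data)
X, y = make_classification(n_samples=200, n_features=100, n_informative=10,
                           n_classes=4, random_state=0)

# Recursive feature elimination driven by the linear SVM's weights,
# then classification on the surviving features
clf = make_pipeline(
    RFE(LinearSVC(dual=False, max_iter=10000),
        n_features_to_select=10, step=0.1),
    LinearSVC(dual=False, max_iter=10000),
)

# Cross-validated accuracy; chance level for 4 balanced classes is 0.25
scores = cross_val_score(clf, X, y, cv=5)
```

Decoding "significantly above chance", as in the abstract, would then be assessed by comparing `scores` against the 0.25 chance level (e.g., with a permutation test).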

  8. Intrinsic, stimulus-driven and task-dependent connectivity in human auditory cortex.

    PubMed

    Häkkinen, Suvi; Rinne, Teemu

    2018-06-01

A hierarchical and modular organization is a central hypothesis in the current primate model of auditory cortex (AC) but lacks validation in humans. Here we investigated whether fMRI connectivity at rest and during active tasks is informative of the functional organization of human AC. Identical pitch-varying sounds were presented during a visual discrimination task (i.e. no directed auditory attention), a pitch discrimination task, and two versions of a pitch n-back memory task. Analysis based on fMRI connectivity at rest revealed a network structure consisting of six modules in the supratemporal plane (STP), temporal lobe, and inferior parietal lobule (IPL) in both hemispheres. In line with the primate model, in which higher-order regions have more long-range connections than primary regions, areas encircling the STP module showed the highest inter-modular connectivity. Multivariate pattern analysis indicated significant connectivity differences between the visual task and rest (driven by the presentation of sounds during the visual task), between auditory and visual tasks, and between pitch discrimination and pitch n-back tasks. Further analyses showed that these differences were particularly due to connectivity modulations between the STP and IPL modules. While the results are generally in line with the primate model, they highlight the important role of human IPL in the processing of both task-irrelevant and task-relevant auditory information. Importantly, the present study shows that fMRI connectivity at rest, during the presentation of sounds, and during active listening provides novel information about the functional organization of human AC.

  9. Auditory processing deficits are sometimes necessary and sometimes sufficient for language difficulties in children: Evidence from mild to moderate sensorineural hearing loss.

    PubMed

    Halliday, Lorna F; Tuomainen, Outi; Rosen, Stuart

    2017-09-01

There is a general consensus that many children and adults with dyslexia and/or specific language impairment display deficits in auditory processing. However, how these deficits are related to developmental disorders of language is uncertain, and at least four categories of model have been proposed: single distal cause models, risk factor models, association models, and consequence models. This study used children with mild to moderate sensorineural hearing loss (MMHL) to investigate the link between auditory processing deficits and language disorders. We examined the auditory processing and language skills of 46 8- to 16-year-old children with MMHL and 44 age-matched typically developing controls. Auditory processing abilities were assessed using child-friendly psychophysical techniques in order to obtain discrimination thresholds. Stimuli incorporated three different timescales (µs, ms, s) and three different levels of complexity (simple nonspeech tones, complex nonspeech sounds, speech sounds), and tasks required discrimination of frequency or amplitude cues. Language abilities were assessed using a battery of standardised assessments of phonological processing, reading, vocabulary, and grammar. We found evidence that three different auditory processing abilities showed different relationships with language: deficits in a general auditory processing component were necessary but not sufficient for language difficulties, consistent with a risk factor model; deficits in slow-rate amplitude modulation (envelope) detection were sufficient but not necessary for language difficulties, consistent with either a single distal cause or a consequence model; and deficits in the discrimination of a single speech contrast (/bɑ/ vs /dɑ/) were neither necessary nor sufficient for language difficulties, consistent with an association model. Our findings suggest that different auditory processing deficits may constitute distinct and independent routes to the development of language difficulties in children. Copyright © 2017 Elsevier B.V. All rights reserved.
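The abstract does not specify which adaptive procedure lies behind its "child-friendly psychophysical techniques", but discrimination thresholds of this kind are commonly estimated with an adaptive staircase. As an illustrative sketch only (the 2-down/1-up rule, step size, and simulated listener are assumptions, not the study's method), the track descends while the listener succeeds and converges near the level where performance breaks down:

```python
def two_down_one_up(start_level, step, trials, listener):
    """2-down/1-up staircase: converges near the 70.7%-correct point.

    `listener(level)` returns True for a correct response at that cue level."""
    level, correct_streak, reversals, direction = start_level, 0, [], 0
    for _ in range(trials):
        if listener(level):
            correct_streak += 1
            if correct_streak == 2:            # two correct in a row -> make it harder
                correct_streak = 0
                if direction == +1:
                    reversals.append(level)    # track turned downward: record reversal
                level, direction = level - step, -1
        else:
            correct_streak = 0                 # one error -> make it easier
            if direction == -1:
                reversals.append(level)        # track turned upward: record reversal
            level, direction = level + step, +1
    last = reversals[-6:]
    return sum(last) / len(last)               # threshold = mean of final reversals

# Hypothetical deterministic listener: correct whenever the cue exceeds 10 units
threshold = two_down_one_up(start_level=40, step=2, trials=60,
                            listener=lambda level: level > 10)
```

In practice the listener is a child responding to real stimuli (and is therefore probabilistic), but the reversal-averaging logic is the same.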

  10. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    PubMed

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  11. Stimulus Expectancy Modulates Inferior Frontal Gyrus and Premotor Cortex Activity in Auditory Perception

    ERIC Educational Resources Information Center

    Osnes, Berge; Hugdahl, Kenneth; Hjelmervik, Helene; Specht, Karsten

    2012-01-01

    In studies on auditory speech perception, participants are often asked to perform active tasks, e.g. decide whether the perceived sound is a speech sound or not. However, information about the stimulus, inherent in such tasks, may induce expectations that cause altered activations not only in the auditory cortex, but also in frontal areas such as…

  12. Auditory access, language access, and implicit sequence learning in deaf children.

    PubMed

    Hall, Matthew L; Eigsti, Inge-Marie; Bortfeld, Heather; Lillo-Martin, Diane

    2018-05-01

    Developmental psychology plays a central role in shaping evidence-based best practices for prelingually deaf children. The Auditory Scaffolding Hypothesis (Conway et al., 2009) asserts that a lack of auditory stimulation in deaf children leads to impoverished implicit sequence learning abilities, measured via an artificial grammar learning (AGL) task. However, prior research is confounded by a lack of both auditory and language input. The current study examines implicit learning in deaf children who were (Deaf native signers) or were not (oral cochlear implant users) exposed to language from birth, and in hearing children, using both AGL and Serial Reaction Time (SRT) tasks. Neither deaf nor hearing children across the three groups show evidence of implicit learning on the AGL task, but all three groups show robust implicit learning on the SRT task. These findings argue against the Auditory Scaffolding Hypothesis, and suggest that implicit sequence learning may be resilient to both auditory and language deprivation, within the tested limits. A video abstract of this article can be viewed at: https://youtu.be/EeqfQqlVHLI [Correction added on 07 August 2017, after first online publication: The video abstract link was added.]. © 2017 John Wiley & Sons Ltd.

  13. Meaning in the avian auditory cortex: Neural representation of communication calls

    PubMed Central

    Elie, Julie E; Theunissen, Frédéric E

    2014-01-01

Understanding how the brain extracts the behavioral meaning carried by specific vocalization types that can be emitted by various vocalizers and in different conditions is a central question in auditory research. This semantic categorization is a fundamental process required for acoustic communication and presupposes discriminative and invariance properties of the auditory system for conspecific vocalizations. Songbirds have been used extensively to study vocal learning, but the communicative function of all their vocalizations and their neural representation have yet to be examined. In our research, we first generated a library containing almost the entire zebra finch vocal repertoire and organized communication calls into 9 different categories based on their behavioral meaning. We then investigated the neural representations of these semantic categories in the primary and secondary auditory areas of 6 anesthetized zebra finches. To analyze how single units encode these call categories, we described neural responses in terms of their discrimination, selectivity and invariance properties. Quantitative measures for these neural properties were obtained using an optimal decoder based both on spike counts and spike patterns. Information theoretic metrics show that almost half of the single units encode semantic information. Neurons achieve higher discrimination of these semantic categories by being more selective and more invariant. These results demonstrate that computations necessary for semantic categorization of meaningful vocalizations are already present in the auditory cortex and emphasize the value of a neuro-ethological approach to understand vocal communication. PMID:25728175
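The information-theoretic claim above (that a unit "encodes semantic information") can be quantified as the mutual information between the true call category and the category assigned by a decoder. A generic sketch of that computation from a decoder's confusion matrix (the matrices below are invented for illustration, not the study's data):

```python
import numpy as np

def mutual_information_bits(confusion):
    """Mutual information (bits) between true and decoded categories,
    computed from a confusion matrix of counts (rows = true, cols = decoded)."""
    p = confusion / confusion.sum()             # joint distribution
    px = p.sum(axis=1, keepdims=True)           # true-category marginal
    py = p.sum(axis=0, keepdims=True)           # decoded-category marginal
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p * np.log2(p / (px * py))
    return float(np.nansum(terms))              # zero-probability cells contribute 0

# A perfect 2-category decoder carries exactly 1 bit
perfect = np.array([[50.0, 0.0], [0.0, 50.0]])
# A decoder at chance carries 0 bits
chance = np.array([[25.0, 25.0], [25.0, 25.0]])
```

A unit whose decoded confusion matrix yields mutual information reliably above zero (assessed, e.g., against shuffled labels) would count as encoding category information.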

  14. The role of Broca's area in speech perception: evidence from aphasia revisited.

    PubMed

    Hickok, Gregory; Costanzo, Maddalena; Capasso, Rita; Miceli, Gabriele

    2011-12-01

    Motor theories of speech perception have been re-vitalized as a consequence of the discovery of mirror neurons. Some authors have even promoted a strong version of the motor theory, arguing that the motor speech system is critical for perception. Part of the evidence that is cited in favor of this claim is the observation from the early 1980s that individuals with Broca's aphasia, and therefore inferred damage to Broca's area, can have deficits in speech sound discrimination. Here we re-examine this issue in 24 patients with radiologically confirmed lesions to Broca's area and various degrees of associated non-fluent speech production. Patients performed two same-different discrimination tasks involving pairs of CV syllables, one in which both CVs were presented auditorily, and the other in which one syllable was auditorily presented and the other visually presented as an orthographic form; word comprehension was also assessed using word-to-picture matching tasks in both auditory and visual forms. Discrimination performance on the all-auditory task was four standard deviations above chance, as measured using d', and was unrelated to the degree of non-fluency in the patients' speech production. Performance on the auditory-visual task, however, was worse than, and not correlated with, the all-auditory task. The auditory-visual task was related to the degree of speech non-fluency. Word comprehension was at ceiling for the auditory version (97% accuracy) and near ceiling for the orthographic version (90% accuracy). We conclude that the motor speech system is not necessary for speech perception as measured both by discrimination and comprehension paradigms, but may play a role in orthographic decoding or in auditory-visual matching of phonological forms. 2011 Elsevier Inc. All rights reserved.
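The d′ measure used above converts same-different performance into a bias-free sensitivity index from the hit rate ("different" responses to different pairs) and the false-alarm rate ("different" responses to same pairs). A generic sketch (the counts are invented, and the log-linear correction is one common convention, not necessarily the study's):

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    # Log-linear correction keeps the z-scores finite at 0% or 100% rates
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Invented counts: 45/50 hits and 5/50 false alarms give a d' well above 0,
# while "four standard deviations above chance" corresponds to d' = 4
sensitivity = d_prime(45, 5, 5, 45)
```

With equal hit and false-alarm rates the function returns 0 (chance), so patient performance can be expressed directly in standard-deviation units above chance, as in the abstract.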

  15. Emergent selectivity for task-relevant stimuli in higher-order auditory cortex

    PubMed Central

    Atiani, Serin; David, Stephen V.; Elgueda, Diego; Locastro, Michael; Radtke-Schuller, Susanne; Shamma, Shihab A.; Fritz, Jonathan B.

    2014-01-01

    A variety of attention-related effects have been demonstrated in primary auditory cortex (A1). However, an understanding of the functional role of higher auditory cortical areas in guiding attention to acoustic stimuli has been elusive. We recorded from neurons in two tonotopic cortical belt areas in the dorsal posterior ectosylvian gyrus (dPEG) of ferrets trained on a simple auditory discrimination task. Neurons in dPEG showed similar basic auditory tuning properties to A1, but during behavior we observed marked differences between these areas. In the belt areas, changes in neuronal firing rate and response dynamics greatly enhanced responses to target stimuli relative to distractors, allowing for greater attentional selection during active listening. Consistent with existing anatomical evidence, the pattern of sensory tuning and behavioral modulation in auditory belt cortex links the spectro-temporal representation of the whole acoustic scene in A1 to a more abstracted representation of task-relevant stimuli observed in frontal cortex. PMID:24742467

  16. Left ventral occipitotemporal activation during orthographic and semantic processing of auditory words.

    PubMed

    Ludersdorfer, Philipp; Wimmer, Heinz; Richlan, Fabio; Schurz, Matthias; Hutzler, Florian; Kronbichler, Martin

    2016-01-01

    The present fMRI study investigated the hypothesis that activation of the left ventral occipitotemporal cortex (vOT) in response to auditory words can be attributed to lexical orthographic rather than lexico-semantic processing. To this end, we presented auditory words in both an orthographic ("three or four letter word?") and a semantic ("living or nonliving?") task. In addition, an auditory control condition presented tones in a pitch evaluation task. The results showed that the left vOT exhibited higher activation for orthographic relative to semantic processing of auditory words, with a peak in the posterior part of vOT. Comparisons to the auditory control condition revealed that orthographic processing of auditory words elicited activation in a large vOT cluster. In contrast, activation for semantic processing was only weak and restricted to the middle part of vOT. We interpret our findings as evidence for orthographic processing in left vOT. In particular, we suggest that activation in left middle vOT can be attributed to accessing orthographic whole-word representations. While activation of such representations was experimentally ascertained in the orthographic task, it might have also occurred automatically in the semantic task. Activation in the more posterior vOT region, on the other hand, may reflect the generation of explicit images of word-specific letter sequences required by the orthographic but not the semantic task. In addition, based on cross-modal suppression, the finding of marked deactivations in response to the auditory tones is taken to reflect the visual nature of representations and processes in left vOT. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  17. Seeing tones and hearing rectangles - Attending to simultaneous auditory and visual events

    NASA Technical Reports Server (NTRS)

    Casper, Patricia A.; Kantowitz, Barry H.

    1985-01-01

    The allocation of attention in dual-task situations depends on both the overall and the momentary demands associated with both tasks. Subjects in an inclusive-OR reaction-time task responded to changes in simultaneous sequences of discrete auditory and visual stimuli. Performance on individual trials was affected by (1) the ratio of stimuli in the two tasks, (2) the response demands of the two tasks, and (3) patterns inherent in the demands of one task.

  18. Tai Chi practitioners have better postural control and selective attention in stepping down with and without a concurrent auditory response task.

    PubMed

    Lu, Xi; Siu, Ka-Chun; Fu, Siu N; Hui-Chan, Christina W Y; Tsang, William W N

    2013-08-01

    To compare the performance of older experienced Tai Chi practitioners and healthy controls in dual-task versus single-task paradigms, namely stepping down with and without performing an auditory response task, a cross-sectional study was conducted in the Center for East-meets-West in Rehabilitation Sciences at The Hong Kong Polytechnic University, Hong Kong. Twenty-eight Tai Chi practitioners (73.6 ± 4.2 years) and 30 healthy control subjects (72.4 ± 6.1 years) were recruited. Participants were asked to step down from a 19-cm-high platform and maintain a single-leg stance for 10 s with and without a concurrent cognitive task. The cognitive task was an auditory Stroop test in which the participants were required to respond to different tones of voice regardless of the words' meanings. Postural stability after stepping down under single- and dual-task paradigms, in terms of excursion of the subject's center of pressure (COP) and cognitive performance, was measured for comparison between the two groups. Our findings demonstrated significant between-group differences in more outcome measures during dual-task than single-task performance. The auditory Stroop test showed that Tai Chi practitioners achieved not only a significantly lower error rate in the single-task condition but also significantly faster reaction times in the dual-task condition, when compared with healthy controls similar in age and other relevant demographics. Similarly, the stepping-down task showed that Tai Chi practitioners displayed not only a significantly smaller COP sway area in the single-task condition but also a significantly shorter COP sway path in the dual-task condition than healthy controls. These results showed that Tai Chi practitioners achieved better postural stability after stepping down, as well as better performance on the auditory response task, than healthy controls. The improved performance, which was magnified under dual motor-cognitive task conditions, may point to the benefits of Tai Chi as a mind-and-body exercise.

  19. An eye movement analysis of the effect of interruption modality on primary task resumption.

    PubMed

    Ratwani, Raj; Trafton, J Gregory

    2010-06-01

    We examined the effect of interruption modality (visual or auditory) on primary task (visual) resumption to determine which modality was the least disruptive. Theories examining interruption modality have focused on specific periods of the interruption timeline. Preemption theory has focused on the switch from the primary task to the interrupting task. Multiple resource theory has focused on interrupting tasks that are to be performed concurrently with the primary task. Our focus was on examining how interruption modality influences task resumption. We leverage the memory-for-goals theory, which suggests that maintaining an associative link between environmental cues and the suspended primary task goal is important for resumption. Three interruption modality conditions were examined: auditory interruption with the primary task visible, auditory interruption with a blank screen occluding the primary task, and a visual interruption occluding the primary task. Reaction time and eye movement data were collected. The auditory condition with the primary task visible was the least disruptive. Eye movement data suggest that participants in this condition were actively maintaining an associative link between relevant environmental cues on the primary task interface and the suspended primary task goal during the interruption. These data suggest that maintaining cue association is the important factor for reducing the disruptiveness of interruptions, not interruption modality. Interruption-prone computing environments should be designed to allow the user access to relevant primary task cues during an interruption to minimize disruptiveness.

  20. Speech Auditory Alerts Promote Memory for Alerted Events in a Video-Simulated Self-Driving Car Ride.

    PubMed

    Nees, Michael A; Helbein, Benji; Porter, Anna

    2016-05-01

    Auditory displays could be essential to helping drivers maintain situation awareness in autonomous vehicles, but to date, few or no studies have examined the effectiveness of different types of auditory displays for this application scenario. Recent advances in the development of autonomous vehicles (i.e., self-driving cars) have suggested that widespread automation of driving may be tenable in the near future. Drivers may be required to monitor the status of automation programs and vehicle conditions as they engage in secondary leisure or work tasks (entertainment, communication, etc.) in autonomous vehicles. An experiment compared memory for alerted events (a component of Level 1 situation awareness) using speech alerts, auditory icons, and a visual control condition during a video-simulated self-driving car ride with a visual secondary task. The alerts gave information about the vehicle's operating status and the driving scenario. Speech alerts resulted in better memory for alerted events. Both auditory display types resulted in less perceived effort devoted toward the study tasks but also greater perceived annoyance with the alerts. Speech auditory displays promoted Level 1 situation awareness during a simulation of a ride in a self-driving vehicle under routine conditions, but annoyance remains a concern with auditory displays. Speech auditory displays showed promise as a means of increasing Level 1 situation awareness of routine scenarios during an autonomous vehicle ride with an unrelated secondary task. © 2016, Human Factors and Ergonomics Society.

  1. Effect of explicit dimension instruction on speech category learning

    PubMed Central

    Chandrasekaran, Bharath; Yi, Han-Gyol; Smayda, Kirsten E.; Maddox, W. Todd

    2015-01-01

    Learning non-native speech categories is often considered a challenging task in adulthood. This difficulty is driven by cross-language differences in weighting critical auditory dimensions that differentiate speech categories. For example, previous studies have shown that differentiating Mandarin tonal categories requires attending to dimensions related to pitch height and direction. Relative to native speakers of Mandarin, the pitch direction dimension is under-weighted by native English speakers. In the current study, we examined the effect of explicit instructions (dimension instruction) on native English speakers' Mandarin tone category learning within the framework of a dual-learning systems (DLS) model. This model predicts that successful speech category learning is initially mediated by an explicit, reflective learning system that frequently utilizes unidimensional rules, with an eventual switch to a more implicit, reflexive learning system that utilizes multidimensional rules. Participants were explicitly instructed to focus on and/or ignore the pitch height or pitch direction dimension, or were given no explicit prime. Our results show that instructions directing participants to focus on pitch direction, as well as instructions diverting attention away from pitch height, resulted in enhanced tone categorization. Computational modeling of participant responses suggested that instruction related to pitch direction led to faster and more frequent use of multidimensional reflexive strategies, and enhanced perceptual selectivity along the previously under-weighted pitch direction dimension. PMID:26542400

  2. Auditory perception modulated by word reading.

    PubMed

    Cao, Liyu; Klepp, Anne; Schnitzler, Alfons; Gross, Joachim; Biermann-Ruben, Katja

    2016-10-01

    Theories of embodied cognition positing that sensorimotor areas are indispensable during language comprehension are supported by neuroimaging and behavioural studies. Among others, the auditory system has been suggested to be important for understanding sound-related words (visually presented) and the motor system for action-related words. In this behavioural study, using a sound detection task embedded in a lexical decision task, we show that in participants with high lexical decision performance sound verbs improve auditory perception. The amount of modulation was correlated with lexical decision performance. Our study provides convergent behavioural evidence of auditory cortex involvement in word processing, supporting the view of embodied language comprehension concerning the auditory domain.

  3. Reaction time and accuracy in individuals with aphasia during auditory vigilance tasks.

    PubMed

    Laures, Jacqueline S

    2005-11-01

    Research indicates that attentional deficits exist in aphasic individuals. However, relatively little is known about auditory vigilance performance in individuals with aphasia. The current study explores reaction time (RT) and accuracy in 10 aphasic participants and 10 non-brain-damaged controls during linguistic and nonlinguistic auditory vigilance tasks. Findings indicate that the aphasic group was less accurate during both tasks than the control group, but was not slower in their accurate responses. Further examination of the data revealed variability in the aphasic participants' RTs, contributing to the lower accuracy scores.

  4. Reading strategies of Chinese students with severe to profound hearing loss.

    PubMed

    Cheung, Ka Yan; Leung, Man Tak; McPherson, Bradley

    2013-01-01

    The present study investigated the significance of auditory discrimination and the use of phonological and orthographic codes during the course of reading development in Chinese students who are deaf or hard of hearing (D/HH). In this study, the reading behaviors of D/HH students in 2 tasks (a task on auditory perception of onset rime and a synonym decision task) were compared with those of their chronological age-matched and reading level (RL)-matched controls. Cross-group comparison of the performances of participants in the task on auditory perception suggests that poor auditory discrimination ability may be a possible cause of reading problems for D/HH students. In addition, results of the synonym decision task reveal that D/HH students with poor reading ability demonstrate a significantly greater preference for orthographic rather than phonological information, when compared with the D/HH students with good reading ability and their RL-matched controls. Implications for future studies and educational planning are discussed.

  5. Category specific dysnomia after thalamic infarction: a case-control study.

    PubMed

    Levin, Netta; Ben-Hur, Tamir; Biran, Iftah; Wertman, Eli

    2005-01-01

    Category specific naming impairment has been described mainly after cortical lesions. It is thought to result from a lesion in a specific network, reflecting the organization of our semantic knowledge. The deficit usually involves multiple semantic categories whose profile of naming deficit generally obeys the animate/inanimate dichotomy. Thalamic lesions cause a general semantic naming deficit, and only rarely a category specific semantic deficit for very limited and highly specific categories. We performed a case-control study on a 56-year-old right-handed man who presented with language impairment following a left anterior thalamic infarction. His naming ability and semantic knowledge were evaluated in the visual, tactile and auditory modalities for stimuli from 11 different categories, and compared to those of five controls. In naming visual stimuli, the patient performed poorly (error rate > 50%) in four categories: vegetables, toys, animals and body parts (average 70.31 ± 15%). In each category there was a different dominating error type. He performed better in the other seven categories (tools, clothes, transportation, fruits, electric, furniture, kitchen utensils), averaging 14.28 ± 9% errors. Further analysis revealed a dichotomy between naming in animate and inanimate categories in the visual and tactile modalities but not in response to auditory stimuli. Thus, a unique category specific profile of response and naming errors to visual and tactile, but not auditory, stimuli was found after a left anterior thalamic infarction. This might reflect the role of the thalamus not only as a relay station but further as a central integrator of different stages of perceptual and semantic processing.

  6. Toward a dual-learning systems model of speech category learning

    PubMed Central

    Chandrasekaran, Bharath; Koslov, Seth R.; Maddox, W. T.

    2014-01-01

    More than two decades of work in vision posits the existence of dual-learning systems of category learning. The reflective system uses working memory to develop and test rules for classifying in an explicit fashion, while the reflexive system operates by implicitly associating perception with actions that lead to reinforcement. Dual-learning systems models hypothesize that in learning natural categories, learners initially use the reflective system and, with practice, transfer control to the reflexive system. The role of reflective and reflexive systems in auditory category learning and more specifically in speech category learning has not been systematically examined. In this article, we describe a neurobiologically constrained dual-learning systems theoretical framework that is currently being developed in speech category learning and review recent applications of this framework. Using behavioral and computational modeling approaches, we provide evidence that speech category learning is predominantly mediated by the reflexive learning system. In one application, we explore the effects of normal aging on non-speech and speech category learning. Prominently, we find a large age-related deficit in speech learning. The computational modeling suggests that older adults are less likely to transition from simple, reflective, unidimensional rules to more complex, reflexive, multi-dimensional rules. In a second application, we summarize a recent study examining auditory category learning in individuals with elevated depressive symptoms. We find a deficit in reflective-optimal and an enhancement in reflexive-optimal auditory category learning. Interestingly, individuals with elevated depressive symptoms also show an advantage in learning speech categories. We end with a brief summary and description of a number of future directions. PMID:25132827

  7. Can Spectro-Temporal Complexity Explain the Autistic Pattern of Performance on Auditory Tasks?

    ERIC Educational Resources Information Center

    Samson, Fabienne; Mottron, Laurent; Jemel, Boutheina; Belin, Pascal; Ciocca, Valter

    2006-01-01

    To test the hypothesis that the level of neural complexity explains the relative level of performance and brain activity in autistic individuals, available behavioural, ERP and imaging findings related to the perception of increasingly complex auditory material under various processing tasks in autism were reviewed. Tasks involving simple material…

  8. An Experimental Analysis of Memory Processing

    PubMed Central

    Wright, Anthony A

    2007-01-01

    Rhesus monkeys were trained and tested in visual and auditory list-memory tasks with sequences of four travel pictures or four natural/environmental sounds followed by single test items. Acquisitions of the visual list-memory task are presented. Visual recency (last item) memory diminished with retention delay, and primacy (first item) memory strengthened. Capuchin monkeys, pigeons, and humans showed similar visual-memory changes. Rhesus learned an auditory memory task and showed octave generalization for some lists of notes—tonal, but not atonal, musical passages. In contrast with visual list memory, auditory primacy memory diminished with delay and auditory recency memory strengthened. Manipulations of interitem intervals, list length, and item presentation frequency revealed proactive and retroactive inhibition among items of individual auditory lists. Repeating visual items from prior lists produced interference (on nonmatching tests) revealing how far back memory extended. The possibility of using the interference function to separate familiarity vs. recollective memory processing is discussed. PMID:18047230

  9. Cognitive Control of Auditory Distraction: Impact of Task Difficulty, Foreknowledge, and Working Memory Capacity Supports Duplex-Mechanism Account

    ERIC Educational Resources Information Center

    Hughes, Robert W.; Hurlstone, Mark J.; Marsh, John E.; Vachon, Francois; Jones, Dylan M.

    2013-01-01

    The influence of top-down cognitive control on 2 putatively distinct forms of distraction was investigated. Attentional capture by a task-irrelevant auditory deviation (e.g., a female-spoken token following a sequence of male-spoken tokens), as indexed by its disruption of a visually presented recall task, was abolished when focal-task engagement…

  10. Cue Integration in Categorical Tasks: Insights from Audio-Visual Speech Perception

    PubMed Central

    Bejjanki, Vikranth Rao; Clayards, Meghan; Knill, David C.; Aslin, Richard N.

    2011-01-01

    Previous cue integration studies have examined continuous perceptual dimensions (e.g., size) and have shown that human cue integration is well described by a normative model in which cues are weighted in proportion to their sensory reliability, as estimated from single-cue performance. However, this normative model may not be applicable to categorical perceptual dimensions (e.g., phonemes). In tasks defined over categorical perceptual dimensions, optimal cue weights should depend not only on the sensory variance affecting the perception of each cue but also on the environmental variance inherent in each task-relevant category. Here, we present a computational and experimental investigation of cue integration in a categorical audio-visual (articulatory) speech perception task. Our results show that human performance during audio-visual phonemic labeling is qualitatively consistent with the behavior of a Bayes-optimal observer. Specifically, we show that the participants in our task are sensitive, on a trial-by-trial basis, to the sensory uncertainty associated with the auditory and visual cues during phonemic categorization. In addition, we show that while sensory uncertainty is a significant factor in determining cue weights, it is not the only one, and participants' performance is consistent with an optimal model in which environmental, within-category variability also plays a role in determining cue weights. Furthermore, we show that in our task, the sensory variability affecting the visual modality during cue-combination is not well estimated from single-cue performance, but can be estimated from multi-cue performance. The findings and computational principles described here represent a principled first step towards characterizing the mechanisms underlying human cue integration in categorical tasks. PMID:21637344
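
    The normative model referenced in this abstract weights each cue inversely to its sensory variance (reliability weighting). A minimal sketch with illustrative numbers, not the study's data:

```python
def integrate_cues(estimates, variances):
    """Reliability-weighted cue fusion: w_i proportional to 1/variance_i."""
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    fused = sum(w * x for w, x in zip(weights, estimates))
    # The fused estimate is more reliable (lower variance) than either cue alone.
    fused_var = 1.0 / total
    return fused, fused_var

# Hypothetical auditory cue: estimate 2.0, variance 1.0;
# hypothetical visual cue: estimate 4.0, variance 4.0.
est, var = integrate_cues([2.0, 4.0], [1.0, 4.0])
# weights 0.8 and 0.2 → fused estimate 2.4, fused variance 0.8
```

    The abstract's point is that for categorical dimensions the optimal weights must also fold in within-category (environmental) variance, i.e., each cue's effective variance becomes sensory plus category variance rather than sensory variance alone.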

  11. Performance and strategy comparisons of human listeners and logistic regression in discriminating underwater targets.

    PubMed

    Yang, Lixue; Chen, Kean

    2015-11-01

    To improve the design of underwater target recognition systems based on auditory perception, this study compared human listeners with automatic classifiers. Performance measures and strategies were compared across three discrimination experiments: between man-made and natural targets, between ships and submarines, and among three types of ships. In the experiments, the subjects were asked to assign a score to each sound based on how confident they were about the category to which it belonged; logistic regression, representing linear discriminative models, completed three similar tasks using many auditory features. The results indicated that the performance of logistic regression improved as the ratio between inter- and intra-class differences became larger, whereas the performance of the human subjects was limited by their unfamiliarity with the targets. Logistic regression performed better than the human subjects in all tasks except the discrimination between man-made and natural targets, and the strategies employed by excellent human subjects were similar to that of logistic regression. Logistic regression and several human subjects demonstrated similar performances when discriminating man-made and natural targets, but in this case their strategies were not similar. An appropriate fusion of their strategies led to further improvement in recognition accuracy.
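
    As a simplified stand-in for the linear discriminative models compared against listeners here, a binary logistic-regression classifier can be trained by plain gradient descent on a handful of acoustic features. The feature names and data below are hypothetical toy values, not the study's features or sounds:

```python
import math

def train_logistic(X, y, lr=0.1, epochs=1000):
    """Minimal logistic regression trained by per-sample gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # predicted probability of class 1
            err = p - yi                    # gradient of the log loss w.r.t. z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if z > 0 else 0

# Toy features, e.g. [spectral centroid, harmonicity], man-made (1) vs natural (0):
X = [[0.9, 0.8], [0.8, 0.9], [0.2, 0.1], [0.1, 0.2]]
y = [1, 1, 0, 0]
w, b = train_logistic(X, y)
print([predict(w, b, xi) for xi in X])  # separable toy data → [1, 1, 0, 0]
```

    The learned weight vector is directly interpretable as a per-feature decision strategy, which is what makes the comparison with human listeners' strategies possible.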

  12. A neurally inspired musical instrument classification system based upon the sound onset.

    PubMed

    Newton, Michael J; Smith, Leslie S

    2012-06-01

    Physiological evidence suggests that sound onset detection in the auditory system may be performed by specialized neurons as early as the cochlear nucleus. Psychoacoustic evidence shows that the sound onset can be important for the recognition of musical sounds. Here the sound onset is used in isolation to form tone descriptors for a musical instrument classification task. The task involves 2085 isolated musical tones from the McGill dataset across five instrument categories. A neurally inspired tone descriptor is created using a model of the auditory system's response to sound onset. A gammatone filterbank and spiking onset detectors, built from dynamic synapses and leaky integrate-and-fire neurons, create parallel spike trains that emphasize the sound onset. These are coded as a descriptor called the onset fingerprint. Classification uses a time-domain neural network, the echo state network. Reference strategies, based upon mel-frequency cepstral coefficients, evaluated either over the whole tone or only during the sound onset, provide context to the method. Classification success rates for the neurally inspired method are around 75%. The cepstral methods perform between 73% and 76%. Further testing with tones from the Iowa MIS collection shows that the neurally inspired method is considerably more robust when tested with data from an unrelated dataset.
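
    The core intuition behind onset detectors of this kind is that only rising signal energy should produce output. As a greatly simplified stand-in for the paper's spiking model (an illustration under assumed parameters, not the authors' code), a half-wave-rectified difference between an envelope and its smoothed running estimate responds strongly at the onset and decays during sustained input:

```python
def onset_strength(envelope, alpha=0.9):
    """Half-wave-rectified envelope rise: large only when energy increases,
    zero during silence, decaying during sustained input."""
    smoothed = 0.0
    out = []
    for e in envelope:
        diff = max(e - smoothed, 0.0)        # only rises count (half-wave rectify)
        out.append(diff)
        smoothed = alpha * smoothed + (1 - alpha) * e  # leaky running average
    return out

# Silence followed by a sudden note: the onset sample dominates the output.
env = [0.0] * 5 + [1.0] * 5
print(onset_strength(env))
```

    In the paper's pipeline this kind of onset emphasis is computed per gammatone channel and the resulting parallel spike trains form the "onset fingerprint" descriptor.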

  13. Quantifying auditory handicap. A new approach.

    PubMed

    Jerger, S; Jerger, J

    1979-01-01

    This report describes a new audiovisual test procedure for the quantification of auditory handicap (QUAH). The QUAH test attempts to recreate in the laboratory a series of everyday listening situations. Individual test items represent psychomotor tasks. Data from 53 normal-hearing listeners described performance as a function of the message-to-competition ratio (MCR). Results indicated that, for further studies, an MCR of 0 dB represents the condition above which the task seemed too easy and below which the task appeared too difficult for normal-hearing subjects. The QUAH approach to the measurement of auditory handicap seems promising as an experimental tool. Further studies are needed to describe the relation of QUAH results (1) to clinical audiologic measures and (2) to more traditional indices of auditory handicap.

  14. Exploring auditory neglect: Anatomo-clinical correlations of auditory extinction.

    PubMed

    Tissieres, Isabel; Crottaz-Herbette, Sonia; Clarke, Stephanie

    2018-05-23

    The key symptoms of auditory neglect include left extinction on tasks of dichotic and/or diotic listening and a rightward shift in locating sounds. The anatomical correlates of the latter are relatively well understood, but no systematic studies have examined auditory extinction. Here, we performed a systematic study of the anatomo-clinical correlates of extinction using dichotic and/or diotic listening tasks. In total, 20 patients with right hemispheric damage (RHD) and 19 with left hemispheric damage (LHD) performed dichotic and diotic listening tasks. Each task consists of the simultaneous presentation of word pairs; in the dichotic task, 1 word is presented to each ear, and in the diotic task, each word is lateralized by means of interaural time differences and presented to one side. RHD was associated with exclusively contralesional extinction in dichotic or diotic listening, whereas in selected cases, LHD led to contra- or ipsilesional extinction. Bilateral symmetrical extinction occurred in RHD or LHD, with dichotic or diotic listening. The anatomical correlates of these extinction profiles offer an insight into the organisation of the auditory and attentional systems. First, left extinction in dichotic versus diotic listening involves different parts of the right hemisphere, which explains the double dissociation between these 2 neglect symptoms. Second, contralesional extinction in the dichotic task relies on homologous regions in either hemisphere. Third, ipsilesional extinction in dichotic listening after LHD was associated with lesions of the intrahemispheric white matter, interrupting callosal fibres outside their midsagittal or periventricular trajectory. Fourth, bilateral symmetrical extinction was associated with large parieto-fronto-temporal LHD or smaller parieto-temporal RHD, which suggests that divided attention, supported by the right hemisphere, and auditory streaming, supported by the left, likely play a critical role. Copyright © 2018. Published by Elsevier Masson SAS.

  15. Auditory Cortex Is Required for Fear Potentiation of Gap Detection

    PubMed Central

    Weible, Aldis P.; Liu, Christine; Niell, Cristopher M.

    2014-01-01

    Auditory cortex is necessary for the perceptual detection of brief gaps in noise, but is not necessary for many other auditory tasks such as frequency discrimination, prepulse inhibition of startle responses, or fear conditioning with pure tones. It remains unclear why auditory cortex should be necessary for some auditory tasks but not others. One possibility is that auditory cortex is causally involved in gap detection and other forms of temporal processing in order to associate meaning with temporally structured sounds. This predicts that auditory cortex should be necessary for associating meaning with gaps. To test this prediction, we developed a fear conditioning paradigm for mice based on gap detection. We found that pairing a 10 or 100 ms gap with an aversive stimulus caused a robust enhancement of gap detection measured 6 h later, which we refer to as fear potentiation of gap detection. Optogenetic suppression of auditory cortex during pairing abolished this fear potentiation, indicating that auditory cortex is critically involved in associating temporally structured sounds with emotionally salient events. PMID:25392510

  16. Validation of auditory detection response task method for assessing the attentional effects of cognitive load.

    PubMed

    Stojmenova, Kristina; Sodnik, Jaka

    2018-07-04

    There are 3 standardized versions of the Detection Response Task (DRT), 2 using visual stimuli (remote DRT and head-mounted DRT) and one using tactile stimuli. In this article, we present a study that proposes and validates a type of auditory signal to be used as a DRT stimulus and evaluates the proposed auditory version of this method by comparing it with the standardized visual and tactile versions. This was a within-subject design study performed in a driving simulator with 24 participants. Each participant performed eight 2-min-long driving sessions in which they had to perform 3 different tasks: driving, responding to DRT stimuli, and performing a cognitive task (n-back task). Presence of additional cognitive load and type of DRT stimuli were defined as independent variables. DRT response times and hit rates, n-back task performance, and pupil size were observed as dependent variables. Significant changes in pupil size for trials with a cognitive task compared to trials without showed that cognitive load was induced properly. Each DRT version showed a significant increase in response times and a decrease in hit rates for trials with a secondary cognitive task compared to trials without. The auditory and tactile versions yielded similar results, and both showed significantly larger differences in response times and hit rates than the visual version. There were no significant differences in n-back performance between trials with and without DRT stimuli, nor among trials with different DRT stimulus modalities. The results from this study show that the auditory DRT version, using the signal implementation suggested in this article, is sensitive to the effects of cognitive load on the driver's attention and is significantly better than the remote visual and tactile versions for auditory-vocal cognitive (n-back) secondary tasks.

  17. Generalization of Auditory Sensory and Cognitive Learning in Typically Developing Children.

    PubMed

    Murphy, Cristina F B; Moore, David R; Schochat, Eliane

    2015-01-01

    Despite the well-established involvement of both sensory ("bottom-up") and cognitive ("top-down") processes in literacy, the extent to which auditory or cognitive (memory or attention) learning transfers to phonological and reading skills remains unclear. Most research has demonstrated learning of the trained task or even learning transfer to a closely related task. However, few studies have reported "far-transfer" to a different domain, such as the improvement of phonological and reading skills following auditory or cognitive training. This study assessed the effectiveness of auditory, memory or attention training on far-transfer measures involving phonological and reading skills in typically developing children. Mid-transfer was also assessed through untrained auditory, attention and memory tasks. Sixty 5- to 8-year-old children with normal hearing were quasi-randomly assigned to one of five training groups: attention group (AG), memory group (MG), auditory sensory group (SG), placebo group (PG; drawing, painting), and a control, untrained group (CG). Compliance, mid-transfer and far-transfer measures were evaluated before and after training. All trained groups received 12 x 45-min training sessions over 12 weeks. The CG did not receive any intervention. All trained groups, especially older children, exhibited significant learning of the trained task. On pre- to post-training measures (test-retest), most groups exhibited improvements on most tasks. There was significant mid-transfer for a visual digit span task, with highest span in the MG, relative to other groups. These results show that both sensory and cognitive (memory or attention) training can lead to learning in the trained task and to mid-transfer learning on a task (visual digit span) within the same domain as the trained tasks. However, learning did not transfer to measures of language (reading and phonological awareness), as the PG and CG improved as much as the other trained groups. 
Further research is required to investigate the effects of various stimuli and lengths of training on the generalization of sensory and cognitive learning to literacy skills.

  18. Reduced sensitivity of the N400 and late positive component to semantic congruity and word repetition in left temporal lobe epilepsy.

    PubMed

    Olichney, John M; Riggins, Brock R; Hillert, Dieter G; Nowacki, Ralph; Tecoma, Evelyn; Kutas, Marta; Iragui, Vicente J

    2002-07-01

We studied 14 patients with well-characterized refractory temporal lobe epilepsy (TLE), 7 with right temporal lobe epilepsy (RTE) and 7 with left temporal lobe epilepsy (LTE), in a word repetition ERP experiment. Much prior literature supports the view that patients with left TLE are more likely to develop verbal memory deficits, often attributable to left hippocampal sclerosis. Our main objectives were to test whether abnormalities of the N400 or Late Positive Component (LPC, P600) were associated with a left temporal seizure focus or left temporal lobe dysfunction. A minimum of 19 channels of EEG/EOG data were collected while subjects performed a semantic categorization task. Auditory category statements were followed by a visual target word; target words were 50% "congruous" (category exemplars) and 50% "incongruous" (non-category exemplars) with the preceding semantic context. These auditory-visual pairings were repeated pseudo-randomly at time intervals ranging from approximately 10 to 140 seconds later. The ERP data were submitted to repeated-measures ANOVAs, which showed the RTE group had generally normal effects of word repetition on the LPC and the N400. Also, the N400 component was larger to incongruous than congruous new words, as is normally the case. In contrast, the LTE group did not have statistically significant effects of either word repetition or congruity on their ERPs (N400 or LPC), suggesting that this ERP semantic categorization paradigm is sensitive to left temporal lobe dysfunction. Further studies are ongoing to determine if these ERP abnormalities predict hippocampal sclerosis on histopathology, or outcome after anterior temporal lobectomy.

  19. Hallucination- and speech-specific hypercoupling in frontotemporal auditory and language networks in schizophrenia using combined task-based fMRI data: An fBIRN study.

    PubMed

    Lavigne, Katie M; Woodward, Todd S

    2018-04-01

    Hypercoupling of activity in speech-perception-specific brain networks has been proposed to play a role in the generation of auditory-verbal hallucinations (AVHs) in schizophrenia; however, it is unclear whether this hypercoupling extends to nonverbal auditory perception. We investigated this by comparing schizophrenia patients with and without AVHs, and healthy controls, on task-based functional magnetic resonance imaging (fMRI) data combining verbal speech perception (SP), inner verbal thought generation (VTG), and nonverbal auditory oddball detection (AO). Data from two previously published fMRI studies were simultaneously analyzed using group constrained principal component analysis for fMRI (group fMRI-CPCA), which allowed for comparison of task-related functional brain networks across groups and tasks while holding the brain networks under study constant, leading to determination of the degree to which networks are common to verbal and nonverbal perception conditions, and which show coordinated hyperactivity in hallucinations. Three functional brain networks emerged: (a) auditory-motor, (b) language processing, and (c) default-mode (DMN) networks. Combining the AO and sentence tasks allowed the auditory-motor and language networks to separately emerge, whereas they were aggregated when individual tasks were analyzed. AVH patients showed greater coordinated activity (deactivity for DMN regions) than non-AVH patients during SP in all networks, but this did not extend to VTG or AO. This suggests that the hypercoupling in AVH patients in speech-perception-related brain networks is specific to perceived speech, and does not extend to perceived nonspeech or inner verbal thought generation. © 2017 Wiley Periodicals, Inc.

  20. Functional Connectivity between Face-Movement and Speech-Intelligibility Areas during Auditory-Only Speech Perception

    PubMed Central

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers’ voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker’s face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas. PMID:24466026

  1. Absolute pitch: evidence for early cognitive facilitation during passive listening as revealed by reduced P3a amplitudes.

    PubMed

    Rogenmoser, Lars; Elmer, Stefan; Jäncke, Lutz

    2015-03-01

    Absolute pitch (AP) is the rare ability to identify or produce different pitches without using reference tones. At least two sequential processing stages are assumed to contribute to this phenomenon. The first recruits a pitch memory mechanism at an early stage of auditory processing, whereas the second is driven by a later cognitive mechanism (pitch labeling). Several investigations have used active tasks, but it is unclear how these two mechanisms contribute to AP during passive listening. The present work investigated the temporal dynamics of tone processing in AP and non-AP (NAP) participants by using EEG. We applied a passive oddball paradigm with between- and within-tone category manipulations and analyzed the MMN reflecting the early stage of auditory processing and the P3a response reflecting the later cognitive mechanism during the second processing stage. Results did not reveal between-group differences in MMN waveforms. By contrast, the P3a response was specifically associated with AP and sensitive to the processing of different pitch types. Specifically, AP participants exhibited smaller P3a amplitudes, especially in between-tone category conditions, and P3a responses correlated significantly with the age of commencement of musical training, suggesting an influence of early musical exposure on AP. Our results reinforce the current opinion that the representation of pitches at the processing level of the auditory-related cortex is comparable among AP and NAP participants, whereas the later processing stage is critical for AP. Results are interpreted as reflecting cognitive facilitation in AP participants, possibly driven by the availability of multiple codes for tones.

  2. Familiarity with a vocal category biases the compartmental expression of Arc/Arg3.1 in core auditory cortex.

    PubMed

    Ivanova, Tamara N; Gross, Christina; Mappus, Rudolph C; Kwon, Yong Jun; Bassell, Gary J; Liu, Robert C

    2017-12-01

Learning to recognize a stimulus category requires experience with its many natural variations. However, the mechanisms that allow a category's sensorineural representation to be updated after experiencing new exemplars are not well understood, particularly at the molecular level. Here we investigate how a natural vocal category induces expression in the auditory system of a key synaptic plasticity effector immediate early gene, Arc/Arg3.1, which is required for memory consolidation. We use the ultrasonic communication system between mouse pups and adult females to study whether prior familiarity with pup vocalizations alters how Arc is engaged in the core auditory cortex after playback of novel exemplars from the pup vocal category. A computerized, 3D surface-assisted cellular compartmental analysis, validated against manual cell counts, demonstrates significant changes in the recruitment of neurons expressing Arc in pup-experienced animals (mothers and virgin females "cocaring" for pups) compared with pup-inexperienced animals (pup-naïve virgins), especially when listening to more familiar, natural calls compared to less familiar but similarly recognized tonal model calls. Our data support the hypothesis that the kinetics of Arc induction to refine cortical representations of sensory categories is sensitive to the familiarity of the sensory experience. © 2017 Ivanova et al.; Published by Cold Spring Harbor Laboratory Press.

  3. Crossmodal attention switching: auditory dominance in temporal discrimination tasks.

    PubMed

    Lukas, Sarah; Philipp, Andrea M; Koch, Iring

    2014-11-01

    Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are differentially effectively processed depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. [Cognitive dysfunction in patients with subclinical hypothyroidism].

    PubMed

    Fernandes, Robertta Soares Miranda; Alvarenga, Nathália Bueno; Silva, Tamara Inácio da; Rocha, Felipe Filardi da

    2011-04-01

This study evaluated neuropsychological changes in patients with subclinical hypothyroidism (SH) in a cross-sectional design comparing the results of the neuropsychological evaluation of 89 SH patients and 178 individuals without thyroid disease. The participants underwent the following neuropsychological assessment: Conners' Continuous Performance Test (CPT-II), Iowa Gambling Task, Stroop Test, Wisconsin Card Sorting Test (WCST), Verbal Fluency Test (semantic and phonologic categories) and Rey Auditory Verbal Learning Test. Among the neuropsychological tests, patients showed worse performance only in cognitive flexibility (WCST) and the ability to maintain sustained attention (omission errors on the CPT-II). These deficits can impair patients' daily lives and constitute potential indications for treatment.

  5. Auditory evoked fields to vocalization during passive listening and active generation in adults who stutter.

    PubMed

    Beal, Deryk S; Cheyne, Douglas O; Gracco, Vincent L; Quraan, Maher A; Taylor, Margot J; De Nil, Luc F

    2010-10-01

    We used magnetoencephalography to investigate auditory evoked responses to speech vocalizations and non-speech tones in adults who do and do not stutter. Neuromagnetic field patterns were recorded as participants listened to a 1 kHz tone, playback of their own productions of the vowel /i/ and vowel-initial words, and actively generated the vowel /i/ and vowel-initial words. Activation of the auditory cortex at approximately 50 and 100 ms was observed during all tasks. A reduction in the peak amplitudes of the M50 and M100 components was observed during the active generation versus passive listening tasks dependent on the stimuli. Adults who stutter did not differ in the amount of speech-induced auditory suppression relative to fluent speakers. Adults who stutter had shorter M100 latencies for the actively generated speaking tasks in the right hemisphere relative to the left hemisphere but the fluent speakers showed similar latencies across hemispheres. During passive listening tasks, adults who stutter had longer M50 and M100 latencies than fluent speakers. The results suggest that there are timing, rather than amplitude, differences in auditory processing during speech in adults who stutter and are discussed in relation to hypotheses of auditory-motor integration breakdown in stuttering. Copyright 2010 Elsevier Inc. All rights reserved.

  6. Audition and vision share spatial attentional resources, yet attentional load does not disrupt audiovisual integration.

    PubMed

    Wahn, Basil; König, Peter

    2015-01-01

Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the results of the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modality. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, these findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs in a pre-attentive processing stage.

  7. The Cognitive Side of M1

    PubMed Central

    Tomasino, Barbara; Gremese, Michele

    2016-01-01

The primary motor cortex (M1) is traditionally implicated in voluntary movement control. In order to test the hypothesis that there is a functional topography of M1 activation in studies where it has been implicated in higher cognitive tasks, we performed activation-likelihood-estimation (ALE) meta-analyses of functional neuroimaging experiments reporting M1 activation in relation to six cognitive functional categories for which there was a sufficient number of studies to include, namely motor imagery, working memory, mental rotation, social/emotion/empathy, language, and auditory processing. The six categories activated different sub-sectors of M1, either bilaterally or lateralized to one hemisphere. Notably, the activations found in the M1 of the left or right hemisphere detected in our study were unlikely due to button presses. In fact, all contrasts were selected in order to eliminate M1 activation due to activity related to the finger button press. In addition, we identified the M1 sub-region of Area 4a commonly activated by four of the six categories, namely motor imagery, working memory, emotion/empathy, and language. Overall, our findings lend support to the idea that there is a functional topography of M1 activation in studies where it has been found activated in higher cognitive tasks and that the left Area 4a can be involved in a number of cognitive processes, likely as a product of implicit mental simulation processing. PMID:27378891

  8. Alpha power indexes task-related networks on large and small scales: A multimodal ECoG study in humans and a non-human primate.

    PubMed

    de Pesters, A; Coon, W G; Brunner, P; Gunduz, A; Ritaccio, A L; Brunet, N M; de Weerd, P; Roberts, M J; Oostenveld, R; Fries, P; Schalk, G

    2016-07-01

Performing different tasks, such as generating motor movements or processing sensory input, requires the recruitment of specific networks of neuronal populations. Previous studies suggested that power variations in the alpha band (8-12 Hz) may implement such recruitment of task-specific populations by increasing cortical excitability in task-related areas while inhibiting population-level cortical activity in task-unrelated areas (Klimesch et al., 2007; Jensen and Mazaheri, 2010). However, the precise temporal and spatial relationships between the modulatory function implemented by alpha oscillations and population-level cortical activity remained undefined. Furthermore, while several studies suggested that alpha power indexes task-related populations across large and spatially separated cortical areas, it was largely unclear whether alpha power also differentially indexes smaller networks of task-related neuronal populations. Here we addressed these questions by investigating the temporal and spatial relationships of electrocorticographic (ECoG) power modulations in the alpha band and in the broadband gamma range (70-170 Hz, indexing population-level activity) during auditory and motor tasks in five human subjects and one macaque monkey. In line with previous research, our results confirm that broadband gamma power accurately tracks task-related behavior and that alpha power decreases in task-related areas. More importantly, they demonstrate that alpha power suppression lags population-level activity in auditory areas during the auditory task, but precedes it in motor areas during the motor task. This suppression of alpha power in task-related areas was accompanied by an increase in areas not related to the task. In addition, we show for the first time that these differential modulations of alpha power could be observed not only across widely distributed systems (e.g., motor vs. auditory system), but also within the auditory system. 
Specifically, alpha power was suppressed in the locations within the auditory system that most robustly responded to particular sound stimuli. Altogether, our results provide experimental evidence for a mechanism that preferentially recruits task-related neuronal populations by increasing cortical excitability in task-related cortical areas and decreasing cortical excitability in task-unrelated areas. This mechanism is implemented by variations in alpha power and is common to humans and the non-human primate under study. These results contribute to an increasingly refined understanding of the mechanisms underlying the selection of the specific neuronal populations required for task execution. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Frequency encoded auditory display of the critical tracking task

    NASA Technical Reports Server (NTRS)

    Stevenson, J.

    1984-01-01

The use of auditory displays for selected cockpit instruments was examined. Auditory, visual, and combined auditory-visual compensatory displays of a vertical-axis critical tracking task were studied. The visual display encoded vertical error as the position of a dot on a 17.78 cm, center-marked CRT. The auditory display encoded vertical error as log frequency with a six-octave range; the center point at 1 kHz was marked by a 20-dB amplitude notch, one-third octave wide. At asymptote, performance on the critical tracking task with the combined display was slightly, but significantly, better than with the visual-only mode. The maximum controllable bandwidth using the auditory mode was only 60% of the maximum controllable bandwidth using the visual mode. Redundant cueing increased both the rate of improvement of tracking performance and the asymptotic performance level, and this enhancement increased with the amount of redundant cueing used. The effect appears most prominent when the bandwidth of the forcing function is substantially less than the upper frequency limit of controllability.
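The log-frequency encoding described above can be sketched as a simple mapping. This is a hypothetical reconstruction, not the report's actual implementation: the normalization of vertical error to [-1, 1] and the function name are assumptions; what the abstract fixes is the six-octave span and the 1 kHz center.

```python
# Hypothetical reconstruction of the auditory display mapping described above:
# vertical tracking error is encoded as log frequency over a six-octave range
# centred on 1 kHz. The normalization of error to [-1, 1] and the function
# name are assumptions; the report does not give the exact formula.

def error_to_frequency(error, center_hz=1000.0, octave_span=6.0):
    """Map a normalized vertical error in [-1, 1] to a tone frequency in Hz."""
    error = max(-1.0, min(1.0, error))  # clamp to the display range
    return center_hz * 2.0 ** (error * octave_span / 2.0)

print(error_to_frequency(0.0))   # center point: 1000.0 Hz
print(error_to_frequency(1.0))   # +3 octaves:   8000.0 Hz
print(error_to_frequency(-1.0))  # -3 octaves:   125.0 Hz
```

Encoding error on a log-frequency axis keeps equal error ratios perceptually equal in pitch, which is presumably why the display marks its center with an amplitude notch rather than a frequency landmark.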

  10. Auditory false perception in schizophrenia: Development and validation of auditory signal detection task.

    PubMed

    Chhabra, Harleen; Sowmya, Selvaraj; Sreeraj, Vanteemar S; Kalmady, Sunil V; Shivakumar, Venkataram; Amaresha, Anekal C; Narayanaswamy, Janardhanan C; Venkatasubramanian, Ganesan

    2016-12-01

    Auditory hallucinations constitute an important symptom component in 70-80% of schizophrenia patients. These hallucinations are proposed to occur due to an imbalance between perceptual expectation and external input, resulting in attachment of meaning to abstract noises; signal detection theory has been proposed to explain these phenomena. In this study, we describe the development of an auditory signal detection task using a carefully chosen set of English words that could be tested successfully in schizophrenia patients coming from varying linguistic, cultural and social backgrounds. Schizophrenia patients with significant auditory hallucinations (N=15) and healthy controls (N=15) performed the auditory signal detection task wherein they were instructed to differentiate between a 5-s burst of plain white noise and voiced-noise. The analysis showed that false alarms (p=0.02), discriminability index (p=0.001) and decision bias (p=0.004) were significantly different between the two groups. There was a significant negative correlation between false alarm rate and decision bias. These findings extend further support for impaired perceptual expectation system in schizophrenia patients. Copyright © 2016 Elsevier B.V. All rights reserved.
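The signal detection indices the abstract compares across groups reduce, in the standard equal-variance Gaussian model, to two formulas over hit and false-alarm rates. A minimal illustrative sketch (not the authors' code; the example rates are hypothetical):

```python
from statistics import NormalDist

# Illustrative computation (not the authors' code) of the two signal detection
# indices compared across groups in the abstract: discriminability (d') and
# decision bias (criterion c), derived from hit and false-alarm rates.

def sdt_indices(hit_rate, fa_rate):
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical rates: a liberal responder (frequent false alarms) yields a
# negative criterion, the pattern associated with hallucination-proneness.
d, c = sdt_indices(hit_rate=0.80, fa_rate=0.40)
print(round(d, 3), round(c, 3))
```

On this account, "hearing" a voice in plain white noise raises the false-alarm rate, which simultaneously lowers d' and pushes the criterion negative, matching the abstract's reported negative correlation between false alarms and decision bias.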

  11. Objective Fidelity Evaluation in Multisensory Virtual Environments: Auditory Cue Fidelity in Flight Simulation

    PubMed Central

    Meyer, Georg F.; Wong, Li Ting; Timson, Emma; Perfect, Philip; White, Mark D.

    2012-01-01

We argue that objective fidelity evaluation of virtual environments, such as flight simulation, should be human-performance-centred and task-specific rather than measure the match between simulation and physical reality. We show how principled experimental paradigms and behavioural models to quantify human performance in simulated environments that have emerged from research in multisensory perception provide a framework for the objective evaluation of the contribution of individual cues to human performance measures of fidelity. We present three examples in a flight simulation environment as a case study: Experiment 1: Detection and categorisation of auditory and kinematic motion cues; Experiment 2: Performance evaluation in a target-tracking task; Experiment 3: Transferrable learning of auditory motion cues. We show how the contribution of individual cues to human performance can be robustly evaluated for each task and that the contribution is highly task dependent. The same auditory cues that can be discriminated and are optimally integrated in experiment 1 do not contribute to target-tracking performance in an in-flight refuelling simulation without training (experiment 2). In experiment 3, however, we demonstrate that the auditory cue leads to significant, transferrable, performance improvements with training. We conclude that objective fidelity evaluation requires a task-specific analysis of the contribution of individual cues. PMID:22957068

  12. Multisensory perceptual learning of temporal order: audiovisual learning transfers to vision but not audition.

    PubMed

    Alais, David; Cass, John

    2010-06-23

An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question. Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ). Groups were pre-tested on a range of TOJ tasks within and beyond their group's modality prior to learning, so that transfer of any learning from the trained task could be measured by post-testing the other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. 
Functionally the patterns of featural transfer suggest that perceptual learning of temporal order may be optimised to object-centered rather than viewer-centered constraints.

  13. Design of Training Systems, Phase II-A Report. An Educational Technology Assessment Model (ETAM)

    DTIC Science & Technology

    1975-07-01

    34format" for the perceptual tasks. This is applicable to auditory as well as visual tasks. Student Participation in Learning Route. When a student enters...skill formats Skill training 05.05 Vehicle properties Instructional functions: Type of stimulus presented to student visual auditory ...Subtask 05.05. For example, a trainer to identify and interpret auditory signals would not be represented in the above list. Trainers in the vehicle

  14. Narrative abilities, memory and attention in children with a specific language impairment.

    PubMed

    Duinmeijer, Iris; de Jong, Jan; Scheper, Annette

    2012-01-01

    While narrative tasks have proven to be valid measures for detecting language disorders, measuring communicative skills and predicting future academic performance, research into the comparability of different narrative tasks has shown that outcomes depend on the type of task used. Although many of the studies detecting task differences touch upon the fact that tasks place differential demands on cognitive abilities like auditory attention and memory, few studies have related specific narrative tasks to these cognitive abilities. Examining this relation is especially warranted for children with specific language impairment (SLI), who are characterized by language problems but often have problems in other cognitive domains as well. In the current research, a comparison was made between a story retelling task (The Bus Story) and a story generation task (The Frog Story) in a group of children with SLI (n = 34) and a typically developing group (n = 38) from the same age range. In addition to the two narrative tasks, sustained auditory attention (TEA-Ch) and verbal working memory (WISC digit span and the Dutch version of the CVLT-C word list recall) were measured. Correlations were computed between the narrative, memory and attention scores. A group comparison showed that the children with SLI scored significantly worse than the typically developing children on several narrative measures as well as on sustained auditory attention and verbal working memory. A within-subjects comparison of the scores on the two narrative tasks showed a contrast between the tasks on several narrative measures. Furthermore, correlational analyses showed that, on the level of plot structure, the story generation task correlated with sustained auditory attention, while the story retelling task correlated with word list recall. Mean length of utterance (MLU), on the other hand, correlated with digit span but not with sustained auditory attention.
    While children with SLI have problems with narratives in general, their performance also depends on the specific elicitation task used for research or diagnostics. Various narrative tasks generate different scores and are differentially correlated with cognitive skills like attention and memory, making the selection of a given task crucial in the clinical setting. © 2012 Royal College of Speech and Language Therapists.

  15. Poor Auditory Task Scores in Children with Specific Reading and Language Difficulties: Some Poor Scores Are More Equal than Others

    ERIC Educational Resources Information Center

    McArthur, Genevieve M.; Hogben, John H.

    2012-01-01

    Children with specific reading disability (SRD) or specific language impairment (SLI), who scored poorly on an auditory discrimination task, did up to 140 runs on the failed task. Forty-one percent of the children produced widely fluctuating scores that did not improve across runs (untrainable errant performance), 23% produced widely fluctuating…

  16. Genetic pleiotropy explains associations between musical auditory discrimination and intelligence.

    PubMed

    Mosing, Miriam A; Pedersen, Nancy L; Madison, Guy; Ullén, Fredrik

    2014-01-01

    Musical aptitude is commonly measured using tasks that involve discrimination of different types of musical auditory stimuli. Performance on such discrimination tasks correlates positively across tasks and with intelligence. However, no study to date has explored these associations using a genetically informative sample to estimate underlying genetic and environmental influences. In the present study, a large sample of Swedish twins (N = 10,500) was used to investigate the genetic architecture of the associations between intelligence and performance on three musical auditory discrimination tasks (rhythm, melody and pitch). Phenotypic correlations between the tasks ranged between 0.23 and 0.42 (Pearson r values). Genetic modelling showed that the covariation between the variables could be explained by shared genetic influences. Neither shared nor non-shared environment had a significant effect on the associations. Good fit was obtained with a two-factor model in which one underlying shared genetic factor explained all the covariation between the musical discrimination tasks and IQ, and a second genetic factor explained variance exclusively shared among the discrimination tasks. The results suggest that positive correlations among musical aptitudes result from both genes with broad effects on cognition and genes with potentially more specific influences on auditory functions.
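    The twin modelling reported above was done with structural-equation methods; as a rough illustration of the underlying logic only, the classical Falconer decomposition estimates genetic and environmental variance components from MZ and DZ twin-pair correlations. The correlation values below are made up, not the study's:

```python
# Illustrative sketch of classical twin-model logic (Falconer's formulas),
# NOT the structural-equation model actually fitted in the study.
# r_mz, r_dz are hypothetical twin-pair correlations for a trait.

def falconer_estimates(r_mz, r_dz):
    """Partition trait variance into additive-genetic (A), shared-
    environment (C) and non-shared-environment (E) components."""
    a2 = 2 * (r_mz - r_dz)  # heritability: MZ pairs share ~2x the segregating genes of DZ pairs
    c2 = r_mz - a2          # shared environment: MZ similarity not explained by A
    e2 = 1 - r_mz           # non-shared environment (plus measurement error)
    return a2, c2, e2

a2, c2, e2 = falconer_estimates(r_mz=0.5, r_dz=0.25)
print(a2, c2, e2)  # 0.5 0.0 0.5
```

    In practice a two-factor model like the one described above is fitted to the full cross-trait covariance matrix, but the same variance-partitioning idea applies.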

  17. Genetic Pleiotropy Explains Associations between Musical Auditory Discrimination and Intelligence

    PubMed Central

    Mosing, Miriam A.; Pedersen, Nancy L.; Madison, Guy; Ullén, Fredrik

    2014-01-01

    Musical aptitude is commonly measured using tasks that involve discrimination of different types of musical auditory stimuli. Performance on such discrimination tasks correlates positively across tasks and with intelligence. However, no study to date has explored these associations using a genetically informative sample to estimate underlying genetic and environmental influences. In the present study, a large sample of Swedish twins (N = 10,500) was used to investigate the genetic architecture of the associations between intelligence and performance on three musical auditory discrimination tasks (rhythm, melody and pitch). Phenotypic correlations between the tasks ranged between 0.23 and 0.42 (Pearson r values). Genetic modelling showed that the covariation between the variables could be explained by shared genetic influences. Neither shared nor non-shared environment had a significant effect on the associations. Good fit was obtained with a two-factor model in which one underlying shared genetic factor explained all the covariation between the musical discrimination tasks and IQ, and a second genetic factor explained variance exclusively shared among the discrimination tasks. The results suggest that positive correlations among musical aptitudes result from both genes with broad effects on cognition and genes with potentially more specific influences on auditory functions. PMID:25419664

  18. Neural Correlates of Selective Attention With Hearing Aid Use Followed by ReadMyQuips Auditory Training Program.

    PubMed

    Rao, Aparna; Rishiq, Dania; Yu, Luodi; Zhang, Yang; Abrams, Harvey

    The objectives of this study were to investigate the effects of hearing aid use and the effectiveness of ReadMyQuips (RMQ), an auditory training program, on speech perception performance and auditory selective attention using electrophysiological measures. RMQ is an audiovisual training program designed to improve speech perception in everyday noisy listening environments. Participants were adults with mild to moderate hearing loss who were first-time hearing aid users. After 4 weeks of hearing aid use, the experimental group completed RMQ training over 4 weeks, and the control group received listening practice on audiobooks during the same period. Cortical late event-related potentials (ERPs) and the Hearing in Noise Test (HINT) were administered at prefitting, pretraining, and post-training to assess effects of hearing aid use and RMQ training. An oddball paradigm allowed tracking of changes in P3a and P3b ERPs to distractors and targets, respectively. Behavioral measures were also obtained while ERPs were recorded from participants. After 4 weeks of hearing aid use but before auditory training, HINT results did not show a statistically significant change, but there was a significant P3a reduction. This reduction in P3a was correlated with improvement in d-prime (d') in the selective attention task. Increased P3b amplitudes were also correlated with improvement in d' in the selective attention task. After training, this correlation between P3b and d' remained in the experimental group but not in the control group. Similarly, HINT testing showed improved speech perception post-training only in the experimental group. The criterion calculated in the auditory selective attention task showed a reduction only in the experimental group after training. ERP measures in the auditory selective attention task did not show any changes related to training.
    Hearing aid use was associated with a decrement in involuntary attention switch to distractors in the auditory selective attention task. RMQ training led to gains in speech perception in noise and improved listener confidence in the auditory selective attention task.
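    The d' (sensitivity) and criterion measures reported above are the standard signal-detection-theory statistics computed from hit and false-alarm rates; a minimal sketch with the textbook formulas (the rates below are hypothetical, not the study's data):

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Signal-detection sensitivity d' and response criterion c
    from hit and false-alarm rates (standard formulas)."""
    z = NormalDist().inv_cdf          # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical performance in an oddball selective attention task:
d, c = dprime_and_criterion(hit_rate=0.85, fa_rate=0.20)
print(round(d, 2), round(c, 2))
```

    A reduction in the criterion c after training, as reported above, corresponds to a more liberal response bias at a given sensitivity.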

  19. The effect of L1 orthography on non-native vowel perception.

    PubMed

    Escudero, Paola; Wanrooij, Karin

    2010-01-01

    Previous research has shown that orthography influences the learning and processing of spoken non-native words. In this paper, we examine the effect of L1 orthography on non-native sound perception. In Experiment 1, 204 Spanish learners of Dutch and a control group of 20 native speakers of Dutch were asked to classify Dutch vowel tokens by choosing from auditorily presented options, in one task, and from the orthographic representations of Dutch vowels, in a second task. The results show that vowel categorization varied across tasks: the most difficult vowels in the purely auditory task were the easiest in the orthographic task and, conversely, vowels with a relatively high success rate in the purely auditory task were poorly classified in the orthographic task. The results of Experiment 2 with 22 monolingual Peruvian Spanish listeners replicated the main results of Experiment 1 and confirmed the existence of orthographic effects. Together, the two experiments show that when listening to auditory stimuli only, native speakers of Spanish have great difficulty classifying certain Dutch vowels, regardless of the amount of experience they may have with the Dutch language. Importantly, the pairing of auditory stimuli with orthographic labels can help or hinder Spanish listeners' sound categorization, depending on the specific sound contrast.

  20. Early but not late-blindness leads to enhanced auditory perception.

    PubMed

    Wan, Catherine Y; Wood, Amanda G; Reutens, David C; Wilson, Sarah J

    2010-01-01

    The notion that blindness leads to superior non-visual abilities has been postulated for centuries. Compared to sighted individuals, blind individuals show different patterns of brain activation when performing auditory tasks. To date, no study has controlled for musical experience, which is known to influence auditory skills. The present study tested 33 blind (11 congenital, 11 early-blind, 11 late-blind) participants and 33 matched sighted controls. We showed that the performance of blind participants was better than that of sighted participants on a range of auditory perception tasks, even when musical experience was controlled for. This advantage was observed only for individuals who became blind early in life, and was even more pronounced for individuals who were blind from birth. Years of blindness did not predict task performance. Here, we provide compelling evidence that superior auditory abilities in blind individuals are not explained by musical experience alone. These results have implications for the development of sensory substitution devices, particularly for late-blind individuals.

  1. Visual attention modulates brain activation to angry voices.

    PubMed

    Mothes-Lasch, Martin; Mentzel, Hans-Joachim; Miltner, Wolfgang H R; Straube, Thomas

    2011-06-29

    In accordance with influential models proposing prioritized processing of threat, previous studies have shown automatic brain responses to angry prosody in the amygdala and the auditory cortex under auditory distraction conditions. However, it is unknown whether the automatic processing of angry prosody is also observed during cross-modal distraction. The current fMRI study investigated brain responses to angry versus neutral prosodic stimuli during visual distraction. During scanning, participants were exposed to angry or neutral prosodic stimuli while visual symbols were displayed simultaneously. By means of task requirements, participants either attended to the voices or to the visual stimuli. While the auditory task revealed pronounced activation in the auditory cortex and amygdala to angry versus neutral prosody, this effect was absent during the visual task. Thus, our results show a limitation of the automaticity of the activation of the amygdala and auditory cortex to angry prosody. The activation of these areas to threat-related voices depends on modality-specific attention.

  2. Auditory processing efficiency deficits in children with developmental language impairments

    NASA Astrophysics Data System (ADS)

    Hartley, Douglas E. H.; Moore, David R.

    2002-12-01

    The "temporal processing hypothesis" suggests that individuals with specific language impairments (SLIs) and dyslexia have severe deficits in processing rapidly presented or brief sensory information, both within the auditory and visual domains. This hypothesis has been supported through evidence that language-impaired individuals have excess auditory backward masking. This paper presents an analysis of masking results from several studies in terms of a model of temporal resolution. Results from this modeling suggest that the masking results can be better explained by an "auditory efficiency" hypothesis. If impaired or immature listeners have a normal temporal window, but require a higher signal-to-noise level (poor processing efficiency), this hypothesis predicts the observed small deficits in the simultaneous masking task, and the much larger deficits in backward and forward masking tasks amongst those listeners. The difference in performance on these masking tasks is predictable from the compressive nonlinearity of the basilar membrane. The model also correctly predicts that backward masking (i) is more prone to training effects, (ii) has greater inter- and intrasubject variability, and (iii) increases less with masker level than do other masking tasks. These findings provide a new perspective on the mechanisms underlying communication disorders and auditory masking.
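    The "auditory efficiency" argument above can be sketched numerically: a fixed internal signal-to-noise deficit passes through roughly unchanged when signal and masker are compressed together (simultaneous masking), but is magnified by the inverse of the compression exponent when only the signal falls within the compressive region (backward/forward masking). The numbers below are illustrative stand-ins, not the paper's fitted values:

```python
# Illustrative sketch (hypothetical numbers): why a fixed internal
# signal-to-noise deficit looks small in simultaneous masking but large
# in backward/forward masking once basilar-membrane compression is
# taken into account.

def input_level_shift(internal_deficit_db, compression_exponent, simultaneous):
    """Map an internal (post-cochlear) SNR deficit back to the input level.

    In simultaneous masking, signal and masker are compressed together,
    so the deficit passes through roughly unchanged. In non-simultaneous
    masking only the signal is compressed, so the internal deficit is
    magnified by 1/compression_exponent at the input.
    """
    if simultaneous:
        return internal_deficit_db
    return internal_deficit_db / compression_exponent

# A listener needing 3 dB more internal SNR (poor processing efficiency),
# with a typical mid-level compression exponent of ~0.2:
print(input_level_shift(3, 0.2, simultaneous=True))   # small simultaneous-masking deficit
print(input_level_shift(3, 0.2, simultaneous=False))  # much larger non-simultaneous deficit
```

    This reproduces the qualitative pattern described above: small simultaneous-masking deficits alongside much larger backward- and forward-masking deficits from a single efficiency deficit.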

  3. Effects of visual working memory on brain information processing of irrelevant auditory stimuli.

    PubMed

    Qu, Jiagui; Rizak, Joshua D; Zhao, Lun; Li, Minghong; Ma, Yuanye

    2014-01-01

    Selective attention has traditionally been viewed as a sensory processing modulator that promotes cognitive processing efficiency by favoring relevant stimuli while inhibiting irrelevant stimuli. However, the cross-modal processing of irrelevant information during working memory (WM) has rarely been investigated. In this study, the modulation of irrelevant auditory information by the brain during a visual WM task was investigated. The N100 auditory evoked potential (N100-AEP) following an auditory click was used to evaluate the selective attention to auditory stimulus during WM processing and at rest. N100-AEP amplitudes were found to be significantly affected in the left-prefrontal, mid-prefrontal, right-prefrontal, left-frontal, and mid-frontal regions while performing a high WM load task. In contrast, no significant differences were found between N100-AEP amplitudes in WM states and rest states under a low WM load task in all recorded brain regions. Furthermore, no differences were found between the time latencies of N100-AEP troughs in WM states and rest states while performing either the high or low WM load task. These findings suggested that the prefrontal cortex (PFC) may integrate information from different sensory channels to protect perceptual integrity during cognitive processing.

  4. Reading with sounds: sensory substitution selectively activates the visual word form area in the blind.

    PubMed

    Striem-Amit, Ella; Cohen, Laurent; Dehaene, Stanislas; Amedi, Amir

    2012-11-08

    Using a visual-to-auditory sensory-substitution algorithm, congenitally fully blind adults were taught to read and recognize complex images using "soundscapes" (sounds topographically representing images). fMRI was used to examine key questions regarding the visual word form area (VWFA): its selectivity for letters over other visual categories without visual experience, its feature tolerance for reading in a novel sensory modality, and its plasticity for scripts learned in adulthood. The blind activated the VWFA specifically and selectively during the processing of letter soundscapes relative to both textures and visually complex object categories and relative to mental imagery and semantic-content controls. Further, VWFA recruitment for reading soundscapes emerged after 2 hr of training in a blind adult on a novel script. Therefore, the VWFA shows category selectivity regardless of input sensory modality, visual experience, and long-term familiarity or expertise with the script. The VWFA may perform a flexible task-specific rather than sensory-specific computation, possibly linking letter shapes to phonology. Copyright © 2012 Elsevier Inc. All rights reserved.

  5. A Behavioral Study of Distraction by Vibrotactile Novelty

    ERIC Educational Resources Information Center

    Parmentier, Fabrice B. R.; Ljungberg, Jessica K.; Elsley, Jane V.; Lindkvist, Markus

    2011-01-01

    Past research has demonstrated that the occurrence of unexpected task-irrelevant changes in the auditory or visual sensory channels captured attention in an obligatory fashion, hindering behavioral performance in ongoing auditory or visual categorization tasks and generating orientation and re-orientation electrophysiological responses. We report…

  6. Effects of in-vehicle warning information displays with or without spatial compatibility on driving behaviors and response performance.

    PubMed

    Liu, Yung-Ching; Jhuang, Jing-Wun

    2012-07-01

    A driving simulator study was conducted to evaluate the effects of five in-vehicle warning information displays upon drivers' emergent response and decision performance. These displays included a visual display, auditory displays with and without spatial compatibility, and hybrid visual-plus-auditory displays with and without spatial compatibility. Thirty volunteer drivers were recruited to perform various tasks that involved driving, stimulus-response (S-R), divided attention and stress rating. Results show that for single-modality displays, drivers benefited more from the visual display of warning information than from auditory displays with or without spatial compatibility. However, the auditory display with spatial compatibility significantly improved drivers' performance in reacting to the divided attention task and making accurate S-R task decisions. Drivers' best performance results were obtained for the hybrid display with spatial compatibility. Hybrid displays enabled drivers to respond fastest and achieve the best accuracy in both S-R and divided attention tasks. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  7. A Task-Optimized Neural Network Replicates Human Auditory Behavior, Predicts Brain Responses, and Reveals a Cortical Processing Hierarchy.

    PubMed

    Kell, Alexander J E; Yamins, Daniel L K; Shook, Erica N; Norman-Haignere, Sam V; McDermott, Josh H

    2018-05-02

    A core goal of auditory neuroscience is to build quantitative models that predict cortical responses to natural sounds. Reasoning that a complete model of auditory cortex must solve ecologically relevant tasks, we optimized hierarchical neural networks for speech and music recognition. The best-performing network contained separate music and speech pathways following early shared processing, potentially replicating human cortical organization. The network performed both tasks as well as humans and exhibited human-like errors despite not being optimized to do so, suggesting common constraints on network and human performance. The network predicted fMRI voxel responses substantially better than traditional spectrotemporal filter models throughout auditory cortex. It also provided a quantitative signature of cortical representational hierarchy: primary and non-primary responses were best predicted by intermediate and late network layers, respectively. The results suggest that task optimization provides a powerful set of tools for modeling sensory systems. Copyright © 2018 Elsevier Inc. All rights reserved.
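    The branched architecture described above (shared early processing feeding separate speech and music pathways) can be sketched schematically. Layer sizes, weights, and the input below are arbitrary stand-ins, not the published network:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Schematic sketch of the branched architecture described in the abstract:
# shared early layers feed two task-specific pathways (speech vs. music).
# Layer sizes and random weights are illustrative only.
W_shared = rng.normal(size=(64, 32))   # shared early processing stage
W_speech = rng.normal(size=(32, 10))   # speech-recognition head
W_music = rng.normal(size=(32, 5))     # music-recognition head

def forward(cochleagram):
    h = relu(cochleagram @ W_shared)   # shared representation
    return h @ W_speech, h @ W_music   # two task-specific outputs

x = rng.normal(size=(1, 64))           # stand-in for a cochleagram frame
speech_logits, music_logits = forward(x)
print(speech_logits.shape, music_logits.shape)  # (1, 10) (1, 5)
```

    In the study, intermediate layers (here, the shared stage) best predicted primary auditory cortex, while later branch-specific layers best predicted non-primary responses.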

  8. Rhythm synchronization performance and auditory working memory in early- and late-trained musicians.

    PubMed

    Bailey, Jennifer A; Penhune, Virginia B

    2010-07-01

    Behavioural and neuroimaging studies provide evidence for a possible "sensitive" period in childhood development during which musical training results in long-lasting changes in brain structure and auditory and motor performance. Previous work from our laboratory has shown that adult musicians who begin training before the age of 7 (early-trained; ET) perform better on a visuomotor task than those who begin after the age of 7 (late-trained; LT), even when matched on total years of musical training and experience. Two questions were raised regarding the findings from this experiment. First, would this group performance difference be observed using a more familiar, musically relevant task such as auditory rhythms? Second, would cognitive abilities mediate this difference in task performance? To address these questions, ET and LT musicians, matched on years of musical training, hours of current practice and experience, were tested on an auditory rhythm synchronization task. The task consisted of six woodblock rhythms of varying levels of metrical complexity. In addition, participants were tested on cognitive subtests measuring vocabulary, working memory and pattern recognition. The two groups of musicians differed in their performance of the rhythm task, such that the ET musicians were better at reproducing the temporal structure of the rhythms. There were no group differences on the cognitive measures. Interestingly, across both groups, individual task performance correlated with auditory working memory abilities and years of formal training. These results support the idea of a sensitive period during the early years of childhood for developing sensorimotor synchronization abilities via musical training.

  9. Attention to sound improves auditory reliability in audio-tactile spatial optimal integration.

    PubMed

    Vercillo, Tiziana; Gori, Monica

    2015-01-01

    The role of attention in multisensory processing is still poorly understood. In particular, it is unclear whether directing attention toward a sensory cue dynamically reweights cue reliability during integration of multiple sensory signals. In this study, we investigated the impact of attention on combining audio-tactile signals in an optimal fashion. We used the Maximum Likelihood Estimation (MLE) model to predict audio-tactile spatial localization on the body surface. We developed a new audio-tactile device composed of several small units, each consisting of a speaker and a tactile vibrator independently controllable by external software. We tested participants in an attentional and a non-attentional condition. In the attentional experiment, participants performed a dual-task paradigm: they were required to evaluate the duration of a sound while performing an audio-tactile spatial task. Three unisensory or multisensory stimuli (conflicting or non-conflicting sounds and vibrations arranged along the horizontal axis) were presented sequentially. In the primary task, participants had to evaluate, in a space bisection task, the position of the second stimulus (the probe) with respect to the others (the standards). In the secondary task they had to report occasional changes in the duration of the second auditory stimulus. In the non-attentional condition participants performed only the primary task (space bisection). Our results showed enhanced auditory precision (and auditory weights) in the attentional condition with respect to the non-attentional control condition. The results of this study support the idea that modality-specific attention modulates multisensory integration.
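    The MLE prediction mentioned above follows the standard reliability-weighted cue-combination rule; a minimal sketch with the textbook formulas (the positions and variances below are hypothetical, not the study's measurements):

```python
# Standard maximum-likelihood cue-combination rule used to predict
# optimal audio-tactile localization (textbook formulas; the stimulus
# values here are hypothetical).

def mle_combine(est_a, var_a, est_t, var_t):
    """Combine auditory and tactile position estimates, weighting each
    by its reliability (inverse variance)."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_t)
    w_t = 1 - w_a
    combined = w_a * est_a + w_t * est_t
    combined_var = (var_a * var_t) / (var_a + var_t)
    return combined, combined_var

# With a noisy auditory cue, the more reliable tactile cue dominates:
pos, var = mle_combine(est_a=10.0, var_a=4.0, est_t=14.0, var_t=1.0)
print(pos, var)
```

    On this account, attending to sound shrinks the auditory variance var_a, which raises the auditory weight w_a, matching the reported increase in auditory weights under attention.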

  10. Differential occipital responses in early- and late-blind individuals during a sound-source discrimination task.

    PubMed

    Voss, Patrice; Gougoux, Frederic; Zatorre, Robert J; Lassonde, Maryse; Lepore, Franco

    2008-04-01

    Blind individuals do not necessarily receive more auditory stimulation than sighted individuals. However, to interact effectively with their environment, they have to rely on non-visual cues (in particular auditory) to a greater extent. Often benefiting from cerebral reorganization, they not only learn to rely more on such cues but also may process them better and, as a result, demonstrate exceptional abilities in auditory spatial tasks. Here we examine the effects of blindness on brain activity, using positron emission tomography (PET), during a sound-source discrimination task (SSDT) in both early- and late-onset blind individuals. This should not only provide an answer to the question of whether the blind manifest changes in brain activity but also allow a direct comparison of the two subgroups performing an auditory spatial task. The task was presented under two listening conditions: one binaural and one monaural. The binaural task did not show any significant behavioural differences between groups, but it demonstrated striate and extrastriate activation in the early-blind groups. A subgroup of early-blind individuals, on the other hand, performed significantly better than all the other groups during the monaural task, and these enhanced skills were correlated with elevated activity within the left dorsal extrastriate cortex. Surprisingly, activation of the right ventral visual pathway, which was significantly activated in the late-blind individuals during the monaural task, was negatively correlated with performance. This suggests the possibility that not all cross-modal plasticity is beneficial. Overall, our results not only support previous findings showing that occipital cortex of early-blind individuals is functionally engaged in spatial auditory processing but also shed light on the impact the age of onset of blindness can have on the ensuing cross-modal plasticity.

  11. Spatial localization deficits and auditory cortical dysfunction in schizophrenia

    PubMed Central

    Perrin, Megan A.; Butler, Pamela D.; DiCostanzo, Joanna; Forchelli, Gina; Silipo, Gail; Javitt, Daniel C.

    2014-01-01

    Background: Schizophrenia is associated with deficits in the ability to discriminate auditory features such as pitch and duration that localize to primary cortical regions. Lesions of primary vs. secondary auditory cortex also produce differentiable effects on the ability to localize and discriminate free-field sound, with primary cortical lesions affecting variability as well as accuracy of response. Variability of sound localization has not previously been studied in schizophrenia.
    Methods: The study compared performance between patients with schizophrenia (n=21) and healthy controls (n=20) on sound localization and spatial discrimination tasks using low-frequency tones generated from seven speakers concavely arranged with 30 degrees of separation.
    Results: For the sound localization task, patients showed reduced accuracy (p=0.004) and greater overall response variability (p=0.032), particularly in the right hemifield. Performance was also impaired on the spatial discrimination task (p=0.018). On both tasks, poorer accuracy in the right hemifield was associated with greater cognitive symptom severity. Better accuracy in the left hemifield was associated with greater hallucination severity on the sound localization task (p=0.026), but no significant association was found for the spatial discrimination task.
    Conclusion: Patients show impairments in both sound localization and spatial discrimination of sounds presented free-field, with a pattern comparable to that of individuals with right superior temporal lobe lesions that include primary auditory cortex (Heschl’s gyrus). Right primary auditory cortex dysfunction may protect against hallucinations by influencing laterality of functioning. PMID:20619608

  12. Speaker variability augments phonological processing in early word learning

    PubMed Central

    Rost, Gwyneth C.; McMurray, Bob

    2010-01-01

    Infants in the early stages of word learning have difficulty learning lexical neighbors (i.e., word pairs that differ by a single phoneme), despite the ability to discriminate the same contrast in a purely auditory task. While prior work has focused on top-down explanations for this failure (e.g., task demands, lexical competition), none has examined whether bottom-up acoustic-phonetic factors play a role. We hypothesized that lexical neighbor learning could be improved by incorporating greater acoustic variability in the words being learned, as this may buttress still-developing phonetic categories and help infants identify the relevant contrastive dimension. Infants were exposed to pictures accompanied by labels spoken by either a single or multiple speakers. At test, infants in the single-speaker condition failed to recognize the difference between the two words, while infants who heard multiple speakers discriminated between them. PMID:19143806

  13. Auditory reafferences: the influence of real-time feedback on movement control.

    PubMed

    Kennel, Christian; Streese, Lukas; Pizzera, Alexandra; Justen, Christoph; Hohmann, Tanja; Raab, Markus

    2015-01-01

    Auditory reafferences are real-time auditory products created by a person's own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online to understand their role in motor control. We examined whether step sounds, occurring as a by-product of running, have an influence on the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with non-artificial auditory cues. Our results support the existing theoretical understanding of action-perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in the motor learning processes.

  14. Memory and learning with rapid audiovisual sequences

    PubMed Central

    Keller, Arielle S.; Sekuler, Robert

    2015-01-01

    We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed. PMID:26575193

  15. Memory and learning with rapid audiovisual sequences.

    PubMed

    Keller, Arielle S; Sekuler, Robert

    2015-01-01

    We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed.

  16. Beyond the real world: attention debates in auditory mismatch negativity.

    PubMed

    Chung, Kyungmi; Park, Jin Young

    2018-04-11

    The aim of this study was to address the potential for the auditory mismatch negativity (aMMN) to be used in applied event-related potential (ERP) studies by determining whether the aMMN is an attention-dependent ERP component and whether it is differently modulated across visual tasks or virtual reality (VR) stimuli with different visual properties and visual complexity levels. A total of 80 participants, aged 19-36 years, were assigned to either a reading-task (21 men and 19 women) or a VR-task (22 men and 18 women) group. The two visual-task groups of healthy young adults were matched in age, sex, and handedness. All participants were instructed to focus only on the given visual tasks and to ignore auditory change detection. While participants in the reading-task group read text slides, those in the VR-task group viewed three 360° VR videos in a random order and rated how visually complex the given virtual environment was immediately after each VR video ended. Although perceived visual complexity differed partially with the brightness of the virtual environments, neither visual property (distance or brightness) significantly modulated aMMN amplitudes. A further analysis compared the aMMN amplitudes elicited by a typical MMN task and by an applied VR task. No significant difference in aMMN amplitudes was found between the two groups, who completed visual tasks with different visual-task demands. In conclusion, the aMMN is a reliable ERP marker of preattentive cognitive processing for auditory deviance detection.

  17. Evidence for cue-independent spatial representation in the human auditory cortex during active listening.

    PubMed

    Higgins, Nathan C; McLaughlin, Susan A; Rinne, Teemu; Stecker, G Christopher

    2017-09-05

    Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues-particularly interaural time and level differences (ITD and ILD)-that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and-critically-for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.

  18. Evidence for cue-independent spatial representation in the human auditory cortex during active listening

    PubMed Central

    McLaughlin, Susan A.; Rinne, Teemu; Stecker, G. Christopher

    2017-01-01

    Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues—particularly interaural time and level differences (ITD and ILD)—that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and—critically—for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues. PMID:28827357

  19. Double dissociation of 'what' and 'where' processing in auditory cortex.

    PubMed

    Lomber, Stephen G; Malhotra, Shveta

    2008-05-01

    Studies of cortical connections or neuronal function in different cerebral areas support the hypothesis that parallel cortical processing streams, similar to those identified in visual cortex, may exist in the auditory system. However, this model has not yet been behaviorally tested. We used reversible cooling deactivation to investigate whether the individual regions in cat nonprimary auditory cortex that are responsible for processing the pattern of an acoustic stimulus or localizing a sound in space could be doubly dissociated in the same animal. We found that bilateral deactivation of the posterior auditory field resulted in deficits in a sound-localization task, whereas bilateral deactivation of the anterior auditory field resulted in deficits in a pattern-discrimination task, but not vice versa. These findings support a model of cortical organization that proposes that identifying an acoustic stimulus ('what') and its spatial location ('where') are processed in separate streams in auditory cortex.

  20. Effect of the cognitive-motor dual-task using auditory cue on balance of survivors with chronic stroke: a pilot study.

    PubMed

    Choi, Wonjae; Lee, GyuChang; Lee, Seungwon

    2015-08-01

    To investigate the effect of a cognitive-motor dual-task using auditory cues on the balance of patients with chronic stroke. Randomized controlled trial. Inpatient rehabilitation center. Thirty-seven individuals with chronic stroke. The participants were randomly allocated to the dual-task group (n=19) and the single-task group (n=18). The dual-task group performed a cognitive-motor dual-task in which they carried a circular ring from side to side according to a random auditory cue during treadmill walking. The single-task group walked on a treadmill only. All subjects completed 15 min per session, three times per week, for four weeks, with conventional rehabilitation five times per week over the four weeks. Before and after the intervention, both static and dynamic balance were measured with a force platform and the Timed Up and Go (TUG) test. The dual-task group showed significant improvement in all variables compared to the single-task group, except for anteroposterior (AP) sway velocity with eyes open and TUG at follow-up: mediolateral (ML) sway velocity with eyes open (dual-task group vs. single-task group: 2.11 mm/s vs. 0.38 mm/s), ML sway velocity with eyes closed (2.91 mm/s vs. 1.35 mm/s), AP sway velocity with eyes closed (4.84 mm/s vs. 3.12 mm/s). After the intervention, all variables showed significant improvement in the dual-task group compared to baseline. The study results suggest that the performance of a cognitive-motor dual-task using auditory cues may influence balance improvements in chronic stroke patients. © The Author(s) 2014.

  1. Minimal effects of visual memory training on the auditory performance of adult cochlear implant users

    PubMed Central

    Oba, Sandra I.; Galvin, John J.; Fu, Qian-Jie

    2014-01-01

    Auditory training has been shown to significantly improve cochlear implant (CI) users’ speech and music perception. However, it is unclear whether post-training gains in performance were due to improved auditory perception or to generally improved attention, memory and/or cognitive processing. In this study, speech and music perception, as well as auditory and visual memory were assessed in ten CI users before, during, and after training with a non-auditory task. A visual digit span (VDS) task was used for training, in which subjects recalled sequences of digits presented visually. After the VDS training, VDS performance significantly improved. However, there were no significant improvements for most auditory outcome measures (auditory digit span, phoneme recognition, sentence recognition in noise, digit recognition in noise), except for small (but significant) improvements in vocal emotion recognition and melodic contour identification. Post-training gains were much smaller with the non-auditory VDS training than observed in previous auditory training studies with CI users. The results suggest that post-training gains observed in previous studies were not solely attributable to improved attention or memory, and were more likely due to improved auditory perception. The results also suggest that CI users may require targeted auditory training to improve speech and music perception. PMID:23516087

  2. Domain-general sequence learning deficit in specific language impairment.

    PubMed

    Lukács, Agnes; Kemény, Ferenc

    2014-05-01

    Grammar-specific accounts of specific language impairment (SLI) have been challenged by recent claims that language problems are a consequence of impairments in domain-general mechanisms of learning that also play a key role in the process of language acquisition. Our studies were designed to test the generality and nature of this learning deficit by focusing on both sequential and nonsequential, and on verbal and nonverbal, domains. Twenty-nine children with SLI were compared with age-matched typically developing (TD) control children using (a) a serial reaction time task (SRT), testing the learning of motor sequences; (b) an artificial grammar learning (AGL) task, testing the extraction of regularities from auditory sequences; and (c) a weather prediction task (WP), testing probabilistic category learning in a nonsequential task. For the 2 sequence learning tasks, a significantly smaller proportion of children showed evidence of learning in the SLI than in the TD group (χ2 tests, p < .001 for the SRT task, p < .05 for the AGL task), whereas the proportion of learners on the WP task was the same in the 2 groups. The level of learning for SLI learners was comparable with that of TD children on all tasks (with great individual variation). Taken together, these findings suggest that domain-general processes of implicit sequence learning tend to be impaired in SLI. Further research is needed to clarify the relationship of deficits in implicit learning and language.
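
The group comparisons above rest on χ2 tests of the proportion of learners in each group. For a 2x2 table the Pearson statistic has a simple closed form; a minimal sketch with hypothetical cell counts (the abstract reports group sizes of 29 and p-values, not the raw counts used here):

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table:

               learners   non-learners
        SLI        a            b
        TD         c            d
    """
    n = a + b + c + d
    # Closed form: n * (ad - bc)^2 divided by the four marginal totals.
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts for illustration only: e.g. 8/29 SLI learners
# vs. 24/29 TD learners on a sequence learning task.
stat = chi2_2x2(8, 21, 24, 5)
significant = stat > 3.841  # chi-square critical value at p = .05, df = 1
```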

  3. Auditory Memory for Timbre

    ERIC Educational Resources Information Center

    McKeown, Denis; Wellsted, David

    2009-01-01

    Psychophysical studies are reported examining how the context of recent auditory stimulation may modulate the processing of new sounds. The question posed is how recent tone stimulation may affect ongoing performance in a discrimination task. In the task, two complex sounds occurred in successive intervals. A single target component of one complex…

  4. Reducing involuntary memory by interfering consolidation of stressful auditory information: A pilot study.

    PubMed

    Tabrizi, Fara; Jansson, Billy

    2016-03-01

    Intrusive emotional memories were induced by aversive auditory stimuli and modulated with cognitive tasks performed post-encoding (i.e., during consolidation). A between-subjects design was used with four conditions: three consolidation-interference tasks (a visuospatial and two verbal interference tasks) and a no-task control condition. Forty-one participants listened to a soundtrack depicting traumatic scenes (e.g., police brutality, torture and rape). Immediately after listening to the soundtrack, the subjects completed a randomly assigned task for 10 min. Intrusions from the soundtrack were reported in a diary during the following seven-day period. In line with a modality-specific approach to intrusion modulation, auditory intrusions were reduced by the verbal tasks compared to both the no-task condition and the visuospatial interference task. The study did not control for individual differences in imagery ability, which may be a factor in intrusion development. The results provide an increased understanding of how intrusive mental images can be modulated, which may have implications for preventive treatment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Correlations among within-channel and between-channel auditory gap-detection thresholds in normal listeners.

    PubMed

    Phillips, Dennis P; Smith, Jennifer C

    2004-01-01

    We obtained data on within-channel and between-channel auditory temporal gap-detection acuity in the normal population. Ninety-five normal listeners were tested for gap-detection thresholds, for conditions in which the gap was bounded by spectrally identical, and by spectrally different, acoustic markers. Separate thresholds were obtained with the use of an adaptive tracking method, for gaps delimited by narrowband noise bursts centred on 1.0 kHz, noise bursts centred on 4.0 kHz, and for gaps bounded by a leading marker of 4.0 kHz noise and a trailing marker of 1.0 kHz noise. Gap thresholds were lowest for silent periods bounded by identical markers--'within-channel' stimuli. Gap thresholds were significantly longer for the between-channel stimulus--silent periods bounded by unidentical markers (p < 0.0001). Thresholds for the two within-channel tasks were highly correlated (R = 0.76). Thresholds for the between-channel stimulus were weakly correlated with thresholds for the within-channel stimuli (1.0 kHz, R = 0.39; and 4.0 kHz, R = 0.46). The relatively poor predictability of between-channel thresholds from the within-channel thresholds is new evidence on the separability of the mechanisms that mediate performance of the two tasks. The data confirm that the acuity difference for the tasks, which has previously been demonstrated in only small numbers of highly trained listeners, extends to a population of untrained listeners. The acuity of the between-channel mechanism may be relevant to the formation of voice-onset time-category boundaries in speech perception.
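
The adaptive tracking method used to estimate these gap thresholds is typically a transformed up-down staircase. A rough simulation assuming a 2-down/1-up rule (which converges on the gap duration detected on ~70.7% of trials); the step size, starting gap, and simulated listener here are hypothetical, not the study's actual procedure:

```python
import random

def run_staircase(true_threshold_ms, start_ms=50.0, step_ms=2.0,
                  n_reversals=8, seed=0):
    """Simulate a 2-down/1-up adaptive track for gap detection.

    The 'listener' is a hypothetical stand-in: it detects the gap
    whenever the gap duration plus Gaussian internal noise exceeds
    its true threshold.
    """
    rng = random.Random(seed)
    gap, correct_run, direction, reversals = start_ms, 0, None, []
    while len(reversals) < n_reversals:
        correct = gap + rng.gauss(0.0, 1.0) > true_threshold_ms
        if correct:
            correct_run += 1
            if correct_run == 2:           # two correct in a row -> shorter gap
                correct_run = 0
                if direction == 'up':
                    reversals.append(gap)  # track turned around: log a reversal
                direction = 'down'
                gap = max(gap - step_ms, 0.0)
        else:                              # any error -> longer gap
            correct_run = 0
            if direction == 'down':
                reversals.append(gap)
            direction = 'up'
            gap += step_ms
    # Threshold estimate: mean of the last six reversal points.
    return sum(reversals[-6:]) / len(reversals[-6:])

estimate = run_staircase(10.0)  # lands near the simulated 10-ms threshold
```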

  6. Efficacy of Individual Computer-Based Auditory Training for People with Hearing Loss: A Systematic Review of the Evidence

    PubMed Central

    Henshaw, Helen; Ferguson, Melanie A.

    2013-01-01

    Background Auditory training involves active listening to auditory stimuli and aims to improve performance in auditory tasks. As such, auditory training is a potential intervention for the management of people with hearing loss. Objective This systematic review (PROSPERO 2011: CRD42011001406) evaluated the published evidence-base for the efficacy of individual computer-based auditory training to improve speech intelligibility, cognition and communication abilities in adults with hearing loss, with or without hearing aids or cochlear implants. Methods A systematic search of eight databases and key journals identified 229 articles published since 1996, 13 of which met the inclusion criteria. Data were independently extracted and reviewed by the two authors. Study quality was assessed using ten pre-defined scientific and intervention-specific measures. Results Auditory training resulted in improved performance for trained tasks in 9/10 articles that reported on-task outcomes. Although significant generalisation of learning was shown to untrained measures of speech intelligibility (11/13 articles), cognition (1/1 articles) and self-reported hearing abilities (1/2 articles), improvements were small and not robust. Where reported, compliance with computer-based auditory training was high, and retention of learning was shown at post-training follow-ups. Published evidence was of very-low to moderate study quality. Conclusions Our findings demonstrate that published evidence for the efficacy of individual computer-based auditory training for adults with hearing loss is not robust and therefore cannot be reliably used to guide intervention at this time. We identify a need for high-quality evidence to further examine the efficacy of computer-based auditory training for people with hearing loss. PMID:23675431

  7. Working memory training in congenitally blind individuals results in an integration of occipital cortex in functional networks.

    PubMed

    Gudi-Mindermann, Helene; Rimmele, Johanna M; Nolte, Guido; Bruns, Patrick; Engel, Andreas K; Röder, Brigitte

    2018-04-12

    The functional relevance of crossmodal activation (e.g. auditory activation of occipital brain regions) in congenitally blind individuals is still not fully understood. The present study tested whether the occipital cortex of blind individuals is integrated into a challenged functional network. Working memory (WM) training over four sessions was implemented. Congenitally blind and matched sighted participants were adaptively trained with an n-back task employing either voices (auditory training) or tactile stimuli (tactile training). In addition, a minimally demanding 1-back task served as an active control condition. Power and functional connectivity of EEG activity evolving during the maintenance period of an auditory 2-back task, run prior to and after the WM training, were analyzed. Modality-specific (following auditory training) and modality-independent WM training effects (following both auditory and tactile training) were assessed. Improvements in auditory WM were observed in all groups, and blind and sighted individuals did not differ in training gains. Auditory and tactile training of sighted participants led, relative to the active control group, to an increase in fronto-parietal theta-band power, suggesting a training-induced strengthening of the existing modality-independent WM network. No power effects were observed in the blind. Rather, after auditory training the blind showed a decrease in theta-band connectivity between central, parietal, and occipital electrodes compared to the blind tactile-training and active control groups. Furthermore, in the blind, auditory training increased beta-band connectivity between fronto-parietal, central and occipital electrodes. In the congenitally blind, these findings suggest a stronger integration of occipital areas into the auditory WM network. Copyright © 2018 Elsevier B.V. All rights reserved.
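
The n-back task at the heart of this training simply asks, on each trial, whether the current item matches the one presented n positions earlier. A minimal sketch of that target rule (the stimulus labels here are arbitrary placeholders, not the study's voice or tactile stimuli):

```python
def nback_targets(stream, n=2):
    """Mark each position in a stimulus stream as a target if the item
    matches the one presented n positions earlier."""
    return [i >= n and stream[i] == stream[i - n] for i in range(len(stream))]

hits = nback_targets(['A', 'B', 'A', 'C', 'D', 'C'], n=2)
# targets at positions 2 ('A' matches 'A') and 5 ('C' matches 'C')
```

The minimally demanding control condition in the study corresponds to calling the same rule with n=1.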

  8. Auditory short-term memory in the primate auditory cortex.

    PubMed

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.

  9. Auditory short-term memory in the primate auditory cortex

    PubMed Central

    Scott, Brian H.; Mishkin, Mortimer

    2015-01-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active ‘working memory’ bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a ‘match’ stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. PMID:26541581

  10. Auditory short-term memory activation during score reading.

    PubMed

    Simoens, Veerle L; Tervaniemi, Mari

    2013-01-01

    Performing music from a score requires reading ahead of what is being played in order to anticipate the actions needed to produce the notes. Score reading thus involves not only the decoding of a visual score and its comparison to the auditory feedback, but also short-term storage of the musical information, owing to the delay of the auditory feedback while reading ahead. This study investigates the mechanisms by which musical information is encoded in short-term memory during such a complicated procedure. There were three parts to this study. First, professional musicians participated in an electroencephalographic (EEG) experiment examining slow wave potentials during an interval of short-term memory storage in a situation that requires cross-modal translation and short-term storage of visual material for comparison with delayed auditory material, as is the case in music score reading. This delayed visual-to-auditory matching task was compared with delayed visual-visual and auditory-auditory matching tasks in terms of EEG topography and voltage amplitudes. Second, an additional behavioural experiment determined which type of distractor was most interfering with the score reading-like task. Third, the participants' self-reported strategies were analyzed. All three parts of this study point towards the same conclusion: during music score reading, the musician most likely first translates the visual score into an auditory cue, probably starting around 700 or 1300 ms, ready for storage and delayed comparison with the auditory feedback.

  11. Auditory Short-Term Memory Activation during Score Reading

    PubMed Central

    Simoens, Veerle L.; Tervaniemi, Mari

    2013-01-01

    Performing music from a score requires reading ahead of what is being played in order to anticipate the actions needed to produce the notes. Score reading thus involves not only the decoding of a visual score and its comparison to the auditory feedback, but also short-term storage of the musical information, owing to the delay of the auditory feedback while reading ahead. This study investigates the mechanisms by which musical information is encoded in short-term memory during such a complicated procedure. There were three parts to this study. First, professional musicians participated in an electroencephalographic (EEG) experiment examining slow wave potentials during an interval of short-term memory storage in a situation that requires cross-modal translation and short-term storage of visual material for comparison with delayed auditory material, as is the case in music score reading. This delayed visual-to-auditory matching task was compared with delayed visual-visual and auditory-auditory matching tasks in terms of EEG topography and voltage amplitudes. Second, an additional behavioural experiment determined which type of distractor was most interfering with the score reading-like task. Third, the participants' self-reported strategies were analyzed. All three parts of this study point towards the same conclusion: during music score reading, the musician most likely first translates the visual score into an auditory cue, probably starting around 700 or 1300 ms, ready for storage and delayed comparison with the auditory feedback. PMID:23326487

  12. Emotion modulates activity in the 'what' but not 'where' auditory processing pathway.

    PubMed

    Kryklywy, James H; Macpherson, Ewan A; Greening, Steven G; Mitchell, Derek G V

    2013-11-15

    Auditory cortices can be separated into dissociable processing pathways similar to those observed in the visual domain. Emotional stimuli elicit enhanced neural activation within sensory cortices when compared to neutral stimuli. This effect is particularly notable in the ventral visual stream. Little is known, however, about how emotion interacts with dorsal processing streams, and essentially nothing is known about the impact of emotion on auditory stimulus localization. In the current study, we used fMRI in concert with individualized auditory virtual environments to investigate the effect of emotion during an auditory stimulus localization task. Surprisingly, participants were significantly slower to localize emotional relative to neutral sounds. A separate localizer scan was performed to isolate neural regions sensitive to stimulus location independent of emotion. When applied to the main experimental task, a significant main effect of location, but not emotion, was found in this ROI. A whole-brain analysis of the data revealed that posterior-medial regions of auditory cortex were modulated by sound location; however, additional anterior-lateral areas of auditory cortex demonstrated enhanced neural activity to emotional compared to neutral stimuli. The latter region resembled areas described in dual pathway models of auditory processing as the 'what' processing stream, prompting a follow-up task to generate an identity-sensitive ROI (the 'what' pathway) independent of location and emotion. Within this region, significant main effects of location and emotion were identified, as well as a significant interaction. These results suggest that emotion modulates activity in the 'what,' but not the 'where,' auditory processing pathway. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Prevention and Treatment of Noise-Induced Tinnitus. Revision

    DTIC Science & Technology

    2013-07-01

    ...CTBP2 immunolabeling) for their loss following noise. Sub-Task 1c: Assessment of Auditory Nerve (VGLUT1 immunolabel) terminals on neurons in Ventral and Dorsal Cochlear Nucleus (VCN, DCN) for their loss following noise. Sub-Task 1d: Assessment of VGLUT2, VAT & VGAT immunolabeled terminals in VCN ... significant reduction in connections compared to animals without noise exposure. Sub-Task 1c: Assessment of Auditory Nerve (VGLUT1 immunolabel...

  14. Relation between brain activation and lexical performance.

    PubMed

    Booth, James R; Burman, Douglas D; Meyer, Joel R; Gitelman, Darren R; Parrish, Todd B; Mesulam, M Marsel

    2003-07-01

    Functional magnetic resonance imaging (fMRI) was used to determine whether performance on lexical tasks was correlated with cerebral activation patterns. We found that such relationships did exist and that their anatomical distribution reflected the neurocognitive processing routes required by the task. Better performance on intramodal tasks (determining if visual words were spelled the same or if auditory words rhymed) was correlated with more activation in unimodal regions corresponding to the modality of sensory input, namely the fusiform gyrus (BA 37) for written words and the superior temporal gyrus (BA 22) for spoken words. Better performance in tasks requiring cross-modal conversions (determining if auditory words were spelled the same or if visual words rhymed), on the other hand, was correlated with more activation in posterior heteromodal regions, including the supramarginal gyrus (BA 40) and the angular gyrus (BA 39). Better performance in these cross-modal tasks was also correlated with greater activation in unimodal regions corresponding to the target modality of the conversion process (i.e., fusiform gyrus for auditory spelling and superior temporal gyrus for visual rhyming). In contrast, performance on the auditory spelling task was inversely correlated with activation in the superior temporal gyrus possibly reflecting a greater emphasis on the properties of the perceptual input rather than on the relevant transmodal conversions. Copyright 2003 Wiley-Liss, Inc.

  15. The role of working memory in auditory selective attention.

    PubMed

    Dalton, Polly; Santangelo, Valerio; Spence, Charles

    2009-11-01

    A growing body of research now demonstrates that working memory plays an important role in controlling the extent to which irrelevant visual distractors are processed during visual selective attention tasks (e.g., Lavie, Hirst, De Fockert, & Viding, 2004). Recently, it has been shown that the successful selection of tactile information also depends on the availability of working memory (Dalton, Lavie, & Spence, 2009). Here, we investigate whether working memory plays a role in auditory selective attention. Participants focused their attention on short continuous bursts of white noise (targets) while attempting to ignore pulsed bursts of noise (distractors). Distractor interference in this auditory task, as measured in terms of the difference in performance between congruent and incongruent distractor trials, increased significantly under high (vs. low) load in a concurrent working-memory task. These results provide the first evidence demonstrating a causal role for working memory in reducing interference by irrelevant auditory distractors.

  16. Integration of auditory and kinesthetic information in motion: alterations in Parkinson's disease.

    PubMed

    Sabaté, Magdalena; Llanos, Catalina; Rodríguez, Manuel

    2008-07-01

    The main aim in this work was to study the interaction between auditory and kinesthetic stimuli and its influence on motion control. The study was performed on healthy subjects and patients with Parkinson's disease (PD). Thirty-five right-handed volunteers (young participants, age-matched healthy participants, and PD patients) were studied with three different motor tasks (slow cyclic movements, fast cyclic movements, and slow continuous movements) and under the action of kinesthetic stimuli and sounds at different beat rates. The action of kinesthesia was evaluated by comparing real movements with virtual movements (movements imagined but not executed). The fast cyclic task was accelerated by kinesthetic but not by auditory stimuli. The slow cyclic task changed with the beat rate of sounds but not with kinesthetic stimuli. The slow continuous task showed an integrated response to both sensory modalities. These data show that the influence of multisensory integration on motion changes with the motor task and that some motor patterns are modulated by the simultaneous action of auditory and kinesthetic information, a cross-modal integration that was different in PD patients. PsycINFO Database Record (c) 2008 APA, all rights reserved.

  17. A crossmodal crossover: opposite effects of visual and auditory perceptual load on steady-state evoked potentials to irrelevant visual stimuli.

    PubMed

    Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B

    2012-07-16

    Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another are less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. Statistical learning and auditory processing in children with music training: An ERP study.

    PubMed

    Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Ibrahim, Ronny; Arciuli, Joanne

    2017-07-01

    The question of whether musical training is associated with enhanced auditory and cognitive abilities in children is of considerable interest. In the present study, we compared children with music training versus those without music training across a range of auditory and cognitive measures, including the ability to implicitly detect statistical regularities in input (statistical learning). Statistical learning of regularities embedded in auditory and visual stimuli was measured in musically trained and age-matched untrained children between the ages of 9 and 11 years. In addition to collecting behavioural measures, we recorded electrophysiological measures to obtain an online measure of segmentation during the statistical learning tasks. Musically trained children showed better performance on melody discrimination, rhythm discrimination, frequency discrimination, and auditory statistical learning. Furthermore, grand-averaged ERPs showed that triplet onset (initial stimulus) elicited larger responses in the musically trained children during both auditory and visual statistical learning tasks. In addition, children's music skills were associated with performance on auditory and visual behavioural statistical learning tasks. Our data suggest that individual differences in musical skills are associated with children's ability to detect regularities. The ERP data suggest that musical training is associated with better encoding of both auditory and visual stimuli. Although causality must be explored in further research, these results may have implications for developing music-based remediation strategies for children with learning impairments. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  19. Latency of modality-specific reactivation of auditory and visual information during episodic memory retrieval.

    PubMed

    Ueno, Daisuke; Masumoto, Kouhei; Sutani, Kouichi; Iwaki, Sunao

    2015-04-15

    This study used magnetoencephalography (MEG) to examine the latency of modality-specific reactivation in the visual and auditory cortices during a recognition task to determine the effects of reactivation on episodic memory retrieval. Nine right-handed healthy young adults participated in the experiment. The experiment consisted of a word-encoding phase and two recognition phases. Three encoding conditions were included: encoding words alone (word-only) and encoding words presented with either related pictures (visual) or related sounds (auditory). The recognition task was conducted in the MEG scanner 15 min after the completion of the encoding phase. After the recognition test, a source-recognition task was given, in which participants were required to indicate whether each recognition word had not been presented during encoding or, if it had, which type of accompanying information it had been presented with. Word recognition in the auditory condition was higher than that in the word-only condition. Confidence-of-recognition scores (d') and the source-recognition test showed superior performance in both the visual and the auditory conditions compared with the word-only condition. An equivalent current dipole analysis of MEG data indicated that higher equivalent current dipole amplitudes occurred in the right fusiform gyrus during the visual condition and in the superior temporal auditory cortices during the auditory condition, both 450-550 ms after onset of the recognition stimuli. Results suggest that reactivation of visual and auditory brain regions during recognition binds language with modality-specific information and that reactivation enhances confidence in one's recognition performance.
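    The confidence-of-recognition score d' mentioned above is the standard signal-detection sensitivity index, z(hit rate) - z(false-alarm rate). As a reminder of how it is computed, here is a minimal sketch with hypothetical counts (not the study's data), using a log-linear correction so the z-transform stays finite when a rate would be exactly 0 or 1:

```python
from statistics import NormalDist

def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    Adding 0.5 to each cell (log-linear correction) keeps the inverse
    normal CDF finite for perfect or empty response categories.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical recognition data: 45 hits / 5 misses on old words,
# 10 false alarms / 40 correct rejections on new words.
print(round(d_prime(45, 5, 10, 40), 2))
```

    Higher d' reflects better separation of old from new items independent of response bias; d' = 0 means hits and false alarms are equally likely.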

  20. Attention effects on the processing of task-relevant and task-irrelevant speech sounds and letters

    PubMed Central

    Mittag, Maria; Inauri, Karina; Huovilainen, Tatu; Leminen, Miika; Salo, Emma; Rinne, Teemu; Kujala, Teija; Alho, Kimmo

    2013-01-01

    We used event-related brain potentials (ERPs) to study effects of selective attention on the processing of attended and unattended spoken syllables and letters. Participants were presented with syllables randomly occurring in the left or right ear and spoken by different voices and with a concurrent foveal stream of consonant letters written in darker or lighter fonts. During auditory phonological (AP) and non-phonological tasks, they responded to syllables in a designated ear starting with a vowel and spoken by female voices, respectively. These syllables occurred infrequently among standard syllables starting with a consonant and spoken by male voices. During visual phonological and non-phonological tasks, they responded to consonant letters with names starting with a vowel and to letters written in dark fonts, respectively. These letters occurred infrequently among standard letters with names starting with a consonant and written in light fonts. To examine genuine effects of attention and task on ERPs not overlapped by ERPs associated with target processing or deviance detection, these effects were studied only in ERPs to auditory and visual standards. During selective listening to syllables in a designated ear, ERPs to the attended syllables were negatively displaced during both phonological and non-phonological auditory tasks. Selective attention to letters elicited an early negative displacement and a subsequent positive displacement (Pd) of ERPs to attended letters; the Pd was larger during the visual phonological than the non-phonological task, suggesting a higher demand for attention during the visual phonological task. Active suppression of unattended speech during the AP and non-phonological tasks and during the visual phonological task was suggested by a rejection positivity (RP) to unattended syllables. We also found evidence for suppression of the processing of task-irrelevant visual stimuli in visual ERPs during auditory tasks involving left-ear syllables.
PMID:24348324

  1. Active listening: task-dependent plasticity of spectrotemporal receptive fields in primary auditory cortex.

    PubMed

    Fritz, Jonathan; Elhilali, Mounya; Shamma, Shihab

    2005-08-01

    Listening is an active process in which attentive focus on salient acoustic features in auditory tasks can influence receptive field properties of cortical neurons. Recent studies showing rapid task-related changes in neuronal spectrotemporal receptive fields (STRFs) in primary auditory cortex of the behaving ferret are reviewed in the context of current research on cortical plasticity. Ferrets were trained on spectral tasks, including tone detection and two-tone discrimination, and on temporal tasks, including gap detection and click-rate discrimination. STRF changes could be measured on-line during task performance and occurred within minutes of task onset. During spectral tasks, there were specific spectral changes (enhanced response to tonal target frequency in tone detection and discrimination, suppressed response to tonal reference frequency in tone discrimination). However, only in the temporal tasks was the STRF changed along the temporal dimension, by sharpening temporal dynamics. In ferrets trained on multiple tasks, distinctive and task-specific STRF changes could be observed in the same cortical neurons in successive behavioral sessions. These results suggest that rapid task-related plasticity is an ongoing process that occurs at the network and single-unit levels as the animal switches between different tasks and dynamically adapts cortical STRFs in response to changing acoustic demands.

  2. Learning to listen again: the role of compliance in auditory training for adults with hearing loss.

    PubMed

    Chisolm, Theresa Hnath; Saunders, Gabrielle H; Frederick, Melissa T; McArdle, Rachel A; Smith, Sherri L; Wilson, Richard H

    2013-12-01

    To examine the role of compliance in the outcomes of computer-based auditory training with the Listening and Communication Enhancement (LACE) program in Veterans using hearing aids. The authors examined available LACE training data for 5 tasks (i.e., speech-in-babble, time compression, competing speaker, auditory memory, missing word) from 50 hearing-aid users who participated in a larger, randomized controlled trial designed to examine the efficacy of LACE training. The goals were to determine: (a) whether there were changes in performance over 20 training sessions on trained tasks (i.e., on-task outcomes); and (b) whether compliance, defined as completing all 20 sessions, versus noncompliance, defined as completing fewer than 20 sessions, influenced performance on parallel untrained tasks (i.e., off-task outcomes). The majority of participants (84%) completed all 20 sessions, with maximum outcome occurring after at least 10 sessions of training for some tasks and up to 20 sessions for others. Comparison of baseline to posttest performance revealed statistically significant improvements for 4 of 7 off-task outcome measures for the compliant group, with at least small (0.2 < d < 0.3) Cohen's d effect sizes for 3 of the 4. There were no statistically significant improvements observed for the noncompliant group. The high level of compliance in the present study may be attributable to use of systematized verbal and written instructions with telephone follow-up. Compliance, as expected, appears important for optimizing the outcomes of auditory training. Methods to improve compliance in clinical populations need to be developed, and compliance data are important to report in future studies of auditory training.
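    The effect sizes above are reported as Cohen's d, the standardized mean difference. A minimal sketch of the pooled-standard-deviation form, using hypothetical baseline/posttest scores rather than the study's data:

```python
from statistics import mean, stdev

def cohens_d(pre: list[float], post: list[float]) -> float:
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n1, n2 = len(pre), len(post)
    pooled_var = ((n1 - 1) * stdev(pre) ** 2 + (n2 - 1) * stdev(post) ** 2) / (n1 + n2 - 2)
    return (mean(post) - mean(pre)) / pooled_var ** 0.5

# Hypothetical scores: a small improvement from baseline to posttest.
baseline = [52.0, 55.0, 49.0, 60.0, 58.0]
posttest = [54.0, 57.0, 50.0, 63.0, 59.0]
print(cohens_d(baseline, posttest))
```

    By the usual rule of thumb, d around 0.2 is a small effect and 0.5 a medium one, which is why the 0.2 < d < 0.3 values above are described as "at least small".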

  3. Changes in otoacoustic emissions during selective auditory and visual attention

    PubMed Central

    Walsh, Kyle P.; Pasanen, Edward G.; McFadden, Dennis

    2015-01-01

    Previous studies have demonstrated that the otoacoustic emissions (OAEs) measured during behavioral tasks can have different magnitudes when subjects are attending selectively or not attending. The implication is that the cognitive and perceptual demands of a task can affect the first neural stage of auditory processing—the sensory receptors themselves. However, the directions of the reported attentional effects have been inconsistent, the magnitudes of the observed differences typically have been small, and comparisons across studies have been made difficult by significant procedural differences. In this study, a nonlinear version of the stimulus-frequency OAE (SFOAE), called the nSFOAE, was used to measure cochlear responses from human subjects while they simultaneously performed behavioral tasks requiring selective auditory attention (dichotic or diotic listening), selective visual attention, or relative inattention. Within subjects, the differences in nSFOAE magnitude between inattention and attention conditions were about 2–3 dB for both auditory and visual modalities, and the effect sizes for the differences typically were large for both nSFOAE magnitude and phase. These results reveal that the cochlear efferent reflex is differentially active during selective attention and inattention, for both auditory and visual tasks, although they do not reveal how attention is improved when efferent activity is greater. PMID:25994703

  4. Brain blood flow changes measured by positron emission tomography during an auditory cognitive task in healthy volunteers and in schizophrenic patients.

    PubMed

    Emri, Miklós; Glaub, Teodóra; Berecz, Roland; Lengyel, Zsolt; Mikecz, Pál; Repa, Imre; Bartók, Eniko; Degrell, István; Trón, Lajos

    2006-05-01

    Cognitive deficit is an essential feature of schizophrenia. One of the generally used simple cognitive tasks to characterize specific cognitive dysfunctions is the auditory "oddball" paradigm. During this task, two different tones are presented with different repetition frequencies and the subject is asked to pay attention and to respond to the less frequent tone. The aim of the present study was to apply positron emission tomography (PET) to measure the regional brain blood flow changes induced by an auditory oddball task in healthy volunteers and in stable schizophrenic patients in order to detect activation differences between the two groups. Eight healthy volunteers and 11 schizophrenic patients were studied. The subjects carried out a specific auditory oddball task while cerebral activation, measured via the regional distribution of [15O]-butanol activity changes in the PET camera, was recorded. Task-related activation differed significantly across the patients and controls. The healthy volunteers displayed significant activation in the anterior cingulate area (Brodmann area BA32), while in the schizophrenic patients the activated area was wider, including the mediofrontal regions (BA32 and BA10). The distance between the locations of maximal activation of the two populations was 33 mm, and the cluster size was about twice as large in the patient group. The present results demonstrate that the perfusion changes induced in the schizophrenic patients by this cognitive task extend over a larger part of the mediofrontal cortex than in the healthy volunteers. The different pattern of activation observed during the auditory oddball task in the schizophrenic patients suggests that a larger cortical area (and consequently a larger variety of neuronal networks) is involved in the cognitive processes in these patients.
The dispersion of stimulus processing during a cognitive task requiring sustained attention and stimulus discrimination may play an important role in the pathomechanism of the disorder.
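    The oddball paradigm described above amounts to a stream of frequent standard tones with rare interleaved targets. As an illustrative sketch only (the tone probabilities and labels below are hypothetical; the abstract does not specify them), such a trial sequence can be generated like this:

```python
import random

def oddball_sequence(n_trials: int, p_target: float = 0.2, seed: int = 1) -> list[str]:
    """Generate an illustrative oddball trial sequence.

    Each trial is independently a rare 'target' tone (probability
    p_target) or a frequent 'standard' tone. A seeded generator makes
    the sequence reproducible across runs.
    """
    rng = random.Random(seed)
    return ["target" if rng.random() < p_target else "standard" for _ in range(n_trials)]

seq = oddball_sequence(200)
print(seq.count("target"))  # roughly 20% of trials are targets
```

    Real oddball designs typically also constrain the sequence (e.g., forbidding back-to-back targets), which this sketch omits for brevity.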

  5. Effects of laterality and pitch height of an auditory accessory stimulus on horizontal response selection: the Simon effect and the SMARC effect.

    PubMed

    Nishimura, Akio; Yokosawa, Kazuhiko

    2009-08-01

    In the present article, we investigated the effects of pitch height and the presented ear (laterality) of an auditory stimulus, irrelevant to the ongoing visual task, on horizontal response selection. Performance was better when the response and the stimulated ear spatially corresponded (Simon effect), and when the spatial-musical association of response codes (SMARC) correspondence was maintained, that is, when a right (left) response was paired with a high-pitched (low-pitched) tone. These findings reveal an automatic activation of spatially and musically associated responses by task-irrelevant auditory accessory stimuli. Pitch height is strong enough to influence horizontal responses despite the modality difference between the accessory stimulus and the task target.

  6. Auditory and Visual Sustained Attention in Children with Speech Sound Disorder

    PubMed Central

    Murphy, Cristina F. B.; Pagan-Neves, Luciana O.; Wertzner, Haydée F.; Schochat, Eliane

    2014-01-01

    Although research has demonstrated that children with specific language impairment (SLI) and reading disorder (RD) exhibit sustained attention deficits, no study has investigated sustained attention in children with speech sound disorder (SSD). Given the overlap of symptoms, such as phonological memory deficits, between these different language disorders (i.e., SLI, SSD and RD) and the relationships between working memory, attention and language processing, it is worthwhile to investigate whether deficits in sustained attention also occur in children with SSD. A total of 55 children, 18 diagnosed with SSD (8.11 ± 1.231) and 37 typically developing children (8.76 ± 1.461), were invited to participate in this study. Auditory and visual sustained-attention tasks were applied. Children with SSD performed worse on these tasks; they committed a greater number of auditory false alarms and exhibited a significant decline in performance over the course of the auditory detection task. The extent to which performance is related to auditory perceptual difficulties and probable working memory deficits is discussed. Further studies are needed to better understand the specific nature of these deficits and their clinical implications. PMID:24675815

  7. Postcategorical auditory distraction in short-term memory: Insights from increased task load and task type.

    PubMed

    Marsh, John E; Yang, Jingqi; Qualter, Pamela; Richardson, Cassandra; Perham, Nick; Vachon, François; Hughes, Robert W

    2018-06-01

    Task-irrelevant speech impairs short-term serial recall appreciably. On the interference-by-process account, the processing of physical (i.e., precategorical) changes in speech yields order cues that conflict with the serial-ordering process deployed to perform the serial recall task. In this view, the postcategorical properties (e.g., phonology, meaning) of speech play no role. The present study reassessed the implications of recent demonstrations of auditory postcategorical distraction in serial recall that have been taken as support for an alternative, attentional-diversion, account of the irrelevant speech effect. Focusing on the disruptive effect of emotionally valent compared with neutral words on serial recall, we show that the distracter-valence effect is eliminated under conditions (high task-encoding load) thought to shield against attentional diversion, whereas the general effect of speech (neutral words compared with quiet) remains unaffected (Experiment 1). Furthermore, the distracter-valence effect generalizes to a task that does not require the processing of serial order (the missing-item task), whereas the effect of speech per se is attenuated in this task (Experiment 2). We conclude that postcategorical auditory distraction phenomena in serial short-term memory (STM) are incidental: they are observable in such a setting but, unlike the acoustically driven irrelevant speech effect, are not integral to it. As such, the findings support a duplex-mechanism account over a unitary view of auditory distraction. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  8. Auditory post-processing in a passive listening task is deficient in Alzheimer's disease.

    PubMed

    Bender, Stephan; Bluschke, Annet; Dippel, Gabriel; Rupp, André; Weisbrod, Matthias; Thomas, Christine

    2014-01-01

    To investigate whether automatic auditory post-processing is deficient in patients with Alzheimer's disease and is related to sensory gating. Event-related potentials were recorded during a passive listening task to examine the automatic transient storage of auditory information (short click pairs). Patients with Alzheimer's disease were compared to a healthy age-matched control group. A young healthy control group was included to assess effects of physiological aging. A bilateral frontal negativity in combination with deep temporal positivity occurring 500 ms after stimulus offset was reduced in patients with Alzheimer's disease, but was unaffected by physiological aging. Its amplitude correlated with short-term memory capacity, but was independent of sensory gating in healthy elderly controls. Source analysis revealed a dipole pair in the anterior temporal lobes. Results suggest that auditory post-processing is deficient in Alzheimer's disease, but is not typically related to sensory gating. The deficit could neither be explained by physiological aging nor by problems in earlier stages of auditory perception. Correlations with short-term memory capacity and executive control tasks suggested an association with memory encoding and/or overall cognitive control deficits. An auditory late negative wave could represent a marker of auditory working memory encoding deficits in Alzheimer's disease. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  9. Effect of Perceptual Load on Semantic Access by Speech in Children

    ERIC Educational Resources Information Center

    Jerger, Susan; Damian, Markus F.; Mills, Candice; Bartlett, James; Tye-Murray, Nancy; Abdi, Herve

    2013-01-01

    Purpose: To examine whether semantic access by speech requires attention in children. Method: Children ("N" = 200) named pictures and ignored distractors on a cross-modal (distractors: auditory-no face) or multimodal (distractors: auditory-static face and audiovisual- dynamic face) picture word task. The cross-modal task had a low load,…

  10. Late Maturation of Auditory Perceptual Learning

    ERIC Educational Resources Information Center

    Huyck, Julia Jones; Wright, Beverly A.

    2011-01-01

    Adults can improve their performance on many perceptual tasks with training, but when does the response to training become mature? To investigate this question, we trained 11-year-olds, 14-year-olds and adults on a basic auditory task (temporal-interval discrimination) using a multiple-session training regimen known to be effective for adults. The…

  11. Medial Auditory Thalamus Is Necessary for Acquisition and Retention of Eyeblink Conditioning to Cochlear Nucleus Stimulation

    ERIC Educational Resources Information Center

    Halverson, Hunter E.; Poremba, Amy; Freeman, John H.

    2015-01-01

    Associative learning tasks commonly involve an auditory stimulus, which must be projected through the auditory system to the sites of memory induction for learning to occur. The cochlear nucleus (CN) projection to the pontine nuclei has been posited as the necessary auditory pathway for cerebellar learning, including eyeblink conditioning.…

  12. Early Visual Deprivation Severely Compromises the Auditory Sense of Space in Congenitally Blind Children

    ERIC Educational Resources Information Center

    Vercillo, Tiziana; Burr, David; Gori, Monica

    2016-01-01

    A recent study has shown that congenitally blind adults, who have never had visual experience, are impaired on an auditory spatial bisection task (Gori, Sandini, Martinoli, & Burr, 2014). In this study we investigated how thresholds for auditory spatial bisection and auditory discrimination develop with age in sighted and congenitally blind…

  13. Transient human auditory cortex activation during volitional attention shifting

    PubMed Central

    Uhlig, Christian Harm; Gutschalk, Alexander

    2017-01-01

    While strong activation of auditory cortex is generally found for exogenous orienting of attention, endogenous, intra-modal shifting of auditory attention has not yet been demonstrated to evoke transient activation of the auditory cortex. Here, we used fMRI to test if endogenous shifting of attention is also associated with transient activation of the auditory cortex. In contrast to previous studies, attention shifts were completely self-initiated and not cued by transient auditory or visual stimuli. Stimuli were two dichotic, continuous streams of tones, whose perceptual grouping was not ambiguous. Participants were instructed to continuously focus on one of the streams and switch between the two after a while, indicating the time and direction of each attentional shift by pressing one of two response buttons. The BOLD response around the time of the button presses revealed robust activation of the auditory cortex, along with activation of a distributed task network. To test if the transient auditory cortex activation was specifically related to auditory orienting, a self-paced motor task was added, where participants were instructed to ignore the auditory stimulation while they pressed the response buttons in alternation and at a similar pace. Results showed that attentional orienting produced stronger activity in auditory cortex, but auditory cortex activation was also observed for button presses without focused attention to the auditory stimulus. The response related to attention shifting was stronger contralateral to the side to which attention was shifted. Contralateral-dominant activation was also observed in dorsal parietal cortex areas, confirming previous observations for auditory attention shifting in studies that used auditory cues. PMID:28273110

  14. Increasing Working Memory Load Reduces Processing of Cross-Modal Task-Irrelevant Stimuli Even after Controlling for Task Difficulty and Executive Capacity

    PubMed Central

    Simon, Sharon S.; Tusch, Erich S.; Holcomb, Phillip J.; Daffner, Kirk R.

    2016-01-01

    The classic account of the load theory (LT) of attention suggests that increasing cognitive load leads to greater processing of task-irrelevant stimuli due to competition for limited executive resource that reduces the ability to actively maintain current processing priorities. Studies testing this hypothesis have yielded widely divergent outcomes. The inconsistent results may, in part, be related to variability in executive capacity (EC) and task difficulty across subjects in different studies. Here, we used a cross-modal paradigm to investigate whether augmented working memory (WM) load leads to increased early distracter processing, and controlled for the potential confounders of EC and task difficulty. Twenty-three young subjects were engaged in a primary visual WM task, under high and low load conditions, while instructed to ignore irrelevant auditory stimuli. Demands of the high load condition were individually titrated to make task difficulty comparable across subjects with differing EC. Event-related potentials (ERPs) were used to measure neural activity in response to stimuli presented in both the task relevant modality (visual) and task-irrelevant modality (auditory). Behavioral results indicate that the load manipulation and titration procedure of the primary visual task were successful. ERPs demonstrated that in response to visual target stimuli, there was a load-related increase in the posterior slow wave, an index of sustained attention and effort. Importantly, under high load, there was a decrease of the auditory N1 in response to distracters, a marker of early auditory processing. These results suggest that increased WM load is associated with enhanced attentional engagement and protection from distraction in a cross-modal setting, even after controlling for task difficulty and EC. Our findings challenge the classic LT and offer support for alternative models. PMID:27536226

  16. Familiarity with a Vocal Category Biases the Compartmental Expression of "Arc/Arg3.1" in Core Auditory Cortex

    ERIC Educational Resources Information Center

    Ivanova, Tamara N.; Gross, Christina; Mappus, Rudolph C.; Kwon, Yong Jun; Bassell, Gary J.; Liu, Robert C.

    2017-01-01

    Learning to recognize a stimulus category requires experience with its many natural variations. However, the mechanisms that allow a category's sensorineural representation to be updated after experiencing new exemplars are not well understood, particularly at the molecular level. Here we investigate how a natural vocal category induces expression…

  17. Neural correlates of auditory recognition memory in the primate dorsal temporal pole

    PubMed Central

    Ng, Chi-Wing; Plakke, Bethany

    2013-01-01

    Temporal pole (TP) cortex is associated with higher-order sensory perception and/or recognition memory, as human patients with damage in this region show impaired performance during some tasks requiring recognition memory (Olson et al. 2007). The underlying mechanisms of TP processing are largely based on examination of the visual nervous system in humans and monkeys, while little is known about neuronal activity patterns in the auditory portion of this region, dorsal TP (dTP; Poremba et al. 2003). The present study examines single-unit activity of dTP in rhesus monkeys performing a delayed matching-to-sample task utilizing auditory stimuli, wherein two sounds are determined to be the same or different. Neurons of dTP encode several task-relevant events during the delayed matching-to-sample task, and encoding of auditory cues in this region is associated with accurate recognition performance. Population activity in dTP shows a match suppression mechanism to identical, repeated sound stimuli similar to that observed in the visual object identification pathway located ventral to dTP (Desimone 1996; Nakamura and Kubota 1996). However, in contrast to sustained visual delay-related activity in nearby analogous regions, auditory delay-related activity in dTP is transient and limited. Neurons in dTP respond selectively to different sound stimuli and often change their sound response preferences between experimental contexts. Current findings suggest a significant role for dTP in auditory recognition memory similar in many respects to the visual nervous system, while delay memory firing patterns are not prominent, which may relate to monkeys' shorter forgetting thresholds for auditory vs. visual objects. PMID:24198324

  18. Applicability of central auditory processing disorder models.

    PubMed

    Jutras, Benoît; Loubert, Monique; Dupuis, Jean-Luc; Marcoux, Caroline; Dumont, Véronique; Baril, Michèle

    2007-12-01

    Central auditory processing disorder ([C]APD) is a relatively recent construct that has given rise to 2 theoretical models: the Buffalo Model and the Bellis/Ferre Model. These models describe 4 and 5 (C)APD categories, respectively, although neither model was based on data from peer-reviewed sources. The present study examines the applicability of these models to clinical practice. This retrospective study reviewed 178 records of children diagnosed with (C)APD, of which 48 were retained for analysis. More than 80% of the children could be classified into one of the Buffalo Model categories, while more than 90% remained unclassified under the Bellis/Ferre Model. This discrepancy can be explained by the fact that the classification of the Buffalo Model is based primarily on a single central auditory test (Staggered Spondaic Word), whereas the Bellis/Ferre Model classification uses a combination of auditory test results. The 2 models provide a conceptual framework for (C)APD, but they must be further refined to be fully applicable in clinical settings.

  19. Auditory Pitch Perception in Autism Spectrum Disorder Is Associated With Nonverbal Abilities.

    PubMed

    Chowdhury, Rakhee; Sharda, Megha; Foster, Nicholas E V; Germain, Esther; Tryfon, Ana; Doyle-Thomas, Krissy; Anagnostou, Evdokia; Hyde, Krista L

    2017-11-01

    Atypical sensory perception and heterogeneous cognitive profiles are common features of autism spectrum disorder (ASD). However, previous findings on auditory sensory processing in ASD are mixed. Accordingly, auditory perception and its relation to cognitive abilities in ASD remain poorly understood. Here, children with ASD, and age- and intelligence quotient (IQ)-matched typically developing children, were tested on a low- and a higher-level pitch processing task. Verbal and nonverbal cognitive abilities were measured using the Wechsler Abbreviated Scale of Intelligence. There were no group differences in performance on either auditory task or IQ measure. However, there was significant variability in performance on the auditory tasks in both groups that was predicted by nonverbal, not verbal, skills. These results suggest that auditory perception is related to nonverbal reasoning rather than verbal abilities in ASD and typically developing children. In addition, these findings provide evidence for preserved pitch processing in school-age children with ASD with average IQ, supporting the idea that there may be a subgroup of individuals with ASD who do not present perceptual or cognitive difficulties. Future directions involve examining whether similar perceptual-cognitive relationships might be observed in a broader sample of individuals with ASD, such as those with language impairment or lower IQ.

  20. Auditory cortical activity during cochlear implant-mediated perception of spoken language, melody, and rhythm.

    PubMed

    Limb, Charles J; Molloy, Anne T; Jiradejvong, Patpong; Braun, Allen R

    2010-03-01

    Despite the significant advances in language perception for cochlear implant (CI) recipients, music perception continues to be a major challenge for implant-mediated listening. Our understanding of the neural mechanisms that underlie successful implant listening remains limited. To our knowledge, this study represents the first neuroimaging investigation of music perception in CI users, with the hypothesis that CI subjects would demonstrate greater auditory cortical activation than normal-hearing controls. H₂¹⁵O positron emission tomography (PET) was used here to assess auditory cortical activation patterns in ten postlingually deafened CI patients and ten normal-hearing control subjects. Subjects were presented with language, melody, and rhythm tasks during scanning. Our results show significant auditory cortical activation in implant subjects in comparison to control subjects for language, melody, and rhythm. The greatest activity in CI users compared to controls was seen for language tasks, which is thought to reflect both implant and neural specializations for language processing. For musical stimuli, PET scanning revealed significantly greater activation during rhythm perception in CI subjects (compared to control subjects), and the least activation during melody perception, which was the most difficult task for CI users. These results may suggest a possible relationship between auditory performance and degree of auditory cortical activation in implant recipients that deserves further study.

  1. Auditory Scene Analysis: An Attention Perspective

    PubMed Central

    2017-01-01

    Purpose This review article provides a new perspective on the role of attention in auditory scene analysis. Method A framework for understanding how attention interacts with stimulus-driven processes to facilitate task goals is presented. Previously reported data obtained through behavioral and electrophysiological measures in adults with normal hearing are summarized to demonstrate attention effects on auditory perception—from passive processes that organize unattended input to attention effects that act at different levels of the system. These data show that attention can sharpen stream organization toward behavioral goals, identify auditory events obscured by noise, and limit passive processing capacity. Conclusions A model of attention is provided that illustrates how the auditory system performs multilevel analyses that involve interactions between stimulus-driven input and top-down processes. Overall, these studies show that (a) stream segregation occurs automatically and sets the basis for auditory event formation; (b) attention interacts with automatic processing to facilitate task goals; and (c) information about unattended sounds is not lost when selecting one organization over another. Our results support a neural model that allows multiple sound organizations to be held in memory and accessed simultaneously through a balance of automatic and task-specific processes, allowing flexibility for navigating noisy environments with competing sound sources. Presentation Video http://cred.pubs.asha.org/article.aspx?articleid=2601618 PMID:29049599

  2. Age Differences in Visual-Auditory Self-Motion Perception during a Simulated Driving Task

    PubMed Central

    Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L.

    2016-01-01

    Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion. PMID:27199829

  3. Reading in the dark: neural correlates and cross-modal plasticity for learning to read entire words without visual experience.

    PubMed

    Sigalov, Nadine; Maidenbaum, Shachar; Amedi, Amir

    2016-03-01

    Cognitive neuroscience has long attempted to determine the ways in which cortical selectivity develops, and the impact of nature vs. nurture on it. Congenital blindness (CB) offers a unique opportunity to test this question, as the brains of blind individuals develop without visual experience. Here we approach this question through the reading network. Several areas in the visual cortex have been implicated as part of the reading network, and one of the main ones among them is the visual word form area (VWFA), which is selective to the form of letters and words. But what happens in the CB brain? On the one hand, it has been shown that cross-modal plasticity leads to the recruitment of occipital areas, including the VWFA, for linguistic tasks. On the other hand, we have recently demonstrated VWFA activity for letters in contrast to other visual categories when the information is provided via other senses such as touch or audition. Which of these processes dominates? By which mechanism does the CB brain process reading? Using fMRI and visual-to-auditory sensory substitution, which transfers the topographical features of the letters, we compared reading with semantic and scrambled conditions in a group of CB participants. We found activation in early auditory and visual cortices during the early processing phase (letter), while the later phase (word) showed VWFA and bilateral dorsal-intraparietal activations. This further supports the notion that many visual regions in general, even early visual areas, maintain a predilection for task processing even when the modality is variable and in spite of putative lifelong linguistic cross-modal plasticity. Furthermore, we find that the VWFA is recruited preferentially for letter and word form, while it was not recruited, and even exhibited deactivation, for an immediately subsequent semantic task, suggesting that, despite only brief sensory substitution experience, orthographic processing can dominate semantic processing in the VWFA. On a wider scope, this implies that, at least in some cases, cross-modal plasticity that enables the recruitment of areas for new tasks may be dominated by sensory-independent, task-specific activation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. What Does the Right Hemisphere Know about Phoneme Categories?

    ERIC Educational Resources Information Center

    Wolmetz, Michael; Poeppel, David; Rapp, Brenda

    2011-01-01

    Innate auditory sensitivities and familiarity with the sounds of language give rise to clear influences of phonemic categories on adult perception of speech. With few exceptions, current models endorse highly left-hemisphere-lateralized mechanisms responsible for the influence of phonemic category on speech perception, based primarily on results…

  5. Early auditory change detection implicitly facilitated by ignored concurrent visual change during a Braille reading task.

    PubMed

    Aoyama, Atsushi; Haruyama, Tomohiro; Kuriki, Shinya

    2013-09-01

    Unconscious monitoring of multimodal stimulus changes enables humans to effectively sense the external environment. Such automatic change detection is thought to be reflected in auditory and visual mismatch negativity (MMN) and mismatch negativity fields (MMFs). These are event-related potentials and magnetic fields, respectively, evoked by deviant stimuli within a sequence of standard stimuli, and both are typically studied during irrelevant visual tasks that cause the stimuli to be ignored. Due to the sensitivity of MMN/MMF to potential effects of explicit attention to vision, however, it is unclear whether multisensory co-occurring changes can purely facilitate early sensory change detection reciprocally across modalities. We adopted a tactile task involving the reading of Braille patterns as a neutral ignore condition, while measuring magnetoencephalographic responses to concurrent audiovisual stimuli that were infrequently deviated either in auditory, visual, or audiovisual dimensions; 1000-Hz standard tones were switched to 1050-Hz deviant tones and/or two-by-two standard check patterns displayed on both sides of visual fields were switched to deviant reversed patterns. The check patterns were set to be faint enough so that the reversals could be easily ignored even during Braille reading. While visual MMFs were virtually undetectable even for visual and audiovisual deviants, significant auditory MMFs were observed for auditory and audiovisual deviants, originating from bilateral supratemporal auditory areas. Notably, auditory MMFs were significantly enhanced for audiovisual deviants from about 100 ms post-stimulus, as compared with the summation responses for auditory and visual deviants or for each of the unisensory deviants recorded in separate sessions. Evidenced by high tactile task performance with unawareness of visual changes, we conclude that Braille reading can successfully suppress explicit attention and that simultaneous multisensory changes can implicitly strengthen automatic change detection from an early stage in a cross-sensory manner, at least in the vision to audition direction.

  6. Functional deficit of the medial prefrontal cortex during emotional sentence attribution in schizophrenia.

    PubMed

    Razafimandimby, Annick; Hervé, Pierre-Yves; Marzloff, Vincent; Brazo, Perrine; Tzourio-Mazoyer, Nathalie; Dollfus, Sonia

    2016-12-01

    Functional brain imaging research has demonstrated that patients with schizophrenia have difficulties with emotion processing, namely in facial emotion perception and emotional prosody. However, the moderating effect of social context and the boundaries of perceptual categories of emotion attribution remain unclear. This study investigated the neural bases of emotional sentence attribution in schizophrenia. Twenty-one schizophrenia patients and 25 healthy subjects underwent an event-related functional magnetic resonance imaging paradigm including two tasks: one to classify sentences according to their emotional content, and the other to classify neutral sentences according to their grammatical person. First, patients showed longer response times compared to controls only during the emotion attribution task. Second, patients with schizophrenia showed reduced activation in bilateral auditory areas irrespective of the presence of emotions. Lastly, during emotional sentence attribution, patients displayed less activation than controls in the medial prefrontal cortex (mPFC). We suggest that the functional abnormality observed in the mPFC during the emotion attribution task could provide a biological basis for social cognition deficits in patients with schizophrenia. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Investigating the visual span in comparative search: the effects of task difficulty and divided attention.

    PubMed

    Pomplun, M; Reingold, E M; Shen, J

    2001-09-01

    In three experiments, participants' visual span was measured in a comparative visual search task in which they had to detect a local match or mismatch between two displays presented side by side. Experiment 1 manipulated the difficulty of the comparative visual search task by contrasting a mismatch detection task with a substantially more difficult match detection task. In Experiment 2, participants were tested in a single-task condition involving only the visual task and a dual-task condition in which they concurrently performed an auditory task. Finally, in Experiment 3, participants performed two dual-task conditions, which differed in the difficulty of the concurrent auditory task. Both the comparative search task difficulty (Experiment 1) and the divided attention manipulation (Experiments 2 and 3) produced strong effects on visual span size.

  8. Sustained Cortical and Subcortical Measures of Auditory and Visual Plasticity following Short-Term Perceptual Learning.

    PubMed

    Lau, Bonnie K; Ruggles, Dorea R; Katyal, Sucharit; Engel, Stephen A; Oxenham, Andrew J

    2017-01-01

    Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects.

  10. Auditory conflict and congruence in frontotemporal dementia.

    PubMed

    Clark, Camilla N; Nicholas, Jennifer M; Agustus, Jennifer L; Hardy, Christopher J D; Russell, Lucy L; Brotherhood, Emilie V; Dick, Katrina M; Marshall, Charles R; Mummery, Catherine J; Rohrer, Jonathan D; Warren, Jason D

    2017-09-01

    Impaired analysis of signal conflict and congruence may contribute to diverse socio-emotional symptoms in frontotemporal dementias; however, the underlying mechanisms have not been defined. Here we addressed this issue in patients with behavioural variant frontotemporal dementia (bvFTD; n = 19) and semantic dementia (SD; n = 10) relative to healthy older individuals (n = 20). We created auditory scenes in which the semantic and emotional congruity of constituent sounds were independently probed; associated tasks controlled for auditory perceptual similarity, scene parsing and semantic competence. Neuroanatomical correlates of auditory congruity processing were assessed using voxel-based morphometry. Relative to healthy controls, both the bvFTD and SD groups had impaired semantic and emotional congruity processing (after taking auditory control task performance into account) and reduced affective integration of sounds into scenes. Grey matter correlates of auditory semantic congruity processing were identified in distributed regions encompassing prefrontal, parieto-temporal and insular areas, and correlates of auditory emotional congruity in partly overlapping temporal, insular and striatal regions. Our findings suggest that decoding of auditory signal relatedness may probe a generic cognitive mechanism and neural architecture underpinning frontotemporal dementia syndromes. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  11. From Perception to Metacognition: Auditory and Olfactory Functions in Early Blind, Late Blind, and Sighted Individuals

    PubMed Central

    Cornell Kärnekull, Stina; Arshamian, Artin; Nilsson, Mats E.; Larsson, Maria

    2016-01-01

    Although evidence is mixed, studies have shown that blind individuals perform better than sighted individuals at specific auditory, tactile, and chemosensory tasks. However, few studies have assessed blind and sighted individuals across different sensory modalities in the same study. We tested early blind (n = 15), late blind (n = 15), and sighted (n = 30) participants with analogous olfactory and auditory tests of absolute threshold, discrimination, identification, episodic recognition, and metacognitive ability. Although the multivariate analysis of variance (MANOVA) showed no overall effect of blindness and no interaction with modality, follow-up between-group contrasts indicated a blind-over-sighted advantage in auditory episodic recognition, which was most pronounced in early blind individuals. In contrast to the auditory modality, there was no empirical support for compensatory effects in any of the olfactory tasks. There was no conclusive evidence for group differences in metacognitive ability to predict episodic recognition performance. Taken together, the results showed no evidence of overall superior performance in blind relative to sighted individuals across olfactory and auditory functions, although early blind individuals excelled in episodic auditory recognition memory. This observation may be related to an experience-induced increase in auditory attentional capacity. PMID:27729884

  12. P300 as a measure of processing capacity in auditory and visual domains in Specific Language Impairment

    PubMed Central

    Evans, Julia L.; Pollak, Seth D.

    2011-01-01

    This study examined the electrophysiological correlates of auditory and visual working memory in children with Specific Language Impairment (SLI). Children with SLI and age-matched controls (11;9 – 14;10) completed visual and auditory working memory tasks while event-related potentials (ERPs) were recorded. In the auditory condition, children with SLI performed similarly to controls when the memory load was kept low (1-back memory load). As expected, when demands for auditory working memory were higher, children with SLI showed decreases in accuracy and attenuated P3b responses. However, children with SLI also evinced difficulties in the visual working memory tasks. In both the low (1-back) and high (2-back) memory load conditions, P3b amplitude was significantly lower for the SLI group than for the age-matched control group. These data suggest a domain-general working memory deficit in SLI that is manifested across auditory and visual modalities. PMID:21316354

  13. Detrimental Effects of Earphone Conversation on Auditory Environmental Monitoring of Visually Impaired People

    ERIC Educational Resources Information Center

    Verstijnen, I. M.; van Mierlo, C. M.; de Ruijter, P.

    2008-01-01

    In order to investigate the effect of concurrent phoning and auditory environmental monitoring, the performance of visually impaired people was observed on a dual task that consisted of two simulation tasks. Subjects wore either a bone conducting headset, or closed or open (air conduction) earphones. Reaction times and the correctness of responses…

  14. Between- and within-Ear Congruency and Laterality Effects in an Auditory Semantic/Emotional Prosody Conflict Task

    ERIC Educational Resources Information Center

    Techentin, Cheryl; Voyer, Daniel; Klein, Raymond M.

    2009-01-01

    The present study investigated the influence of within- and between-ear congruency on interference and laterality effects in an auditory semantic/prosodic conflict task. Participants were presented dichotically with words (e.g., mad, sad, glad) pronounced in either congruent or incongruent emotional tones (e.g., angry, happy, or sad) and…

  15. The Development of Visual and Auditory Selective Attention Using the Central-Incidental Paradigm.

    ERIC Educational Resources Information Center

    Conroy, Robert L.; Weener, Paul

    Analogous auditory and visual central-incidental learning tasks were administered to 24 students from each of the second, fourth, and sixth grades. The visual tasks served as another modification of Hagen's central-incidental learning paradigm, with the interpretation that focal attention processes continue to develop until the age of 12 or 13…

  16. Writing Tasks and Immediate Auditory Memory in Peruvian Schoolchildren

    ERIC Educational Resources Information Center

    Ventura-León, José Luís; Caycho, Tomás

    2017-01-01

    The purpose of the study is to determine the relationship between a group of writing tasks and the immediate auditory memory, as well as to establish differences according to sex and level of study. Two hundred and three schoolchildren of fifth and sixth grade of elementary education from Lima (Peru) participated; they were selected by a…

  17. Increasing Independence in Self-Care Tasks for Children with Autism Using Self-Operated Auditory Prompts

    ERIC Educational Resources Information Center

    Mays, Nicole McGaha; Heflin, L. Juane

    2011-01-01

    This study was conducted to determine the effects of self-operated auditory prompting systems (SOAPs) on independent self-care task completion of elementary-school-aged children with autism and intellectual disabilities. Prerecorded verbal prompts on a student-operated tape recorder were employed to facilitate independence in washing hands and…

  18. Modality-specific effects on crosstalk in task switching: evidence from modality compatibility using bimodal stimulation.

    PubMed

    Stephan, Denise Nadine; Koch, Iring

    2016-11-01

    The present study was aimed at examining modality-specific influences in task switching. To this end, participants switched either between modality-compatible spatial discrimination tasks (auditory-vocal and visual-manual) or modality-incompatible ones (auditory-manual and visual-vocal). In addition, auditory and visual stimuli were presented simultaneously (i.e., bimodally) in each trial, so that selective attention was required to process the task-relevant stimulus. The inclusion of bimodal stimuli enabled us to assess congruence effects as a converging measure of increased between-task interference. The tasks followed a pre-instructed sequence of double alternations (AABB), so that no explicit task cues were required. The results show that switching between two modality-incompatible tasks increases both switch costs and congruence effects compared to switching between two modality-compatible tasks. The finding of increased congruence effects in modality-incompatible tasks supports our explanation in terms of ideomotor "backward" linkages between anticipated response effects and the stimuli that called for this response in the first place. According to this generalized ideomotor idea, the modality match between response effects and stimuli would prime selection of a response in the compatible modality. This priming would cause increased difficulty in ignoring the competing stimulus and hence increases the congruence effect. Moreover, performance would be hindered when switching between modality-incompatible tasks and facilitated when switching between modality-compatible tasks.

  19. Auditory Discrimination Learning: Role of Working Memory.

    PubMed

    Zhang, Yu-Xuan; Moore, David R; Guiraud, Jeanne; Molloy, Katharine; Yan, Ting-Ting; Amitay, Sygal

    2016-01-01

    Perceptual training is generally assumed to improve perception by modifying the encoding or decoding of sensory information. However, this assumption is incompatible with recent demonstrations that transfer of learning can be enhanced by across-trial variation of training stimuli or task. Here we present three lines of evidence from healthy adults in support of the idea that the enhanced transfer of auditory discrimination learning is mediated by working memory (WM). First, the ability to discriminate small differences in tone frequency or duration was correlated with WM measured with a tone n-back task. Second, training frequency discrimination around a variable frequency transferred to and from WM learning, but training around a fixed frequency did not. The transfer of learning in both directions was correlated with a reduction of the influence of stimulus variation in the discrimination task, linking WM and its improvement to across-trial stimulus interaction in auditory discrimination. Third, while WM training transferred broadly to other WM and auditory discrimination tasks, variable-frequency training on duration discrimination did not improve WM, indicating that stimulus variation challenges and trains WM only if the task demands stimulus updating in the varied dimension. The results provide empirical evidence as well as a theoretic framework for interactions between cognitive and sensory plasticity during perceptual experience.

  20. Neural substrates related to auditory working memory comparisons in dyslexia: An fMRI study

    PubMed Central

    CONWAY, TIM; HEILMAN, KENNETH M.; GOPINATH, KAUNDINYA; PECK, KYUNG; BAUER, RUSSELL; BRIGGS, RICHARD W.; TORGESEN, JOSEPH K.; CROSSON, BRUCE

    2010-01-01

    Adult readers with developmental phonological dyslexia exhibit significant difficulty comparing pseudowords and pure tones in auditory working memory (AWM). This suggests deficient AWM skills in adults diagnosed with dyslexia. Despite behavioral differences, it is unknown whether neural substrates of AWM differ between adults diagnosed with dyslexia and normal readers. Prior neuroimaging of adults diagnosed with dyslexia and normal readers, and post-mortem findings of neural structural anomalies in adults diagnosed with dyslexia, support the hypothesis of atypical neural activity in temporoparietal and inferior frontal regions during AWM tasks in adults diagnosed with dyslexia. We used fMRI during two binaural AWM tasks (pseudoword or pure-tone comparisons) in adults diagnosed with dyslexia (n = 11) and normal readers (n = 11). For both AWM tasks, adults diagnosed with dyslexia exhibited greater activity in left posterior superior temporal (BA 22) and inferior parietal regions (BA 40) than normal readers. Comparing neural activity between groups and between stimulus contrasts (pseudowords vs. tones), adults diagnosed with dyslexia showed greater primary auditory cortex activity (BA 42; tones > pseudowords) than normal readers. Thus, greater activity in primary auditory, posterior superior temporal, and inferior parietal cortices during linguistic and non-linguistic AWM tasks for adults diagnosed with dyslexia compared to normal readers indicates differences in the neural substrates of AWM comparison tasks. PMID:18577292

  1. Enhanced auditory spatial localization in blind echolocators.

    PubMed

    Vercillo, Tiziana; Milne, Jennifer L; Gori, Monica; Goodale, Melvyn A

    2015-01-01

    Echolocation is the extraordinary ability to represent the external environment by using reflected sound waves from self-generated auditory pulses. Blind human expert echolocators show extremely precise spatial acuity and high accuracy in determining the shape and motion of objects by using echoes. In the current study, we investigated whether or not the use of echolocation would improve the representation of auditory space, which is severely compromised in congenitally blind individuals (Gori et al., 2014). The performance of three blind expert echolocators was compared to that of six blind non-echolocators and 11 sighted participants. Two tasks were performed: (1) a space bisection task in which participants judged whether the second of a sequence of three sounds was closer in space to the first or the third sound and (2) a minimum audible angle task in which participants reported which of two sounds presented successively was located more to the right. The blind non-echolocating group showed a severe impairment only in the space bisection task compared to the sighted group. Remarkably, the three blind expert echolocators performed both spatial tasks with similar or even better precision and accuracy than the sighted group. These results suggest that echolocation may improve the general sense of auditory space, most likely through a process of sensory calibration. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Phonological-orthographic consistency for Japanese words and its impact on visual and auditory word recognition.

    PubMed

    Hino, Yasushi; Kusunose, Yuu; Miyamura, Shinobu; Lupker, Stephen J

    2017-01-01

    In most models of word processing, the degrees of consistency in the mappings between orthographic, phonological, and semantic representations are hypothesized to affect reading time. Following Hino, Miyamura, and Lupker's (2011) examination of the orthographic-phonological (O-P) and orthographic-semantic (O-S) consistency for 1,114 Japanese words (339 katakana and 775 kanji words), in the present research, we initially attempted to measure the phonological-orthographic (P-O) consistency for those same words. In contrast to the O-P and O-S consistencies, which were equivalent for kanji and katakana words, the P-O relationships were much more inconsistent for the kanji words than for the katakana words. The impact of kanji words' P-O consistency was then examined in both visual and auditory word recognition tasks. Although there was no effect of P-O consistency in the standard visual lexical-decision task, significant effects were detected in a lexical-decision task with auditory stimuli, in a perceptual identification task using masked visual stimuli, and in a lexical-decision task with degraded visual stimuli. The implications of these results are discussed in terms of the impact of P-O consistency in auditory and visual word recognition. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  3. Auditory Discrimination Learning: Role of Working Memory

    PubMed Central

    Zhang, Yu-Xuan; Moore, David R.; Guiraud, Jeanne; Molloy, Katharine; Yan, Ting-Ting; Amitay, Sygal

    2016-01-01

    Perceptual training is generally assumed to improve perception by modifying the encoding or decoding of sensory information. However, this assumption is incompatible with recent demonstrations that transfer of learning can be enhanced by across-trial variation of training stimuli or task. Here we present three lines of evidence from healthy adults in support of the idea that the enhanced transfer of auditory discrimination learning is mediated by working memory (WM). First, the ability to discriminate small differences in tone frequency or duration was correlated with WM measured with a tone n-back task. Second, training frequency discrimination around a variable frequency transferred to and from WM learning, but training around a fixed frequency did not. The transfer of learning in both directions was correlated with a reduction of the influence of stimulus variation in the discrimination task, linking WM and its improvement to across-trial stimulus interaction in auditory discrimination. Third, while WM training transferred broadly to other WM and auditory discrimination tasks, variable-frequency training on duration discrimination did not improve WM, indicating that stimulus variation challenges and trains WM only if the task demands stimulus updating in the varied dimension. The results provide empirical evidence as well as a theoretical framework for interactions between cognitive and sensory plasticity during perceptual experience. PMID:26799068

  4. Interconnected growing self-organizing maps for auditory and semantic acquisition modeling.

    PubMed

    Cao, Mengxue; Li, Aijun; Fang, Qiang; Kaufmann, Emily; Kröger, Bernd J

    2014-01-01

    Based on the incremental nature of knowledge acquisition, in this study we propose a growing self-organizing neural network approach for modeling the acquisition of auditory and semantic categories. We introduce an Interconnected Growing Self-Organizing Maps (I-GSOM) algorithm, which takes associations between auditory information and semantic information into consideration. Direct phonetic-semantic association is simulated in order to model language acquisition in its early phases, such as the babbling and imitation stages, in which no phonological representations exist. Based on the I-GSOM algorithm, we conducted experiments using paired acoustic and semantic training data. We use a cyclical reinforcing and reviewing training procedure to model the teaching and learning process between children and their communication partners. A reinforcing-by-link training procedure and a link-forgetting procedure are introduced to model the acquisition of associative relations between auditory and semantic information. Experimental results indicate that (1) the I-GSOM network learns the auditory and semantic categories presented in the training data well; (2) clear auditory and semantic boundaries can be found in the network representation; (3) cyclical reinforcing and reviewing training leads to a detailed categorization as well as to a detailed clustering, while keeping the clusters that have already been learned and the network structure that has already been developed stable; and (4) reinforcing-by-link training leads to well-perceived auditory-semantic associations. Our I-GSOM model suggests that it is important to associate auditory information with semantic information during language acquisition. Despite its high level of abstraction, our I-GSOM approach can be interpreted as a biologically inspired neurocomputational model.
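
    The core idea of coupling two maps through reinforced-and-forgotten links can be sketched in a heavily simplified form. The version below uses fixed-size 1-D maps (it omits the "growing" mechanism of I-GSOM entirely), and all names, parameters, and the annealing schedule are illustrative assumptions, not the authors' implementation:

    ```python
    import numpy as np

    def train_linked_soms(aud_data, sem_data, n_units=6, epochs=200,
                          lr=0.3, sigma0=1.5, link_lr=0.1, forget=0.995, seed=0):
        """Two 1-D SOMs coupled by a Hebbian link matrix (simplified sketch).

        On each paired presentation, the link between the auditory and
        semantic best-matching units (BMUs) is reinforced, and all links
        decay slightly (link forgetting).
        """
        rng = np.random.default_rng(seed)
        aud_w = rng.normal(size=(n_units, aud_data.shape[1]))
        sem_w = rng.normal(size=(n_units, sem_data.shape[1]))
        links = np.zeros((n_units, n_units))
        grid = np.arange(n_units)
        for epoch in range(epochs):
            sigma = sigma0 * 0.5 ** (epoch / 50)     # anneal neighborhood width
            for a, s in zip(aud_data, sem_data):
                ba = int(np.argmin(((aud_w - a) ** 2).sum(axis=1)))  # auditory BMU
                bs = int(np.argmin(((sem_w - s) ** 2).sum(axis=1)))  # semantic BMU
                ha = np.exp(-(grid - ba) ** 2 / (2 * sigma ** 2))
                hs = np.exp(-(grid - bs) ** 2 / (2 * sigma ** 2))
                aud_w += lr * ha[:, None] * (a - aud_w)  # neighborhood update
                sem_w += lr * hs[:, None] * (s - sem_w)
                links *= forget                          # link forgetting
                links[ba, bs] += link_lr                 # reinforce-by-link
        return aud_w, sem_w, links
    ```

    After training on paired category data, following the strongest outgoing link from an auditory BMU should land on the semantic unit that co-occurred with it, which is the sense in which such a network "acquires" auditory-semantic associations.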

  5. Topographic EEG activations during timbre and pitch discrimination tasks using musical sounds.

    PubMed

    Auzou, P; Eustache, F; Etevenon, P; Platel, H; Rioux, P; Lambert, J; Lechevalier, B; Zarifian, E; Baron, J C

    1995-01-01

    Successive auditory stimulation sequences were presented binaurally to 18 young normal volunteers. Five conditions were investigated: two reference tasks, assumed to involve passive listening to pairs of musical sounds, and three discrimination tasks, one dealing with pitch and two with timbre (either with or without the attack). A symmetrical montage of 16 EEG channels was recorded for each subject across the different conditions. Two quantitative parameters of EEG activity were compared among the different sequences within five distinct frequency bands. As compared to a rest (no stimulation) condition, both passive listening conditions led to changes in primary auditory cortex areas. The discrimination tasks for pitch and timbre led to right-hemisphere EEG changes organized in two poles: an anterior one and a posterior one. After discussing the electrophysiological aspects of this work, these results are interpreted in terms of a network, including the right temporal neocortex and the right frontal lobe, that maintains the acoustic information in auditory working memory as needed to carry out the discrimination task.

  6. Negative mental imagery in public speaking anxiety: Forming cognitive resistance by taxing visuospatial working memory.

    PubMed

    Homer, Sophie R; Deeprose, Catherine; Andrade, Jackie

    2016-03-01

    This study sought to reconcile two lines of research. Previous studies have identified a prevalent and causal role of negative imagery in social phobia and public speaking anxiety; others have demonstrated that lateral eye movements during visualisation of imagery reduce its vividness, most likely by loading the visuospatial sketchpad of working memory. It was hypothesised that using eye movements to reduce the intensity of negative imagery associated with public speaking may reduce anxiety resulting from imagining a public speaking scenario compared to an auditory control task. Forty undergraduate students scoring high in anxiety on the Personal Report of Confidence as a Speaker scale took part. A semi-structured interview established an image that represented the participant's public speaking anxiety, which was then visualised during an eye movement task or a matched auditory task. Reactions to imagining a hypothetical but realistic public speaking scenario were measured. As hypothesised, representative imagery was established and reduced in vividness more effectively by the eye movement task than the auditory task. The public speaking scenario was then visualised less vividly and generated less anxiety when imagined after performing the eye movement task than after the auditory task. Self-report measures and a hypothetical scenario rather than actual public speaking were used. Replication is required in larger as well as clinical samples. Visuospatial working memory tasks may preferentially reduce anxiety associated with personal images of feared events, and thus provide cognitive resistance which reduces emotional reactions to imagined, and potentially real-life future stressful experiences. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Auditory models for speech analysis

    NASA Astrophysics Data System (ADS)

    Maybury, Mark T.

    This paper reviews the psychophysical basis for auditory models and discusses their application to automatic speech recognition. First, an overview of the human auditory system is presented, followed by a review of current knowledge gleaned from neurological and psychoacoustic experimentation. Next, a general framework describes established peripheral auditory models, which are based on well-understood properties of the peripheral auditory system. This is followed by a discussion of current enhancements to these models to include nonlinearities and synchrony information, as well as other higher auditory functions. Finally, the initial performance of auditory models in the task of speech recognition is examined and additional applications are mentioned.

  8. Effect of dual task activity on reaction time in males and females.

    PubMed

    Kaur, Manjinder; Nagpal, Sangeeta; Singh, Harpreet; Suhalka, M L

    2014-01-01

    The present study was designed to compare auditory and visual reaction times on an audiovisual reaction-time machine during concomitant use of mobile phones in 52 women and 30 men aged 18-40 years. Males showed significantly (p < 0.05) shorter reaction times, both auditory and visual, than females during both single-task and multi-task performance. However, the percentage increase from baseline auditory reaction time during multitasking was greater in men than in women, in both the hand-held (24.38% vs. 18.70%) and hands-free (36.40% vs. 18.40%) modes of cell phone use. Visual reaction time increased non-significantly during multitasking in both groups. Overall, multitasking per se had a detrimental effect on reaction times in both groups studied; hence, it is best avoided during crucial, high-attention-demanding tasks such as driving.

  9. Short-term memory stores organized by information domain.

    PubMed

    Noyce, Abigail L; Cestero, Nishmar; Shinn-Cunningham, Barbara G; Somers, David C

    2016-04-01

    Vision and audition have complementary affinities, with vision excelling in spatial resolution and audition excelling in temporal resolution. Here, we investigated the relationships among the visual and auditory modalities and spatial and temporal short-term memory (STM) using change detection tasks. We created short sequences of visual or auditory items, such that each item within a sequence arose at a unique spatial location at a unique time. On each trial, two successive sequences were presented; subjects attended to either space (the sequence of locations) or time (the sequence of inter-item intervals) and reported whether the patterns of locations or intervals were identical. Each subject completed blocks of unimodal trials (both sequences presented in the same modality) and crossmodal trials (Sequence 1 visual, Sequence 2 auditory, or vice versa) for both spatial and temporal tasks. We found a strong interaction between modality and task: Spatial performance was best on unimodal visual trials, whereas temporal performance was best on unimodal auditory trials. The order of modalities on crossmodal trials also mattered, suggesting that perceptual fidelity at encoding is critical to STM. Critically, no cost was attributable to crossmodal comparison: In both tasks, performance on crossmodal trials was as good as or better than on the weaker unimodal trials. STM representations of space and time can guide change detection in either the visual or the auditory modality, suggesting that the temporal or spatial organization of STM may supersede sensory-specific organization.

  10. Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study.

    PubMed

    Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong

    2015-01-01

    A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the participant's task was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190-210 ms, for 1 kHz stimuli from 170-200 ms, for 2.5 kHz stimuli from 140-200 ms, and for 5 kHz stimuli from 100-200 ms. These findings suggest that a higher-frequency sound signal paired with visual stimuli may be processed or integrated earlier, even though the auditory stimuli are task-irrelevant. Furthermore, audiovisual integration in late-latency (300-340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies.

  11. Comparing Monotic and Diotic Selective Auditory Attention Abilities in Children

    ERIC Educational Resources Information Center

    Cherry, Rochelle; Rubinstein, Adrienne

    2006-01-01

    Purpose: Some researchers have assessed ear-specific performance of auditory processing ability using speech recognition tasks with normative data based on diotic administration. The present study investigated whether monotic and diotic administrations yield similar results using the Selective Auditory Attention Test. Method: Seventy-two typically…

  12. Speech target modulates speaking induced suppression in auditory cortex

    PubMed Central

    Ventura, Maria I; Nagarajan, Srikantan S; Houde, John F

    2009-01-01

    Background Previous magnetoencephalography (MEG) studies have demonstrated speaking-induced suppression (SIS) in the auditory cortex during vocalization tasks wherein the M100 response to a subject's own speaking is reduced compared to the response when they hear playback of their speech. Results The present MEG study investigated the effects of utterance rapidity and complexity on SIS: The greatest difference between speak and listen M100 amplitudes (i.e., most SIS) was found in the simple speech task. As the utterances became more rapid and complex, SIS was significantly reduced (p = 0.0003). Conclusion These findings are highly consistent with our model of how auditory feedback is processed during speaking, where incoming feedback is compared with an efference-copy derived prediction of expected feedback. Thus, the results provide further insights about how speech motor output is controlled, as well as the computational role of auditory cortex in transforming auditory feedback. PMID:19523234

  13. Sensory modality, temperament, and the development of sustained attention: a vigilance study in children and adults.

    PubMed

    Curtindale, Lori; Laurie-Rose, Cynthia; Bennett-Murphy, Laura; Hull, Sarah

    2007-05-01

    Applying optimal stimulation theory, the present study explored the development of sustained attention as a dynamic process. It examined the interaction of modality and temperament over time in children and adults. Second-grade children and college-aged adults performed auditory and visual vigilance tasks. Using the Carey temperament questionnaires (S. C. McDevitt & W. B. Carey, 1995), the authors classified participants according to temperament composites of reactivity and task orientation. In a preliminary study, tasks were equated across age and modality using d' matching procedures. In the main experiment, 48 children and 48 adults performed these calibrated tasks. The auditory task proved more difficult for both children and adults. Intermodal relations changed with age: Performance across modality was significantly correlated for children but not for adults. Although temperament did not significantly predict performance in adults, it did for children. The temperament effects observed in children--specifically in those with the composite of reactivity--occurred in connection with the auditory task and in a manner consistent with theoretical predictions derived from optimal stimulation theory. Copyright (c) 2007 APA, all rights reserved.
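
    The d' matching procedure mentioned above rests on the standard signal-detection sensitivity index, d' = z(hit rate) − z(false-alarm rate). A minimal implementation is shown below; the log-linear correction for extreme rates is a common convention (Hautus, 1995), not necessarily the one used in this study:

    ```python
    from statistics import NormalDist

    def d_prime(hit_rate, fa_rate, n_targets=None, n_lures=None):
        """Sensitivity index d' = z(H) - z(F).

        If trial counts are given, a log-linear correction is applied so
        that rates of exactly 0 or 1 do not produce infinite z-scores.
        """
        if n_targets:
            hit_rate = (hit_rate * n_targets + 0.5) / (n_targets + 1)
        if n_lures:
            fa_rate = (fa_rate * n_lures + 0.5) / (n_lures + 1)
        z = NormalDist().inv_cdf   # inverse of the standard normal CDF
        return z(hit_rate) - z(fa_rate)
    ```

    Matching tasks on d' then amounts to adjusting stimulus parameters (duration, intensity, event rate) until the groups' d' values are equivalent, so that any vigilance differences cannot be attributed to raw task difficulty.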

  14. Peeling the Onion of Auditory Processing Disorder: A Language/Curricular-Based Perspective

    ERIC Educational Resources Information Center

    Wallach, Geraldine P.

    2011-01-01

    Purpose: This article addresses auditory processing disorder (APD) from a language-based perspective. The author asks speech-language pathologists to evaluate the functionality (or not) of APD as a diagnostic category for children and adolescents with language-learning and academic difficulties. Suggestions are offered from a…

  15. A comparative study of event-related coupling patterns during an auditory oddball task in schizophrenia

    NASA Astrophysics Data System (ADS)

    Bachiller, Alejandro; Poza, Jesús; Gómez, Carlos; Molina, Vicente; Suazo, Vanessa; Hornero, Roberto

    2015-02-01

    Objective. The aim of this research is to explore the coupling patterns of brain dynamics during an auditory oddball task in schizophrenia (SCH). Approach. Event-related electroencephalographic (ERP) activity was recorded from 20 SCH patients and 20 healthy controls. The coupling changes between auditory response and pre-stimulus baseline were calculated in conventional EEG frequency bands (theta, alpha, beta-1, beta-2 and gamma), using three coupling measures: coherence, phase-locking value and Euclidean distance. Main results. Our results showed a statistically significant increase from baseline to response in theta coupling and a statistically significant decrease in beta-2 coupling in controls. No statistically significant changes were observed in SCH patients. Significance. Our findings support the aberrant salience hypothesis, since SCH patients failed to change their coupling dynamics between stimulus response and baseline when performing an auditory cognitive task. This result may reflect an impaired communication among neural areas, which may be related to abnormal cognitive functions.
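
    Of the three coupling measures, the phase-locking value (PLV) has a particularly compact definition: the magnitude of the across-trials mean of the unit phasor of the phase difference between two signals. A minimal NumPy sketch (illustrative only, not the authors' analysis pipeline; it assumes band-filtered input and ignores filter edge effects):

    ```python
    import numpy as np

    def analytic_phase(x):
        """Instantaneous phase via the analytic signal (FFT-based Hilbert)."""
        n = x.shape[-1]
        X = np.fft.fft(x)
        h = np.zeros(n)
        h[0] = 1
        if n % 2 == 0:
            h[n // 2] = 1
            h[1:n // 2] = 2       # double positive frequencies
        else:
            h[1:(n + 1) // 2] = 2
        return np.angle(np.fft.ifft(X * h))

    def phase_locking_value(trials_a, trials_b):
        """PLV across trials: |mean over trials of exp(i * phase difference)|.

        trials_a, trials_b: arrays of shape (n_trials, n_samples) from two
        channels; returns a per-sample PLV in [0, 1].
        """
        dphi = analytic_phase(trials_a) - analytic_phase(trials_b)
        return np.abs(np.exp(1j * dphi).mean(axis=0))
    ```

    A PLV near 1 indicates a consistent phase relation across trials (as in a stimulus-locked response), while uncorrelated phases push it toward 0; comparing baseline and post-stimulus windows yields the coupling change analyzed in the study.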

  16. The processing of auditory and visual recognition of self-stimuli.

    PubMed

    Hughes, Susan M; Nicholson, Shevon E

    2010-12-01

    This study examined self-recognition processing in both the auditory and visual modalities by determining how comparable hearing a recording of one's own voice was to seeing a photograph of one's own face. We also investigated whether the simultaneous presentation of auditory and visual self-stimuli would either facilitate or inhibit self-identification. Ninety-one participants completed reaction-time tasks of self-recognition when presented with their own faces, own voices, and combinations of the two. Reaction time and errors made when responding with both the right and left hand were recorded to determine if there were lateralization effects on these tasks. Our findings showed that visual self-recognition for facial photographs appears to be superior to auditory self-recognition for voice recordings. Furthermore, a combined presentation of one's own face and voice appeared to inhibit rather than facilitate self-recognition, and there was a left-hand advantage for reaction time on the combined-presentation tasks. Copyright © 2010 Elsevier Inc. All rights reserved.

  17. Bouncing Ball with a Uniformly Varying Velocity in a Metronome Synchronization Task.

    PubMed

    Huang, Yingyu; Gu, Li; Yang, Junkai; Wu, Xiang

    2017-09-21

    Sensorimotor synchronization (SMS), a fundamental human ability to coordinate movements with external rhythms, has long been thought to be modality specific. In the canonical metronome synchronization task, which requires tapping a finger along with an isochronous sequence, a well-established finding is that synchronization is much more stable to an auditory sequence consisting of tones than to a visual sequence consisting of flashes. However, recent studies have shown that periodically moving visual stimuli can substantially improve synchronization compared with visual flashes. In particular, synchronization to a visual bouncing ball with a uniformly varying velocity was found to be no less stable than synchronization to auditory tones. Here, the current protocol describes the application of the bouncing ball with a uniformly varying velocity in a metronome synchronization task, including its usage in sequences with different inter-onset intervals (IOIs). The representative results illustrate the synchronization performance of the bouncing ball compared with that of auditory tones and visual flashes. Given its synchronization performance comparable to that of auditory tones, the bouncing ball is of particular importance for addressing the current research question of whether modality-specific mechanisms underlie SMS.
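
    Synchronization stability in tasks like this is conventionally quantified by the standard deviation of tap-to-onset asynchronies (lower SD means more stable tapping, and the mean asynchrony is typically slightly negative, i.e., taps anticipate the onsets). A minimal sketch of generating an isochronous onset sequence with a given IOI and scoring taps; the function names are illustrative, not the protocol's actual analysis script:

    ```python
    def metronome_onsets(ioi, n_beats, start=0.0):
        """Onset times (s) of an isochronous sequence with inter-onset interval `ioi`."""
        return [start + i * ioi for i in range(n_beats)]

    def synchronization_stats(onsets, taps):
        """Mean and SD of tap-onset asynchronies (negative = tap leads onset).

        Each tap is paired with its nearest metronome onset; the SD of the
        asynchronies is the usual index of synchronization stability.
        """
        asyncs = [min((tap - o for o in onsets), key=abs) for tap in taps]
        n = len(asyncs)
        mean = sum(asyncs) / n
        sd = (sum((a - mean) ** 2 for a in asyncs) / n) ** 0.5
        return mean, sd
    ```

    Running the same scoring over auditory-tone, flash, and bouncing-ball conditions at each IOI gives directly comparable stability measures per condition.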

  18. Auditory global-local processing: effects of attention and musical experience.

    PubMed

    Ouimet, Tia; Foster, Nicholas E V; Hyde, Krista L

    2012-10-01

    In vision, global (whole) features are typically processed before local (detail) features (the "global precedence effect"). However, the distinction between global and local processing is less clear in the auditory domain. The aims of the present study were to investigate: (i) the effects of directed versus divided attention, and (ii) the effect of musical training on auditory global-local processing in 16 adult musicians and 16 non-musicians. Participants were presented with short nine-tone melodies, each comprised of three triplet sequences (three-tone units). In a "directed attention" task, participants were asked to focus on either the global or local pitch pattern and had to determine whether the pitch pattern went up or down. In a "divided attention" task, participants judged whether the target pattern (up or down) was present or absent. Overall, global structure was perceived faster and more accurately than local structure. The global precedence effect was observed regardless of whether attention was directed to a specific level or divided between levels. Musicians performed more accurately than non-musicians overall, but non-musicians showed a more pronounced global advantage. This study provides evidence for an auditory global precedence effect across attention tasks, and for differences in auditory global-local processing associated with musical experience.

  19. Performance on Auditory and Visual Tasks of Inhibition in English Monolingual and Spanish-English Bilingual Adults: Do Bilinguals Have a Cognitive Advantage?

    ERIC Educational Resources Information Center

    Desjardins, Jamie L.; Fernandez, Francisco

    2018-01-01

    Purpose: Bilingual individuals have been shown to be more proficient on visual tasks of inhibition compared with their monolingual counterparts. However, the bilingual advantage has not been evidenced in all studies, and very little is known regarding how bilingualism influences inhibitory control in the perception of auditory information. The…

  20. Hand Movement Deviations in a Visual Search Task with Cross Modal Cuing

    ERIC Educational Resources Information Center

    Aslan, Asli; Aslan, Hurol

    2007-01-01

    The purpose of this study is to demonstrate the cross-modal effects of an auditory organization on a visual search task and to investigate the influence of the level of detail in instructions describing or hinting at the associations between auditory stimuli and the possible locations of a visual target. In addition to measuring the participants'…

  1. A Latent Consolidation Phase in Auditory Identification Learning: Time in the Awake State Is Sufficient

    ERIC Educational Resources Information Center

    Roth, Daphne Ari-Even; Kishon-Rabin, Liat; Hildesheimer, Minka; Karni, Avi

    2005-01-01

    Large gains in performance, evolving hours after practice has terminated, were reported in a number of visual and some motor learning tasks, as well as recently in an auditory nonverbal discrimination task. It was proposed that these gains reflect a latent phase of experience-triggered memory consolidation in human skill learning. It is not clear,…

  2. Influence of sleep deprivation and auditory intensity on reaction time and response force.

    PubMed

    Włodarczyk, Dariusz; Jaśkowski, Piotr; Nowik, Agnieszka

    2002-06-01

    Arousal and activation are two variables supposed to underlie changes in response force. This study was undertaken to examine these roles, specifically for strong auditory stimuli and sleep deficit. Loud auditory stimuli can evoke phasic overarousal, whereas sleep deficit leads to general underarousal. Moreover, Van der Molen and Keuss (1979, 1981) showed that paradoxically long reaction times occurred with extremely strong auditory stimuli when the task was difficult, e.g., choice reaction or the Simon paradigm. It was argued that this paradoxical lengthening of reaction time is due to active disconnection of the coupling between arousal and activation to prevent false responses. If so, we predicted that for extremely loud stimuli and difficult tasks, the lengthening of reaction time should be associated with a reduction of response force. The effects of loudness and sleep deficit on response time and force were investigated in three different tasks: simple response, choice response, and the Simon paradigm. As expected, we found a detrimental effect of sleep deficit on both reaction time and response force. In contrast to Van der Molen and Keuss, we found no increase in reaction time for loud stimuli (up to 110 dB), even in the Simon task.

  3. Attention to emotion: auditory-evoked potentials in an emotional choice reaction task and personality traits as assessed by the NEO FFI.

    PubMed

    Mittermeier, Verena; Leicht, Gregor; Karch, Susanne; Hegerl, Ulrich; Möller, Hans-Jürgen; Pogarell, Oliver; Mulert, Christoph

    2011-03-01

    Several studies suggest that attention to emotional content is related to specific changes in central information processing. In particular, event-related potential (ERP) studies focusing on emotion recognition in pictures and faces or word processing have pointed toward a distinct component of the visual-evoked potential, the EPN ('early posterior negativity'), which has been shown to be related to attention to emotional content. In the present study, we were interested in the existence of a corresponding ERP component in the auditory modality and a possible relationship with the personality dimension extraversion-introversion, as assessed by the NEO Five-Factors Inventory. We investigated 29 healthy subjects using three types of auditory choice tasks: (1) the distinction of syllables with emotional intonation, (2) the identification of the emotional content of adjectives and (3) a purely cognitive control task. Compared with the cognitive control task, emotional paradigms using auditory stimuli evoked an EPN component with a distinct peak after 170 ms (EPN 170). Interestingly, subjects with high scores in the personality trait extraversion showed significantly higher EPN amplitudes for emotional paradigms (syllables and words) than introverted subjects.

  4. Spatial gradient for unique-feature detection in patients with unilateral neglect: evidence from auditory and visual search.

    PubMed

    Eramudugolla, Ranmalee; Mattingley, Jason B

    2008-01-01

    Patients with unilateral spatial neglect following right hemisphere damage are impaired in detecting contralesional targets in both visual and haptic search tasks, and often show a graded improvement in detection performance for more ipsilesional spatial locations. In audition, multiple simultaneous sounds are most effectively perceived if they are distributed along the frequency dimension. Thus, attention to spectro-temporal features alone can allow detection of a target sound amongst multiple simultaneous distractor sounds, regardless of whether these sounds are spatially separated. A spatial bias in attention associated with neglect should therefore not affect auditory search based on spectro-temporal features of a sound target. We report that a right-hemisphere-damaged patient with neglect demonstrated a significant gradient favouring the ipsilesional side on a visual search task as well as on an auditory search task in which the target was a frequency-modulated tone amongst steady distractor tones. No such asymmetry was apparent in the auditory search performance of a control patient with a right hemisphere lesion but no neglect. The results suggest that the spatial bias in attention exhibited by neglect patients affects stimulus processing even when spatial information is irrelevant to the task.

  5. Sensory and motoric influences on attention dynamics during standing balance recovery in young and older adults.

    PubMed

    Redfern, Mark S; Chambers, April J; Jennings, J Richard; Furman, Joseph M

    2017-08-01

    This study investigated the impact of attention on sensory and motor actions during postural recovery from underfoot perturbations in young and older adults. A dual-task paradigm was used involving disjunctive and choice reaction time (RT) tasks to auditory and visual stimuli at different delays from the onset of two types of platform perturbations (rotations and translations). RTs were increased prior to the perturbation (preparation phase) and during the immediate recovery response (response initiation) in both young and older adults, but this interference dissipated rapidly after the perturbation response was initiated (<220 ms). The sensory modality of the RT task influenced the results, with interference being greater for the auditory task than for the visual task. As the motor complexity of the RT task increased (disjunctive versus choice), there was greater interference from the perturbation. Finally, increasing the complexity of the postural perturbation by mixing the rotational and translational perturbations together increased interference for the auditory RT tasks, but did not affect the visual RT responses. These results suggest that the sensory and motoric components of postural control are under the influence of different dynamic attentional processes.

  6. Interval timing in children: effects of auditory and visual pacing stimuli and relationships with reading and attention variables.

    PubMed

    Birkett, Emma E; Talcott, Joel B

    2012-01-01

    Motor timing tasks have been employed in studies of neurodevelopmental disorders such as developmental dyslexia and ADHD, where they provide an index of temporal processing ability. Investigations of these disorders have used different stimulus parameters within the motor timing tasks that are likely to affect performance measures. Here we assessed the effect of auditory and visual pacing stimuli on synchronised motor timing performance and its relationship with cognitive and behavioural predictors that are commonly used in the diagnosis of these highly prevalent developmental disorders. Twenty-one children (mean age 9.6 years) completed a finger tapping task in two stimulus conditions, together with additional psychometric measures. As anticipated, synchronisation to the beat (ISI 329 ms) was less accurate in the visually paced condition. Decomposition of timing variance indicated that this effect resulted from differences in the way that visual and auditory paced tasks are processed by central timekeeping and associated peripheral implementation systems. The ability to utilise an efficient processing strategy on the visual task correlated with both reading and sustained attention skills. Dissociations between these patterns of relationship across task modality suggest that not all timing tasks are equivalent.
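The "decomposition of timing variance" mentioned above is conventionally done with the Wing-Kristofferson two-level model, which attributes the negative lag-1 autocovariance of inter-tap intervals to peripheral motor delays. A minimal sketch under that assumption (hypothetical function, not the authors' code):

```python
import numpy as np

def wing_kristofferson(intervals):
    """Split inter-tap interval variance into a central timekeeper
    component and a peripheral motor-implementation component
    (Wing-Kristofferson model). `intervals` is a 1-D sequence of
    inter-tap intervals, e.g. in ms."""
    x = np.asarray(intervals, dtype=float)
    total_var = x.var(ddof=1)
    d = x - x.mean()
    # Lag-1 autocovariance; the model predicts it is negative, with
    # magnitude equal to the motor-delay variance.
    acov1 = np.sum(d[:-1] * d[1:]) / (len(x) - 1)
    motor_var = -acov1
    clock_var = total_var - 2.0 * motor_var  # total = clock + 2 * motor
    return clock_var, motor_var
```

Applied to taps synchronized to a 329 ms beat, a larger clock component in the visual condition would be consistent with the central-timekeeping difference the authors describe.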

  8. Processing of pitch and location in human auditory cortex during visual and auditory tasks

    PubMed Central

    Häkkinen, Suvi; Ovaska, Noora; Rinne, Teemu

    2015-01-01

    The relationship between stimulus-dependent and task-dependent activations in human auditory cortex (AC) during pitch and location processing is not well understood. In the present functional magnetic resonance imaging study, we investigated the processing of task-irrelevant and task-relevant pitch and location during discrimination, n-back, and visual tasks. We tested three hypotheses: (1) According to prevailing auditory models, stimulus-dependent processing of pitch and location should be associated with enhanced activations in distinct areas of the anterior and posterior superior temporal gyrus (STG), respectively. (2) Based on our previous studies, task-dependent activation patterns during discrimination and n-back tasks should be similar when these tasks are performed on sounds varying in pitch or location. (3) Previous studies in humans and animals suggest that pitch and location tasks should enhance activations especially in those areas that also show activation enhancements associated with stimulus-dependent pitch and location processing, respectively. Consistent with our hypotheses, we found stimulus-dependent sensitivity to pitch and location in anterolateral STG and anterior planum temporale (PT), respectively, in line with the view that these features are processed in separate parallel pathways. Further, task-dependent activations during discrimination and n-back tasks were associated with enhanced activations in anterior/posterior STG and posterior STG/inferior parietal lobule (IPL) irrespective of stimulus features. However, direct comparisons between pitch and location tasks performed on identical sounds revealed no significant activation differences. These results suggest that activations during pitch and location tasks are not strongly affected by enhanced stimulus-dependent activations to pitch or location. 
We also found that activations in PT were strongly modulated by task requirements and that areas in the inferior parietal lobule (IPL) showed task-dependent activation modulations, but no systematic activations to pitch or location. Based on these results, we argue that activations during pitch and location tasks cannot be explained by enhanced stimulus-specific processing alone, but rather that activations in human AC depend in a complex manner on the requirements of the task at hand. PMID:26594185

  9. Effects of emotionally charged auditory stimulation on gait performance in the elderly: a preliminary study.

    PubMed

    Rizzo, John-Ross; Raghavan, Preeti; McCrery, J R; Oh-Park, Mooyeon; Verghese, Joe

    2015-04-01

    Objective: To evaluate the effect of a novel divided attention task (walking under auditory constraints) on gait performance in older adults and to determine whether this effect was moderated by cognitive status. Design: Validation cohort. Setting: General community. Participants: Ambulatory older adults without dementia (N=104). Interventions: Not applicable. In this pilot study, we evaluated walking under auditory constraints in 104 older adults who completed 3 pairs of walking trials on a gait mat under 1 of 3 randomly assigned conditions: 1 pair without auditory stimulation and 2 pairs with emotionally charged auditory stimulation (happy or sad sounds). The mean age of subjects was 80.6±4.9 years, and 63% (n=66) were women. The mean velocity during normal walking was 97.9±20.6cm/s, and the mean cadence was 105.1±9.9 steps/min. The effect of walking under auditory constraints on gait characteristics was analyzed using a two-factor analysis of variance with one between-subjects factor (cognitively intact vs minimal cognitive impairment) and one within-subjects factor (type of auditory stimulus). In both the happy and sad auditory stimulation trials, cognitively intact older adults (n=96) showed an average increase of 2.68cm/s in gait velocity (F1.86,191.71=3.99; P=.02) and an average increase of 2.41 steps/min in cadence (F1.75,180.42=10.12; P<.001) compared with trials without auditory stimulation. In contrast, older adults with minimal cognitive impairment (Blessed test score, 5-10; n=8) showed an average reduction of 5.45cm/s in gait velocity (F1.87,190.83=5.62; P=.005) and an average reduction of 3.88 steps/min in cadence (F1.79,183.10=8.21; P=.001) under both auditory stimulation conditions. Neither baseline fall history nor performance of activities of daily living accounted for these differences.
    Our results provide preliminary evidence of a differentiating effect of emotionally charged auditory stimuli on gait performance in older individuals with minimal cognitive impairment compared with cognitively intact individuals. A divided attention task using emotionally charged auditory stimuli might elicit compensatory improvement in gait performance in cognitively intact older individuals, but lead to decompensation in those with minimal cognitive impairment. Further investigation is needed to compare gait performance under this task with gait under other dual-task paradigms and to separately examine the effect of physiological aging versus cognitive impairment on gait during walking under auditory constraints. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
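The group-by-condition comparison above rests on within-subject changes from the no-stimulation trials. A minimal illustrative sketch of that contrast (hypothetical data layout and function name, not the study's analysis code):

```python
import numpy as np

def stimulation_effect(baseline, happy, sad):
    """Mean within-subject change in a gait measure (e.g., velocity in
    cm/s) under each emotionally charged condition, relative to the
    trials without auditory stimulation. Expects one value per subject
    per condition, in matching order."""
    baseline, happy, sad = (np.asarray(a, dtype=float) for a in (baseline, happy, sad))
    return {
        "happy": float(np.mean(happy - baseline)),
        "sad": float(np.mean(sad - baseline)),
    }
```

Positive values correspond to the compensatory speed-up reported for cognitively intact participants; negative values to the slowing seen with minimal cognitive impairment.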

  10. Influence of auditory and audiovisual stimuli on the right-left prevalence effect.

    PubMed

    Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim

    2014-01-01

    When auditory stimuli are used in two-dimensional spatial compatibility tasks, where the stimulus and response configurations vary along the horizontal and vertical dimensions simultaneously, a right-left prevalence effect occurs in which horizontal compatibility dominates over vertical compatibility. The right-left prevalence effects obtained with auditory stimuli are typically larger than those obtained with visual stimuli even though less attention should be demanded from the horizontal dimension in auditory processing. In the present study, we examined whether auditory or visual dominance occurs when the two-dimensional stimuli are audiovisual, as well as whether there will be cross-modal facilitation of response selection for the horizontal and vertical dimensions. We also examined whether there is an additional benefit of adding a pitch dimension to the auditory stimulus to facilitate vertical coding through use of the spatial-musical association of response codes (SMARC) effect, where pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than visual stimuli. Neutral, non-pitch coded, audiovisual stimuli did not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension was not in terms of influencing response selection on a trial-to-trial basis, but in terms of altering the salience of the task environment. Taken together, these findings indicate that in the absence of salient vertical cues, auditory and audiovisual stimuli tend to be coded along the horizontal dimension and vision tends to dominate audition in this two-dimensional spatial stimulus-response task.

  11. Acute physical exercise affected processing efficiency in an auditory attention task more than processing effectiveness.

    PubMed

    Dutke, Stephan; Jaitner, Thomas; Berse, Timo; Barenberg, Jonathan

    2014-02-01

    Research on effects of acute physical exercise on performance in a concurrent cognitive task has generated equivocal evidence. Processing efficiency theory predicts that concurrent physical exercise can increase resource requirements for sustaining cognitive performance even when the level of performance is unaffected. This hypothesis was tested in a dual-task experiment. Sixty young adults worked on a primary auditory attention task and a secondary interval production task while cycling on a bicycle ergometer. Physical load (cycling) and cognitive load of the primary task were manipulated. Neither physical nor cognitive load affected primary task performance, but both factors interacted on secondary task performance. Sustaining primary task performance under increased physical and/or cognitive load increased resource consumption as indicated by decreased secondary task performance. Results demonstrated that physical exercise effects on cognition might be underestimated when only single task performance is the focus.

  12. Evaluation of selected auditory tests in school-age children suspected of auditory processing disorders.

    PubMed

    Vanniasegaram, Iyngaram; Cohen, Mazal; Rosen, Stuart

    2004-12-01

    To compare the auditory function of normal-hearing children attending mainstream schools who were referred for an auditory evaluation because of listening/hearing problems (suspected auditory processing disorders [susAPD]) with that of normal-hearing control children. Sixty-five children with a normal standard audiometric evaluation, aged 6-14 years (32 of whom were referred for susAPD, the rest age-matched controls), completed a battery of four auditory tests: a dichotic test of competing sentences; a simple discrimination of short tone pairs differing in fundamental frequency at varying interstimulus intervals (TDT); a discrimination task using consonant cluster minimal pairs of real words (CCMP); and an adaptive threshold task for detecting a brief tone presented either simultaneously with a masker (simultaneous masking) or immediately preceding it (backward masking). Regression analyses, including age as a covariate, were performed to determine the extent to which the performance of the two groups differed on each task. Age-corrected z-scores were calculated to evaluate the effectiveness of the complete battery in discriminating the groups. The performance of the susAPD group was significantly poorer than that of the control group on all but the masking tasks, which failed to differentiate the two groups. The CCMP discriminated the groups most effectively, as it yielded the lowest number of control children with abnormal scores, and performance in both groups was independent of age. By contrast, the proportion of control children who performed poorly on the competing sentences test was unacceptably high. Together, the CCMP (verbal) and TDT (nonverbal) tasks detected impaired listening skills in 56% of the children who were referred to the clinic, compared with 6% of the control children. Performance on the two tasks was not correlated.
Two of the four tests evaluated, the CCMP and TDT, proved effective in differentiating the two groups of children of this study. The application of both tests increased the proportion of susAPD children who performed poorly compared with the application of each test alone, while reducing the proportion of control subjects who performed poorly. The findings highlight the importance of carrying out a complete auditory evaluation in children referred for medical attention, even if their standard audiometric evaluation is unremarkable.
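The age-corrected z-scores described above are commonly computed by regressing raw scores on age and standardizing the residuals, so each child is scored against the expectation for their age. A minimal sketch under that assumption (hypothetical function; the study's exact correction procedure is not given):

```python
import numpy as np

def age_corrected_z(scores, ages):
    """Regress scores on age (ordinary least squares with intercept),
    then standardize the residuals so each child's score is expressed
    relative to the expectation for their age."""
    ages = np.asarray(ages, dtype=float)
    scores = np.asarray(scores, dtype=float)
    A = np.column_stack([np.ones_like(ages), ages])  # design matrix: intercept + age
    coef, *_ = np.linalg.lstsq(A, scores, rcond=None)
    residuals = scores - A @ coef
    return residuals / residuals.std(ddof=1)
```

By construction the resulting z-scores have zero mean, unit variance, and no linear relationship with age, which is what allows a single abnormality cutoff to be applied across the 6-14 year range.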

  13. Semantic control and modality: an input processing deficit in aphasia leading to deregulated semantic cognition in a single modality.

    PubMed

    Thompson, Hannah E; Jefferies, Elizabeth

    2013-08-01

    Research suggests that semantic memory deficits can occur in at least three ways. Patients can (1) show amodal degradation of concepts within the semantic store itself, such as in semantic dementia (SD), (2) have difficulty in controlling activation within the semantic system and accessing appropriate knowledge in line with current goals or context, as in semantic aphasia (SA) and (3) experience a semantic deficit in only one modality following degraded input from sensory cortex. Patients with SA show deficits of semantic control and access across word and picture tasks, consistent with the view that their problems arise from impaired modality-general control processes. However, there are a few reports in the literature of patients with semantic access problems restricted to auditory-verbal materials, who show decreasing ability to retrieve concepts from words when they are presented repeatedly with closely related distractors. These patients challenge the notion that semantic control processes are modality-general and suggest instead a separation of 'access' to auditory-verbal and non-verbal semantic systems. We had the rare opportunity to study such a case in detail. Our aims were to examine the effect of manipulations of control demands in auditory-verbal semantic, non-verbal semantic and non-semantic tasks, allowing us to assess whether such cases always show semantic control/access impairments that follow a modality-specific pattern, or whether there are alternative explanations. 
Our findings revealed: (1) deficits on executive tasks, unrelated to semantic demands, which were more evident in the auditory modality than the visual modality; (2) deficits in executively-demanding semantic tasks which were accentuated in the auditory-verbal domain compared with the visual modality, but still present on non-verbal tasks, and (3) a coupling between comprehension and executive control requirements, in that mild impairment on single word comprehension was greatly increased on more demanding, associative judgements across modalities. This pattern of results suggests that mild executive-semantic impairment, paired with disrupted connectivity from auditory input, may give rise to semantic 'access' deficits affecting only the auditory modality. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Short-term plasticity in auditory cognition.

    PubMed

    Jääskeläinen, Iiro P; Ahveninen, Jyrki; Belliveau, John W; Raij, Tommi; Sams, Mikko

    2007-12-01

    Converging lines of evidence suggest that auditory system short-term plasticity can enable several perceptual and cognitive functions that have been previously considered as relatively distinct phenomena. Here we review recent findings suggesting that auditory stimulation, auditory selective attention and cross-modal effects of visual stimulation each cause transient excitatory and (surround) inhibitory modulations in the auditory cortex. These modulations might adaptively tune hierarchically organized sound feature maps of the auditory cortex (e.g. tonotopy), thus filtering relevant sounds during rapidly changing environmental and task demands. This could support auditory sensory memory, pre-attentive detection of sound novelty, enhanced perception during selective attention, influence of visual processing on auditory perception and longer-term plastic changes associated with perceptual learning.

  15. Concurrent auditory perception difficulties in older adults with right hemisphere cerebrovascular accident.

    PubMed

    Talebi, Hossein; Moossavi, Abdollah; Faghihzadeh, Soghrat

    2014-01-01

    Older adults with cerebrovascular accident (CVA) show evidence of auditory and speech perception problems. In the present study, we examined whether these problems are due to impairments of concurrent auditory segregation, the basic level of auditory scene analysis and auditory organization in scenes with competing sounds. Concurrent auditory segregation was assessed using the competing sentence test (CST) and the dichotic digits test (DDT) in 30 male older adults (15 normal and 15 with right hemisphere CVA) in the same age range (60-75 years). For the CST, participants were presented with a target message in one ear and a competing message in the other. The task was to listen to the target sentence and repeat it back while ignoring the competing sentence. For the DDT, the auditory stimuli were monosyllabic digits presented dichotically, and the task was to repeat them. Comparing the mean CST and DDT scores between CVA patients with right hemisphere impairment and normal participants showed statistically significant differences (p=0.001 for CST and p<0.0001 for DDT). The present study revealed that the abnormal CST and DDT scores of participants with right hemisphere CVA could be related to concurrent segregation difficulties. These findings suggest that low-level segregation mechanisms and/or high-level attention mechanisms might contribute to the problems.

  16. The Role of Musical Experience in Hemispheric Lateralization of Global and Local Auditory Processing.

    PubMed

    Black, Emily; Stevenson, Jennifer L; Bish, Joel P

    2017-08-01

    The global precedence effect is a phenomenon in which global aspects of visual and auditory stimuli are processed before local aspects. Individuals with musical experience perform better on all aspects of auditory tasks compared with individuals with less musical experience. The hemispheric lateralization of this auditory processing is less well defined. The present study aimed to replicate the global precedence effect with auditory stimuli and to explore the lateralization of global and local auditory processing in individuals with differing levels of musical experience. A total of 38 college students completed an auditory directed-attention task while electroencephalography was recorded. Individuals with low musical experience responded significantly faster and more accurately in global trials than in local trials regardless of condition, and significantly faster and more accurately when pitches traveled in the same direction (compatible condition) than when pitches traveled in two different directions (incompatible condition), consistent with a global precedence effect. In contrast, individuals with high musical experience showed less of a global precedence effect with regard to accuracy, though not reaction time, suggesting an increased ability to overcome the global bias. Further, a difference in P300 latency between hemispheres was observed. These findings provide a preliminary neurological framework for the auditory processing of individuals with differing degrees of musical experience.

  17. Auditory short-term memory capacity correlates with gray matter density in the left posterior STS in cognitively normal and dyslexic adults.

    PubMed

    Richardson, Fiona M; Ramsden, Sue; Ellis, Caroline; Burnett, Stephanie; Megnin, Odette; Catmur, Caroline; Schofield, Tom M; Leff, Alex P; Price, Cathy J

    2011-12-01

    A central feature of auditory STM is its item-limited processing capacity. We investigated whether auditory STM capacity correlated with regional gray and white matter in the structural MRI images from 74 healthy adults, 40 of whom had a prior diagnosis of developmental dyslexia whereas 34 had no history of any cognitive impairment. Using whole-brain statistics, we identified a region in the left posterior STS where gray matter density was positively correlated with forward digit span, backward digit span, and performance on a "spoonerisms" task that required both auditory STM and phoneme manipulation. Across tasks and participant groups, the correlation was highly significant even when variance related to reading and auditory nonword repetition was factored out. Although the dyslexics had poorer phonological skills, the effect of auditory STM capacity in the left STS was the same as in the cognitively normal group. We also illustrate that the anatomical location of this effect is in proximity to a lesion site recently associated with reduced auditory STM capacity in patients with stroke damage. This result, therefore, indicates that gray matter density in the posterior STS predicts auditory STM capacity in the healthy and damaged brain. In conclusion, we suggest that our present findings are consistent with the view that there is an overlap between the mechanisms that support language processing and auditory STM.

  18. An initial investigation into the validity of a computer-based auditory processing assessment (Feather Squadron).

    PubMed

    Barker, Matthew D; Purdy, Suzanne C

    2016-01-01

    This research investigates a novel tablet-based method for identifying and assessing poor auditory processing in school-aged children. Feasibility and test-retest reliability were investigated by examining the percentage of Group 1 participants able to complete the tasks and developmental effects on performance. Concurrent validity was investigated against traditional tests of auditory processing using Group 2. There were 847 students aged 5 to 13 years in Group 1 and 46 aged 5 to 14 years in Group 2. Some tasks could not be completed by the youngest participants. Significant correlations were found between the results of most auditory processing areas assessed by the Feather Squadron test and traditional auditory processing tests. Test-retest comparisons indicated good reliability for most of the Feather Squadron assessments and some of the traditional tests. The results indicate that the Feather Squadron assessment is a time-efficient, feasible, concurrently valid, and reliable approach for measuring auditory processing in school-aged children. Clinically, it may be a useful option for audiologists when performing auditory processing assessments, as it is a relatively fast, engaging, and easy way to assess auditory processing abilities. Research is needed to further investigate the construct validity of this new assessment by examining the association between performance on Feather Squadron and evoked potential, lesion, and/or functional imaging measures of auditory function.

  19. Speech Rate Normalization and Phonemic Boundary Perception in Cochlear-Implant Users

    ERIC Educational Resources Information Center

    Jaekel, Brittany N.; Newman, Rochelle S.; Goupell, Matthew J.

    2017-01-01

    Purpose: Normal-hearing (NH) listeners rate normalize, temporarily remapping phonemic category boundaries to account for a talker's speech rate. It is unknown if adults who use auditory prostheses called cochlear implants (CI) can rate normalize, as CIs transmit degraded speech signals to the auditory nerve. Ineffective adjustment to rate…

  20. What Is the Deficit in Phonological Processing Deficits: Auditory Sensitivity, Masking, or Category Formation?

    ERIC Educational Resources Information Center

    Nittrouer, Susan; Shune, Samantha; Lowenstein, Joanna H.

    2011-01-01

    Although children with language impairments, including those associated with reading, usually demonstrate deficits in phonological processing, there is minimal agreement as to the source of those deficits. This study examined two problems hypothesized to be possible sources: either poor auditory sensitivity to speech-relevant acoustic properties,…

  1. The role of auditory transient and deviance processing in distraction of task performance: a combined behavioral and event-related brain potential study

    PubMed Central

    Berti, Stefan

    2013-01-01

    Distraction of goal-oriented performance by a sudden change in the auditory environment is an everyday experience. Different types of changes can be distracting, including the sudden onset of a transient sound and a slight deviation within otherwise regular auditory background stimulation. With regard to deviance detection, it is assumed that slight changes in a continuous sequence of auditory stimuli are detected by a predictive coding mechanism, and it has been demonstrated that this mechanism is capable of distracting ongoing task performance. In contrast, it remains open whether transient detection, which does not rely on predictive coding, can also trigger behavioral distraction. In the present study, the effect of rare auditory changes on visual task performance is tested in an auditory-visual cross-modal distraction paradigm. The rare changes are either embedded within a continuous standard stimulation (triggering deviance detection) or presented within an otherwise silent situation (triggering transient detection). In the event-related brain potentials, deviants elicited the mismatch negativity (MMN) while transients elicited an enhanced N1 component, mirroring pre-attentive change detection in both conditions but on the basis of different neuro-cognitive processes. These sensory components are followed by attention-related ERP components, including the P3a and the reorienting negativity (RON). This demonstrates that both types of changes trigger switches of attention. Finally, distraction of task performance is observable too, but the impact of deviants is higher than that of transients. These findings suggest different routes of distraction, allowing for the automatic processing of a wide range of potentially relevant changes in the environment as a pre-requisite for adaptive behavior. PMID:23874278

  2. Reboxetine Improves Auditory Attention and Increases Norepinephrine Levels in the Auditory Cortex of Chronically Stressed Rats

    PubMed Central

    Pérez-Valenzuela, Catherine; Gárate-Pérez, Macarena F.; Sotomayor-Zárate, Ramón; Delano, Paul H.; Dagnino-Subiabre, Alexies

    2016-01-01

Chronic stress impairs auditory attention in rats, and monoamines regulate neurotransmission in the primary auditory cortex (A1), a brain area that modulates auditory attention. In this context, we hypothesized that norepinephrine (NE) levels in A1 correlate with the auditory attention performance of chronically stressed rats. The first objective of this research was to evaluate whether chronic stress affects monoamine levels in A1. Male Sprague–Dawley rats were subjected to chronic restraint stress, and monoamine levels were measured by high-performance liquid chromatography (HPLC) with electrochemical detection. Chronically stressed rats had lower levels of NE in A1 than did controls, while chronic stress did not affect serotonin (5-HT) or dopamine (DA) levels. The second aim was to determine the effects of reboxetine (a selective inhibitor of NE reuptake) on auditory attention and NE levels in A1. Rats were trained to discriminate between two tones of different frequencies in a two-alternative choice task (2-ACT), a behavioral paradigm for studying auditory attention in rats. Trained animals that reached a performance of ≥80% correct trials in the 2-ACT were randomly assigned to control and stress experimental groups. To analyze the effects of chronic stress on the auditory task, trained rats of both groups were subjected to 50 2-ACT trials 1 day before and 1 day after the chronic stress period. A difference score (DS) was determined by subtracting the number of correct trials after the chronic stress protocol from the number before it. An unexpected result was that vehicle-treated control rats and vehicle-treated chronically stressed rats performed similarly in the attentional task, suggesting that repeated injections of vehicle were stressful for control animals and deteriorated their auditory attention. In contrast, both auditory attention and NE levels in A1 were higher in chronically stressed rats treated with reboxetine than in vehicle-treated animals. These results indicate that NE plays a key role in A1 and in the attention of stressed rats during tone discrimination. PMID:28082872
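The difference score described in this record is simple arithmetic; as a minimal sketch (illustrative only, not the authors' analysis code; the function name and example values are hypothetical):

```python
def difference_score(correct_before: int, correct_after: int) -> int:
    """Difference score (DS): correct 2-ACT trials before the chronic
    stress period minus correct trials after it (50 trials each session).
    A positive DS indicates a post-stress decline in performance."""
    return correct_before - correct_after

# Hypothetical rat: 42/50 correct before stress, 35/50 correct after.
print(difference_score(42, 35))  # prints 7 (performance dropped by 7 trials)
```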

  3. Contribution of Temporal Processing Skills to Reading Comprehension in 8-Year-Olds: Evidence for a Mediation Effect of Phonological Awareness

    ERIC Educational Resources Information Center

    Malenfant, Nathalie; Grondin, Simon; Boivin, Michel; Forget-Dubois, Nadine; Robaey, Philippe; Dionne, Ginette

    2012-01-01

    This study tested whether the association between temporal processing (TP) and reading is mediated by phonological awareness (PA) in a normative sample of 615 eight-year-olds. TP was measured with auditory and bimodal (visual-auditory) temporal order judgment tasks and PA with a phoneme deletion task. PA partially mediated the association between…

  4. Short Term Auditory Pacing Changes Dual Motor Task Coordination in Children with and without Dyslexia

    ERIC Educational Resources Information Center

    Getchell, Nancy; Mackenzie, Samuel J.; Marmon, Adam R.

    2010-01-01

This study examined the effect of short-term auditory pacing practice on dual motor task performance in children with and without dyslexia. Groups included dyslexic with Movement Assessment Battery for Children (MABC) scores greater than the 15th percentile (D_HIGH, n = 18; mean age 9.89 ± 2.0 years), dyslexic with MABC [less than or…

  5. Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study

    PubMed Central

    Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong

    2015-01-01

A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190–210 ms, for 1 kHz stimuli from 170–200 ms, for 2.5 kHz stimuli from 140–200 ms, and for 5 kHz stimuli from 100–200 ms. These findings suggest that a higher-frequency sound paired with a visual stimulus may be processed or integrated earlier, despite the auditory stimuli being task-irrelevant. Furthermore, audiovisual integration at late latencies (300–340 ms), with fronto-central ERP topography, was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a visual signal paired with auditory stimuli of different frequencies. PMID:26384256

  6. Implications of differences of echoic and iconic memory for the design of multimodal displays

    NASA Astrophysics Data System (ADS)

    Glaser, Daniel Shields

It has been well documented that dual-task performance is more accurate when each task is based on a different sensory modality. It is also well documented that the memory for each sense has an unequal duration, particularly visual (iconic) and auditory (echoic) sensory memory. In this dissertation I address whether differences in sensory memory duration (e.g., iconic vs. echoic) have implications for the design of a multimodal display. Since echoic memory persists for seconds, in contrast to iconic memory, which persists only for milliseconds, one of my hypotheses was that in a visual-auditory dual-task condition, performance would be better if the visual task were completed before the auditory task than vice versa. In Experiment 1 I investigated whether the ability to recall multimodal stimuli is affected by recall order, with each mode being responded to separately. In Experiment 2 I investigated the effects of stimulus order and recall order on the ability to recall information from a multimodal presentation. In Experiment 3 I investigated the effect of presentation order using a more realistic task. In Experiment 4 I investigated whether manipulating the presentation order of stimuli of different modalities improves humans' ability to combine the information from the two modalities in order to make decisions based on pre-learned rules. As hypothesized, accuracy was greater when visual stimuli were responded to first and auditory stimuli second. Also as hypothesized, performance improved when the two sequences were not presented at the same time, limiting the perceptual load. Contrary to my expectations, overall performance was better when the visual sequence was presented before the audio sequence. Though presenting a visual sequence prior to an auditory sequence lengthens the visual retention interval, it also provides time for visual information to be recoded into a more robust form without disruption. 
Experiment 4 demonstrated that decision making requiring the integration of visual and auditory information is enhanced by reducing workload and promoting strategic use of echoic memory. A framework for predicting the results of Experiments 1–4 is proposed and evaluated.

  7. Effect of Auditory Interference on Memory of Haptic Perceptions.

    ERIC Educational Resources Information Center

    Anater, Paul F.

    1980-01-01

    The effect of auditory interference on the processing of haptic information by 61 visually impaired students (8 to 20 years old) was the focus of the research described in this article. It was assumed that as the auditory interference approximated the verbalized activity of the haptic task, accuracy of recall would decline. (Author)

  8. Bilateral Capacity for Speech Sound Processing in Auditory Comprehension: Evidence from Wada Procedures

    ERIC Educational Resources Information Center

    Hickok, G.; Okada, K.; Barr, W.; Pa, J.; Rogalsky, C.; Donnelly, K.; Barde, L.; Grant, A.

    2008-01-01

    Data from lesion studies suggest that the ability to perceive speech sounds, as measured by auditory comprehension tasks, is supported by temporal lobe systems in both the left and right hemisphere. For example, patients with left temporal lobe damage and auditory comprehension deficits (i.e., Wernicke's aphasics), nonetheless comprehend isolated…

  9. Intact Spectral but Abnormal Temporal Processing of Auditory Stimuli in Autism

    ERIC Educational Resources Information Center

    Groen, Wouter B.; van Orsouw, Linda; ter Huurne, Niels; Swinkels, Sophie; van der Gaag, Rutger-Jan; Buitelaar, Jan K.; Zwiers, Marcel P.

    2009-01-01

The perceptual pattern in autism has been related to either a specific localized processing deficit or a pathway-independent, complexity-specific anomaly. We examined auditory perception in autism using an auditory disembedding task that required spectral and temporal integration. Twenty-three children with high-functioning autism and 23 matched controls…

  10. Parallel perceptual enhancement and hierarchic relevance evaluation in an audio-visual conjunction task.

    PubMed

    Potts, Geoffrey F; Wood, Susan M; Kothmann, Delia; Martin, Laura E

    2008-10-21

Attention directs limited-capacity information processing resources to a subset of available perceptual representations. The mechanisms by which attention selects task-relevant representations for preferential processing are not fully known. Treisman and Gelade's [Treisman, A., Gelade, G., 1980. A feature integration theory of attention. Cognit. Psychol. 12, 97-136.] influential attention model posits that simple features are processed preattentively, in parallel, but that attention is required to serially conjoin multiple features into an object representation. Event-related potentials have provided evidence for this model, showing parallel processing of perceptual features in the posterior Selection Negativity (SN) and serial, hierarchic processing of feature conjunctions in the Frontal Selection Positivity (FSP). Most prior studies have examined conjunctions within one sensory modality, while many real-world objects have multimodal features. It is not known whether the same neural systems of posterior parallel processing of simple features and frontal serial processing of feature conjunctions seen within a sensory modality also operate on conjunctions between modalities. The current study used ERPs and simultaneously presented auditory and visual stimuli in three task conditions: Attend Auditory (an auditory feature determines the target, visual features are irrelevant), Attend Visual (visual features relevant, auditory irrelevant), and Attend Conjunction (target defined by the co-occurrence of an auditory and a visual feature). In the Attend Conjunction condition, when the auditory but not the visual feature was a target there was an SN over auditory cortex, when the visual but not the auditory feature was a target there was an SN over visual cortex, and when both auditory and visual stimuli were targets (i.e., a conjunction target) there were SNs over both auditory and visual cortex, indicating parallel processing of the simple features within each modality. 
In contrast, an FSP was present when either the visual only or both auditory and visual features were targets, but not when only the auditory stimulus was a target, indicating that the conjunction target determination was evaluated serially and hierarchically with visual information taking precedence. This indicates that the detection of a target defined by audio-visual conjunction is achieved via the same mechanism as within a single perceptual modality, through separate, parallel processing of the auditory and visual features and serial processing of the feature conjunction elements, rather than by evaluation of a fused multimodal percept.

  11. A psychophysiological evaluation of the perceived urgency of auditory warning signals

    NASA Technical Reports Server (NTRS)

    Burt, J. L.; Bartolome, D. S.; Burdette, D. W.; Comstock, J. R. Jr

    1995-01-01

One significant concern that pilots have about cockpit auditory warnings is that the signals presently used lack a sense of priority. The relationship between auditory warning sound parameters and perceived urgency is, therefore, an important topic of enquiry in aviation psychology. The present investigation examined the relationships among subjective assessments of urgency, reaction time, and brainwave activity for three auditory warning signals. Subjects performed a tracking task involving automated and manual conditions and were presented with auditory warnings having various levels of perceived and situational urgency. Subjective assessments revealed that subjects were able to rank the warnings on an urgency scale, but the rankings changed after the warnings were mapped to a situational urgency scale. Reaction times differed between automated and manual tracking conditions, and physiological data showed attentional differences in response to perceived and situational warning urgency levels. This study shows that the use of physiological measures sensitive to attention and arousal, in conjunction with behavioural and subjective measures, may lead to the design of auditory warnings that produce a sense of urgency in an operator that matches the urgency of the situation.

  12. Neural Changes Associated with Nonspeech Auditory Category Learning Parallel Those of Speech Category Acquisition

    ERIC Educational Resources Information Center

    Liu, Ran; Holt, Lori L.

    2011-01-01

    Native language experience plays a critical role in shaping speech categorization, but the exact mechanisms by which it does so are not well understood. Investigating category learning of nonspeech sounds with which listeners have no prior experience allows their experience to be systematically controlled in a way that is impossible to achieve by…

  13. Areas Recruited during Action Understanding Are Not Modulated by Auditory or Sign Language Experience.

    PubMed

    Fang, Yuxing; Chen, Quanjing; Lingnau, Angelika; Han, Zaizhu; Bi, Yanchao

    2016-01-01

    The observation of other people's actions recruits a network of areas including the inferior frontal gyrus (IFG), the inferior parietal lobule (IPL), and posterior middle temporal gyrus (pMTG). These regions have been shown to be activated through both visual and auditory inputs. Intriguingly, previous studies found no engagement of IFG and IPL for deaf participants during non-linguistic action observation, leading to the proposal that auditory experience or sign language usage might shape the functionality of these areas. To understand which variables induce plastic changes in areas recruited during the processing of other people's actions, we examined the effects of tasks (action understanding and passive viewing) and effectors (arm actions vs. leg actions), as well as sign language experience in a group of 12 congenitally deaf signers and 13 hearing participants. In Experiment 1, we found a stronger activation during an action recognition task in comparison to a low-level visual control task in IFG, IPL and pMTG in both deaf signers and hearing individuals, but no effect of auditory or sign language experience. In Experiment 2, we replicated the results of the first experiment using a passive viewing task. Together, our results provide robust evidence demonstrating that the response obtained in IFG, IPL, and pMTG during action recognition and passive viewing is not affected by auditory or sign language experience, adding further support for the supra-modal nature of these regions.

  14. When music is salty: The crossmodal associations between sound and taste.

    PubMed

    Guetta, Rachel; Loui, Psyche

    2017-01-01

Here we investigate associations between complex auditory and complex taste stimuli. A novel piece of music was composed and recorded in four different styles of musical articulation to reflect the four basic taste groups (sweet, sour, salty, bitter). In Experiment 1, participants performed above chance at pairing the music clips with the corresponding taste words. Experiment 2 used multidimensional scaling to interpret how participants categorize these musical stimuli and to show that auditory categories can be organized in a similar manner to taste categories. Experiment 3 introduced four different flavors of custom-made chocolate ganache and showed that participants could match music clips with the corresponding taste stimuli with above-chance accuracy. Experiment 4 demonstrated the partial role of pleasantness in crossmodal mappings between sound and taste. The present findings confirm that individuals are able to make crossmodal associations between complex auditory and gustatory stimuli, and that valence may mediate multisensory integration in the general population.

  15. Difference in Perseverative Errors during a Visual Attention Task with Auditory Distractors in Alpha-9 Nicotinic Receptor Subunit Wild Type and Knock-Out Mice.

    PubMed

    Jorratt, Pascal; Delano, Paul H; Delgado, Carolina; Dagnino-Subiabre, Alexies; Terreros, Gonzalo

    2017-01-01

The auditory efferent system is a neural network that originates in the auditory cortex and projects to the cochlear receptor through olivocochlear (OC) neurons. Medial OC neurons make cholinergic synapses with outer hair cells (OHCs) through nicotinic receptors constituted by α9 and α10 subunits. One of the physiological functions of the α9 nicotinic receptor subunit (α9-nAChR) is the suppression of auditory distractors during selective attention to visual stimuli. In a recent study we demonstrated that the behavioral performance of α9 nicotinic receptor knock-out (KO) mice is altered during selective attention to visual stimuli with auditory distractors, since they made fewer correct responses and more omissions than wild-type (WT) mice. As the inhibition of behavioral responses to irrelevant stimuli is an important mechanism of selective attention, behavioral errors are relevant measures that can reflect altered inhibitory control. Errors produced during a cued attention task can be classified as premature, target, and perseverative errors. Perseverative responses can be considered an inability to inhibit the repetition of an action already planned, while premature responses can be considered an index of the ability to wait or withhold an action. Here, we studied premature, target, and perseverative errors during a visual attention task with auditory distractors in WT and KO mice. We found that α9-KO mice made fewer perseverative errors, with longer latencies, than WT mice in the presence of auditory distractors. In addition, although we found no significant difference in the number of target errors between genotypes, KO mice made more short-latency target errors than WT mice during the presentation of auditory distractors. The fewer perseverative errors made by α9-KO mice could be explained by a reduced motivation for reward and an increased impulsivity during decision making with auditory distraction in KO mice.

  16. The modulation of auditory novelty processing by working memory load in school age children and adults: a combined behavioral and event-related potential study

    PubMed Central

    2010-01-01

Background We investigated the processing of task-irrelevant and unexpected novel sounds and its modulation by working-memory load in children aged 9-10 years and in adults. Environmental sounds (novels) were embedded amongst frequently presented standard sounds in an auditory-visual distraction paradigm. Each sound was followed by a visual target. In two conditions, participants either evaluated the position of the visual stimulus (0-back, low load) or compared the position of the current stimulus with the one presented two trials before (2-back, high load). Processing of novel sounds was measured with reaction times, hit rates, and the auditory event-related brain potentials (ERPs) mismatch negativity (MMN), P3a, and reorienting negativity (RON), along with the visual P3b. Results In both memory-load conditions, novels impaired task performance in adults whereas they improved performance in children. Auditory ERPs reflected age-related differences in the time window of the MMN, as children showed a positive ERP deflection to novels whereas adults lacked an MMN. The attention switch towards the task-irrelevant novel (reflected by the P3a) was comparable between the age groups. Adults showed more efficient reallocation of attention (reflected by the RON) under load than children. Finally, the P3b elicited by the visual target stimuli was reduced in both age groups when the preceding sound was a novel. Conclusion Our results give new insights into the development of novelty processing as they (1) reveal that task-irrelevant novel sounds can have contrary effects on performance in a visual primary task in children and adults, (2) show a positive ERP deflection to novels rather than an MMN in children, and (3) reveal effects of auditory novels on visual target processing. PMID:20929535
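The 0-back/2-back load manipulation in this record can be sketched in a few lines (a minimal illustration under stated assumptions: the function name is hypothetical, and the 0-back condition is simplified to a direct judgment of every trial):

```python
def n_back_hits(stimuli, n):
    """Return indices of trials whose stimulus matches the one presented
    n trials earlier -- the 'match' trials in an n-back task.
    For n = 0 (low load), every trial is judged directly, so all indices
    are returned (a simplifying assumption for illustration)."""
    if n == 0:
        return list(range(len(stimuli)))
    return [i for i in range(n, len(stimuli)) if stimuli[i] == stimuli[i - n]]

# Hypothetical sequence of visual-target positions (L = left, R = right).
positions = ["L", "R", "L", "L", "R", "L"]
print(n_back_hits(positions, 2))  # prints [2, 5]: trials matching two back
```

The higher memory load of the 2-back condition comes from having to hold and continuously update the last two positions rather than judging each trial in isolation.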

  17. Happiness increases distraction by auditory deviant stimuli.

    PubMed

    Pacheco-Unguetti, Antonia Pilar; Parmentier, Fabrice B R

    2016-08-01

Rare and unexpected changes (deviants) in an otherwise repeated stream of task-irrelevant auditory distractors (standards) capture attention and impair behavioural performance in an ongoing visual task. Recent evidence indicates that this effect is increased by sadness in a task involving neutral stimuli. We tested the hypothesis that such an effect may not be limited to negative emotions but may reflect a general depletion of attentional resources, by examining whether a positive emotion (happiness) would also increase deviance distraction. Prior to performing an auditory-visual oddball task, happiness or a neutral mood was induced in participants by means of exposure to music and the recollection of an autobiographical event. Results from the oddball task showed significantly larger deviance distraction following the induction of happiness. Interestingly, the small amount of distraction typically observed on the standard trial following a deviant trial (post-deviance distraction) was not increased by happiness. We speculate that happiness might interfere with the disengagement of attention from the deviant sound back towards the target stimulus (through the depletion of cognitive resources and/or mind wandering) but help subsequent cognitive control to recover from distraction. © 2015 The British Psychological Society.

  18. Perceptual load interacts with stimulus processing across sensory modalities.

    PubMed

    Klemen, J; Büchel, C; Rose, M

    2009-06-01

    According to perceptual load theory, processing of task-irrelevant stimuli is limited by the perceptual load of a parallel attended task if both the task and the irrelevant stimuli are presented to the same sensory modality. However, it remains a matter of debate whether the same principles apply to cross-sensory perceptual load and, more generally, what form cross-sensory attentional modulation in early perceptual areas takes in humans. Here we addressed these questions using functional magnetic resonance imaging. Participants undertook an auditory one-back working memory task of low or high perceptual load, while concurrently viewing task-irrelevant images at one of three object visibility levels. The processing of the visual and auditory stimuli was measured in the lateral occipital cortex (LOC) and auditory cortex (AC), respectively. Cross-sensory interference with sensory processing was observed in both the LOC and AC, in accordance with previous results of unisensory perceptual load studies. The present neuroimaging results therefore warrant the extension of perceptual load theory from a unisensory to a cross-sensory context: a validation of this cross-sensory interference effect through behavioural measures would consolidate the findings.

  19. Developmental changes in the inferior frontal cortex for selecting semantic representations

    PubMed Central

    Lee, Shu-Hui; Booth, James R.; Chen, Shiou-Yuan; Chou, Tai-Li

    2012-01-01

Functional magnetic resonance imaging (fMRI) was used to examine the neural correlates of semantic judgments to Chinese words in a group of 10- to 15-year-old Chinese children. Two semantic tasks were used: visual–visual versus visual–auditory presentation. The first word was presented visually (i.e., as a character) and the second word was presented either visually or auditorily, and the participant had to determine whether the two words were related in meaning. Unlike English, Chinese has many homophones, in which each spoken word corresponds to many characters. The visual–auditory task therefore required greater engagement of cognitive control for participants to select a semantically appropriate answer for the second, homophonic word. Weaker association pairs produced greater activation in the mid-ventral region of the left inferior frontal gyrus (BA 45) for both tasks. However, this effect was stronger for the visual–auditory task than for the visual–visual task, and this difference was stronger for older compared with younger children. The findings suggest greater involvement of semantic selection mechanisms in the cross-modal task requiring access to the appropriate meaning of homophonic spoken words, especially for older children. PMID:22337757

  20. FTAP: a Linux-based program for tapping and music experiments.

    PubMed

    Finney, S A

    2001-02-01

    This paper describes FTAP, a flexible data collection system for tapping and music experiments. FTAP runs on standard PC hardware with the Linux operating system and can process input keystrokes and auditory output with reliable millisecond resolution. It uses standard MIDI devices for input and output and is particularly flexible in the area of auditory feedback manipulation. FTAP can run a wide variety of experiments, including synchronization/continuation tasks (Wing & Kristofferson, 1973), synchronization tasks combined with delayed auditory feedback (Aschersleben & Prinz, 1997), continuation tasks with isolated feedback perturbations (Wing, 1977), and complex alterations of feedback in music performance (Finney, 1997). Such experiments have often been implemented with custom hardware and software systems, but with FTAP they can be specified by a simple ASCII text parameter file. FTAP is available at no cost in source-code form.

  1. Relation between measures of speech-in-noise performance and measures of efferent activity

    NASA Astrophysics Data System (ADS)

    Smith, Brad; Harkrider, Ashley; Burchfield, Samuel; Nabelek, Anna

    2003-04-01

Individual differences in auditory perceptual abilities in noise are well documented, but the factors causing such variability are unclear. The purpose of this study was to determine whether individual differences in responses measured from the auditory efferent system were correlated with individual variations in speech-in-noise performance. The relation between behavioral performance on three speech-in-noise tasks and two objective measures of the efferent auditory system was examined in thirty normal-hearing young adults. Two of the speech-in-noise tasks measured an acceptable noise level, the maximum level of speech-babble noise that a subject is willing to accept while listening to a story. For these, the acceptable noise level was evaluated using both an ipsilateral (story and noise in the same ear) and a contralateral (story and noise in opposite ears) paradigm. The third speech-in-noise task evaluated speech recognition using monosyllabic words presented in competing speech babble. Auditory efferent activity was assessed by examining the suppression of click-evoked otoacoustic emissions following the introduction of a contralateral broad-band stimulus, and the activity of the ipsilateral and contralateral acoustic reflex arcs was evaluated using tones and broad-band noise. Results will be discussed relative to current theories of speech-in-noise performance and auditory inhibitory processes.

  2. Executive Function, Visual Attention and the Cocktail Party Problem in Musicians and Non-Musicians.

    PubMed

    Clayton, Kameron K; Swaminathan, Jayaganesh; Yazdanbakhsh, Arash; Zuk, Jennifer; Patel, Aniruddh D; Kidd, Gerald

    2016-01-01

    The goal of this study was to investigate how cognitive factors influence performance in a multi-talker, "cocktail-party" like environment in musicians and non-musicians. This was achieved by relating performance in a spatial hearing task to cognitive processing abilities assessed using measures of executive function (EF) and visual attention in musicians and non-musicians. For the spatial hearing task, a speech target was presented simultaneously with two intelligible speech maskers that were either colocated with the target (0° azimuth) or were symmetrically separated from the target in azimuth (at ±15°). EF assessment included measures of cognitive flexibility, inhibition control and auditory working memory. Selective attention was assessed in the visual domain using a multiple object tracking task (MOT). For the MOT task, the observers were required to track target dots (n = 1,2,3,4,5) in the presence of interfering distractor dots. Musicians performed significantly better than non-musicians in the spatial hearing task. For the EF measures, musicians showed better performance on measures of auditory working memory compared to non-musicians. Furthermore, across all individuals, a significant correlation was observed between performance on the spatial hearing task and measures of auditory working memory. This result suggests that individual differences in performance in a cocktail party-like environment may depend in part on cognitive factors such as auditory working memory. Performance in the MOT task did not differ between groups. However, across all individuals, a significant correlation was found between performance in the MOT and spatial hearing tasks. A stepwise multiple regression analysis revealed that musicianship and performance on the MOT task significantly predicted performance on the spatial hearing task. 
Overall, these findings confirm the relationship between musicianship and cognitive factors including domain-general selective attention and working memory in solving the "cocktail party problem".

  3. Executive Function, Visual Attention and the Cocktail Party Problem in Musicians and Non-Musicians

    PubMed Central

    Clayton, Kameron K.; Swaminathan, Jayaganesh; Yazdanbakhsh, Arash; Zuk, Jennifer; Patel, Aniruddh D.; Kidd, Gerald

    2016-01-01

The goal of this study was to investigate how cognitive factors influence performance in a multi-talker, "cocktail-party"-like environment in musicians and non-musicians. This was achieved by relating performance on a spatial hearing task to cognitive processing abilities assessed using measures of executive function (EF) and visual attention. For the spatial hearing task, a speech target was presented simultaneously with two intelligible speech maskers that were either colocated with the target (0° azimuth) or symmetrically separated from the target in azimuth (at ±15°). EF assessment included measures of cognitive flexibility, inhibitory control, and auditory working memory. Selective attention was assessed in the visual domain using a multiple object tracking (MOT) task, in which observers were required to track target dots (n = 1-5) in the presence of interfering distractor dots. Musicians performed significantly better than non-musicians on the spatial hearing task. For the EF measures, musicians showed better performance on measures of auditory working memory than non-musicians. Furthermore, across all individuals, a significant correlation was observed between performance on the spatial hearing task and measures of auditory working memory. This result suggests that individual differences in performance in a cocktail-party-like environment may depend in part on cognitive factors such as auditory working memory. Performance on the MOT task did not differ between groups. However, across all individuals, a significant correlation was found between performance on the MOT and spatial hearing tasks. A stepwise multiple regression analysis revealed that musicianship and performance on the MOT task significantly predicted performance on the spatial hearing task. Overall, these findings confirm the relationship between musicianship and cognitive factors, including domain-general selective attention and working memory, in solving the "cocktail party problem". PMID:27384330
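The stepwise multiple regression reported above can be illustrated with a small forward-selection sketch. Everything here is hypothetical: the variable names (`musician`, `mot`, `age`), the synthetic data, and the `min_gain` stopping rule are illustrative stand-ins, not the study's actual analysis or entry criteria.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary-least-squares fit of y on X (with intercept)."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

def forward_stepwise(predictors, y, min_gain=0.01):
    """Greedy forward selection: repeatedly add the predictor that most
    improves R^2, stopping once the gain falls below min_gain."""
    selected, remaining, best_r2 = [], list(predictors), 0.0
    while remaining:
        gains = []
        for name in remaining:
            cols = [predictors[n] for n in selected + [name]]
            gains.append((r_squared(np.column_stack(cols), y), name))
        r2, name = max(gains)
        if r2 - best_r2 < min_gain:
            break
        selected.append(name)
        remaining.remove(name)
        best_r2 = r2
    return selected, best_r2

# Synthetic data in the spirit of the study: a spatial-hearing score
# driven by musicianship (binary) and MOT accuracy, plus an irrelevant
# filler predictor ("age") that forward selection should reject.
rng = np.random.default_rng(0)
n = 60
musician = rng.integers(0, 2, n).astype(float)
mot = rng.normal(0, 1, n)
age = rng.normal(0, 1, n)
spatial = 1.5 * musician + 1.0 * mot + rng.normal(0, 0.5, n)

preds = {"musician": musician, "mot": mot, "age": age}
selected, r2 = forward_stepwise(preds, spatial)
```

With real data one would use the measured spatial-hearing scores and p-value-based entry criteria, as in standard stepwise procedures; the greedy R^2 rule above only conveys the logic.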

  4. Interhemispheric interaction expands attentional capacity in an auditory selective attention task.

    PubMed

    Scalf, Paige E; Banich, Marie T; Erickson, Andrew B

    2009-04-01

    Previous work from our laboratory indicates that interhemispheric interaction (IHI) functionally increases the attentional capacity available to support performance on visual tasks (Banich in The asymmetrical brain, pp 261-302, 2003). Because manipulations of both computational complexity and selection demand alter the benefits of IHI to task performance, we argue that IHI may be a general strategy for meeting increases in attentional demand. Other researchers, however, have suggested that the apparent benefits of IHI to attentional capacity are an epiphenomenon of the organization of the visual system (Fecteau and Enns in Neuropsychologia 43:1412-1428, 2005; Marsolek et al. in Neuropsychologia 40:1983-1999, 2002). In the current experiment, we investigate whether IHI increases attentional capacity outside the visual system by manipulating the selection demands of an auditory temporal pattern-matching task. We find that IHI expands attentional capacity in the auditory system. This suggests that the benefits of requiring IHI derive from a functional increase in attentional capacity rather than the organization of a specific sensory modality.

  5. Word learning in deaf children with cochlear implants: effects of early auditory experience.

    PubMed

    Houston, Derek M; Stewart, Jessica; Moberly, Aaron; Hollich, George; Miyamoto, Richard T

    2012-05-01

    Word-learning skills were tested in normal-hearing 12- to 40-month-olds and in deaf 22- to 40-month-olds 12 to 18 months after cochlear implantation. Using the Intermodal Preferential Looking Paradigm (IPLP), children were tested for their ability to learn two novel-word/novel-object pairings. Normal-hearing children demonstrated learning on this task at approximately 18 months of age and older. For deaf children, performance on this task was significantly correlated with early auditory experience: Children whose cochlear implants were switched on by 14 months of age or who had relatively more hearing before implantation demonstrated learning in this task, but later implanted profoundly deaf children did not. Performance on this task also correlated with later measures of vocabulary size. Taken together, these findings suggest that early auditory experience facilitates word learning and that the IPLP may be useful for identifying children who may be at high risk for poor vocabulary development. © 2012 Blackwell Publishing Ltd.

  6. Working memory for pitch, timbre, and words

    PubMed Central

    Tillmann, Barbara

    2012-01-01

Aiming to further our understanding of fundamental mechanisms of auditory working memory (WM), the present study compared performance for three auditory materials (words, tones, timbres). In a forward recognition task (Experiment 1), participants indicated whether the order of the items in the second sequence was the same as in the first sequence. In a backward recognition task (Experiment 2), participants indicated whether the items of the second sequence were played in the correct backward order. In Experiment 3, participants performed an articulatory suppression task during the retention delay of the backward task. To investigate potential length effects, the number of items per sequence was manipulated. The overall findings underline the benefit of a cross-material experimental approach and suggest that human auditory WM is not a unitary system. Whereas WM processes for timbres differed from those for tones and words, both similarities and differences were observed for words and tones: Both types of stimuli appear to rely on rehearsal mechanisms, but might differ in the involved sensorimotor codes. PMID:23116413

  7. Word learning in deaf children with cochlear implants: effects of early auditory experience

    PubMed Central

    Houston, Derek M.; Stewart, Jessica; Moberly, Aaron; Hollich, George; Miyamoto, Richard T.

    2013-01-01

    Word-learning skills were tested in normal-hearing 12- to 40-month-olds and in deaf 22- to 40-month-olds 12 to 18 months after cochlear implantation. Using the Intermodal Preferential Looking Paradigm (IPLP), children were tested for their ability to learn two novel-word/novel-object pairings. Normal-hearing children demonstrated learning on this task at approximately 18 months of age and older. For deaf children, performance on this task was significantly correlated with early auditory experience: Children whose cochlear implants were switched on by 14 months of age or who had relatively more hearing before implantation demonstrated learning in this task, but later implanted profoundly deaf children did not. Performance on this task also correlated with later measures of vocabulary size. Taken together, these findings suggest that early auditory experience facilitates word learning and that the IPLP may be useful for identifying children who may be at high risk for poor vocabulary development. PMID:22490184

  8. Auditory-Motor Processing of Speech Sounds

    PubMed Central

    Möttönen, Riikka; Dutton, Rebekah; Watkins, Kate E.

    2013-01-01

    The motor regions that control movements of the articulators activate during listening to speech and contribute to performance in demanding speech recognition and discrimination tasks. Whether the articulatory motor cortex modulates auditory processing of speech sounds is unknown. Here, we aimed to determine whether the articulatory motor cortex affects the auditory mechanisms underlying discrimination of speech sounds in the absence of demanding speech tasks. Using electroencephalography, we recorded responses to changes in sound sequences, while participants watched a silent video. We also disrupted the lip or the hand representation in left motor cortex using transcranial magnetic stimulation. Disruption of the lip representation suppressed responses to changes in speech sounds, but not piano tones. In contrast, disruption of the hand representation had no effect on responses to changes in speech sounds. These findings show that disruptions within, but not outside, the articulatory motor cortex impair automatic auditory discrimination of speech sounds. The findings provide evidence for the importance of auditory-motor processes in efficient neural analysis of speech sounds. PMID:22581846

  9. Selective entrainment of brain oscillations drives auditory perceptual organization.

    PubMed

    Costa-Faidella, Jordi; Sussman, Elyse S; Escera, Carles

    2017-10-01

Perceptual sound organization supports our ability to make sense of the complex acoustic environment, to understand speech and to enjoy music. However, the neuronal mechanisms underlying the subjective experience of perceiving univocal auditory patterns that can be listened to, despite hearing all sounds in a scene, are poorly understood. Here we investigated the manner in which competing sound organizations are simultaneously represented by specific brain activity patterns and the way attention and task demands prime the internal model generating the current percept. Using a selective attention task on ambiguous auditory stimulation coupled with EEG recordings, we found that the phase of low-frequency oscillatory activity dynamically tracks multiple sound organizations concurrently. However, whereas the representation of ignored sound patterns is circumscribed to auditory regions, large-scale oscillatory entrainment in auditory, sensory-motor and executive-control network areas reflects the active perceptual organization, thereby giving rise to the subjective experience of a unitary percept. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Visual and auditory accessory stimulus offset and the Simon effect.

    PubMed

    Nishimura, Akio; Yokosawa, Kazuhiko

    2010-10-01

We investigated how the disappearance of a task-irrelevant stimulus located on the right or left side affects right and left responses. Participants pressed a right or left response key on the basis of the color of a centrally located visual target. Visual (Experiment 1) or auditory (Experiment 2) task-irrelevant accessory stimuli appeared or disappeared at locations to the right or left of the central target. In Experiment 1, responses were faster when the onset or offset of the visual accessory stimulus was spatially congruent with the response. In Experiment 2, responses were again faster when the onset of the auditory accessory stimulus and the response were on the same side. However, responses were slightly slower when the offset of the auditory accessory stimulus and the response were on the same side than when they were on opposite sides. These findings indicate that transient change information is crucial for the visual Simon effect, whereas sustained stimulation from an ongoing stimulus also contributes to the auditory Simon effect.

  11. Neurophysiological Effects of Meditation Based on Evoked and Event Related Potential Recordings

    PubMed Central

    Singh, Nilkamal; Telles, Shirley

    2015-01-01

Evoked potentials (EPs) are a relatively noninvasive method to assess the integrity of sensory pathways. As the neural generators for most of the components are relatively well worked out, EPs have been used to understand the changes occurring during meditation. Event-related potentials (ERPs) yield useful information about the response to tasks, usually assessing attention. A brief review of the literature yielded eleven studies on EPs and seventeen on ERPs from 1978 to 2014. The EP studies covered short-, mid-, and long-latency EPs, using both auditory and visual modalities. ERP studies reported the effects of meditation on tasks such as the auditory oddball paradigm, the attentional blink task, mismatch negativity, and affective picture viewing, among others. Both EPs and ERPs were recorded in the several meditation practices detailed in the review. Maximum changes occurred in mid-latency (auditory) EPs, suggesting that the largest changes occur in the corresponding neural generators in the thalamus, thalamic radiations, and primary auditory cortical areas. ERP studies showed that meditation can increase attention and enhance the efficiency of brain resource allocation, with greater emotional control. PMID:26137479

  12. Neurophysiological Effects of Meditation Based on Evoked and Event Related Potential Recordings.

    PubMed

    Singh, Nilkamal; Telles, Shirley

    2015-01-01

Evoked potentials (EPs) are a relatively noninvasive method to assess the integrity of sensory pathways. As the neural generators for most of the components are relatively well worked out, EPs have been used to understand the changes occurring during meditation. Event-related potentials (ERPs) yield useful information about the response to tasks, usually assessing attention. A brief review of the literature yielded eleven studies on EPs and seventeen on ERPs from 1978 to 2014. The EP studies covered short-, mid-, and long-latency EPs, using both auditory and visual modalities. ERP studies reported the effects of meditation on tasks such as the auditory oddball paradigm, the attentional blink task, mismatch negativity, and affective picture viewing, among others. Both EPs and ERPs were recorded in the several meditation practices detailed in the review. Maximum changes occurred in mid-latency (auditory) EPs, suggesting that the largest changes occur in the corresponding neural generators in the thalamus, thalamic radiations, and primary auditory cortical areas. ERP studies showed that meditation can increase attention and enhance the efficiency of brain resource allocation, with greater emotional control.

  13. Age-Related Interference between the Selection of Input-Output Modality Mappings and Postural Control—a Pilot Study

    PubMed Central

    Stelzel, Christine; Schauenburg, Gesche; Rapp, Michael A.; Heinzel, Stephan; Granacher, Urs

    2017-01-01

Age-related decline in executive functions and postural control due to degenerative processes in the central nervous system has been related to increased fall risk in old age. Many studies have shown cognitive-postural dual-task interference in old adults, but research on the role of specific executive functions in this context has only just begun. In this study, we addressed whether postural control is impaired by the coordination of concurrent response-selection processes, manipulated via the compatibility of input-output modality mappings, as compared with impairments related to working-memory load in the comparison of cognitive dual and single tasks. Specifically, we measured total center of pressure (CoP) displacements in healthy female participants aged 19–30 and 66–84 years while they performed different versions of a spatial one-back working memory task during semi-tandem stance on an unstable surface (i.e., a balance pad) placed on a force plate. The specific working-memory tasks comprised: (i) modality compatible single tasks (i.e., visual-manual or auditory-vocal tasks), (ii) modality compatible dual tasks (i.e., visual-manual and auditory-vocal tasks), (iii) modality incompatible single tasks (i.e., visual-vocal or auditory-manual tasks), and (iv) modality incompatible dual tasks (i.e., visual-vocal and auditory-manual tasks). In addition, participants performed the same tasks while sitting. As expected from previous research, old adults showed generally impaired performance under high working-memory load (i.e., dual vs. single one-back task). In addition, modality compatibility affected one-back performance in dual-task but not single-task conditions, with strikingly pronounced impairments in old adults. Notably, the modality incompatible dual task also resulted in a selective increase in total CoP displacements compared with the modality compatible dual task in the old but not the young participants. These results suggest that, in addition to effects of working-memory load, processes related to simultaneously overcoming special linkages between input and output modalities interfere with postural control in old but not young female adults. Our preliminary data provide further evidence for the involvement of cognitive control processes in postural tasks. PMID:28484411

  14. Auditory-motor interaction revealed by fMRI: speech, music, and working memory in area Spt.

    PubMed

    Hickok, Gregory; Buchsbaum, Bradley; Humphries, Colin; Muftuler, Tugan

    2003-07-01

    The concept of auditory-motor interaction pervades speech science research, yet the cortical systems supporting this interface have not been elucidated. Drawing on experimental designs used in recent work in sensory-motor integration in the cortical visual system, we used fMRI in an effort to identify human auditory regions with both sensory and motor response properties, analogous to single-unit responses in known visuomotor integration areas. The sensory phase of the task involved listening to speech (nonsense sentences) or music (novel piano melodies); the "motor" phase of the task involved covert rehearsal/humming of the auditory stimuli. A small set of areas in the superior temporal and temporal-parietal cortex responded both during the listening phase and the rehearsal/humming phase. A left lateralized region in the posterior Sylvian fissure at the parietal-temporal boundary, area Spt, showed particularly robust responses to both phases of the task. Frontal areas also showed combined auditory + rehearsal responsivity consistent with the claim that the posterior activations are part of a larger auditory-motor integration circuit. We hypothesize that this circuit plays an important role in speech development as part of the network that enables acoustic-phonetic input to guide the acquisition of language-specific articulatory-phonetic gestures; this circuit may play a role in analogous musical abilities. In the adult, this system continues to support aspects of speech production, and, we suggest, supports verbal working memory.

  15. Spatial learning while navigating with severely degraded viewing: The role of attention and mobility monitoring

    PubMed Central

    Rand, Kristina M.; Creem-Regehr, Sarah H.; Thompson, William B.

    2015-01-01

    The ability to navigate without getting lost is an important aspect of quality of life. In five studies, we evaluated how spatial learning is affected by the increased demands of keeping oneself safe while walking with degraded vision (mobility monitoring). We proposed that safe low-vision mobility requires attentional resources, providing competition for those needed to learn a new environment. In Experiments 1 and 2 participants navigated along paths in a real-world indoor environment with simulated degraded vision or normal vision. Memory for object locations seen along the paths was better with normal compared to degraded vision. With degraded vision, memory was better when participants were guided by an experimenter (low monitoring demands) versus unguided (high monitoring demands). In Experiments 3 and 4, participants walked while performing an auditory task. Auditory task performance was superior with normal compared to degraded vision. With degraded vision, auditory task performance was better when guided compared to unguided. In Experiment 5, participants performed both the spatial learning and auditory tasks under degraded vision. Results showed that attention mediates the relationship between mobility-monitoring demands and spatial learning. These studies suggest that more attention is required and spatial learning is impaired when navigating with degraded viewing. PMID:25706766

  16. Spatial and identity negative priming in audition: evidence of feature binding in auditory spatial memory.

    PubMed

    Mayr, Susanne; Buchner, Axel; Möller, Malte; Hauke, Robert

    2011-08-01

Two experiments are reported with identical auditory stimulation in three-dimensional space but with different instructions. Participants localized a cued sound (Experiment 1) or identified a sound at a cued location (Experiment 2). A distractor sound at another location had to be ignored. The prime distractor and the probe target sound were manipulated with respect to sound identity (repeated vs. changed) and location (repeated vs. changed). The localization task revealed a symmetric pattern of partial repetition costs: Participants were impaired on trials with identity-location mismatches between the prime distractor and probe target; that is, when either the sound was repeated but not the location, or vice versa. The identification task revealed an asymmetric pattern of partial repetition costs: Responding was slowed when the prime distractor sound was repeated as the probe target but at another location, whereas identity changes at the same location did not impair responding. Additionally, there was evidence of retrieval of incompatible prime responses in the identification task. It is concluded that feature binding of auditory prime distractor information takes place regardless of whether the task is to identify or to locate a sound. Instructions determine the kind of identity-location mismatch that is detected. Identity information predominates over location information in auditory memory.

  17. [Children with specific language impairment: electrophysiological and pedaudiological findings].

    PubMed

    Rinker, T; Hartmann, K; Smith, E; Reiter, R; Alku, P; Kiefer, M; Brosch, S

    2014-08-01

Auditory deficits may be at the core of the language delay in children with specific language impairment (SLI). It was therefore hypothesized that children with SLI would perform poorly on 4 tests typically used to diagnose central auditory processing disorder (CAPD), as well as in the processing of phonetic and tone stimuli in an electrophysiological experiment. 14 children with SLI (mean age 61.7 months) and 16 children without SLI (mean age 64.9 months) were tested with 4 tasks: non-word repetition, language discrimination in noise, directional hearing, and dichotic listening. The electrophysiological mismatch negativity (MMN) recordings employed sine tones (600 vs. 650 Hz) and phonetic stimuli (/ε/ vs. /e/). Control children and children with SLI differed significantly in the non-word repetition and dichotic listening tasks, but not in the two other tasks. Only the control children discriminated the frequency difference in the MMN experiment. The phonetic difference was discriminated by both groups, although the effects were longer lasting for the control children; the group differences were not significant. Children with SLI thus show limitations in auditory processing when a complex task involves repeating unfamiliar or difficult material, and they show subtle deficits in auditory processing at the neural level. © Georg Thieme Verlag KG Stuttgart · New York.
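The tone stimuli described in the record above (600 vs. 650 Hz sine tones in an MMN design) are simple to synthesize. The sketch below uses typical but here assumed parameters (44.1 kHz sampling, 100 ms tones, 5 ms ramps, 15% deviants, no repeated deviants); the abstract does not report these details.

```python
import numpy as np

def sine_tone(freq_hz, dur_s=0.1, fs=44100, ramp_s=0.005):
    """Pure tone with short raised-cosine on/off ramps to avoid clicks."""
    t = np.arange(int(dur_s * fs)) / fs
    tone = np.sin(2 * np.pi * freq_hz * t)
    n_ramp = int(ramp_s * fs)
    env = np.ones_like(tone)
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    env[:n_ramp] = ramp          # fade in
    env[-n_ramp:] = ramp[::-1]   # fade out
    return tone * env

def oddball_sequence(n_trials=500, p_deviant=0.15, seed=1):
    """0/1 labels (0 = standard 600 Hz, 1 = deviant 650 Hz) with no two
    deviants in a row, as is common in MMN oddball designs."""
    rng = np.random.default_rng(seed)
    labels = []
    for _ in range(n_trials):
        if labels and labels[-1] == 1:
            labels.append(0)     # never repeat a deviant
        else:
            labels.append(int(rng.random() < p_deviant))
    return labels

standard = sine_tone(600)
deviant = sine_tone(650)
labels = oddball_sequence()
```

The resulting arrays could be written to a sound device or file; the no-repeat constraint keeps the deviant genuinely rare and unpredictable.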

  18. Prestimulus influences on auditory perception from sensory representations and decision processes.

    PubMed

    Kayser, Stephanie J; McNair, Steven W; Kayser, Christoph

    2016-04-26

    The qualities of perception depend not only on the sensory inputs but also on the brain state before stimulus presentation. Although the collective evidence from neuroimaging studies for a relation between prestimulus state and perception is strong, the interpretation in the context of sensory computations or decision processes has remained difficult. In the auditory system, for example, previous studies have reported a wide range of effects in terms of the perceptually relevant frequency bands and state parameters (phase/power). To dissociate influences of state on earlier sensory representations and higher-level decision processes, we collected behavioral and EEG data in human participants performing two auditory discrimination tasks relying on distinct acoustic features. Using single-trial decoding, we quantified the relation between prestimulus activity, relevant sensory evidence, and choice in different task-relevant EEG components. Within auditory networks, we found that phase had no direct influence on choice, whereas power in task-specific frequency bands affected the encoding of sensory evidence. Within later-activated frontoparietal regions, theta and alpha phase had a direct influence on choice, without involving sensory evidence. These results delineate two consistent mechanisms by which prestimulus activity shapes perception. However, the timescales of the relevant neural activity depend on the specific brain regions engaged by the respective task.
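The single-trial decoding logic, relating a prestimulus feature to choice, can be sketched with a plain logistic-regression decoder. The data below are synthetic, and the effect built into them (power predicts choice, phase does not) merely mirrors the reported dissociation; none of this reproduces the study's actual features or pipeline.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Gradient-descent logistic regression (a minimal choice decoder)."""
    Xb = np.column_stack([np.ones(len(y)), X])   # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)        # average gradient step
    return w

def decode_accuracy(X, y, w):
    Xb = np.column_stack([np.ones(len(y)), X])
    return np.mean((Xb @ w > 0).astype(int) == y)

# Synthetic single-trial data: choice depends on a prestimulus power
# feature but not on prestimulus phase.
rng = np.random.default_rng(3)
n = 400
power = rng.normal(0, 1, n)
phase = rng.uniform(-np.pi, np.pi, n)
choice = (power + rng.normal(0, 1, n) > 0).astype(int)

phase_feats = np.column_stack([np.sin(phase), np.cos(phase)])
acc_pow = decode_accuracy(power[:, None], choice,
                          fit_logistic(power[:, None], choice))
acc_pha = decode_accuracy(phase_feats, choice,
                          fit_logistic(phase_feats, choice))
```

In a real analysis the decoder would be cross-validated and applied per EEG component and frequency band; here the power decoder lands well above chance while the phase decoder hovers near 50%.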

  19. Prestimulus influences on auditory perception from sensory representations and decision processes

    PubMed Central

    McNair, Steven W.

    2016-01-01

    The qualities of perception depend not only on the sensory inputs but also on the brain state before stimulus presentation. Although the collective evidence from neuroimaging studies for a relation between prestimulus state and perception is strong, the interpretation in the context of sensory computations or decision processes has remained difficult. In the auditory system, for example, previous studies have reported a wide range of effects in terms of the perceptually relevant frequency bands and state parameters (phase/power). To dissociate influences of state on earlier sensory representations and higher-level decision processes, we collected behavioral and EEG data in human participants performing two auditory discrimination tasks relying on distinct acoustic features. Using single-trial decoding, we quantified the relation between prestimulus activity, relevant sensory evidence, and choice in different task-relevant EEG components. Within auditory networks, we found that phase had no direct influence on choice, whereas power in task-specific frequency bands affected the encoding of sensory evidence. Within later-activated frontoparietal regions, theta and alpha phase had a direct influence on choice, without involving sensory evidence. These results delineate two consistent mechanisms by which prestimulus activity shapes perception. However, the timescales of the relevant neural activity depend on the specific brain regions engaged by the respective task. PMID:27071110

  20. Visual and auditory steady-state responses in attention-deficit/hyperactivity disorder.

    PubMed

    Khaleghi, Ali; Zarafshan, Hadi; Mohammadi, Mohammad Reza

    2018-05-22

We designed a study to investigate the patterns of the steady-state visual evoked potential (SSVEP) and auditory steady-state response (ASSR) in adolescents with attention-deficit/hyperactivity disorder (ADHD) performing a motor response inhibition task. Thirty 12- to 18-year-old adolescents with ADHD and 30 healthy control adolescents underwent an electroencephalogram (EEG) examination during steady-state stimulation while performing a stop-signal task. We then calculated the amplitude and phase of the steady-state responses in both the visual and auditory modalities. Results showed that adolescents with ADHD performed significantly more poorly in the stop-signal task during both visual and auditory stimulation. The SSVEP amplitude of the ADHD group was larger than that of the healthy control group in most regions of the brain, whereas the ASSR amplitude of the ADHD group was smaller than that of the healthy control group in some brain regions (e.g., the right hemisphere). In conclusion, the poorer task performance (especially inattention) and the neurophysiological results in ADHD point to a possible impairment in the interconnection of the association cortices in the parietal and temporal lobes and the prefrontal cortex. The motor control problems in ADHD may also arise from neural deficits in the frontoparietal and occipitoparietal systems and in other brain structures such as the cerebellum.
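Extracting the amplitude and phase of a steady-state response at the stimulation frequency is typically done from the Fourier spectrum of an epoch. The sketch below is a generic single-channel version under assumed parameters (500 Hz sampling, 4 s epoch, a 40 Hz ASSR-like signal); it is not the authors' pipeline.

```python
import numpy as np

def ssr_amp_phase(eeg, fs, f_stim):
    """Amplitude and phase of a steady-state response at the stimulation
    frequency, from a Hann-windowed FFT of one single-channel epoch."""
    n = len(eeg)
    w = np.hanning(n)
    spectrum = np.fft.rfft(eeg * w)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - f_stim))        # bin nearest f_stim
    amp = 2.0 * np.abs(spectrum[k]) / w.sum()    # window-corrected amplitude
    phase = np.angle(spectrum[k])
    return amp, phase

# Synthetic check: a 2-unit, 40 Hz "ASSR" buried in unit-variance noise.
fs, f_stim = 500, 40
t = np.arange(0, 4.0, 1.0 / fs)                  # 4-second epoch
rng = np.random.default_rng(2)
eeg = 2.0 * np.sin(2 * np.pi * f_stim * t) + rng.normal(0.0, 1.0, t.size)
amp, phase = ssr_amp_phase(eeg, fs, f_stim)
```

A full SSVEP/ASSR analysis would average over trials (or use a spectral F-test against neighboring bins) rather than rely on a single epoch, but the per-epoch amplitude/phase estimate is the common building block.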

  1. Time course of the influence of musical expertise on the processing of vocal and musical sounds.

    PubMed

    Rigoulot, S; Pell, M D; Armony, J L

    2015-04-02

    Previous functional magnetic resonance imaging (fMRI) studies have suggested that different cerebral regions preferentially process human voice and music. Yet, little is known on the temporal course of the brain processes that decode the category of sounds and how the expertise in one sound category can impact these processes. To address this question, we recorded the electroencephalogram (EEG) of 15 musicians and 18 non-musicians while they were listening to short musical excerpts (piano and violin) and vocal stimuli (speech and non-linguistic vocalizations). The task of the participants was to detect noise targets embedded within the stream of sounds. Event-related potentials revealed an early differentiation of sound category, within the first 100 ms after the onset of the sound, with mostly increased responses to musical sounds. Importantly, this effect was modulated by the musical background of participants, as musicians were more responsive to music sounds than non-musicians, consistent with the notion that musical training increases sensitivity to music. In late temporal windows, brain responses were enhanced in response to vocal stimuli, but musicians were still more responsive to music. These results shed new light on the temporal course of neural dynamics of auditory processing and reveal how it is impacted by the stimulus category and the expertise of participants. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  2. Switching in the Cocktail Party: Exploring Intentional Control of Auditory Selective Attention

    ERIC Educational Resources Information Center

    Koch, Iring; Lawo, Vera; Fels, Janina; Vorlander, Michael

    2011-01-01

    Using a novel variant of dichotic selective listening, we examined the control of auditory selective attention. In our task, subjects had to respond selectively to one of two simultaneously presented auditory stimuli (number words), always spoken by a female and a male speaker, by performing a numerical size categorization. The gender of the…

  3. Auditory Temporal-Organization Abilities in School-Age Children with Peripheral Hearing Loss

    ERIC Educational Resources Information Center

    Koravand, Amineh; Jutras, Benoit

    2013-01-01

    Purpose: The objective was to assess auditory sequential organization (ASO) ability in children with and without hearing loss. Method: Forty children 9 to 12 years old participated in the study: 12 with sensory hearing loss (HL), 12 with central auditory processing disorder (CAPD), and 16 with normal hearing. They performed an ASO task in which…

  4. Covert Auditory Spatial Orienting: An Evaluation of the Spatial Relevance Hypothesis

    ERIC Educational Resources Information Center

    Roberts, Katherine L.; Summerfield, A. Quentin; Hall, Deborah A.

    2009-01-01

    The spatial relevance hypothesis (J. J. McDonald & L. M. Ward, 1999) proposes that covert auditory spatial orienting can only be beneficial to auditory processing when task stimuli are encoded spatially. We present a series of experiments that evaluate 2 key aspects of the hypothesis: (a) that "reflexive activation of location-sensitive neurons is…

  5. The Influence of Tactile Cognitive Maps on Auditory Space Perception in Sighted Persons.

    PubMed

    Tonelli, Alessia; Gori, Monica; Brayda, Luca

    2016-01-01

We have recently shown that vision is important for improving spatial auditory cognition. In this study, we investigated whether touch is as effective as vision in creating a cognitive map of a soundscape. In particular, we tested whether a mental representation of a room, obtained through tactile exploration of a 3D model, can influence performance in a complex auditory task in sighted people. We tested two groups of blindfolded sighted people (one experimental and one control group) in an auditory space bisection task. In the first group, the bisection task was performed three times: the participants explored the 3D tactile model of the room with their hands and were led along the perimeter of the room between the first and second executions of the space bisection, and were then allowed to remove the blindfold for a few minutes and look at the room between the second and third executions. The control group instead repeated the space bisection task twice in a row without any environmental exploration in between. Taking the first execution as a baseline, we found an improvement in precision after tactile exploration of the 3D model. Interestingly, no additional gain was obtained when room observation followed tactile exploration, suggesting that visual cues added nothing once spatial tactile cues were internalized. No improvement was found between the first and second executions in the control group, suggesting that the improvement was not due to task learning. Our results show that tactile information modulates the precision of an ongoing auditory space task as well as visual information does. This suggests that cognitive maps elicited by touch may participate in cross-modal calibration and supra-modal representations of space that increase implicit knowledge about sound propagation.

  6. The Effects of Audiovisual Inputs on Solving the Cocktail Party Problem in the Human Brain: An fMRI Study.

    PubMed

    Li, Yuanqing; Wang, Fangyi; Chen, Yongbin; Cichocki, Andrzej; Sejnowski, Terrence

    2017-09-25

    At cocktail parties, our brains often simultaneously receive visual and auditory information. Although the cocktail party problem has been widely investigated under auditory-only settings, the effects of audiovisual inputs have not. This study explored the effects of audiovisual inputs in a simulated cocktail party. In our fMRI experiment, each congruent audiovisual stimulus was a synthesis of 2 facial movie clips, each of which could be classified into 1 of 2 emotion categories (crying and laughing). Visual-only (faces) and auditory-only stimuli (voices) were created by extracting the visual and auditory contents from the synthesized audiovisual stimuli. Subjects were instructed to selectively attend to 1 of the 2 objects contained in each stimulus and to judge its emotion category in the visual-only, auditory-only, and audiovisual conditions. The neural representations of the emotion features were assessed by calculating decoding accuracy and brain pattern-related reproducibility index based on the fMRI data. We compared the audiovisual condition with the visual-only and auditory-only conditions and found that audiovisual inputs enhanced the neural representations of emotion features of the attended objects instead of the unattended objects. This enhancement might partially explain the benefits of audiovisual inputs for the brain to solve the cocktail party problem. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  7. Effects of age and auditory and visual dual tasks on closed-road driving performance.

    PubMed

    Chaparro, Alex; Wood, Joanne M; Carberry, Trent

    2005-08-01

    This study investigated how the driving performance of young and old participants is affected by visual and auditory secondary tasks on a closed driving course. Twenty-eight participants in two age groups (younger, mean age = 27.3 years; older, mean age = 69.2 years) drove around a 5.1-km closed-road circuit under both single and dual task conditions. Measures of driving performance included detection and identification of road signs, detection and avoidance of large low-contrast road hazards, gap judgment, lane keeping, and time to complete the course. The dual task required participants to verbally report the sums of pairs of single-digit numbers presented through either a computer speaker (auditorily) or a dashboard-mounted monitor (visually) while driving. Participants also completed a vision and cognitive screening battery, including LogMAR visual acuity, Pelli-Robson letter contrast sensitivity, the Trails test, and the Digit Symbol Substitution (DSS) test. Drivers reported significantly fewer signs, hit more road hazards, misjudged more gaps, and took longer to complete the course under the dual task (visual and auditory) conditions than under the single task condition. The older participants also reported significantly fewer road signs and drove significantly more slowly than the younger participants, and this was exacerbated in the visual dual task condition. Regression analysis revealed that cognitive aging (measured by the DSS and Trails tests), rather than chronologic age, was the better predictor of the declines seen in driving performance under dual task conditions. An overall z score was calculated, which took into account both driving and secondary task (summing) performance under the two dual task conditions. Performance was significantly worse for the auditory dual task than for the visual dual task, and the older participants performed significantly worse than the younger ones. These findings demonstrate that multitasking had a significant detrimental impact on driving performance and that cognitive aging was the best predictor of the declines seen under dual task conditions. These results have implications for the use of mobile phones and in-vehicle navigational devices while driving, especially by older adults.
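    An "overall z score" of the kind described above is a standard composite measure. A minimal sketch, assuming each measure is simply standardized and summed per participant (the study's exact weighting is not given in the abstract):

```python
import statistics

def z_scores(values):
    """Standardize a list of raw scores to z scores (sample SD)."""
    mu = statistics.fmean(values)
    sd = statistics.stdev(values)
    return [(v - mu) / sd for v in values]

def composite_z(driving_scores, summing_scores):
    """Overall performance index: per-participant sum of z scores.
    Both measures must be oriented so that higher = better before combining."""
    return [d + s for d, s in zip(z_scores(driving_scores),
                                  z_scores(summing_scores))]
```

    Standardizing first puts driving and summing performance on a common scale, so neither measure dominates the composite merely because of its units.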

  8. Abnormal functional lateralization and activity of language brain areas in typical specific language impairment (developmental dysphasia)

    PubMed Central

    De Guibert, Clément; Maumet, Camille; Jannin, Pierre; Ferré, Jean-Christophe; Tréguier, Catherine; Barillot, Christian; Le Rumeur, Elisabeth; Allaire, Catherine; Biraben, Arnaud

    2011-01-01

    Atypical functional lateralization and specialization for language have been proposed to account for developmental language disorders, yet results from functional neuroimaging studies are sparse and inconsistent. This functional magnetic resonance imaging study compared children with a specific subtype of specific language impairment affecting structural language (n=21) to a matched group of typically developing children, using a panel of four language tasks requiring neither reading nor metalinguistic skills: two auditory lexico-semantic tasks (category fluency and responsive naming) and two visual phonological tasks based on picture naming. Data processing involved spatial normalization to a matched-pairs pediatric template, within-group and between-group analyses, and assessment of laterality indices within regions of interest using single-task and combined-task analyses. Children with specific language impairment exhibited a significant lack of left lateralization in all core language regions (inferior frontal gyrus-opercularis, inferior frontal gyrus-triangularis, supramarginal gyrus, superior temporal gyrus), across single- and combined-task analyses, but no difference in lateralization for the rest of the brain. Between-group comparisons revealed left hypoactivation of Wernicke's area at the posterior superior temporal/supramarginal junction during the responsive naming task, and right hyperactivation encompassing the anterior insula, the adjacent inferior frontal gyrus, and the head of the caudate nucleus during the first phonological task. This study thus provides evidence that this specific subtype of specific language impairment is associated with atypical lateralization and functioning of core language areas. PMID:21719430
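    Laterality indices of the kind assessed here are conventionally computed as LI = (L - R) / (L + R) over per-hemisphere activation measures. A minimal sketch; the inputs (e.g. suprathreshold voxel counts) and the 0.2 cut-off mentioned in the comment are common conventions, not details taken from this study:

```python
def laterality_index(left, right):
    """LI = (L - R) / (L + R): +1 is fully left-lateralized, -1 fully right.
    Inputs are per-hemisphere activation measures for a region of interest
    (e.g. suprathreshold voxel counts). An |LI| cut-off such as 0.2 is often
    used to call a region lateralized, though thresholds vary across studies."""
    total = left + right
    if total == 0:
        raise ValueError("no activation in either hemisphere")
    return (left - right) / total
```

    The index is bounded and dimensionless, which is what allows lateralization to be compared across regions and groups.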

  9. The influence of the Japanese waving cat on the joint spatial compatibility effect: A replication and extension of Dolk, Hommel, Prinz, and Liepelt (2013).

    PubMed

    Puffe, Lydia; Dittrich, Kerstin; Klauer, Karl Christoph

    2017-01-01

    In a joint go/no-go Simon task, each of two participants responds to one of two non-spatial stimulus features by means of a spatially lateralized response. Stimulus position varies horizontally, and responses are faster and more accurate when response side and stimulus position match (compatible trial) than when they mismatch (incompatible trial), defining the social Simon effect or joint spatial compatibility effect. This effect was originally explained in terms of action/task co-representation, assuming that the co-actor's action is automatically co-represented. Recent research by Dolk, Hommel, Prinz, and Liepelt (2013) challenged this account by demonstrating joint spatial compatibility effects in a task setting in which non-social objects such as a Japanese waving cat were present, but no real co-actor. They postulated that any sufficiently salient object induces joint spatial compatibility effects. However, what makes an object sufficiently salient has so far not been well defined. To scrutinize this open question, the current study manipulated auditory and/or visual attention-attracting cues of a Japanese waving cat within an auditory (Experiment 1) and a visual (Experiment 2) joint go/no-go Simon task. Joint spatial compatibility effects occurred in the auditory Simon task only when the cat provided auditory cues, while no joint spatial compatibility effects were found in the visual Simon task. This demonstrates that it is not a sufficiently salient object alone that produces joint spatial compatibility effects, but rather a complex interaction between features of the object and the stimulus material of the joint go/no-go Simon task.

  10. Auditory and Cognitive Factors Associated with Speech-in-Noise Complaints following Mild Traumatic Brain Injury.

    PubMed

    Hoover, Eric C; Souza, Pamela E; Gallun, Frederick J

    2017-04-01

    Auditory complaints following mild traumatic brain injury (MTBI) are common, but few studies have addressed the role of auditory temporal processing in speech recognition complaints. In this study, deficits in understanding speech in a background of speech noise following MTBI were evaluated with the goal of comparing the relative contributions of auditory and nonauditory factors. A matched-groups design was used in which a group of listeners with a history of MTBI was compared to a group matched in age and pure-tone thresholds, as well as to a control group of young listeners with normal hearing (YNH). Of the 33 listeners who participated in the study, 13 were included in the MTBI group (mean age = 46.7 yr), 11 in the Matched group (mean age = 49 yr), and 9 in the YNH group (mean age = 20.8 yr). Speech-in-noise deficits were evaluated using subjective measures as well as monaural word (Words-in-Noise test) and sentence (Quick Speech-in-Noise test) tasks and a binaural spatial release task. Performance on these measures was compared to psychophysical tasks that evaluate monaural and binaural temporal fine-structure processing and spectral resolution. Cognitive measures of attention, processing speed, and working memory were evaluated as possible sources of differences between the MTBI and Matched groups that might contribute to speech-in-noise perception deficits. A high proportion of listeners in the MTBI group reported difficulty understanding speech in noise (84%) compared to the Matched group (9.1%), and listeners who reported difficulty were more likely to have abnormal results on objective measures of speech in noise. No significant group differences were found between the MTBI and Matched listeners on any of the measures reported, but the number of abnormal tests differed across groups.
    Regression analysis revealed that a combination of peripheral auditory and auditory processing factors contributed to monaural speech-in-noise scores, whereas the benefit of spatial separation was related to a combination of working memory and peripheral auditory factors across all listeners in the study. The results of this study are consistent with previous findings that a subset of listeners with MTBI has objective auditory deficits. Speech-in-noise performance was related to a combination of auditory and nonauditory factors, confirming the important role of audiology in MTBI rehabilitation. Further research is needed to evaluate the prevalence and causal relationship of auditory deficits following MTBI.

  11. The effect of auditory verbal imagery on signal detection in hallucination-prone individuals

    PubMed Central

    Moseley, Peter; Smailes, David; Ellison, Amanda; Fernyhough, Charles

    2016-01-01

    Cognitive models have suggested that auditory hallucinations occur when internal mental events, such as inner speech or auditory verbal imagery (AVI), are misattributed to an external source. This has been supported by numerous studies indicating that individuals who experience hallucinations tend to perform in a biased manner on tasks that require them to distinguish self-generated from non-self-generated perceptions. However, these tasks have typically been of limited relevance to inner speech models of hallucinations because they have not manipulated the AVI that participants used during the task. Here, a new paradigm was employed to investigate the interaction between imagery and perception, in which a healthy, non-clinical sample of participants was instructed to use AVI whilst completing an auditory signal detection task. It was hypothesized that AVI usage would cause participants to perform in a biased manner, falsely detecting more voices in bursts of noise. In Experiment 1, when cued to generate AVI, highly hallucination-prone participants showed a lower response bias than when performing a standard signal detection task, being more willing to report the presence of a voice in the noise. Participants not prone to hallucinations performed no differently between the two conditions. In Experiment 2, participants were not specifically instructed to use AVI but retrospectively reported how often they had engaged in AVI during the task. Highly hallucination-prone participants who retrospectively reported using imagery showed a lower response bias than did less hallucination-prone participants who also reported using AVI. Results are discussed in relation to prominent inner speech models of hallucinations. PMID:26435050
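    In signal detection tasks like this one, response bias is typically quantified as the criterion c, computed alongside sensitivity d' from hit and false-alarm rates; a lower (more negative) c means more "yes, a voice" reports. A minimal sketch; the log-linear correction for extreme rates is a common convention and an assumption here, not the paper's stated method:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return (d_prime, criterion_c) for a yes/no voice-detection task.
    Rates use a log-linear (+0.5) correction so 0% or 100% rates stay finite.
    Negative c = liberal responding (more 'yes' reports, i.e. lower bias)."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion_c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion_c
```

    Separating c from d' is what lets such studies claim a shift in willingness to report a voice rather than a change in perceptual sensitivity.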

  12. Behavioural and neuroanatomical correlates of auditory speech analysis in primary progressive aphasias.

    PubMed

    Hardy, Chris J D; Agustus, Jennifer L; Marshall, Charles R; Clark, Camilla N; Russell, Lucy L; Bond, Rebecca L; Brotherhood, Emilie V; Thomas, David L; Crutch, Sebastian J; Rohrer, Jonathan D; Warren, Jason D

    2017-07-27

    Non-verbal auditory impairment is increasingly recognised in the primary progressive aphasias (PPAs), but its relationship to speech processing and brain substrates has not been defined. Here we addressed these issues in patients representing the non-fluent variant (nfvPPA) and semantic variant (svPPA) syndromes of PPA. We studied 19 patients with PPA in relation to 19 healthy older individuals. We manipulated three key auditory parameters (temporal regularity, phonemic spectral structure, and prosodic predictability, an index of fundamental information content, or entropy) in sequences of spoken syllables. The ability of participants to process these parameters was assessed using two-alternative forced-choice tasks, and neuroanatomical associations of task performance were assessed using voxel-based morphometry of patients' brain magnetic resonance images. Relative to healthy controls, both the nfvPPA and svPPA groups had impaired processing of phonemic spectral structure and signal predictability, while the nfvPPA group additionally had impaired processing of temporal regularity in speech signals. Task performance correlated with standard disease severity and neurolinguistic measures. Across the patient cohort, performance on the temporal regularity task was associated with grey matter in the left supplementary motor area and right caudate, performance on the phoneme processing task was associated with grey matter in the left supramarginal gyrus, and performance on the prosodic predictability task was associated with grey matter in the right putamen. Our findings suggest that PPA syndromes may be underpinned by more generic deficits of auditory signal analysis, with a distributed cortico-subcortical neuroanatomical substrate extending beyond the canonical language network. This has implications for syndrome classification and biomarker development.

  13. Auditory display as feedback for a novel eye-tracking system for sterile operating room interaction.

    PubMed

    Black, David; Unger, Michael; Fischer, Nele; Kikinis, Ron; Hahn, Horst; Neumuth, Thomas; Glaser, Bernhard

    2018-01-01

    The growing number of technical systems in the operating room has increased attention to developing touchless interaction methods for sterile conditions. However, touchless interaction paradigms lack the tactile feedback found in common input devices such as mice and keyboards. We propose a novel touchless eye-tracking interaction system that uses auditory display as a feedback method for completing typical operating room tasks. The auditory display provides feedback concerning the selected input into the eye-tracking system as well as confirmation of the system response. An eye-tracking system with a novel auditory display using both earcons and parameter-mapping sonification was developed to allow touchless interaction for six typical scrub nurse tasks. An evaluation with novice participants compared the auditory display with a visual-only display with respect to reaction time and a series of subjective measures. When auditory display was used to substitute for the lost tactile feedback during eye-tracking interaction, participants exhibited reduced reaction times compared to the visual-only display. In addition, the auditory feedback led to lower subjective workload and higher usefulness and system acceptance ratings. Given the absence of tactile feedback in eye-tracking and other touchless interaction methods, auditory display is shown to be a useful and necessary addition to new interaction concepts for the sterile operating room, reducing reaction times while improving subjective measures, including usefulness, user satisfaction, and cognitive workload.
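    Parameter-mapping sonification, mentioned above, maps a continuous interaction variable onto a sound parameter such as pitch. A minimal sketch; the mapped variable (e.g. gaze dwell progress) and the frequency range are hypothetical, not taken from the system described:

```python
def parameter_to_pitch(value, vmin, vmax, f_low=220.0, f_high=880.0):
    """Parameter-mapping sonification: map a control value (e.g. gaze dwell
    progress, hypothetical here) linearly onto a pitch range in Hz.
    Out-of-range inputs are clamped to the endpoints of the range."""
    frac = (value - vmin) / (vmax - vmin)
    frac = min(1.0, max(0.0, frac))
    return f_low + frac * (f_high - f_low)
```

    In contrast to earcons (short fixed motifs signalling discrete events), this kind of continuous mapping lets the listener track an ongoing quantity by ear.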

  14. Concurrent auditory perception difficulties in older adults with right hemisphere cerebrovascular accident

    PubMed Central

    Talebi, Hossein; Moossavi, Abdollah; Faghihzadeh, Soghrat

    2014-01-01

    Background: Older adults with cerebrovascular accident (CVA) show evidence of auditory and speech perception problems. In the present study, we examined whether these problems are due to impairments of concurrent auditory segregation, the most basic level of auditory scene analysis and of auditory organization in scenes with competing sounds. Methods: Concurrent auditory segregation was assessed with the competing sentence test (CST) and the dichotic digits test (DDT) and compared in 30 male older adults (15 normal and 15 with right hemisphere CVA) in the same age range (60-75 years old). For the CST, participants were presented with a target message in one ear and a competing message in the other; the task was to listen to the target sentence and repeat it back while ignoring the competing sentence. For the DDT, the stimuli were monosyllabic digits presented dichotically, and the task was to repeat them. Results: Mean CST and DDT scores differed significantly between the CVA patients with right hemisphere damage and the normal participants (p=0.001 for the CST and p<0.0001 for the DDT). Conclusion: The present study revealed that the abnormal CST and DDT scores of participants with right hemisphere CVA could be related to concurrent segregation difficulties. These findings suggest that low-level segregation mechanisms and/or high-level attention mechanisms might contribute to the problems. PMID:25679009

  15. Auditory Cortical Plasticity Drives Training-Induced Cognitive Changes in Schizophrenia

    PubMed Central

    Dale, Corby L.; Brown, Ethan G.; Fisher, Melissa; Herman, Alexander B.; Dowling, Anne F.; Hinkley, Leighton B.; Subramaniam, Karuna; Nagarajan, Srikantan S.; Vinogradov, Sophia

    2016-01-01

    Schizophrenia is characterized by dysfunction in basic auditory processing as well as in the higher-order operations of verbal learning and executive functions. We investigated whether targeted cognitive training of auditory processing improves neural responses to speech stimuli, and how these changes relate to higher-order cognitive functions. Patients with schizophrenia performed an auditory syllable identification task during magnetoencephalography before and after 50 hours of either targeted cognitive training or a computer games control condition. Healthy comparison subjects were assessed at baseline and after a 10-week no-contact interval. Prior to training, patients (N = 34) showed a reduced M100 response in primary auditory cortex relative to healthy participants (N = 13). At reassessment, only the targeted cognitive training patient group (N = 18) exhibited increased M100 responses. Additionally, this group showed increased induced high gamma band activity within left dorsolateral prefrontal cortex immediately after stimulus presentation, and later in bilateral temporal cortices. Training-related changes in neural activity correlated with changes in executive function scores but not with verbal learning and memory. These data suggest that computerized cognitive training that targets auditory and verbal learning operations enhances both sensory responses in auditory cortex and engagement of prefrontal regions, as indexed during an auditory processing task with low demands on working memory. This neural circuit enhancement is in turn associated with better executive function but not verbal memory. PMID:26152668

  16. Learning and transfer of category knowledge in an indirect categorization task.

    PubMed

    Helie, Sebastien; Ashby, F Gregory

    2012-05-01

    Knowledge representations acquired during category learning experiments are 'tuned' to the task goal. A useful paradigm for studying category representations is indirect category learning. In the present article, we propose a new indirect categorization task called the "same"-"different" categorization task: a regular same-different task in which participants are asked about the stimuli's category membership rather than their identity. Experiment 1 explores the possibility of indirectly learning rule-based and information-integration category structures using the new paradigm. The results suggest that there is little learning about the category structures in an indirect categorization task unless the categories can be separated by a one-dimensional rule. Experiment 2 explores whether a category representation learned indirectly can be used in a direct classification task (and vice versa). The results suggest that categorical knowledge previously acquired during a direct classification task can be expressed in the same-different categorization task only when the categories can be separated by a rule that is easily verbalized. Implications of these results for categorization research are discussed.

  17. Is the Role of External Feedback in Auditory Skill Learning Age Dependent?

    PubMed

    Zaltz, Yael; Roth, Daphne Ari-Even; Kishon-Rabin, Liat

    2017-12-20

    The purpose of this study was to investigate the role of external feedback in the auditory perceptual learning of school-age children as compared with that of adults. Forty-eight children (7-9 years of age) and 64 adults (20-35 years of age) completed a training session on an auditory frequency discrimination (difference limen for frequency, DLF) task, with external feedback (EF) provided for half of them. The data supported the following findings: (a) children learned the DLF task only when EF was provided; (b) the ability of the children to benefit from EF was associated with better cognitive skills; (c) adults showed significant learning whether or not EF was provided; and (d) in children, within-session learning following training depended on the provision of feedback, whereas between-session learning occurred irrespective of feedback. EF was thus beneficial for the auditory skill learning of 7- to 9-year-old children but not of young adults. The data support the supervised Hebbian model of auditory skill learning, which posits bottom-up internal neural feedback combined with top-down monitoring. Where executive functions are immature, EF enhances auditory skill learning. This study has implications for the design of training protocols in the auditory modality for different age groups, as well as for special populations.
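    Difference limens for frequency are commonly measured with a transformed adaptive staircase. The abstract does not specify the procedure, so the 2-down/1-up rule, step factor, starting value, and simulated listener below are illustrative assumptions rather than this study's method:

```python
def run_staircase(respond, start_delta=50.0, factor=2.0, n_trials=60, floor=0.5):
    """2-down/1-up adaptive staircase (tracks the ~70.7%-correct point).
    respond(delta_hz) -> True if the listener detects the frequency difference.
    Returns every delta presented, in order; a DLF estimate is usually taken
    as the mean of the last few reversal points."""
    delta, streak, presented = start_delta, 0, []
    for _ in range(n_trials):
        presented.append(delta)
        if respond(delta):
            streak += 1
            if streak == 2:                 # two in a row correct -> harder
                delta, streak = max(floor, delta / factor), 0
        else:                               # any miss -> easier
            delta, streak = delta * factor, 0
    return presented

# Hypothetical deterministic listener who hears any difference of 5 Hz or more.
deltas = run_staircase(lambda d: d >= 5.0)
```

    With this simulated listener the track quickly settles into oscillation around the 5-Hz "threshold", which is exactly the behaviour a staircase exploits.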

  18. Performance of normal adults and children on central auditory diagnostic tests and their corresponding visual analogs.

    PubMed

    Bellis, Teri James; Ross, Jody

    2011-09-01

    It has been suggested that, in order to validate a diagnosis of (C)APD (central auditory processing disorder), testing using direct cross-modal analogs should be performed to demonstrate that deficits exist solely or primarily in the auditory modality (McFarland and Cacace, 1995; Cacace and McFarland, 2005). This modality-specific viewpoint is controversial and not universally accepted (American Speech-Language-Hearing Association [ASHA], 2005; Musiek et al, 2005). Further, no such analogs have been developed to date, and neither the feasibility of such testing in normally functioning individuals nor the concurrent validity of cross-modal analogs has been established. The purpose of this study was to investigate the feasibility of cross-modal testing by examining the performance of normal adults and children on four tests of central auditory function and their corresponding visual analogs. In addition, this study investigated the degree to which concurrent validity of auditory and visual versions of these tests could be demonstrated. An experimental repeated measures design was employed. Participants consisted of two groups (adults, n=10; children, n=10) with normal and symmetrical hearing sensitivity, normal or corrected-to-normal visual acuity, and no family or personal history of auditory/otologic, language, learning, neurologic, or related disorders. Visual analogs of four tests in common clinical use for the diagnosis of (C)APD were developed (Dichotic Digits [Musiek, 1983]; Frequency Patterns [Pinheiro and Ptacek, 1971]; Duration Patterns [Pinheiro and Musiek, 1985]; and the Random Gap Detection Test [RGDT; Keith, 2000]). Participants underwent two 1 hr test sessions separated by at least 1 wk. Order of sessions (auditory, visual) and tests within each session were counterbalanced across participants. 
    ANOVAs (analyses of variance) were used to examine effects of group, modality, and laterality (for the Dichotic/Dichoptic Digits tests) or response condition (for the auditory and visual Frequency Patterns and Duration Patterns tests). Pearson product-moment correlations were used to investigate relationships between auditory and visual performance. Adults performed significantly better than children on the Dichotic/Dichoptic Digits tests. Results also revealed a significant effect of modality, with auditory better than visual, and a significant modality×laterality interaction, with a right-ear advantage seen for the auditory task and a left-visual-field advantage seen for the visual task. For the Frequency Patterns test and its visual analog, results revealed a significant modality×response condition interaction, with humming better than labeling for the auditory version but the reverse for the visual version. For Duration Patterns testing, visual performance was significantly poorer than auditory performance. Due to poor test-retest reliability and ceiling effects for the auditory and visual gap-detection tasks, analyses could not be performed. No cross-modal correlations were observed for any test. Results demonstrated that cross-modal testing is at least feasible using easily accessible computer hardware and software. The lack of any cross-modal correlations suggests independent processing mechanisms for the auditory and visual versions of each task. Examination of performance in individuals with central auditory and pan-sensory disorders is needed to determine the utility of cross-modal analogs in the differential diagnosis of (C)APD.
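    The cross-modal relationships above were tested with Pearson product-moment correlations, which can be computed directly from paired score lists. A minimal self-contained implementation (the example scores in the tests are synthetic):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

    An r near zero for auditory-vs-visual scores, as reported here, is what motivates the conclusion of modality-independent processing mechanisms.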

  19. Dysfunctional information processing during an auditory event-related potential task in individuals with Internet gaming disorder

    PubMed Central

    Park, M; Choi, J-S; Park, S M; Lee, J-Y; Jung, H Y; Sohn, B K; Kim, S N; Kim, D J; Kwon, J S

    2016-01-01

    Internet gaming disorder (IGD), which leads to serious impairments in cognitive, psychological and social functions, has gradually been increasing. However, very few studies conducted to date have addressed the event-related potential (ERP) patterns associated with IGD. Identifying the neurobiological characteristics of IGD is important for elucidating the pathophysiology of this condition. P300 is a useful ERP component for investigating electrophysiological features of the brain. The aims of the present study were to investigate differences between patients with IGD and healthy controls (HCs) with regard to the P300 component of the ERP during an auditory oddball task, and to examine the relationship of this component to the severity of IGD symptoms, in order to identify the relevant neurophysiological features of IGD. Twenty-six patients diagnosed with IGD and 23 age-, sex-, education- and intelligence-quotient-matched HCs participated in this study. During the auditory oddball task, participants had to respond to rare, deviant tones presented in a sequence of frequent, standard tones. The IGD group exhibited a significant reduction in P300 amplitudes in response to deviant tones compared with the HC group at the midline centro-parietal electrode regions. We also found a negative correlation between the severity of IGD and P300 amplitudes. The reduced amplitude of the P300 component in an auditory oddball task may reflect dysfunction in auditory information processing and cognitive capabilities in IGD. These findings suggest that reduced P300 amplitudes may be a candidate neurobiological marker for IGD. PMID:26812042
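    P300 amplitude measures like those reported here are typically taken from the trial-averaged ERP within a post-stimulus search window. A minimal sketch; the window bounds and simple peak-picking rule are illustrative assumptions, and a real pipeline would add filtering, artifact rejection, and per-condition averaging:

```python
import statistics

def p300_amplitude(epochs, times, window=(0.25, 0.50)):
    """Grand-average baseline-corrected epochs (trials x samples, microvolts),
    then return the peak positive deflection inside the P300 search window (s).
    `times` gives the sample times in seconds, one per sample."""
    erp = [statistics.fmean(sample) for sample in zip(*epochs)]
    in_window = [v for t, v in zip(times, erp) if window[0] <= t <= window[1]]
    return max(in_window)

# Toy data: two trials with a positive deflection at 300 ms post-stimulus.
times = [i / 100 for i in range(100)]
trial_a = [0.0] * 100
trial_a[30] = 6.0
trial_b = [0.0] * 100
trial_b[30] = 4.0
amp = p300_amplitude([trial_a, trial_b], times)
```

    Averaging across trials before peak-picking is what separates the stimulus-locked P300 from trial-to-trial EEG noise.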

  20. Two-Stage Processing of Sounds Explains Behavioral Performance Variations due to Changes in Stimulus Contrast and Selective Attention: An MEG Study

    PubMed Central

    Kauramäki, Jaakko; Jääskeläinen, Iiro P.; Hänninen, Jarno L.; Auranen, Toni; Nummenmaa, Aapo; Lampinen, Jouko; Sams, Mikko

    2012-01-01

    Selectively attending to task-relevant sounds whilst ignoring background noise is one of the most amazing feats performed by the human brain. Here, we studied the underlying neural mechanisms by recording magnetoencephalographic (MEG) responses of 14 healthy human subjects while they performed a near-threshold auditory discrimination task vs. a visual control task of similar difficulty. The auditory stimuli consisted of notch-filtered continuous noise maskers and of 1020-Hz target tones that occasionally replaced the 1000-Hz standard tones of 300-ms duration embedded at the center of the notches, the widths of which were parametrically varied. As a control for masker effects, tone-evoked responses were additionally recorded without the masker sound. Selective attention to tones significantly increased the amplitude of the onset M100 response at 100 ms to the standard tones in the presence of the masker sounds, especially with notches narrower than the critical band. Further, attention modulated the sustained response most clearly in the 300-400 ms range from sound onset, at narrower notches than for the M100, thus selectively reducing the masker-induced suppression of the tone-evoked response. Our results show evidence of a multiple-stage filtering mechanism for sensory input in the human auditory cortex: 1) one at early (100 ms) latencies bilaterally in posterior parts of the secondary auditory areas, and 2) adaptive filtering of attended sounds from the task-irrelevant background masker at longer latencies (300 ms) in more medial auditory cortical regions, predominantly in the left hemisphere, enhancing the processing of near-threshold sounds. PMID:23071654

  2. Uncovering beat deafness: detecting rhythm disorders with synchronized finger tapping and perceptual timing tasks.

    PubMed

    Dalla Bella, Simone; Sowiński, Jakub

    2015-03-16

    A set of behavioral tasks for assessing perceptual and sensorimotor timing abilities in the general population (i.e., non-musicians) is presented here with the goal of uncovering rhythm disorders such as beat deafness. Beat deafness is characterized by poor performance in perceiving durations in auditory rhythmic patterns or in synchronizing movement with auditory rhythms (e.g., with musical beats). The tasks include synchronization of finger tapping to the beat of simple and complex auditory stimuli and detection of rhythmic irregularities (anisochrony detection task) embedded in the same stimuli. These tests, which are easy to administer, assess both perceptual and sensorimotor timing abilities under different conditions (e.g., beat rates and types of auditory material) and are based on the same auditory stimuli, ranging from a simple metronome to a complex musical excerpt. The analysis of synchronized tapping data is performed with circular statistics, which provide reliable measures of synchronization accuracy (i.e., the difference between the timing of the taps and the timing of the pacing stimuli) and consistency. Circular statistics on tapping data are particularly well suited for detecting individual differences in the general population. Synchronized tapping and anisochrony detection are sensitive measures for identifying profiles of rhythm disorders and have been used with success to uncover cases of poor synchronization with spared perceptual timing. This systematic assessment of perceptual and sensorimotor timing can be extended to populations of patients with brain damage, neurodegenerative diseases (e.g., Parkinson's disease), and developmental disorders (e.g., attention deficit hyperactivity disorder).
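    The circular statistics described above can be sketched directly: each tap time is mapped to an angle on the circle defined by the inter-onset interval (IOI), and the mean resultant vector yields accuracy (its angle, read back as a signed asynchrony) and consistency (its length R). The tap times and IOI below are illustrative, not data from the tasks:

```python
import cmath
import math

def circular_stats(taps, ioi):
    """Circular accuracy and consistency of tap times against a steady beat.
    Each tap time (s) becomes an angle on the circle defined by the IOI;
    the mean resultant vector gives accuracy (its angle, converted back to a
    signed asynchrony in seconds) and consistency (its length R:
    0 = random tapping, 1 = perfectly steady tapping)."""
    angles = [2 * math.pi * (t % ioi) / ioi for t in taps]
    vector = sum(cmath.exp(1j * a) for a in angles) / len(angles)
    mean_asynchrony = cmath.phase(vector) / (2 * math.pi) * ioi
    consistency = abs(vector)
    return mean_asynchrony, consistency
```

    Working on the circle is what makes these measures robust to missed beats and wrap-around: a tap 10 ms late and one 10 ms early average correctly, where a plain mean of asynchronies near the cycle boundary would not.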

  3. Cognitive mechanisms associated with auditory sensory gating

    PubMed Central

    Jones, L.A.; Hills, P.J.; Dick, K.M.; Jones, S.P.; Bright, P.

    2016-01-01

    Sensory gating is a neurophysiological measure of inhibition that is characterised by a reduction in the P50 event-related potential to a repeated identical stimulus. The objective of this work was to determine the cognitive mechanisms that relate to the neurological phenomenon of auditory sensory gating. Sixty participants underwent a battery of 10 cognitive tasks, including qualitatively different measures of attentional inhibition, working memory, and fluid intelligence. Participants additionally completed a paired-stimulus paradigm as a measure of auditory sensory gating. A correlational analysis revealed that several tasks correlated significantly with sensory gating. However, once fluid intelligence and working memory were accounted for, only a measure of latent inhibition and accuracy scores on the continuous performance task showed significant sensitivity to sensory gating. We conclude that sensory gating reflects the identification of goal-irrelevant information at the encoding (input) stage and the subsequent ability to selectively attend to goal-relevant information based on that previous identification. PMID:26716891

  4. Dynamic sound localization in cats

    PubMed Central

    Ruhland, Janet L.; Jones, Amy E.

    2015-01-01

    Sound localization in cats and humans relies on head-centered acoustic cues. Studies have shown that humans are able to localize sounds during rapid head movements that are directed toward the target or other objects of interest. We studied whether cats are able to utilize similar dynamic acoustic cues to localize acoustic targets delivered during rapid eye-head gaze shifts. We trained cats with visual-auditory two-step tasks in which we presented a brief sound burst during saccadic eye-head gaze shifts toward a prior visual target. No consistent or significant differences in accuracy or precision were found between this dynamic task (2-step saccade) and the comparable static task (single saccade when the head is stable) in either horizontal or vertical direction. Cats appear to be able to process dynamic auditory cues and execute complex motor adjustments to accurately localize auditory targets during rapid eye-head gaze shifts. PMID:26063772

  5. States of Awareness I: Subliminal Perception Relationship to Situational Awareness

    DTIC Science & Technology

    1993-05-01

    one experiment, the visual detection threshold was raised by simultaneous auditory stimulation involving subliminal emotional words. Similar results...an assessment was made of the effects of both subliminal and supraliminal auditory accessory stimulation (white noise) on a visual detection task... stimulation investigation. Both subliminal and supraliminal auditory stimulation were employed to evaluate possible differential effects in visual illusions

  6. Saturation of auditory short-term memory causes a plateau in the sustained anterior negativity event-related potential.

    PubMed

    Alunni-Menichini, Kristelle; Guimond, Synthia; Bermudez, Patrick; Nolden, Sophie; Lefebvre, Christine; Jolicoeur, Pierre

    2014-12-10

    The maintenance of information in auditory short-term memory (ASTM) is accompanied by a sustained anterior negativity (SAN) in the event-related potential measured during the retention interval of simple auditory memory tasks. Previous work on ASTM showed that the amplitude of the SAN increased in negativity as the number of maintained items increased. The aim of the current study was to measure the SAN and observe its behavior beyond the point of saturation of auditory short-term memory. We used atonal pure tones in sequences of 2, 4, 6, or 8 items. Our results showed that the amplitude of the SAN increased in negativity from 2 to 4 items and then levelled off from 4 to 8 items. Behavioral results suggested that the average span in the task was slightly below 3, which was consistent with the observed plateau in the electrophysiological results. Furthermore, the amplitude of the SAN predicted individual differences in auditory memory capacity. The results support the hypothesis that the SAN is an electrophysiological index of brain activity specifically related to the maintenance of auditory information in ASTM. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Interconnected growing self-organizing maps for auditory and semantic acquisition modeling

    PubMed Central

    Cao, Mengxue; Li, Aijun; Fang, Qiang; Kaufmann, Emily; Kröger, Bernd J.

    2014-01-01

    Based on the incremental nature of knowledge acquisition, in this study we propose a growing self-organizing neural network approach for modeling the acquisition of auditory and semantic categories. We introduce an Interconnected Growing Self-Organizing Maps (I-GSOM) algorithm, which takes associations between auditory information and semantic information into consideration. Direct phonetic–semantic association is simulated in order to model language acquisition in its early phases, such as the babbling and imitation stages, in which no phonological representations exist. Based on the I-GSOM algorithm, we conducted experiments using paired acoustic and semantic training data. We use a cyclical reinforcing and reviewing training procedure to model the teaching and learning process between children and their communication partners. A reinforcing-by-link training procedure and a link-forgetting procedure are introduced to model the acquisition of associative relations between auditory and semantic information. Experimental results indicate that (1) I-GSOM learns the auditory and semantic categories presented in the training data well; (2) clear auditory and semantic boundaries can be found in the network representation; (3) cyclical reinforcing and reviewing training leads to a detailed categorization as well as to a detailed clustering, while keeping the clusters that have already been learned and the network structure that has already been developed stable; and (4) reinforcing-by-link training leads to well-perceived auditory–semantic associations. Our I-GSOM model suggests that it is important to associate auditory information with semantic information during language acquisition. Despite its high level of abstraction, our I-GSOM approach can be interpreted as a biologically-inspired neurocomputational model. PMID:24688478
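    The growth and cross-map linking that distinguish I-GSOM are not reproduced here; the competitive-learning update at the core of any self-organizing map, on which such models build, can be sketched as follows (a minimal 1-D SOM; all names and parameter values are our own illustrative choices):

```python
import math
import random

def train_som(data, n_units, dim, epochs=50, lr0=0.5, sigma0=None, seed=0):
    # Minimal 1-D self-organizing map: units compete for each input, and
    # the winner plus its grid neighbors move toward the input vector.
    rnd = random.Random(seed)
    if sigma0 is None:
        sigma0 = n_units / 2
    w = [[rnd.random() for _ in range(dim)] for _ in range(n_units)]
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)            # learning rate decays
        sigma = sigma0 * (1 - t / epochs) + 1e-9  # neighborhood shrinks
        for x in data:
            # Best-matching unit: closest weight vector to the input
            bmu = min(range(n_units),
                      key=lambda i: sum((w[i][k] - x[k]) ** 2
                                        for k in range(dim)))
            for i in range(n_units):
                # Gaussian neighborhood on the 1-D unit grid
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                for k in range(dim):
                    w[i][k] += lr * h * (x[k] - w[i][k])
    return w
```

    After training on clustered input, the weight vectors spread across the data so that nearby units encode similar inputs, which is the property that lets category boundaries emerge in the map representation.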

  8. Encoding tasks dissociate the effects of divided attention on category-cued recall and category-exemplar generation.

    PubMed

    Parker, Andrew; Dagnall, Neil; Munley, Gary

    2012-01-01

    The combined effects of encoding tasks and divided attention upon category-exemplar generation and category-cued recall were examined. Participants were presented with pairs of words each comprising a category name and potential example of that category. They were then asked to indicate either (i) their liking for both of the words or (ii) if the exemplar was a member of the category. It was found that divided attention reduced performance on the category-cued recall task under both encoding conditions. However, performance on the category-exemplar generation task remained invariant across the attention manipulation following the category judgment task. This provides further evidence that the processes underlying performance on conceptual explicit and implicit memory tasks can be dissociated, and that the intentional formation of category-exemplar associations attenuates the effects of divided attention on category-exemplar generation.

  9. Tonic effects of the dopaminergic ventral midbrain on the auditory cortex of awake macaque monkeys.

    PubMed

    Huang, Ying; Mylius, Judith; Scheich, Henning; Brosch, Michael

    2016-03-01

    This study shows that ongoing electrical stimulation of the dopaminergic ventral midbrain can modify neuronal activity in the auditory cortex of awake primates for several seconds. This was reflected in a decrease of the spontaneous firing and in a bidirectional modification of the power of auditory evoked potentials. We consider that both effects are due to an increase in the dopamine tone in auditory cortex induced by the electrical stimulation. Thus, the dopaminergic ventral midbrain may contribute to the tonic activity in auditory cortex that has been proposed to be involved in associating events of auditory tasks (Brosch et al. Hear Res 271:66-73, 2011) and may modulate the signal-to-noise ratio of the responses to auditory stimuli.

  10. Long-lasting attentional influence of negative and taboo words in an auditory variant of the emotional Stroop task.

    PubMed

    Bertels, Julie; Kolinsky, Régine; Pietrons, Elise; Morais, José

    2011-02-01

    Using an auditory adaptation of the emotional and taboo Stroop tasks, the authors compared the effects of negative and taboo spoken words in mixed and blocked designs. Both types of words elicited carryover effects with mixed presentations and interference with blocked presentations, suggesting similar long-lasting attentional effects. Both were also relatively resilient to the long-lasting influence of the preceding emotional word. Hence, contrary to what has been assumed (Schmidt & Saari, 2007), negative and taboo words do not seem to differ in terms of the temporal dynamics of the interdimensional shifting, at least in the auditory modality. PsycINFO Database Record (c) 2011 APA, all rights reserved.

  11. Silent music reading: auditory imagery and visuotonal modality transfer in singers and non-singers.

    PubMed

    Hoppe, Christian; Splittstößer, Christoph; Fliessbach, Klaus; Trautner, Peter; Elger, Christian E; Weber, Bernd

    2014-11-01

    In daily life, responses are often facilitated by anticipatory imagery of expected targets which are announced by associated stimuli from different sensory modalities. Silent music reading represents an intriguing case of visuotonal modality transfer in working memory as it induces highly defined auditory imagery on the basis of presented visuospatial information (i.e. musical notes). Using functional MRI and a delayed sequence matching-to-sample paradigm, we compared brain activations during retention intervals (10s) of visual (VV) or tonal (TT) unimodal maintenance versus visuospatial-to-tonal modality transfer (VT) tasks. Visual or tonal sequences were comprised of six elements, white squares or tones, which were low, middle, or high regarding vertical screen position or pitch, respectively (presentation duration: 1.5s). For the cross-modal condition (VT, session 3), the visuospatial elements from condition VV (session 1) were re-defined as low, middle or high "notes" indicating low, middle or high tones from condition TT (session 2), respectively, and subjects had to match tonal sequences (probe) to previously presented note sequences. Tasks alternately had low or high cognitive load. To evaluate possible effects of music reading expertise, 15 singers and 15 non-musicians were included. Scanner task performance was excellent in both groups. Despite identity of applied visuospatial stimuli, visuotonal modality transfer versus visual maintenance (VT>VV) induced "inhibition" of visual brain areas and activation of primary and higher auditory brain areas which exceeded auditory activation elicited by tonal stimulation (VT>TT). This transfer-related visual-to-auditory activation shift occurred in both groups but was more pronounced in experts. Frontoparietal areas were activated by higher cognitive load but not by modality transfer. 
The auditory brain showed a potential to anticipate expected auditory target stimuli on the basis of non-auditory information and sensory brain activation rather mirrored expectation than stimulation. Silent music reading probably relies on these basic neurocognitive mechanisms. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. White matter microstructural properties correlate with sensorimotor synchronization abilities.

    PubMed

    Blecher, Tal; Tal, Idan; Ben-Shachar, Michal

    2016-09-01

    Sensorimotor synchronization (SMS) to an external auditory rhythm is a developed ability in humans, particularly evident in dancing and singing. This ability is typically measured in the lab via a simple task of finger tapping to an auditory beat. While simplistic, there is some evidence that poor performance on this task could be related to impaired phonological and reading abilities in children. Auditory-motor synchronization is hypothesized to rely on a tight coupling between auditory and motor neural systems, but the specific pathways that mediate this coupling have not been identified yet. In this study, we test this hypothesis and examine the contribution of fronto-temporal and callosal connections to specific measures of rhythmic synchronization. Twenty participants went through SMS and diffusion magnetic resonance imaging (dMRI) measurements. We quantified the mean asynchrony between an auditory beat and participants' finger taps, as well as the time to resynchronize (TTR) with an altered meter, and examined the correlations between these behavioral measures and diffusivity in a small set of predefined pathways. We found significant correlations between asynchrony and fractional anisotropy (FA) in the left (but not right) arcuate fasciculus and in the temporal segment of the corpus callosum. On the other hand, TTR correlated with FA in the precentral segment of the callosum. To our knowledge, this is the first demonstration that relates these particular white matter tracts with performance on an auditory-motor rhythmic synchronization task. We propose that left fronto-temporal and temporal-callosal fibers are involved in prediction and constant comparison between auditory inputs and motor commands, while inter-hemispheric connections between the motor/premotor cortices contribute to successful resynchronization of motor responses with a new external rhythm, perhaps via inhibition of tapping to the previous rhythm. 
Our results indicate that auditory-motor synchronization skills are associated with anatomical pathways that have been previously related to phonological awareness, thus offering a possible anatomical basis for the behavioral covariance between these abilities. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. The performance of stroke survivors in turning-while-walking while carrying out a concurrent cognitive task compared with controls.

    PubMed

    Chan, Wing-Nga; Tsang, William Wai-Nam

    2017-01-01

    Turning-while-walking is one of the commonest causes of falls in stroke survivors. It involves cognitive processing and may be challenging when performed concurrently with a cognitive task. Previous studies of dual-tasking involving turning-while-walking in stroke survivors show that the performance of physical tasks is compromised. However, those studies did not examine how stroke survivors respond under dual-task conditions when no task preference is specified, or the effect of dual-tasking on performance of the cognitive task. The aims of this study were, first, to compare single-task and dual-task performance in stroke survivors and, second, to compare the performance of stroke survivors with that of non-stroke controls. Fifty-nine stroke survivors and 45 controls were assessed with an auditory Stroop test, a turning-while-walking test, and a combination of the two single tasks. The outcome of the cognitive task was measured by the reaction time and accuracy of the task. The physical task was evaluated by measuring the turning duration, number of steps to turn, and time to complete the turning-while-walking test. Stroke survivors showed a significantly reduced accuracy in the auditory Stroop test when dual-tasking, but there was no change in the reaction time. Their performance in the turning-while-walking task was similar under both single-tasking and dual-tasking conditions. Additionally, stroke survivors demonstrated a significantly longer reaction time and lower accuracy than the controls both when single-tasking and when dual-tasking. They took longer to turn, with more steps, and needed more time to complete the turning-while-walking task in both tasking conditions. The results show that stroke survivors with high mobility function performed the auditory Stroop test less accurately while preserving simultaneous turning-while-walking performance. They also demonstrated poorer performance in both single-tasking and dual-tasking as compared with controls.

  14. Modeling the Development of Audiovisual Cue Integration in Speech Perception

    PubMed Central

    Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.

    2017-01-01

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558
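    As a toy illustration of the distributional-learning idea behind the paper's GMM simulations (not the authors' model, which learns cue weights over multiple auditory and visual cues), a two-component one-dimensional mixture can be fit with a few EM iterations, each component playing the role of a phonological category:

```python
import math

def em_gmm_1d(data, iters=100):
    # Two-component 1-D Gaussian mixture fit by expectation-maximization.
    # Components discover the category structure from the raw distribution,
    # with no labels: the core of the statistical-learning account.
    mu = [min(data), max(data)]   # spread the initial means apart
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each data point
        resp = []
        for x in data:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate parameters from responsibility-weighted data
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = (sum(r[k] * (x - mu[k]) ** 2
                          for r, x in zip(resp, data)) / nk) + 1e-6
            pi[k] = nk / len(data)
    return mu, var, pi
```

    On well-separated input distributions (e.g., voice-onset times of a voiced/voiceless contrast), the component means converge to the category centers without any labeled data.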

  16. EEG alpha spindles and prolonged brake reaction times during auditory distraction in an on-road driving study.

    PubMed

    Sonnleitner, Andreas; Treder, Matthias Sebastian; Simon, Michael; Willmann, Sven; Ewald, Arne; Buchner, Axel; Schrauf, Michael

    2014-01-01

    Driver distraction is responsible for a substantial number of traffic accidents. This paper describes the impact of an auditory secondary task on drivers' mental states during a primary driving task. N=20 participants performed the test procedure in a car-following task with repeated forced braking on a non-public test track. Performance measures (provoked reaction time to brake lights) and brain activity (EEG alpha spindles) were analyzed to describe distracted drivers. Further, a classification approach was used to investigate whether alpha spindles can predict drivers' mental states. Results show that reaction times and alpha spindle rate increased with time-on-task. Moreover, brake reaction times and alpha spindle rate were significantly higher while driving with the auditory secondary task as opposed to driving only. In single-trial classification, a combination of spindle parameters yielded a median classification error of about 8% in discriminating distracted from alert driving. Reduced driving performance (i.e., prolonged brake reaction times) during increased cognitive load is assumed to be indicated by EEG alpha spindles, enabling the quantification of driver distraction in experiments on public roads without verbally assessing the drivers' mental states. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Perceptual Plasticity for Auditory Object Recognition

    PubMed Central

    Heald, Shannon L. M.; Van Hedger, Stephen C.; Nusbaum, Howard C.

    2017-01-01

    In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. 
To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples of perceptual categories that are thought to be highly stable. This framework suggests that the process of auditory recognition cannot be divorced from the short-term context in which an auditory object is presented. Implications for auditory category acquisition and extant models of auditory perception, both cognitive and neural, are discussed. PMID:28588524

  18. Developmental hearing loss impedes auditory task learning and performance in gerbils

    PubMed Central

    von Trapp, Gardiner; Aloni, Ishita; Young, Stephen; Semple, Malcolm N.; Sanes, Dan H.

    2016-01-01

    The consequences of developmental hearing loss have been reported to include both sensory and cognitive deficits. To investigate these issues in a non-human model, auditory learning and asymptotic psychometric performance were compared between normal hearing (NH) adult gerbils and those reared with conductive hearing loss (CHL). At postnatal day 10, before ear canal opening, gerbil pups underwent bilateral malleus removal to induce a permanent CHL. Both CHL and control animals were trained to approach a water spout upon presentation of a target (Go stimuli), and withhold for foils (Nogo stimuli). To assess the rate of task acquisition and asymptotic performance, animals were tested on an amplitude modulation (AM) rate discrimination task. Behavioral performance was calculated using a signal detection theory framework. Animals reared with developmental CHL displayed a slower rate of task acquisition for the AM discrimination task. Slower acquisition was explained by an impaired ability to generalize to newly introduced stimuli, as compared to controls. Measurement of discrimination thresholds across consecutive testing blocks revealed that CHL animals required a greater number of testing sessions to reach asymptotic threshold values, as compared to controls. However, with sufficient training, CHL animals approached control performance. These results indicate that a sensory impediment can delay auditory learning, and increase the risk of poor performance on a temporal task. PMID:27746215
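    The signal detection theory framework mentioned above typically summarizes Go/Nogo performance as d', the difference between the z-transformed hit and false-alarm rates. A minimal version (with a log-linear correction for extreme rates; the abstract does not state the exact formula used) might be:

```python
from statistics import NormalDist

def d_prime(hits, n_go, false_alarms, n_nogo):
    # Log-linear correction keeps rates away from 0 and 1, where the
    # inverse normal CDF would return infinity
    h = (hits + 0.5) / (n_go + 1)
    f = (false_alarms + 0.5) / (n_nogo + 1)
    z = NormalDist().inv_cdf
    # Sensitivity: separation between signal and noise distributions
    return z(h) - z(f)
```

    A d' near 0 indicates chance performance (hits no more likely than false alarms), while values above roughly 1 indicate reliable discrimination; thresholds are then read off as the stimulus level yielding a criterion d'.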

  19. Task-dependent modulation of regions in the left temporal cortex during auditory sentence comprehension.

    PubMed

    Zhang, Linjun; Yue, Qiuhai; Zhang, Yang; Shu, Hua; Li, Ping

    2015-01-01

    Numerous studies have revealed the essential role of the left lateral temporal cortex in auditory sentence comprehension along with evidence of the functional specialization of the anterior and posterior temporal sub-areas. However, it is unclear whether task demands (e.g., active vs. passive listening) modulate the functional specificity of these sub-areas. In the present functional magnetic resonance imaging (fMRI) study, we addressed this issue by applying both independent component analysis (ICA) and general linear model (GLM) methods. Consistent with previous studies, intelligible sentences elicited greater activity in the left lateral temporal cortex relative to unintelligible sentences. Moreover, responses to intelligibility in the sub-regions were differentially modulated by task demands. While the overall activation patterns of the anterior and posterior superior temporal sulcus and middle temporal gyrus (STS/MTG) were equivalent during both passive and active tasks, a middle portion of the STS/MTG was found to be selectively activated only during the active task under a refined analysis of sub-regional contributions. Our results not only confirm the critical role of the left lateral temporal cortex in auditory sentence comprehension but further demonstrate that task demands modulate functional specialization of the anterior-middle-posterior temporal sub-areas. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  20. Audiovisual training is better than auditory-only training for auditory-only speech-in-noise identification.

    PubMed

    Lidestam, Björn; Moradi, Shahram; Pettersson, Rasmus; Ricklefs, Theodor

    2014-08-01

    The effects of audiovisual versus auditory training for speech-in-noise identification were examined in 60 young participants. The training conditions were audiovisual training, auditory-only training, and no training (n = 20 each). In the training groups, gated consonants and words were presented at 0 dB signal-to-noise ratio; stimuli were either audiovisual or auditory-only. The no-training group watched a movie clip without performing a speech identification task. Speech-in-noise identification was measured before and after the training (or control activity). Results showed that only audiovisual training improved speech-in-noise identification, demonstrating superiority over auditory-only training.
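    A 0 dB signal-to-noise ratio means the speech and noise have equal power. As an illustration of this kind of stimulus preparation (not the authors' code; the function name is ours), noise can be rescaled to achieve a target SNR before mixing:

```python
import math

def mix_at_snr(speech, noise, snr_db):
    # Scale the noise so that speech power / noise power equals snr_db
    ps = sum(s * s for s in speech) / len(speech)   # mean speech power
    pn = sum(n * n for n in noise) / len(noise)     # mean noise power
    target_pn = ps / (10 ** (snr_db / 10))          # desired noise power
    g = math.sqrt(target_pn / pn)                   # amplitude gain
    return [s + g * n for s, n in zip(speech, noise)]
```

    At snr_db=0 the gain makes the two signals equally powerful, so neither dominates the mixture.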

  1. Effect of Water Immersion on Dual-task Performance: Implications for Aquatic Therapy.

    PubMed

    Schaefer, Sydney Y; Louder, Talin J; Foster, Shayla; Bressel, Eadric

    2016-09-01

    Much is known about cardiovascular and biomechanical responses to exercise during water immersion, yet an understanding of the higher-order neural responses to water immersion is unclear. The purpose of this study was to compare cognitive and motor performance between land and water environments using a dual-task paradigm, which served as an indirect measure of cortical processing. A quasi-experimental crossover research design was used. Twenty-two healthy participants (age = 24.3 ± 5.24 years) and a single-case patient (age = 73) with mild cognitive impairment performed a cognitive (auditory vigilance) and motor (standing balance) task separately (single-task condition) and simultaneously (dual-task condition) on land and in chest-deep water. Listening errors from the auditory vigilance task and centre of pressure (CoP) area for the balance task measured cognitive and motor performance, respectively. Listening errors for the single-task and dual-task conditions were 42% and 45% lower for the water than land condition, respectively (effect size [ES] = 0.38 and 0.55). CoP area for the single-task and dual-task conditions, however, were 115% and 164% lower on land than in water, respectively, and were lower (≈8-33%) when balancing concurrently with the auditory vigilance task compared with balancing alone, regardless of environment (ES = 0.23-1.7). This trend was consistent for the single-case patient. Participants tended to make fewer 'cognitive' errors while immersed chest-deep in water than on land. These same participants also tended to display less postural sway under dual-task conditions, but more in water than on land. Copyright © 2015 John Wiley & Sons, Ltd.
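    The effect sizes (ES) reported above are standardized mean differences. One common formulation, Cohen's d with a pooled standard deviation (the abstract does not state which variant was used), is:

```python
import math

def cohens_d(group1, group2):
    # Standardized mean difference: (m1 - m2) / pooled SD
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)  # sample variances
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd
```

    By rule of thumb, |d| around 0.2, 0.5, and 0.8 are read as small, medium, and large effects, which puts the reported range of 0.23-1.7 between small and very large.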

  2. Towards an understanding of the mechanisms of weak central coherence effects: experiments in visual configural learning and auditory perception.

    PubMed

    Plaisted, Kate; Saksida, Lisa; Alcántara, José; Weisblatt, Emma

    2003-02-28

    The weak central coherence hypothesis of Frith is one of the most prominent theories concerning the abnormal performance of individuals with autism on tasks that involve local and global processing. Individuals with autism often outperform matched nonautistic individuals on tasks in which success depends upon processing of local features, and underperform on tasks that require global processing. We review those studies that have been unable to identify the locus of the mechanisms that may be responsible for weak central coherence effects and those that show that local processing is enhanced in autism but not at the expense of global processing. In the light of these studies, we propose that the mechanisms which can give rise to 'weak central coherence' effects may be perceptual. More specifically, we propose that perception operates to enhance the representation of individual perceptual features but that this does not impact adversely on representations that involve integration of features. This proposal was supported in the two experiments we report on configural and feature discrimination learning in high-functioning children with autism. We also examined processes of perception directly, in an auditory filtering task which measured the width of auditory filters in individuals with autism and found that the width of auditory filters in autism were abnormally broad. We consider the implications of these findings for perceptual theories of the mechanisms underpinning weak central coherence effects.

  3. Validating a visual version of the metronome response task.

    PubMed

    Laflamme, Patrick; Seli, Paul; Smilek, Daniel

    2018-02-12

    The metronome response task (MRT)-a sustained-attention task that requires participants to produce a response in synchrony with an audible metronome-was recently developed to index response variability in the context of studies on mind wandering. In the present studies, we report on the development and validation of a visual version of the MRT (the visual metronome response task; vMRT), which uses the rhythmic presentation of visual, rather than auditory, stimuli. Participants completed the vMRT (Studies 1 and 2) and the original (auditory-based) MRT (Study 2) while also responding to intermittent thought probes asking them to report the depth of their mind wandering. The results showed that (1) individual differences in response variability during the vMRT are highly reliable; (2) prior to thought probes, response variability increases with increasing depth of mind wandering; (3) response variability is highly consistent between the vMRT and the original MRT; and (4) both response variability and depth of mind wandering increase with increasing time on task. Our results indicate that the original MRT findings are consistent across the visual and auditory modalities, and that the response variability measured in both tasks indexes a non-modality-specific tendency toward behavioral variability. The vMRT will be useful in the place of the MRT in experimental contexts in which researchers' designs require a visual-based primary task.

  4. Effects of smoking marijuana on focal attention and brain blood flow.

    PubMed

    O'Leary, Daniel S; Block, Robert I; Koeppel, Julie A; Schultz, Susan K; Magnotta, Vincent A; Ponto, Laura Boles; Watkins, G Leonard; Hichwa, Richard D

    2007-04-01

    Using an attention task to control cognitive state, we previously found that smoking marijuana changes regional cerebral blood flow (rCBF). The present study measured rCBF during tasks requiring attention to left and right ears in different conditions. Twelve occasional marijuana users (mean age 23.5 years) were imaged with PET using [15O]water after smoking marijuana or placebo cigarettes as they performed a reaction time (RT) baseline task, and a dichotic listening task with attend-right- and attend-left-ear instructions. Smoking marijuana, but not placebo, resulted in increased normalized rCBF in orbital frontal cortex, anterior cingulate, temporal pole, insula, and cerebellum. rCBF was reduced in visual and auditory cortices. These changes occurred in all three tasks and replicated our earlier studies. They appear to reflect the direct effects of marijuana on the brain. Smoking marijuana lowered rCBF in auditory cortices compared to placebo but did not alter the normal pattern of attention-related rCBF asymmetry (i.e., greater rCBF in the temporal lobe contralateral to the direction of attention) that was also observed after placebo. These data indicate that marijuana has dramatic direct effects on rCBF, but causes relatively little change in the normal pattern of task-related rCBF on this auditory focused attention task. Copyright 2007 John Wiley & Sons, Ltd.

  5. Designing informative warning signals: Effects of indicator type, modality, and task demand on recognition speed and accuracy

    PubMed Central

    Stevens, Catherine J.; Brennan, David; Petocz, Agnes; Howell, Clare

    2009-01-01

    An experiment investigated the assumption that natural indicators which exploit existing learned associations between a signal and an event make more effective warnings than previously unlearned symbolic indicators. Signal modality (visual, auditory) and task demand (low, high) were also manipulated. Warning effectiveness was indexed by accuracy and reaction time (RT) recorded during training and dual-task test phases. Thirty-six participants were trained to recognize 4 natural and 4 symbolic indicators, either visual or auditory, paired with critical incidents from an aviation context. As hypothesized, accuracy was greater and RT was faster in response to natural indicators during the training phase. This pattern of responding was upheld in test phase conditions with respect to accuracy but observed in RT only in test phase conditions involving high demand and the auditory modality. Using the experiment as a specific example, we argue for the importance of considering the cognitive contribution of the user (viz., prior learned associations) in the warning design process. Drawing on semiotics and cognitive psychology, we highlight the indexical nature of so-called auditory icons or natural indicators and argue that the cogniser is an indispensable element in the tripartite nature of signification. PMID:20523852

  6. Preattentive representation of feature conjunctions for concurrent spatially distributed auditory objects.

    PubMed

    Takegata, Rika; Brattico, Elvira; Tervaniemi, Mari; Varyagina, Olga; Näätänen, Risto; Winkler, István

    2005-09-01

    The role of attention in conjoining features of an object has been a topic of much debate. Studies using the mismatch negativity (MMN), an index of detecting acoustic deviance, suggested that the conjunctions of auditory features are preattentively represented in the brain. These studies, however, used sequentially presented sounds and thus are not directly comparable with visual studies of feature integration. Therefore, the current study presented an array of spatially distributed sounds to determine whether the auditory features of concurrent sounds are correctly conjoined without focal attention directed to the sounds. Two types of sounds differing from each other in timbre and pitch were repeatedly presented together while subjects were engaged in a visual n-back working-memory task and ignored the sounds. Occasional reversals of the frequent pitch-timbre combinations elicited MMNs of a very similar amplitude and latency irrespective of the task load. This result suggested preattentive integration of auditory features. However, performance in a subsequent target-search task with the same stimuli indicated the occurrence of illusory conjunctions. The discrepancy between the results obtained with and without focal attention suggests that illusory conjunctions may occur during voluntary access to the preattentively encoded object representations.

  7. Neural effects of cognitive control load on auditory selective attention

    PubMed Central

    Sabri, Merav; Humphries, Colin; Verber, Matthew; Liebenthal, Einat; Binder, Jeffrey R.; Mangalathu, Jain; Desai, Anjali

    2014-01-01

    Whether and how working memory disrupts or alters auditory selective attention is unclear. We compared simultaneous event-related potentials (ERP) and functional magnetic resonance imaging (fMRI) responses associated with task-irrelevant sounds across high and low working memory load in a dichotic-listening paradigm. Participants performed n-back tasks (1-back, 2-back) in one ear (Attend ear) while ignoring task-irrelevant speech sounds in the other ear (Ignore ear). The effects of working memory load on selective attention were observed at 130-210 msec, with higher load resulting in greater irrelevant syllable-related activation in localizer-defined regions in auditory cortex. The interaction between memory load and presence of irrelevant information revealed stronger activations primarily in frontal and parietal areas due to presence of irrelevant information in the higher memory load. Joint independent component analysis of ERP and fMRI data revealed that the ERP component in the N1 time-range is associated with activity in superior temporal gyrus and medial prefrontal cortex. These results demonstrate a dynamic relationship between working memory load and auditory selective attention, in agreement with the load model of attention and the idea of common neural resources for memory and attention. PMID:24946314

  8. Bimanual Coordination Learning with Different Augmented Feedback Modalities and Information Types

    PubMed Central

    Chiou, Shiau-Chuen; Chang, Erik Chihhung

    2016-01-01

    Previous studies have shown that bimanual coordination learning is more resistant to the removal of augmented feedback when acquired through the auditory channel than through the visual channel. However, it is unclear whether this differential “guidance effect” between feedback modalities is due to enhanced sensorimotor integration via the non-dominant auditory channel or strengthened linkage to kinesthetic information under rhythmic input. The current study aimed to examine how modalities (visual vs. auditory) and information types (continuous visuospatial vs. discrete rhythmic) of concurrent augmented feedback influence bimanual coordination learning. Participants either learned a 90°-out-of-phase pattern for three consecutive days with Lissajous feedback indicating the integrated position of both arms, or with visual or auditory rhythmic feedback reflecting the relative timing of the movement. The results showed differing performance changes between the Lissajous group and the two rhythmic groups after practice once the feedback was removed, indicating that the guidance effect may be modulated by the type of information provided during practice. Moreover, significant performance improvement in the dual-task condition, in which an irregular rhythm counting task was applied as a secondary task, also suggested that lower involvement of conscious control may result in better performance in bimanual coordination. PMID:26895286

  10. The Complex Pre-Execution Stage of Auditory Cognitive Control: ERPs Evidence from Stroop Tasks

    PubMed Central

    Yu, Bo; Wang, Xunda; Ma, Lin; Li, Liang; Li, Haifeng

    2015-01-01

    Cognitive control has been extensively studied from an Event-Related Potential (ERP) point of view in the visual modality using Stroop paradigms. Little work has been done with auditory Stroop paradigms, and inconsistent conclusions have been reported, especially on the conflict detection stage of cognitive control. This study investigated the early ERP components in an auditory Stroop paradigm, during which participants were asked to identify the volume of spoken words and ignore the word meanings. A series of significant ERP components was revealed that distinguished incongruent from congruent trials: two attenuated negative-polarity waves (the N1 and the N2) and three attenuated positive-polarity waves (the P1, the P2, and the P3) over the fronto-central area for the incongruent trials. These early ERP components imply that both a perceptual stage and an identification stage exist in the auditory Stroop effect. A 3-stage cognitive control model was thus proposed for a more detailed description of the human cognitive control mechanism in auditory Stroop tasks. PMID:26368570

  11. Auditory connections and functions of prefrontal cortex

    PubMed Central

    Plakke, Bethany; Romanski, Lizabeth M.

    2014-01-01

    The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931

  12. Early Stages of Melody Processing: Stimulus-Sequence and Task-Dependent Neuronal Activity in Monkey Auditory Cortical Fields A1 and R

    PubMed Central

    Yin, Pingbo; Mishkin, Mortimer; Sutter, Mitchell; Fritz, Jonathan B.

    2008-01-01

    To explore the effects of acoustic and behavioral context on neuronal responses in the core of auditory cortex (fields A1 and R), two monkeys were trained on a go/no-go discrimination task in which they learned to respond selectively to a four-note target (S+) melody and withhold response to a variety of other nontarget (S−) sounds. We analyzed evoked activity from 683 units in A1/R of the trained monkeys during task performance and from 125 units in A1/R of two naive monkeys. We characterized two broad classes of neural activity that were modulated by task performance. Class I consisted of tone-sequence–sensitive enhancement and suppression responses. Enhanced or suppressed responses to specific tonal components of the S+ melody were frequently observed in trained monkeys, but enhanced responses were rarely seen in naive monkeys. Both facilitatory and suppressive responses in the trained monkeys showed a temporal pattern different from that observed in naive monkeys. Class II consisted of nonacoustic activity, characterized by a task-related component that correlated with bar release, the behavioral response leading to reward. We observed a significantly higher percentage of both Class I and Class II neurons in field R than in A1. Class I responses may help encode a long-term representation of the behaviorally salient target melody. Class II activity may reflect a variety of nonacoustic influences, such as attention, reward expectancy, somatosensory inputs, and/or motor set and may help link auditory perception and behavioral response. Both types of neuronal activity are likely to contribute to the performance of the auditory task. PMID:18842950

  13. EEG theta power and coherence to octave illusion in first-episode paranoid schizophrenia with auditory hallucinations.

    PubMed

    Zheng, Leilei; Chai, Hao; Yu, Shaohua; Xu, You; Chen, Wanzhen; Wang, Wei

    2015-01-01

    The exact mechanism behind auditory hallucinations in schizophrenia remains unknown. A corollary discharge dysfunction hypothesis has been put forward, but it requires further confirmation. Electroencephalography (EEG) of the Deutsch octave illusion might offer more insight, by demonstrating an abnormal cerebral activation similar to that under auditory hallucinations in schizophrenic patients. We invited 23 first-episode schizophrenic patients with auditory hallucinations and 23 healthy participants to listen to silence and two sound sequences, which consisted of alternating 400- and 800-Hz tones. EEG spectral power and coherence values of different frequency bands, including theta rhythm (3.5-7.5 Hz), were computed using 32 scalp electrodes. Task-related spectral power changes and task-related coherence differences were also calculated. Clinical characteristics of patients were rated using the Positive and Negative Syndrome Scale. After both sequences of octave illusion, the task-related theta power change values of frontal and temporal areas were significantly lower, and the task-related theta coherence difference values of intrahemispheric frontal-temporal areas were significantly higher in schizophrenic patients than in healthy participants. Moreover, the task-related power change values in both hemispheres were negatively correlated and the task-related coherence difference values in the right hemisphere were positively correlated with the hallucination score in schizophrenic patients. We tested the Deutsch octave illusion only in first-episode schizophrenic patients in the acute phase. Further studies might adopt other illusions or examine other forms of schizophrenia. Our results showed a lower activation but higher connection within frontal and temporal areas in schizophrenic patients under octave illusion.
This suggests that an oversynchronized but weakly activated frontal area exerts its action on the ipsilateral temporal area, which supports the corollary discharge dysfunction hypothesis. © 2014 S. Karger AG, Basel.
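
The band power and coherence measures described in this record can be sketched as follows. This is an illustrative pipeline only: the sampling rate, Welch window length, and epoching are assumed values, not the authors' actual preprocessing.

```python
# Hedged sketch (not the authors' pipeline): Welch band power and
# magnitude-squared coherence within the theta band (3.5-7.5 Hz) named
# in the abstract. Sampling rate and window length are assumptions.
import numpy as np
from scipy.signal import coherence, welch

FS = 250.0            # assumed sampling rate (Hz)
THETA = (3.5, 7.5)    # theta band from the abstract

def band_power(x, band, fs=FS):
    """Mean Welch power spectral density of one channel within a band."""
    f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))
    mask = (f >= band[0]) & (f <= band[1])
    return pxx[mask].mean()

def band_coherence(x, y, band, fs=FS):
    """Mean magnitude-squared coherence between two channels within a band."""
    f, cxy = coherence(x, y, fs=fs, nperseg=int(2 * fs))
    mask = (f >= band[0]) & (f <= band[1])
    return cxy[mask].mean()

def task_related_change(task_value, rest_value):
    """Task-related change: task-epoch estimate minus silence-epoch estimate."""
    return task_value - rest_value
```

The abstract's "task-related theta power change" would then be `band_power` over a task epoch minus `band_power` over the silence epoch per electrode, with coherence differences computed likewise per electrode pair.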

  14. Short-term visual deprivation reduces interference effects of task-irrelevant facial expressions on affective prosody judgments

    PubMed Central

    Fengler, Ineke; Nava, Elena; Röder, Brigitte

    2015-01-01

    Several studies have suggested that neuroplasticity can be triggered by short-term visual deprivation in healthy adults. Specifically, these studies have provided evidence that visual deprivation reversibly affects basic perceptual abilities. The present study investigated the long-lasting effects of short-term visual deprivation on emotion perception. To this aim, we visually deprived a group of young healthy adults, age-matched with a group of non-deprived controls, for 3 h and tested them before and after visual deprivation (i.e., after 8 h on average and at 4-week follow-up) on an audio–visual (i.e., faces and voices) emotion discrimination task. To observe changes at the level of basic perceptual skills, we additionally employed a simple audio–visual (i.e., tone bursts and light flashes) discrimination task and two unimodal (one auditory and one visual) perceptual threshold measures. During the 3 h period, both groups performed a series of auditory tasks. To exclude the possibility that changes in emotion discrimination emerged merely as a consequence of exposure to auditory stimulation during the 3 h stay in the dark, we visually deprived an additional group of age-matched participants who concurrently performed tasks (i.e., tactile) unrelated to the abilities tested later. The two visually deprived groups showed enhanced affective prosodic discrimination abilities in the context of incongruent facial expressions following the period of visual deprivation; this effect was partially maintained until follow-up. By contrast, no changes were observed in affective facial expression discrimination or in the basic perception tasks in any group. These findings suggest that short-term visual deprivation per se triggers a reweighting of visual and auditory emotional cues, which may persist over longer durations. PMID:25954166

  15. Perception of patterns of musical beat distribution in phonological developmental dyslexia: significant longitudinal relations with word reading and reading comprehension.

    PubMed

    Goswami, Usha; Huss, Martina; Mead, Natasha; Fosker, Tim; Verney, John P

    2013-05-01

    In a recent study, we reported that the accurate perception of beat structure in music ('perception of musical meter') accounted for over 40% of the variance in single word reading in children with and without dyslexia (Huss et al., 2011). Performance in the musical task was most strongly associated with the auditory processing of rise time, even though beat structure was varied by manipulating the duration of the musical notes. Here we administered the same musical task a year later to 88 children with and without dyslexia, and used new auditory processing measures to provide a more comprehensive picture of the auditory correlates of the beat structure task. We also measured reading comprehension and nonword reading in addition to single word reading. One year later, the children with dyslexia performed more poorly in the musical task than younger children reading at the same level, indicating a severe perceptual deficit for musical beat patterns. They now also had significantly poorer perception of sound rise time than younger children. Longitudinal analyses showed that the musical beat structure task was a significant longitudinal predictor of development in reading, accounting for over half of the variance in reading comprehension along with a linguistic measure of phonological awareness. The non-linguistic musical beat structure task is an important independent longitudinal and concurrent predictor of variance in reading attainment by children. The different longitudinal versus concurrent associations between musical beat perception and auditory processing suggest that individual differences in the perception of rhythmic timing are an important shared neural basis for individual differences in children in linguistic and musical processing. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Sentence Comprehension in Adolescents with down Syndrome and Typically Developing Children: Role of Sentence Voice, Visual Context, and Auditory-Verbal Short-Term Memory.

    ERIC Educational Resources Information Center

    Miolo, Giuliana; Chapman, Robins S.; Sindberg, Heidi A.

    2005-01-01

    The authors evaluated the roles of auditory-verbal short-term memory, visual short-term memory, and group membership in predicting language comprehension, as measured by an experimental sentence comprehension task (SCT) and the Test for Auditory Comprehension of Language--Third Edition (TACL-3; E. Carrow-Woolfolk, 1999) in 38 participants: 19 with…

  17. Selective Attention to Auditory Memory Neurally Enhances Perceptual Precision.

    PubMed

    Lim, Sung-Joo; Wöstmann, Malte; Obleser, Jonas

    2015-12-09

    Selective attention to a task-relevant stimulus facilitates encoding of that stimulus into a working memory representation. It is less clear whether selective attention also improves the precision of a stimulus already represented in memory. Here, we investigate the behavioral and neural dynamics of selective attention to representations in auditory working memory (i.e., auditory objects) using psychophysical modeling and model-based analysis of electroencephalographic signals. Human listeners performed a syllable pitch discrimination task where two syllables served as to-be-encoded auditory objects. Valid (vs neutral) retroactive cues were presented during retention to allow listeners to selectively attend to the to-be-probed auditory object in memory. Behaviorally, listeners represented auditory objects in memory more precisely (expressed by steeper slopes of a psychometric curve) and made faster perceptual decisions when valid compared to neutral retrocues were presented. Neurally, valid compared to neutral retrocues elicited a larger frontocentral sustained negativity in the evoked potential as well as enhanced parietal alpha/low-beta oscillatory power (9-18 Hz) during memory retention. Critically, individual magnitudes of alpha oscillatory power (7-11 Hz) modulation predicted the degree to which valid retrocues benefitted individuals' behavior. Our results indicate that selective attention to a specific object in auditory memory does benefit human performance not by simply reducing memory load, but by actively engaging complementary neural resources to sharpen the precision of the task-relevant object in memory. Can selective attention improve the representational precision with which objects are held in memory? And if so, what are the neural mechanisms that support such improvement? These issues have been rarely examined within the auditory modality, in which acoustic signals change and vanish on a milliseconds time scale. 
Introducing a new auditory memory paradigm and using model-based electroencephalography analyses in humans, we thus bridge this gap and reveal behavioral and neural signatures of increased, attention-mediated working memory precision. We further show that the extent of alpha power modulation predicts the degree to which individuals' memory performance benefits from selective attention. Copyright © 2015 the authors 0270-6474/15/3516094-11$15.00/0.
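
The "steeper slopes of a psychometric curve" reported in this record can be illustrated with a simple logistic fit. The function names, parameterization, and data below are hypothetical; the abstract does not specify the authors' modeling code.

```python
# Hedged sketch: fitting a logistic psychometric function to syllable pitch
# discrimination responses; a steeper fitted slope k corresponds to a more
# precise memory representation. Illustrative only, not the authors' analysis.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, mu, k):
    """Logistic psychometric function: P('higher') given pitch difference x."""
    return 1.0 / (1.0 + np.exp(-k * (x - mu)))

def fit_psychometric(pitch_diffs, prop_higher):
    """Fit (mu, k): mu = point of subjective equality, k = slope (precision)."""
    popt, _ = curve_fit(psychometric, pitch_diffs, prop_higher, p0=[0.0, 1.0])
    return popt
```

Under valid retrocues the fitted slope k would come out larger than under neutral cues, mirroring the steeper psychometric slopes reported above.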

  18. Developmental dyslexia: exploring how much phonological and visual attention span disorders are linked to simultaneous auditory processing deficits.

    PubMed

    Lallier, Marie; Donnadieu, Sophie; Valdois, Sylviane

    2013-07-01

    The simultaneous auditory processing skills of 17 dyslexic children and 17 skilled readers were measured using a dichotic listening task. Results showed that the dyslexic children exhibited difficulties reporting syllabic material when presented simultaneously. As a measure of simultaneous visual processing, visual attention span skills were assessed in the dyslexic children. We presented the dyslexic children with a phonological short-term memory task and a phonemic awareness task to quantify their phonological skills. Visual attention spans correlated positively with individual scores obtained on the dichotic listening task while phonological skills did not correlate with either dichotic scores or visual attention span measures. Moreover, all the dyslexic children with a dichotic listening deficit showed a simultaneous visual processing deficit, and a substantial number of dyslexic children exhibited phonological processing deficits whether or not they exhibited low dichotic listening scores. These findings suggest that processing simultaneous auditory stimuli may be impaired in dyslexic children regardless of phonological processing difficulties and be linked to similar problems in the visual modality.

  19. Speech processing and production in two-year-old children acquiring isiXhosa: A tale of two children

    PubMed Central

    Rossouw, Kate; Fish, Laura; Jansen, Charne; Manley, Natalie; Powell, Michelle; Rosen, Loren

    2016-01-01

    We investigated the speech processing and production of 2-year-old children acquiring isiXhosa in South Africa. Two children (2 years, 5 months; 2 years, 8 months) are presented as single cases. Speech input processing, stored phonological knowledge and speech output are described, based on data from auditory discrimination, naming, and repetition tasks. Both children were approximating adult levels of accuracy in their speech output, although naming was constrained by vocabulary. Performance across tasks was variable: One child showed a relative strength in repetition and experienced the most difficulty with auditory discrimination. The other performed equally well in naming and repetition, and obtained 100% on the auditory discrimination task. There are limited data regarding the typical development of isiXhosa, and the focus has mainly been on speech production. This exploratory study describes typical development of isiXhosa using a variety of tasks understood within a psycholinguistic framework. We describe some ways in which speech and language therapists can devise and carry out assessment with children in situations where few formal assessments exist, and also detail the challenges of such work. PMID:27245131

  20. Is conflict monitoring supramodal? Spatiotemporal dynamics of cognitive control processes in an auditory Stroop task

    PubMed Central

    Donohue, Sarah E.; Liotti, Mario; Perez, Rick; Woldorff, Marty G.

    2011-01-01

    The electrophysiological correlates of conflict processing and cognitive control have been well characterized for the visual modality in paradigms such as the Stroop task. Much less is known about corresponding processes in the auditory modality. Here, electroencephalographic recordings of brain activity were measured during an auditory Stroop task, using three different forms of behavioral response (Overt verbal, Covert verbal, and Manual), that closely paralleled our previous visual-Stroop study. As expected, behavioral responses were slower and less accurate for incongruent compared to congruent trials. Neurally, incongruent trials showed an enhanced fronto-central negative-polarity wave (Ninc), similar to the N450 in visual-Stroop tasks, with similar variations as a function of behavioral response mode, but peaking ~150 ms earlier, followed by an enhanced positive posterior wave. In addition, sequential behavioral and neural effects were observed that supported the conflict-monitoring and cognitive-adjustment hypothesis. Thus, while some aspects of the conflict detection processes, such as timing, may be modality-dependent, the general mechanisms would appear to be supramodal. PMID:21964643

  1. The Relationship Between Speech Production and Speech Perception Deficits in Parkinson's Disease.

    PubMed

    De Keyser, Kim; Santens, Patrick; Bockstael, Annelies; Botteldooren, Dick; Talsma, Durk; De Vos, Stefanie; Van Cauwenberghe, Mieke; Verheugen, Femke; Corthals, Paul; De Letter, Miet

    2016-10-01

    This study investigated the possible relationship between hypokinetic speech production and speech intensity perception in patients with Parkinson's disease (PD). Participants included 14 patients with idiopathic PD and 14 matched healthy controls (HCs) with normal hearing and cognition. First, speech production was objectively assessed through a standardized speech intelligibility assessment, acoustic analysis, and speech intensity measurements. Second, an overall estimation task and an intensity estimation task were administered to evaluate overall speech perception and speech intensity perception, respectively. Finally, correlation analysis was performed between the speech characteristics of the overall estimation task and the corresponding acoustic analysis. The interaction between speech production and speech intensity perception was investigated with an intensity imitation task. Acoustic analysis and speech intensity measurements demonstrated significant differences in speech production between patients with PD and the HCs. A different pattern in the auditory perception of speech and speech intensity was found in the PD group. Auditory perceptual deficits may influence speech production in patients with PD. The present results suggest a disturbed auditory perception related to an automatic monitoring deficit in PD.

  2. The role of auditory cortex in retention of rhythmic patterns as studied in patients with temporal lobe removals including Heschl's gyrus.

    PubMed

    Penhune, V B; Zatorre, R J; Feindel, W H

    1999-03-01

    This experiment examined the participation of the auditory cortex of the temporal lobe in the perception and retention of rhythmic patterns. Four patient groups were tested on a paradigm contrasting reproduction of auditory and visual rhythms: those with right or left anterior temporal lobe removals which included Heschl's gyrus (HG), the region of primary auditory cortex (RT-A and LT-A); and patients with right or left anterior temporal lobe removals which did not include HG (RT-a and LT-a). Estimation of lesion extent in HG using an MRI-based probabilistic map indicated that, in the majority of subjects, the lesion was confined to the anterior secondary auditory cortex located on the anterior-lateral extent of HG. On the rhythm reproduction task, RT-A patients were impaired in retention of auditory but not visual rhythms, particularly when accurate reproduction of stimulus durations was required. In contrast, LT-A patients as well as both RT-a and LT-a patients were relatively unimpaired on this task. None of the patient groups was impaired in the ability to make an adequate motor response. Further, they were unimpaired when using a dichotomous response mode, indicating that they were able to adequately differentiate the stimulus durations and, when given an alternative method of encoding, to retain them. Taken together, these results point to a specific role for the right anterior secondary auditory cortex in the retention of a precise analogue representation of auditory tonal patterns.

  3. Musically cued gait-training improves both perceptual and motor timing in Parkinson's disease.

    PubMed

    Benoit, Charles-Etienne; Dalla Bella, Simone; Farrugia, Nicolas; Obrig, Hellmuth; Mainka, Stefan; Kotz, Sonja A

    2014-01-01

    It is well established that auditory cueing improves gait in patients with idiopathic Parkinson's disease (IPD). Disease-related reductions in speed and step length can be improved by providing rhythmical auditory cues via a metronome or music. However, effects on cognitive aspects of motor control have yet to be thoroughly investigated. If synchronization of movement to an auditory cue relies on a supramodal timing system involved in perceptual, motor, and sensorimotor integration, auditory cueing can be expected to affect both motor and perceptual timing. Here, we tested this hypothesis by assessing perceptual and motor timing in 15 IPD patients before and after a 4-week music training program with rhythmic auditory cueing. Long-term effects were assessed 1 month after the end of the training. Perceptual and motor timing was evaluated with a battery for the assessment of auditory sensorimotor and timing abilities and compared to that of age-, gender-, and education-matched healthy controls. Prior to training, IPD patients exhibited impaired perceptual and motor timing. Training improved patients' performance in tasks requiring synchronization with isochronous sequences, and enhanced their ability to adapt to durational changes in a sequence in hand tapping tasks. Benefits of cueing extended to time perception (duration discrimination and detection of misaligned beats in musical excerpts). The current results demonstrate that auditory cueing leads to benefits beyond gait and support the idea that coupling gait to rhythmic auditory cues in IPD patients relies on a neuronal network engaged in both perceptual and motor timing.

  4. The neural basis of visual dominance in the context of audio-visual object processing.

    PubMed

    Schmid, Carmen; Büchel, Christian; Rose, Michael

    2011-03-01

    Visual dominance refers to the observation that in bimodal environments vision often has an advantage over other senses in humans. Therefore, a better memory performance for visual compared to, e.g., auditory material is assumed. However, the reason for this preferential processing and its relation to memory formation is largely unknown. In this fMRI experiment, we manipulated cross-modal competition and attention, two factors that both modulate bimodal stimulus processing and can affect memory formation. Pictures and sounds of objects were presented simultaneously in two levels of recognisability, thus manipulating the amount of cross-modal competition. Attention was manipulated via task instruction and directed either to the visual or the auditory modality. The factorial design allowed a direct comparison of the effects between both modalities. The resulting memory performance showed that visual dominance was limited to a distinct task setting. Visual was superior to auditory object memory only when allocating attention towards the competing modality. During encoding, cross-modal competition and attention towards the opponent domain reduced fMRI signals in both neural systems, but cross-modal competition was more pronounced in the auditory system, and only in auditory cortex was this competition further modulated by attention. Furthermore, neural activity reduction in auditory cortex during encoding was closely related to the behavioural auditory memory impairment. These results indicate that visual dominance emerges from a less pronounced vulnerability of the visual system against competition from the auditory domain. Copyright © 2010 Elsevier Inc. All rights reserved.

  5. When music is salty: The crossmodal associations between sound and taste

    PubMed Central

    Guetta, Rachel; Loui, Psyche

    2017-01-01

    Here we investigate associations between complex auditory and complex taste stimuli. A novel piece of music was composed and recorded in four different styles of musical articulation to reflect the four basic taste groups (sweet, sour, salty, bitter). In Experiment 1, participants performed above chance at pairing the music clips with corresponding taste words. Experiment 2 uses multidimensional scaling to interpret how participants categorize these musical stimuli, and to show that auditory categories can be organized in a similar manner as taste categories. Experiment 3 introduces four different flavors of custom-made chocolate ganache and shows that participants can match music clips with the corresponding taste stimuli with above-chance accuracy. Experiment 4 demonstrates the partial role of pleasantness in crossmodal mappings between sound and taste. The present findings confirm that individuals are able to make crossmodal associations between complex auditory and gustatory stimuli, and that valence may mediate multisensory integration in the general population. PMID:28355227

  6. Local inhibition modulates learning-dependent song encoding in the songbird auditory cortex

    PubMed Central

    Thompson, Jason V.; Jeanne, James M.

    2013-01-01

    Changes in inhibition during development are well documented, but the role of inhibition in adult learning-related plasticity is not understood. In songbirds, vocal recognition learning alters the neural representation of songs across the auditory forebrain, including the caudomedial nidopallium (NCM), a region analogous to mammalian secondary auditory cortices. Here, we block local inhibition with the iontophoretic application of gabazine, while simultaneously measuring song-evoked spiking activity in NCM of European starlings trained to recognize sets of conspecific songs. We find that local inhibition differentially suppresses the responses to learned and unfamiliar songs and enhances spike-rate differences between learned categories of songs. These learning-dependent response patterns emerge, in part, through inhibitory modulation of selectivity for song components and the masking of responses to specific acoustic features without altering spectrotemporal tuning. The results describe a novel form of inhibitory modulation of the encoding of learned categories and demonstrate that inhibition plays a central role in shaping the responses of neurons to learned, natural signals. PMID:23155175

  7. Transfer Effects to a Multimodal Dual-Task after Working Memory Training and Associated Neural Correlates in Older Adults - A Pilot Study.

    PubMed

    Heinzel, Stephan; Rimpel, Jérôme; Stelzel, Christine; Rapp, Michael A

    2017-01-01

    Working memory (WM) performance declines with age. However, several studies have shown that WM training may lead to performance increases not only in the trained task, but also in untrained cognitive transfer tasks. It has been suggested that transfer effects occur if training task and transfer task share specific processing components that are supposedly processed in the same brain areas. In the current study, we investigated whether single-task WM training and training-related alterations in neural activity might support performance in a dual-task setting, thus assessing transfer effects to higher-order control processes in the context of dual-task coordination. A sample of older adults (age 60-72) was assigned to either a training or control group. The training group participated in 12 sessions of an adaptive n-back training. At pre and post-measurement, a multimodal dual-task was performed in all participants to assess transfer effects. This task consisted of two simultaneous delayed match to sample WM tasks using two different stimulus modalities (visual and auditory) that were performed either in isolation (single-task) or in conjunction (dual-task). A subgroup also participated in functional magnetic resonance imaging (fMRI) during the performance of the n-back task before and after training. While no transfer to single-task performance was found, dual-task costs in both the visual modality ( p < 0.05) and the auditory modality ( p < 0.05) decreased at post-measurement in the training but not in the control group. In the fMRI subgroup of the training participants, neural activity changes in left dorsolateral prefrontal cortex (DLPFC) during one-back predicted post-training auditory dual-task costs, while neural activity changes in right DLPFC during three-back predicted visual dual-task costs. Results might indicate an improvement in central executive processing that could facilitate both WM and dual-task coordination.
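    The dual-task costs described above can be quantified in several ways; as an illustration only (the abstract does not state the exact formula used), a common convention expresses the cost as the proportional performance drop from single-task to dual-task conditions:

```python
def dual_task_cost(single_task_score, dual_task_score):
    """Proportional dual-task cost for an accuracy-like score.

    A common convention, assumed here for illustration (the study does
    not give its exact formula): the relative drop from single- to
    dual-task performance. Larger values mean greater coordination cost.
    """
    return (single_task_score - dual_task_score) / single_task_score

# Hypothetical scores: 90% correct alone, 72% correct under dual-task load
cost = dual_task_cost(0.90, 0.72)  # a 20% relative drop
```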

  9. Task alters category representations in prefrontal but not high-level visual cortex.

    PubMed

    Bugatus, Lior; Weiner, Kevin S; Grill-Spector, Kalanit

    2017-07-15

    A central question in neuroscience is how cognitive tasks affect category representations across the human brain. Regions in lateral occipito-temporal cortex (LOTC), ventral temporal cortex (VTC), and ventro-lateral prefrontal cortex (VLPFC) constitute the extended "what" pathway, which is considered instrumental for visual category processing. However, it is unknown (1) whether distributed responses across LOTC, VTC, and VLPFC explicitly represent category, task, or some combination of both, and (2) in what way representations across these subdivisions of the extended "what" pathway may differ. To fill these gaps in knowledge, we scanned 12 participants using fMRI to test the effect of category and task on distributed responses across LOTC, VTC, and VLPFC. Results reveal that task and category modulate responses in high-level visual regions as well as prefrontal cortex. However, we found fundamentally different types of representations across the brain. Distributed responses in high-level visual regions are more strongly driven by category than task, and exhibit task-independent category representations. In contrast, distributed responses in prefrontal cortex are more strongly driven by task than category, and contain task-dependent category representations. Together, these findings of differential representations across the brain support a new idea that LOTC and VTC maintain stable category representations allowing efficient processing of visual information, while prefrontal cortex contains flexible representations in which category information may emerge only when relevant to the task. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Audio-visual speech processing in age-related hearing loss: Stronger integration and increased frontal lobe recruitment.

    PubMed

    Rosemann, Stephanie; Thiel, Christiane M

    2018-07-15

    Hearing loss is associated with difficulties in understanding speech, especially under adverse listening conditions. In these situations, seeing the speaker improves speech intelligibility in hearing-impaired participants. On the neuronal level, previous research has shown cross-modal plastic reorganization in the auditory cortex following hearing loss, leading to altered processing of auditory, visual and audio-visual information. However, how reduced auditory input affects audio-visual speech perception in hearing-impaired subjects is largely unknown. We here investigated the impact of mild to moderate age-related hearing loss on processing audio-visual speech using functional magnetic resonance imaging. Normal-hearing and hearing-impaired participants performed two audio-visual speech integration tasks: a sentence detection task inside the scanner and the McGurk illusion outside the scanner. Both tasks consisted of congruent and incongruent audio-visual conditions, as well as auditory-only and visual-only conditions. We found a significantly stronger McGurk illusion in the hearing-impaired participants, which indicates stronger audio-visual integration. Neurally, hearing loss was associated with an increased recruitment of frontal brain areas when processing incongruent audio-visual, auditory and also visual speech stimuli, which may reflect the increased effort to perform the task. Hearing loss modulated both the audio-visual integration strength measured with the McGurk illusion and brain activation in frontal areas in the sentence task, showing stronger integration and higher brain activation with increasing hearing loss. Incongruent compared to congruent audio-visual speech revealed an opposite brain activation pattern in left ventral postcentral gyrus in both groups, with higher activation in hearing-impaired participants in the incongruent condition. 
Our results indicate that already mild to moderate hearing loss impacts audio-visual speech processing accompanied by changes in brain activation particularly involving frontal areas. These changes are modulated by the extent of hearing loss. Copyright © 2018 Elsevier Inc. All rights reserved.

  11. A Quality Improvement Study on Avoidable Stressors and Countermeasures Affecting Surgical Motor Performance and Learning

    PubMed Central

    Conrad, Claudius; Konuk, Yusuf; Werner, Paul D.; Cao, Caroline G.; Warshaw, Andrew L.; Rattner, David W.; Stangenberg, Lars; Ott, Harald C.; Jones, Daniel B.; Miller, Diane L; Gee, Denise W.

    2012-01-01

    OBJECTIVE To explore how the two most important components of surgical performance - speed and accuracy - are influenced by different forms of stress, and what impact music has on these factors. SUMMARY BACKGROUND DATA Based on a recently published pilot study on surgical experts, we designed an experiment examining the effects of auditory stress, mental stress, and music on surgical performance and learning, and then correlated the data with psychometric measures of the role of music in a novice surgeon’s life. METHODS 31 surgeons were recruited for a crossover study. Surgeons were randomized to four simple standardized tasks to be performed on the Surgical SIM VR laparoscopic simulator, allowing exact tracking of speed and accuracy. Tasks were performed under a variety of conditions, including silence, dichotic music (auditory stress), defined classical music (auditory relaxation), and mental loading (mental arithmetic tasks). Tasks were performed twice to test for memory consolidation and to accommodate for baseline variability. Performance was correlated to the Brief Musical Experience Questionnaire (MEQ). RESULTS Mental loading influences performance with respect to accuracy, speed, and recall more negatively than does auditory stress. Defined classical music might lead to minimally worse performance initially, but leads to significantly improved memory consolidation. Furthermore, psychologic testing of the volunteers suggests that surgeons with greater musical commitment, measured by the MEQ, perform worse under the mental loading condition. CONCLUSION Mental distraction and auditory stress negatively affect specific components of surgical learning and performance. If used appropriately, classical music may positively affect surgical memory consolidation. It also may be possible to predict surgeons’ performance and learning under stress through psychological tests on the role of music in a surgeon’s life. 
Further investigation is necessary to determine the cognitive processes behind these correlations. PMID:22584632

  12. Brain activity associated with selective attention, divided attention and distraction.

    PubMed

    Salo, Emma; Salmela, Viljami; Salmi, Juha; Numminen, Jussi; Alho, Kimmo

    2017-06-01

    Top-down controlled selective or divided attention to sounds and visual objects, as well as bottom-up triggered attention to auditory and visual distractors, has been widely investigated. However, no study has systematically compared brain activations related to all these types of attention. To this end, we used functional magnetic resonance imaging (fMRI) to measure brain activity in participants performing a tone pitch or a foveal grating orientation discrimination task, or both, distracted by novel sounds not sharing frequencies with the tones or by extrafoveal visual textures. To force focusing of attention to tones or gratings, or both, task difficulty was kept constantly high with an adaptive staircase method. A whole brain analysis of variance (ANOVA) revealed fronto-parietal attention networks for both selective auditory and visual attention. A subsequent conjunction analysis indicated partial overlaps of these networks. However, like some previous studies, the present results also suggest segregation of prefrontal areas involved in the control of auditory and visual attention. The ANOVA also suggested, and another conjunction analysis confirmed, an additional activity enhancement in the left middle frontal gyrus related to divided attention supporting the role of this area in top-down integration of dual task performance. Distractors expectedly disrupted task performance. However, contrary to our expectations, activations specifically related to the distractors were found only in the auditory and visual cortices. This suggests gating of the distractors from further processing perhaps due to strictly focused attention in the current demanding discrimination tasks. Copyright © 2017 Elsevier B.V. All rights reserved.
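    The adaptive staircase used above to hold difficulty constantly high can be sketched as follows. This is a minimal, hypothetical 2-down/1-up rule (the abstract does not specify which staircase variant was used), which converges on roughly 70.7% correct:

```python
class Staircase:
    """Minimal 2-down/1-up adaptive staircase (hypothetical parameters).

    Two consecutive correct responses make the task harder; one error
    makes it easier, so performance hovers near ~70.7% correct.
    """

    def __init__(self, level, step):
        self.level = level          # current difficulty (e.g., pitch difference)
        self.step = step
        self._correct_streak = 0

    def update(self, correct):
        if correct:
            self._correct_streak += 1
            if self._correct_streak == 2:   # two in a row -> harder
                self.level -= self.step
                self._correct_streak = 0
        else:                               # one error -> easier
            self.level += self.step
            self._correct_streak = 0
        return self.level
```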

  13. Impairing the useful field of view in natural scenes: Tunnel vision versus general interference.

    PubMed

    Ringer, Ryan V; Throneburg, Zachary; Johnson, Aaron P; Kramer, Arthur F; Loschky, Lester C

    2016-01-01

    A fundamental issue in visual attention is the relationship between the useful field of view (UFOV), the region of visual space where information is encoded within a single fixation, and eccentricity. A common assumption is that impairing attentional resources reduces the size of the UFOV (i.e., tunnel vision). However, most research has not accounted for eccentricity-dependent changes in spatial resolution, potentially conflating fixed visual properties with flexible changes in visual attention. Williams (1988, 1989) argued that foveal loads are necessary to reduce the size of the UFOV, producing tunnel vision. Without a foveal load, it is argued that the attentional decrement is constant across the visual field (i.e., general interference). However, other research asserts that auditory working memory (WM) loads produce tunnel vision. To date, foveal versus auditory WM loads have not been compared to determine if they differentially change the size of the UFOV. In two experiments, we tested the effects of a foveal (rotated L vs. T discrimination) task and an auditory WM (N-back) task on an extrafoveal (Gabor) discrimination task. Gabor patches were scaled for size and processing time to produce equal performance across the visual field under single-task conditions, thus removing the confound of eccentricity-dependent differences in visual sensitivity. The results showed that although both foveal and auditory loads reduced Gabor orientation sensitivity, only the foveal load interacted with retinal eccentricity to produce tunnel vision, clearly demonstrating task-specific changes to the form of the UFOV. This has theoretical implications for understanding the UFOV.

  14. Effect of subliminal visual material on an auditory signal detection task.

    PubMed

    Moroney, E; Bross, M

    1984-02-01

    An experiment assessed the effect of subliminally embedded, visual material on an auditory detection task. 22 women and 19 men were presented tachistoscopically with words designated as "emotional" or "neutral" on the basis of prior GSRs and a Word Rating List under four conditions: (a) Unembedded Neutral, (b) Embedded Neutral, (c) Unembedded Emotional, and (d) Embedded Emotional. On each trial subjects made forced choices concerning the presence or absence of an auditory tone (1000 Hz) at threshold level; hits and false alarm rates were used to compute non-parametric indices for sensitivity (A') and response bias (B"). While over-all analyses of variance yielded no significant differences, further examination of the data suggests the presence of subliminally "receptive" and "non-receptive" subpopulations.
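    The non-parametric indices named above have standard closed forms (Grier, 1971); the exact variant used is not stated, so the following is a sketch of the standard formulas, assuming hit and false-alarm rates strictly between 0 and 1:

```python
def a_prime(hit_rate, fa_rate):
    """Non-parametric sensitivity A' (standard Grier, 1971, form)."""
    h, f = hit_rate, fa_rate
    if h >= f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    # below-chance responding: reflect around 0.5
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

def b_double_prime(hit_rate, fa_rate):
    """Non-parametric response bias B'' (standard Grier, 1971, form)."""
    h, f = hit_rate, fa_rate
    num = h * (1 - h) - f * (1 - f)
    den = h * (1 - h) + f * (1 - f)
    return num / den if den else 0.0
```

For example, a hit rate of 0.8 with a false-alarm rate of 0.2 yields A' = 0.875 and B'' = 0 (no bias).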

  15. Psychophysiological changes following auditory subliminal suggestions for activation and deactivation.

    PubMed

    Borgeat, F; Goulet, J

    1983-06-01

    This study aimed to measure possible psychophysiological changes resulting from auditory subliminal activation or deactivation suggestions. 18 subjects were alternately exposed to a control situation and to 25-dB activating and deactivating suggestions masked by a 40-dB white noise. Physiological measures (EMG, heart rate, skin-conductance levels and responses, and skin temperature) were recorded while subjects listened passively to the suggestions, during a stressing task that followed and after that task. Multivariate analysis of variance showed a significant effect of the activation subliminal suggestions during and following the stressing task. This result is discussed as indicating effects of consciously unrecognized perceptions on psychophysiological responses.

  16. Distinct Effects of Trial-Driven and Task Set-Related Control in Primary Visual Cortex

    PubMed Central

    Vaden, Ryan J.; Visscher, Kristina M.

    2015-01-01

    Task sets are task-specific configurations of cognitive processes that facilitate task-appropriate reactions to stimuli. While it is established that the trial-by-trial deployment of visual attention to expected stimuli influences neural responses in primary visual cortex (V1) in a retinotopically specific manner, it is not clear whether the mechanisms that help maintain a task set over many trials also operate with similar retinotopic specificity. Here, we address this question by using BOLD fMRI to characterize how portions of V1 that are specialized for different eccentricities respond during distinct components of an attention-demanding discrimination task: cue-driven preparation for a trial, trial-driven processing, task-initiation at the beginning of a block of trials, and task-maintenance throughout a block of trials. Tasks required either unimodal attention to an auditory or a visual stimulus or selective intermodal attention to the visual or auditory component of simultaneously presented visual and auditory stimuli. We found that while the retinotopic patterns of trial-driven and cue-driven activity depended on the attended stimulus, the retinotopic patterns of task-initiation and task-maintenance activity did not. Further, only the retinotopic patterns of trial-driven activity were found to depend on the presence of intermodal distraction. Participants who performed well on the intermodal selective attention tasks showed strong task-specific modulations of both trial-driven and task-maintenance activity. Importantly, task-related modulations of trial-driven and task-maintenance activity were in opposite directions. Together, these results confirm that there are (at least) two different processes for top-down control of V1: One, working trial-by-trial, differently modulates activity across different eccentricity sectors—portions of V1 corresponding to different visual eccentricities. 
The second process works across longer epochs of task performance, and does not differ among eccentricity sectors. These results are discussed in the context of previous literature examining top-down control of visual cortical areas. PMID:26163806

  17. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.

    PubMed

    Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina

    2015-07-01

    It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Secondly, they show that visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. 
These results are in line with the 'auditory-visual view' of auditory speech perception, which assumes that auditory speech recognition is optimized by using predictions from previously encoded speaker-specific audio-visual internal models. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Auditory and Visual Differences in Time Perception? An Investigation from a Developmental Perspective with Neuropsychological Tests

    ERIC Educational Resources Information Center

    Zelanti, Pierre S.; Droit-Volet, Sylvie

    2012-01-01

    Adults and children (5- and 8-year-olds) performed a temporal bisection task with either auditory or visual signals and either a short (0.5-1.0s) or long (4.0-8.0s) duration range. Their working memory and attentional capacities were assessed by a series of neuropsychological tests administered in both the auditory and visual modalities. Results…

  19. Throwing out the rules: anticipatory alpha-band oscillatory attention mechanisms during task-set reconfigurations.

    PubMed

    Foxe, John J; Murphy, Jeremy W; De Sanctis, Pierfilippo

    2014-06-01

    We assessed the role of alpha-band oscillatory activity during a task-switching design that required participants to switch between an auditory and a visual task, while task-relevant audiovisual inputs were simultaneously presented. Instructional cues informed participants which task to perform on a given trial and we assessed alpha-band power in the short 1.35-s period intervening between the cue and the task-imperative stimuli, on the premise that attentional biasing mechanisms would be deployed to resolve competition between the auditory and visual inputs. Prior work had shown that alpha-band activity was differentially deployed depending on the modality of the cued task. Here, we asked whether this activity would, in turn, be differentially deployed depending on whether participants had just made a switch of task or were being asked to simply repeat the task. It is well established that performance speed and accuracy are poorer on switch than on repeat trials. Here, however, the use of instructional cues completely mitigated these classic switch-costs. Measures of alpha-band synchronisation and desynchronisation showed that there was indeed greater and earlier differential deployment of alpha-band activity on switch vs. repeat trials. Contrary to our hypothesis, this differential effect was entirely due to changes in the amount of desynchronisation observed during switch and repeat trials of the visual task, with more desynchronisation over both posterior and frontal scalp regions during switch-visual trials. These data imply that particularly vigorous, and essentially fully effective, anticipatory biasing mechanisms resolved the competition between competing auditory and visual inputs when a rapid switch of task was required. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  20. Musicians' edge: A comparison of auditory processing, cognitive abilities and statistical learning.

    PubMed

    Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Demuth, Katherine; Arciuli, Joanne

    2016-12-01

It has been hypothesized that musical expertise is associated with enhanced auditory processing and cognitive abilities. Recent research has examined the relationship between musicians' advantage and implicit statistical learning skills. In the present study, we assessed a variety of auditory processing skills, cognitive processing skills, and statistical learning (auditory and visual forms) in age-matched musicians (N = 17) and non-musicians (N = 18). Musicians had significantly better performance than non-musicians on frequency discrimination and backward digit span. A key finding was that musicians had better auditory, but not visual, statistical learning than non-musicians. Performance on the statistical learning tasks was not correlated with performance on auditory and cognitive measures. Musicians' superior performance on auditory (but not visual) statistical learning suggests that musical expertise is associated with an enhanced ability to detect statistical regularities in auditory stimuli. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Visual Working Memory Capacity for Objects from Different Categories: A Face-Specific Maintenance Effect

    ERIC Educational Resources Information Center

    Wong, Jason H.; Peterson, Matthew S.; Thompson, James C.

    2008-01-01

    The capacity of visual working memory was examined when complex objects from different categories were remembered. Previous studies have not examined how visual similarity affects object memory, though it has long been known that similar-sounding phonological information interferes with rehearsal in auditory working memory. Here, experiments…

  2. Auditory fatigue : influence of mental factors.

    DOT National Transportation Integrated Search

    1965-01-01

Conflicting reports regarding the influence of mental tasks on auditory fatigue have recently appeared in the literature. In the present study, 10 male subjects were exposed to a 4000 cps fatigue tone at 40 dB SL for 3 min under conditions of mental ari...

  3. Audiogram Comparison of Workers from Five Professional Categories

    PubMed Central

    Mathias Duarte, Alexandre Scalli; Guimarães, Alexandre Caixeta; de Carvalho, Guilherme Machado; Pinheiro, Laíza Araújo Mohana; Yen Ng, Ronny Tah; Sampaio, Marcelo Hamilton; da Costa, Everardo Andrade; Gusmão, Reinaldo Jordão

    2015-01-01

Introduction. Noise is a major cause of health disorders in workers and has unique importance in the auditory analysis of people exposed to it. The purpose of this study is to evaluate the arithmetic mean of the auditory thresholds at frequencies of 3, 4, and 6 kHz of workers from five professional categories exposed to occupational noise. Methods. We conducted a retrospective cross-sectional study to analyze 2,140 audiograms from seven companies spanning five sectors of activity: one footwear company, one beverage company, two ceramics companies, two metallurgical companies, and two transport companies. Results. When we compared two categories, we noticed a significant difference only for cargo carriers in comparison to the remaining categories. In all activity sectors, the left ear presented the worst values, except for the footwear professionals (P > 0.05). We observed an association between the noise exposure time and the reduction of audiometric values for both ears. Significant differences existed for cargo carriers in relation to other groups. This evidence may be attributed to different forms of exposure. A slow and progressive deterioration appeared as the exposure time increased. PMID:25705651
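The outcome measure in this record, the arithmetic mean of pure-tone thresholds at 3, 4, and 6 kHz, is straightforward to compute; a minimal sketch, in which the data layout and the example thresholds are hypothetical and not taken from the study:

```python
def mean_threshold(audiogram, freqs=(3000, 4000, 6000)):
    """Arithmetic mean of pure-tone thresholds (dB HL) at the given
    frequencies (Hz). `audiogram` maps frequency in Hz to threshold in dB;
    this layout is an illustrative assumption."""
    return sum(audiogram[f] for f in freqs) / len(freqs)

# Hypothetical left-ear audiogram
left_ear = {3000: 25, 4000: 35, 6000: 30}
print(mean_threshold(left_ear))  # → 30.0
```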

  4. Before the N400: effects of lexical-semantic violations in visual cortex.

    PubMed

    Dikker, Suzanne; Pylkkanen, Liina

    2011-07-01

There exists an increasing body of research demonstrating that language processing is aided by context-based predictions. Recent findings suggest that the brain generates estimates about the likely physical appearance of upcoming words based on syntactic predictions: words that do not physically look like the expected syntactic category show increased amplitudes in the visual M100 component, the first salient MEG response to visual stimulation. This research asks whether violations of predictions based on lexical-semantic information might similarly generate early visual effects. In a picture-noun matching task, we found early visual effects for words that did not accurately describe the preceding pictures. These results demonstrate that, just like syntactic predictions, lexical-semantic predictions can affect early visual processing around 100 ms, suggesting that the M100 response is not exclusively tuned to recognizing visual features relevant to syntactic category analysis. Rather, the brain might generate predictions about upcoming visual input whenever it can. However, visual effects of lexical-semantic violations only occurred when a single lexical item could be predicted. We argue that this may be due to the fact that in natural language processing, there is typically no straightforward mapping between lexical-semantic fields (e.g., flowers) and visual or auditory forms (e.g., tulip, rose, magnolia). For syntactic categories, in contrast, certain form features do reliably correlate with category membership. This difference may, in part, explain why certain syntactic effects typically occur much earlier than lexical-semantic effects. Copyright © 2011 Elsevier Inc. All rights reserved.

  5. Basic auditory processing and sensitivity to prosodic structure in children with specific language impairments: a new look at a perceptual hypothesis

    PubMed Central

    Cumming, Ruth; Wilson, Angela; Goswami, Usha

    2015-01-01

    Children with specific language impairments (SLIs) show impaired perception and production of spoken language, and can also present with motor, auditory, and phonological difficulties. Recent auditory studies have shown impaired sensitivity to amplitude rise time (ART) in children with SLIs, along with non-speech rhythmic timing difficulties. Linguistically, these perceptual impairments should affect sensitivity to speech prosody and syllable stress. Here we used two tasks requiring sensitivity to prosodic structure, the DeeDee task and a stress misperception task, to investigate this hypothesis. We also measured auditory processing of ART, rising pitch and sound duration, in both speech (“ba”) and non-speech (tone) stimuli. Participants were 45 children with SLI aged on average 9 years and 50 age-matched controls. We report data for all the SLI children (N = 45, IQ varying), as well as for two independent SLI subgroupings with intact IQ. One subgroup, “Pure SLI,” had intact phonology and reading (N = 16), the other, “SLI PPR” (N = 15), had impaired phonology and reading. Problems with syllable stress and prosodic structure were found for all the group comparisons. Both sub-groups with intact IQ showed reduced sensitivity to ART in speech stimuli, but the PPR subgroup also showed reduced sensitivity to sound duration in speech stimuli. Individual differences in processing syllable stress were associated with auditory processing. These data support a new hypothesis, the “prosodic phrasing” hypothesis, which proposes that grammatical difficulties in SLI may reflect perceptual difficulties with global prosodic structure related to auditory impairments in processing amplitude rise time and duration. PMID:26217286

  6. A Decline in Response Variability Improves Neural Signal Detection during Auditory Task Performance.

    PubMed

    von Trapp, Gardiner; Buran, Bradley N; Sen, Kamal; Semple, Malcolm N; Sanes, Dan H

    2016-10-26

The detection of a sensory stimulus arises from a significant change in neural activity, but a sensory neuron's response is rarely identical to successive presentations of the same stimulus. Large trial-to-trial variability would limit the central nervous system's ability to reliably detect a stimulus, presumably affecting perceptual performance. However, if response variability were to decrease while firing rate remained constant, then neural sensitivity could improve. Here, we asked whether engagement in an auditory detection task can modulate response variability, thereby increasing neural sensitivity. We recorded telemetrically from the core auditory cortex of gerbils, both while they engaged in an amplitude-modulation detection task and while they sat quietly listening to the identical stimuli. Using a signal detection theory framework, we found that neural sensitivity was improved during task performance, and this improvement was closely associated with a decrease in response variability. Moreover, units with the greatest change in response variability had absolute neural thresholds most closely aligned with simultaneously measured perceptual thresholds. Our findings suggest that the limitations imposed by response variability diminish during task performance, thereby improving the sensitivity of neural encoding and potentially leading to better perceptual sensitivity. The detection of a sensory stimulus arises from a significant change in neural activity. However, trial-to-trial variability of the neural response may limit perceptual performance. If the neural response to a stimulus is quite variable, then the response on a given trial could be confused with the pattern of neural activity generated when the stimulus is absent. Therefore, a neural mechanism that served to reduce response variability would allow for better stimulus detection. By recording from the cortex of freely moving animals engaged in an auditory detection task, we found that variability of the neural response becomes smaller during task performance, thereby improving neural detection thresholds. Copyright © 2016 the authors 0270-6474/16/3611097-10$15.00/0.
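The "signal detection theory framework" mentioned in this record typically reduces detection performance to the sensitivity index d′, the difference of z-transformed hit and false-alarm rates. A minimal sketch; the clipping correction and the trial counts are illustrative assumptions, not details taken from the paper:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate, n_trials):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    Rates of exactly 0 or 1 are clipped by 1/(2N), a common correction,
    so the inverse-normal transform stays finite."""
    lo, hi = 1 / (2 * n_trials), 1 - 1 / (2 * n_trials)
    h = min(max(hit_rate, lo), hi)
    fa = min(max(false_alarm_rate, lo), hi)
    z = NormalDist().inv_cdf
    return z(h) - z(fa)

# Hypothetical example: 90% hits, 20% false alarms over 100 trials each
print(round(d_prime(0.90, 0.20, 100), 2))  # → 2.12
```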

  7. A Decline in Response Variability Improves Neural Signal Detection during Auditory Task Performance

    PubMed Central

    Buran, Bradley N.; Sen, Kamal; Semple, Malcolm N.; Sanes, Dan H.

    2016-01-01

The detection of a sensory stimulus arises from a significant change in neural activity, but a sensory neuron's response is rarely identical to successive presentations of the same stimulus. Large trial-to-trial variability would limit the central nervous system's ability to reliably detect a stimulus, presumably affecting perceptual performance. However, if response variability were to decrease while firing rate remained constant, then neural sensitivity could improve. Here, we asked whether engagement in an auditory detection task can modulate response variability, thereby increasing neural sensitivity. We recorded telemetrically from the core auditory cortex of gerbils, both while they engaged in an amplitude-modulation detection task and while they sat quietly listening to the identical stimuli. Using a signal detection theory framework, we found that neural sensitivity was improved during task performance, and this improvement was closely associated with a decrease in response variability. Moreover, units with the greatest change in response variability had absolute neural thresholds most closely aligned with simultaneously measured perceptual thresholds. Our findings suggest that the limitations imposed by response variability diminish during task performance, thereby improving the sensitivity of neural encoding and potentially leading to better perceptual sensitivity. SIGNIFICANCE STATEMENT The detection of a sensory stimulus arises from a significant change in neural activity. However, trial-to-trial variability of the neural response may limit perceptual performance. If the neural response to a stimulus is quite variable, then the response on a given trial could be confused with the pattern of neural activity generated when the stimulus is absent. Therefore, a neural mechanism that served to reduce response variability would allow for better stimulus detection. By recording from the cortex of freely moving animals engaged in an auditory detection task, we found that variability of the neural response becomes smaller during task performance, thereby improving neural detection thresholds. PMID:27798189

  8. The Role of Task Repetition in Learning Word-Stress Patterns through Auditory Priming Tasks

    ERIC Educational Resources Information Center

    Jung, YeonJoo; Kim, YouJin; Murphy, John

    2017-01-01

    This study focused on an instructional component often neglected when teaching the pronunciation of English as either a second, foreign, or international language--namely, the suprasegmental feature of lexical stress. Extending previous research on collaborative priming tasks and task repetition, the study investigated the impact of task and…

  9. Picture naming in typically developing and language-impaired children: the role of sustained attention.

    PubMed

    Jongman, Suzanne R; Roelofs, Ardi; Scheper, Annette R; Meyer, Antje S

    2017-05-01

Children with specific language impairment (SLI) have problems not only with language performance but also with sustained attention, which is the ability to maintain alertness over an extended period of time. Although there is consensus that this ability is impaired with respect to processing stimuli in the auditory perceptual modality, conflicting evidence exists concerning the visual modality. To address the outstanding issue of whether the impairment in sustained attention is limited to the auditory domain or is domain-general, and to test whether children's sustained attention ability relates to their word-production skills, groups of 7-9 year olds with SLI (N = 28) and typically developing (TD) children (N = 22) performed a picture-naming task and two sustained attention tasks, namely auditory and visual continuous performance tasks (CPTs). Children with SLI performed worse than TD children on picture naming and on both the auditory and visual CPTs. Moreover, performance on both the CPTs correlated with picture-naming latencies across developmental groups. These results provide evidence for a deficit in both auditory and visual sustained attention in children with SLI. Moreover, the study indicates there is a relationship between domain-general sustained attention and picture-naming performance in both TD and language-impaired children. Future studies should establish whether this relationship is causal. If attention influences language, training of sustained attention may improve language production in children from both developmental groups. © 2016 Royal College of Speech and Language Therapists.

  10. Effect of visual distraction and auditory feedback on patient effort during robot-assisted movement training after stroke

    PubMed Central

    2011-01-01

Background Practicing arm and gait movements with robotic assistance after neurologic injury can help patients improve their movement ability, but patients sometimes reduce their effort during training in response to the assistance. Reduced effort has been hypothesized to diminish clinical outcomes of robotic training. To better understand patient slacking, we studied the role of visual distraction and auditory feedback in modulating patient effort during a common robot-assisted tracking task. Methods Fourteen participants with chronic left hemiparesis from stroke, five control participants with chronic right hemiparesis and fourteen non-impaired healthy control participants, tracked a visual target with their arms while receiving adaptive assistance from a robotic arm exoskeleton. We compared four practice conditions: the baseline tracking task alone; tracking while also performing a visual distracter task; tracking with the visual distracter and sound feedback; and tracking with sound feedback. For the distracter task, symbols were randomly displayed in the corners of the computer screen, and the participants were instructed to click a mouse button when a target symbol appeared. The sound feedback consisted of a repeating beep, with the frequency of repetition made to increase with increasing tracking error. Results Participants with stroke halved their effort and doubled their tracking error when performing the visual distracter task with their left hemiparetic arm. With sound feedback, however, these participants increased their effort and decreased their tracking error close to their baseline levels, while also performing the distracter task successfully. These effects were significantly smaller for the participants who used their non-paretic arm and for the participants without stroke. Conclusions Visual distraction decreased participants' effort during a standard robot-assisted movement training task. This effect was greater for the hemiparetic arm, suggesting that the increased demands associated with controlling an affected arm make the motor system more prone to slack when distracted. Providing an alternate sensory channel for feedback, i.e., auditory feedback of tracking error, enabled the participants to simultaneously perform the tracking task and distracter task effectively. Thus, incorporating real-time auditory feedback of performance errors might improve clinical outcomes of robotic therapy systems. PMID:21513561

  11. Audiovisual integration facilitates monkeys' short-term memory.

    PubMed

    Bigelow, James; Poremba, Amy

    2016-07-01

    Many human behaviors are known to benefit from audiovisual integration, including language and communication, recognizing individuals, social decision making, and memory. Exceptionally little is known about the contributions of audiovisual integration to behavior in other primates. The current experiment investigated whether short-term memory in nonhuman primates is facilitated by the audiovisual presentation format. Three macaque monkeys that had previously learned an auditory delayed matching-to-sample (DMS) task were trained to perform a similar visual task, after which they were tested with a concurrent audiovisual DMS task with equal proportions of auditory, visual, and audiovisual trials. Parallel to outcomes in human studies, accuracy was higher and response times were faster on audiovisual trials than either unisensory trial type. Unexpectedly, two subjects exhibited superior unimodal performance on auditory trials, a finding that contrasts with previous studies, but likely reflects their training history. Our results provide the first demonstration of a bimodal memory advantage in nonhuman primates, lending further validation to their use as a model for understanding audiovisual integration and memory processing in humans.

  12. The effects of auditory stimulation on the arithmetic performance of children with ADHD and nondisabled children.

    PubMed

    Abikoff, H; Courtney, M E; Szeibel, P J; Koplewicz, H S

    1996-05-01

    This study evaluated the impact of extra-task stimulation on the academic task performance of children with attention-deficit/hyperactivity disorder (ADHD). Twenty boys with ADHD and 20 nondisabled boys worked on an arithmetic task during high stimulation (music), low stimulation (speech), and no stimulation (silence). The music "distractors" were individualized for each child, and the arithmetic problems were at each child's ability level. A significant Group x Condition interaction was found for number of correct answers. Specifically, the nondisabled youngsters performed similarly under all three auditory conditions. In contrast, the children with ADHD did significantly better under the music condition than speech or silence conditions. However, a significant Group x Order interaction indicated that arithmetic performance was enhanced only for those children with ADHD who received music as the first condition. The facilitative effects of salient auditory stimulation on the arithmetic performance of the children with ADHD provide some support for the underarousal/optimal stimulation theory of ADHD.

  13. Impact of auditory selective attention on verbal short-term memory and vocabulary development.

    PubMed

    Majerus, Steve; Heiligenstein, Lucie; Gautherot, Nathalie; Poncelet, Martine; Van der Linden, Martial

    2009-05-01

    This study investigated the role of auditory selective attention capacities as a possible mediator of the well-established association between verbal short-term memory (STM) and vocabulary development. A total of 47 6- and 7-year-olds were administered verbal immediate serial recall and auditory attention tasks. Both task types probed processing of item and serial order information because recent studies have shown this distinction to be critical when exploring relations between STM and lexical development. Multiple regression and variance partitioning analyses highlighted two variables as determinants of vocabulary development: (a) a serial order processing variable shared by STM order recall and a selective attention task for sequence information and (b) an attentional variable shared by selective attention measures targeting item or sequence information. The current study highlights the need for integrative STM models, accounting for conjoined influences of attentional capacities and serial order processing capacities on STM performance and the establishment of the lexical language network.

  14. Cardiac autonomic responses induced by mental tasks and the influence of musical auditory stimulation.

    PubMed

    Barbosa, Juliana Cristina; Guida, Heraldo L; Fontes, Anne M G; Antonio, Ana M S; de Abreu, Luiz Carlos; Barnabé, Viviani; Marcomini, Renata S; Vanderlei, Luiz Carlos M; da Silva, Meire L; Valenti, Vitor E

    2014-08-01

    We investigated the acute effects of musical auditory stimulation on cardiac autonomic responses to a mental task in 28 healthy men (18-22 years old). In the control protocol (no music), the volunteers remained at seated rest for 10 min and the test was applied for five minutes. After the end of test the subjects remained seated for five more minutes. In the music protocol, the volunteers remained at seated rest for 10 min, then were exposed to music for 10 min; the test was then applied over five minutes, and the subjects remained seated for five more minutes after the test. In the control and music protocols the time domain and frequency domain indices of heart rate variability remained unchanged before, during and after the test. We found that musical auditory stimulation with baroque music did not influence cardiac autonomic responses to the mental task. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Language-Specific Attention Treatment for Aphasia: Description and Preliminary Findings.

    PubMed

    Peach, Richard K; Nathan, Meghana R; Beck, Katherine M

    2017-02-01

The need for a specific, language-based treatment approach to aphasic impairments associated with attentional deficits is well documented. We describe language-specific attention treatment, a specific skill-based approach for aphasia that exploits increasingly complex linguistic tasks that focus attention. The program consists of eight tasks, some with multiple phases, to assess and treat lexical and sentence processing. Validation results demonstrate that these tasks load on six attentional domains: (1) executive attention; (2) attentional switching; (3) visual selective attention/processing speed; (4) sustained attention; (5) auditory-verbal working memory; and (6) auditory processing speed. The program demonstrates excellent inter- and intrarater reliability and adequate test-retest reliability. Two of four people with aphasia exposed to this program demonstrated good language recovery whereas three of the four participants showed improvements in auditory-verbal working memory. The results provide support for this treatment program in patients with aphasia having no greater than a moderate degree of attentional impairment.

  16. A pilot study of working memory and academic achievement in college students with ADHD.

    PubMed

    Gropper, Rachel J; Tannock, Rosemary

    2009-05-01

    To investigate working memory (WM), academic achievement, and their relationship in university students with attention-deficit/hyperactivity disorder (ADHD). Participants were university students with previously confirmed diagnoses of ADHD (n = 16) and normal control (NC) students (n = 30). Participants completed 3 auditory-verbal WM measures, 2 visual-spatial WM measures, and 1 control executive function task. Also, they self-reported grade point averages (GPAs) based on university courses. The ADHD group displayed significant weaknesses on auditory-verbal WM tasks and 1 visual-spatial task. They also showed a nonsignificant trend for lower GPAs. Within the entire sample, there was a significant relationship between GPA and auditory-verbal WM. WM impairments are evident in a subgroup of the ADHD population attending university. WM abilities are linked with, and thus may compromise, academic attainment. Parents and physicians are advised to counsel university-bound students with ADHD to contact the university accessibility services to provide them with academic guidance.

  17. Transcranial alternating current stimulation modulates auditory temporal resolution in elderly people.

    PubMed

    Baltus, Alina; Vosskuhl, Johannes; Boetzel, Cindy; Herrmann, Christoph Siegfried

    2018-05-13

Recent research provides evidence for a functional role of brain oscillations in perception. For example, auditory temporal resolution seems to be linked to the individual gamma frequency of auditory cortex. Individual gamma frequency not only correlates with performance in between-channel gap detection tasks but can be modulated via auditory transcranial alternating current stimulation. Modulation of individual gamma frequency is accompanied by an improvement in gap detection performance. Aging changes electrophysiological frequency components and sensory processing mechanisms. Therefore, we conducted a study to investigate the link between individual gamma frequency and gap detection performance in elderly people using auditory transcranial alternating current stimulation. In a within-subject design, twelve participants were electrically stimulated at two individualized transcranial alternating current stimulation frequencies: 3 Hz above their individual gamma frequency (experimental condition) and 4 Hz below their individual gamma frequency (control condition), while they performed a between-channel gap detection task. As expected, individual gamma frequencies correlated significantly with gap detection performance at baseline, and in the experimental condition transcranial alternating current stimulation modulated gap detection performance. In the control condition, stimulation did not modulate gap detection performance. In addition, in the elderly, the effect of transcranial alternating current stimulation on auditory temporal resolution seems to depend on endogenous frequencies in auditory cortex: elderly adults with slower individual gamma frequencies and lower auditory temporal resolution profit from auditory transcranial alternating current stimulation and show increased gap detection performance during stimulation. Our results strongly suggest individualized transcranial alternating current stimulation protocols for successful modulation of performance. This article is protected by copyright. All rights reserved.
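The individualization described in this record is simple arithmetic on each participant's individual gamma frequency (IGF): stimulate at IGF + 3 Hz in the experimental condition and IGF − 4 Hz in the control condition. A sketch, with the function name and return layout being our own, not from the study:

```python
def tacs_frequencies(igf_hz):
    """Individualized tACS frequencies as described in the study:
    experimental condition = IGF + 3 Hz, control condition = IGF - 4 Hz.
    `igf_hz` is the participant's individual gamma frequency in Hz."""
    return {"experimental": igf_hz + 3.0, "control": igf_hz - 4.0}

# Hypothetical participant with an IGF of 40 Hz
print(tacs_frequencies(40.0))  # → {'experimental': 43.0, 'control': 36.0}
```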

  18. Systematic Review of Nontumor Pediatric Auditory Brainstem Implant Outcomes.

    PubMed

    Noij, Kimberley S; Kozin, Elliott D; Sethi, Rosh; Shah, Parth V; Kaplan, Alyson B; Herrmann, Barbara; Remenschneider, Aaron; Lee, Daniel J

    2015-11-01

    The auditory brainstem implant (ABI) was initially developed for patients with deafness as a result of neurofibromatosis type 2. ABI indications have recently extended to children with congenital deafness who are not cochlear implant candidates. Few multi-institutional outcome data exist. Herein, we aim to provide a systematic review of outcomes following implantation of the ABI in pediatric patients with nontumor diagnosis, with a focus on audiometric outcomes. PubMed, Embase, and Cochrane. A systematic review of literature was performed using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) recommendations. Variables assessed included age at implantation, diagnosis, medical history, cochlear implant history, radiographic findings, ABI device implanted, surgical approach, complications, side effects, and auditory outcomes. The initial search identified 304 articles; 21 met inclusion criteria for a total of 162 children. The majority of these patients had cochlear nerve aplasia (63.6%, 103 of 162). Cerebrospinal fluid leak occurred in up to 8.5% of cases. Audiometric outcomes improved over time. After 5 years, almost 50% of patients reached Categories of Auditory Performance scores >4; however, patients with nonauditory disabilities did not demonstrate a similar increase in scores. ABI surgery is a reasonable option for the habilitation of deaf children who are not cochlear implant candidates. Although improvement in Categories of Auditory Performance scores was seen across studies, pediatric ABI users with nonauditory disabilities have inferior audiometric outcomes. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.

  19. Visual selective attention in amnestic mild cognitive impairment.

    PubMed

    McLaughlin, Paula M; Anderson, Nicole D; Rich, Jill B; Chertkow, Howard; Murtha, Susan J E

    2014-11-01

    Subtle deficits in visual selective attention have been found in amnestic mild cognitive impairment (aMCI). However, few studies have explored performance on visual search paradigms or the Simon task, which are known to be sensitive to disease severity in Alzheimer's patients. Furthermore, there is limited research investigating how deficiencies can be ameliorated with exogenous support (auditory cues). Sixteen individuals with aMCI and 14 control participants completed 3 experimental tasks that varied in demand and cue availability: visual search-alerting, visual search-orienting, and Simon task. Visual selective attention was influenced by aMCI, auditory cues, and task characteristics. Visual search abilities were relatively consistent across groups. The aMCI participants were impaired on the Simon task when working memory was required, but conflict resolution was similar to controls. Spatially informative orienting cues improved response times, whereas spatially neutral alerting cues did not influence performance. Finally, spatially informative auditory cues benefited the aMCI group more than controls in the visual search task, specifically at the largest array size where orienting demands were greatest. These findings suggest that individuals with aMCI have working memory deficits and subtle deficiencies in orienting attention and rely on exogenous information to guide attention. © The Author 2013. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  20. Developmental hearing loss impedes auditory task learning and performance in gerbils.

    PubMed

    von Trapp, Gardiner; Aloni, Ishita; Young, Stephen; Semple, Malcolm N; Sanes, Dan H

    2017-04-01

The consequences of developmental hearing loss have been reported to include both sensory and cognitive deficits. To investigate these issues in a non-human model, auditory learning and asymptotic psychometric performance were compared between normal hearing (NH) adult gerbils and those reared with conductive hearing loss (CHL). At postnatal day 10, before ear canal opening, gerbil pups underwent bilateral malleus removal to induce a permanent CHL. Both CHL and control animals were trained to approach a water spout upon presentation of a target (Go stimuli), and withhold for foils (Nogo stimuli). To assess the rate of task acquisition and asymptotic performance, animals were tested on an amplitude modulation (AM) rate discrimination task. Behavioral performance was calculated using a signal detection theory framework. Animals reared with developmental CHL displayed a slower rate of task acquisition for the AM discrimination task. Slower acquisition was explained by an impaired ability to generalize to newly introduced stimuli, as compared to controls. Measurement of discrimination thresholds across consecutive testing blocks revealed that CHL animals required a greater number of testing sessions to reach asymptotic threshold values, as compared to controls. However, with sufficient training, CHL animals approached control performance. These results indicate that a sensory impediment can delay auditory learning, and increase the risk of poor performance on a temporal task. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Neuronal effects of nicotine during auditory selective attention.

    PubMed

    Smucny, Jason; Olincy, Ann; Eichman, Lindsay S; Tregellas, Jason R

    2015-06-01

    Although the attention-enhancing effects of nicotine have been behaviorally and neurophysiologically well-documented, its localized functional effects during selective attention are poorly understood. In this study, we examined the neuronal effects of nicotine during auditory selective attention in healthy human nonsmokers. We hypothesized that we would observe significant effects of nicotine in attention-associated brain areas, driven by nicotine-induced increases in activity as a function of increasing task demands. A single-blind, prospective, randomized crossover design was used to examine neuronal response associated with a go/no-go task after 7 mg nicotine or placebo patch administration in 20 individuals who underwent functional magnetic resonance imaging at 3T. The task design included two levels of difficulty (ordered vs. random stimuli) and two levels of auditory distraction (silence vs. noise). Significant treatment × difficulty × distraction interaction effects on neuronal response were observed in the hippocampus, ventral parietal cortex, and anterior cingulate. In contrast to our hypothesis, U-shaped and inverted-U-shaped dependencies were observed between the effects of nicotine on response and task demands, depending on the brain area. These results suggest that nicotine may differentially affect neuronal response depending on task conditions. These results have important theoretical implications for understanding how cholinergic tone may influence the neurobiology of selective attention.

  2. Functional Mapping of the Human Auditory Cortex: fMRI Investigation of a Patient with Auditory Agnosia from Trauma to the Inferior Colliculus.

    PubMed

    Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D

    2015-09-01

    To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.

  3. Pure word deafness with auditory object agnosia after bilateral lesion of the superior temporal sulcus.

    PubMed

    Gutschalk, Alexander; Uppenkamp, Stefan; Riedel, Bernhard; Bartsch, Andreas; Brandt, Tobias; Vogt-Schaden, Marlies

    2015-12-01

    Based on results from functional imaging, cortex along the superior temporal sulcus (STS) has been suggested to subserve phoneme and pre-lexical speech perception. For vowel classification, both superior temporal plane (STP) and STS areas have been suggested relevant. Lesion of bilateral STS may conversely be expected to cause pure word deafness and possibly also impaired vowel classification. Here we studied a patient with bilateral STS lesions caused by ischemic strokes and relatively intact medial STPs to characterize the behavioral consequences of STS loss. The patient showed severe deficits in auditory speech perception, whereas his speech production was fluent and communication by written speech was grossly intact. Auditory-evoked fields in the STP were within normal limits on both sides, suggesting that major parts of the auditory cortex were functionally intact. Further studies showed that the patient had normal hearing thresholds and only mild disability in tests for telencephalic hearing disorder. Prominent deficits were discovered in an auditory-object classification task, where the patient performed four standard deviations below the control group. In marked contrast, performance in a vowel-classification task was intact. Auditory evoked fields showed enhanced responses for vowels compared to matched non-vowels within normal limits. Our results are consistent with the notion that cortex along STS is important for auditory speech perception, although it does not appear to be entirely speech specific. Formant analysis and single vowel classification, however, appear to be already implemented in auditory cortex on the STP. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Spoken language skills and educational placement in Finnish children with cochlear implants.

    PubMed

    Lonka, Eila; Hasan, Marja; Komulainen, Erkki

    2011-01-01

    This study reports the demographics, and the auditory and spoken language development as well as educational settings, for a total of 164 Finnish children with cochlear implants. Two questionnaires were employed: the first, concerning day care and educational placement, was filled in by professionals for rehabilitation guidance, and the second, evaluating language development (categories of auditory performance, spoken language skills, and main mode of communication), by speech and language therapists in audiology departments. Nearly half of the children were enrolled in normal kindergartens and 43% of school-aged children in mainstream schools. Categories of auditory performance were observed to improve in relation to age at cochlear implantation (p < 0.001) as well as in relation to proportional hearing age (p < 0.001). The composite scores for language development became more diversified in relation to increasing age at cochlear implantation and proportional hearing age (p < 0.001). Children without additional disorders outperformed those with additional disorders. The results indicate that the most favorable age for cochlear implantation could be earlier than 2 years of age. Compared to other children, spoken language evaluation scores of those with additional disabilities were significantly lower; however, these children showed gradual improvements in their auditory perception and language scores. Copyright © 2011 S. Karger AG, Basel.

  5. Stress improves selective attention towards emotionally neutral left ear stimuli.

    PubMed

    Hoskin, Robert; Hunter, M D; Woodruff, P W R

    2014-09-01

    Research concerning the impact of psychological stress on visual selective attention has produced mixed results. The current paper describes two experiments which utilise a novel auditory oddball paradigm to test the impact of psychological stress on auditory selective attention. Participants had to report the location of emotionally-neutral auditory stimuli, while ignoring task-irrelevant changes in their content. The results of the first experiment, in which speech stimuli were presented, suggested that stress improves the ability to selectively attend to left, but not right ear stimuli. When this experiment was repeated using tonal stimuli the same result was evident, but only for female participants. Females were also found to experience greater levels of distraction in general across the two experiments. These findings support the goal-shielding theory which suggests that stress improves selective attention by reducing the attentional resources available to process task-irrelevant information. The study also demonstrates, for the first time, that this goal-shielding effect extends to auditory perception. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Lateralized effects of orthographical irregularity and auditory memory load on the kinematics of transcription typewriting.

    PubMed

    Bloemsaat, Gijs; Van Galen, Gerard P; Meulenbroek, Ruud G J

    2003-05-01

    This study investigated the combined effects of orthographical irregularity and auditory memory load on the kinematics of finger movements in a transcription-typewriting task. Eight right-handed touch-typists were asked to type 80 strings of ten seven-letter words. In half the trials an irregularly spelt target word elicited a specific key press sequence of either the left or right index finger. In the other trials regularly spelt target words elicited the same key press sequence. An auditory memory load was added in half the trials by asking participants to remember the pitch of a tone during task performance. Orthographical irregularity was expected to slow down performance. Auditory memory load, viewed as a low level stressor, was expected to affect performance only when orthographically irregular words needed to be typed. The hypotheses were confirmed. Additional analysis showed differential effects on the left and right hand, possibly related to verbal-manual interference and hand dominance. The results are discussed in relation to relevant findings of recent neuroimaging studies.

  7. Working memory capacity and visual-verbal cognitive load modulate auditory-sensory gating in the brainstem: toward a unified view of attention.

    PubMed

    Sörqvist, Patrik; Stenfelt, Stefan; Rönnberg, Jerker

    2012-11-01

    Two fundamental research questions have driven attention research in the past: One concerns whether selection of relevant information among competing, irrelevant, information takes place at an early or at a late processing stage; the other concerns whether the capacity of attention is limited by a central, domain-general pool of resources or by independent, modality-specific pools. In this article, we contribute to these debates by showing that the auditory-evoked brainstem response (an early stage of auditory processing) to task-irrelevant sound decreases as a function of central working memory load (manipulated with a visual-verbal version of the n-back task). Furthermore, individual differences in central/domain-general working memory capacity modulated the magnitude of the auditory-evoked brainstem response, but only in the high working memory load condition. The results support a unified view of attention whereby the capacity of a late/central mechanism (working memory) modulates early precortical sensory processing.
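    The n-back manipulation used to vary working memory load here has a simple operational definition: an item is a target when it matches the item presented n positions earlier. A minimal sketch of target definition and response scoring (the letter stimuli and n = 2 are illustrative assumptions, not the study's materials):

```python
# n-back target definition and scoring sketch. An item is a target when
# it matches the item shown n positions earlier in the sequence.
def nback_targets(sequence, n=2):
    """Return one bool per item: True where sequence[i] == sequence[i - n]."""
    return [i >= n and sequence[i] == sequence[i - n]
            for i in range(len(sequence))]

def score(responses, sequence, n=2):
    """Hit and false-alarm counts for a list of yes/no responses."""
    targets = nback_targets(sequence, n)
    hits = sum(r and t for r, t in zip(responses, targets))
    false_alarms = sum(r and not t for r, t in zip(responses, targets))
    return hits, false_alarms
```

    For the sequence A, B, A, B, C with n = 2, the third and fourth items are targets; raising n lengthens the span that must be held in working memory, which is what distinguishes the high- and low-load conditions.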

  8. Allocation of Attentional Resources toward a Secondary Cognitive Task Leads to Compromised Ankle Proprioceptive Performance in Healthy Young Adults

    PubMed Central

    Yasuda, Kazuhiro; Iimura, Naoyuki; Iwata, Hiroyasu

    2014-01-01

    The objective of the present study was to determine whether increased attentional demands influence the assessment of ankle joint proprioceptive ability in young adults. We used a dual-task condition, in which participants performed an ankle ipsilateral position-matching task with and without a secondary serial auditory subtraction task during target angle encoding. Two experiments were performed with two different cohorts: one in which the auditory subtraction task was easy (experiment 1a) and one in which it was difficult (experiment 1b). The results showed that, compared with the single-task condition, participants had higher absolute error under dual-task conditions in experiment 1b. The reduction in position-matching accuracy with an attentionally demanding cognitive task suggests that allocation of attentional resources toward a difficult secondary task can lead to compromised ankle proprioceptive performance. Therefore, these findings indicate that the difficulty level of the cognitive task may be the critical factor that decreased the accuracy of the position-matching task. We conclude that increased attentional demand from a difficult cognitive task does influence the assessment of ankle joint proprioceptive ability in young adults when measured using an ankle ipsilateral position-matching task. PMID:24523966

  9. Towards an understanding of the mechanisms of weak central coherence effects: experiments in visual configural learning and auditory perception.

    PubMed Central

    Plaisted, Kate; Saksida, Lisa; Alcántara, José; Weisblatt, Emma

    2003-01-01

    The weak central coherence hypothesis of Frith is one of the most prominent theories concerning the abnormal performance of individuals with autism on tasks that involve local and global processing. Individuals with autism often outperform matched nonautistic individuals on tasks in which success depends upon processing of local features, and underperform on tasks that require global processing. We review those studies that have been unable to identify the locus of the mechanisms that may be responsible for weak central coherence effects and those that show that local processing is enhanced in autism but not at the expense of global processing. In the light of these studies, we propose that the mechanisms which can give rise to 'weak central coherence' effects may be perceptual. More specifically, we propose that perception operates to enhance the representation of individual perceptual features but that this does not impact adversely on representations that involve integration of features. This proposal was supported in the two experiments we report on configural and feature discrimination learning in high-functioning children with autism. We also examined processes of perception directly, in an auditory filtering task that measured the width of auditory filters in individuals with autism, and found that these filters were abnormally broad. We consider the implications of these findings for perceptual theories of the mechanisms underpinning weak central coherence effects. PMID:12639334

  10. fMRI Mapping of Brain Activity Associated with the Vocal Production of Consonant and Dissonant Intervals.

    PubMed

    González-García, Nadia; Rendón, Pablo L

    2017-05-23

    The neural correlates of consonance and dissonance perception have been widely studied, but not the neural correlates of consonance and dissonance production. The most straightforward manner of musical production is singing, but, from an imaging perspective, it still presents more challenges than listening because it involves motor activity. The accurate singing of musical intervals requires integration between auditory feedback processing and vocal motor control in order to correctly produce each note. This protocol presents a method that permits the monitoring of neural activations associated with the vocal production of consonant and dissonant intervals. Four musical intervals, two consonant and two dissonant, are used as stimuli, both for an auditory discrimination test and a task that involves first listening to and then reproducing given intervals. Participants, all female vocal students at the conservatory level, were studied using functional Magnetic Resonance Imaging (fMRI) during the performance of the singing task, with the listening task serving as a control condition. In this manner, the activity of both the motor and auditory systems was observed, and a measure of vocal accuracy during the singing task was also obtained. Thus, the protocol can also be used to track activations associated with singing different types of intervals or with singing the required notes more accurately. The results indicate that singing dissonant intervals requires greater participation of the neural mechanisms responsible for the integration of external feedback from the auditory and sensorimotor systems than does singing consonant intervals.

  11. Inverted-U Function Relating Cortical Plasticity and Task Difficulty

    PubMed Central

    Engineer, Navzer D.; Engineer, Crystal T.; Reed, Amanda C.; Pandya, Pritesh K.; Jakkamsetti, Vikram; Moucha, Raluca; Kilgard, Michael P.

    2012-01-01

    Many psychological and physiological studies with simple stimuli have suggested that perceptual learning specifically enhances the response of primary sensory cortex to task-relevant stimuli. The aim of this study was to determine whether auditory discrimination training on complex tasks enhances primary auditory cortex responses to a target sequence relative to non-target and novel sequences. We collected responses from more than 2,000 sites in 31 rats trained on one of six discrimination tasks that differed primarily in the similarity of the target and distractor sequences. Unlike training with simple stimuli, long-term training with complex stimuli did not generate target specific enhancement in any of the groups. Instead, cortical receptive field size decreased, latency decreased, and paired pulse depression decreased in rats trained on the tasks of intermediate difficulty while tasks that were too easy or too difficult either did not alter or degraded cortical responses. These results suggest an inverted-U function relating neural plasticity and task difficulty. PMID:22249158

  12. More visual mind wandering occurrence during visual task performance: Modality of the concurrent task affects how the mind wanders.

    PubMed

    Choi, HeeSun; Geden, Michael; Feng, Jing

    2017-01-01

    Mind wandering has been considered a mental process that is either independent from the concurrent task or regulated like a secondary task. These accounts predict that the form of mind wandering (i.e., images or words) should be either unaffected by or different from the modality form (i.e., visual or auditory) of the concurrent task. Findings from this study challenge these accounts. We measured the rate and the form of mind wandering in three task conditions: fixation, visual 2-back, and auditory 2-back. Contrary to the general expectation, we found that mind wandering was more likely in the same form as the task. This result can be interpreted in light of recent findings on overlapping brain activations during internally- and externally-oriented processes. Our result highlights the importance of considering the unique interplay between internal and external mental processes and of measuring mind wandering as a multifaceted rather than a unitary construct.

  13. More visual mind wandering occurrence during visual task performance: Modality of the concurrent task affects how the mind wanders

    PubMed Central

    Choi, HeeSun; Geden, Michael

    2017-01-01

    Mind wandering has been considered a mental process that is either independent from the concurrent task or regulated like a secondary task. These accounts predict that the form of mind wandering (i.e., images or words) should be either unaffected by or different from the modality form (i.e., visual or auditory) of the concurrent task. Findings from this study challenge these accounts. We measured the rate and the form of mind wandering in three task conditions: fixation, visual 2-back, and auditory 2-back. Contrary to the general expectation, we found that mind wandering was more likely in the same form as the task. This result can be interpreted in light of recent findings on overlapping brain activations during internally- and externally-oriented processes. Our result highlights the importance of considering the unique interplay between internal and external mental processes and of measuring mind wandering as a multifaceted rather than a unitary construct. PMID:29240817

  14. Neural effects of cognitive control load on auditory selective attention.

    PubMed

    Sabri, Merav; Humphries, Colin; Verber, Matthew; Liebenthal, Einat; Binder, Jeffrey R; Mangalathu, Jain; Desai, Anjali

    2014-08-01

    Whether and how working memory disrupts or alters auditory selective attention is unclear. We compared simultaneous event-related potentials (ERP) and functional magnetic resonance imaging (fMRI) responses associated with task-irrelevant sounds across high and low working memory load in a dichotic-listening paradigm. Participants performed n-back tasks (1-back, 2-back) in one ear (Attend ear) while ignoring task-irrelevant speech sounds in the other ear (Ignore ear). The effects of working memory load on selective attention were observed at 130-210 ms, with higher load resulting in greater irrelevant syllable-related activation in localizer-defined regions in auditory cortex. The interaction between memory load and presence of irrelevant information revealed stronger activations primarily in frontal and parietal areas due to presence of irrelevant information in the higher memory load. Joint independent component analysis of ERP and fMRI data revealed that the ERP component in the N1 time-range is associated with activity in superior temporal gyrus and medial prefrontal cortex. These results demonstrate a dynamic relationship between working memory load and auditory selective attention, in agreement with the load model of attention and the idea of common neural resources for memory and attention. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Neuronal correlates of visual and auditory alertness in the DMT and ketamine model of psychosis.

    PubMed

    Daumann, J; Wagner, D; Heekeren, K; Neukirch, A; Thiel, C M; Gouzoulis-Mayfrank, E

    2010-10-01

    Deficits in attentional functions belong to the core cognitive symptoms in schizophrenic patients. Alertness is a nonselective attention component that refers to a state of general readiness that improves stimulus processing and response initiation. The main goal of the present study was to investigate cerebral correlates of alertness in the human 5HT(2A) agonist and N-methyl-D-aspartic acid (NMDA) antagonist model of psychosis. Fourteen healthy volunteers participated in a randomized double-blind, cross-over event-related functional magnetic resonance imaging (fMRI) study with dimethyltryptamine (DMT) and S-ketamine. A target detection task with cued and uncued trials in both the visual and the auditory modality was used. Administration of DMT led to decreased blood oxygenation level-dependent response during performance of an alertness task, particularly in extrastriate regions during visual alerting and in temporal regions during auditory alerting. In general, the effects for the visual modality were more pronounced. In contrast, administration of S-ketamine led to increased cortical activation in the left insula and precentral gyrus in the auditory modality. The results of the present study might deliver more insight into potential differences and overlapping pathomechanisms in schizophrenia. These conclusions must remain preliminary and should be explored by further fMRI studies with schizophrenic patients performing modality-specific alertness tasks.

  16. The Staggered Spondaic Word Test. A ten-minute look at the central nervous system through the ears.

    PubMed

    Katz, J; Smith, P S

    1991-01-01

    We have described three major groupings that encompass most auditory processing difficulties. While the problems may be superimposed upon one another in any individual client, each diagnostic sign is closely associated with particular communication and learning disorders. In addition, these behaviors may be related back to the functional anatomy of the regions that are implicated by the SSW test. The auditory-decoding group is deficient in rapid analysis of speech. The vagueness of speech sound knowledge is thought to lead to auditory misunderstanding and confusion. In early life, this may be reflected in the child's articulation. Poor phonic skills that result from this deficit are thought to contribute to their limited reading and spelling abilities. The auditory tolerance-fading memory group is often thought to have severe auditory-processing problems because those in it are highly distracted by background sounds and have poor auditory memories. However, school performance is not far from grade level, and the resulting reading disabilities stem more from limited comprehension than from an inability to sound out the words. Distractibility and poor auditory memory could contribute to the apparent weakness in reading comprehension. Many of the characteristics of the auditory tolerance-fading memory group are similar to those of attention deficit disorder cases. Both groups are associated anatomically with the AC region. The auditory integration cases can be divided into two subgroups. In the first, the subjects exhibit the most severe reading and spelling problems of the three major categories. These individuals closely resemble the classical dyslexics. We presume that this disorder represents a major disruption in auditory-visual integration. The second subgroup has much less severe learning difficulties, which closely follow the pattern of dysfunction of the auditory tolerance-fading memory group. 
The excellent physiological procedures to which we have been exposed during this Windows on the Brain conference provide a glimpse of the exciting possibilities for studying brain function. However, in working with individuals who have cognitive impairments, the new technology should be validated by standard behavioral tests. In turn, the new techniques will provide those who use behavioral measures with new parameters and concepts to broaden our understanding. For the past quarter of a century, the SSW test has been compared with other behavioral, physiological, and anatomical procedures. Based on the information that has been assembled, we have been able to classify auditory processing disorders into three major categories.(ABSTRACT TRUNCATED AT 400 WORDS)

  17. Speech processing in children with functional articulation disorders.

    PubMed

    Gósy, Mária; Horváth, Viktória

    2015-03-01

    This study explored auditory speech processing and comprehension abilities in 5-8-year-old monolingual Hungarian children with functional articulation disorders (FADs) and their typically developing peers. Our main hypothesis was that children with FAD would show co-existing auditory speech processing disorders, with different levels of these skills depending on the nature of the receptive processes. The tasks included (i) sentence and non-word repetitions, (ii) non-word discrimination and (iii) sentence and story comprehension. Results suggest that the auditory speech processing of children with FAD is underdeveloped compared with that of typically developing children, and largely varies across task types. In addition, there are differences between children with FAD and controls in all age groups from 5 to 8 years. Our results have several clinical implications.

  18. Evaluating the Precision of Auditory Sensory Memory as an Index of Intrusion in Tinnitus.

    PubMed

    Barrett, Doug J K; Pilling, Michael

    The purpose of this study was to investigate the potential of measures of auditory short-term memory (ASTM) to provide a clinical measure of intrusion in tinnitus. Response functions for six normal listeners on a delayed pitch discrimination task were contrasted in three conditions designed to manipulate attention in the presence and absence of simulated tinnitus: (1) no-tinnitus, (2) ignore-tinnitus, and (3) attend-tinnitus. Delayed pitch discrimination functions were more variable in the presence of simulated tinnitus when listeners were asked to divide attention between the primary task and the amplitude of the tinnitus tone. Changes in the variability of auditory short-term memory may provide a novel means of quantifying the level of intrusion associated with the tinnitus percept during listening.

  19. [Effect of sound amplification on parent's communicative modalities].

    PubMed

    Couto, Maria Inês Vieira; Lichtig, Ida

    2007-01-01

    Background: auditory rehabilitation in deaf children who use sign language. Aim: to verify the effects of sound amplification on parents' communicative modalities when interacting with their deaf children. Method: participants were twelve deaf children, aged 50 to 80 months, and their hearing parents. Children had severe or profound hearing loss in their better ear and were fitted with hearing aids in both ears. Children communicated preferably through sign language. The cause-effect relation between the children's auditory skills profile (insertion gain, functional gain and The Meaningful Auditory Integration Scale--MAIS) and the communicative modalities (auditory-oral, visuo-spatial, bimodal) used by parents was analyzed. Communicative modalities were compared in two different experimental situations during a structured interaction between parents and children, i.e. when children were not fitted with their hearing aids (Situation 1) and when children were fitted with them (Situation 2). Data were analyzed using descriptive statistics. Results: the profile of the deaf children's auditory skills was lower than 53% (unsatisfactory). Parents predominantly used the bimodal modality to gain children's attention, to transmit and to end tasks. A slight positive effect of sound amplification on the communicative modalities was observed, as parents presented more turn-takings during communication when using the auditory-oral modality in Situation 2. Conclusion: hearing parents tend to use more turn-takings during communication in the auditory-oral modality to gain children's attention, to transmit and to end tasks, since they observe an improvement in the auditory skills of their children.

  20. Evidence for enhanced discrimination of virtual auditory distance among blind listeners using level and direct-to-reverberant cues.

    PubMed

    Kolarik, Andrew J; Cirstea, Silvia; Pardhan, Shahina

    2013-02-01

    Totally blind listeners often demonstrate better than normal capabilities when performing spatial hearing tasks. Accurate representation of three-dimensional auditory space requires the processing of available distance information between the listener and the sound source; however, auditory distance cues vary greatly depending upon the acoustic properties of the environment, and it is not known which distance cues are important to totally blind listeners. Our data show that totally blind listeners display better performance compared to sighted age-matched controls for distance discrimination tasks in anechoic and reverberant virtual rooms simulated using a room-image procedure. Totally blind listeners use two major auditory distance cues to stationary sound sources, level and direct-to-reverberant ratio, more effectively than sighted controls for many of the virtual distances tested. These results show that significant compensation among totally blind listeners for virtual auditory spatial distance leads to benefits across a range of simulated acoustic environments. No significant differences in performance were observed between listeners with partial non-correctable visual losses and sighted controls, suggesting that sensory compensation for virtual distance does not occur for listeners with partial vision loss.

  1. Assessing Top-Down and Bottom-Up Contributions to Auditory Stream Segregation and Integration With Polyphonic Music

    PubMed Central

    Disbergen, Niels R.; Valente, Giancarlo; Formisano, Elia; Zatorre, Robert J.

    2018-01-01

    Polyphonic music listening well exemplifies processes typically involved in daily auditory scene analysis situations, relying on an interactive interplay between bottom-up and top-down processes. Most studies investigating scene analysis have used elementary auditory scenes, however real-world scene analysis is far more complex. In particular, music, contrary to most other natural auditory scenes, can be perceived by either integrating or, under attentive control, segregating sound streams, often carried by different instruments. One of the prominent bottom-up cues contributing to multi-instrument music perception is their timbre difference. In this work, we introduce and validate a novel paradigm designed to investigate, within naturalistic musical auditory scenes, attentive modulation as well as its interaction with bottom-up processes. Two psychophysical experiments are described, employing custom-composed two-voice polyphonic music pieces within a framework implementing a behavioral performance metric to validate listener instructions requiring either integration or segregation of scene elements. In Experiment 1, the listeners' locus of attention was switched between individual instruments or the aggregate (i.e., both instruments together), via a task requiring the detection of temporal modulations (i.e., triplets) incorporated within or across instruments. Subjects responded post-stimulus whether triplets were present in the to-be-attended instrument(s). Experiment 2 introduced the bottom-up manipulation by adding a three-level morphing of instrument timbre distance to the attentional framework. The task was designed to be used within neuroimaging paradigms; Experiment 2 was additionally validated behaviorally in the functional Magnetic Resonance Imaging (fMRI) environment. Experiment 1 subjects (N = 29, non-musicians) completed the task at high levels of accuracy, showing no group differences between any experimental conditions. 
Nineteen listeners also participated in Experiment 2, showing a main effect of instrument timbre distance, even though within attention-condition timbre-distance contrasts did not demonstrate any timbre effect. Correlation of overall scores with morph-distance effects, computed by subtracting the largest from the smallest timbre distance scores, showed an influence of general task difficulty on the timbre distance effect. Comparison of laboratory and fMRI data showed that scanner noise had no adverse effect on task performance. These experimental paradigms enable the study of both bottom-up and top-down contributions to auditory stream segregation and integration within psychophysical and neuroimaging experiments. PMID:29563861

  2. Comparing Auditory Noise Treatment with Stimulant Medication on Cognitive Task Performance in Children with Attention Deficit Hyperactivity Disorder: Results from a Pilot Study.

    PubMed

    Söderlund, Göran B W; Björk, Christer; Gustafsson, Peik

    2016-01-01

Recent research has shown that acoustic white noise (80 dB) can improve task performance in people with attention deficits and/or Attention Deficit Hyperactivity Disorder (ADHD). This is attributed to the phenomenon of stochastic resonance, in which a certain amount of noise can improve performance in a brain that is not working at its optimum. The aim of the present study was to compare the effects of auditory white-noise exposure with those of stimulant medication on cognitive task performance in children with ADHD; a group of typically developed children (TDC) took the same tests as a comparison. Twenty children with ADHD of combined or inattentive subtypes and twenty TDC matched for age and gender performed three different tests (word recall, spanboard and n-back tasks) during exposure to white noise (80 dB) and in a silent condition. The ADHD children were tested with and without central stimulant medication. In the spanboard and word recall tasks, but not in the 2-back task, white noise exposure led to significant improvements for both non-medicated and medicated ADHD children. No significant effects of medication were found on any of the three tasks. This pilot study shows that exposure to white noise resulted in a task improvement larger than that obtained with stimulant medication, opening up the possibility of using auditory noise as an alternative, non-pharmacological treatment of cognitive ADHD symptoms.
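The stochastic-resonance account invoked above can be illustrated with a minimal, hypothetical simulation (the signal, threshold, and noise levels are invented for illustration and are not taken from the study): a subthreshold signal is invisible to a hard-threshold detector on its own, becomes detectable when a moderate amount of noise is added, and is drowned out again when the noise is too strong.

```python
import numpy as np

def detection_score(noise_sd, seed=0):
    """Correlation between a subthreshold signal and the output of a
    hard-threshold detector, as a function of added noise level."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, 5000)
    signal = 0.8 * np.sin(2 * np.pi * 5 * t)  # peak 0.8, below threshold 1.0
    crossings = (signal + rng.normal(0.0, noise_sd, t.size) > 1.0).astype(float)
    if crossings.std() == 0.0:  # no threshold crossings -> no information passed
        return 0.0
    return float(np.corrcoef(signal, crossings)[0, 1])

# No noise: the signal never crosses threshold. Moderate noise: crossings
# cluster around signal peaks. Heavy noise: crossings become nearly random.
scores = {sd: detection_score(sd) for sd in (0.0, 0.5, 5.0)}
```

With these arbitrary settings the detection score is zero without noise, peaks at an intermediate noise level, and falls off again, reproducing the inverted-U relationship between noise and performance that the stochastic-resonance account predicts.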

  3. Multisensory emotion perception in congenitally, early, and late deaf CI users

    PubMed Central

    Nava, Elena; Villwock, Agnes K.; Büchner, Andreas; Lenarz, Thomas; Röder, Brigitte

    2017-01-01

    Emotions are commonly recognized by combining auditory and visual signals (i.e., vocal and facial expressions). Yet it is unknown whether the ability to link emotional signals across modalities depends on early experience with audio-visual stimuli. In the present study, we investigated the role of auditory experience at different stages of development for auditory, visual, and multisensory emotion recognition abilities in three groups of adolescent and adult cochlear implant (CI) users. CI users had a different deafness onset and were compared to three groups of age- and gender-matched hearing control participants. We hypothesized that congenitally deaf (CD) but not early deaf (ED) and late deaf (LD) CI users would show reduced multisensory interactions and a higher visual dominance in emotion perception than their hearing controls. The CD (n = 7), ED (deafness onset: <3 years of age; n = 7), and LD (deafness onset: >3 years; n = 13) CI users and the control participants performed an emotion recognition task with auditory, visual, and audio-visual emotionally congruent and incongruent nonsense speech stimuli. In different blocks, participants judged either the vocal (Voice task) or the facial expressions (Face task). In the Voice task, all three CI groups performed overall less efficiently than their respective controls and experienced higher interference from incongruent facial information. Furthermore, the ED CI users benefitted more than their controls from congruent faces and the CD CI users showed an analogous trend. In the Face task, recognition efficiency of the CI users and controls did not differ. Our results suggest that CI users acquire multisensory interactions to some degree, even after congenital deafness. When judging affective prosody they appear impaired and more strongly biased by concurrent facial information than typically hearing individuals. We speculate that limitations inherent to the CI contribute to these group differences. 
PMID:29023525

  4. Functional anatomic studies of memory retrieval for auditory words and visual pictures.

    PubMed

    Buckner, R L; Raichle, M E; Miezin, F M; Petersen, S E

    1996-10-01

Functional neuroimaging with positron emission tomography was used to study brain areas activated during memory retrieval. Subjects (n = 15) recalled items from a recent study episode (episodic memory) during two paired-associate recall tasks. The tasks differed in that PICTURE RECALL required pictorial retrieval, whereas AUDITORY WORD RECALL required word retrieval. Word REPETITION and REST served as two reference tasks. Comparing recall with repetition revealed the following observations. (1) Right anterior prefrontal activation was observed (similar to that seen in several previous experiments), in addition to bilateral frontal-opercular and anterior cingulate activations. (2) An anterior subdivision of medial frontal cortex [pre-supplementary motor area (SMA)] was activated, which could be dissociated from a more posterior area (SMA proper). (3) Parietal areas were activated, including a posterior medial area near precuneus, which could be dissociated from an anterior parietal area that was deactivated. (4) Multiple medial and lateral cerebellar areas were activated. Comparing recall with rest revealed similar activations, except that right prefrontal activation was minimal and activations related to motor and auditory demands became apparent (e.g., bilateral motor and temporal cortex). Directly comparing picture recall with auditory word recall revealed few notable activations. Taken together, these findings suggest a pathway that is commonly used during the episodic retrieval of picture and word stimuli under these conditions. Many areas in this pathway overlap with areas previously activated by a different set of retrieval tasks using stem-cued recall, demonstrating their generality. Examination of activations within individual subjects in relation to structural magnetic resonance images provided anatomic information about the location of these activations.
Such data, when combined with the dissociations between functional areas, provide an increasingly detailed picture of the brain pathways involved in episodic retrieval tasks.

  5. Multisensory emotion perception in congenitally, early, and late deaf CI users.

    PubMed

    Fengler, Ineke; Nava, Elena; Villwock, Agnes K; Büchner, Andreas; Lenarz, Thomas; Röder, Brigitte

    2017-01-01

    Emotions are commonly recognized by combining auditory and visual signals (i.e., vocal and facial expressions). Yet it is unknown whether the ability to link emotional signals across modalities depends on early experience with audio-visual stimuli. In the present study, we investigated the role of auditory experience at different stages of development for auditory, visual, and multisensory emotion recognition abilities in three groups of adolescent and adult cochlear implant (CI) users. CI users had a different deafness onset and were compared to three groups of age- and gender-matched hearing control participants. We hypothesized that congenitally deaf (CD) but not early deaf (ED) and late deaf (LD) CI users would show reduced multisensory interactions and a higher visual dominance in emotion perception than their hearing controls. The CD (n = 7), ED (deafness onset: <3 years of age; n = 7), and LD (deafness onset: >3 years; n = 13) CI users and the control participants performed an emotion recognition task with auditory, visual, and audio-visual emotionally congruent and incongruent nonsense speech stimuli. In different blocks, participants judged either the vocal (Voice task) or the facial expressions (Face task). In the Voice task, all three CI groups performed overall less efficiently than their respective controls and experienced higher interference from incongruent facial information. Furthermore, the ED CI users benefitted more than their controls from congruent faces and the CD CI users showed an analogous trend. In the Face task, recognition efficiency of the CI users and controls did not differ. Our results suggest that CI users acquire multisensory interactions to some degree, even after congenital deafness. When judging affective prosody they appear impaired and more strongly biased by concurrent facial information than typically hearing individuals. We speculate that limitations inherent to the CI contribute to these group differences.

  6. The musician effect: does it persist under degraded pitch conditions of cochlear implant simulations?

    PubMed Central

    Fuller, Christina D.; Galvin, John J.; Maat, Bert; Free, Rolien H.; Başkent, Deniz

    2014-01-01

Cochlear implants (CIs) are auditory prostheses that restore hearing via electrical stimulation of the auditory nerve. Compared to normal acoustic hearing, sounds transmitted through the CI are spectro-temporally degraded, causing difficulties in challenging listening tasks such as speech intelligibility in noise and perception of music. In normal hearing (NH), musicians have been shown to perform better than non-musicians in auditory processing and perception, especially for challenging listening tasks. This “musician effect” was attributed to better processing of pitch cues, as well as better overall auditory cognitive functioning in musicians. Does the musician effect persist when pitch cues are degraded, as they would be in signals transmitted through a CI? To answer this question, NH musicians and non-musicians were tested while listening to unprocessed signals or to signals processed by an acoustic CI simulation. The tasks, in order of increasing dependence on pitch perception, were: (1) speech intelligibility (words and sentences) in quiet or in noise, (2) vocal emotion identification, and (3) melodic contour identification (MCI). For speech perception, there was no musician effect with the unprocessed stimuli, and a small musician effect only for word identification in one noise condition in the CI simulation. For emotion identification, there was a small musician effect for both stimulus types. For MCI, there was a large musician effect for both. Overall, the musician effect was stronger as the importance of pitch in the listening task increased. This suggests that the musician effect may be rooted more in pitch perception than in a global advantage in cognitive processing (in which case musicians would have performed better in all tasks). The results further suggest that musical training before (and possibly after) implantation might offer some advantage in pitch processing that could partially benefit speech perception, and more strongly emotion and music perception. PMID:25071428

  7. Auditory Scene Analysis: An Attention Perspective

    ERIC Educational Resources Information Center

    Sussman, Elyse S.

    2017-01-01

    Purpose: This review article provides a new perspective on the role of attention in auditory scene analysis. Method: A framework for understanding how attention interacts with stimulus-driven processes to facilitate task goals is presented. Previously reported data obtained through behavioral and electrophysiological measures in adults with normal…

  8. Affective Priming with Auditory Speech Stimuli

    ERIC Educational Resources Information Center

    Degner, Juliane

    2011-01-01

    Four experiments explored the applicability of auditory stimulus presentation in affective priming tasks. In Experiment 1, it was found that standard affective priming effects occur when prime and target words are presented simultaneously via headphones similar to a dichotic listening procedure. In Experiment 2, stimulus onset asynchrony (SOA) was…

  9. Chromatic Perceptual Learning but No Category Effects without Linguistic Input.

    PubMed

    Grandison, Alexandra; Sowden, Paul T; Drivonikou, Vicky G; Notman, Leslie A; Alexander, Iona; Davies, Ian R L

    2016-01-01

Perceptual learning involves an improvement in perceptual judgment with practice, which is often specific to stimulus or task factors. Perceptual learning has been shown on a range of visual tasks, but very little research has explored chromatic perceptual learning. Here, we use two low-level perceptual threshold tasks and a supra-threshold target detection task to assess chromatic perceptual learning and category effects. Experiment 1 investigates whether chromatic thresholds reduce as a result of training and at what level of analysis learning effects occur. Experiment 2 explores the effect of category training on chromatic thresholds, whether training of this nature is category specific and whether it can induce categorical responding. Experiment 3 investigates the effect of category training on a higher-level, lateralized target detection task, previously found to be sensitive to category effects. The findings indicate that performance on a perceptual threshold task improves following training, but improvements do not transfer across retinal location or hue. Therefore, chromatic perceptual learning is stimulus specific and can occur at relatively early stages of visual analysis. Additionally, category training does not induce category effects on a low-level perceptual threshold task, as indicated by comparable discrimination thresholds at the newly learned hue boundary and adjacent test points. However, category training does induce emerging category effects on a supra-threshold target detection task. Whilst chromatic perceptual learning is possible, learnt category effects appear to be a product of left hemisphere processing, and may require the input of higher-level linguistic coding processes in order to manifest.

  10. Cross-modal activation of auditory regions during visuo-spatial working memory in early deafness.

    PubMed

    Ding, Hao; Qin, Wen; Liang, Meng; Ming, Dong; Wan, Baikun; Li, Qiang; Yu, Chunshui

    2015-09-01

Early deafness can reshape deprived auditory regions to enable the processing of signals from the remaining intact sensory modalities. Cross-modal activation has been observed in auditory regions during non-auditory tasks in early deaf subjects. In hearing subjects, visual working memory can evoke activation of the visual cortex, which further contributes to behavioural performance. In early deaf subjects, however, whether and how auditory regions participate in visual working memory remains unclear. We hypothesized that auditory regions may be involved in visual working memory processing and that activation of auditory regions may contribute to the superior behavioural performance of early deaf subjects. In this study, 41 early deaf subjects (22 females and 19 males, age range: 20-26 years, age of onset of deafness < 2 years) and 40 age- and gender-matched hearing controls underwent functional magnetic resonance imaging during a visuo-spatial delayed recognition task that consisted of encoding, maintenance and recognition stages. The early deaf subjects exhibited faster reaction times on the spatial working memory task than did the hearing controls. Compared with hearing controls, deaf subjects exhibited increased activation in the superior temporal gyrus bilaterally during the recognition stage. This increased activation amplitude predicted faster and more accurate working memory performance in deaf subjects. Deaf subjects also had increased activation in the superior temporal gyrus bilaterally during the maintenance stage and in the right superior temporal gyrus during the encoding stage. These increased activation amplitudes also predicted faster reaction times on the spatial working memory task in deaf subjects. These findings suggest that cross-modal plasticity occurs in auditory association areas in early deaf subjects and that these areas are involved in visuo-spatial working memory.
Furthermore, amplitudes of cross-modal activation during the maintenance stage were positively correlated with the age of onset of hearing aid use and were negatively correlated with the percentage of lifetime hearing aid use in deaf subjects. These findings suggest that earlier and longer hearing aid use may inhibit cross-modal reorganization in early deaf subjects. Granger causality analysis revealed that, compared to the hearing controls, the deaf subjects had an enhanced net causal flow from the frontal eye field to the superior temporal gyrus. These findings indicate that a top-down mechanism may better account for the cross-modal activation of auditory regions in early deaf subjects. See MacSweeney and Cardin (doi:10.1093/brain/awv197) for a scientific commentary on this article. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. Effects of practice on interference from an auditory task while driving : a simulation study

    DOT National Transportation Integrated Search

    2004-12-01

    Experimental research on the effects of cellular phone conversations on driving indicates that the phone task interferes with many driving-related functions, especially with older drivers. Limitations of past research have been that (1) the dual task...

  12. Auditory spatial processing in the human cortex.

    PubMed

    Salminen, Nelli H; Tiitinen, Hannu; May, Patrick J C

    2012-12-01

    The auditory system codes spatial locations in a way that deviates from the spatial representations found in other modalities. This difference is especially striking in the cortex, where neurons form topographical maps of visual and tactile space but where auditory space is represented through a population rate code. In this hemifield code, sound source location is represented in the activity of two widely tuned opponent populations, one tuned to the right and the other to the left side of auditory space. Scientists are only beginning to uncover how this coding strategy adapts to various spatial processing demands. This review presents the current understanding of auditory spatial processing in the cortex. To this end, the authors consider how various implementations of the hemifield code may exist within the auditory cortex and how these may be modulated by the stimulation and task context. As a result, a coherent set of neural strategies for auditory spatial processing emerges.
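The hemifield code described above can be sketched as an idealized, noise-free toy model (the logistic tuning curves and the 20° slope constant are assumptions chosen for illustration, not a model from the review): two broadly tuned opponent populations, one preferring each side of space, jointly encode azimuth, and location can be read out from the ratio of their rates rather than from a topographic map.

```python
import math

SLOPE = 20.0  # assumed width (degrees) of the sigmoidal hemifield tuning

def hemifield_rates(azimuth):
    """Normalized firing rates of two opponent populations: one broadly
    tuned to the left hemifield, one to the right (negative azimuth = left)."""
    right = 1.0 / (1.0 + math.exp(-azimuth / SLOPE))
    left = 1.0 / (1.0 + math.exp(azimuth / SLOPE))
    return left, right

def decode_azimuth(left, right):
    """Recover the source azimuth from the two population rates alone."""
    return SLOPE * math.log(right / left)

# A single source activates both populations; the rate ratio, not the
# position of a peak in a map, carries the spatial information.
for az in (-60.0, 0.0, 45.0):
    left, right = hemifield_rates(az)
    assert abs(decode_azimuth(left, right) - az) < 1e-9
```

In this sketch a midline source (azimuth 0°) drives both populations equally, and lateral sources are signaled by the imbalance between the two rates, which is the defining property of an opponent population rate code.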

  13. Beyond the audiogram: application of models of auditory fitness for duty to assess communication in the real world.

    PubMed

    Dubno, Judy R

    2018-05-01

    This manuscript provides a Commentary on a paper published in the current issue of the International Journal of Audiology and the companion paper published in Ear and Hearing by Soli et al. These papers report background, rationale and results of a novel modelling approach to assess "auditory fitness for duty," or an individual's ability to perform hearing-critical tasks related to their job, based on their likelihood of effective speech communication in the listening environment in which the task is performed.

  14. A sound advantage: Increased auditory capacity in autism.

    PubMed

    Remington, Anna; Fairnie, Jake

    2017-09-01

    Autism Spectrum Disorder (ASD) has an intriguing auditory processing profile. Individuals show enhanced pitch discrimination, yet often find seemingly innocuous sounds distressing. This study used two behavioural experiments to examine whether an increased capacity for processing sounds in ASD could underlie both the difficulties and enhanced abilities found in the auditory domain. Autistic and non-autistic young adults performed a set of auditory detection and identification tasks designed to tax processing capacity and establish the extent of perceptual capacity in each population. Tasks were constructed to highlight both the benefits and disadvantages of increased capacity. Autistic people were better at detecting additional unexpected and expected sounds (increased distraction and superior performance respectively). This suggests that they have increased auditory perceptual capacity relative to non-autistic people. This increased capacity may offer an explanation for the auditory superiorities seen in autism (e.g. heightened pitch detection). Somewhat counter-intuitively, this same 'skill' could result in the sensory overload that is often reported - which subsequently can interfere with social communication. Reframing autistic perceptual processing in terms of increased capacity, rather than a filtering deficit or inability to maintain focus, increases our understanding of this complex condition, and has important practical implications that could be used to develop intervention programs to minimise the distress that is often seen in response to sensory stimuli. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  15. Understanding the neurophysiological basis of auditory abilities for social communication: a perspective on the value of ethological paradigms.

    PubMed

    Bennur, Sharath; Tsunada, Joji; Cohen, Yale E; Liu, Robert C

    2013-11-01

    Acoustic communication between animals requires them to detect, discriminate, and categorize conspecific or heterospecific vocalizations in their natural environment. Laboratory studies of the auditory-processing abilities that facilitate these tasks have typically employed a broad range of acoustic stimuli, ranging from natural sounds like vocalizations to "artificial" sounds like pure tones and noise bursts. However, even when using vocalizations, laboratory studies often test abilities like categorization in relatively artificial contexts. Consequently, it is not clear whether neural and behavioral correlates of these tasks (1) reflect extensive operant training, which drives plastic changes in auditory pathways, or (2) the innate capacity of the animal and its auditory system. Here, we review a number of recent studies, which suggest that adopting more ethological paradigms utilizing natural communication contexts are scientifically important for elucidating how the auditory system normally processes and learns communication sounds. Additionally, since learning the meaning of communication sounds generally involves social interactions that engage neuromodulatory systems differently than laboratory-based conditioning paradigms, we argue that scientists need to pursue more ethological approaches to more fully inform our understanding of how the auditory system is engaged during acoustic communication. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Effects of Auditory Distraction on Cognitive Processing of Young Adults

    ERIC Educational Resources Information Center

    LaPointe, Leonard L.; Heald, Gary R.; Stierwalt, Julie A. G.; Kemker, Brett E.; Maurice, Trisha

    2007-01-01

    Objective: The effects of interference, competition, and distraction on cognitive processing are unclearly understood, particularly regarding type and intensity of auditory distraction across a variety of cognitive processing tasks. Method: The purpose of this investigation was to report two experiments that sought to explore the effects of types…

  17. Attenuated Auditory Event-Related Potentials and Associations with Atypical Sensory Response Patterns in Children with Autism

    ERIC Educational Resources Information Center

    Donkers, Franc C. L.; Schipul, Sarah E.; Baranek, Grace T.; Cleary, Katherine M.; Willoughby, Michael T.; Evans, Anna M.; Bulluck, John C.; Lovmo, Jeanne E.; Belger, Aysenil

    2015-01-01

    Neurobiological underpinnings of unusual sensory features in individuals with autism are unknown. Event-related potentials elicited by task-irrelevant sounds were used to elucidate neural correlates of auditory processing and associations with three common sensory response patterns (hyperresponsiveness; hyporesponsiveness; sensory seeking).…

  18. Auditory Word Serial Recall Benefits from Orthographic Dissimilarity

    ERIC Educational Resources Information Center

    Pattamadilok, Chotiga; Lafontaine, Helene; Morais, Jose; Kolinsky, Regine

    2010-01-01

    The influence of orthographic knowledge has been consistently observed in speech recognition and metaphonological tasks. The present study provides data suggesting that such influence also pervades other cognitive domains related to language abilities, such as verbal working memory. Using serial recall of auditory seven-word lists, we observed…

  19. Trial-to-Trial Carryover in Auditory Short-Term Memory

    ERIC Educational Resources Information Center

    Visscher, Kristina M.; Kahana, Michael J.; Sekuler, Robert

    2009-01-01

    Using a short-term recognition memory task, the authors evaluated the carryover across trials of 2 types of auditory information: the characteristics of individual study sounds (item information) and the relationships between the study sounds (study set homogeneity). On each trial, subjects heard 2 successive broadband study sounds and then…

  20. Effect of Auditory Motion Velocity on Reaction Time and Cortical Processes

    ERIC Educational Resources Information Center

    Getzmann, Stephan

    2009-01-01

    The study investigated the processing of sound motion, employing a psychophysical motion discrimination task in combination with electroencephalography. Following stationary auditory stimulation from a central space position, the onset of left- and rightward motion elicited a specific cortical response that was lateralized to the hemisphere…

  1. Developmental Trends in Recall of Central and Incidental Auditory

    ERIC Educational Resources Information Center

    Hallahan, Daniel P.; And Others

    1974-01-01

An auditory recall task, involving central and incidental stimuli designed to correspond to processes used in selective attention, was presented to elementary school students. Older children and girls performed better than younger children and boys, especially when animals were the relevant and food the irrelevant stimuli. (DP)

  2. Reading Spoken Words: Orthographic Effects in Auditory Priming

    ERIC Educational Resources Information Center

    Chereau, Celine; Gaskell, M. Gareth; Dumay, Nicolas

    2007-01-01

    Three experiments examined the involvement of orthography in spoken word processing using a task--unimodal auditory priming with offset overlap--taken to reflect activation of prelexical representations. Two types of prime-target relationship were compared; both involved phonological overlap, but only one had a strong orthographic overlap (e.g.,…

  3. Information-Processing Modules and Their Relative Modality Specificity

    ERIC Educational Resources Information Center

    Anderson, John R.; Qin, Yulin; Jung, Kwan-Jin; Carter, Cameron S.

    2007-01-01

    This research uses fMRI to understand the role of eight cortical regions in a relatively complex information-processing task. Modality of input (visual versus auditory) and modality of output (manual versus vocal) are manipulated. Two perceptual regions (auditory cortex and fusiform gyrus) only reflected perceptual encoding. Two motor regions were…

  4. A MEG Investigation of Single-Word Auditory Comprehension in Aphasia

    ERIC Educational Resources Information Center

    Zipse, Lauryn; Kearns, Kevin; Nicholas, Marjorie; Marantz, Alec

    2011-01-01

    Purpose: To explore whether individuals with aphasia exhibit differences in the M350, an electrophysiological marker of lexical activation, compared with healthy controls. Method: Seven people with aphasia, 9 age-matched controls, and 10 younger controls completed an auditory lexical decision task while cortical activity was recorded with…

  5. Neurophysiological and Behavioral Responses of Mandarin Lexical Tone Processing

    PubMed Central

    Yu, Yan H.; Shafer, Valerie L.; Sussman, Elyse S.

    2017-01-01

Language experience enhances discrimination of speech contrasts at a behavioral-perceptual level, as well as at a pre-attentive level, as indexed by event-related potential (ERP) mismatch negativity (MMN) responses. The enhanced sensitivity could be the result of changes in acoustic resolution and/or long-term memory representations of the relevant information in the auditory cortex. To examine these possibilities, we used a short (ca. 600 ms) vs. long (ca. 2,600 ms) interstimulus interval (ISI) in a passive, oddball discrimination task while obtaining ERPs. These ISI differences were used to test whether cross-linguistic differences in processing Mandarin lexical tone are a function of differences in acoustic resolution and/or differences in long-term memory representations. Bisyllabic nonword tokens that differed in lexical tone categories were presented using a passive listening multiple oddball paradigm. Behavioral discrimination and identification data were also collected. The ERP results revealed robust MMNs to both easy and difficult lexical tone differences for both groups at short ISIs. At long ISIs, there was either no change or an enhanced MMN amplitude for the Mandarin group, but reduced MMN amplitude for the English group. In addition, the Mandarin listeners showed a larger late negativity (LN) discriminative response than the English listeners for lexical tone contrasts in the long ISI condition. Mandarin speakers outperformed English speakers in the behavioral tasks, especially under the long ISI conditions with the more similar lexical tone pair. These results suggest that the acoustic correlates of lexical tone are fairly robust and easily discriminated at short ISIs, when the auditory sensory memory trace is strong. At longer ISIs, beyond 2.5 s, language-specific experience is necessary for robust discrimination. PMID:28321179

  6. Increased experience amplifies the activation of task-irrelevant category representations.

    PubMed

    Wu, Rachel; Pruitt, Zoe; Zinszer, Benjamin D; Cheung, Olivia S

    2017-02-01

Prior research has demonstrated the benefits (i.e., task-relevant attentional selection) and costs (i.e., task-irrelevant attentional capture) of prior knowledge on search for an individual target or multiple targets from a category. This study investigated whether the level of experience with particular categories predicts the degree of task-relevant and task-irrelevant activation of item and category representations. Adults with varying levels of dieting experience (measured via the three subscales of Disinhibition, Restraint, and Hunger; Stunkard & Messick, Journal of Psychosomatic Research, 29(1), 71-83, 1985) searched for targets defined as either a specific food item (e.g., carrots) or a category (i.e., any healthy or unhealthy food item). In addition to the target-present trials, there were target-absent "foil" trials in which, during search for a specific item (e.g., carrots), irrelevant items from the target's category (e.g., squash) were presented. The ERP (N2pc) results revealed that the activation of task-relevant representations (measured via Exemplar and Category N2pc amplitudes) did not differ based on the degree of experience. Critically, however, increased dieting experience, as revealed by lower Disinhibition scores, predicted activation of task-irrelevant representations (i.e., attentional capture by foils from the target item category). Our results suggest that increased experience with particular categories encourages the rapid activation of category representations even when category information is task irrelevant, and that the N2pc in foil trials could potentially serve as an indication of experience level in future studies on categorization.

  7. It's about time: revisiting temporal processing deficits in dyslexia.

    PubMed

    Casini, Laurence; Pech-Georgel, Catherine; Ziegler, Johannes C

    2018-03-01

    Temporal processing in French children with dyslexia was evaluated in three tasks: a word identification task requiring implicit temporal processing, and two explicit temporal bisection tasks, one in the auditory and one in the visual modality. Normally developing children matched on chronological age and reading level served as a control group. Children with dyslexia exhibited robust deficits in temporal tasks whether they were explicit or implicit and whether they involved the auditory or the visual modality. They exhibited greater perceptual variability when performing temporal tasks, whereas they showed no such difficulties when performing the same task on a non-temporal dimension (intensity). This dissociation suggests that their difficulties were specific to temporal processing and could not be attributed to lapses of attention, reduced alertness, faulty anchoring, or overall noisy processing. In the framework of cognitive models of time perception, these data point to a dysfunction of the 'internal clock' of dyslexic children. These results are broadly compatible with the recent temporal sampling theory of dyslexia. © 2017 John Wiley & Sons Ltd.

  8. Visual and auditory socio-cognitive perception in unilateral temporal lobe epilepsy in children and adolescents: a prospective controlled study.

    PubMed

    Laurent, Agathe; Arzimanoglou, Alexis; Panagiotakaki, Eleni; Sfaello, Ignacio; Kahane, Philippe; Ryvlin, Philippe; Hirsch, Edouard; de Schonen, Scania

    2014-12-01

    A high rate of abnormal social behavioural traits or perceptual deficits is observed in children with unilateral temporal lobe epilepsy. In the present study, perception of auditory and visual social signals, carried by faces and voices, was evaluated in children and adolescents with temporal lobe epilepsy. We prospectively investigated a sample of 62 children with focal non-idiopathic epilepsy early in the course of the disorder. The present analysis included 39 children with a confirmed diagnosis of temporal lobe epilepsy. Seventy-two control participants, distributed across 10 age groups, served as the comparison group. Our socio-perceptual evaluation protocol comprised three socio-visual tasks (face identity, facial emotion and gaze direction recognition), two socio-auditory tasks (voice identity and emotional prosody recognition), and three control tasks (lip reading, geometrical pattern and linguistic intonation recognition). All 39 patients also underwent a neuropsychological examination. As a group, children with temporal lobe epilepsy performed at a significantly lower level than the control group with regard to recognition of facial identity, direction of eye gaze, and emotional facial expressions. We found no relationship between the type of visual deficit and age at first seizure, duration of epilepsy, or the epilepsy-affected cerebral hemisphere. Deficits in socio-perceptual tasks could be found independently of the presence of deficits in visual or auditory episodic memory, visual non-facial pattern processing (control tasks), or speech perception. A normal FSIQ did not exempt some of the patients from an underlying deficit in some of the socio-perceptual tasks. Temporal lobe epilepsy not only impairs development of emotion recognition, but can also impair development of perception of other socio-perceptual signals in children with or without intellectual deficiency. Prospective studies are needed to evaluate the results of appropriate re-education programs in children presenting with deficits in social cue processing.

  9. Chromatic Perceptual Learning but No Category Effects without Linguistic Input

    PubMed Central

    Grandison, Alexandra; Sowden, Paul T.; Drivonikou, Vicky G.; Notman, Leslie A.; Alexander, Iona; Davies, Ian R. L.

    2016-01-01

    Perceptual learning involves an improvement in perceptual judgment with practice, which is often specific to stimulus or task factors. Perceptual learning has been shown on a range of visual tasks but very little research has explored chromatic perceptual learning. Here, we use two low-level perceptual threshold tasks and a supra-threshold target detection task to assess chromatic perceptual learning and category effects. Experiment 1 investigates whether chromatic thresholds reduce as a result of training and at what level of analysis learning effects occur. Experiment 2 explores the effect of category training on chromatic thresholds, whether training of this nature is category specific and whether it can induce categorical responding. Experiment 3 investigates the effect of category training on a higher level, lateralized target detection task, previously found to be sensitive to category effects. The findings indicate that performance on a perceptual threshold task improves following training but improvements do not transfer across retinal location or hue. Therefore, chromatic perceptual learning is stimulus specific and can occur at relatively early stages of visual analysis. Additionally, category training does not induce category effects on a low-level perceptual threshold task, as indicated by comparable discrimination thresholds at the newly learned hue boundary and adjacent test points. However, category training does induce emerging category effects on a supra-threshold target detection task. Whilst chromatic perceptual learning is possible, learnt category effects appear to be a product of left hemisphere processing, and may require the input of higher level linguistic coding processes in order to manifest. PMID:27252669

  10. Auditory working memory impairments in individuals at familial high risk for schizophrenia.

    PubMed

    Seidman, Larry J; Meyer, Eric C; Giuliano, Anthony J; Breiter, Hans C; Goldstein, Jill M; Kremen, William S; Thermenos, Heidi W; Toomey, Rosemary; Stone, William S; Tsuang, Ming T; Faraone, Stephen V

    2012-05-01

    The search for predictors of schizophrenia has accelerated with a growing focus on early intervention and prevention of psychotic illness. Studying nonpsychotic relatives of individuals with schizophrenia enables identification of markers of vulnerability for the illness independent of confounds associated with psychosis. The goal of these studies was to develop new auditory continuous performance tests (ACPTs) and evaluate performance on them in individuals with schizophrenia and their relatives. We carried out two studies of auditory vigilance with tasks involving working memory (WM) and interference control with increasing levels of cognitive load to discern the information-processing vulnerabilities in a sample of schizophrenia patients, and two samples of nonpsychotic relatives of individuals with schizophrenia and controls. Study 1 assessed adults (mean age = 41), and Study 2 assessed teenagers and young adults age 13-25 (M = 19). Patients with schizophrenia were impaired on all five versions of the ACPTs, whereas relatives were impaired only on WM tasks, particularly the two interference tasks that maximize cognitive load. Across all groups, the interference tasks were more difficult to perform than the other tasks. Schizophrenia patients performed worse than relatives, who performed worse than controls. For patients, the effect sizes were large (Cohen's d = 1.5), whereas for relatives they were moderate (d = ~0.40-0.50). There was no age by group interaction in the relatives-control comparison except for participants <31 years of age. Novel WM tasks that manipulate cognitive load and interference control index an important component of the vulnerability to schizophrenia.

  11. Distraction and task engagement: How interesting and boring information impact driving performance and subjective and physiological responses.

    PubMed

    Horrey, William J; Lesch, Mary F; Garabet, Angela; Simmons, Lucinda; Maikala, Rammohan

    2017-01-01

    As more devices and services are integrated into vehicles, drivers face new opportunities to perform additional tasks while driving. While many studies have explored the detrimental effects of varying task demands on driving performance, there has been little attention devoted to tasks that vary in terms of personal interest or investment, a quality we liken to the concept of task engagement. The purpose of this study was to explore the impact of task engagement on driving performance, subjective appraisals of performance and workload, and various physiological measurements. In this study, 31 participants (M = 37 yrs) completed three driving conditions in a driving simulator: listening to boring auditory material; listening to interesting material; and driving with no auditory material. Drivers were simultaneously monitored using near-infrared spectroscopy, heart monitoring and eye tracking systems. Drivers exhibited less variability in lane keeping and headway maintenance for both auditory conditions; however, response times to critical braking events were longer in the interesting audio condition. Drivers also perceived the interesting material to be less demanding and less complex, although the material was objectively matched for difficulty. Drivers showed a reduced concentration of cerebral oxygenated hemoglobin when listening to interesting material, compared to baseline and boring conditions, yet they exhibited superior recognition for this material. The practical implications, from a safety standpoint, are discussed. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  12. Psychopathic traits associated with abnormal hemodynamic activity in salience and default mode networks during auditory oddball task.

    PubMed

    Anderson, Nathaniel E; Maurer, J Michael; Steele, Vaughn R; Kiehl, Kent A

    2018-06-01

    Psychopathy is a personality disorder accompanied by abnormalities in emotional processing and attention. Recent theoretical applications of network-based models of cognition have been used to explain the diverse range of abnormalities apparent in psychopathy. Still, the physiological basis for these abnormalities is not well understood. A significant body of work has examined psychopathy-related abnormalities in simple attention-based tasks, but these studies have largely been performed using electrocortical measures, such as event-related potentials (ERPs), and they often have been carried out among individuals with low levels of psychopathic traits. In this study, we examined neural activity using functional magnetic resonance imaging (fMRI) during a simple auditory target detection (oddball) task among 168 incarcerated adult males, with psychopathic traits assessed via the Hare Psychopathy Checklist-Revised (PCL-R). Event-related contrasts demonstrated that the largest psychopathy-related effects were apparent between the frequent standard stimulus condition and a task-off, implicit baseline. Negative correlations with interpersonal-affective dimensions (Factor 1) of the PCL-R were apparent in regions comprising default mode and salience networks. These findings support models of psychopathy describing impaired integration across functional networks. They additionally corroborate reports which have implicated failures of efficient transition between default mode and task-positive networks. Finally, they demonstrate a neurophysiological basis for abnormal mobilization of attention and reduced engagement with stimuli that have little motivational significance among those with high psychopathic traits.

  13. Auditory psychophysics and perception.

    PubMed

    Hirsh, I J; Watson, C S

    1996-01-01

    In this review of auditory psychophysics and perception, we cite some important books, research monographs, and research summaries from the past decade. Within auditory psychophysics, we have singled out some topics of current importance: Cross-Spectral Processing, Timbre and Pitch, and Methodological Developments. Complex sounds and complex listening tasks have been the subject of new studies in auditory perception. We review especially work that concerns auditory pattern perception, with emphasis on temporal aspects of the patterns and on patterns that do not depend on the cognitive structures often involved in the perception of speech and music. Finally, we comment on some individual differences that are sufficiently important to call into question the goal of characterizing the auditory properties of the typical, average adult listener. Among the important factors that give rise to these individual differences are those involved in selective processing and attention.

  14. Event-related potentials and secondary task performance during simulated driving.

    PubMed

    Wester, A E; Böcker, K B E; Volkerts, E R; Verster, J C; Kenemans, J L

    2008-01-01

    Inattention and distraction account for a substantial number of traffic accidents. Therefore, we examined the impact of secondary task performance (an auditory oddball task) on a primary driving task (lane keeping). Twenty healthy participants performed two 20-min tests in the Divided Attention Steering Simulator (DASS). The visual secondary task of the DASS was replaced by an auditory oddball task to allow recording of brain activity. The driving task and the secondary (distracting) oddball task were presented in isolation and simultaneously, to assess their mutual interference. In addition to performance measures (lane keeping in the primary driving task and reaction speed in the secondary oddball task), brain activity, i.e. event-related potentials (ERPs), was recorded. Performance parameters on the driving test and the secondary oddball task did not differ between isolated and simultaneous performance. However, when both tasks were performed simultaneously, reaction time variability increased in the secondary oddball task. Analysis of brain activity indicated that the ERP amplitude (P3a amplitude) related to the secondary task was significantly reduced when the task was performed simultaneously with the driving test. This study shows that when a simple secondary task is performed during driving, mean performance on both the driving task and the secondary task is largely unaffected. However, analysis of brain activity shows reduced cortical processing of irrelevant, potentially distracting stimuli from the secondary task during driving.

  15. Blocking c-Fos Expression Reveals the Role of Auditory Cortex Plasticity in Sound Frequency Discrimination Learning.

    PubMed

    de Hoz, Livia; Gierej, Dorota; Lioudyno, Victoria; Jaworski, Jacek; Blazejczyk, Magda; Cruces-Solís, Hugo; Beroun, Anna; Lebitko, Tomasz; Nikolaev, Tomasz; Knapska, Ewelina; Nelken, Israel; Kaczmarek, Leszek

    2018-05-01

    The behavioral changes that comprise operant learning are associated with plasticity in early sensory cortices as well as with modulation of gene expression, but the connection between the behavioral, electrophysiological, and molecular changes is only partially understood. We specifically manipulated c-Fos expression, a hallmark of learning-induced synaptic plasticity, in auditory cortex of adult mice using a novel approach based on RNA interference. Locally blocking c-Fos expression caused a specific behavioral deficit in a sound discrimination task, in parallel with decreased cortical experience-dependent plasticity, without affecting baseline excitability or basic auditory processing. Thus, c-Fos-dependent experience-dependent cortical plasticity is necessary for frequency discrimination in an operant behavioral task. Our results connect behavioral, molecular and physiological changes and demonstrate a role of c-Fos in experience-dependent plasticity and learning.

  16. Steady-state signatures of visual perceptual load, multimodal distractor filtering, and neural competition.

    PubMed

    Parks, Nathan A; Hilimire, Matthew R; Corballis, Paul M

    2011-05-01

    The perceptual load theory of attention posits that attentional selection occurs early in processing when a task is perceptually demanding but occurs late in processing otherwise. We used a frequency-tagged steady-state evoked potential paradigm to investigate the modality specificity of perceptual load-induced distractor filtering and the nature of neural-competitive interactions between task and distractor stimuli. EEG data were recorded while participants monitored a stream of stimuli occurring in rapid serial visual presentation (RSVP) for the appearance of previously assigned targets. Perceptual load was manipulated by assigning targets that were identifiable by color alone (low load) or by the conjunction of color and orientation (high load). The RSVP task was performed alone and in the presence of task-irrelevant visual and auditory distractors. The RSVP stimuli, visual distractors, and auditory distractors were "tagged" by modulating each at a unique frequency (2.5, 8.5, and 40.0 Hz, respectively), which allowed each to be analyzed separately in the frequency domain. We report three important findings regarding the neural mechanisms of perceptual load. First, we replicated previous findings of within-modality distractor filtering and demonstrated a reduction in visual distractor signals with high perceptual load. Second, auditory steady-state distractor signals were unaffected by manipulations of visual perceptual load, consistent with the idea that perceptual load-induced distractor filtering is modality specific. Third, analysis of task-related signals revealed that visual distractors competed with task stimuli for representation and that increased perceptual load appeared to resolve this competition in favor of the task stimulus.

  17. Water Immersion Affects Episodic Memory and Postural Control in Healthy Older Adults.

    PubMed

    Bressel, Eadric; Louder, Talin J; Raikes, Adam C; Alphonsa, Sushma; Kyvelidou, Anastasia

    2018-05-04

    Previous research has reported that younger adults make fewer cognitive errors on an auditory vigilance task while in chest-deep water compared with on land. The purpose of this study was to extend this previous work to include older adults and to examine the effect of environment (water vs land) on linear and nonlinear measures of postural control under single- and dual-task conditions. Twenty-one older adult participants (age = 71.6 ± 8.34 years) performed a cognitive (auditory vigilance) and motor (standing balance) task separately and simultaneously on land and in chest-deep water. Listening errors (count) from the auditory vigilance test and sample entropy (SampEn), center of pressure area, and velocity for the balance test served as dependent measures. Environment (land vs water) and task (single vs dual) comparisons were made with a Wilcoxon matched-pair test. Listening errors were 111% greater on land than in water (single-task = 4.0 ± 3.5 vs 1.9 ± 1.7; P = .03). Conversely, SampEn values were 100% greater in water than on land (single-task = 0.04 ± 0.01 vs 0.02 ± 0.01; P < .001). Center of pressure area and velocity followed a similar trend to SampEn with respect to environment differences, and none of the measures differed between single- and dual-task conditions (P > .05). The findings of this study expand current support for the potential use of partial aquatic immersion as a viable method for challenging both cognitive and motor abilities in older adults.

  18. Prefrontal Neuronal Responses during Audiovisual Mnemonic Processing

    PubMed Central

    Hwang, Jaewon

    2015-01-01

    During communication we combine auditory and visual information. Neurophysiological research in nonhuman primates has shown that single neurons in ventrolateral prefrontal cortex (VLPFC) exhibit multisensory responses to faces and vocalizations presented simultaneously. However, whether VLPFC is also involved in maintaining those communication stimuli in working memory or combining stored information across different modalities is unknown, although its human homolog, the inferior frontal gyrus, is known to be important in integrating verbal information from auditory and visual working memory. To address this question, we recorded from VLPFC while rhesus macaques (Macaca mulatta) performed an audiovisual working memory task. Unlike traditional match-to-sample/nonmatch-to-sample paradigms, which use unimodal memoranda, our nonmatch-to-sample task used dynamic movies consisting of both facial gestures and the accompanying vocalizations. In the nonmatch conditions, the animals had to detect a change in the auditory component (vocalization), the visual component (face), or both components. Our results show that VLPFC neurons are activated by stimulus and task factors: while some neurons simply responded to a particular face or a vocalization regardless of the task period, others exhibited activity patterns typically related to working memory such as sustained delay activity and match enhancement/suppression. In addition, we found neurons that detected the component change during the nonmatch period. Interestingly, some of these neurons were sensitive to the change of both components and therefore combined information from auditory and visual working memory. These results suggest that VLPFC is not only involved in the perceptual processing of faces and vocalizations but also in their mnemonic processing. PMID:25609614

  19. Auditory decision aiding in supervisory control of multiple unmanned aerial vehicles.

    PubMed

    Donmez, Birsen; Cummings, M L; Graham, Hudson D

    2009-10-01

    This article is an investigation of the effectiveness of sonifications, which are continuous auditory alerts mapped to the state of a monitored task, in supporting unmanned aerial vehicle (UAV) supervisory control. UAV supervisory control requires monitoring a UAV across multiple tasks (e.g., course maintenance) via a predominantly visual display, which currently is supported with discrete auditory alerts. Sonification has been shown to enhance monitoring performance in domains such as anesthesiology by allowing an operator to immediately determine an entity's (e.g., patient) current and projected states, and is a promising alternative to discrete alerts in UAV control. However, minimal research compares sonification to discrete alerts, and no research assesses the effectiveness of sonification for monitoring multiple entities (e.g., multiple UAVs). The authors conducted an experiment with 39 military personnel, using a simulated setup. Participants controlled single and multiple UAVs and received sonifications or discrete alerts based on UAV course deviations and late target arrivals. Regardless of the number of UAVs supervised, the course deviation sonification resulted in reactions to course deviations that were 1.9 s faster, a 19% enhancement, compared with discrete alerts. However, course deviation sonifications interfered with the effectiveness of discrete late arrival alerts in general and with operator responses to late arrivals when supervising multiple vehicles. Sonifications can outperform discrete alerts when designed to aid operators to predict future states of monitored tasks. However, sonifications may mask other auditory alerts and interfere with other monitoring tasks that require divided attention. This research has implications for supervisory control display design.

  20. The ability for cocaine and cocaine-associated cues to compete for attention

    PubMed Central

    Pitchers, Kyle K.; Wood, Taylor R.; Skrzynski, Cari J.; Robinson, Terry E.; Sarter, Martin

    2017-01-01

    In humans, reward cues, including drug cues in addicts, are especially effective in biasing attention towards them, so much so they can disrupt ongoing task performance. It is not known, however, whether this happens in rats. To address this question, we developed a behavioral paradigm to assess the capacity of an auditory drug (cocaine) cue to evoke cocaine-seeking behavior, thus distracting thirsty rats from performing a well-learned sustained attention task (SAT) to obtain a water reward. First, it was determined that an auditory cocaine cue (tone-CS) reinstated drug-seeking equally in sign-trackers (STs) and goal-trackers (GTs), which otherwise vary in the propensity to attribute incentive salience to a localizable drug cue. Next, we tested the ability of an auditory cocaine cue to disrupt performance on the SAT in STs and GTs. Rats were trained to self-administer cocaine intravenously using an Intermittent Access self-administration procedure known to produce a progressive increase in motivation for cocaine, escalation of intake, and strong discriminative stimulus control over drug-seeking behavior. When presented alone, the auditory discriminative stimulus elicited cocaine-seeking behavior while rats were performing the SAT, but it was not sufficiently disruptive to impair SAT performance. In contrast, if cocaine was available in the presence of the cue, or when administered non-contingently, SAT performance was severely disrupted. We suggest that performance on a relatively automatic, stimulus-driven task, such as the basic version of the SAT used here, may be difficult to disrupt with a drug cue alone. A task that requires more top-down cognitive control may be needed. PMID:27890441

  1. Spectra-temporal patterns underlying mental addition: an ERP and ERD/ERS study.

    PubMed

    Ku, Yixuan; Hong, Bo; Gao, Xiaorong; Gao, Shangkai

    2010-03-12

    Functional neuroimaging data have shown that mental calculation involves fronto-parietal areas that are composed of different subsystems shared with other cognitive functions such as working memory and language. Event-related potential (ERP) analysis has also indicated sequential information changes during the calculation process. However, little is known about the dynamic properties of oscillatory networks in this process. In the present study, we applied both ERP and event-related (de-)synchronization (ERS/ERD) analyses to EEG data recorded from normal human subjects performing tasks for sequential visual/auditory mental addition. Results in the study indicate that the late positive components (LPCs) can be decomposed into two separate parts. The earlier element LPC1 (around 360 ms) reflects the computing attribute and is more prominent in calculation tasks. The later element LPC2 (around 590 ms) indicates an effect of number size and appears larger only in a more complex 2-digit addition task. The theta ERS and alpha ERD show modality-independent frontal and parietal differential patterns between the mental addition and control groups, and discrepancies are noted in the beta ERD between the 2-digit and 1-digit mental addition groups. The 2-digit addition (both visual and auditory) results in similar beta ERD patterns to the auditory control, which may indicate a reliance on auditory-related resources in mental arithmetic, especially with increasing task difficulty. These results coincide with the theory of simple calculation relying on the visuospatial process and complex calculation depending on the phonological process. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  2. Speech Discrimination Difficulties in High-Functioning Autism Spectrum Disorder Are Likely Independent of Auditory Hypersensitivity

    PubMed Central

    Dunlop, William A.; Enticott, Peter G.; Rajan, Ramesh

    2016-01-01

    Autism Spectrum Disorder (ASD), characterized by impaired communication skills and repetitive behaviors, can also result in differences in sensory perception. Individuals with ASD often perform normally in simple auditory tasks but poorly compared to typically developed (TD) individuals on complex auditory tasks like discriminating speech from complex background noise. A common trait of individuals with ASD is hypersensitivity to auditory stimulation. No studies to our knowledge consider whether hypersensitivity to sounds is related to differences in speech-in-noise discrimination. We provide novel evidence that individuals with high-functioning ASD show poor performance compared to TD individuals in a speech-in-noise discrimination task with an attentionally demanding background noise, but not in a purely energetic noise. Further, we demonstrate in our small sample that hypersensitivity to speech does not appear to predict performance in the speech-in-noise task. The findings support the argument that an attentional deficit, rather than a perceptual deficit, affects the ability of individuals with ASD to discriminate speech from background noise. Finally, we piloted a novel questionnaire that measures difficulty hearing in noisy environments, and sensitivity to non-verbal and verbal sounds. Psychometric analysis using 128 TD participants provided novel evidence for a difference in sensitivity to non-verbal and verbal sounds, and these findings were reinforced by participants with ASD who also completed the questionnaire. The study was limited by a small and high-functioning sample of participants with ASD. Future work could test larger sample sizes and include lower-functioning ASD participants. PMID:27555814

  3. Non-linear Relationship between BOLD Activation and Amplitude of Beta Oscillations in the Supplementary Motor Area during Rhythmic Finger Tapping and Internal Timing.

    PubMed

    Gompf, Florian; Pflug, Anja; Laufs, Helmut; Kell, Christian A

    2017-01-01

    Functional imaging studies using BOLD contrasts have consistently reported activation of the supplementary motor area (SMA) both during motor and internal timing tasks. Opposing findings, however, have been shown for the modulation of beta oscillations in the SMA. While movement suppresses beta oscillations in the SMA, motor and non-motor tasks that rely on internal timing increase the amplitude of beta oscillations in the SMA. These independent observations suggest that the relationship between beta oscillations and BOLD activation is more complex than previously thought. Here we set out to investigate this relationship by examining beta oscillations in the SMA during movement with varying degrees of internal timing demands. In a simultaneous EEG-fMRI experiment, 20 healthy right-handed subjects performed an auditory-paced finger-tapping task. Internal timing was operationalized by including conditions with taps on every fourth auditory beat, which necessitates generation of a slow internal rhythm, while tapping to every auditory beat reflected simple auditory-motor synchronization. In the SMA, BOLD activity increased and power in both the low and the high beta band decreased expectedly during each condition compared to baseline. Internal timing was associated with a reduced desynchronization of low beta oscillations compared to conditions without internal timing demands. In parallel with this relative beta power increase, internal timing activated the SMA more strongly in terms of BOLD. This documents a task-dependent non-linear relationship between BOLD and beta oscillations in the SMA. We discuss different roles of beta synchronization and desynchronization in active processing within the same cortical region.

  4. Age of acquisition affects the retrieval of grammatical category information.

    PubMed

    Bai, Lili; Ma, Tengfei; Dunlap, Susan; Chen, Baoguo

    2013-01-01

    This study investigated age of acquisition (AoA) effects on processing grammatical category information of Chinese single-character words. In Experiment 1, nouns and verbs that were acquired at different ages were used as materials in a grammatical category decision task. Results showed that the grammatical category information of earlier acquired nouns and verbs was easier to retrieve. In Experiment 2, AoA and predictability from orthography to grammatical category were manipulated in a grammatical category decision task. Results showed larger AoA effects under lower predictability conditions. In Experiment 3, a semantic category decision task was used with the same materials as those in Experiment 2. Different results were found from Experiment 2, suggesting that the grammatical category decision task is not merely the same as the semantic category decision task, but rather involves additional processing of grammatical category information. Therefore the conclusions of Experiments 1 and 2 were strengthened. In summary, it was found for the first time that AoA affects the retrieval of grammatical category information, thus providing new evidence in support of the arbitrary mapping hypothesis.

  5. Integration of auditory and somatosensory error signals in the neural control of speech movements.

    PubMed

    Feng, Yongqiang; Gracco, Vincent L; Max, Ludo

    2011-08-01

    We investigated auditory and somatosensory feedback contributions to the neural control of speech. In task I, sensorimotor adaptation was studied by perturbing one of these sensory modalities or both modalities simultaneously. The first formant (F1) frequency in the auditory feedback was shifted up by a real-time processor and/or the extent of jaw opening was increased or decreased with a force field applied by a robotic device. All eight subjects lowered F1 to compensate for the up-shifted F1 in the feedback signal regardless of whether or not the jaw was perturbed. Adaptive changes in subjects' acoustic output resulted from adjustments in articulatory movements of the jaw or tongue. Adaptation in jaw opening extent in response to the mechanical perturbation occurred only when no auditory feedback perturbation was applied or when the direction of adaptation to the force was compatible with the direction of adaptation to a simultaneous acoustic perturbation. In tasks II and III, subjects' auditory and somatosensory precision and accuracy were estimated. Correlation analyses showed that the relationships 1) between F1 adaptation extent and auditory acuity for F1 and 2) between jaw position adaptation extent and somatosensory acuity for jaw position were weak and not statistically significant. Taken together, these findings suggest that, in speech production, sensorimotor adaptation updates the underlying control mechanisms such that the planning of vowel-related articulatory movements takes into account a complex integration of error signals from previous trials, likely with a dominant role for the auditory modality.
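The correlation analyses in tasks II and III boil down to relating each subject's adaptation extent to their sensory acuity. A minimal sketch of both quantities (the helper names and the use of Pearson's r are illustrative assumptions; the paper's exact computations are not reproduced here):

```python
import numpy as np
from scipy.stats import pearsonr

def adaptation_extent(baseline_f1, hold_f1):
    """Per-subject adaptation extent: mean change in produced F1 (Hz)
    from the unperturbed baseline phase to the final perturbed phase.
    Compensation for an upward F1 shift appears as a negative value."""
    return float(np.mean(hold_f1) - np.mean(baseline_f1))

def adaptation_acuity_correlation(extents, acuities):
    """Across-subject Pearson correlation between adaptation extent
    and sensory acuity; returns (r, p)."""
    return pearsonr(np.asarray(extents), np.asarray(acuities))
```

A weak, non-significant r from the second helper corresponds to the null relationship the abstract reports.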

  6. Vocal Accuracy and Neural Plasticity Following Micromelody-Discrimination Training

    PubMed Central

    Zarate, Jean Mary; Delhommeau, Karine; Wood, Sean; Zatorre, Robert J.

    2010-01-01

    Background: Recent behavioral studies report correlational evidence suggesting that non-musicians with good pitch discrimination sing more accurately than those with poorer auditory skills. However, other studies have reported a dissociation between perceptual and vocal production skills. To elucidate the relationship between auditory discrimination skills and vocal accuracy, we administered an auditory-discrimination training paradigm to a group of non-musicians to determine whether training-enhanced auditory discrimination would specifically result in improved vocal accuracy. Methodology/Principal Findings: We utilized micromelodies (i.e., melodies with seven different interval scales, each smaller than a semitone) as the main stimuli for auditory discrimination training and testing, and we used single-note and melodic singing tasks to assess vocal accuracy in two groups of non-musicians (experimental and control). To determine whether any training-induced improvements in vocal accuracy would be accompanied by related modulations in cortical activity during singing, the experimental group also performed the singing tasks while undergoing functional magnetic resonance imaging (fMRI). Following training, the experimental group exhibited significant enhancements in micromelody discrimination compared to controls. However, we did not observe a correlated improvement in vocal accuracy during single-note or melodic singing, nor did we detect any training-induced changes in activity within brain regions associated with singing. Conclusions/Significance: We conclude that perceptual discrimination training alone is not sufficient to improve vocal accuracy in non-musicians, supporting the suggested dissociation between auditory perception and vocal production. PMID:20567521

  7. Interactions of cognitive and auditory abilities in congenitally blind individuals.

    PubMed

    Rokem, Ariel; Ahissar, Merav

    2009-02-01

    Congenitally blind individuals have been found to show superior performance in perceptual and memory tasks. In the present study, we asked whether superior stimulus encoding could account for performance in memory tasks. We characterized the performance of a group of congenitally blind individuals on a series of auditory, memory, and executive cognitive tasks and compared their performance to that of sighted controls matched for age, education, and musical training. As expected, we found superior verbal spans among congenitally blind individuals. Moreover, we found superior speech perception, measured as resilience to noise, and superior auditory frequency discrimination. However, when memory span was measured under conditions of equivalent speech perception, by adjusting the signal-to-noise ratio for each individual to the same level of perceptual difficulty (80% correct), the advantage in memory span was completely eliminated. Moreover, blind individuals did not show any advantage in executive cognitive functions, such as manipulation of items in memory and math abilities. We propose that the short-term memory advantage of blind individuals results from better stimulus encoding rather than from superiority at subsequent processing stages.
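Equating perceptual difficulty across listeners, as done above by titrating each individual's signal-to-noise ratio to 80% correct, is typically implemented with an adaptive staircase. A 3-down/1-up rule converges on about 79% correct (Levitt's transformed up-down method), close to that target; the sketch below is a generic illustration, not the study's actual procedure:

```python
class ThreeDownOneUp:
    """3-down/1-up adaptive staircase: make the task harder (lower the
    SNR) after three consecutive correct responses, easier (raise the
    SNR) after any error. Converges on ~79.4% correct."""

    def __init__(self, snr=10.0, step=2.0):
        self.snr = snr
        self.step = step
        self._streak = 0

    def record(self, correct):
        """Update the SNR given one trial's outcome; return the new SNR."""
        if correct:
            self._streak += 1
            if self._streak == 3:
                self.snr -= self.step
                self._streak = 0
        else:
            self.snr += self.step
            self._streak = 0
        return self.snr
```

In practice the per-listener threshold is then estimated by averaging the SNR at the last several staircase reversals.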

  8. Laterality and unilateral deafness: Patients with congenital right ear deafness do not develop atypical language dominance.

    PubMed

    Van der Haegen, Lise; Acke, Frederic; Vingerhoets, Guy; Dhooge, Ingeborg; De Leenheer, Els; Cai, Qing; Brysbaert, Marc

    2016-12-01

    Auditory speech perception, speech production and reading lateralize to the left hemisphere in the majority of healthy right-handers. In this study, we investigated to what extent sensory input underlies the side of language dominance. We measured the lateralization of the three core subprocesses of language in patients who had profound hearing loss in the right ear from birth and in matched control subjects. They took part in a semantic decision listening task involving speech and sound stimuli (auditory perception), a word generation task (speech production) and a passive reading task (reading). The results show that a lack of sensory auditory input on the right side, which is strongly connected to the contralateral left hemisphere, does not lead to atypical lateralization of speech perception. Speech production and reading were also typically left lateralized in all but one patient, contradicting previous small scale studies. Other factors such as genetic constraints presumably overrule the role of sensory input in the development of (a)typical language lateralization. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Effects of altered auditory feedback across effector systems: production of melodies by keyboard and singing.

    PubMed

    Pfordresher, Peter Q; Mantell, James T

    2012-01-01

    We report an experiment that tested whether effects of altered auditory feedback (AAF) during piano performance differ from its effects during singing. These effector systems differ with respect to the mapping between motor gestures and pitch content of auditory feedback. Whereas this action-effect mapping is highly reliable during phonation in any vocal motor task (singing or speaking), mapping between finger movements and pitch occurs only in limited situations, such as piano playing. Effects of AAF in both tasks replicated results previously found for keyboard performance (Pfordresher, 2003), in that asynchronous (delayed) feedback slowed timing whereas alterations to feedback pitch increased error rates, and the effect of asynchronous feedback was similar in magnitude across tasks. However, manipulations of feedback pitch had larger effects on singing than on keyboard production, suggesting effector-specific differences in sensitivity to action-effect mapping with respect to feedback content. These results support the view that disruption from AAF is based on abstract, effector independent, response-effect associations but that the strength of associations differs across effector systems. Copyright © 2011. Published by Elsevier B.V.

  10. Sensori-Motor Learning with Movement Sonification: Perspectives from Recent Interdisciplinary Studies.

    PubMed

    Bevilacqua, Frédéric; Boyer, Eric O; Françoise, Jules; Houix, Olivier; Susini, Patrick; Roby-Brami, Agnès; Hanneton, Sylvain

    2016-01-01

    This article reports on an interdisciplinary research project on movement sonification for sensori-motor learning. First, we describe the different research fields that have contributed to movement sonification, from music technology (including gesture-controlled sound synthesis) and sonic interaction design to research on sensori-motor learning with auditory feedback. In particular, we propose to distinguish between sound-oriented tasks and movement-oriented tasks in experiments involving interactive sound feedback. We describe several research questions and recently published results on movement control, learning, and perception. Specifically, we studied the effect of auditory feedback on movement in several cases: from experiments on pointing and visuo-motor tracking to more complex tasks in which interactive sound feedback can guide movements, and cases of sensory substitution in which auditory feedback can convey information about object shapes. We also developed specific methodologies and technologies for designing sonic feedback and movement sonification. We conclude with a discussion of key future research challenges in sensori-motor learning with movement sonification and point toward promising applications such as rehabilitation, sport training, and product design.

  11. Auditory and language development in Mandarin-speaking children after cochlear implantation.

    PubMed

    Lu, Xing; Qin, Zhaobing

    2018-04-01

    To evaluate early auditory performance, speech perception, and language skills in Mandarin-speaking prelingually deaf children in the first two years after they received a cochlear implant (CI), and to analyse the effects of possible associated factors. The Infant-Toddler Meaningful Auditory Integration Scale (ITMAIS)/Meaningful Auditory Integration Scale (MAIS), Mandarin Early Speech Perception (MESP) test and Putonghua Communicative Development Inventory (PCDI) were used to assess auditory and language outcomes in 132 Mandarin-speaking children pre- and post-implantation. Children with CIs exhibited ITMAIS/MAIS and PCDI developmental trajectories similar to those of children with normal hearing. The increasing number of participants who achieved MESP categories 1-6 at each test interval showed a significant improvement in speech perception by paediatric CI recipients. Age at implantation and socioeconomic status were consistently associated with both auditory and language outcomes in the first two years post-implantation. Mandarin-speaking children with CIs exhibit significant improvements in early auditory and language development. Although these improvements followed the normative developmental trajectories, a gap remained compared with normative values. Earlier implantation and higher socioeconomic status are consistent predictors of better auditory and language skills in the early stage. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Reproducibility and discriminability of brain patterns of semantic categories enhanced by congruent audiovisual stimuli.

    PubMed

    Li, Yuanqing; Wang, Guangyi; Long, Jinyi; Yu, Zhuliang; Huang, Biao; Li, Xiaojian; Yu, Tianyou; Liang, Changhong; Li, Zheng; Sun, Pei

    2011-01-01

    One of the central questions in cognitive neuroscience is the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern search method to extract brain patterns corresponding to two semantic categories: "old people" and "young people." These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category. We also decoded the semantic categories from these brain patterns; the decoding accuracy reflects the discriminability of the brain patterns between the two categories. The results showed that both the reproducibility index and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, whereas the semantically incongruent stimuli did not elicit brain patterns with a significantly higher reproducibility index or decoding accuracy. Thus, the semantically congruent audiovisual stimuli enhanced the within-class reproducibility and the between-class discriminability of brain patterns, facilitating neural representations of semantic categories or concepts. Furthermore, we analyzed the brain activity in the superior temporal sulcus and middle temporal gyrus (STS/MTG); both the strength of the fMRI signal and the reproducibility index were enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement fMRI signal amplitude for evaluating multimodal integration.
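The two measures in this abstract, within-class reproducibility and between-class decoding accuracy, can be sketched as follows. The exact similarity metric and classifier used by the authors are not specified here, so pairwise Pearson correlation and a nearest-mean decoder are illustrative assumptions:

```python
import numpy as np

def reproducibility_index(patterns):
    """Within-class reproducibility: mean pairwise Pearson correlation
    among brain patterns (rows = trials, columns = voxels) of one
    semantic category."""
    n = len(patterns)
    corrs = [np.corrcoef(patterns[i], patterns[j])[0, 1]
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(corrs))

def decode_nearest_mean(train_a, train_b, test_pattern):
    """Toy two-class decoder: assign the test pattern to the category
    whose mean training pattern it correlates with most strongly."""
    r_a = np.corrcoef(test_pattern, train_a.mean(axis=0))[0, 1]
    r_b = np.corrcoef(test_pattern, train_b.mean(axis=0))[0, 1]
    return "A" if r_a > r_b else "B"
```

On these definitions, the congruent-audiovisual result corresponds to tighter within-category pattern clusters (higher reproducibility) that are also easier to tell apart (higher decoding accuracy).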

  13. Reproducibility and Discriminability of Brain Patterns of Semantic Categories Enhanced by Congruent Audiovisual Stimuli

    PubMed Central

    Long, Jinyi; Yu, Zhuliang; Huang, Biao; Li, Xiaojian; Yu, Tianyou; Liang, Changhong; Li, Zheng; Sun, Pei

    2011-01-01

    One of the central questions in cognitive neuroscience is the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern search method to extract brain patterns corresponding to two semantic categories: “old people” and “young people.” These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category. We also decoded the semantic categories from these brain patterns; the decoding accuracy reflects the discriminability of the brain patterns between the two categories. The results showed that both the reproducibility index and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, whereas the semantically incongruent stimuli did not elicit brain patterns with a significantly higher reproducibility index or decoding accuracy. Thus, the semantically congruent audiovisual stimuli enhanced the within-class reproducibility and the between-class discriminability of brain patterns, facilitating neural representations of semantic categories or concepts. Furthermore, we analyzed the brain activity in the superior temporal sulcus and middle temporal gyrus (STS/MTG); both the strength of the fMRI signal and the reproducibility index were enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement fMRI signal amplitude for evaluating multimodal integration. PMID:21750692

  14. The origins of age of acquisition and typicality effects: Semantic processing in aphasia and the ageing brain.

    PubMed

    Räling, Romy; Schröder, Astrid; Wartenburger, Isabell

    2016-06-01

    Age of acquisition (AOA) has frequently been shown to influence response times and accuracy rates in word processing and constitutes a meaningful variable in aphasic language processing, although its origin within the language processing system is still under debate. To find out where AOA originates and whether and how it is related to another important psycholinguistic variable, semantic typicality (TYP), we studied healthy elderly controls and semantically impaired individuals using semantic priming. For this purpose, we collected reaction times and accuracy rates as well as event-related potential data in an auditory category-member-verification task. The present results confirm a semantic origin of TYP but question the same for AOA, instead favouring an origin at the phonology-semantics interface. The data are further interpreted in light of recent theories of ageing. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Effects of total sleep deprivation on divided attention performance

    PubMed Central

    2017-01-01

    Dividing attention across two tasks performed simultaneously usually results in impaired performance on one or both tasks. Most studies have found no difference in the dual-task cost of dividing attention in rested and sleep-deprived states. We hypothesized that, for a divided attention task that is highly cognitively-demanding, performance would show greater impairment during exposure to sleep deprivation. A group of 30 healthy males aged 21–30 years was exposed to 40 h of continuous wakefulness in a laboratory setting. Every 2 h, subjects completed a divided attention task comprising 3 blocks in which an auditory Go/No-Go task was 1) performed alone (single task); 2) performed simultaneously with a visual Go/No-Go task (dual task); and 3) performed simultaneously with both a visual Go/No-Go task and a visually-guided motor tracking task (triple task). Performance on all tasks showed substantial deterioration during exposure to sleep deprivation. A significant interaction was observed between task load and time since wake on auditory Go/No-Go task performance, with greater impairment in response times and accuracy during extended wakefulness. Our results suggest that the ability to divide attention between multiple tasks is impaired during exposure to sleep deprivation. These findings have potential implications for occupations that require multi-tasking combined with long work hours and exposure to sleep loss. PMID:29166387

  16. Effects of total sleep deprivation on divided attention performance.

    PubMed

    Chua, Eric Chern-Pin; Fang, Eric; Gooley, Joshua J

    2017-01-01

    Dividing attention across two tasks performed simultaneously usually results in impaired performance on one or both tasks. Most studies have found no difference in the dual-task cost of dividing attention in rested and sleep-deprived states. We hypothesized that, for a divided attention task that is highly cognitively-demanding, performance would show greater impairment during exposure to sleep deprivation. A group of 30 healthy males aged 21-30 years was exposed to 40 h of continuous wakefulness in a laboratory setting. Every 2 h, subjects completed a divided attention task comprising 3 blocks in which an auditory Go/No-Go task was 1) performed alone (single task); 2) performed simultaneously with a visual Go/No-Go task (dual task); and 3) performed simultaneously with both a visual Go/No-Go task and a visually-guided motor tracking task (triple task). Performance on all tasks showed substantial deterioration during exposure to sleep deprivation. A significant interaction was observed between task load and time since wake on auditory Go/No-Go task performance, with greater impairment in response times and accuracy during extended wakefulness. Our results suggest that the ability to divide attention between multiple tasks is impaired during exposure to sleep deprivation. These findings have potential implications for occupations that require multi-tasking combined with long work hours and exposure to sleep loss.
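Go/No-Go accuracy of the kind analysed above is often summarised with the signal-detection sensitivity index d′, computed from hits (responses to Go stimuli) and false alarms (responses to No-Go stimuli). This is a standard analysis choice assumed for illustration, not a formula taken from the paper:

```python
from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate), with a
    log-linear correction (add 0.5 to each cell) so that rates of
    exactly 0 or 1 do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

A decline in d′ with time awake would capture the joint deterioration in response accuracy that the study reports under sleep deprivation.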

  17. Dual-Task Crosstalk between Saccades and Manual Responses

    ERIC Educational Resources Information Center

    Huestegge, Lynn; Koch, Iring

    2009-01-01

    Between-task crosstalk has been discussed as an important source for dual-task costs. In this study, the authors examine concurrently performed saccades and manual responses as a means of studying the role of response-code conflict between 2 tasks. In Experiment 1, participants responded to an imperative auditory stimulus with a left or a right…

  18. Auditory Stream Segregation in Autism Spectrum Disorder: Benefits and Downsides of Superior Perceptual Processes

    ERIC Educational Resources Information Center

    Bouvet, Lucie; Mottron, Laurent; Valdois, Sylviane; Donnadieu, Sophie

    2016-01-01

    Auditory stream segregation allows us to organize our sound environment, by focusing on specific information and ignoring what is unimportant. One previous study reported difficulty in stream segregation ability in children with Asperger syndrome. In order to investigate this question further, we used an interleaved melody recognition task with…

  19. Attentional Capture by Deviant Sounds: A Noncontingent Form of Auditory Distraction?

    ERIC Educational Resources Information Center

    Vachon, François; Labonté, Katherine; Marsh, John E.

    2017-01-01

    The occurrence of an unexpected, infrequent sound in an otherwise homogeneous auditory background tends to disrupt the ongoing cognitive task. This "deviation effect" is typically explained in terms of attentional capture whereby the deviant sound draws attention away from the focal activity, regardless of the nature of this activity.…

  20. Auditory Space Perception in Left- and Right-Handers

    ERIC Educational Resources Information Center

    Ocklenburg, Sebastian; Hirnstein, Marco; Hausmann, Markus; Lewald, Jorg

    2010-01-01

    Several studies have shown that handedness has an impact on visual spatial abilities. Here we investigated the effect of laterality on auditory space perception. Participants (33 right-handers, 20 left-handers) completed two tasks of sound localization. In a dark, anechoic, and sound-proof room, sound stimuli (broadband noise) were presented via…
